A non-learning, geometry-based LiDAR depth completion model with components including outlier removal, normal calculation, and distance transform. Published in IEEE Robotics and Automation Letters.
Hi,
First of all thank you for open-sourcing your amazing work.
I'm looking into using this repo on a camera/LiDAR setup with a VLP-16 and have some questions that you can hopefully help with.
In your paper you mention:

"Our solution outperforms the baseline for both 32-line and 64-line cases, but the self-supervised baseline starts to take the lead in the 16-line case. This phenomenon verifies the condition of our algorithm - the point cloud cannot be too sparse; otherwise it may lose the geometry information."
Is my understanding correct that the 16-line case will still work with this repo, or did you mean that this method will not perform at all with a VLP-16, so another solution is needed?
As far as I understand, an intrinsic matrix is used to project 3D points into the 2D image plane. However, to do this properly, the relative pose of the camera with respect to the LiDAR (the extrinsic parameters) needs to be known. If I'm not mistaken, these extrinsic parameters are defined in tools.py in the get_all_points function as extrinsic_v_2_c=[[0.007,-1,0,0],[0.0148,0,-1,-0.076],[1,0,0.0148,-0.271],[0,0,0,1]]
I've tried looking at the KITTI setup to figure out what these numbers represent, but I cannot wrap my head around them. Could you please explain where you got these numbers and what they represent?
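To make my question concrete, here is a minimal sketch of how I *assume* this matrix is meant to be used: as a standard 4x4 homogeneous rigid-body transform (rotation block plus translation column) mapping points from the velodyne frame into the camera frame. This is my own reconstruction for illustration, not code from the repo:

```python
import numpy as np

# The extrinsic matrix quoted from tools.py; my assumption is that it is a
# homogeneous transform [R | t; 0 0 0 1] from the LiDAR frame to the camera frame.
extrinsic_v_2_c = np.array([[0.007,  -1.0,  0.0,     0.0],
                            [0.0148,  0.0, -1.0,    -0.076],
                            [1.0,     0.0,  0.0148, -0.271],
                            [0.0,     0.0,  0.0,     1.0]])

R = extrinsic_v_2_c[:3, :3]  # rotation: roughly an axis swap (x->z, y->-x, z->-y)
t = extrinsic_v_2_c[:3, 3]   # translation (metres?) between the sensor origins

# Transform one example LiDAR point (N, 3) into the camera frame.
points_lidar = np.array([[10.0, 0.5, -0.2]])
points_h = np.hstack([points_lidar, np.ones((1, 1))])   # homogeneous (N, 4)
points_cam = (extrinsic_v_2_c @ points_h.T).T[:, :3]
```

If that reading is right, the small off-diagonal values (0.007, 0.0148) would be the residual rotation between the two sensors, and (-0.076, -0.271) the offset of the camera relative to the LiDAR. Is that the correct interpretation, and were these values measured for the KITTI rig specifically?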
In the do_range_projection(points, proj_H=96, proj_W=2048, fov_up=3.0, fov_down=-18.0) function in tools.py, I don't understand where the default numbers come from or what proj_H and proj_W represent. I've also looked at the datasheet of the HDL-64E and cannot find the fov_up and fov_down values you use as defaults. Could you please clarify what these arguments are and where you obtained the numbers?
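For context, here is my current guess at what the function does, sketched under the assumption that it follows the common spherical (SemanticKITTI-style) range-image scheme: proj_W bins span the full 360° azimuth and proj_H bins span the vertical field of view between fov_down and fov_up. The parameter names mirror tools.py, but the body below is my reconstruction, not the repo's code:

```python
import numpy as np

def do_range_projection_sketch(points, proj_H=96, proj_W=2048,
                               fov_up=3.0, fov_down=-18.0):
    """Assumed spherical projection: map 3D points to (column, row) pixel
    coordinates in a proj_H x proj_W range image. fov_up/fov_down are the
    vertical FOV limits in degrees."""
    fov_up_r = np.deg2rad(fov_up)
    fov_down_r = np.deg2rad(fov_down)
    fov = abs(fov_down_r) + abs(fov_up_r)        # total vertical FOV in radians

    depth = np.linalg.norm(points[:, :3], axis=1)
    yaw = -np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / depth)

    # Normalise angles to [0, 1], then scale to pixel coordinates.
    proj_x = 0.5 * (yaw / np.pi + 1.0) * proj_W               # azimuth -> column
    proj_y = (1.0 - (pitch + abs(fov_down_r)) / fov) * proj_H  # pitch -> row

    proj_x = np.clip(np.floor(proj_x), 0, proj_W - 1).astype(np.int32)
    proj_y = np.clip(np.floor(proj_y), 0, proj_H - 1).astype(np.int32)
    return proj_x, proj_y, depth
```

If this is roughly what the function does, then fov_up=3.0/fov_down=-18.0 look tuned to the camera-overlapping part of the scan rather than the sensor's full FOV (the HDL-64E datasheet lists roughly +2°/-24.8°, and the VLP-16 is ±15°), so I would expect to need different values for my setup. Is that correct?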