
camvox's Introduction

CamVox

A Low-cost and Accurate Lidar-assisted Visual SLAM System

Point cloud density of the Livox Horizon lidar as a function of integration time; the scanning pattern does not repeat itself.


We propose CamVox, which adapts Livox lidars into a visual SLAM framework (ORB-SLAM2) by exploiting the lidars' unique features. Based on the non-repeating scanning nature of Livox lidars, we propose an automatic lidar-camera calibration method that works in uncontrolled scenes.
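The calibration relies on a dense depth image obtained by accumulating the non-repetitive scan over an integration time and projecting it into the camera frame. Below is a minimal sketch of that projection step only, not the CamVox implementation; the function name, the assumption that the lidar x axis roughly aligns with the camera optical axis, and all parameter values are illustrative.

    // Hedged sketch (not the CamVox source): project an accumulated Livox
    // point cloud into the camera image plane to form a dense depth map.
    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>
    #include <cstddef>
    #include <vector>

    cv::Mat lidarToDepthImage(const std::vector<cv::Point3f>& pts_lidar, // points in lidar frame
                              const cv::Mat& K,    // 3x3 camera intrinsic matrix
                              const cv::Mat& dist, // distortion coefficients
                              const cv::Mat& rvec, // lidar->camera rotation (Rodrigues vector)
                              const cv::Mat& tvec, // lidar->camera translation
                              int width, int height)
    {
        cv::Mat depth(height, width, CV_32FC1, cv::Scalar(0));
        std::vector<cv::Point2f> pts_2d;
        cv::projectPoints(pts_lidar, rvec, tvec, K, dist, pts_2d);
        for (std::size_t i = 0; i < pts_lidar.size(); ++i) {
            // Illustrative depth: the Livox x axis points forward, roughly along
            // the camera optical axis; a full implementation would transform each
            // point into the camera frame and use its z coordinate instead.
            float d = pts_lidar[i].x;
            int u = cvRound(pts_2d[i].x), v = cvRound(pts_2d[i].y);
            if (d > 0 && u >= 0 && u < width && v >= 0 && v < height)
                depth.at<float>(v, u) = d;
        }
        return depth;
    }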


The long depth detection range also enables more efficient mapping. CamVox is compared with a visual SLAM system (VINS-Mono) and a lidar SLAM system (livox_horizon_loam) on the same dataset to demonstrate its performance.

Developers: Yuewen Zhu, Chunran Zheng, Chongjian Yuan, Xu Huang.

Related videos: now available on [YouTube Video] [bilibili Video] [OwnCloud].

Paper: our related paper is posted on arXiv, and the final ICRA 2021 accepted version is available as CamVox.pdf.

Follow-up work on automatic lidar-camera calibration: "Pixel-level Extrinsic Self Calibration of High Resolution LiDAR and Camera in Targetless Environments".

1. Prerequisites

1.1 Ubuntu and ROS

Ubuntu 64-bit 16.04 or 18.04. ROS Kinetic or Melodic. Follow ROS Installation.
(Recommended: Ubuntu 16.04 LTS with kernel 4.15.0-140-generic)

1.2 OpenCV

We use OpenCV to manipulate images and features. Follow the OpenCV Installation. Required at least 2.4.3. Tested with OpenCV 2.4.11 and OpenCV 3.4.1.

   (1)  *** Install dependencies: ***
        sudo apt-get install build-essential libgtk2.0-dev libavcodec-dev libavformat-dev libjpeg-dev libtiff-dev libswscale-dev libjasper-dev
   (2)  *** Install opencv-3.4.1: ***
        cd opencv-3.4.1
        mkdir build && cd build
        cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local ..
        make -j
        sudo make install
   (3)  *** Add opencv libraries to path: ***
        sudo gedit /etc/ld.so.conf.d/opencv.conf
        *** add at the end of the file: ***
        /usr/local/lib
        sudo ldconfig
   (4)  *** bash configuration: ***
        sudo gedit /etc/bash.bashrc
        *** add at the end: ***
        PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
        export PKG_CONFIG_PATH
        *** source the configuration: ***
        source /etc/bash.bashrc
        sudo updatedb

(Recommended version OpenCV-3.4.1.zip)
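As an optional sanity check (not part of CamVox), the following tiny program should compile against the freshly installed OpenCV and print its version, for example with g++ check_opencv.cpp `pkg-config --cflags --libs opencv`; the file name is arbitrary.

    // Optional sanity check: prints the version of the installed OpenCV.
    #include <opencv2/core/version.hpp>
    #include <iostream>

    int main()
    {
        std::cout << "OpenCV version: " << CV_VERSION << std::endl;
        return 0;
    }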

1.3 PCL-1.7

We use PCL to process point cloud features and to calibrate the lidar-camera extrinsic parameters. The recommended PCL installation is as follows.

   (1)  *** update your host: ***
   	sudo apt-get update
   (2)  *** install VTK ***
        cd VTK-8.2.0
        mkdir build && cd build
        cmake ..
        make -j
        sudo make install
   (3)  *** install pcl: ***
        sudo apt-get install libpcl-dev pcl-tools
        sudo apt-get install freeglut3-dev

(Recommended VTK version VTK-8.2.0.tar.gz)

1.4 Pangolin

We use Pangolin for visualization and user interface. Follow Pangolin Installation.

   (1)  *** install dependencies: ***
   	sudo apt-get install libglew-dev libpython2.7-dev libboost-dev libboost-thread-dev libboost-filesystem-dev -y
   (2)  *** install Pangolin: ***
        cd Pangolin
        mkdir build && cd build
        cmake ..
        make -j
        sudo make install

(Recommended version Pangolin.zip)

1.5 Ceres Solver

Follow Ceres Installation.

   (1)  *** install dependencies: ***
        sudo apt-get install liblapack-dev 
        sudo apt-get install libsuitesparse-dev 
        sudo apt-get install libcxsparse3.1.4 
        sudo apt-get install libgflags-dev 
        sudo apt-get install libgoogle-glog-dev libgtest-dev
   (2)  *** install ceres: ***
        cd ceres-solver-1.14.0
        mkdir build && cd build
        cmake ..
        make -j 
        sudo make install

(Recommended version Ceres-solver-1.14.0.tar.gz)

1.6 Eigen

Follow Eigen Installation. Required at least 3.1.0.

   (1)  *** install eigen: ***
        cd eigen
        mkdir build && cd build
        cmake ..
        make -j 
        sudo make install

(Recommended version Eigen-3.2.10.tar.gz)

1.7 Livox-SDK

Follow Livox-SDK Installation. (Recommended version Livox-SDK-master.zip)

1.8 MVS camera SDK and ROS driver

Install the HIKROBOT camera SDK as follows.

    tar zxvf MVS-2.0.0_x86_64_20191126.tar.gz
    cd ./MVS-2.0.0_x86_64_20191126
    chmod +x setup.sh
    sudo ./setup.sh

In addition, we supply a software-trigger ROS driver compatible with Hikvision cameras; you can run it directly with the camera connected over USB.

    roslaunch mvs_camera mvs_camera.launch

(Recommended version MVS-2.0.0.tar.gz)

2. Build CamVox

Clone the repository and catkin_make:

    cd ~/catkin_ws/src
    git clone https://github.com/ISEE-Technology/CamVox.git
    cd CamVox/isee-camvox && chmod a+x build.sh && chmod a+x build_ros.sh
    ./build.sh
    ./build_ros.sh
    source ~/catkin_ws/devel/setup.bash
    
    cd CamVox/isee-camvox/Vocabulary
    tar zxvf ORBvoc.txt.tar.gz

3. Run with Hardware

3.1 Hardware

The platform consists of the following items:

  • Livox Horizon lidar
  • MV-CE060-10UC camera
  • Inertial Sense uINS RTK
  • Manifold2C onboard computer
  • Scout-mini robot chassis

3.2 Hard Synchronization

Hard synchronization of all sensors is performed with a 10 Hz trigger signal. The camera outputs an image at each trigger (10 Hz). The lidar keeps its own clock (synced with GPS-RTK) and continuously outputs scanned points with accurate timestamps. Meanwhile, the IMU outputs at a frequency of 200 Hz, also synced with the trigger. The hardware synchronization diagram is as follows.
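The sketch below is not the CamVox source; it only illustrates how the hard-synchronized streams can be associated afterwards: each 10 Hz camera frame is matched to the 200 Hz IMU sample whose hardware-synced timestamp is closest. The types and the function name are illustrative.

    // Hedged sketch: nearest-timestamp association between a camera frame and
    // the IMU stream, assuming both carry hardware-synchronized timestamps.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Stamped { double t; /* sensor payload omitted */ };

    // Returns the index of the IMU sample closest in time to the camera frame.
    // Assumes imu is non-empty and sorted by timestamp.
    std::size_t nearestImuSample(const std::vector<Stamped>& imu, double t_camera)
    {
        std::size_t best = 0;
        double best_dt = std::abs(imu[0].t - t_camera);
        for (std::size_t i = 1; i < imu.size(); ++i) {
            double dt = std::abs(imu[i].t - t_camera);
            if (dt < best_dt) { best_dt = dt; best = i; }
        }
        return best;
    }

With a 10 Hz trigger and a 200 Hz IMU, roughly every 20th IMU sample coincides with a camera frame.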

3.3 Running

Connect your PC to the Livox Horizon lidar by following the Livox-ros-driver installation.

    cd ~/catkin_ws/src/CamVox/
    chmod +x run.sh
    ./run.sh

4. Run with Rosbag Example

4.1 SUSTech Expert Flat Outdoor large scale scenes (With Loop Closure)


We open-sourced our dataset, collected on the SUSTech campus and containing a loop closure. You can download the bag file from Google Drive (CamVox.bag) or Zenodo (CamVox.bag). (Updated)

For comparison, the data is also provided in the formats used by two other mainstream frameworks: VINS-mono.bag | livox_loam_horizon.bag | Groundtruth.bag (Updated)

The trajectories from CamVox, the two mainstream SLAM frameworks, and the ground truth are compared on our SUSTech dataset, as shown in the figure above.
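As a rough illustration of how such a comparison can be computed (this is not the evaluation code used for the paper), the absolute trajectory error (ATE) RMSE between a time-aligned estimated trajectory and the ground truth is:

    // Hedged sketch: ATE RMSE between an estimate and ground truth, assuming the
    // two trajectories are already time-aligned and expressed in the same frame.
    #include <Eigen/Core>
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    double ateRmse(const std::vector<Eigen::Vector3d>& estimate,
                   const std::vector<Eigen::Vector3d>& groundTruth)
    {
        const std::size_t n = std::min(estimate.size(), groundTruth.size());
        double sum_sq = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            sum_sq += (estimate[i] - groundTruth[i]).squaredNorm();
        return n > 0 ? std::sqrt(sum_sq / n) : 0.0;
    }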

4.2 Rosbag Example with static scenes (Automatic Calibration trigger)


We provide a rosbag file with static scenes to test the automatic calibration thread: calibration.bag. (Updated) When the car detects more than 10 frames of still images (about 1 second), the automatic calibration thread starts to work. The thread is interrupted and the system returns to SLAM mode if the car starts to move before calibration finishes. The effect of automatic calibration is shown as follows.


An example of the RGB camera image and point cloud overlay after calibration: (a) not calibrated, (b) automatically calibrated, (c) the best manual calibration. The automatic calibration algorithm is verified in various scenes: (d) an outdoor road with natural trees and grass, (e) outdoor artificial structures, (f) indoor underexposed structures. (g-i) show the cost value evolution during optimization for the corresponding scenes on the left.
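For reference, here is a minimal sketch of the trigger rule described above (calibration starts after more than 10 consecutive still frames, and any motion resets it). It is illustrative only and not the logic in Calibrating.cc; the motion measure and threshold are placeholders.

    // Hedged sketch: counts consecutive "still" frames and signals when the
    // automatic calibration thread should start. Threshold values are illustrative.
    #include <cmath>

    class CalibrationTrigger {
    public:
        // frameMotion: a per-frame motion measure (e.g., mean pixel displacement).
        // Returns true when calibration should start.
        bool update(double frameMotion)
        {
            const double kStillThreshold = 0.5; // placeholder motion threshold
            const int    kStillFrames    = 10;  // > 10 frames ~ 1 s at 10 Hz
            if (std::abs(frameMotion) < kStillThreshold)
                ++stillCount_;
            else
                stillCount_ = 0;                // motion: reset / interrupt calibration
            return stillCount_ > kStillFrames;
        }
    private:
        int stillCount_ = 0;
    };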


4.3 Running Rosbag Examples

4.3.1 Running without saving the trajectory and colored PCD files

    roscore
    cd CamVox/isee-camvox
    rosrun online camvox Vocabulary/ORBvoc.bin camvox/online/Livox.yaml 
    rosbag play CamVox.bag (or calibration.bag)

4.3.2 Saving the trajectory and colored PCD files after a specified number of frames (e.g., 1000)

    roscore
    cd CamVox/isee-camvox
    rosrun online camvox Vocabulary/ORBvoc.bin camvox/online/Livox.yaml 1000
    rosbag play CamVox.bag (or calibration.bag)

5. Acknowledgements

The authors are grateful for the pioneering work from ORB_SLAM2, ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras. The authors also sincerely thank colleagues at Livox Technology for help in data acquisition and discussion. This work is from the ISEE Research Group at SUSTech.

6. License

The source code is released under GPLv2.0 license.

If you use CamVox in an academic work, please cite:

(Updated)

    @misc{zhu2020camvox,
      title={CamVox: A Low-cost and Accurate Lidar-assisted Visual SLAM System},
      author={Yuewen Zhu and Chunran Zheng and Chongjian Yuan and Xu Huang and Xiaoping Hong},
      year={2020},
      eprint={2011.11357},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
    }


camvox's Issues

Runtime error : Segmentation fault (core dumped)

Hello, thank you for your great work!
When I run the project with the rosbag example, it runs normally at first, but a segmentation fault occurs after a few minutes.
The following is the log output, but I can't find out what went wrong. Looking forward to your reply!

Camvox Copyright (C) 2020 ISEE, University of SUSTech.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.

Input sensor was set to: RGB-D

Loading ORB Vocabulary. This could take a while...
Vocabulary loaded in 0.31s

Camera Parameters:

  • fx: 957.994
  • fy: 955.328
  • cx: 790.335
  • cy: 250.663
  • k1: -0.12
  • k2: 0.1162
  • p1: 0
  • p2: 0
  • fps: 10
  • color order: RGB (ignored if grayscale)

ORB Extractor Parameters:

  • Number of Features: 1500
  • Scale Levels: 8
  • Scale Factor: 1.2
  • Initial Fast Threshold: 20
  • Minimum Fast Threshold: 7

Depth Threshold (Close/Far Points): 129.998
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.Robot keep 1s still !
Robot keep 1s still !
Robot keep 1s still !
Robot keep 1s still !
Robot keep 1s still !
Calibrating processing
Robot keep 1s still !
New map created with 159 points
Calibrating processing
cancel calibration !
Calibrating RELEASE
[pcl::VoxelGrid::applyFilter] Leaf size is too small for the input dataset. Integer indices would overflow.After spin:

Saving keyframe trajectory to OnlineKeyFrameTrajectory.txt ...

trajectory saved!
Robot keep 1s still !
Dense Color Pointcloud save finished
QObject::QObject: Timers cannot be stopped from another thread
Segmentation fault (core dumped)
(base) lqql@lqql:
/catkin_ws/src/CamVox/isee-camvox$ rosrun online camvox Vocabulary/ORBvoc.bin camvox/offline/Livox.yaml 1000

Camvox Copyright (C) 2020 ISEE, University of SUSTech.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.

Input sensor was set to: RGB-D

Loading ORB Vocabulary. This could take a while...
Vocabulary loaded in 0.33s

Camera Parameters:

  • fx: 957.994
  • fy: 955.328
  • cx: 790.335
  • cy: 250.663
  • k1: -0.12
  • k2: 0.1162
  • p1: 0
  • p2: 0
  • fps: 10
  • color order: RGB (ignored if grayscale)

ORB Extractor Parameters:

  • Number of Features: 1500
  • Scale Levels: 8
  • Scale Factor: 1.2
  • Initial Fast Threshold: 20
  • Minimum Fast Threshold: 7

Depth Threshold (Close/Far Points): 129.998
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.New map created with 133 points
After spin:

Saving keyframe trajectory to OnlineKeyFrameTrajectory.txt ...

trajectory saved!
Robot keep 1s still !
Dense Color Pointcloud save finished
QObject::QObject: Timers cannot be stopped from another thread
Segmentation fault (core dumped)
(base) lqql@lqql:
/catkin_ws/src/CamVox/isee-camvox$ rosrun online camvox Vocabulary/ORBvoc.bin camvox/offline/Livox.yaml 1000

Camvox Copyright (C) 2020 ISEE, University of SUSTech.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.

Input sensor was set to: RGB-D

Loading ORB Vocabulary. This could take a while...
Vocabulary loaded in 0.32s

Camera Parameters:

  • fx: 957.994
  • fy: 955.328
  • cx: 790.335
  • cy: 250.663
  • k1: -0.12
  • k2: 0.1162
  • p1: 0
  • p2: 0
  • fps: 10
  • color order: RGB (ignored if grayscale)

ORB Extractor Parameters:

  • Number of Features: 1500
  • Scale Levels: 8
  • Scale Factor: 1.2
  • Initial Fast Threshold: 20
  • Minimum Fast Threshold: 7

Depth Threshold (Close/Far Points): 129.998
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.New map created with 87 points
Track lost soon after initialisation, reseting...
System Reseting
Reseting Local Mapper... done
Reseting Loop Closing... done
Reseting Database... done
New map created with 105 points
Track lost soon after initialisation, reseting...
System Reseting
Reseting Local Mapper... done
Reseting Loop Closing... done
Reseting Database... done
New map created with 87 points
Track lost soon after initialisation, reseting...
System Reseting
Reseting Local Mapper... done
Reseting Loop Closing... done
Reseting Database... done
New map created with 86 points
Track lost soon after initialisation, reseting...
System Reseting
Reseting Local Mapper... done
Reseting Loop Closing... done
Reseting Database... done
New map created with 79 points
Track lost soon after initialisation, reseting...
System Reseting
Reseting Local Mapper... done
Reseting Loop Closing... done
Reseting Database... done
New map created with 89 points
[pcl::VoxelGrid::applyFilter] Leaf size is too small for the input dataset. Integer indices would overflow.After spin:
Robot keep 1s still !
[pcl::VoxelGrid::applyFilter] Leaf size is too small for the input dataset. Integer indices would overflow.
Saving keyframe trajectory to OnlineKeyFrameTrajectory.txt ...

trajectory saved!
Dense Color Pointcloud save finished
Segmentation fault (core dumped)

catkin_make error

/home/chen/catkin_ws/src/CamVox/isee-preprocessing/src/isee_synchronize.cpp:39:8: error: ‘isee_synchronize’ was not declared in this scope
vector<isee_synchronize::GPS> rtk_datas;
^
/home/chen/catkin_ws/src/CamVox/isee-preprocessing/src/isee_synchronize.cpp:39:29: error: template argument 1 is invalid
vector<isee_synchronize::GPS> rtk_datas;
^
/home/chen/catkin_ws/src/CamVox/isee-preprocessing/src/isee_synchronize.cpp:39:29: error: template argument 2 is invalid
/home/chen/catkin_ws/src/CamVox/isee-preprocessing/src/isee_synchronize.cpp:75:23: error: ‘to_time’ declared as an ‘inline’ variable
inline double to_time(isee_synchronize::GPS rtk_data) // sensor_msgs::NavSatFix
^
/home/chen/catkin_ws/src/CamVox/isee-preprocessing/src/isee_synchronize.cpp:75:23: error: ‘double to_time’ redeclared as different kind of symbol
/home/chen/catkin_ws/src/CamVox/isee-preprocessing/src/isee_synchronize.cpp:70:15: note: previous declaration ‘double to_time(nav_msgs::Odometry)’
inline double to_time(nav_msgs::Odometry imu_data)
^
/home/chen/catkin_ws/src/CamVox/isee-preprocessing/src/isee_synchronize.cpp:75:23: error: ‘isee_synchronize’ has not been declared
inline double to_time(isee_synchronize::GPS rtk_data) // sensor_msgs::NavSatFix
^
/home/chen/catkin_ws/src/CamVox/isee-preprocessing/src/isee_synchronize.cpp:81:19: error: ‘isee_synchronize’ does not name a type
void rtkCbk(const isee_synchronize::GPS::ConstPtr &msg)
^
/home/chen/catkin_ws/src/CamVox/isee-preprocessing/src/isee_synchronize.cpp:81:40: error: expected unqualified-id before ‘::’ token
void rtkCbk(const isee_synchronize::GPS::ConstPtr &msg)

Build Error

Thanks for your work! I had trouble when building the code with ./build_ros.sh.
It seems like something is wrong with VTK, PCL, and Boost.
I reinstalled PCL 1.7 and VTK 6.2 from source, but it didn't help.
Also, could you provide the way to build "libCamvox.so"? In that case I could test the code on other machines.
Thanks!!!

/usr/bin/ld: warning: libboost_system.so.1.65.1, needed by /opt/ros/melodic/lib/libroscpp.so, may conflict with libboost_system.so.1.58.0
/usr/bin/ld: warning: libboost_filesystem.so.1.65.1, needed by /opt/ros/melodic/lib/libroscpp.so, may conflict with libboost_filesystem.so.1.58.0
/usr/bin/ld: warning: libboost_thread.so.1.58.0, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libvtkRenderingFreeTypeOpenGL-6.2.so.6.2, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libvtkRenderingMatplotlib-6.2.so.6.2, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libvtkRenderingFreeTypeFontConfig-6.2.so.6.2, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libopencv_calib3d3.so.3.3, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libopencv_features2d3.so.3.3, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libopencv_highgui3.so.3.3, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libopencv_imgcodecs3.so.3.3, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libopencv_imgproc3.so.3.3, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libGLEW.so.1.13, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libvtkInteractionStyle-6.2.so.6.2, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libvtkRenderingOpenGL-6.2.so.6.2, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libvtkRenderingFreeType-6.2.so.6.2, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libvtkRenderingCore-6.2.so.6.2, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libvtkCommonDataModel-6.2.so.6.2, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libvtkCommonMath-6.2.so.6.2, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libvtkCommonCore-6.2.so.6.2, needed by ../../../lib/libCamvox.so, not found (try using -rpath or -rpath-link)
../../../lib/libCamvox.so: undefined reference to ‘vtkRenderingFreeTypeOpenGL_AutoInit_Destruct()’
../../../lib/libCamvox.so: undefined reference to ‘pcl::search::Search<pcl::PointXYZ>::getName() const’
../../../lib/libCamvox.so: undefined reference to ‘boost::this_thread::hiden::sleep_until(timespec const&)’
../../../lib/libCamvox.so: undefined reference to ‘vtkRenderingFreeTypeOpenGL_AutoInit_Construct()’
collect2: error: ld returned 1 exit status
CMakeFiles/camvox.dir/build.make:523: recipe for target '../camvox' failed
make[2]: *** [../camvox] Error 1
CMakeFiles/Makefile2:72: recipe for target 'CMakeFiles/camvox.dir/all' failed
make[1]: *** [CMakeFiles/camvox.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

Lidar-Camera-Calibration

Hello authors, I looked at the calibration code CamVox/isee-camvox/src/Calibrating.cc and found that it does not use a pose optimization algorithm; instead it uses a brute-force pose search similar to CSM. This seems inconsistent with the ICP iteration shown in the lidar-camera calibration figure in the paper?

Does it work on other Livox LiDAR models?

Great work! We are planning to adapt this algorithm to other Livox LiDAR-camera combinations. Does this algorithm generalize to other LiDAR models, e.g., the MID-40/70/100 or TELE series?

Trajectory.txt

How is the trajectory file KeyFrameTrajectory.txt output? Thanks.

Sensor intrinsic and extrinsic

Thank you for sharing the code.
I read the code and found that the sensor extrinsic and intrinsic parameters are unclear.
The camera intrinsics in the code are different from those in the yaml file. #25 mentioned that the camera was changed once, so which intrinsics are accurate?
In addition, it is mentioned in #7 that the IMU information is not obtained from the Horizon but from the Inertial Sense uINS. So what are the extrinsics from the IMU to the Horizon, or from the IMU to the camera?
Could you provide a yaml file corresponding to the CamVox.bag dataset, including the intrinsics and extrinsics of the IMU, camera, and Livox?
Thanks again, sincerely.

The problem Ubuntu18.04 compiling

I have encountered problems compiling on Ubuntu 18.04. The project seems to need PCL 1.7, but ROS Melodic ships with PCL 1.8. What can I do? Thank you for your help!

Cannot access the bag files

Hi!
Hope you are well.
Thanks for sharing the code!
I am facing issues while accessing the bag files from zywok. Would it be possible to share the bag files using Google Drive?

TIA
Harshal

Question about the intrinsic and extrinsic parameters

Hello, thank you very much for sharing; this work is very meaningful.
I read your paper and code carefully, and I have a few questions about the intrinsic and extrinsic parameters.
1. Extrinsic calibration optimization: Calibrating::Run() optimizes best_r_, best_p_, best_y_. Optimizing the rotation is meant to correct the bias of the generated depth image, but I could not find where this result is used to correct the lidar-generated depth image. Am I misunderstanding something?
2. Generating the depth image from the lidar:
1) In the livox_lidar2depth function, do the variables "float fx=1732.78774,fy=1724.88401,cx=798.426021,cy=312.570668;" represent the camera intrinsics? They do not seem to match the parameters in camera.yaml:
Camera.fx: 957.994
Camera.fy: 955.3280
Camera.cx: 790.335
Camera.cy: 250.6631
If they are not the camera intrinsics, what do they mean?
2) Is the variable "Eigen::Affine3f sensorPose = (Eigen::Affine3f)Eigen::Translation3f(-0.082984,-0.0221759,0.0727962);" the translation from the coordinate origin to the camera? Is only the rotation optimized in Calibrating::Run() because translation has little effect on the generated depth image?

Looking forward to your reply, thanks! And thanks again for sharing.

Livox Avia

Hi, is it possible to use the Livox Avia with it?

Thanks for sharing.

question related to construction of the range image from lidar pointcloud

Hi, thank you for the project. I have a question about the construction of the range image from the Livox lidar point cloud.
As I can see in the code base, the range image is created using PCL's createFromPointCloudWithFixedSize.

https://github.com/ISEE-Technology/CamVox/blob/main/isee-preprocessing/src/isee_synchronize.cpp#L335-L344
However, the width, height, and camera matrix are hardcoded here. Can you please elaborate on the choice of these values? Are these actually the parameters of the camera used in the experiment?

error when ./build.sh

because of the wrong VTK version?

/usr/include/vtk-6.2/vtkObjectBase.h:165:30: error: field ‘ReferenceCount’ has incomplete type ‘vtkAtomicInt’
vtkAtomicInt ReferenceCount;
^
In file included from /usr/include/vtk-6.2/vtkObjectBase.h:44:0,
from /usr/include/vtk-6.2/vtkSmartPointerBase.h:27,
from /usr/include/vtk-6.2/vtkSmartPointer.h:23,
from /usr/include/pcl-1.7/pcl/visualization/point_cloud_geometry_handlers.h:48,
from /usr/include/pcl-1.7/pcl/visualization/point_cloud_handlers.h:41,
from /usr/include/pcl-1.7/pcl/visualization/common/actor_map.h:40,
from /usr/include/pcl-1.7/pcl/visualization/pcl_visualizer.h:48,
from /usr/include/pcl-1.7/pcl/visualization/cloud_viewer.h:39,
from /home/hk/ws_CamVox/src/CamVox/isee-camvox/src/pointcloudmapping.cc:7:
/usr/include/vtk-6.2/vtkAtomicInt.h:307:7: note: declaration of ‘class vtkAtomicInt’
class vtkAtomicInt: public detail::vtkAtomicIntImpl
^
In file included from /home/hk/ws_CamVox/src/CamVox/isee-camvox/src/pointcloudmapping.cc:7:0:
/usr/include/pcl-1.7/pcl/visualization/cloud_viewer.h:202:14: warning: ‘template class std::auto_ptr’ is deprecated [-Wdeprecated-declarations]
std::auto_ptr<CloudViewer_impl> impl_;
^
In file included from /usr/include/c++/5/memory:81:0,
from /usr/include/c++/5/thread:40,
from /home/hk/ws_CamVox/src/CamVox/isee-camvox/include/System.h:19,
from /home/hk/ws_CamVox/src/CamVox/isee-camvox/include/pointcloudmapping.h:4,
from /home/hk/ws_CamVox/src/CamVox/isee-camvox/src/pointcloudmapping.cc:1:
/usr/include/c++/5/bits/unique_ptr.h:49:28: note: declared here
template class auto_ptr;
^
/home/hk/ws_CamVox/src/CamVox/isee-camvox/src/pointcloudmapping.cc: In member function ‘void Camvox::PointCloudMapping::updatecloud()’:
/home/hk/ws_CamVox/src/CamVox/isee-camvox/src/pointcloudmapping.cc:168:31: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int i = 0; i < currentvpKFs.size(); i++)
^
/home/hk/ws_CamVox/src/CamVox/isee-camvox/src/pointcloudmapping.cc:170:35: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int j = 0; j < pointcloud.size(); j++)
^
/home/hk/ws_CamVox/src/CamVox/isee-camvox/src/pointcloudmapping.cc:172:44: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if (pointcloud[j].pcID == currentvpKFs[i]->mnFrameId)
^
CMakeFiles/Camvox.dir/build.make:315: recipe for target 'CMakeFiles/Camvox.dir/src/pointcloudmapping.cc.o' failed
make[2]: *** [CMakeFiles/Camvox.dir/src/pointcloudmapping.cc.o] Error 1
CMakeFiles/Makefile2:98: recipe for target 'CMakeFiles/Camvox.dir/all' failed
make[1]: *** [CMakeFiles/Camvox.dir/all] Error 2
Makefile:102: recipe for target 'all' failed
make: *** [all] Error 2
Converting vocabulary to binary
./tools/bin_vocabulary: error while loading shared libraries: libDBoW2.so: cannot open shared object file: No such file or directory

Missing MvErrorDefine.h

Thanks for your work.
When I tried compiling CamVox, it gave me several errors, one is

/catkin_ws/src/CamVox-main/isee-preprocessing/src/isee_synchronize.cpp:3:10: fatal error: isee_synchronize/CustomMsg.h: No such file or directory
 #include <isee_synchronize/CustomMsg.h>

I found the required file at
catkin_ws/src/CamVox-main/isee-preprocessing/build/devel/include/isee_synchronize
Thus I copied the required file to the include dir, and that solved it.
Another error is

/catkin_ws/src/CamVox-main/isee-preprocessing/include/camera.h:8:10: fatal error: MvErrorDefine.h: No such file or directory
 #include "MvErrorDefine.h"

I checked my workspace and there is no such file. Then I searched online and found it should be a Halcon camera file, but I don't know which file you are using.

About the IMU

Hello, in this CamVox solution, is the IMU data read directly from the IMU chip built into the Horizon lidar, or is it read from the Inertial Sense uINS?

Problem running rosrun

We have seen that your team's CamVox project achieves very good results, and we are very interested in it.
Thank you for open-sourcing it on GitHub; we have tried to reproduce it, but I ran into some problems during installation and running that I could not solve, and I would like to ask for advice.

I followed the installation steps, but I did not install the Livox-SDK or the MVS camera SDK, since I only want to play the rosbag to see the result.

I started roscore and then ran rosrun online camvox Vocabulary/ORBvoc.bin camvox/online/Livox.yaml, and a segmentation fault occurred.

After debugging, I found that the error occurs in Calibrating.cc, in the call cv::projectPoints(pts_3d, r_vec, t_vec, camera_matrix, distortion_coeff, pts_2d);
pts_3d, r_vec, t_vec, camera_matrix, and distortion_coeff all contain valid values, but the output pts_2d is nan, so the rest of the program cannot run.

Since the error already occurs at the rosrun step, I did not play the rosbag.

Thanks for your help.

Dataset problem.

Thank you for your excellent work.
I downloaded the data set CamVox.bag you uploaded, but when I execute the following command:
rosbag play CamVox.bag
Display:
[ INFO] [1622026026.629494531]: Opening CamVox.bag
[FATAL] [1622026026.629722490]: Required 'op' field missing
Then I execute the following command:
rosbag info CamVox.bag
Display:
ERROR bag unindexed: CamVox.bag. Run rosbag reindex.
Then I execute the command:
rosbag reindex CamVox.bag
After reindexing rosbag, I execute the command again:
rosbag info CamVox.bag
Display:
path: CamVox.bag
version: 2.0
duration: 6:11s (371s)
start: May 15 2020 16:14:29.30 (1589530469.30)
end: May 15 2020 16:20:41.00 (1589530841.00)
size: 15.0 GB
messages: 7436
compression: none [7436/7436 chunks]
types: sensor_msgs/Image [060021388200f6f0f447d0fcd9c64743]
topics: /isee_depth 3718 msgs : sensor_msgs/Image
        /isee_rgb 3718 msgs : sensor_msgs/Image
As shown above, the /IMU and /livox_lidar topics are missing.
Can you upload the intact rosbag data again?
Thank You!

Problem with './build_ros.sh' on Ubuntu 18.04 with ROS Melodic

I get an error like this; how can I solve it?

/usr/include/pcl-1.8/pcl/visualization/point_cloud_geometry_handlers.h:48:10: fatal error: vtkSmartPointer.h: No such file or directory
#include <vtkSmartPointer.h>
^~~~~~~~~~~~~~~~~~~
compilation terminated.
CMakeFiles/camvox.dir/build.make:118: recipe for target 'CMakeFiles/camvox.dir/src/ros_rgbd.cc.o' failed
make[2]: *** [CMakeFiles/camvox.dir/src/ros_rgbd.cc.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/camvox.dir/all' failed
make[1]: *** [CMakeFiles/camvox.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

Hard Synchronization

I am very interested in your work. We are working on hardware time synchronization. It seems you do this part of the work with an STM32. Could you provide this part of the code?

How can I use an r3live rosbag?

Thanks for releasing this excellent project! I want to compare CamVox with R3LIVE, but I think I need to generate the isee-depth image. I found the livox_lidar2depth function in isee_synchronize.cpp, but I don't know how to set the input parameters of createFromPointCloudWithFixedSize. I hope to get your help, thanks!
