
ACSC

Automatic extrinsic calibration for non-repetitive scanning solid-state LiDAR and camera systems.

pipeline

System Architecture

pipeline

1. Dependencies

Tested with Ubuntu 16.04 64-bit and Ubuntu 18.04 64-bit.

  • ROS (tested with kinetic / melodic)

  • Eigen 3.2.5

  • PCL 1.8

  • python 2.X / 3.X

  • python-pcl

  • opencv-python (>= 4.0)

  • scipy

  • scikit-learn

  • transforms3d

  • pyyaml

  • mayavi (optional, for debug and visualization only)

2. Preparation

2.1 Download and installation

Use the following command to download this repo.

Notice: the submodules must be cloned as well.

git clone --recurse-submodules https://github.com/HViktorTsoi/ACSC

Compile and install the normal-diff segmentation extension.

cd /path/to/your/ACSC/segmentation

python setup.py install
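To confirm the extension landed in the right Python environment, a quick check is to look for the module that calibration.py imports. This snippet is just a sketch, not part of the repo:

```python
# Sanity check: calibration.py does `import segmentation_ext`,
# so the compiled extension must be importable from this interpreter.
import importlib.util

spec = importlib.util.find_spec("segmentation_ext")
if spec is None:
    print("segmentation_ext is NOT importable; re-run `python setup.py install`")
else:
    print("segmentation_ext found at", spec.origin)
```

Run it with the same interpreter you plan to use for calibration.py; mixed Python 2/3 environments are a common cause of import errors here (see the issues below).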

We developed a practical ROS tool for convenient calibration-data collection; it automatically organizes the data into the format described in 3.1. We strongly recommend using this tool to simplify the calibration process.

It's fine if you don't have ROS or don't want to use the provided tool: just manually arrange the images and point clouds into the format of 3.1.

First, enter the directory of the collection tool and run the following commands:

cd /path/to/your/ACSC/ros/livox_calibration_ws

catkin_make

source ./devel/setup.zsh # or source ./devel/setup.sh

File explanation

  • ros/: The data collection tool directory (A ros workspace);

  • configs/: The directory used to store configuration files;

  • calibration.py: The main code for solving extrinsic parameters;

  • projection_validation.py: The code for visualization and verification of calibration results;

  • utils.py: utilities.

2.2 Preparing the calibration board

chessboard

We use a common checkerboard as the calibration target.

Notice: to ensure a high success rate of calibration, it is best to meet the following requirements when making and placing the calibration board:

  1. The side length of the black/white squares in the checkerboard should be >= 8 cm;

  2. The checkerboard should be printed on white paper and pasted on a rigid rectangular surface that will not deform;

  3. There must be no extra borders around the checkerboard;

  4. The checkerboard should be placed on a thin monopod, or suspended in the air with a thin wire, and the support should be kept as stable as possible during the calibration process (the point clouds need to be integrated over time);

  5. When placing the checkerboard on the base, the lower edge of the board should be parallel to the ground;

  6. There should be no obstructions within a 3 m radius of the calibration board.

Checkerboard placement

calibration board placement

Sensor setup

calibration board placement

3. Extrinsic Calibration

3.1 Data format

The images and LiDAR point clouds data need to be organized into the following format:

|- data_root
|-- images
|---- 000000.png
|---- 000001.png
|---- ......
|-- pcds
|---- 000000.npy
|---- 000001.npy
|---- ......
|-- distortion
|-- intrinsic

Here, the images directory contains images of the checkerboard at different placements, recorded by the camera;

The pcds directory contains the point clouds corresponding to the images; each point cloud is a numpy array with shape N x 4, where each row holds the x, y, z and reflectance values of one point;

distortion and intrinsic hold the distortion parameters and intrinsic parameters of the camera, respectively (described in detail in 3.3).
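A minimal loader sketch for this layout (the helper name load_frame and the shape check are my own additions, not part of the repo; intrinsic/distortion parsing is covered in 3.3):

```python
import os
import numpy as np

def load_frame(data_root, idx):
    """Load one image/point-cloud pair from the 3.1 layout.

    Returns the image path and an (N, 4) array of x, y, z, reflectance.
    """
    name = "{:06d}".format(idx)  # files are zero-padded to six digits
    image_path = os.path.join(data_root, "images", name + ".png")
    pcd = np.load(os.path.join(data_root, "pcds", name + ".npy"))
    assert pcd.ndim == 2 and pcd.shape[1] == 4, "expected an N x 4 point array"
    return image_path, pcd
```

If the assertion fires on your own recordings, the point clouds were not saved in the N x 4 (x, y, z, reflectance) layout that calibration.py expects.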

Sample Data

The sample solid-state LiDAR point clouds, images, and camera intrinsic data (375.6 MB) can be downloaded from:

Google Drive | BaiduPan (Code: fws7)

If you are testing with the provided sample data, you can directly jump to 3.4.

3.2 Data collection for your own sensors

First, make sure you can receive the data topics from the Livox LiDAR ( sensor_msgs.PointCloud2 ) and the camera ( sensor_msgs.Image );

Run the launch file of the data collection tool:

mkdir /tmp/data

cd /path/to/your/ACSC/ros/livox_calibration_ws
source ./devel/setup.zsh # or source ./devel/setup.sh

roslaunch calibration_data_collection lidar_camera_calibration.launch \
    config-path:=/home/hvt/Code/livox_camera_calibration/configs/data_collection.yaml \
    image-topic:=/camera/image_raw \
    lidar-topic:=/livox/lidar \
    saving-path:=/tmp/data

Here, config-path is the path of the configuration file; we usually use configs/data_collection.yaml and leave it at the default.

The image-topic and lidar-topic are the topic names on which we receive camera images and LiDAR point clouds, respectively.

The saving-path is the directory where the calibration data is temporarily stored.

After launching, you should see the following two interfaces: the real-time camera image, and the bird's-eye projection of the LiDAR point cloud.

If either interface is not displayed properly, check your image-topic and lidar-topic to verify that the data is being received normally.

GUI

Place the checkerboard, and observe its position on the LiDAR bird's-eye view to ensure that it is within the FOV of both the LiDAR and the camera.

Then press <Enter> to record the data; you will need to wait a few seconds while the point cloud is collected and integrated. Once the screen prompts that data recording is complete, change the position of the checkerboard and record the next set of data.

To ensure robust calibration results, the placement of the checkerboard should meet the following requirements:

  1. The checkerboard should be at least 2 meters away from the LiDAR;

  2. The checkerboard should be placed in at least 6 positions: the left, middle, and right sides at short range (about 4 m), and the left, middle, and right sides at long range (about 8 m);

  3. In each position, the calibration board should be recorded in 2~3 different orientations.

When all calibration data is collected, type Ctrl+c in the terminal to close the calibration tool.

At this point, you should see a newly generated data folder under the saving-path we specified, with images saved in images and point clouds saved in pcds:

collection_result

3.3 Camera intrinsic parameters

There are many tools for camera intrinsic calibration; we recommend the Camera Calibrator App in MATLAB, or the Camera Calibration Tools in ROS.

Write the camera intrinsic matrix

fx s x0
0 fy y0
0  0  1

into the intrinsic file under data-root. The format should be as shown below:

intrinsic

Write the camera distortion vector

k1  k2  p1  p2  k3

into the distortion file under data-root. The format should be as shown below:

dist
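A sketch of producing these two files with numpy, assuming they are whitespace-separated plain text as in the screenshots above. All numeric values here are hypothetical placeholders; substitute the output of your own intrinsic calibration:

```python
import numpy as np

# Hypothetical intrinsics; replace with the values from your calibrator.
fx, fy = 1215.3, 1214.7    # focal lengths (pixels)
x0, y0 = 1047.2, 745.1     # principal point (pixels)
s = 0.0                    # skew

K = np.array([[fx,  s,  x0],
              [0.0, fy, y0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.35, 0.12, 0.001, -0.0005, 0.0])  # k1 k2 p1 p2 k3

# Write as whitespace-separated plain text (assumed on-disk layout).
np.savetxt("intrinsic", K)                      # 3 x 3, one row per line
np.savetxt("distortion", dist.reshape(1, -1))   # one row: k1 k2 p1 p2 k3
```

Run this from data-root so the files land next to images/ and pcds/.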

3.4 Extrinsic Calibration

When you have completed all the steps in 3.1 ~ 3.3, the data-root directory should contain the following content:

data

If any files are missing, please confirm whether all the steps in 3.1~3.3 are completed.

Modify the calibration configuration file in the configs directory; here we take sample.yaml as an example:

  1. Change root under data to the root directory of the data collected in 3.1~3.3. In our example, root should be /tmp/data/1595233229.25;

  2. Change the chessboard parameters under data: set W and H to the number of inner corners of your checkerboard (note that this is not the number of squares, but the number of inner corners; for the checkerboard in 2.2, W=7, H=5), and set GRID_SIZE to the side length of a single white/black square of the checkerboard (in meters).
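For reference, the fields named above sit in sample.yaml roughly like this. The nesting of data/chessboard is inferred from the parameter names used in the text; any other keys in the real file are omitted:

```yaml
# Illustrative excerpt; only the fields discussed above are shown.
data:
  root: /tmp/data/1595233229.25   # data root collected in 3.1~3.3
  chessboard:
    W: 7              # inner corners along the long side
    H: 5              # inner corners along the short side
    GRID_SIZE: 0.08   # side length of one square, in meters
```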

Then, run the extrinsic calibration code:

python calibration.py --config ./configs/sample.yaml

After calibration, the extrinsic parameter matrix will be written into the parameter/extrinsic file under data-root.

data

4. Validation of result

After the extrinsic calibration of step 3, run projection_validation.py to check whether the calibration is accurate:

python projection_validation.py --config ./configs/sample.yaml

It displays the point cloud reprojected onto the image with the solved extrinsic parameters, the RGB-colorized point cloud, and a visualization of the detected 3D corners reprojected onto the image.

Note that, the 3D point cloud colorization results will only be displayed if mayavi is installed.
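Under the hood, the reprojection is the standard pinhole model. Below is a minimal numpy sketch, assuming the solved extrinsics are available as a 4 x 4 LiDAR-to-camera transform and ignoring lens distortion; projection_validation.py itself handles the repo's actual file formats and distortion:

```python
import numpy as np

def project_points(pc_xyz, T_cam_lidar, K):
    """Project LiDAR points to pixel coordinates with a pinhole model.

    pc_xyz:      (N, 3) points in the LiDAR frame
    T_cam_lidar: 4 x 4 LiDAR-to-camera transform (assumed layout)
    K:           3 x 3 camera intrinsic matrix
    Lens distortion is ignored in this sketch.
    """
    homog = np.hstack([pc_xyz, np.ones((pc_xyz.shape[0], 1))])
    cam = (T_cam_lidar @ homog.T)[:3]   # points in the camera frame
    cam = cam[:, cam[2] > 0]            # keep points in front of the camera
    uvw = K @ cam
    return (uvw[:2] / uvw[2]).T         # perspective divide -> (M, 2) pixels
```

Coloring the point cloud is the inverse lookup: for each projected point that falls inside the image bounds, take the RGB value at its pixel.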

Reprojection of Livox Horizon Point Cloud

data

Reprojection Result of Livox Mid100 Point Cloud

data

Reprojection Result of Livox Mid40 Point Cloud

data

Colorized Point Cloud

data

Detected Corners

data data

Appendix

I. Tested sensor combinations

No. LiDAR Camera Chessboard Pattern
1 LIVOX Horizon MYNTEYE-D 120 7x5, 0.08m
2 LIVOX Horizon MYNTEYE-D 120 7x5, 0.15m
3 LIVOX Horizon AVT Mako G-158C 7x5, 0.08m
4 LIVOX Horizon Pointgrey CM3-U3-31S4C-CS 7x5, 0.08m
5 LIVOX Mid-40 MYNTEYE-D 120 7x5, 0.08m
6 LIVOX Mid-40 MYNTEYE-D 120 7x5, 0.15m
7 LIVOX Mid-40 AVT Mako G-158C 7x5, 0.08m
8 LIVOX Mid-40 Pointgrey CM3-U3-31S4C-CS 7x5, 0.08m
9 LIVOX Mid-100 MYNTEYE-D 120 7x5, 0.08m
10 LIVOX Mid-100 MYNTEYE-D 120 7x5, 0.15m
11 LIVOX Mid-100 AVT Mako G-158C 7x5, 0.08m
12 LIVOX Mid-100 Pointgrey CM3-U3-31S4C-CS 7x5, 0.08m
13 RoboSense ruby MYNTEYE-D 120 7x5, 0.08m
14 RoboSense ruby AVT Mako G-158C 7x5, 0.08m
15 RoboSense ruby Pointgrey CM3-U3-31S4C-CS 7x5, 0.08m
16 RoboSense RS32 MYNTEYE-D 120 7x5, 0.08m
17 RoboSense RS32 AVT Mako G-158C 7x5, 0.08m
18 RoboSense RS32 Pointgrey CM3-U3-31S4C-CS 7x5, 0.08m

II. Paper

ACSC: Automatic Calibration for Non-repetitive Scanning Solid-State LiDAR and Camera Systems

@misc{cui2020acsc,
      title={ACSC: Automatic Calibration for Non-repetitive Scanning Solid-State LiDAR and Camera Systems}, 
      author={Jiahe Cui and Jianwei Niu and Zhenchao Ouyang and Yunxiang He and Dian Liu},
      year={2020},
      eprint={2011.08516},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

III. Known Issues

Updating...

acsc's People

Contributors

hviktortsoi

acsc's Issues

AttributeError: 'module' object has no attribute 'findChessboardCornersSB'

Traceback(most recent call last):
File "calibration.py", line 943, in
calibration(keep_list=None)
File "calibration.py", line 873, in calibration
corners_world, final_cost, corners_image = detection_result[idx].get()
File "/usr/lib/python2.7/multiprocessing/pool.py", line 572, in get
raise self._value
AttributeError: 'module' object has no attribute 'findChessboardCornersSB'

Error installing mayavi on Python 2.7

Hello, did you use Python 2.7 when visualizing the 3D point clouds with mayavi? I ran into the following problem when installing mayavi with pip:
Requirement already satisfied: mayavi in ./anaconda3/envs/acsc2/lib/python2.7/site-packages/mayavi-4.5.0-py2.7-linux-x86_64.egg (4.5.0)
Requirement already satisfied: apptools in ./anaconda3/envs/acsc2/lib/python2.7/site-packages (from mayavi) (5.1.0)
ERROR: Package 'apptools' requires a different Python: 2.7.18 not in '>=3.6'
Have you encountered this kind of problem? If not, do you have a tutorial for installing mayavi on Python 2.7? Thank you!

Suggestions for revising the wording

Many thanks for this impressive research!

I have a small suggestion. In Section 2 of the README, among the notes on the calibration process, there is this description of the calibration board needing no border:

There should be no extra borders around the checkerboard

My suggestion is to change should to must, or some other more emphatic word.

I noticed that #10 also contains a discussion of this issue. I did not pay much attention to it when I first ran my experiments. When visualizing the calibration results, the reprojected points were close to the actual image corners when the board was near the sensor rig; but when the board was far from the sensor rig, simply zooming into the reprojection image revealed an obvious offset. Because of the word should, the calibration board was the last thing I suspected 😂; only after ruling out many other causes did I conclude that the source of error was the board's border. After switching to a borderless calibration board, the problem was solved.

So I think that if the wording here were changed, more careless people like me could avoid this mistake 😅

setup.py install fails

Great work!
When configuring locally, running python setup.py install produces the following problem:
-- looking for PCL_COMMON
-- looking for PCL_KDTREE
-- looking for PCL_OCTREE
-- looking for PCL_SEARCH
-- looking for PCL_IO
-- looking for PCL_SAMPLE_CONSENSUS
-- looking for PCL_FILTERS
-- looking for PCL_GEOMETRY
-- looking for PCL_FEATURES
-- looking for PCL_SEGMENTATION
-- looking for PCL_SURFACE
-- looking for PCL_REGISTRATION
-- looking for PCL_RECOGNITION
-- looking for PCL_KEYPOINTS
-- looking for PCL_VISUALIZATION
-- looking for PCL_PEOPLE
-- looking for PCL_OUTOFCORE
-- looking for PCL_TRACKING
-- looking for PCL_APPS
-- Could NOT find PCL_APPS (missing: PCL_APPS_LIBRARY)
-- looking for PCL_MODELER
-- looking for PCL_IN_HAND_SCANNER
-- looking for PCL_POINT_CLOUD_EDITOR
-- Configuring done
-- Generating done
-- Build files have been written to: /home/ACSC/segmentation/build/temp.linux-x86_64-3.7
make[2]: *** No rule to make target '/usr/lib/x86_64-linux-gnu/libproj.so', needed by '../lib.linux-x86_64-3.7/segmentation_ext.so'. Stop.
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/segmentation_ext.dir/all' failed
make[1]: *** [CMakeFiles/segmentation_ext.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

What exactly is the cause of "-- Could NOT find PCL_APPS (missing: PCL_APPS_LIBRARY)"?

sample.yaml DEBUG

Hello, thanks for your work, but I have a problem.
I want to visualize the 3D detection result, intensity distribution, and camera calibration,
so I changed DEBUG to 2 in sample.yaml,
but after that calibration.py doesn't work.
Could I get your help?

Calibration controller node process dies immediately

I have successfully built the ACSC workspace and instead of passing parameters, have configured the lidar_camera_calibration.launch file as follows:

<arg name="config-path" default="/home/visionarymind/livox_ws/src/ACSC/configs/data_collection.yaml"/>
<arg name="image-topic" default="/rgb/image_raw"/>
<arg name="lidar-topic" default="/livox/lidar"/>
<!-- <arg name="lidar-topic" default="/velodyne_points"/> -->
<arg name="saving-path" default="/home/visionarymind/Documents/calibration"/>
<arg name="data-tag" default="''"/>

I then do the following:

  1. Launch the camera driver node (Azure Kinect).
  2. Launch the Livox Avia driver node.
  3. Execute roslaunch calibration_data_collection lidar_camera_calibration.launch.

This immediately produces the following error (in red):

[calibration_controller_node-2] process has died [pid 15643, exit code -11, cmd /home/visionarymind/livox_ws/src/ACSC/ros/livox_calibration_ws/src/calibration_data_collection/scripts/calibration_controller_node.py --config-file /home/visionarymind/livox_ws/src/ACSC/configs/data_collection.yaml --data-saving-path /home/visionarymind/Documents/calibration --overlap 50 --data-tag --image-topic /rgb/image_raw --lidar-topic /livox/lidar --lidar-id -1 __name:=calibration_controller_node __log:=/home/visionarymind/.ros/log/1de314e4-56ea-11eb-8688-48b02d2b8961/calibration_controller_node-2.log]. log file: /home/visionarymind/.ros/log/1de314e4-56ea-11eb-8688-48b02d2b8961/calibration_controller_node-2*.log

Any ideas what could be the problem?

Why not extract chessboard from intensity map directly?

Thanks for your contribution. I'm interested in LiDAR-RGB calibration and I noticed your excellent work. One question about your 3D point extraction method: have you ever tried to extract 3D points from the intensity map and get their depth from the depth map? I'm working on this now; if you have tried it, I'd appreciate you sharing the result with me, whether it's good or not. Thanks.

When I run python calibration.py --config ./configs/sample.yaml, the error below is raised. I haven't found a solution; do you know what causes it?

Calculating frame: 0 / 21
Traceback (most recent call last):
File "calibration.py", line 943, in
calibration(keep_list=None)
File "calibration.py", line 873, in calibration
corners_world, final_cost, corners_image = detection_result[idx].get()
File "/usr/lib/python2.7/multiprocessing/pool.py", line 572, in get
raise self._value
AttributeError: 'module' object has no attribute 'PointCloud_PointXYZI'

What is the material of the calibration board?

Hello,
I tried to print the calibration board on common PP paper, which is usually used for printing posters,
but I found that the reflectance of the black and white regions is nearly the same.
What is the actual material I should adopt?
Thanks a lot.

About the calibration chessboard

Hi @HViktorTsoi Thank you very much for the project. I have a question related to the checkerboard.
In README, you have mentioned that There should be no extra borders around the checkerboard.
Can you elaborate on the reason why it is important to remove the extra borders of the checkerboard?

The point cloud is unable to detect the checkerboard

Hello, my LiDAR is a 32-beam mechanical one. I manually provided the ROI of the LiDAR checkerboard, but the checkerboard still cannot be detected by the region_growing_kernel function. Is it because the LiDAR point cloud is too sparse? Is it possible to achieve calibration-board detection by modifying some of the configuration parameters in the YAML? I don't quite understand this part of the principle. Thanks.

Here is a picture of my LiDAR checkerboard:
1710406435934

Trouble importing segmentation_ext into Python 3

When installing with setup.py in the segmentation directory, I run into this issue when importing segmentation_ext in Python 3.6.9:

ImportError: dynamic module does not define module export function (PyInit_segmentation_ext)

It works just fine in Python 2.7. I have already tried setting the target version and the python executable in the CMakeList.txt, but it did not work

ValueError: vector::_M_default_append on segmentation_ext.region_growing_kernel

We have been using this tool on multiple projects, and it has been working splendidly. Recently, we switched to new laptops that have Ubuntu 20.04 and Python 3.8. I have gotten all libraries and am able to import them into the Python interpreter, however, there seems to be an issue, either with the new libraries or with the size of our dataset (we are now using 6K images for calibration).

Everything works as usual up until the point in the calibration script (calibration.py) where the "pc" variable is assigned to utils.voxelize(pc, voxel_size=configs['calibration']['RG_VOXEL']). This downsamples a 1,173,359 point cloud to 90,052 points. As soon as segmentation_ext.region_growing_kernel is run, calibration.py spawns 5 additional threads, and the following error is immediately thrown:

Calculating frame: 0 / 22
multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/home/visionarymind/anaconda3/lib/python3.8/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "calibration.py", line 780, in corner_detection_task
    ROI_pc = locate_chessboard(pc)
  File "calibration.py", line 392, in locate_chessboard
    segmentation = segmentation_ext.region_growing_kernel(
ValueError: vector::_M_default_append
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "calibration.py", line 943, in <module>
    calibration(keep_list=None)
  File "calibration.py", line 873, in calibration
    corners_world, final_cost, corners_image = detection_result[idx].get()
  File "/home/visionarymind/anaconda3/lib/python3.8/multiprocessing/pool.py", line 771, in get
    raise self._value
ValueError: vector::_M_default_append

I have heard of this happening before with very large datasets, but a 90k point cloud should not be a problem. Would you have any idea how to get around this? It happens even if we set up a Conda environment with Python 2.7 and allow it to solve all dependencies.

Perhaps you could offer a pre-configured Conda environment YAML file that we could use to ensure all the right libraries are installed? I do not think this is a problem with library contention, but I want to make sure. I have already spent nearly a week attempting to get this working with variant library setups.

about ROI

hello, firstly I appreciate about your project
However I had a problem that when I calibrate while using this module, I don't know how to get ROIs files value.
Could I get your help?

Memory Error

Hi, when I run calibration.py on the sample data provided, the following error occurs. Please have a look into this.

[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.
[initCompute] Failed to allocate 156063007204055648 indices.

Calculating frame: 0 / 21
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/lukkuphi/anaconda3/envs/acsc/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "calibration.py", line 786, in corner_detection_task
ROI_pc = locate_chessboard(pc)
File "calibration.py", line 402, in locate_chessboard
configs['calibration']['RG_CURV_TH']
MemoryError: std::bad_alloc
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "calibration.py", line 964, in
calibration(keep_list=None)
File "calibration.py", line 892, in calibration
corners_world, final_cost, corners_image = detection_result[idx].get()
File "/home/lukkuphi/anaconda3/envs/acsc/lib/python3.6/multiprocessing/pool.py", line 644, in get
raise self._value
MemoryError: std::bad_alloc

Segmentation fault in segmentation_ext.region_growing_kernel()

I compiled segmentation_ext with conda Python 3.7, and importing it works fine.
With multithreading disabled, running python calibration.py --config ./configs/sample.yaml reaches
pc = utils.voxelize(pc, voxel_size=configs['calibration']['RG_VOXEL'])
''' checking pc.size: it is 296116, so there is data '''
# region growing segmentation
segmentation = segmentation_ext.region_growing_kernel(pc,.......)
and at this point the program segfaults immediately (core dumped). Could the problem be in the compiled segmentation_ext?

PCL version problem

When I run python calibration.py --config ./configs/sample.yaml, the following error appears:
AttributeError: 'module' object has no attribute 'PointCloud_PointXYZI'
Which version of PCL did you install?

The problem of dataset

Hi:
I used the package you provided and followed the step you told me. But the problem was that the data from my LiDAR could not work with your code:
image
The LiDAR I used was: Ouster LiDAR.
Can you tell me which part I should modify? I changed the code in calibration_controller_node.py (the "parse point format" section, 解析点格式), but I do not know whether I have to modify other parts of the code as well. Could you please give me some ideas? Thanks, and I look forward to your reply!

The best and most convenient method I have used so far for joint calibration of Livox LiDAR and camera!

Thanks to the author for open-sourcing this code!

I have tried six methods for joint calibration of Livox LiDAR and camera, including the two official open-source tools from Livox. Judging by results, board-based methods are still the most reliable; many automatic calibration methods place very high demands on the capture environment, and in practice it is hard to find an ideal site to complete the calibration. Recent versions of MATLAB also ship an integrated LiDAR-camera joint calibration tool, but it is somewhat cumbersome to use and its results are not stable either.

Since the author's open-source code calibrates so well and perfectly solved my problem, I am happy to share some of my experience using it.

Calibrating multiple devices multiple times, every run succeeded on the first attempt.
If the calibration results differ slightly between runs (which rarely happens), you can save the 3D and 2D calibration-board points generated by the code, then use Livox's official manual calibration tool: directly replace the two txt files that store the 3D and 2D points, and use its joint optimization of extrinsics and intrinsics to reach an ideal calibration result.

System environment: Ubuntu 20.04, Python 3.8, ROS1 Noetic
LiDAR: Livox AVIA
Camera: 12 MP / 8 MP
A high-quality black/white checkerboard and a single-pole upright stand

The environment I set up:
pip install numpy==1.23
pip install scipy
pip install scikit-learn
pip install rospy
pip install rospkg
pip install pyyaml
pip install transforms3d
sudo apt-get install ros-noetic-ros-numpy

Create the symlink:
ln -s /usr/bin/python3 /usr/bin/python

Since Python 3 is used, the relevant source code needs some modifications.
My camera has no corresponding ROS driver, so I captured images with cheese and commented out the image-capture code.

cd /path/to/your/ACSC/ros/livox_calibration_ws/src/calibration_data_collection/scripts

Open the only .py file in that folder and modify lines 11, 40, 41, 310, 328, and 344:

Line 11: change import thread to import _thread

Lines 40/41: comment out line 41 and uncomment line 40

Line 310: comment this line out

Line 328: change thread.start_new_thread to _thread.start_new_thread

Line 344: comment this line out

Modify the launch file in ROS:
cd path/to/your/ACSC/ros/livox_calibration_ws/src/calibration_data_collection/launch/lidar_camera_calibration.launch
Set config-path to the path of data_collection.yaml

Then build with catkin_make; following the author's tutorial, you can calibrate. I collected about 30 sets of data.

Finally, a calibration result image:
Uploading 0.jpg…

Unable to import segmentation_ext

Hello, I ran setup.py to generate segmentation_ext.so, but in my code I don't know how to import segmentation_ext. Could the author please advise? Thanks.

Unable to get extrinsics calibration between Avia and UHD DSLR

I have been running multiple tests these past few days, and I cannot get any working extrinsics calibration with our dataset. Please, if you have a moment, indicate what is wrong with these images:

image

We have 24 of them at 4, 7, and 8-meter distances, covering the entire FOV of both the Avia and our DSLR camera. The camera generates hi-res images (above) at a resolution of 5184x2920. The images shown here have a 0.5 and 1-inch white border; however, we have also used calibration boards with no border. Neither produces results. Corners are found in the images but not in the point cloud. Here is one of the clouds:

image

There are more than enough points to perform RANSAC without loss of the small plane. I will be trying once again tonight, this time with the checkerboard positioned farther up on the stand so that it covers the top. I shouldn't think that the small point at the top would count as an "obstruction", but perhaps this is the problem.

是否适用于机械式机械雷达

请问这种方法的代码适用于机械式激光雷达吗?以及棋盘格标定板材料是特殊定制的吗还是普通的打印的?谢谢!!!

Usage

I have a point cloud, extrinsic parameters, intrinsic parameter as well as the projection matrix. How can I use it to project the point cloud to what the camera sees?

运行报错

用pyhton3 运行 python3 calibration.py --config ./configs/sample.yaml报错
Traceback (most recent call last):
File "calibration.py", line 27, in
import segmentation_ext
ImportError: dynamic module does not define module export function (PyInit_segmentation_ext)
python2 运行python calibration.py --config ./configs/sample.yaml报错
Traceback (most recent call last):
File "calibration.py", line 27, in
import segmentation_ext
ImportError: No module named segmentation_ext
请问怎么解决

Tele-15 and Hikvision camera calibration possible?

I am trying to calibrate a Livox Tele-15 and a Hikvision iDS camera (extrinsics). Is it possible with this method? I saw that this combination has not been tested.

The field of view is just 15°, so the calibration target needs to be kept further away. Would this cause a problem?

a large FOV calibration

The fisheye's FOV is 180 degrees, and we plan to replace it with a 197-degree camera later. I wonder if this method would still work with such a large FOV? Your prompt reply would be very much appreciated.

Sample data on BaiduPan is unavailable

Thank you for your great work! Sample data on BaiduPan is unavailable now, I'll appreciate it if you would like to update the link, so I can download the sample data.

After running python projection_validation.py --config ./configs/sample.yaml, the command line prints a few lines and then stops

Output:
Localization done. min cost=7.189008491164129

Localization done. min cost=10.974828754350037

Localization done. min cost=7.696183526641846

Localization done. min cost=5.486117499095273

Localization done. min cost=8.745949616974988

Localization done. min cost=6.122541215581003

Localization done. min cost=4.3667670171396775

Localization done. min cost=10.33130107718595

Localization done. min cost=7.663290353740171

Localization done. min cost=4.713727093038523

Localization done. min cost=10.030486626383299

Localization done. min cost=8.256724505393125

Localization done. min cost=19.50139201376781

Localization done. min cost=11.691634027177853

Localization done. min cost=8.065487603386272

Localization done. min cost=22.969422421889817

Localization done. min cost=10.21475545242713

Localization done. min cost=5.366466291371865

Localization done. min cost=11.561646084577959

Localization done. min cost=9.697920791823435

Localization done. min cost=18.51885867076172

python-pcl and the other libraries are installed correctly, but my PCL version is 1.9.1 and my VTK version is 8.1; I don't know whether that has any effect.

With pcds files I recorded myself, the following camera-calibration error occurs, but with your pcds files it runs fine. What could be the reason?

python3 calibration.py --config ./configs/sample.yaml

Calculating frame: 0 / 6
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "calibration.py", line 780, in corner_detection_task
ROI_pc = locate_chessboard(pc)
File "calibration.py", line 389, in locate_chessboard
pc = utils.voxelize(pc, voxel_size=configs['calibration']['RG_VOXEL'])
File "/home/zehao/catkin_ws/src/ACSC/utils.py", line 129, in voxelize
cloud.from_array(pc.astype(np.float32))
File "pcl/pxi/PointCloud_PointXYZI_180.pxi", line 158, in pcl._pcl.PointCloud_PointXYZI.from_array
AssertionError
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "calibration.py", line 940, in
calibration(keep_list=None)
File "calibration.py", line 870, in calibration
corners_world, final_cost, corners_image = detection_result[idx].get()
File "/usr/lib/python3.6/multiprocessing/pool.py", line 644, in get
raise self._value
AssertionError
