
multical's Introduction

Multical

Multi-target spatiotemporal calibration for multiple IMUs, cameras and LiDARs (laser range finders)

The dataset is available here

Introduction

Multical is a calibration toolbox that simultaneously calibrates the spatial and temporal parameters between multiple IMUs, cameras and LiDARs (laser range finders). Nowadays cameras, LiDARs and inertial measurement units (IMUs) are widely used in mobile robots and autonomous cars, which are usually equipped with two or more cameras and LiDARs to make the fields of view (FoVs) as large as possible. Calibrating these sensors with existing methods and toolboxes is cumbersome, mainly for two reasons: a) due to size limitations, one calibration target usually cannot cover all sensors, and it is hard to calibrate all sensors simultaneously if some of them do not overlap with each other; b) there are methods for LiDAR-camera, LiDAR-IMU and camera-IMU calibration, but very few approaches (if any) can jointly calibrate multiple IMUs, cameras and LiDARs rather than in pairs. An approach that can calibrate multiple cameras, LiDARs and IMUs simultaneously will therefore benefit the robotics community.

To this end, this toolbox uses multiple calibration targets to cover the FoVs of all sensors at the same time. Compared with existing calibration libraries, the core features of Multical are:

  1. using non-repeating AprilTags so that multiple planar boards can serve as calibration targets and be told apart from one another. Moreover, the mounting and position of the targets are not restricted; users choose them according to the FoVs of the sensors to be calibrated. The following figure shows an example calibration scenario consisting of six AprilTag boards (a toy tag-lookup sketch follows this list).
    [figure: six-board calibration scenario]
  2. Multical is very flexible regarding the types and mounting of sensors. Besides one mandatory camera, users can add IMUs, LiDARs and more cameras as needed; not only full IMU-camera-LiDAR calibration but also reduced problems such as IMU-camera, camera-LiDAR and even multi-camera calibration are supported.
  3. Multical provides a series of algorithms to estimate priors for the extrinsic parameters, so users are not required to give any initial guesses, including the relative poses of the AprilTag boards.
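
To make feature 1 concrete, here is a toy Python sketch, not Multical's actual implementation, of how non-overlapping tag-ID ranges let a detector assign each observed tag to its board. The 6x6 grid and six boards follow the example scenario above; all names are hypothetical:

    # Toy sketch (hypothetical): boards are distinguished by giving each one
    # a non-overlapping, consecutive block of AprilTag IDs.
    TAGS_PER_BOARD = 6 * 6   # a 6x6 AprilGrid per board, as in the example
    NUM_BOARDS = 6           # six boards in the example scenario

    def board_of_tag(tag_id):
        """Return the index of the board owning this tag ID, or None."""
        board = tag_id // TAGS_PER_BOARD
        return board if 0 <= board < NUM_BOARDS else None

    def group_detections_by_board(detections):
        """Group (tag_id, corners) detections by board, so each board can be
        treated as an independent planar target."""
        boards = {}
        for tag_id, corners in detections:
            board = board_of_tag(tag_id)
            if board is not None:
                boards.setdefault(board, []).append((tag_id, corners))
        return boards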

Please find more information on the wiki pages of this repository.

This library is built on Kalibr: all of Kalibr's features, functions and tools are preserved, and Multical is developed by reusing and extending Kalibr's code.

Reference

The related paper has been published at IROS 2022; see the pre-print. Please consider citing our paper if you use this tool:

@conference {332,
	title = {Multical: Spatiotemporal Calibration for Multiple IMUs, Cameras and LiDARs},
	booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
	year = {In Press},
	publisher = {IEEE Press},
	organization = {IEEE Press},
	author = {Zhi, Xiangyang and Hou, Jiawei and Lu, Yiren and Kneip, Laurent and Schwertfeger, S{\"o}ren}
}

License (BSD)

BSD 3-Clause License

Copyright (c) 2021, Xiangyang Zhi. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

We heavily reuse the code of Kalibr; the following is Kalibr's license.

Copyright (c) 2014, Paul Furgale, Jérôme Maye and Jörn Rehder, Autonomous Systems Lab, ETH Zurich, Switzerland
Copyright (c) 2014, Thomas Schneider, Skybotix AG, Switzerland
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  3. All advertising materials mentioning features or use of this software must display the following acknowledgement: This product includes software developed by the Autonomous Systems Lab and Skybotix AG.

  4. Neither the name of the Autonomous Systems Lab and Skybotix AG nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE AUTONOMOUS SYSTEMS LAB AND SKYBOTIX AG ''AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL the AUTONOMOUS SYSTEMS LAB OR SKYBOTIX AG BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

multical's People

Contributors

bergercookie, bryant1410, burrimi, dymczykm, eggerk, fabianbl, ffurrer, floriantschopp, furgalep, goldbattle, hannessommer, helenol, jbohren, kaju-bubanja, kartikmohta, mbuerki, mfehr, mpitropov, nikolausdemmel, othlu, pizzoli, raghavkhanna, rehderj, simonlynen, skohlbr, vanurag, vladyslavusenko, weblucas, zacharytaylor, zhixy


multical's Issues

ValueError: need at least one array to concatenate

When I tried to use multical to calibrate one camera and one LiDAR on my own dataset, I ran into this problem. The command is:

multical_calibrate_sensors --cams multical_ws/hunter_sensor/cameras.yaml --lidars multical_ws/hunter_sensor/lidar.yaml --bag camera0_ouster_2024-03-29-17-19-21.bag --targets multical_ws/hunter_sensor/april_6x6.yaml

And the log is as follows:

Initializing calibration target:
Type: aprilgrid
Tags:
Rows: 6
Cols: 6
Size: 0.1205 [m]
Spacing 0.0369935 [m]
Initializing LiDAR rosbag dataset reader:
Dataset: camera0_ouster_2024-03-29-17-19-21.bag
Topic: /ouster/points
Number of messages: 333
Reading LiDAR data (/ouster/points)
Read 333000 LiDAR readings from 333 frames over -1810558291.4 seconds, and detect target by tapes from 0 frames
Initializing camera chain:
Camera chain - cam0:
Camera model: pinhole
Focal length: [1243.2159276212237, 1242.4174264569228]
Principal point: [971.5373680187001, 472.4919541350854]
Distortion model: equidistant
Distortion coefficients: [-0.03374305997414122, -0.43771560692756084, 0.8824088121967412, -0.5562322707174343]
baseline: no data available
Initializing camera rosbag dataset reader:
Dataset: camera0_ouster_2024-03-29-17-19-21.bag
Topic: /v4l2_camera0/image_raw
Number of images: 500
Extracting calibration target corners
Extracted corners for 494 images (of 500 images)

Building the problem
Spline order: 6
Pose knots per second: 100
Do pose motion regularization: True
xddot translation variance: 1000000.000000
xddot rotation variance: 1000000.000000
Bias knots per second: 5
Do bias motion regularization: True
Blake-Zisserman on reprojection errors -1
Acceleration Huber width (sigma): -1.000000
Gyroscope Huber width (sigma): -1.000000
Do time calibration: True
Max iterations: 30
Time offset padding: 0.030000

Estimating initial extrinsic parameters between primary camera and all other sensors
time interval threshold 0.36
The data gathering will break because of too large time interval (0.466620206833s)
Time span of gathered data is 1.93314290047s

Initializing a pose spline with 97 knots (50.000000 knots per second over 1.933143 seconds)
No initial extrinsic parameter is waited to estimate

Initializing a pose spline with 3338 knots (100.000000 knots per second over 33.383380 seconds)

Adding camera error terms (/v4l2_camera0/image_raw)
Added 494 camera error terms

Before Optimization

Normalized Residuals

Reprojection error (cam0): count: 62899, mean: 0.862456711376, median: 0.780583735608, std: 0.503506325411

Residuals

Reprojection error (cam0) [px]: count: 62899, mean: 0.862456711376, median: 0.780583735608, std: 0.503506325411

Optimizing...
Using the block_cholesky linear system solver
Using the levenberg_marquardt trust region policy
Using the block_cholesky linear system solver
Using the levenberg_marquardt trust region policy
Initializing
Optimization problem initialized with 3346 design variables and 66237 error terms
The Jacobian matrix is 245966 x 20065
[0.0]: J: 62732.3
[1]: J: 53591.4, dJ: 9140.91, deltaX: 0.0321084, LM - lambda:10 mu:2
[2]: J: 53586.4, dJ: 5.00457, deltaX: 0.0178859, LM - lambda:3.33333 mu:2
[3]: J: 53586.4, dJ: 0.00950571, deltaX: 0.00598802, LM - lambda:1.11111 mu:2
Using the block_cholesky linear system solver
Using the levenberg_marquardt trust region policy
Using the block_cholesky linear system solver
Using the levenberg_marquardt trust region policy
Initializing
Optimization problem initialized with 3345 design variables and 66237 error terms
The Jacobian matrix is 245966 x 20064
[0.0]: J: 53586.4
[1]: J: 53586.4, dJ: 0.0277072, deltaX: 0.000194194, LM - lambda:10 mu:2
Using the block_cholesky linear system solver
Using the levenberg_marquardt trust region policy
Using the block_cholesky linear system solver
Using the levenberg_marquardt trust region policy
Initializing
Optimization problem initialized with 3346 design variables and 66237 error terms
The Jacobian matrix is 245966 x 20065
[0.0]: J: 53586.4
[1]: J: 53586.4, dJ: 0.000594912, deltaX: 0.000142072, LM - lambda:10 mu:2
Traceback (most recent call last):
  File "/home/zhh2005757/multical_ws/devel/bin/multical_calibrate_sensors", line 15, in <module>
    exec(compile(fh.read(), python_script, 'exec'), context)
  File "/home/zhh2005757/multical_ws/src/multical-master/aslam_offline_calibration/kalibr/python/multical_calibrate_sensors", line 361, in <module>
    main()
  File "/home/zhh2005757/multical_ws/src/multical-master/aslam_offline_calibration/kalibr/python/multical_calibrate_sensors", line 302, in main
    iCal.optimize(maxIterations=parsed.max_iter, recoverCov=parsed.recover_cov)
  File "/home/zhh2005757/multical_ws/src/multical-master/aslam_offline_calibration/kalibr/python/kalibr_sensor_calibration/calibrator.py", line 88, in optimize
    lidar.filterLiDARErrorTerms(self.problem, 1.0)
  File "/home/zhh2005757/multical_ws/src/multical-master/aslam_offline_calibration/kalibr/python/kalibr_sensor_calibration/sensors_and_targets.py", line 309, in filterLiDARErrorTerms
    residuals = np.hstack([error_terms.error() for error_terms in obs.errorTerms])
  File "/home/zhh2005757/.local/lib/python2.7/site-packages/numpy/core/shape_base.py", line 340, in hstack
    return _nx.concatenate(arrs, 1)
ValueError: need at least one array to concatenate

Could you help me? Thanks a lot!
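
A plausible workaround, hedged and unofficial: the log shows the target was detected in 0 of 333 LiDAR frames (and a negative time span, which suggests the LiDAR data itself should be checked first), so an observation's errorTerms list can be empty and np.hstack fails. A minimal guard, sketched with the names visible in the traceback but with an assumed surrounding function, would skip such observations:

    import numpy as np

    # Hedged sketch of the guard one could add inside filterLiDARErrorTerms:
    # skip observations that collected no LiDAR error terms, so np.hstack is
    # never called on an empty list. `obs.errorTerms` and the hstack line are
    # taken from the traceback above; the function signature is hypothetical.
    def filter_lidar_error_terms(observations, threshold):
        for obs in observations:
            if not obs.errorTerms:  # target not detected in this LiDAR frame
                continue
            residuals = np.hstack([term.error() for term in obs.errorTerms])
            # ... existing filtering against `threshold` continues unchanged ...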

Questions regarding multiple lidar calibration.

Hi,

Thanks for this great work!
Are there any plans to release the paper you've been working on? I am quite interested in how the extrinsics between IMUs, cameras and LiDARs are found. I would really appreciate it if you could shed some light here.

Thanks!

Only LiDAR and Camera calibration case

I'm working on calibrating a single camera-LiDAR pair. I do not have an IMU. What changes are required to use the repository in that case?
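
For reference, the introduction above states that reduced problems such as camera-LiDAR calibration are supported, and the command in the first issue ran exactly this setup with no IMU arguments. With hypothetical file names, such a run might look like:

multical_calibrate_sensors --cams cameras.yaml --lidars lidar.yaml --bag recording.bag --targets targets.yaml

(The flags are the ones shown in the log above; cameras.yaml, lidar.yaml, recording.bag and targets.yaml are placeholders for your own files.)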

When I tried the example, I met a problem described as follows. How can I address this problem?

Initializing IMUs:
Model: calibrated
T_here_imu0
Update rate: 200.0
Accelerometer:
Noise density: 0.0101387794307
Noise density (discrete): 0.143383993768
Random walk: 0.0062768460716
Gyroscope:
Noise density: 0.000133668670023
Noise density (discrete): 0.00189036046012
Random walk: 0.00052621002386
Initializing imu rosbag dataset reader:
Dataset: multical_calibration_example_data.bag
Topic: /xsens_imu/data
Number of messages: 15200
Reading IMU data (/xsens_imu/data)
Read 15200 imu readings over 76.0 seconds
Initializing calibration target:
Type: aprilgrid
Tags:
Rows: 6
Cols: 6
Size: 0.08 [m]
Spacing 0.024 [m]
Initializing LiDAR rosbag dataset reader:
Dataset: multical_calibration_example_data.bag
Topic: /right_velodyne/velodyne_points
Number of messages: 753
Reading LiDAR data (/right_velodyne/velodyne_points)
Progress 10 / 753 Time remaining: 56s
Traceback (most recent call last):
  File "/home/xiaobobai/multical_workspace/devel/bin/multical_calibrate_sensors", line 15, in <module>
    exec(compile(fh.read(), python_script, 'exec'), context)
  File "/home/xiaobobai/multical_workspace/src/multical/aslam_offline_calibration/kalibr/python/multical_calibrate_sensors", line 361, in <module>
    main()
  File "/home/xiaobobai/multical_workspace/src/multical/aslam_offline_calibration/kalibr/python/multical_calibrate_sensors", line 265, in main
    lidar = sens.LiDAR(config, parsed, targets)
  File "/home/xiaobobai/multical_workspace/src/multical/aslam_offline_calibration/kalibr/python/kalibr_sensor_calibration/sensors_and_targets.py", line 153, in __init__
    self.loadLiDARDataAndFindTarget(config.getReservedPointsPerFrame())
  File "/home/xiaobobai/multical_workspace/src/multical/aslam_offline_calibration/kalibr/python/kalibr_sensor_calibration/sensors_and_targets.py", line 178, in loadLiDARDataAndFindTarget
    targetPose = find_target_pose(cloud, self.showPointCloud)
  File "/home/xiaobobai/multical_workspace/src/multical/aslam_offline_calibration/kalibr/python/kalibr_sensor_calibration/FindTargetFromPointCloud.py", line 105, in find_target_pose
    position = estimate_intersection(tape1_params, tape2_params)
  File "/home/xiaobobai/multical_workspace/src/multical/aslam_offline_calibration/kalibr/python/kalibr_sensor_calibration/FindTargetFromPointCloud.py", line 69, in estimate_intersection
    estimated_intersection = np.linalg.lstsq(a, b, rcond=None)[0]
  File "/usr/lib/python2.7/dist-packages/numpy/linalg/linalg.py", line 1953, in lstsq
    0, work, -1, iwork, 0)
TypeError: a float is required
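
This TypeError usually means the installed numpy predates 1.14, where np.linalg.lstsq first accepted rcond=None; older versions expect a float, and the system numpy bundled with Python 2.7 is typically that old. Assuming that is the cause here, a hedged one-line fix in FindTargetFromPointCloud.py is to pass the old default cutoff explicitly:

    # Hedged sketch: rcond=-1 reproduces the pre-1.14 default cutoff;
    # rcond=None is only understood by numpy >= 1.14 and triggers
    # "TypeError: a float is required" on older versions.
    estimated_intersection = np.linalg.lstsq(a, b, rcond=-1)[0]

Upgrading numpy (e.g. to the Python-2-compatible 1.16.x series visible in the open3d issue below) should also resolve it.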

open3d installation issue

I'm trying to install this repository to calibrate a camera with a LiDAR sensor. The open3d installation fails with the following error:

(isoEnv) suraj@suraj:~$ pip install open3d==0.9.0
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support pip 21.0 will remove support for this functionality.
Collecting open3d==0.9.0
Using cached open3d-0.9.0.0-cp27-cp27mu-manylinux1_x86_64.whl (4.9 MB)
Collecting ipywidgets
Using cached ipywidgets-7.8.0-py2.py3-none-any.whl (124 kB)
Collecting numpy
Using cached numpy-1.16.6-cp27-cp27mu-manylinux1_x86_64.whl (17.0 MB)
Collecting notebook
Using cached notebook-5.7.16-py2.py3-none-any.whl (9.6 MB)
Collecting widgetsnbextension
Using cached widgetsnbextension-3.6.5-py2.py3-none-any.whl (1.6 MB)
Collecting traitlets>=4.3.1
Using cached traitlets-4.3.3-py2.py3-none-any.whl (75 kB)
ERROR: Could not find a version that satisfies the requirement comm>=0.1.3 (from ipywidgets->open3d==0.9.0) (from versions: 0.0.1)
ERROR: No matching distribution found for comm>=0.1.3 (from ipywidgets->open3d==0.9.0)

I also tried upgrading to Python 3 to get the open3d installation working, but then the project itself does not work, so I had to return to Python 2.
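
The failing requirement, comm>=0.1.3, is pulled in by the ipywidgets 7.8.0 that pip resolves for open3d, and no compatible comm release exists for Python 2.7 (the log shows only 0.0.1 available). One possible workaround, an untested assumption rather than a verified recipe, is to pre-install an older ipywidgets so pip never resolves to 7.8.0:

pip install "ipywidgets<7.7"
pip install open3d==0.9.0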

Number of AprilGrids required

How do I determine the number of AprilGrids required for calibration? I'm using this method to calibrate one LiDAR and one camera.

Paper availability

Hello.
This work is very intriguing.
I understand it was published at IROS 2022.

Is there a pre-print version we can read?
