Working C++ code and Python bindings for the original ScanContext repository, in which only the MATLAB scripts work.
- Clone this repository to a local directory.
- Run `make install` from a terminal at the root of this directory.
- Run `scan_context_pipeline --help` to see how to use the pipeline.
- You can select an existing dataloader for common datasets such as KITTI, MulRan, Apollo, or Newer College, or write a new one for any other dataset following the same pattern as the provided dataloaders.
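As a rough illustration of what a custom dataloader might look like, here is a minimal sketch. The class name, file layout, and point format below are hypothetical; the authoritative interface is whatever pattern the provided dataloaders in this repository follow.

```python
from pathlib import Path
from typing import List, Tuple

Point = Tuple[float, float, float]


class MyDatasetLoader:
    """Hypothetical dataloader sketch (names and file format are
    illustrative, not this repository's actual interface)."""

    def __init__(self, data_dir: str):
        # Assume one plain-text scan file per LiDAR frame,
        # with one "x y z" triple per line.
        self.scan_files = sorted(Path(data_dir).glob("*.txt"))

    def __len__(self) -> int:
        return len(self.scan_files)

    def __getitem__(self, idx: int) -> List[Point]:
        # Parse one scan file into a list of (x, y, z) points.
        points = []
        for line in self.scan_files[idx].read_text().splitlines():
            x, y, z = map(float, line.split())
            points.append((x, y, z))
        return points
```

A sequential `__len__`/`__getitem__` interface like this lets the pipeline iterate over scans in acquisition order, which is what a loop-closure search needs.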
- The pipeline saves the computed loop-closure indices to a file in the `results` folder under the dataset root path.
- If provided with a ground-truth closure file, the pipeline additionally generates a precision-recall table (see the dataloaders for how to provide the ground-truth closures).
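One row of such a table can be derived from the predicted and ground-truth closure pairs roughly as follows. This is a minimal sketch; the pipeline's actual table format and matching tolerance may differ.

```python
from typing import Iterable, Tuple

Pair = Tuple[int, int]


def precision_recall(predicted: Iterable[Pair],
                     ground_truth: Iterable[Pair]) -> Tuple[float, float]:
    """Precision and recall of predicted loop-closure pairs (i, j)
    against ground-truth pairs, using exact pair matching."""
    predicted, ground_truth = set(predicted), set(ground_truth)
    tp = len(predicted & ground_truth)          # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```

Sweeping a detection threshold and recording one (precision, recall) point per threshold yields the full precision-recall curve.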
- Scan Context is a global descriptor for LiDAR point clouds, proposed in the papers cited below; the details are summarized in this video.
@article{gskim-2021-tro,
  author  = {Giseop Kim and Sunwook Choi and Ayoung Kim},
  title   = {Scan Context++: Structural Place Recognition Robust to Rotation and Lateral Variations in Urban Environments},
  journal = {IEEE Transactions on Robotics},
  year    = {2021},
  note    = {Accepted. To appear.},
}
@inproceedings{gkim-2018-iros,
  author    = {Kim, Giseop and Kim, Ayoung},
  title     = {Scan Context: Egocentric Spatial Descriptor for Place Recognition within {3D} Point Cloud Map},
  booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year      = {2018},
  month     = {Oct.},
  address   = {Madrid},
}
- This point cloud descriptor is used for place-retrieval problems such as place recognition and long-term localization.
- Scan Context is a global descriptor for LiDAR point clouds, designed especially for the sparse, noisy point clouds acquired in outdoor environments.
- It encodes the egocentric visible information of a scan.
- A user can vary the resolution of a Scan Context. Below is an example of Scan Contexts at various resolutions for the same point cloud.
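The idea behind the descriptor can be sketched in a few lines: each point falls into a (ring, sector) polar bin around the sensor, and each bin stores the maximum point height. The sketch below is a simplified pure-Python illustration; the default bin counts (20 rings, 60 sectors, 80 m range) follow the paper's common settings, and the distance function follows the paper's rotation-robust column-shift idea, not this repository's exact implementation.

```python
import math


def scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
    """Polar-grid descriptor: each (ring, sector) bin keeps the maximum
    height (z) among the points that fall into it."""
    desc = [[0.0] * num_sectors for _ in range(num_rings)]
    for x, y, z in points:
        r = math.hypot(x, y)                       # planar range
        if r >= max_range:
            continue
        ring = min(int(r / max_range * num_rings), num_rings - 1)
        theta = (math.atan2(y, x) + math.pi) % (2.0 * math.pi)
        sector = min(int(theta / (2.0 * math.pi) * num_sectors),
                     num_sectors - 1)
        desc[ring][sector] = max(desc[ring][sector], z)
    return desc


def sc_distance(a, b):
    """Rotation-robust distance: average column-wise cosine distance,
    minimized over all circular column (yaw) shifts of b."""
    num_rings, num_sectors = len(a), len(a[0])

    def column(m, j):
        return [m[i][j] for i in range(num_rings)]

    def cos_dist(u, v):
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(x * x for x in v))
        if nu == 0.0 and nv == 0.0:
            return 0.0        # both columns empty: treat as identical
        if nu == 0.0 or nv == 0.0:
            return 1.0        # one empty, one not: maximally different
        dot = sum(x * y for x, y in zip(u, v))
        return 1.0 - dot / (nu * nv)

    best = float("inf")
    for shift in range(num_sectors):
        d = sum(cos_dist(column(a, j), column(b, (j + shift) % num_sectors))
                for j in range(num_sectors)) / num_sectors
        best = min(best, d)
    return best
```

Because column index corresponds to yaw angle, a pure rotation of the scene only shifts the descriptor's columns, which is why minimizing over column shifts makes the comparison rotation-robust.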
This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport of Korea (19CTAP-C142170-02), and [High-Definition Map Based Precise Vehicle Localization Using Cameras and LIDARs] project funded by NAVER LABS Corporation.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
- All code in this repository is copyrighted by KAIST and Naver Labs and published under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License. You must attribute the work in the manner specified by the author. You may not use the work for commercial purposes, and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license.