Real-Time Dense Depth Estimation Using Semantically-Guided LIDAR Data Propagation and Motion Stereo
MATLAB implementation of our RA-L/IROS 2019 paper.
Atsuki Hirata, Ryoichi Ishikawa, Menandro Roxas, Takeshi Oishi. "Real-Time Dense Depth Estimation Using Semantically-Guided LIDAR Data Propagation and Motion Stereo", IEEE Robotics and Automation Letters, Volume 4, Issue 4, 2019.
Requirements
- Tested on MATLAB R2019a
Dataset
Our dataset can be accessed from: dataset. It contains 56 image pairs with ground-truth depth, simulated LIDAR data, semantic segmentation results from ICNet, motion-stereo depth, and FlowNet2 optical flow for reference. We also provide manually segmented images.
Folder Structure:
- calib - intrinsic camera matrix K and the transformation R, t from the first to the second image frame, stored as OpenCV-format .xml files (a reading sketch follows the Miscellaneous list below)
- image_02/data - first RGB image frame
- image_03/data - second RGB image frame
- motionstereo - depth from motion stereo, stored as 16-bit .png (see the loading sketch after this list)
- proj_depth/groundtruth - ground-truth depth for both image_02 and image_03
- proj_depth/velodyne_raw - simulated LIDAR data for image_02
- semantic_icnet - semantic segmentation from ICNet
- semantic_manual - manual semantic segmentation (subset only)
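As a rough illustration of how one sample might be loaded in MATLAB: the frame name below is hypothetical, and the uint16/256 depth scaling is an assumption following the KITTI depth-completion convention, not something this repository guarantees.

```matlab
% Minimal loading sketch (assumed file names; depth scaling assumed to
% follow the KITTI depth-completion convention: meters = uint16 / 256,
% with 0 marking pixels that carry no measurement).
frame = '0000000005.png';   % hypothetical frame name

I1 = imread(fullfile('image_02', 'data', frame));   % first RGB frame
I2 = imread(fullfile('image_03', 'data', frame));   % second RGB frame

raw = double(imread(fullfile('proj_depth', 'velodyne_raw', 'image_02', frame)));
lidar = raw / 256;          % sparse LIDAR depth in meters (assumed scale)
lidar(raw == 0) = NaN;      % mark empty pixels as invalid

gt = double(imread(fullfile('proj_depth', 'groundtruth', 'image_02', frame)));
gtDepth = gt / 256;
gtDepth(gt == 0) = NaN;

ms = double(imread(fullfile('motionstereo', frame)));
msDepth = ms / 256;         % motion-stereo depth, same assumed encoding
```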
Miscellaneous:
- flownet2, flownet2_CSS - optical flow (for reference only)
- result_ours - our results for reference
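The calibration files use OpenCV's FileStorage XML layout, so a small helper along these lines could pull a named matrix out of them. The tag layout below is the standard opencv-matrix schema, assumed rather than verified against these exact files.

```matlab
function M = readOpenCVMatrix(xmlFile, name)
% Read a named opencv-matrix node (e.g. 'K', 'R', 't') from an
% OpenCV FileStorage .xml file. Assumes the standard schema with
% <rows>, <cols>, and a whitespace-separated <data> block.
doc  = xmlread(xmlFile);
node = doc.getElementsByTagName(name).item(0);
rows = str2double(node.getElementsByTagName('rows').item(0).getTextContent());
cols = str2double(node.getElementsByTagName('cols').item(0).getTextContent());
vals = sscanf(char(node.getElementsByTagName('data').item(0).getTextContent()), '%f');
M    = reshape(vals, cols, rows).';   % OpenCV stores data row-major
end
```

Usage would look like `K = readOpenCVMatrix(fullfile('calib', 'calib.xml'), 'K')`, where the file name is an assumption; check the actual .xml names in the calib folder.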
Sample images (shown on the repository page): RGB, semantic segmentation, motion stereo, LIDAR, ground-truth depth, result after propagation, and result after optimization.
Building Instructions
The sample program is upsampling_main.m; run it in MATLAB from the repository root.
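A minimal way to try it, assuming the dataset sits next to the code (the folder layout is an assumption; adjust paths as needed):

```matlab
% Run the sample from the repository root.
addpath(genpath(pwd));    % make all repository functions visible
run('upsampling_main.m'); % runs the sample depth-upsampling demo
```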
Sample Result on KITTI
To Do
- Release the C++/CUDA code.
License
This project is licensed under the MIT License.
Author
Atsuki Hirata and Menandro Roxas, 3D Vision Laboratory (Oishi Laboratory), Institute of Industrial Science, The University of Tokyo