
semidense-lines's Introduction

Incremental 3D Line Segment Extraction for Surface Reconstruction from Semi-dense SLAM

This repo presents the code for my Master's thesis. Some of the work is also presented in the following paper (accepted at ICPR 2018):

  • Incremental 3D Line Segment Extraction from Semi-dense SLAM, Shida He, Xuebin Qin, Zichen Zhang, Martin Jagersand (arxiv)

Our method simplifies the per-keyframe point cloud produced by a semi-dense SLAM system into 3D line segments, and then reconstructs a surface from the extracted segments. In this way, the surface of the scene viewed by the camera can be reconstructed while the camera is still exploring. Our method produces accurate 3D line segments with few outliers, which makes the reconstructed surface more structurally meaningful than surfaces reconstructed from feature points or randomly selected points. See my thesis page for more details.
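A conceptual sketch of this pipeline, using illustrative types and function names only (not the actual classes in this repository):

#include <vector>

struct Point3D   { float x, y, z; };
struct Segment3D { Point3D a, b; };
struct Mesh      { std::vector<Point3D> vertices; std::vector<int> faces; };

// 1. The semi-dense point cloud of each keyframe is simplified into a small set of 3D line segments.
std::vector<Segment3D> ExtractLineSegments(const std::vector<Point3D>& semiDensePoints);

// 2. Newly extracted segments are clustered incrementally with the segments already in the model,
//    merging duplicates observed from different keyframes and discarding outliers.
void ClusterIncrementally(std::vector<Segment3D>& model, const std::vector<Segment3D>& fresh);

// 3. The segment endpoints serve as vertices for surface reconstruction, so the mesh of the viewed
//    scene can be updated while the camera is still exploring.
Mesh ReconstructSurface(const std::vector<Segment3D>& model);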

This version of the software is based on ORB-SLAM2. We use the semi-dense version of ORB-SLAM, which is implemented based on the technique described in this paper. We release this software under the GPLv3 license. See Dependencies.md for other dependencies.

If you use this software in an academic work, please cite:

@inproceedings{he18icpr,
  title={Incremental 3D Line Segment Extraction from Semi-dense SLAM},
  author={Shida He and Xuebin Qin and Zichen Zhang and Martin Jagersand},
  booktitle={International Conference on Pattern Recognition, {ICPR} 2018},
  year={2018}
}

If you want a fast real-time surface reconstruction system that works with real robots, check out Jun's repo: https://github.com/atlas-jj/ORB-SLAM-free-space-carving (used for long-distance teleoperation, very cool)

1. Prerequisites

The software is tested in 64-bit Ubuntu 14.04.

C++11 or C++0x Compiler

We use the new thread and chrono functionalities of C++11.

Pangolin

We use Pangolin for visualization and the user interface. Download and install instructions can be found at: https://github.com/stevenlovegrove/Pangolin.

OpenCV

We use OpenCV to manipulate images and features. Download and install instructions can be found at: http://opencv.org. At least version 2.4.3 is required. Tested with OpenCV 3.1.0.

Eigen3

Required by g2o (see below). Download and install instructions can be found at: http://eigen.tuxfamily.org. At least version 3.1.0 is required.

BLAS and LAPACK

BLAS and LAPACK libraries are required by g2o (see below). On Ubuntu:

sudo apt-get install libblas-dev liblapack-dev liblapacke-dev

DBoW2 and g2o (Included in Thirdparty folder)

We use modified versions of the DBoW2 library to perform place recognition and of the g2o library to perform non-linear optimizations. Both modified libraries (BSD licensed) are included in the Thirdparty folder.

ROS (optional)

We provide some examples to process the live input of a monocular, stereo or RGB-D camera using ROS. Building these examples is optional. If you want to use ROS, version Hydro or newer is needed.

Additional

Apart from the dependencies listed above for ORB-SLAM2, the following additional libraries are needed.

CGAL:

sudo apt-get install libcgal-dev

Boost:

sudo apt-get install libboost-all-dev

EdgeDrawing (Included in Thirdparty folder):

We use the EdgeDrawing edge detector in our system. The library is available as binary files; the version for 64-bit Linux is included in the Thirdparty folder.

EDLines and Line3D++ (Included in Thirdparty folder):

We compare our system against methods using EDLines and Line3D++. EDLines is available as binary files, and the version for 64-bit Linux is included in the Thirdparty folder. Line3D++ is also included in the Thirdparty folder and is compiled when running the build.sh script.

2. Building

Similar to ORB-SLAM2, the build.sh script builds the Thirdparty libraries and the semi-dense ORB-SLAM2 with 3D line segment extraction and surface reconstruction. Please make sure you have installed all required dependencies (see section 1).

Execute:

chmod +x build.sh
./build.sh

Sometimes the build fails with errors related to the C++11 standard; simply running the script again should fix the issue.

3. Examples

The system can be run in the same way as ORB-SLAM2. Here are examples for running on the EuRoC and TUM RGB-D datasets (monocular). It can also be run using ROS. See ORB-SLAM2 for more details.

EuRoC Dataset

  1. Download a sequence (ASL format) from http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets

  2. Execute the first command below for V1 and V2 sequences, or the second command for MH sequences. Change PATH_TO_SEQUENCE and SEQUENCE according to the sequence you want to run.

./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.bin Examples/Monocular/EuRoC.yaml PATH_TO_SEQUENCE/mav0/cam0/data Examples/Monocular/EuRoC_TimeStamps/SEQUENCE.txt 
./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.bin Examples/Monocular/EuRoC.yaml PATH_TO_SEQUENCE/cam0/data Examples/Monocular/EuRoC_TimeStamps/SEQUENCE.txt 

TUM Dataset

  1. Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.

  2. Execute the following command. Change TUMX.yaml to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change PATH_TO_SEQUENCE_FOLDER to the uncompressed sequence folder.

./Examples/Monocular/mono_tum Vocabulary/ORBvoc.bin Examples/Monocular/TUMX.yaml PATH_TO_SEQUENCE_FOLDER

4. Results

In the default configuration, the sparse SLAM system runs once and all other processing (semi-dense mapping, 3D line segment extraction and surface reconstruction) happens offline afterwards. Note that the offline processing can take a long time, during which the windows may become unresponsive. Please wait until the processing is finished so that the results can be saved. Online processing can be enabled by uncommenting the OnlineLoop macro in ProbabilityMapping.cc. However, this is not recommended, since real-time performance is not yet guaranteed.
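For reference, a sketch of what enabling the macro looks like near the top of ProbabilityMapping.cc (the exact placement and surrounding code in the file may differ):

//#define OnlineLoop   // default: commented out, so all processing runs offline after tracking
#define OnlineLoop     // uncommented: mapping, line extraction and reconstruction run online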

The results are saved in the results_line_segments directory, under a subdirectory named after the starting time.

In each result directory:

  • info.txt reports the time usage and the parameters used.
  • model.obj is the reconstructed mesh of the scene.
  • semi_pointcloud.obj is the raw semi-dense pointcloud from semi-dense mapping.
  • line_segments.obj contains the extracted 3D line segments before clustering.
  • line_segments_clustered_incr.obj contains the clustered line segments.
  • line_segments_edlines.obj contains the line segments extracted using decoupled line segment fitting.
  • Files with Line3D++ in their names are created by Line3D++.
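According to the format reported in the issues below, these segment files use plain Wavefront OBJ elements: v x y z lines for segment endpoints and l a b lines connecting two 1-based vertex indices. A minimal, hypothetical standalone reader (not part of this repository), assuming exactly that layout:

#include <array>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: read_segments <file.obj>\n"; return 1; }
    std::ifstream in(argv[1]);
    std::vector<std::array<double, 3>> vertices;   // segment endpoints ("v x y z")
    std::vector<std::pair<int, int>> segments;     // index pairs ("l a b", 1-based in the file)

    std::string line;
    while (std::getline(in, line)) {
        std::istringstream iss(line);
        std::string tag;
        iss >> tag;
        if (tag == "v") {
            std::array<double, 3> p{};
            iss >> p[0] >> p[1] >> p[2];
            vertices.push_back(p);
        } else if (tag == "l") {
            int a = 0, b = 0;
            iss >> a >> b;
            segments.emplace_back(a - 1, b - 1);   // convert to 0-based indices
        }
    }
    std::cout << vertices.size() << " endpoints, " << segments.size() << " segments\n";
    return 0;
}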

Reproduction

To reproduce the results shown in our ICPR 2018 paper (arxiv), please run the following commands. Change PATH_TO_SEQUENCE_* to the directory of the corresponding sequence. The figures are screenshots of models rendered in MeshLab.

  1. Fig 4: This figure shows results from 4 different sequences: EuRoC MAV Vicon Room 101, EuRoC MAV Machine Hall 01, TUM RGBD fr3-large-cabinet, TUM RGBD fr1-room.
./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.bin Examples/Monocular/EuRoC.yaml PATH_TO_SEQUENCE_V101/mav0/cam0/data Examples/Monocular/EuRoC_TimeStamps/V101.txt 
./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.bin Examples/Monocular/EuRoC.yaml PATH_TO_SEQUENCE_MH01/cam0/data Examples/Monocular/EuRoC_TimeStamps/MH01.txt 
./Examples/Monocular/mono_tum Vocabulary/ORBvoc.bin Examples/Monocular/TUM3.yaml PATH_TO_SEQUENCE_fr3_large_cabinet
./Examples/Monocular/mono_tum Vocabulary/ORBvoc.bin Examples/Monocular/TUM1.yaml PATH_TO_SEQUENCE_fr1_room
  2. Fig 5: This figure shows results from the sequence TUM RGB-D fr3-structure-texture-near.
./Examples/Monocular/mono_tum Vocabulary/ORBvoc.bin Examples/Monocular/TUM3.yaml PATH_TO_SEQUENCE_fr3_structure_texture_near
  3. Fig 6: This figure shows reconstructed surfaces from the sequence EuRoC MAV Vicon Room 101. In the default configuration, the surface model built from line segment endpoints is reconstructed. To reconstruct the surface model from map points instead, change all mpModeler->AddLineSegmentKeyFrameEntry(kf) calls in ProbabilityMapping.cc to mpModeler->AddKeyFrameEntry(kf) and build again (see the snippet after this list); rerunning the command will then reconstruct the surface using only ORB-SLAM map points.
./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.bin Examples/Monocular/EuRoC.yaml PATH_TO_SEQUENCE_V101/mav0/cam0/data Examples/Monocular/EuRoC_TimeStamps/V101.txt 
  4. Table 1: This table shows results from sequences: EuRoC MAV Vicon Room 101 and EuRoC MAV Vicon Room 201. In order to calculate the distance, please use the MATLAB scripts in the eval folder.
./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.bin Examples/Monocular/EuRoC.yaml PATH_TO_SEQUENCE_V101/mav0/cam0/data Examples/Monocular/EuRoC_TimeStamps/V101.txt 
./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.bin Examples/Monocular/EuRoC.yaml PATH_TO_SEQUENCE_V201/mav0/cam0/data Examples/Monocular/EuRoC_TimeStamps/V201.txt 
  5. Table 2: This table presents the number of vertices in the results from the sequences EuRoC MAV Vicon Room 101, EuRoC MAV Machine Hall 01, TUM RGB-D fr3-large-cabinet and TUM RGB-D fr1-room. The number of vertices in the .obj files can be checked with MeshLab.
./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.bin Examples/Monocular/EuRoC.yaml PATH_TO_SEQUENCE_V101/mav0/cam0/data Examples/Monocular/EuRoC_TimeStamps/V101.txt 
./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.bin Examples/Monocular/EuRoC.yaml PATH_TO_SEQUENCE_MH01/cam0/data Examples/Monocular/EuRoC_TimeStamps/MH01.txt 
./Examples/Monocular/mono_tum Vocabulary/ORBvoc.bin Examples/Monocular/TUM3.yaml PATH_TO_SEQUENCE_fr3_large_cabinet
./Examples/Monocular/mono_tum Vocabulary/ORBvoc.bin Examples/Monocular/TUM1.yaml PATH_TO_SEQUENCE_fr1_room
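The code change mentioned for Fig 6 (item 3) amounts to swapping a single call; an illustration of the edit in ProbabilityMapping.cc (surrounding code omitted and possibly different in the actual file):

mpModeler->AddLineSegmentKeyFrameEntry(kf);   // default: surface from 3D line segment endpoints
// becomes
mpModeler->AddKeyFrameEntry(kf);              // alternative: surface from ORB-SLAM map points only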

semidense-lines's People

Contributors

katrinleinweber, shidahe


semidense-lines's Issues

Compilation on MacOS

Hi @shidahe,
I tried on macOS and I obtained this error:

semidense-lines/Thirdparty/Line3Dpp/line3D.h:23:10: fatal error:
'configLIBS.h' file not found
#include "configLIBS.h"
^~~~~~~~~~~~~~
1 error generated.
make[2]: *** [CMakeFiles/ORB_SLAM2.dir/src/System.cc.o] Error 1
make[1]: *** [CMakeFiles/ORB_SLAM2.dir/all] Error 2
make: *** [all] Error 2

min memory needed ?

When testing the first command of Figure 4, I get a core dump caused by a memory allocation failure.
What is the minimum amount of memory needed? (I tried it on 32-bit Ubuntu 14.04 LTS with 14 GB of RAM.)
./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.bin Examples/Monocular/EuRoC.yaml mav0/cam0/data Examples/Monocular/EuRoC_TimeStamps/V101.txt

ORB-SLAM2 Copyright (C) 2014-2016 Raul Mur-Artal, University of Zaragoza.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.

Input sensor was set to: Monocular

Loading ORB Vocabulary. This could take a while...
Vocabulary loaded!

Camera Parameters:

  • fx: 458.654
  • fy: 457.296
  • cx: 367.215
  • cy: 248.375
  • k1: -0.283408
  • k2: 0.0739591
  • p1: 0.00019359
  • p2: 1.76187e-05
  • fps: 20
  • color order: RGB (ignored if grayscale)

ORB Extractor Parameters:

  • Number of Features: 1000
  • Scale Levels: 8
  • Scale Factor: 1.2
  • Initial Fast Threshold: 20
  • Minimum Fast Threshold: 7

Start processing sequence ...
Images in the sequence: 2912

Offline semi-dense mapping and line segment extraction
updating model
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
New Map created with 96 points
Wrong initialization, reseting...
System Reseting
Reseting Local Mapper... done
Reseting Loop Closing... done
Reseting Semi Dense Mapping...updating model
done
Reseting Database... done
New Map created with 90 points
Wrong initialization, reseting...
System Reseting
Reseting Local Mapper... done
Reseting Loop Closing... done
Reseting Semi Dense Mapping...updating model
done
Reseting Database... done
New Map created with 97 points
Wrong initialization, reseting...
System Reseting
Reseting Local Mapper... done
Reseting Loop Closing... done
Reseting Semi Dense Mapping...updating model
done
Reseting Database... done
New Map created with 91 points
Wrong initialization, reseting...
System Reseting
Reseting Local Mapper... done
Reseting Loop Closing... done
Reseting Semi Dense Mapping...updating model
done
Reseting Database... done
New Map created with 109 points
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)

Compilation error

I get many compilation errors.

  • Line3D doesn't compile, but I guess it's just for comparison purposes, so I can skip it
  • I get a compilation error (my version of Eigen is 3.3.4):

/home/fab/Prog/semidense-lines/src/Optimizer.cc:1255:1: required from here
/usr/include/eigen3/Eigen/src/Core/AssignEvaluator.h:834:3: error: static assertion failed: YOU_MIXED_DIFFERENT_NUMERIC_TYPES__YOU_NEED_TO_USE_THE_CAST_METHOD_OF_MATRIXBASE_TO_CAST_NUMERIC_TYPES_EXPLICITLY
EIGEN_CHECK_BINARY_COMPATIBILIY(Func,typename ActualDstTypeCleaned::Scalar,typename Src::Scalar);
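For context, this Eigen assertion fires when expressions with different scalar types (e.g. float and double) are mixed in an assignment; a generic illustration of the explicit cast it asks for (not necessarily the fix applied in Optimizer.cc):

#include <Eigen/Core>

int main() {
    Eigen::Matrix3f A = Eigen::Matrix3f::Identity();   // single-precision matrix
    Eigen::Matrix3d B;                                  // double-precision matrix
    // B = A;                 // mixing float and double triggers the static assertion above
    B = A.cast<double>();     // the explicit cast Eigen asks for compiles cleanly
    return 0;
}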

missing image in data base

It looks like the image 1403715273262142976.png is not present in the archive:
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/machine_hall/MH_01_easy/MH_01_easy.zi

./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.bin Examples/Monocular/EuRoC.yaml   mav0/cam0/data Examples/Monocular/EuRoC_TimeStamps/V101.txt 

ORB-SLAM2 Copyright (C) 2014-2016 Raul Mur-Artal, University of Zaragoza.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.

Input sensor was set to: Monocular

Loading ORB Vocabulary. This could take a while...
Vocabulary loaded!


Camera Parameters: 
- fx: 458.654
- fy: 457.296
- cx: 367.215
- cy: 248.375
- k1: -0.283408
- k2: 0.0739591
- p1: 0.00019359
- p2: 1.76187e-05
- fps: 20
- color order: RGB (ignored if grayscale)

ORB Extractor Parameters: 
- Number of Features: 1000
- Scale Levels: 8
- Scale Factor: 1.2
- Initial Fast Threshold: 20
- Minimum Fast Threshold: 7

-------
Start processing sequence ...
Images in the sequence: 2912


Failed to load image at: mav0/cam0/data/1403715273262142976.png

run euroc dataset in lsd-slam

Hi,

When I tried to repeat your experiment by running the EuRoC room sequences V101 and V201 in LSD-SLAM to get the corresponding point clouds, tracking was lost and I couldn't get the same points as yours. I guess the camera calibration is the problem.

Below is my camera calibration for the EuRoC room dataset:
0.608493 0.950279 0.505318 0.531746 0 752 480 none 752 480

The point cloud I get for EuRoC room V101 (I cannot recognize what it is...): [screenshot]

So, do you have any idea? Did you change any code in LSD-SLAM, or could you share your calibration file?
Thank you very much!

error on figure 4 line 2

Still on Ubuntu, I get an error when trying to reproduce line 2 of Figure 4:

./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.txt Examples/Monocular/EuRoC.yaml ../DataS8/MH/cam0/data/ Examples/Monocular/EuRoC_TimeStamps/MH01.txt

ORB-SLAM2 Copyright (C) 2014-2016 Raul Mur-Artal, University of Zaragoza.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.

Input sensor was set to: Monocular

Loading ORB Vocabulary. This could take a while...
terminate called after throwing an instance of 'std::length_error'
what(): vector::_M_default_append
Aborted (core dumped)

error on figure 4 line 3

The execution finished but the result looks empty (here is the info data, and all .obj files are empty).
Probably due to the 32-bit version?

Edge Drawing: 0ms on 0 KFs, -nanms/KF
EDlines: 0ms on 0 KFs, -nanms/KF
Line3D: 4.71268ms on 0 KFs, infms/KF
Decoupled: 0.033274ms on 7 KFs, 0.00475343ms/KF
Fitting: 0ms on 0 KFs, -nanms/KF
Merging: 0ms on 0 KFs, -nanms/KF

Parameters:
MIN_LINE_LENGTH = 10
MAX_LINE_LENGTH = 1000
INIT_DEPTH_COUNT = 3
MIN_SEGMENT_ANGLE = 30
E1 = 1
E2 = 1.5
lambda_a = 10
lambda_d = 0.02
sigma_limit = 0.02

Modeling: 0.193296ms on 7 KFs, 0.0276137ms/KF

Problem with dependencies

Hello, I had one problem when installing your system, but I solved it:

I had to edit the file semidense-line-master/Thirdparty/g2o/g2o/solvers/linear_solver_eigen.h
Before:
typedef Eigen::PermutationMatrix<Eigen::Dynamic, Eigen::Dynamic, SparseMatrix::Index> PermutationMatrix;
After:
typedef Eigen::PermutationMatrix<Eigen::Dynamic, Eigen::Dynamic, SparseMatrix::StorageIndex> PermutationMatrix;

linking issue on ubuntu 14

I get an error when running the build script build.sh:

[  2%] Linking CXX shared library ../lib/libORB_SLAM2.so
/usr/bin/ld: the i386:x86-64 architecture of input file '../Thirdparty/EDLines/EDLinesLib.a(ED.o)' is incompatible with i386 output

How can I get the figure 4c,d,e,f?

I ran the MH01 sequence, but the results directory results_line_segments only contains a semi_pointcloud.obj and an empty directory L3D++_data.

use RGBD

I wonder if this project can be used with RGB-D input?

Results format

Hi,

I'm trying to use your algorithm to extract 3D line segments from my monocular dataset, but I don't understand the format you used in your .obj segment files (v x y z and l a b, for example in line_segments_clustered_incr.obj).
Since it looks like a point cloud format, when I open it with MeshLab it just shows me a point cloud and no segments...

Do you know how I could display the segments instead of the points in MeshLab, and how to read the v x y z / l a b information not as point coordinates but as segment information?

Thanks.

The imported target "CGAL::CGAL_Qt5" references the file "/usr/lib/x86_64-linux-gnu/libCGAL_Qt5.so.11.0.1" but this file does not exist.

The imported target "CGAL::CGAL_Qt5" references the file

 "/usr/lib/x86_64-linux-gnu/libCGAL_Qt5.so.11.0.1"

but this file does not exist. Possible reasons include:

  • The file was deleted, renamed, or moved to another location.

  • An install or uninstall procedure did not complete successfully.

  • The installation package was faulty and contained

    "/usr/lib/x86_64-linux-gnu/cmake/CGAL/CGALExports.cmake"

but not all the files it references.

Call Stack (most recent call first):
/usr/lib/x86_64-linux-gnu/cmake/CGAL/CGALConfig.cmake:12 (include)
CMakeLists.txt:107 (find_package)

Solution proposal:
This is a bug in the default CGAL CMake scripts in Ubuntu.
Please add libcgal-qt5-dev to the dependencies installed through apt-get:
sudo apt-get install libcgal-qt5-dev
