
orb_slam2's Introduction

ORB-SLAM2

Authors: Raul Mur-Artal, Juan D. Tardos, J. M. M. Montiel and Dorian Galvez-Lopez (DBoW2)

13 Jan 2017: OpenCV 3 and Eigen 3.3 are now supported.

22 Dec 2016: Added AR demo (see section 7).

ORB-SLAM2 is a real-time SLAM library for Monocular, Stereo and RGB-D cameras that computes the camera trajectory and a sparse 3D reconstruction (in the stereo and RGB-D case with true scale). It is able to detect loops and relocalize the camera in real time. We provide examples to run the SLAM system in the KITTI dataset as stereo or monocular, in the TUM dataset as RGB-D or monocular, and in the EuRoC dataset as stereo or monocular. We also provide a ROS node to process live monocular, stereo or RGB-D streams. The library can be compiled without ROS. ORB-SLAM2 provides a GUI to change between a SLAM Mode and a Localization Mode; see section 9 of this document.


Related Publications:

[Monocular] Raúl Mur-Artal, J. M. M. Montiel and Juan D. Tardós. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015. (2015 IEEE Transactions on Robotics Best Paper Award). PDF.

[Stereo and RGB-D] Raúl Mur-Artal and Juan D. Tardós. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras. IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, 2017. PDF.

[DBoW2 Place Recognizer] Dorian Gálvez-López and Juan D. Tardós. Bags of Binary Words for Fast Place Recognition in Image Sequences. IEEE Transactions on Robotics, vol. 28, no. 5, pp. 1188-1197, 2012. PDF

1. License

ORB-SLAM2 is released under a GPLv3 license. For a list of all code/library dependencies (and associated licenses), please see Dependencies.md.

For a closed-source version of ORB-SLAM2 for commercial purposes, please contact the authors: orbslam (at) unizar (dot) es.

If you use ORB-SLAM2 (Monocular) in an academic work, please cite:

@article{murTRO2015,
  title={{ORB-SLAM}: a Versatile and Accurate Monocular {SLAM} System},
  author={Mur-Artal, Ra\'ul and Montiel, J. M. M. and Tard\'os, Juan D.},
  journal={IEEE Transactions on Robotics},
  volume={31},
  number={5},
  pages={1147--1163},
  doi = {10.1109/TRO.2015.2463671},
  year={2015}
 }

If you use ORB-SLAM2 (Stereo or RGB-D) in an academic work, please cite:

@article{murORB2,
  title={{ORB-SLAM2}: an Open-Source {SLAM} System for Monocular, Stereo and {RGB-D} Cameras},
  author={Mur-Artal, Ra\'ul and Tard\'os, Juan D.},
  journal={IEEE Transactions on Robotics},
  volume={33},
  number={5},
  pages={1255--1262},
  doi = {10.1109/TRO.2017.2705103},
  year={2017}
 }

2. Prerequisites

We have tested the library on Ubuntu 12.04, 14.04 and 16.04, but it should be straightforward to compile on other platforms. A powerful computer (e.g. an i7) will ensure real-time performance and provide more stable and accurate results.

C++11 or C++0x Compiler

We use the new thread and chrono functionalities of C++11.

Pangolin

We use Pangolin for visualization and user interface. Download and install instructions can be found at: https://github.com/stevenlovegrove/Pangolin.

OpenCV

We use OpenCV to manipulate images and features. Download and install instructions can be found at: http://opencv.org. At least version 2.4.3 is required. Tested with OpenCV 2.4.11 and OpenCV 3.2.

Eigen3

Required by g2o (see below). Download and install instructions can be found at: http://eigen.tuxfamily.org. At least version 3.1.0 is required.

DBoW2 and g2o (Included in Thirdparty folder)

We use modified versions of the DBoW2 library to perform place recognition and of the g2o library to perform non-linear optimizations. Both modified libraries (BSD licensed) are included in the Thirdparty folder.

ROS (optional)

We provide some examples to process the live input of a monocular, stereo or RGB-D camera using ROS. Building these examples is optional. If you want to use ROS, version Hydro or newer is needed.

3. Building ORB-SLAM2 library and examples

Clone the repository:

git clone https://github.com/raulmur/ORB_SLAM2.git ORB_SLAM2

We provide a script build.sh to build the Thirdparty libraries and ORB-SLAM2. Please make sure you have installed all required dependencies (see section 2). Execute:

cd ORB_SLAM2
chmod +x build.sh
./build.sh

This will create libORB_SLAM2.so in the lib folder and the executables mono_tum, mono_kitti, rgbd_tum, stereo_kitti, mono_euroc and stereo_euroc in the Examples folder.

4. Monocular Examples

TUM Dataset

  1. Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.

  2. Execute the following command. Change TUMX.yaml to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change PATH_TO_SEQUENCE_FOLDER to the uncompressed sequence folder.

./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUMX.yaml PATH_TO_SEQUENCE_FOLDER

KITTI Dataset

  1. Download the dataset (grayscale images) from http://www.cvlibs.net/datasets/kitti/eval_odometry.php

  2. Execute the following command. Change KITTIX.yaml to KITTI00-02.yaml, KITTI03.yaml or KITTI04-12.yaml for sequences 0 to 2, 3, and 4 to 12 respectively. Change PATH_TO_DATASET_FOLDER to the uncompressed dataset folder. Change SEQUENCE_NUMBER to 00, 01, 02, ..., 11.

./Examples/Monocular/mono_kitti Vocabulary/ORBvoc.txt Examples/Monocular/KITTIX.yaml PATH_TO_DATASET_FOLDER/dataset/sequences/SEQUENCE_NUMBER

EuRoC Dataset

  1. Download a sequence (ASL format) from http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets

  2. Execute the first command below for V1 and V2 sequences, or the second for MH sequences. Change PATH_TO_SEQUENCE_FOLDER and SEQUENCE according to the sequence you want to run.

./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.txt Examples/Monocular/EuRoC.yaml PATH_TO_SEQUENCE_FOLDER/mav0/cam0/data Examples/Monocular/EuRoC_TimeStamps/SEQUENCE.txt 
./Examples/Monocular/mono_euroc Vocabulary/ORBvoc.txt Examples/Monocular/EuRoC.yaml PATH_TO_SEQUENCE/cam0/data Examples/Monocular/EuRoC_TimeStamps/SEQUENCE.txt 

5. Stereo Examples

KITTI Dataset

  1. Download the dataset (grayscale images) from http://www.cvlibs.net/datasets/kitti/eval_odometry.php

  2. Execute the following command. Change KITTIX.yaml to KITTI00-02.yaml, KITTI03.yaml or KITTI04-12.yaml for sequences 0 to 2, 3, and 4 to 12 respectively. Change PATH_TO_DATASET_FOLDER to the uncompressed dataset folder. Change SEQUENCE_NUMBER to 00, 01, 02, ..., 11.

./Examples/Stereo/stereo_kitti Vocabulary/ORBvoc.txt Examples/Stereo/KITTIX.yaml PATH_TO_DATASET_FOLDER/dataset/sequences/SEQUENCE_NUMBER

EuRoC Dataset

  1. Download a sequence (ASL format) from http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets

  2. Execute the first command below for V1 and V2 sequences, or the second for MH sequences. Change PATH_TO_SEQUENCE_FOLDER and SEQUENCE according to the sequence you want to run.

./Examples/Stereo/stereo_euroc Vocabulary/ORBvoc.txt Examples/Stereo/EuRoC.yaml PATH_TO_SEQUENCE/mav0/cam0/data PATH_TO_SEQUENCE/mav0/cam1/data Examples/Stereo/EuRoC_TimeStamps/SEQUENCE.txt
./Examples/Stereo/stereo_euroc Vocabulary/ORBvoc.txt Examples/Stereo/EuRoC.yaml PATH_TO_SEQUENCE/cam0/data PATH_TO_SEQUENCE/cam1/data Examples/Stereo/EuRoC_TimeStamps/SEQUENCE.txt

6. RGB-D Example

TUM Dataset

  1. Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.

  2. Associate RGB images and depth images using the python script associate.py. We already provide associations for some of the sequences in Examples/RGB-D/associations/. You can generate your own associations file executing:

python associate.py PATH_TO_SEQUENCE/rgb.txt PATH_TO_SEQUENCE/depth.txt > associations.txt
  3. Execute the following command. Change TUMX.yaml to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change PATH_TO_SEQUENCE_FOLDER to the uncompressed sequence folder. Change ASSOCIATIONS_FILE to the path of the corresponding associations file.

./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUMX.yaml PATH_TO_SEQUENCE_FOLDER ASSOCIATIONS_FILE

7. ROS Examples

Building the nodes for mono, monoAR, stereo and RGB-D

  1. Add the path including Examples/ROS/ORB_SLAM2 to the ROS_PACKAGE_PATH environment variable. Open your .bashrc file and add the following line at the end, replacing PATH with the folder where you cloned ORB_SLAM2:

export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:PATH/ORB_SLAM2/Examples/ROS

  2. Execute the build_ros.sh script:
chmod +x build_ros.sh
./build_ros.sh

Running Monocular Node

For a monocular input from topic /camera/image_raw, run node ORB_SLAM2/Mono. You will need to provide the vocabulary file and a settings file. See the monocular examples above.

rosrun ORB_SLAM2 Mono PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE

Running Monocular Augmented Reality Demo

This is a demo of augmented reality where you can use an interface to insert virtual cubes in planar regions of the scene. The node reads images from topic /camera/image_raw.

rosrun ORB_SLAM2 MonoAR PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE

Running Stereo Node

For a stereo input from topics /camera/left/image_raw and /camera/right/image_raw, run node ORB_SLAM2/Stereo. You will need to provide the vocabulary file and a settings file. If you provide rectification matrices (see the Examples/Stereo/EuRoC.yaml example), the node will rectify the images online; otherwise images must be pre-rectified.

rosrun ORB_SLAM2 Stereo PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE ONLINE_RECTIFICATION

Example: Download a rosbag (e.g. V1_01_easy.bag) from the EuRoC dataset (http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets). Open three terminal tabs and run one of the following commands in each:

roscore
rosrun ORB_SLAM2 Stereo Vocabulary/ORBvoc.txt Examples/Stereo/EuRoC.yaml true
rosbag play --pause V1_01_easy.bag /cam0/image_raw:=/camera/left/image_raw /cam1/image_raw:=/camera/right/image_raw

Once ORB-SLAM2 has loaded the vocabulary, press space in the rosbag tab. Enjoy! Note: a powerful computer is required to run the most demanding sequences of this dataset.

Running RGB_D Node

For an RGB-D input from topics /camera/rgb/image_raw and /camera/depth_registered/image_raw, run node ORB_SLAM2/RGBD. You will need to provide the vocabulary file and a settings file. See the RGB-D example above.

rosrun ORB_SLAM2 RGBD PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE

8. Processing your own sequences

You will need to create a settings file with the calibration of your camera. See the settings files provided for the TUM and KITTI datasets for monocular, stereo and RGB-D cameras. We use the calibration model of OpenCV. See the examples to learn how to create a program that makes use of the ORB-SLAM2 library and how to pass images to the SLAM system. Stereo input must be synchronized and rectified. RGB-D input must be synchronized and depth-registered.
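The per-frame API mirrors the shipped examples (e.g. Examples/Monocular/mono_tum.cc): construct an ORB_SLAM2::System, feed it timestamped images, then shut it down. A sketch of that flow, condensed from the examples (it requires the ORB-SLAM2 headers and library, so it is not standalone; the frame source here is left to you):

```cpp
#include <opencv2/core/core.hpp>
#include "System.h"

int main(int argc, char** argv) {
    // argv[1]: ORBvoc.txt, argv[2]: settings .yaml with your calibration
    ORB_SLAM2::System SLAM(argv[1], argv[2], ORB_SLAM2::System::MONOCULAR,
                           true /* launch the viewer */);

    cv::Mat im;           // fill from your camera or video source
    double tframe = 0.0;  // timestamp in seconds, must be monotonic
    // For each incoming frame:
    SLAM.TrackMonocular(im, tframe);  // returns the camera pose Tcw

    SLAM.Shutdown();      // stop all threads before saving
    SLAM.SaveKeyFrameTrajectoryTUM("KeyFrameTrajectory.txt");
    return 0;
}
```

For stereo or RGB-D, the examples use SLAM.TrackStereo(imLeft, imRight, tframe) and SLAM.TrackRGBD(imRGB, imD, tframe) instead, with the matching System::STEREO or System::RGBD sensor type.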

9. SLAM and Localization Modes

You can change between the SLAM and Localization mode using the GUI of the map viewer.

SLAM Mode

This is the default mode. The system runs three threads in parallel: Tracking, Local Mapping and Loop Closing. The system localizes the camera, builds a new map and tries to close loops.

Localization Mode

This mode can be used when you have a good map of your working area. In this mode the Local Mapping and Loop Closing are deactivated. The system localizes the camera in the map (which is no longer updated), using relocalization if needed.


orb_slam2's Issues

unable to run the ros example

Hi
When running the ROS example, I got some problems.
I entered the following commands:
1. cd ORB_SLAM2 && cd Examples && cd ROS && cd ORB_SLAM2
2.export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/home/cm/ORB_SLAM2/Examples/ROS
3.rosrun ORB_SLAM2 RGBD /home//ORB_SLAM2/Vocabulary/ORBvoc.txt /home//ORB_SLAM2/Examples/RGB-D/TUM1.yaml

I get this error:
[ERROR] [1458306383.881071390]: [registerPublisher] Failed to contact master at [localhost:11311]. Retrying...
Hoping for your reply, thank you.

Unable to map when rotating in place

Hi @raulmur

I was trying to map while rotating in place or in a small circle, but failed.

Are there any limitations in ORB-SLAM2 that prevent this? In Local Mapping or in Tracking?

I've already commented out these lines, but it still failed:
// if(viewCos<viewingCosLimit)
//     return false;

What can I do to achieve my goal?

Hoping for your reply, thanks.

Is Monocular ORB-SLAM in True-Scale?

Hello. This might be a trivial question, but in the case of monocular ORB-SLAM, are the camera trajectory and 3D reconstruction in true scale (i.e., if the camera moved 10 meters, is the camera trajectory in ORB-SLAM also around 10 meters)?
I appreciate your time!

Missing "#include <iomanip>" in mono_kitti.cc and stereo_kitti.cc?

When I compiled the project with build.sh, I got the errors "'setfill' and 'setw' was not declared in this scope". I fixed them by adding #include <iomanip> to the files mono_kitti.cc and stereo_kitti.cc. I use Ubuntu 14.04.4 LTS with g++ 4.8.4; I don't know whether this problem is specific to my setup or a common error. Hoping for your reply, thank you.

Compile error

Very strange one.
I am getting

make[2]: *** No rule to make target '/usr/lib/libOpenNI2.so', needed by '../lib/libORB_SLAM2.so'.  Stop.

but I don't see OpenNI2 referenced in any of the CMakeLists.txt files.
I searched the full project directory tree for it.

When the lib ORB_SLAM2 is defined here
https://github.com/raulmur/ORB_SLAM2/blob/master/CMakeLists.txt#L66
there is no clue about OpenNI.

Any ideas?

May it come from Pangolin?

My OS (Ubuntu 14.04 LTS) crashes while building ORB_SLAM2

Hi,
I have installed all required libraries, and while trying to build ORB_SLAM2 my OS (Ubuntu 14.04 LTS) always crashes after compiling the src/*.cc files (after building the *.o files). I tried to resolve the problem, but it always crashes at the same place. What's the matter? I really need your help.

ORB_SLAM2 ROS node is not estimating trajectory (Gazebo Simulation)

Hi,

I have successfully built the ORB-SLAM2 library (both the normal and the ROS variant). Using the Linux version (without ROS) with a predefined dataset (TUM2 for example), it works perfectly. However, after building the ROS node and executing it with a proper /camera/image_raw being published, I don't get any odometry, trajectory estimation or mapping. The video below shows what I have just described:

https://youtu.be/3OC2ARB9baI

This is the configuration of the Gazebo camera:

  <sensor name='camera' type='camera'>
      <update_rate>30</update_rate>
      <camera name='head'>
        <horizontal_fov>1.5708</horizontal_fov>
        <image>
          <width>1280</width>
          <height>1024</height>
          <format>R8G8B8</format>
        </image>
        <clip>
          <near>1</near>
          <far>500</far>
        </clip>
        <noise>
          <type>gaussian</type>
          <mean>0</mean>
          <stddev>0.007</stddev>
        </noise>
      </camera>
      <plugin name='camera_controller' filename='libgazebo_ros_camera.so'>
        <alwaysOn>true</alwaysOn>
        <updateRate>0.0</updateRate>
        <cameraName>camera</cameraName>
        <imageTopicName>image_raw</imageTopicName>
        <cameraInfoTopicName>camera_info</cameraInfoTopicName>
        <frameName>camera_link</frameName>
        <hackBaseline>0.07</hackBaseline>
    <Fx>517.306408</Fx>
        <Fy>516.469215</Fy>
    <Cx>318.643040</Cx>
        <Cy>255.313989</Cy>
        <distortionK1>0.0</distortionK1>
        <distortionK2>0.0</distortionK2>
        <distortionK3>0.0</distortionK3>
        <distortionT1>0.0</distortionT1>
        <distortionT2>0.0</distortionT2>
        <robotNamespace>/</robotNamespace>
      </plugin>
    </sensor>

This is part of the calibration file:

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 517.306408
Camera.fy: 516.469215
Camera.cx: 318.643040
Camera.cy: 255.313989

Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera.k3: 0.0

# Camera frames per second
Camera.fps: 30.0

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

And the Vocabulary file I used was the one located at Vocabulary/ORBvoc.txt.

I hope I explained it properly, tell me if you need further information. Thank you.

Stereo-ORB-SLAM2 on different public datasets

Hi,
at first thank you very much for publishing your code!

Yesterday I tried to run the stereo version on two scenes (Machine Hall 01, Vicon Room 1 01) of the EuRoC MAV dataset from ETHZ: http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets
I verified that the intrinsics are correctly set in the settings-yaml-file.
Following the comment in the KITTI settings file, the stereo baseline Camera.bf was calculated as the baseline in meters times Camera.fx. The VI-Sensor used has a baseline of approx. 0.10 m; the calibration file says it is closer to 0.11 m. With a focal length of 458, I set Camera.bf to 50.
When started, the features look good in the image, but the estimated position seems to be stuck at identity, while the orientation is correctly tracked. Furthermore, way too many keyframes are inserted into the map; almost every frame becomes a keyframe. The next symptom is that the "map" still looks like it did at initialization. Starting at different timestamps did not fix the problem, and neither did switching the cameras.

I adjusted only the following parameters in the settings file:
Camera.fx: 458.654
Camera.fy: 458.654
Camera.cx: 367.215
Camera.cy: 248.375
Camera.k1: -0.28348081 # no difference if set to 0 or not.
Camera.k2: 0.07395907
Camera.p1: 0.00019359
Camera.p2: 1.76187114e-05
Camera.width: 752
Camera.height: 480
Camera.fps: 20.0
Camera.bf: 50

The monocular version runs very nicely on these datasets and running the stereo version on the Kitti dataset works fine too.

Any thoughts where the problem might be?

Greetings,
Jan

Save and load map

Hey @raulmur great work with this update to ORB! I would love to know what the status is on the save/load map functionality. If you give me some directions I can also offer to implement this quickly.

Are there any specific challenges that need to be overcome to implement this?

Thanks!

Marc

Pangolin Error + plus ERROR: Calibration parameters to rectify stereo are missing!

I am trying to run ORB_SLAM2 on kitti_04.bag sequence with following command

$ rosrun ORB_SLAM2 Stereo Vocabulary/ORBvoc.txt Examples/Stereo/KITTI04-12.yaml false

$ rosbag play --pause kitti_04.bag /kitti_stereo/left/image_rect:=/camera/left/image_raw /kitti_stereo/right/image_rect:=/camera/right/image_raw

error I faced is ..............

ERROR: Calibration parameters to rectify stereo are missing!
terminate called after throwing an instance of 'std::runtime_error'
what(): Pangolin X11: Unable to retrieve framebuffer options
Aborted (core dumped)

Saving Key Frames

Hello, I am trying to save frames (as .jpg), which are used as key frames in ORB-SLAM.
Could you guide me to which code I should look at?
I appreciate your help!

Params to adjust for high-res input?

I'm running ORB-SLAM2 on some videos captured with a drone. The drone can easily capture 1080P video at 60 fps, but when I run it through ORB-SLAM2 it obviously does not work anywhere near real-time any more. I do not need the system to actually run in real time, but I was wondering if you could point me to some parameters that I should look to tune for this high-resolution input? Currently I have only increased the number of features to 4000. Also, is there a way to increase the number of times Global BA is run? Since we are fine with not-real-time performance, it would be great to have the highest quality possible.

Why not use OpenCV's ORB extractor?

Great job on ORB_SLAM2!
But I do have a question: why not use OpenCV's ORB extractor?
Since I see you wrote your own ORBextractor, is there anything different from OpenCV's?
Thanks.

ORB-SLAM2 with Kinect

Hey everyone, I am trying to use Kinect with the ORB-SLAM2.
I already went through the dataset examples, and they worked fine.
However, when trying to use it with the Kinect, it doesn't work: the camera window displays "Waiting for Images".
I already checked rostopic list, which gives:
/camera/depth_registered/image_raw
/camera/rgb/image_raw
/rosout
/rosout_agg

The two first topics are the ones that should be used with ORB-SLAM2, right?

In addition, I already tried launching libfreenect:
roslaunch freenect_launch freenect.launch

I get a bunch more rostopics and I am even able to view rgb video stream from image_view:
rosrun image_view image_view image:=/camera/rgb/image_raw
However, I cannot view the depth image from /camera/depth_registered/image_raw, but I can view the one in the topic:
rosrun image_view image_view image:=/camera/depth/image

I tried to change the ros_rgbd.cc file
From: message_filters::Subscriber<sensor_msgs::Image> depth_sub(nh, "camera/depth_registered/image_raw", 1);
To: message_filters::Subscriber<sensor_msgs::Image> depth_sub(nh, "/camera/depth/image", 1);

However, I didn't think it would work anyway (and it didn't). Neither of the above solved the problem.

When executing the rqt_graph, it seems like ORB_SLAM2 creates a node called RGBD that is getting the topic /camera/rgb/image_raw. The rqt_graph can be seen below:
https://www.dropbox.com/s/fri1ik0xtzw7a9e/rosgraph.png?dl=0
(seems like the link can only be accessed by copying and pasting into the browser)

Would any of you have a suggestion of what I can do?

Keyframe trajectory explanation

Hello,

Could you give some further insight about the output keyframe trajectory for the monocular examples?
The first column is the timestamp. Then is t (the next three columns) the camera position in (x, y, z), and q (the last four columns) the camera orientation as a quaternion?

Does this:
cv::Mat R = pKF->GetRotation().t();
return the orientation in (x,y,z) (no quaternion)?

Thanks.

SO3 Lie Group?

Does ORB-SLAM2 use the Lie group, e.g. in loop closure?

ROS Stereo Example

Hello. First of all, thank you for sharing the code.
For ROS example, I noticed that there are examples for monocular and RGBD, but no stereo.
I wonder whether there will be the ROS stereo example in the future.
Thank you for your time!

Why I cannot create OpenCV windows in the main program?

First of all thank you for the hard work!

I tried using cv::namedWindow("Input") in the main example program to create an additional window (I planned to monitor the input video as well), but some errors occurred.

When I tried doing that in Viewer.cc instead, it works. I wonder why? Did you somehow lock OpenCV to the Viewer's code only?

ROS Integration

Hi Raul,

the current version doesn't seem to be integrated with ROS. Are you planning to do it anytime soon?

Hernan

Confusions about opencv version

"
Could not find a configuration file for package "OpenCV" that is compatible
with requested version "2.4.3".

The following configuration files were considered but not accepted:

/usr/local/share/OpenCV/OpenCVConfig.cmake, version: 3.1.0

"
When I run build.sh, it shows me the message above, but I can't find an OpenCV 2.4.3 package.

Using rectified Images

Hi!
Perhaps you can help me with this:

I use ROS and have some bags with already undistorted, rectified images.

What parameters could I use to give ORB_SLAM2 a try? (Rectifying a second time is no problem; I just want to see the results.)

Thank you

'vasprintf' not declared in scope while compiling g2o

Hi,
I'm compiling ORB_SLAM2 on Windows using make (MinGW) and I get the following error:

...\ORB_SLAM2\Thirdparty\g2o\g2o\stuff\string_tools.cpp: 
In function 'std::string g2o::formatString(const char*, ...)':
...\ORB_SLAM2\Thirdparty\g2o\g2o\stuff\string_tools.cpp:100:49: 
error: 'vasprintf' was not declared in this scope
   int numChar = vasprintf(&auxPtr, fmt, arg_list);
                                                 ^
...\ORB_SLAM2\Thirdparty\g2o\g2o\stuff\string_tools.cpp: 
In function 'int g2o::strPrintf(std::string&, const char*, ...)':
...\ORB_SLAM2\Thirdparty\g2o\g2o\stuff\string_tools.cpp:117:50: 
error: 'vasprintf' was not declared in this scope
   int numChars = vasprintf(&auxPtr, fmt, arg_list);
                                                  ^
make[2]: *** [CMakeFiles/g2o.dir/g2o/stuff/string_tools.cpp.obj] Error 1
make[1]: *** [CMakeFiles/g2o.dir/all] Error 2
make: *** [all] Error 2

I was wondering if anyone else came across this error / if anyone could give me some pointers on how to fix this.

Thanks!

Unlimited parallel compilation

I had a problem with parallel compilation. At the end of the build.sh file is the command

make -j

That causes a lot of processes to fire up. When I compile on a computer with little memory (2 GB in my case), it runs out of memory. I propose adding a limit to parallel compilation, like

make -j 4

which should be enough, because ORB-SLAM doesn't take that long to compile.

About "points and keyframe trajectory" and "trajectory and ground truth" plotting

Dear all,

If I want to reproduce the plots of "points and keyframe trajectory" and "trajectory and ground truth" illustrated in Fig. 11 on p. 13 of the ORB_SLAM paper, where can I find the corresponding function in the ORB_SLAM2 package? Or, if that function does not exist, could someone suggest how to implement it?

Thanks~
Milton


a small bug in Tracking.cc

In line 227 of src/Tracking.cc:

if(mDepthMapFactor!=1 || imDepth.type()!=CV_32F);
imDepth.convertTo(imDepth,CV_32F,mDepthMapFactor);

Is that an error, since the condition of the if statement is ignored?

Mesh Collision / Grid

Hi guys,

Hope you are doing well !

I was wondering if there is any example reproducing the mesh collision like in this video with ORB_SLAM2 (at 01:00, https://www.youtube.com/watch?v=X0hx2vxxTMg).

Is there a standalone version of ORB-SLAM2 that works without a ROS distribution and is suitable for a CPU or a smartphone (iOS/Android)?

That would be great for simulations with a monocular camera or drones.

Cheers,
Luc

Map Viewer not working

I recently downloaded and installed ORB-SLAM2 and have tried running it with multiple examples. For some reason the Map Viewer is not loading anything, and I cannot seem to get it to work like I need to.
Thanks for any help provided

Benchmarking ORB_SLAM vs ORB_SLAM2

Hi @raulmur ,

I ran a few quick comparisons with your ROS example for ORB_SLAM2 in monocular mode and I can see a significant difference in performance between the previous version and this one. More specifically, the rate at which ORB_SLAM2 is able to process incoming frames on the same platform (now 3-4 vs previously 8-9 frames per second) is significantly lower. Also, ORB_SLAM2 refuses to initialize on the same dataset that ORB_SLAM is perfectly happy to initialize on.

What are your thoughts on this? What do you think are the major causes of this change?

Thanks a lot!

Marc

weird errors when building DBoW2

Sorry guys, it turns out I should compile DBoW2 with the build.sh in this project instead of from its own source.

Getting these error messages when building DBoW2 (Using build.sh provided or cmake under /Thirdparty/DBoW2/)

Linking CXX shared library ../lib/libDBoW2.so
/usr/bin/ld: cannot find -lQt5::Core
/usr/bin/ld: cannot find -lQt5::Gui
/usr/bin/ld: cannot find -lQt5::Widgets
/usr/bin/ld: cannot find -lQt5::Test
/usr/bin/ld: cannot find -lQt5::Concurrent
/usr/bin/ld: cannot find -lQt5::OpenGL
collect2: error: ld returned 1 exit status
make[2]: *** [../lib/libDBoW2.so] Error 1
make[1]: *** [CMakeFiles/DBoW2.dir/all] Error 2
make: *** [all] Error 2

Why -lQt5? It doesn't make any sense. I am really confused. Has anyone else met this? Any help would be appreciated.

Unable to build the ros examples!

Hi,

I'm having an issue with building the ROS examples. The first part of the installation works perfectly, i.e. I can successfully build the Thirdparty libraries and the examples. But when I try to build the ROS examples, I get this error:

`[rosbuild] Building package ORB_SLAM2
[rosbuild] Error from directory check: /opt/ros/indigo/share/ros/core/rosbuild/bin/check_same_directories.py /home/ankit/orb_slam_catkin_ws/ORB_SLAM2/Examples/ROS/ORB_SLAM2
1
Traceback (most recent call last):
File "/opt/ros/indigo/share/ros/core/rosbuild/bin/check_same_directories.py", line 46, in
raise Exception
Exception
CMake Error at /opt/ros/indigo/share/ros/core/rosbuild/private.cmake:102 (message):
[rosbuild] rospack found package "ORB_SLAM2" at "", but the current
directory is
"/home/ankit/orb_slam_catkin_ws/ORB_SLAM2/Examples/ROS/ORB_SLAM2". You
should double-check your ROS_PACKAGE_PATH to ensure that packages are found
in the correct precedence order.
Call Stack (most recent call first):
/opt/ros/indigo/share/ros/core/rosbuild/public.cmake:177 (_rosbuild_check_package_location)
CMakeLists.txt:4 (rosbuild_init)

-- Configuring incomplete, errors occurred!
`
I guess this could be something to do with opencv. Not sure though!
P.S I have all the dependencies installed successfully.

Thanks

runtime error when using orb slam v2

Firstly, thanks for sharing. I am trying ORB-SLAM v2 and encountering the error below when launching.
Parallels Ubuntu 14.04 virtual machine on a Mac, 8 GB memory, 4 cores.

terminate called after throwing an instance of 'std::runtime_error'
what(): Pangolin X11: Unable to retrieve framebuffer options
Aborted (core dumped)

Thank you all in advance for any help!

Runtime stability

Hello
First of all, thank you for sharing OrbSlam library.
Second, after testing it with the TUM database, it appears that the algorithm is not stable:

  1. Several initializations with the same database produce different results.
  2. Sometimes SLAM stuck in reset mode (Resetting database) for almost all datasets from freiburg1.
  3. Sometimes track is lost and SLAM exits with error.

I've noticed one smooth run on "rgbd_dataset_freiburg2_360_hemisphere", while other initializations appear to get stuck or lose track.
It also seems to run rather slowly (5-10 fps), sometimes slower.

Building ORB-SLAM2 library and TUM/KITTI examples

Hi! I was following the tutorial for building ORB-SLAM2 and, when I got to the ./build.sh step, I ran into a few problems, some of which I was (seemingly) able to solve, but others I wasn't.

  • First, there was a problem in the file ORBextractor.cc, which required adding the header:
    #include <opencv2/opencv.hpp> (solution taken from raulmur/ORB_SLAM#44)
  • Next, I had similar problems in system.cc, stereo_kitti.cc and mono.cc. I solved these errors by adding the header below to those files (they could not resolve the function setprecision()):
    #include <iomanip>

If I didn't do anything that should not be done, I guess it was supposed to work. However, I get the following output when running ./build.sh (I get the libORB_SLAM2.so output, but not the executables mono_tum, mono_kitti, rgbd_tum, stereo_kitti):

Configuring and building Thirdparty/DBoW2 ...
mkdir: cannot create directory ‘build’: File exists
-- Configuring done
-- Generating done
-- Build files have been written to: /home/mma2739/ORB_SLAM2/Thirdparty/DBoW2/build
[100%] Built target DBoW2
Configuring and building Thirdparty/g2o ...
mkdir: cannot create directory ‘build’: File exists
-- BUILD TYPE:Release
-- Compiling on Unix
-- Configuring done
-- Generating done
-- Build files have been written to: /home/mma2739/ORB_SLAM2/Thirdparty/g2o/build
[100%] Built target g2o
Uncompress vocabulary ...
Configuring and building ORB_SLAM2 ...
mkdir: cannot create directory ‘build’: File exists
Build type: Release
-- Using flag -std=c++11.
-- Configuring done
-- Generating done
-- Build files have been written to: /home/mma2739/ORB_SLAM2/build
[ 82%] Built target ORB_SLAM2
Scanning dependencies of target mono_kitti
Scanning dependencies of target stereo_kitti
Linking CXX executable ../Examples/RGB-D/rgbd_tum
Linking CXX executable ../Examples/Monocular/mono_tum
[ 86%] [ 91%] Building CXX object CMakeFiles/stereo_kitti.dir/Examples/Stereo/stereo_kitti.cc.o
Building CXX object CMakeFiles/mono_kitti.dir/Examples/Monocular/mono_kitti.cc.o
/usr/bin/ld: warning: libopencv_core.so.3.1, needed by ../Thirdparty/DBoW2/lib/libDBoW2.so, may conflict with libopencv_core.so.2.4
/usr/bin/ld: CMakeFiles/rgbd_tum.dir/Examples/RGB-D/rgbd_tum.cc.o: undefined reference to symbol '_ZN2cv6String10deallocateEv'
/usr/local/lib/libopencv_core.so.3.1: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make[2]: *** [../Examples/RGB-D/rgbd_tum] Error 1
make[1]: *** [CMakeFiles/rgbd_tum.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
/usr/bin/ld: warning: libopencv_core.so.3.1, needed by ../Thirdparty/DBoW2/lib/libDBoW2.so, may conflict with libopencv_core.so.2.4
/usr/bin/ld: CMakeFiles/mono_tum.dir/Examples/Monocular/mono_tum.cc.o: undefined reference to symbol '_ZN2cv6String10deallocateEv'
/usr/local/lib/libopencv_core.so.3.1: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make[2]: *** [../Examples/Monocular/mono_tum] Error 1
make[1]: *** [CMakeFiles/mono_tum.dir/all] Error 2
Linking CXX executable ../Examples/Monocular/mono_kitti
Linking CXX executable ../Examples/Stereo/stereo_kitti
/usr/bin/ld: warning: libopencv_core.so.3.1, needed by ../Thirdparty/DBoW2/lib/libDBoW2.so, may conflict with libopencv_core.so.2.4
/usr/bin/ld: CMakeFiles/mono_kitti.dir/Examples/Monocular/mono_kitti.cc.o: undefined reference to symbol '_ZN2cv6String10deallocateEv'
/usr/local/lib/libopencv_core.so.3.1: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make[2]: *** [../Examples/Monocular/mono_kitti] Error 1
make[1]: *** [CMakeFiles/mono_kitti.dir/all] Error 2
/usr/bin/ld: warning: libopencv_core.so.3.1, needed by ../Thirdparty/DBoW2/lib/libDBoW2.so, may conflict with libopencv_core.so.2.4
/usr/bin/ld: CMakeFiles/stereo_kitti.dir/Examples/Stereo/stereo_kitti.cc.o: undefined reference to symbol '_ZN2cv6String10deallocateEv'
/usr/local/lib/libopencv_core.so.3.1: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make[2]: *** [../Examples/Stereo/stereo_kitti] Error 1
make[1]: *** [CMakeFiles/stereo_kitti.dir/all] Error 2
make: *** [all] Error 2
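A note on the log above: the `DSO missing from command line` failures suggest two OpenCV installations are being mixed, with libDBoW2.so built against OpenCV 3.1 while the examples link against 2.4. One hedged fix is to request the same explicit OpenCV version in both Thirdparty/DBoW2/CMakeLists.txt and the top-level CMakeLists.txt (3.1 here is an example, not a prescription):

```cmake
# Ask CMake for one specific OpenCV in every CMakeLists.txt involved, so
# DBoW2 and the examples link against the same installation.
find_package(OpenCV 3.1 QUIET)
if(NOT OpenCV_FOUND)
   message(FATAL_ERROR "OpenCV 3.1 not found.")
endif()
```

After changing the version, the build directories should be wiped so the cached OpenCV paths are regenerated.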

Core dumped running TUM dataset

The camera ran for around 5 seconds (everything seemed normal), and then a segmentation fault terminated the program.

I am currently using OpenCV v2.4.11. I encountered a lot of problems installing the OpenCV library. I manually added #include and #include<opencv2/opencv.hpp> in several files to get the dataset to run.

This is the information I got when running TUM dataset:

zixuan@zixuan-MacBookPro:~/ORB_SLAM2$ ./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUM1.yaml /home/zixuan/Documents/dataset/rgbd_dataset_freiburg1_xyz

ORB-SLAM2 Copyright (C) 2014-2016 Raul Mur-Artal, University of Zaragoza.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.

Input sensor was set to: Monocular

Loading ORB Vocabulary. This could take a while...
Vocabulary loaded!

Camera Parameters:

  • fx: 517.306
  • fy: 516.469
  • cx: 318.643
  • cy: 255.314
  • k1: 0.262383
  • k2: -0.953104
  • k3: 1.16331
  • p1: -0.005358
  • p2: 0.002628
  • fps: 30
  • color order: RGB (ignored if grayscale)

ORB Extractor Parameters:

  • Number of Features: 1000
  • Scale Levels: 8
  • Scale Factor: 1.2
  • Initial Fast Threshold: 20
  • Minimum Fast Threshold: 7

Start processing sequence ...
Images in the sequence: 798

init done
opengl support available
New Map created with 90 points
Segmentation fault (core dumped)

There is no other information provided.
Thank you for help!
P.S. I am using Ubuntu 14.04 (dual boot) on a MacBook Pro.

map save/load

Hi Raul!

first of all thank you for sharing such a great piece of software!
I was thinking it would be nice to have some map save/load functionality.
This could enable some interesting features, like the reconstruction of large-scale environments in an incremental fashion (i.e., create and save an initial map, which is then loaded and extended iteratively).

Do you plan to deliver such functionality?
(I saw there are some commented-out functions in system.h.)
If not, do you have any advice on how to enable it?

thanks,
danny

New Semi-Dense Reconstruction

Hi @raulmur

On your project page, you say that the new semi-dense reconstruction is able to run. But in this GitHub code base, I didn't find any part that handles semi-dense reconstruction. Is that right?

Pangolin X11: Unable to retrieve framebuffer options

Hello,

When I follow these steps to run the KITTI monocular example, I get the following error.

terminate called after throwing an instance of 'std::runtime_error'
what(): Pangolin X11: Unable to retrieve framebuffer options
Aborted (core dumped)

I'm running it on Ubuntu in Parallels on a Mac.
What's the main reason for this issue?

Thanks.

Creating my own vocabulary

Hi,

I am trying to create my own vocabulary file, in order to adapt the ORB-SLAM2 algorithm to the camera stream in a Gazebo simulation. I use DBoW2 to create both the vocabulary file _voc.yml.tgz and the database file _db.yml.tgz.

However, the ORB-SLAM2 algorithm requires a *.txt vocabulary file (that is supposedly serialized?).

How can I generate the *.txt file from the files I have mentioned?

Thanks

How to draw the map points with the camera pose ?

First, thanks for the amazing project. I have some trouble running the Pangolin-based map viewer on a Mac,
so I'm trying to draw the map points according to the camera pose.

cv::Mat cameraPose = SLAM.TrackMonocular(im,tframe);

First I got the camera pose like this. Then I'm trying to draw the map points using the camera pose matrix.

const vector<ORB_SLAM2::MapPoint*> &vpMPs = map->GetAllMapPoints();
std::vector<cv::Point3f> allmappoints;
... ... 
std::vector<cv::Point3f> maptrans;
float scale = 1.0f;
cv::perspectiveTransform(allmappoints, maptrans, cameraPose);  // transform the mappoints
for (size_t j = 0; j < maptrans.size(); ++j) {
     cv::Point3f r1 = maptrans[j];
     r1.x = (r1.x+1)*320;
     r1.y = (r1.y+1)*240;
     cv::rectangle(im, cv::Rect((r1.x - 5) * scale, (r1.y - 5 )* scale, 5 * 2 *scale, 5 * 2 * scale), cv::Scalar( 0, 255, 0 ),1);
}

As shown in the code above, I use the camera pose matrix to transform the map points, and I get the transformed points in the range (-1,-1,-1) to (1,1,1). Then I use (r1.x+1) * 320 and (r1.y+1) * 240 to calculate the map point position on the screen (640 x 480).

But the result is not quite right. When I move the camera up, the map points drawn on the screen also move up. When I move the camera forward and backward, the scale also seems wrong.

Can anyone help me with this? How can I use the camera pose to draw the map points so that they exactly match the screen image?

unable to build ORB_SLAM2 due to Pangolin

make[2]: *** No rule to make target `../Pangolin/build/src/libpangolin.so', needed by `../lib/libORB_SLAM2.so'. Stop.
make[1]: *** [CMakeFiles/ORB_SLAM2.dir/all] Error 2
make: *** [all] Error 2

Is there any way to resolve the above error?

Pangolin X11: Invalid GLX version. Require GLX >= 1.3

My work machine runs Windows 8.1, so I use VNC Viewer to connect to a computer running Ubuntu 14.04 to test the ORB_SLAM2 project. However, after I run the command "rosrun ORB_SLAM2 RGBD ORBvoc.txt settings.yaml", the terminal displays the following error:
terminate called after throwing an instance of 'std::runtime_error'
what(): Pangolin X11: Invalid GLX version. Require GLX >= 1.3
Aborted (core dumped)

ORB-SLAM2: Current Frame gives black screen with "WAITING FOR IMAGES"

I followed the instructions and was successfully able to run the monocular example with

./Examples/Monocular/mono_tum Vocabulary/ORBvoc.txt Examples/Monocular/TUMX.yaml PATH_TO_SEQUENCE_FOLDER

Now, I am trying to run monocular example on ROS, so I do the following

  • Terminal 1: roscore
  • Terminal 2: rosrun ORB_SLAM2 Mono Vocabulary/ORBvoc.txt Examples/Monocular/TUM2.yaml
  • Terminal 3: (after cd to directory where .bag file is) rosbag play rgbd_dataset_freiburg2_desk.bag

After this I still see nothing in ORB-SLAM2: Map Viewer and a black screen in ORB-SLAM2: Current Frame with "WAITING FOR IMAGES" at the bottom.

I am an absolute ROS beginner; can you please explain how I can get it working?
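A note in case it helps (both topic names are assumptions about the stock setup): the Mono node subscribes to /camera/image_raw, while the TUM RGB-D bags publish images on /camera/rgb/image_color, so the node never receives a frame. Remapping the topic at playback time would look like:

```shell
# Remap the bag's RGB topic onto the topic the Mono node subscribes to.
rosbag play rgbd_dataset_freiburg2_desk.bag /camera/rgb/image_color:=/camera/image_raw
```

Running `rostopic list` while the bag is playing shows which topics actually exist.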

Cannot compile Viewer.cc due to Pangolin

Scanning dependencies of target ORB_SLAM2
[ 4%] [ 8%] [ 13%] [ 17%] [ 21%] [ 26%] [ 30%] Building CXX object CMakeFiles/ORB_SLAM2.dir/src/System.cc.o
[ 34%] Building CXX object CMakeFiles/ORB_SLAM2.dir/src/LocalMapping.cc.o
Building CXX object CMakeFiles/ORB_SLAM2.dir/src/Tracking.cc.o
Building CXX object CMakeFiles/ORB_SLAM2.dir/src/LoopClosing.cc.o
[ 39%] Building CXX object CMakeFiles/ORB_SLAM2.dir/src/ORBmatcher.cc.o
Building CXX object CMakeFiles/ORB_SLAM2.dir/src/ORBextractor.cc.o
Building CXX object CMakeFiles/ORB_SLAM2.dir/src/FrameDrawer.cc.o
[ 43%] Building CXX object CMakeFiles/ORB_SLAM2.dir/src/Converter.cc.o
[ 47%] Building CXX object CMakeFiles/ORB_SLAM2.dir/src/KeyFrame.cc.o
Building CXX object CMakeFiles/ORB_SLAM2.dir/src/MapPoint.cc.o
[ 52%] [ 56%] [ 60%] Building CXX object CMakeFiles/ORB_SLAM2.dir/src/Map.cc.o
[ 65%] [ 69%] Building CXX object CMakeFiles/ORB_SLAM2.dir/src/MapDrawer.cc.o
Building CXX object CMakeFiles/ORB_SLAM2.dir/src/Optimizer.cc.o
Building CXX object CMakeFiles/ORB_SLAM2.dir/src/PnPsolver.cc.o
Building CXX object CMakeFiles/ORB_SLAM2.dir/src/Frame.cc.o
Building CXX object CMakeFiles/ORB_SLAM2.dir/src/KeyFrameDatabase.cc.o
[ 73%] [ 78%] [ 82%] Building CXX object CMakeFiles/ORB_SLAM2.dir/src/Viewer.cc.o
Building CXX object CMakeFiles/ORB_SLAM2.dir/src/Sim3Solver.cc.o
Building CXX object CMakeFiles/ORB_SLAM2.dir/src/Initializer.cc.o
/home/fx/Dropbox/catkin_ws/src/ORB_SLAM2/src/Viewer.cc: In member function ‘void ORB_SLAM2::Viewer::Run()’:
/home/fx/Dropbox/catkin_ws/src/ORB_SLAM2/src/Viewer.cc:58:5: error: ‘CreateWindowAndBind’ is not a member of ‘pangolin’
pangolin::CreateWindowAndBind("ORB-SLAM2: Map Viewer",1024,768);
^
/home/fx/Dropbox/catkin_ws/src/ORB_SLAM2/src/Viewer.cc:134:9: error: ‘FinishFrame’ is not a member of ‘pangolin’
pangolin::FinishFrame();
^

Thank you in advance :)

Get live images?

Has anyone had any luck getting images from a camera in real time not using ROS?

Cannot compile g2o

I cannot compile the g2o version bundled with ORB_SLAM2. The error appears below.

CMake Error at cmake_modules/FindBLAS.cmake:393 (message):
A required library with BLAS API not found. Please specify library
location.
Call Stack (most recent call first):
CMakeLists.txt:46 (FIND_PACKAGE)

-- Configuring incomplete, errors occurred!

When I use the latest version downloaded from the g2o repository, I can compile it, but ORB_SLAM2 seems to use a different version of g2o, so ORB_SLAM2 cannot find some header files.

What should I do?
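A note on the configure error: FindBLAS.cmake failing usually means the BLAS/LAPACK development packages are simply not installed. On Ubuntu 14.04 the standard packages would be (names assumed from the stock repositories):

```shell
# Install the BLAS and LAPACK development files that g2o's FindBLAS.cmake probes for.
sudo apt-get install libblas-dev liblapack-dev
```

Also note that ORB_SLAM2 bundles its own modified g2o in Thirdparty/g2o (visible in the build logs above); a separately installed upstream g2o has different headers, which matches the mismatch described here.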

Global bundle adjustment causing map to jump

Hi, thank you for this code, it is brilliant. I have it running live with a stereo camera setup, and at first the map and translation values look great. But when it runs global bundle adjustment (GBA), the camera pose suddenly jumps, moving up to a metre from where it was. Is this normal? A stable and accurate camera track is more important to me than map accuracy; could I turn off the global bundle adjustment, or would that be inadvisable?

Thanks again!
