
dpptam's Introduction

#DPPTAM

DPPTAM is a direct monocular odometry algorithm that estimates a dense reconstruction of a scene in real time on a CPU. Highly textured image areas are mapped using standard direct mapping techniques that minimize the photometric error across different views. We assume that homogeneous-color regions belong to approximately planar areas. Related publication:

[1] Alejo Concha, Javier Civera. DPPTAM: Dense Piecewise Planar Tracking and Mapping from a Monocular Sequence. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 2015.
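For context, the photometric error that direct methods such as [1] minimize is, schematically (our notation, a sketch rather than the paper's exact formulation):

  E_{photo}(T) = \sum_{\mathbf{p} \in \Omega} \left\| I_{ref}(\mathbf{p}) - I\left( \pi\left( T\, \pi^{-1}(\mathbf{p}, d_{\mathbf{p}}) \right) \right) \right\|^{2}

where \pi is the pinhole projection, d_{\mathbf{p}} the depth of pixel \mathbf{p} in the reference image I_{ref}, and T the relative camera pose being estimated.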

Video of the results that you should expect in the example sequences:

https://www.youtube.com/watch?v=1R3DkwKrWiI

#License

DPPTAM is licensed under the GNU General Public License Version 3 (GPLv3), please see http://www.gnu.org/licenses/gpl.html.

For commercial purposes, please contact the authors.

If you use DPPTAM in an academic work, please cite:

  @inproceedings{conchaIROS15,
  title={{Dense Piecewise Planar Tracking and Mapping from a Monocular Sequence}},
  author={Concha, Alejo and Civera, Javier},
  booktitle={Proc. of The International Conference on Intelligent Robots and Systems (IROS)},
  year={2015}}

#Disclaimer

This site and the code provided here are under active development. Even though we try to release only working, high-quality code, this version might still contain some issues. Please use it with caution.

#Dependencies

ROS:

We have tested DPPTAM in Ubuntu 14.04 with ROS Indigo.

To install ROS (indigo) use the following command:

 sudo apt-get install ros-indigo-desktop

Or check the following link if you run into any issues:

http://wiki.ros.org/indigo/Installation/Ubuntu

PCL library for visualization:

 sudo apt-get install ros-indigo-pcl-ros

BOOST library to launch the different threads:

 sudo apt-get install libboost-all-dev 

#Installation

 git clone https://github.com/alejocb/dpptam.git

#Compilation

 catkin_make --pkg dpptam

Third Party: SUPERPIXELS COMPILATION

Code used -> Efficient Graph-Based Image Segmentation. P. Felzenszwalb, D. Huttenlocher. International Journal of Computer Vision, Vol. 59, No. 2, September 2004

cd root/catkin_workspace/src/dpptam/ThirdParty/segment
make

#Usage

Launch dpptam from your 'catkin_workspace' folder:

cd root/catkin_workspace 
rosrun dpptam dpptam

Notice that the location of dpptam should be the following:

root/catkin_workspace/src/dpptam

Launch the visualizer of the current frame

rosrun image_view image_view image:=/dpptam/camera/image

Launch the visualizer of the map

rosrun rviz rviz

We are working on an automatic visualizer, but for now, check the following screenshot to set up the rviz visualizer:

http://imgur.com/OA8i3Dj      

Visualization of the results using '.ply' files:

You can alternatively use an offline visualizer (such as 'meshlab') to inspect the results more closely. The files are saved in dpptam/src/map_and_poses.

To execute the provided sequences:

rosbag play lab_unizar.bag
rosbag play lab_upenn.bag

There are two parameters that you have to modify (before executing a sequence) in dpptam/src/data.yml:

1-) Intrinsic parameters of the camera:

'cameraMatrix'

'distCoeffs'

2-) Camera topic

camera_path:"/image_raw"

These are the parameters and the links for the sequences that we provide:

lab_unizar.bag -> http://webdiis.unizar.es/~jcivera/lab_unizar.bag

Update the file dpptam/src/data.yml with the following 'camera_path', 'cameraMatrix' and 'distCoeffs':

  camera_path:"/camera/rgb/image_color"
  cameraMatrix: !!opencv-matrix
     rows: 3
     cols: 3
     dt: d
     data: [ 520.908620, 0., 325.141442, 0., 521.007327 , 249.701764, 0., 0., 1. ]
  distCoeffs: !!opencv-matrix
     rows: 5
     cols: 1
     dt: d
     data: [ 0.231222, -0.784899, -0.003257, -0.000105, 0.917205 ]

lab_upenn.bag -> http://webdiis.unizar.es/~jcivera/lab_upenn.bag

Update the file dpptam/src/data.yml with the following 'camera_path', 'cameraMatrix' and 'distCoeffs':

  camera_path:"/mv_25000432/image_raw"
  cameraMatrix: !!opencv-matrix
     rows: 3
     cols: 3
     dt: d
     data: [624.63,0,372.58,0,624.53,246.89,0,0,1]
  distCoeffs: !!opencv-matrix
     rows: 5
     cols: 1
     dt: d
     data: [-0.429,0.188,2.299e-06,0.00051,0]

The initialization is performed assuming that the first map is a plane parallel to the camera. In most cases it converges to the right solution in a few seconds. Nevertheless, we recommend initializing in a well-textured area with relatively slow motion.

To exploit the benefits of superpixels in low-texture areas, we offer the following hint: try to 'see' the low-texture area in at least 3 different views with enough parallax, so the superpixel can be matched in multiple views. Notice that superpixel triangulation needs a larger parallax than feature/photometric triangulation.

#Parameters

There are a few tunable parameters that you can modify in dpptam/src/data.yml:

1-) Superpixel calculation

calculate_superpixels: [bool]. If 1, 3D superpixels will be calculated.

2-) Number of frames for mapping

num_cameras_mapping_th: [int]. Number of frames that you want to use to estimate the depth maps. Default: 9.

3-) Minimum parallax required for mapping

translational_ratio_th_min: [double]. Minimum parallax to insert a keyframe. Default: 0.075. Typical values [0.03-0.15].

4-) Degenerate cases in 3D superpixel matching

limit_ratio_sing_val: [double]. This threshold deals with degenerate cases in 3D superpixel calculation. Smaller values -> fewer outliers. Default: 100. Typical values: [10-1000].

5-) Minimum normalized residual threshold required.

limit_normalized_residual: [double]. This threshold accounts for the minimum error required in superpixel calculation. Smaller values -> fewer outliers. Default: 0.30. Typical values: [0.05-0.50].

6-) Minimum number of matches of 3D superpixels in multiple views to achieve multiview consistency.

matchings_active_search: [int]. Number of matches required for the 3D superpixel across multiple views. Larger values -> fewer outliers. Default: 3. Typical values: [0-4].
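Put together, the defaults above would look roughly like this inside dpptam/src/data.yml (a sketch for orientation only; check the shipped file for the exact keys and the surrounding entries):

  calculate_superpixels: 1
  num_cameras_mapping_th: 9
  translational_ratio_th_min: 0.075
  limit_ratio_sing_val: 100
  limit_normalized_residual: 0.30
  matchings_active_search: 3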

#Contact

If you have any issues compiling/running dpptam, or you would like to know anything about the code, please contact the authors:

 Alejo Concha -> [email protected]

 Javier Civera -> [email protected]

dpptam's Issues

Scaled pose update

Hi,

I would like to take advantage of external pose updates and use them in place of the tracking system proposed in DP-PTAM. Essentially, I would like to use only the mapping component, while the tracking information comes from an external (time-synchronised) source, since DP-PTAM, being a monocular system, cannot estimate metric scale. Any help? :)

Thanks!
Kabir

Inverse Compositional Image Alignment -> Pose Updating step

In "gauss_newton_ic" function, when you update the current camera pose, I think you perform eq(4) in the paper in the opposite order (T_n = inv(T_hat)*T_n).

I mean line 1681~1688 in SemiDenseTracking.cpp shows that R2 and t2 become:

R2 = R1.t()*R2;
t2 = R1.t()*(t2-t1);

which means they represent the world to camera transformation where:

  • 1st transformation: original world to camera transformation (old R2 & t2)
  • 2nd transformation: inverse transformation of R1&t1

If I reverse the order and update T_n = T_n * inv(T_hat), I find that it also tracks well for small movements, but it loses tracking for large movements.

Did having T_n = inv(T_hat)*T_n instead of T_n = T_n * inv(T_hat) work better when you tested the code, and is that why it differs from the method described in the paper?

Or did I simply miss something? Please let me know :)
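For readers following this thread, here is a minimal self-contained sketch of the two composition orders in OpenCV (written in the issue's notation; an illustration, not the repository's code):

  #include <opencv2/core/core.hpp>

  // T = [R t; 0 1]. R1, t1 form the estimated increment T_hat;
  // R2, t2 hold the current world-to-camera pose T_n.

  // T_n <- inv(T_hat) * T_n  -- the order used in SemiDenseTracking.cpp.
  void update_left(cv::Mat &R2, cv::Mat &t2, const cv::Mat &R1, const cv::Mat &t1)
  {
      t2 = R1.t() * (t2 - t1);
      R2 = R1.t() * R2;
  }

  // T_n <- T_n * inv(T_hat)  -- the order that eq. (4) of the paper suggests.
  void update_right(cv::Mat &R2, cv::Mat &t2, const cv::Mat &R1, const cv::Mat &t1)
  {
      t2 = t2 - R2 * R1.t() * t1;  // uses R2 before it is overwritten
      R2 = R2 * R1.t();
  }

  int main()
  {
      cv::Mat R1 = cv::Mat::eye(3, 3, CV_64F), t1 = cv::Mat::zeros(3, 1, CV_64F);
      cv::Mat R2 = cv::Mat::eye(3, 3, CV_64F), t2 = cv::Mat::zeros(3, 1, CV_64F);
      update_left(R2, t2, R1, t1);   // a no-op for identity increments
      update_right(R2, t2, R1, t1);
      return 0;
  }

Both orders coincide only when T_hat and T_n commute, which is why small increments behave similarly while large ones diverge.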

relocalization

This system does not seem to relocalize once it loses localization. Has there been any work on extending dpptam to relocalize and to extend saved maps?

question about usage

Dear @alejocb ,

I have questions about the usage: I couldn't see any output in RViz. The following are my steps:

$ cd ~/catkin_ws
$ roscore &
$ rosrun dpptam main

Open another terminal:
$ rosbag play ~/dpptam-dataset/lab_unizar.bag
Open another terminal:
$ rosrun rviz rviz

But I couldn't see any output in RViz, and it shows the following error:
Fixed Frame
No tf data. Actual error: Fixed Frame [map] does not exist

https://goo.gl/dTjjys

Any idea or suggestion to fix it?

Thanks in advance~
Milton

OpenCV Error - persistence.cpp - cvGetFileNodeByName

@alejocb @amiltonwong
After successfully building the package with catkin_make, I run

rosrun dpptam dpptam

then I got:

OpenCV Error: Bad argument (Invalid pointer to file storage) in cvGetFileNodeByName, file /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/persistence.cpp, line 740
terminate called after throwing an instance of 'cv::Exception'
  what():  /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/persistence.cpp:740: error: (-5) Invalid pointer to file storage in function cvGetFileNodeByName

I tried to uncomment

chdir("/home/alejo/catkin_ws");

and changed it to my path, but it still failed to run.

Possible coding mistake in DenseMapping.cpp line 857

DenseMapping.cpp line 857 is:
variance_points_tracked.at<float>(round(projected_points.at<float>(ii, 0)), round(projected_points.at<float>(ii, 1))) = real_points.at<float>(ii, 6);

The statement
real_points.at<float>(ii, 6);
really does
real_points.at<float>(ii + 1, 0);

This is because the width (number of columns) of real_points is 6, so the access goes out of the matrix's bounds. However, OpenCV's Mat stores its data in a one-dimensional array laid out as:
at<float>(WANT_ROW, WANT_COL) == data[WANT_ROW * numCols + WANT_COL]
so when WANT_COL exceeds the number of columns, the access lands on the next row.

I changed this code to real_points.at<float>(ii, 5), but DPPTAM still worked.
Is this intended or not?
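The wrap-around described above is easy to reproduce in isolation; a minimal sketch (in release builds the access silently wraps, in debug builds it may trigger an assertion):

  #include <opencv2/core/core.hpp>
  #include <cstdio>

  int main()
  {
      cv::Mat m(3, 6, CV_32FC1);
      for (int r = 0; r < m.rows; ++r)
          for (int c = 0; c < m.cols; ++c)
              m.at<float>(r, c) = r * 10.0f + c;  // row 0: 0..5, row 1: 10..15, ...

      // Column index out of range: lands on the first element of the next row.
      std::printf("m.at<float>(0, 6) = %.0f\n", m.at<float>(0, 6));  // prints 10, i.e. m(1, 0)
      return 0;
  }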

General question regarding the tuning of the thresholds

I see that you sometimes use very fine-tuned thresholds in the code (e.g. spatial threshold = 1.9, neighbors_consistency_print = 0.93, etc.).

Can you tell me how you got these values? Did you use some sort of genetic algorithm to reach the current setting? Or was it entirely done manually by just evaluating the result qualitatively?

I'm also curious how you came up with the not-so-fine-tuned values as well, such as the patch size.

Median Regularization

Correct me if I'm wrong, but I think you made a mistake in line 769 of SemiDenseMapping.cpp:

if (cont_depths2reg > 0 )

I think this should be greater than 1, not 0, because cont_depths2reg will never be 0 since it always counts itself.

I fixed this issue in my pull request #28.

how to run the program

Dear @alejocb,

I am trying to run the demo, but there is a problem: Segmentation fault (core dumped). Can you tell me how to solve it?
*** DPPTAM is working ***

*** Launch the example sequences or use your own sequence / live camera and update the file 'data.yml' with the corresponding camera_path and calibration parameters ***
frames_processed -> 8.33333 %
Segmentation fault (core dumped)

Thanks in advance~
Ailin

catkin_make error

Dear @alejocb ,

My system is Ubuntu 14.04 with ROS Indigo,
My catkin directory structure is as follows:
Base path: /root/catkin_ws
Source space: /root/catkin_ws/src
Build space: /root/catkin_ws/build
Devel space: /root/catkin_ws/devel
Install space: /root/catkin_ws/install

I download dpptam package in /root/catkin_ws/src
--> cd /root/catkin_ws/src
--> git clone https://github.com/alejocb/dpptam.git

Then I switch back into ~/catkin_ws and issue the command catkin_make --pkg dpptam in the standard way, but it shows "Packages "dpptam" not found in the workspace":

root@milton-ThinkPad-T450:~/catkin_ws# catkin_make --pkg dpptam
Base path: /root/catkin_ws
Source space: /root/catkin_ws/src
Build space: /root/catkin_ws/build
Devel space: /root/catkin_ws/devel
Install space: /root/catkin_ws/install
Packages "dpptam" not found in the workspace

Then I try the command catkin_make dpptam, but it still did not succeed; see below:
https://goo.gl/b64rt9

Any idea or suggestion to fix this issue?

Thanks in advance~

Milton

Calibration question

I run the ROS bag file (lab_unizar.bag) and get a very nice dense point cloud right away, but when I use my webcam it takes a really long time for anything to show up (is it just not finding enough keyframes?). When it does start to create a point cloud, it is very slow to add new points. Is this a calibration issue or a webcam issue? How can I tell what to look at to resolve this behavior? Thank you for any help you can offer.

Question regarding Gauss Newton update step

In the "gauss_newton_ic" function in SemiDenseTracking.cpp, there is the following code:

for (int ii = 0; ii <6;ii++)
{
    jacobian.rowRange(0,jacobian.rows).colRange(ii,ii+1) = weight.mul(jacobian_analitic.rowRange(0,jacobian_analitic.rows).colRange(ii,ii+1) );
}

init = (jacobian.t()*jacobian).inv(cv::DECOMP_SVD)*jacobian.t();

cv::Mat init2;
init2 = init*(error_vector_sqrt_inicial.mul(weight)) ;

And in "optimize_camera" function, right after performing "gauss_estimation", you run the following code:

weight[j] = (1 + (error_vector[j]/(variances[j])));
weight[j] = 1/weight[j];
weight[j] = weight[j].mul(weight[j]);

Here are my questions:

  1. According to the first code block, you are computing init2 = (J.t()*W^2*J).inv()*J.t()*r(0)*W, where J = jacobian_analitic (W^2 because the jacobian in your init calculation is already multiplied by the weight). Was this intended because having W^2 was better than W?
  2. According to the following link https://research.microsoft.com/en-us/um/people/zhang/INRIA/Publis/Tutorial-Estim/node24.html, I see that the equation for the weight function is quite different. Can you give me a reference for your weight function equation?

Thanks in advance and I will look forward to your reply ! 👍
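For reference (a standard formula, not quoted from the paper or the code), the robust iteratively-reweighted Gauss-Newton step that question 1 compares against is:

  \delta = \left( J^{\top} W J \right)^{-1} J^{\top} W \, r

whereas composing the snippets above yields \delta = (J^{\top} W^{2} J)^{-1} J^{\top} W^{2} \, r, i.e. IRLS run with the weight function squared.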

Question about the thread

The code is very good and amazing.
I have had one question in mind for several days while studying the code, and I hope to get an answer here.
The question is: how do the three threads run synchronously? By timing, or by something else I have not found? I found a mutex defined but not used.

bruce
Thanks in advance.

opencv error: Assertion failed

When I run "rosrun dpptam dpptam" and "rosbag play lab_unizar.bag",
I can see the movie start to play in the current frame (/dpptam/camera/image),
but sometimes in the middle (before any map is generated in rviz) I get this error:

OpenCV Error: Assertion failed ((type()==0) || (DataType<_Tp>::type == type())) in push_back, file /usr/local/include/opencv2/core/mat.hpp, line 687
terminate called after throwing an instance of 'cv::Exception'
what(): /usr/local/include/opencv2/core/mat.hpp:687: error: (-215) (type()==0) || (DataType<_Tp>::type == type()) in function push_back
Aborted (core dumped)

I have opencv 2.4.12 and ros jade.
With my current setup, I have run many other ROS SLAM projects (ORB-SLAM, LSD-SLAM, SVO, REMODE...) and this is the first time I am getting this error.

Can you please tell me how I can fix this issue? or is there any workaround?

(Edit: I have tried again after uninstalling/reinstalling OpenCV 2.4.12 and with ROS Indigo. I still get the same error.)

Problems initializing tracking with proper calibration

Hi,

I've calibrated my camera based on the OpenCV asymmetrical points calibration, but DPPTAM fails to initialize, regardless of whether I use heavily textured areas or very sparse ones. I've managed to initialize dpptam on my input only once. Can anyone help me? I've attached my camera's calibration below; it has worked for other projects.

  cameraMatrix: !!opencv-matrix
     rows: 3
     cols: 3
     dt: d
     data: [ 1066.218200176972, 0., 639.5, 0., 1066.218200176972, 359.5, 0., 0., 1. ]
  distCoeffs: !!opencv-matrix
     rows: 5
     cols: 1
     dt: d
     data: [ 0.1710096450750865, -0.2813834964230731, 0., 0., 0. ]

Camera calibration models

Is it possible to use different calibration models in DP-PTAM? E.g., all my test datasets have ATAN-model calibrated cameras (fx/width, fy/height, cx/width, cy/height, d).

If so, which models are supported other than the OpenCV (pinhole) model? If none, how difficult would it be to add support?
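For context, the ATAN model mentioned here is, to our understanding, the FOV model of Devernay and Faugeras (used, e.g., by PTAM and LSD-SLAM). It maps an undistorted radius r_u to a distorted radius with a single parameter \omega:

  r_d = \frac{1}{\omega} \arctan\!\left( 2\, r_u \tan\frac{\omega}{2} \right)

Supporting it would presumably mean replacing the OpenCV polynomial undistortion step with this mapping before the photometric alignment.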

running error

Dear @alejocb

After I issue the following commands ,
-> roscore &
-> rosrun dpptam main

I came across the following error.

root@milton-ThinkPad-T450:~/catkin_ws/src# rosrun dpptam main
terminate called after throwing an instance of 'boost::filesystem::filesystem_error'
what(): boost::filesystem::create_directory: No such file or directory: "src/dpptam/src/map_and_poses"
Aborted (core dumped)
root@milton-ThinkPad-T450:~/catkin_ws/src#

It seems the process can't find this path (the path already exists):
/root/catkin_ws/src/dpptam/src/map_and_poses

But when I switch into "/root/catkin_ws/" and issue the same command, it works.

It seems the command "rosrun dpptam main" should be issued under /root/catkin_ws/; then it finds that path.

Questions about the code

Dear @alejocb,

I am reading the code to further understand dpptam, so I may ask some questions along the way; I hope you don't mind. :)

In SemidenseTracking.h, what's the meaning/usage for the following member variables:

int *cont_frames; //? Is that the count number of frames?
int last_cont_frames; //??

Thanks in advance~
Milton

Visualisation Messages for ROS Melodic and rviz

Hey,
I'd like to point out that you should add http://wiki.ros.org/visualization_msgs to your dependencies.
I use ROS Melodic on Ubuntu 18.04 LTS and had problems visualizing in rviz.

Your settings will work once you also install visualization_msgs.
To do that, put it in your catkin workspace and build it.

I hope this will help someone :)

And thanks for your awesome work.

Greetings 4styler

Confusion in the use of semidense_tracker->R&t

Please correct me if I'm wrong.
In the function "join_maps", you use the function "transformed_points" to transform pointClouds3D (w.r.t. the world frame) into homogeneous camera coordinates "pointsClouds3Dmap_cam" by computing
points3D_cam = R*points3D + t_r and projecting it.
This implies that R and t form the world-to-camera transformation.

However, in the function "optimize_camera", R_rel & t_rel are computed as if R & t were camera-to-world and R_rel & t_rel were from the current frame to the last keyframe.

Would you please tell me what I missed?
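For readers untangling this, the two conventions differ as follows (standard definitions, not quoted from the code):

  \text{world-to-camera:}\quad \mathbf{x}_c = R\,\mathbf{x}_w + \mathbf{t}
  \qquad
  \text{camera-to-world:}\quad \mathbf{x}_w = R^{\top}(\mathbf{x}_c - \mathbf{t})

Mixing the two conventions between functions produces exactly the kind of mismatch described above.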

Questions about DPPTAM

Hi,
I'm reading the code and have some questions about technical details in the paper "DPPTAM: Dense Piecewise Planar Tracking and Mapping from a Monocular Sequence". I hope you don't mind. :)

A.
How are keyframes selected? The paper says "The keyframes are selected from the sequence frames using certain heuristics", but I can't find that detail in the code.

Which heuristics do you use? A frame-count measure, a residual-ratio measure, or a pose-based measure?
Besides, could you help me locate the code for the initialization of the global reference frame and the keyframes?

B.
I cannot get results with my own image sequences at the moment.
Can DPPTAM cope with large textureless areas (e.g. a white wall or a room's ceiling), whether they are far away from or near the camera?
How robust is it when the camera moves vertically rather than walking parallel to the floor?
I think the correctness of the segmented blobs is critical for me.

Problem with compiling dpptam

Hello, I am Sanghoon.

First of all, thank you for your effort on this great work.

I have a problem while running "catkin_make --pkg dpptam";
I constantly get the following error:

make[2]: *** No rule to make target `/usr/lib/x86_64-linux-gnu/libopencv_videostab.so.2.4.8', needed by `/home/sanghoon/catkin_ws/devel/lib/dpptam/dpptam'. Stop.
make[1]: *** [dpptam/CMakeFiles/dpptam.dir/all] Error 2
make: *** [all] Error 2
Invoking "make -j8 -l8" failed

I am using ros-indigo, opencv 2.4.11.
I don't have "libopencv_videostab.so.2.4.8" in "/usr/lib/x86_64-linux-gnu/".
I am not using opencv 2.4.8, and my opencv 2.4.11 is installed in "/usr/local/lib/".
I can't understand why catkin_make is trying to find opencv library in "/usr/lib/x86_64-linux-gnu/".
Is there any configuration file where I can change the settings?
PKG_CONFIG seems to work fine for me, but I keep getting the error.
Please take a look, and advise me if there's anything that can be helpful.

Thank you.

OpenCV error: Assertion failed (at DenseMapping.cpp Line 605)

Hi @alejocb , thank you for your hard work on the code!

I encountered the same problem as @sunghoon031 did in #9 .
My OpenCV version is 2.4.8, which is the latest version in Ubuntu 14.04 LTS; reinstalling ROS and OpenCV didn't resolve it.
Then I decided to debug DPPTAM and found that the push_back() at DenseMapping.cpp line 605 is the cause:

cv::Mat distances_btw_3dpoints(0,1,CV_32FC1);
for (int ii = 0; ii < 20; ii++)
{
    int random_row1 = (rand() % points_sup_total.rows);
    int random_row2 = (rand() % points_sup_total.rows);
    // Line 605
    distances_btw_3dpoints.push_back(cv::norm(points_sup_total.colRange(0,3).row(random_row1)-points_sup_total.colRange(0,3).row(random_row2),cv::NORM_L1) / 3);
}

cv::norm() returns an 8-byte double, but the element type of distances_btw_3dpoints is CV_32FC1, a 4-byte float; that's why the assertion inside push_back() fails. The problem is solved after adding a type cast.

What really confuses me is that this problem seems somewhat random. For testing I created a virtual machine running the same operating system (Ubuntu 14.04 LTS), set up the entire ROS environment from a clean install, and built DPPTAM, and it ran perfectly, even though it shouldn't have, because the element type is still different.
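A minimal version of the fix described above (an illustration of the type cast; not necessarily the exact upstream patch):

  // cv::norm() returns a double; cast it so it matches the CV_32FC1 matrix.
  distances_btw_3dpoints.push_back(
      static_cast<float>(cv::norm(points_sup_total.colRange(0,3).row(random_row1)
                                - points_sup_total.colRange(0,3).row(random_row2),
                                  cv::NORM_L1) / 3));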

RANSAC is not updating inlier ratio & hypothesis?

I am reading your paper and investigated the code a bit. I love your method and the amazing result it produces !! Here are some questions I have:

  1. Do you update the inlier ratio for RANSAC anywhere in the code? I found that the ransac_for_3Dspx function is not really updating the inlier ratio & hypothesis, but rather using a fixed number (i.e. 99) of hypotheses.
  2. Correct me if I'm wrong: is the published code the "Semi-dense + 3DS (ours)" method, and is that the reason the code doesn't include the dense mapping part (section V.B in the paper)?

I will look forward to your reply.
Thanks in advance !

Cheers,
Seong.

dpptam/map not being published

dpptam/map is not being published when I use my webcam via usb_cam. Since running the example bag file works, I compared the two; the only difference I can see is in the header. Here are the differences.
There are no error messages and the node graph looks OK. Any idea what is going on? The marker and other topics are fine; just the point clouds are not published.

Bag file image.

header: 
  seq: 447
  stamp: 
    secs: 1426957734
    nsecs: 606139044
  frame_id: /camera_rgb_optical_frame
height: 480
width: 640
encoding: bgr8
is_bigendian: 0
step: 1920

usb_cam image

header: 
  seq: 577
  stamp: 
    secs: 1504412453
    nsecs: 846092342
  frame_id: /camera_rgb_optical_frame
height: 480
width: 640
encoding: rgb8
is_bigendian: 0
step: 1920



When I run "rosrun dpptam dpptam", an error occurs

*** DPPTAM is working ***

*** Launch the example sequences or use your own sequence / live camera and update the file 'data.yml' with the corresponding camera_path and calibration parameters ***
frames_processed -> 100 %
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.2.0) ../modules/core/src/matrix_expressions.cpp:23: error: (-5:Bad argument) Matrix operand is an empty matrix. in function 'checkOperandsExist'

I don't know how to fix it.

Question regarding semidense mapping update

Is there any reason why you keep "semidense_mapper->local_map_points" and "semidense_mapper->points3D_toprint" separate? I'm referring to the giant for loop in lines 866~1139, and how it pushes points_aux into sdm->local_map_points and points_aux2_print into sdm->points3D_toprint.

Why not just do something like:

semidense_mapper -> local_map_points = semidense_mapper -> points3D_toprint[num_keyframes].clone()

Then you would simply use more accurate 3D map points for tracking, right?
I can change the code this way, and it runs just as well.
I would like to know whether there was any special reason you made it this way.

Has anyone run the code successfully?

Every time I run the code on lab_upenn.bag, I get a core dump. Maybe the computing capability of my PC is not enough? Intel® Core™ i5-7300HQ CPU @ 2.50GHz × 4

Many thanks~

Distance calculation in "distances_interval" function

I suspect that the following calculation in the distances_interval function in DenseMapping.cpp is wrong:

float distance = fabs(real_points_in_image.at<float>(ii,0)-C.at<float>(0,0))
+fabs(real_points_in_image.at<float>(ii,1)-C.at<float>(0,1))
+fabs(real_points_in_image.at<float>(ii,2)-C.at<float>(0,2));

The reason is that real_points_in_image is a position vector w.r.t. the world frame, while C is the translation vector from camera to world.
I think the correct computation should simply be:

float distance = fabs(real_points_in_image.at<float>(ii,0)-t_.at<float>(0,0))
+fabs(real_points_in_image.at<float>(ii,1)-t_.at<float>(1,0))
+fabs(real_points_in_image.at<float>(ii,2)-t_.at<float>(2,0));

t_ has the same magnitude as C but the opposite sign (and is transposed).
The code runs without an error, but the dense mapping doesn't work so well...
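For reference, with the world-to-camera convention x_c = R x_w + t, the camera's optical center expressed in world coordinates is:

  C = -R^{\top}\,\mathbf{t}

so C and t_ coincide up to sign only when R is (close to) the identity, which is why the two distance computations above generally differ.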

Reconstruction performance

Why are the results on the TUM dataset so poor that the reconstruction cannot be completed? What could be the reason?
How should the camera parameters in data.yml be adjusted, and what do the values represent?
