robmaier / ldvo
Lightweight Dense Visual Odometry - a header-only C++ library for real-time dense RGB-D odometry.
License: GNU General Public License v3.0
Hi robmaier,
How should I set the parameters to reproduce the RPE reported in the paper? Can you give me some advice on that?
Steinbruecker et al., ICCVW 2011:
RPE: 0.0053 m/frame, 0.0065 deg/frame
My results:
RPE: 0.009129 m/frame, 0.592854 deg/frame
Thanks!
I tried applying the library in ROS, using it with a RealSense D435.
However, when I call bool ok = tracker.align(*prev_pyramid, *cur_pyramid, pose_prev_to_cur);
I always get the estimated pose:
-nan -nan -nan -nan
-nan -nan -nan -nan
-nan -nan -nan -nan
0 0 0 1
here is the source code:
SubscribeAndPublish(int choose_) : sync2(image_sub, image_sub2, 500) {
    image_sub.subscribe(n, "/camera/color/image_raw", 1);
    image_sub2.subscribe(n, "/camera/aligned_depth_to_color/image_raw", 1);
    sync2.registerCallback(boost::bind(&SubscribeAndPublish::callback, this, _1, _2));
    odom_pub = n.advertise<nav_msgs::Odometry>("LDVO", 50);
    choose = choose_;
    tracker_cfg.validate();
    tracker_cfg.print();

    // read the camera intrinsics once from the camera_info topic
    boost::shared_ptr<sensor_msgs::CameraInfo const> cam_info =
        ros::topic::waitForMessage<sensor_msgs::CameraInfo>("/camera/color/camera_info", n);
    w = cam_info->width;
    h = cam_info->height;
    Eigen::Matrix3f K;
    K << cam_info->K[0], 0.0f, cam_info->K[2],
         0.0f, cam_info->K[4], cam_info->K[5],
         0.0f, 0.0f, 1.0f;
    camera = ldvo::CameraModel(w, h, K);
    camera.print();

    // TODO fill previous frame
    pose_prev_to_cur = Eigen::Matrix4f::Identity();
}
void callback(const sensor_msgs::ImageConstPtr& msg, const sensor_msgs::ImageConstPtr& msg2) {
    cv::Mat first = cv_bridge::toCvShare(msg)->image;
    ldvo::Tracker tracker(tracker_cfg, camera);

    // convert depth to float and mark invalid (zero) pixels as NaN
    cv::Mat depth = cv_bridge::toCvShare(msg2, "32FC1")->image;
    cv::Mat depth_flt;
    depth.convertTo(depth_flt, CV_32FC1);
    depth_flt.setTo(std::numeric_limits<float>::quiet_NaN(), depth == 0);
    depth = depth_flt.clone();

    cv::Mat gray;
    cv::cvtColor(first, gray, cv::COLOR_BGR2GRAY);

    if (gray_prev.empty()) {
        // first frame: just initialize the previous pyramid
        gray_prev = gray.clone();
        depth_prev = depth.clone();
        double time_color_prev = 0, time_depth_prev = 0;
        std::shared_ptr<ldvo::Frame> prev_frame = std::make_shared<ldvo::Frame>(w, h);
        prev_frame->fill(gray_prev, depth_prev, time_color_prev, time_depth_prev);
        int num_levels = tracker_cfg.num_levels;
        prev_pyramid = std::make_shared<ldvo::FramePyramid>(w, h, num_levels);
        prev_pyramid->fill(*prev_frame);
    } else {
        cv::Mat gray_cur, depth_cur;
        double time_color_cur = 0, time_depth_cur = 0;
        gray_cur = gray.clone();
        depth_cur = depth.clone();
        std::shared_ptr<ldvo::Frame> cur_frame = std::make_shared<ldvo::Frame>(w, h);
        cur_frame->fill(gray_cur, depth_cur, time_color_cur, time_depth_cur);
        int num_levels = tracker_cfg.num_levels;
        std::shared_ptr<ldvo::FramePyramid> cur_pyramid =
            std::make_shared<ldvo::FramePyramid>(w, h, num_levels);
        cur_pyramid->fill(*cur_frame);

        bool ok = tracker.align(*prev_pyramid, *cur_pyramid, pose_prev_to_cur);
        if (ok) {
            std::cout << "estimated pose:" << std::endl;
            std::cout << pose_prev_to_cur << std::endl;
        } else {
            std::cerr << "could not align frames!" << std::endl;
        }
        std::swap(prev_pyramid, cur_pyramid);
    }
}
Dataset: https://vision.in.tum.de/rgbd/dataset/freiburg2/rgbd_dataset_freiburg2_pioneer_slam.tgz
Commands used:
cd data
python2 associate.py fr2_slam/rgb.txt fr2_slam/depth.txt > fr2_slam/rgbd_assoc.txt
cd ../build
./ldvo_tracker -i ../data/fr2_slam/ -c ../data/intrinsics_fr2.txt -p poses.txt --levels 5 --min_level 1 --max_depth 1.5 --update_thres 0.001
python2 ../data/evaluate_ate.py ../data/fr2_slam/groundtruth.txt poses.txt --plot plot.png --verbose
Output:
compared_pose_pairs 2171 pairs
absolute_translational_error.rmse 14.328729 m
absolute_translational_error.mean 12.494327 m
absolute_translational_error.median 10.357368 m
absolute_translational_error.std 7.014574 m
absolute_translational_error.min 4.346162 m
absolute_translational_error.max 61.631673 m
Adjusting max_depth to 10.0 yields the following (improved) results:
absolute_translational_error.rmse 5.990061 m
absolute_translational_error.mean 5.062975 m
absolute_translational_error.median 3.805372 m
absolute_translational_error.std 3.201112 m
absolute_translational_error.min 0.671174 m
absolute_translational_error.max 12.364352 m
The error is still pretty high, though. Do you know whether this is inherent to the algorithm in general, or could it be caused by implementation issues in this code?