
direct_event_camera_tracker's People

Contributors: guillermogb, iliis


direct_event_camera_tracker's Issues

Bad tracking with default settings on my computer

Dear Guillermo, I've been following this work and saw that you kindly released the code last month. I tried to run this branch on my computer. Although I installed and ran the code successfully, I could not get a good tracking result, and the terminal output also shows that something goes wrong during the run. My colleague tried it on his computer and hit the same problem. Maybe there is something wrong in our setup, but I don't know what. Could you please give me some advice?
Here is a description of the problem I met:

  1. I installed ROS Melodic on Ubuntu 18.04 by
    sudo apt install ros-melodic-desktop-full
  2. I followed the instructions in the Installation section and could run
    rosrun direct_event_camera_tracker direct_event_camera_tracker cfg/main.yaml
  3. I downloaded the dataset and ran the node with cfg/main.yaml. I could see the two windows successfully.
  4. I clicked "load event" to read events from the bagfile. Then clicked "load pose" to load the ground truth pose. Then clicked "generate KF" to generate the first keyframe. Then clicked "track all" to start tracking with the initial setting.
    This is before clicking "track all":
    [screenshot]
    As I did not change anything in the code, I expected to see good tracking — the Event Frame and the Expected intensity change should look similar at the optimized pose. However, after about 20 Event Frames, the difference becomes too large and the tracking result is actually bad.
    [screenshot]
    Here is the terminal output:
    [terminal output]
  5. I also tried to start tracking without clicking "load pose", but the result is similar. Then I tried clicking "track step" several times instead of "track all", and the same thing happened.

I checked the versions of all the necessary packages but found no difference from your readme file. Could you please tell me how to obtain correct tracking? Thank you very much!
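For context on what "Event Frame vs. Expected intensity change" means here (this is an illustration of the general idea, not the authors' actual code): direct event-camera trackers of this kind predict an intensity-change image from the map and the camera motion, roughly following the linearized event generation model ΔL̂ ≈ −∇L · u Δt, and compare it against the frame accumulated from measured events. A minimal sketch, where `log_intensity` and `flow` are hypothetical inputs:

```python
import numpy as np

def predicted_intensity_change(log_intensity, flow, dt):
    """Predict the brightness-change image via the linearized event model
    dL ~= -grad(L) . u * dt (sketch only, not the repository's implementation).

    log_intensity: (H, W) log-intensity image rendered from the map
    flow:          (H, W, 2) per-pixel motion field (u_x, u_y) induced by camera motion
    dt:            integration time in seconds
    """
    gy, gx = np.gradient(log_intensity)          # image gradients along y and x
    return -(gx * flow[..., 0] + gy * flow[..., 1]) * dt
```

When the candidate pose is wrong, the predicted image and the event frame disagree, which is exactly the residual the nonlinear optimizer tries to shrink — so a large, growing mismatch like the one described above means the optimizer has diverged from the true pose.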

What was the setting you used to produce the results in the paper?

Dear Bryner,

I created an issue earlier and you kindly resolved it. This time I have some new questions.
I think in your paper and master thesis you mentioned two trajectories of the room scene. Room 1 clearly corresponds to dvs_recording3_2018-04-06-15-04-19.bag. I downloaded the code and dataset and ran them without changing any settings. At first I used the default pose in cfg/main.yaml, so I just clicked load event, generate keyframe, and track all. After about an hour, I got the track.csv file. Using eval_tracking.py, I tried to reproduce the plots; however, the plot I get is different from Fig. 10 in your ICRA paper.
[screenshot]

So I guess you must have used a different setting to produce Fig. 10. This time I clicked load pose before generate keyframe; however, tracking still fails after about 4.5 s. From that point on, pose.position.y() is not accurate enough.
[screenshot]
Could you please tell me what settings you used to produce the results in the paper? For example, did you use the multi-resolution scheme, and what were the correct starting time and pose?
Another question: for Room 2, which you mentioned in the paper, could you tell me which recording in the dataset it is? I would be very grateful for any further details about your experiments.
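For anyone comparing such runs against ground truth: a rough way to quantify the drift described above is to compute the per-sample position error between the estimated and ground-truth trajectories. A minimal sketch — the (t, x, y, z) column layout assumed below is an illustration, not the documented format of track.csv:

```python
import csv
import math

def load_txyz(path):
    """Read (t, x, y, z) rows from a CSV file (assumed column order)."""
    with open(path) as f:
        return [tuple(float(v) for v in row[:4]) for row in csv.reader(f)]

def position_errors(est_rows, gt_rows):
    """Euclidean position error per sample; rows are (t, x, y, z) tuples
    assumed to be already associated by timestamp."""
    errs = []
    for (_, ex, ey, ez), (_, gx, gy, gz) in zip(est_rows, gt_rows):
        errs.append(math.sqrt((ex - gx) ** 2 + (ey - gy) ** 2 + (ez - gz) ** 2))
    return errs
```

Plotting these errors over time makes the divergence point (here, around 4.5 s) easy to read off.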

Thanks in advance!

When running a bag: "start time not available"

WARNING: Multisampling (anti-aliasing) not supported by your graphics card drivers: Disabling multisampling
loading pointcloud from /tmp/example/room.ply
number of faces: 2505448
OpenGL (ID 1): GL_OUT_OF_MEMORY in glBufferData
loaded 7516346 points
integrating events
integrating events, looking for a count of 22490
An exception was thrown

start time not available
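One plausible cause of this error is a configured start time that lies outside the bag's actual recording interval. The bag's real start timestamp can be checked with `rosbag info --yaml`; the sketch below extracts the `start:` field from that output (the exact key in cfg/main.yaml that consumes this value is not shown here, so treat this purely as a diagnostic aid):

```python
import re
import subprocess

def parse_start_time(info_yaml):
    """Extract the 'start:' timestamp (seconds) from `rosbag info --yaml` output."""
    m = re.search(r"^start:\s*([0-9]+(?:\.[0-9]+)?)", info_yaml, re.MULTILINE)
    if m is None:
        raise ValueError("no start time found in rosbag info output")
    return float(m.group(1))

def bag_start_time(bag_path):
    """Run `rosbag info --yaml` on bag_path and return its start timestamp."""
    out = subprocess.run(["rosbag", "info", "--yaml", bag_path],
                         capture_output=True, text=True, check=True).stdout
    return parse_start_time(out)
```

The GL_OUT_OF_MEMORY line in the log above is a separate symptom worth checking too: with 7.5 M points and 2.5 M faces, the mesh upload may simply exceed the GPU's memory.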
