uzh-rpg / direct_event_camera_tracker
Open-source code for the ICRA'19 paper by Bryner et al.
License: GNU General Public License v3.0
Dear Guillermo, I have been following this work and saw that you kindly released the code last month. I tried to run this branch on my computer. Although I installed the code successfully and ran it, I could not get a good tracking result, and the terminal output also indicates a problem during the run. A colleague of mine tried to run it on his computer and hit the same problem. Perhaps something is wrong in our setup, but I don't know what. Could you please give me some advice?
Here is a description of the problem I ran into:
sudo apt install ros-melodic-desktop-full
rosrun direct_event_camera_tracker direct_event_camera_tracker cfg/main.yaml
I checked the versions of all the necessary packages and found no differences from your README. Please tell me how to obtain a correct tracking result. Thank you very much!
Dear Bryner,
I created an issue before and you kindly solved it. This time I have some new questions for you.
I believe that in your paper and master's thesis you mention two trajectories in the room scene. Room 1 clearly corresponds to dvs_recording3_2018-04-06-15-04-19.bag. I downloaded the code and the dataset and ran them without changing any settings. At first I used the default pose in cfg/main.yaml, so I just clicked "load event", "generate keyframe", and "track all". After about an hour, I obtained the track.csv file. Using eval_tracking.py, I tried to reproduce the plots. However, the figure I produced differs from Fig. 10 in your ICRA paper.
So I suspect you used different settings to produce Fig. 10. This time I clicked "load pose" before "generate keyframe". However, tracking still fails after about 4.5 s; from that point on, pose.position.y() is no longer accurate enough.
Could you please tell me what settings you used to produce the results in the paper? For example, did you use the multi-resolution scheme, and what were the correct starting time and pose?
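For context, the multi-resolution (coarse-to-fine) scheme asked about above is commonly implemented as an image pyramid: the pose is first optimized at a heavily downsampled resolution, then refined at successively finer levels. The following is a generic sketch of pyramid construction in NumPy, not the tracker's actual code; the function name and level count are illustrative assumptions.

```python
import numpy as np

def build_pyramid(img: np.ndarray, levels: int) -> list:
    """Return [coarsest, ..., finest] images via repeated 2x2 average downsampling.

    Illustrative sketch only; a real tracker would typically blur before
    subsampling and optimize the pose at each level in coarse-to-fine order.
    """
    pyramid = [img]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape
        # 2x2 block average (crops to even dimensions for simplicity)
        cropped = pyramid[-1][: h // 2 * 2, : w // 2 * 2]
        small = cropped.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(small)
    return pyramid[::-1]  # coarse to fine

# Example: a 64x64 image yields levels of shape (16,16), (32,32), (64,64)
img = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
for level in build_pyramid(img, levels=3):
    print(level.shape)
```

Optimizing at the coarse level first enlarges the basin of convergence, which is exactly what helps when the initial pose (or pose.position.y() here) is slightly off.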
Another question: for the Room 2 sequence mentioned in the paper, could you tell me which recording in the dataset it corresponds to? I would be very grateful for any further information about your experiments.
Thanks in advance!
WARNING: Multisampling (anti-aliasing) not supported by your graphics card drivers: Disabling multisampling
loading pointcloud from /tmp/example/room.ply
number of faces: 2505448
OpenGL (ID 1): GL_OUT_OF_MEMORY in glBufferData
loaded 7516346 points
integrating events
integrating events, looking for a count of 22490
An exception was thrown
start time not available
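The GL_OUT_OF_MEMORY error in the log above suggests the map (7,516,346 points, 2,505,448 faces) does not fit in GPU memory on this machine. One possible workaround, not part of the tracker itself, is to subsample the .ply point cloud before loading it. The sketch below uses plain NumPy on a synthetic array; reading and writing the real map would require a PLY library such as open3d or plyfile, and the keep ratio is an arbitrary assumption.

```python
import numpy as np

def subsample_points(points: np.ndarray, keep_ratio: float, seed: int = 0) -> np.ndarray:
    """Uniformly keep a random fraction of the points (hypothetical workaround)."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(len(points) * keep_ratio))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

# Synthetic stand-in for a large point cloud; a real map would be loaded
# from room.ply with a PLY reader, subsampled, and written back out.
cloud = np.random.rand(1_000_000, 3).astype(np.float32)
small = subsample_points(cloud, keep_ratio=0.25)
print(small.shape)  # a quarter of the points, same 3D layout
```

Whether a decimated map still tracks well is an open question for the authors; mesh simplification (e.g. quadric decimation) would preserve geometry better than random point subsampling.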