Comments (5)
- You could in principle generate v2e output with 1 µs timestamp resolution, but it would be terribly slow because you would be upsampling to a 1 MHz frame rate; it would take forever. More realistic is 100 µs or 1 ms for real input. The timestamp jitter under most lighting conditions is on the order of 100 µs to 1 ms (see the plot in Fig. 10 of the original DVS128 paper: Lichtsteiner, Patrick, Christoph Posch, and Tobi Delbruck. 2008. "A 128×128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor." IEEE Journal of Solid-State Circuits 43 (2): 566–76. https://doi.org/10.1109/jssc.2007.914337).
- No, you can get more than 1 event per timestamp-resolution interval if the contrast change is large enough. But note that the refractory period also limits the number of events, and a 1 kHz event rate per pixel is unrealistically high. Typically we get at MOST a few hundred events per second from individual pixels when small parts of the sensor are stimulated.
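The contrast-vs-refractory trade-off described above can be sketched with an idealized DVS pixel model. This is my own illustration, not v2e's actual code; the function name and default numbers are assumptions.

```python
import math

def events_for_brightness_step(log_i_old, log_i_new, threshold=0.2,
                               dt=1e-3, refractory=1e-3):
    """Idealized DVS pixel: number of events produced by one log-intensity
    step over an interval dt, capped by the refractory period."""
    # How many contrast-threshold crossings the brightness change causes.
    n_contrast = int(abs(log_i_new - log_i_old) / threshold)
    # Maximum events the refractory period allows to fit into dt.
    n_refractory = int(dt / refractory) + 1
    return min(n_contrast, n_refractory)

# A 2x brightness step (log 2 ≈ 0.69) with a 0.2 threshold would nominally
# give 3 events, but a 1 ms refractory period over a 1 ms interval caps it
# at 2 events.
n = events_for_brightness_step(0.0, math.log(2.0))
```

With a longer interval or shorter refractory period the contrast term dominates instead, which is why large contrast steps can emit several events per upsampled frame.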
- The repeated events will have timestamps linearly interpolated between the upsampling times; see the code for how this works. v2e computes the maximum number of events from any one pixel in each frame, then divides the interframe timestamp interval into that many timestamps for that frame. I.e., if the upsampled timestamp interval is T and the maximum number of events from any pixel is N, then the events in that frame are spaced T/N apart. But note there may also be some interaction with the refractory period; see the code for how that works.
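The T/N spacing described above can be sketched as follows. This is a simplified illustration with names of my own choosing, not v2e's code, and it ignores the refractory-period interaction mentioned above.

```python
def interpolated_timestamps(t_frame, T, counts):
    """Spread each pixel's events evenly over one upsampled frame interval.

    t_frame: start time of the frame interval
    T:       upsampled timestamp interval (frame duration)
    counts:  dict pixel -> number of events in this interval
    Returns a dict pixel -> list of event timestamps spaced T/N apart,
    where N is the maximum count over all pixels.
    """
    n_max = max(counts.values())
    dt = T / n_max
    return {px: [t_frame + (i + 1) * dt for i in range(n)]
            for px, n in counts.items()}

# Two pixels in a 1 ms interval; the busier pixel (3 events) sets the
# spacing to T/3 for the whole frame.
ts = interpolated_timestamps(0.0, 1e-3, {"a": 3, "b": 1})
```

Note that the quieter pixel's single event lands at t_frame + T/3, not at the end of the interval, because the spacing is global per frame.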
- You could write such a saccade generator pretty easily using the synthetic input module, but we don't have one now. If I were to do it, I would copy one of the synthetic input classes and modify it to take an image as an input argument (plus some other possible parameters), then transform this image using OpenCV methods and supply these frames to v2e along with their times. I could try to write this but don't have time now. I suggest starting from moving_dot.py, since it already takes arguments: https://github.com/SensorsINI/v2e/blob/master/scripts/moving_dot.py
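As a starting point, a saccade frame generator along these lines might look like the sketch below. It is hypothetical, not v2e code: it uses integer-pixel np.roll shifts for simplicity, where a real implementation would use cv2.warpAffine for sub-pixel motion, and the triangle geometry and timing are assumptions.

```python
import numpy as np

def saccade_frames(image, amplitude_px=5, frames_per_leg=100, dt=1e-3):
    """Yield (frame, time) pairs simulating three micro-saccades that
    trace an isosceles triangle by shifting a static image."""
    a = amplitude_px
    # Triangle vertices (dx, dy); the last entry closes the triangle.
    corners = [(-a, a), (a, a), (0, -a), (-a, a)]
    t = 0.0
    for (x0, y0), (x1, y1) in zip(corners, corners[1:]):
        for i in range(frames_per_leg):
            f = i / frames_per_leg
            dx = round(x0 + f * (x1 - x0))
            dy = round(y0 + f * (y1 - y0))
            yield np.roll(np.roll(image, dy, axis=0), dx, axis=1), t
            t += dt

img = np.zeros((64, 64), dtype=np.uint8)
img[30:34, 30:34] = 255
# 3 legs x 100 frames at 1 ms/frame = 300 frames spanning 300 ms,
# matching the N-Caltech101 recording protocol described below.
frames = list(saccade_frames(img))
```

Wrapping this in a synthetic input class (as moving_dot.py does) would then let v2e consume the frames and their times directly.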
- We can look for the MVSEC and Caltech101 code, but Yuhuang would be the guy for this.
hope this helps.
from v2e.
A saccade-generator synthetic input class could be useful for reproducing datasets like N-Caltech101. I'll take a look next time I work on v2e. In the meantime, someone else could write such a class and submit a pull request with it.
Thank you so much for all the valuable information and guidelines!!
Hi,
Further, could you please help me understand the following concerns regarding the saccade camera trajectory?
- In the v2e paper it is mentioned that "Each image in the Caltech 101 dataset was recorded by a DVS for 300 ms using three 100 ms-long triangular saccades".
- As I want to generate events for static images (related to structural damage), and a set of frames is required for event generation, I checked out the code for ESIM (https://github.com/uzh-rpg/rpg_esim) and was able to get some frames for a static image with random camera motion. But, following the paper, I also want to use saccade motion for the camera. The original paper likewise says that "three micro-saccades tracing out an isosceles triangle" were used on each (static) image to generate a set of frames.
If I want to do the same thing in ESIM, how should I normally provide the corresponding camera trajectory for saccades tracing out an isosceles triangle?
The original paper specifies the three micro-saccades as in the table at the following link. There seem to be no translations of the camera, only rotations; the start and end orientations of the camera for one micro-saccade are, for example, (-0.5°, 0.5°, 0°) to (0.5°, 0.5°, 0°).
https://www.frontiersin.org/files/Articles/159859/fnins-09-00437-HTML/image_m/fnins-09-00437-t001.jpg
- I have provided the following trajectory to ESIM, where the first entry is the time, the next three entries are the position of the camera, and the last four entries are the quaternion giving the orientation. Is that correct? However, in this way I cannot provide the speed of each micro-saccade. Can the speed also be fed in? If so, could you please tell me how?
#time        x  y  z   qx       qy       qz       qw
0            0  0  0  -0.0044   0.0044   0.0000   1
100000000    0  0  0  -0.0044   0.0000   0.0000   1
200000000    0  0  0   0.0044   0.0044  -0.0000   1
300000000    0  0  0   0.0044  -0.0044   0.0000   1
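For what it's worth, the quaternion components in this trajectory are consistent with ±0.5° rotations, since qx = sin(0.25°) ≈ 0.0044. A quick sanity check of my own (using a small-angle approximation, not ESIM code):

```python
import math

def small_rotation_quaternion(rx_deg, ry_deg, rz_deg):
    """Quaternion for small Euler rotations. For angles this small the
    axis order barely matters, so q ≈ (sin(rx/2), sin(ry/2), sin(rz/2), qw)
    is a good approximation."""
    half = lambda d: math.sin(math.radians(d) / 2.0)
    qx, qy, qz = half(rx_deg), half(ry_deg), half(rz_deg)
    # Normalize by solving for qw from the unit-quaternion constraint.
    qw = math.sqrt(max(0.0, 1.0 - qx * qx - qy * qy - qz * qz))
    return qx, qy, qz, qw

# A (-0.5°, 0.5°, 0°) orientation gives qx ≈ -0.0044, qy ≈ 0.0044,
# matching the first row of the trajectory above.
q = small_rotation_quaternion(-0.5, 0.5, 0.0)
```

For larger angles one would use a proper Euler-to-quaternion conversion (e.g. scipy.spatial.transform.Rotation) rather than this approximation.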
Thank you !!
I'm sorry, I can't help with this issue; I hope you worked it out.