
v2e's Issues

Colab requirements fail to install

Very odd, but pip install seems to try installing a descending sequence of OpenCV versions:

Cloning into 'v2e'...
remote: Enumerating objects: 3225, done.
remote: Counting objects: 100% (995/995), done.
remote: Compressing objects: 100% (143/143), done.
remote: Total 3225 (delta 916), reused 905 (delta 852), pack-reused 2230
Receiving objects: 100% (3225/3225), 34.33 MiB | 39.95 MiB/s, done.
Resolving deltas: 100% (2381/2381), done.
/content/v2e
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Processing /content/v2e
  Preparing metadata (setup.py) ... done
Collecting numpy==1.20
  Downloading numpy-1.20.0.zip (8.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.0/8.0 MB 63.0 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: argcomplete in /usr/local/lib/python3.10/dist-packages (from v2e==1.5.1) (3.0.8)
Requirement already satisfied: engineering-notation in /usr/local/lib/python3.10/dist-packages (from v2e==1.5.1) (0.8.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from v2e==1.5.1) (4.65.0)
Requirement already satisfied: opencv-python in /usr/local/lib/python3.10/dist-packages (from v2e==1.5.1) (4.7.0.72)
Requirement already satisfied: h5py in /usr/local/lib/python3.10/dist-packages (from v2e==1.5.1) (3.8.0)
Requirement already satisfied: torch in /usr/local/lib/python3.10/dist-packages (from v2e==1.5.1) (2.0.0+cu118)
Requirement already satisfied: torchvision in /usr/local/lib/python3.10/dist-packages (from v2e==1.5.1) (0.15.1+cu118)
Requirement already satisfied: numba in /usr/local/lib/python3.10/dist-packages (from v2e==1.5.1) (0.56.4)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.10/dist-packages (from v2e==1.5.1) (3.7.1)
Collecting plyer
  Downloading plyer-2.1.0-py2.py3-none-any.whl (142 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 142.3/142.3 kB 18.1 MB/s eta 0:00:00
Collecting screeninfo
  Downloading screeninfo-0.8.1-py3-none-any.whl (12 kB)
Collecting easygui
  Downloading easygui-0.98.3-py2.py3-none-any.whl (92 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 92.7/92.7 kB 13.0 MB/s eta 0:00:00
Requirement already satisfied: scikit-image in /usr/local/lib/python3.10/dist-packages (from v2e==1.5.1) (0.19.3)
Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->v2e==1.5.1) (3.0.9)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->v2e==1.5.1) (1.4.4)
Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->v2e==1.5.1) (1.0.7)
Requirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->v2e==1.5.1) (8.4.0)
Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->v2e==1.5.1) (4.39.3)
Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.10/dist-packages (from matplotlib->v2e==1.5.1) (2.8.2)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->v2e==1.5.1) (23.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib->v2e==1.5.1) (0.11.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.10/dist-packages (from numba->v2e==1.5.1) (67.7.2)
Requirement already satisfied: llvmlite<0.40,>=0.39.0dev0 in /usr/local/lib/python3.10/dist-packages (from numba->v2e==1.5.1) (0.39.1)
Collecting opencv-python
  Downloading opencv_python-4.7.0.68-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (61.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.8/61.8 MB 13.4 MB/s eta 0:00:00
  Downloading opencv_python-4.6.0.66-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (60.9 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.9/60.9 MB 12.2 MB/s eta 0:00:00
  Downloading opencv_python-4.5.5.64-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (60.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.5/60.5 MB 11.3 MB/s eta 0:00:00
  Downloading opencv_python-4.5.5.62-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (60.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.4/60.4 MB 12.9 MB/s eta 0:00:00
  Downloading opencv_python-4.5.4.60-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (60.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.3/60.3 MB 12.5 MB/s eta 0:00:00
  Downloading opencv_python-4.5.4.58-cp310-cp310-manylinux2014_x86_64.whl (60.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.3/60.3 MB 13.2 MB/s eta 0:00:00
  Downloading opencv-python-4.5.3.56.tar.gz (89.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 89.2/89.2 MB 11.8 MB/s eta 0:00:00
  error: subprocess-exited-with-error
  
  × pip subprocess to install build dependencies did not run successfully.
  │ exit code: 1
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  Installing build dependencies ... error
error: subprocess-exited-with-error

× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
/content
/content

Incorrect x,y order in ae_text_output.appendEvents()

EventEmulator.generate_events() returns events as an np.ndarray with each row being: [timestamp, y coordinate, x coordinate, sign of event]

These events are then passed into the text output:
self.dvs_text.appendEvents(events)

In appendEvents:
x = events[:, 1].astype(np.int32)
...
y = events[:, 2].astype(np.int32)

This is incorrect.
It should be:
x = events[:, 2].astype(np.int32)
...
y = events[:, 1].astype(np.int32)
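
For reference, a minimal sketch (assuming the row layout [timestamp, y, x, polarity] described above) of unpacking the columns consistently:

import numpy as np

# events: np.ndarray of shape (N, 4) returned by EventEmulator.generate_events(),
# with rows [timestamp, y coordinate, x coordinate, polarity].
def split_event_columns(events):
    t = events[:, 0]                    # timestamps
    y = events[:, 1].astype(np.int32)   # row (y) coordinate
    x = events[:, 2].astype(np.int32)   # column (x) coordinate
    p = events[:, 3].astype(np.int32)   # sign of event
    return t, x, y, p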

How to suppress a specific output file?

I would like to suppress the .aedat file from being generated in the output folder, but it seems to always default to v2e-dvs-events.aedat.
Is there any command line argument that I can use to suppress this output file from being generated?

Thanks
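
A possible workaround, inferred from other commands quoted in these issues rather than from documented behavior: several runs here pass --dvs_aedat2 None, which appears to disable the AEDAT-2.0 output, e.g. (paths hypothetical):

v2e -i input/video.mp4 -o output --overwrite --dvs_aedat2 None --dvs_text events.txt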

How to extract event data information in the form of (x,y,t,p) from the output .aedat file?

I followed the instructions to get event data as (x, y, t, p) tuples from an input sequence of images. Although I do get an .aedat file in the output folder, I don't understand how to read it and extract the event data (x, y, t, p) from it. I checked https://pypi.org/project/aedat/ and ran "cp target/release/libaedat.so aedat.so", but it says there is no folder named target. I am still not sure how to deal with the .aedat file to extract the required event data.
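
One way to sidestep the .aedat parsing entirely (a sketch, assuming the run also enables the plain-text output with --dvs_text dvs.txt, whose rows are nominally timestamp, x, y, polarity; note the x/y ordering bug reported elsewhere in these issues) is to read the events from the text file:

import numpy as np

# hypothetical path produced by adding --dvs_text dvs.txt to the v2e command
data = np.loadtxt('output/dvs.txt', comments='#')
t, x, y, p = data[:, 0], data[:, 1], data[:, 2], data[:, 3]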

Also, the input frames are first interpolated and then the event data is obtained from them. But I do not want to do frame interpolation; I want to get events directly from the corresponding video frame sequence captured at 25 fps. How can this be done?

I request @duguyue100 and others to help me address the above issues.

how to install engineering_notation

Hello

I use the conda environment.
I want to install "engineering_notation", but the library is only available from pip.
So I can't install the library in the conda environment.

What should I do?

Event polarity of the output hdf5 file is wrong!

Hi, Tobi!

I wanna report a bug.

When I use v2e with the GoPro dataset to generate events, I find that the event polarity in the output HDF5 file is sometimes not one or zero but 4294967295.
4294967295 is the maximum value of uint32, which is the data type of the HDF5 file, i.e. a signed -1 wrapped around to unsigned.
(screenshots of the HDF5 data attached)
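
A minimal sketch (assuming the polarity was computed as a signed -1/+1 value and then stored into a uint32 dataset) of recovering signed polarities when reading the file back:

import numpy as np

# p_u32: the polarity column read from the HDF5 file as uint32.
# Reinterpreting the same bits as int32 turns 4294967295 back into -1.
p_u32 = np.array([1, 0, 4294967295], dtype=np.uint32)
p_signed = p_u32.view(np.int32)   # -> array([ 1,  0, -1], dtype=int32)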

cutoff frequency and time constant warning

Hi,

Following the v2e paper, to synthesize bright events I set the cutoff frequency to 200 Hz, and the DVS timestamp resolution to 1 ms. But then it gave the following warning. Does that mean that to synthesize events in bright mode (with a 200 Hz cutoff frequency) we always need to set the DVS timestamp resolution to less than 1 ms (in the microsecond range)?

WARNING - Lowpass 3dB cutoff is f_3dB=200Hz (time constant tau=795.77us) with sample rate fs=1kHz (sample interval dt=1ms),
but this results in large IIR mixing factor eps = dt/tau = 1.257 > 0.3 (maxeps),
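
For reference, the numbers in the warning follow from the first-order low-pass relations tau = 1/(2*pi*f_3dB) and eps = dt/tau; a quick check:

import math

f_3db = 200.0    # cutoff frequency in Hz
dt = 1e-3        # DVS timestamp resolution (sample interval) in seconds

tau = 1.0 / (2 * math.pi * f_3db)   # ~7.9577e-4 s = 795.77 us
eps = dt / tau                      # ~1.257, above the 0.3 limit in the warning

# To keep eps below 0.3, dt would have to be below 0.3 * tau, roughly 0.24 ms,
# which is why a sub-millisecond timestamp resolution is needed at this cutoff.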

Thanks and Rgds,
Udayanga

Resolution of aedat4 output

Hello,

I have generated *.aedat4 file using the following command:
python v2e.py -i input/test.mp4 --overwrite --dvs_exposure duration 0.005 --output_folder=output/test --overwrite --pos_thres=.15 --neg_thres=.15 --sigma_thres=0.03 --dvs_aedat4 test.aedat4 --output_width=320 --output_height=240 --cutoff_hz=15

and when I open the file in the DV viewer, the events appear in the upper-left corner, with some free space that shouldn't be there:

(screenshot attached)

So it appears that my 320x240 video is placed inside frames of a larger size.
The generated *.AVI files don't have this problem.

Events duration does not match input video duration

Hi,

When exporting the events in HDF5 format, the timestamp of the last event does not correspond to the end time of the input video, but it is scaled by the input_slowmotion_factor. However, the event stream and input video should have the same duration.

Here is an example.
From the log: video has total 1167 frames with total duration 23.32s.
Here I use a slomo factor of 3. The last event timestamp in events.h5 is 7780000, in us.
Multiply back the time of events by the slomo factor to obtain the total duration of the input video: 7780000/1e6*3 = 23.3 seconds

To get the right event timestamps is it enough to scale them up by the slomo factor?
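
If that is indeed the cause, a minimal sketch of the correction (assuming the HDF5 timestamps are in microseconds and the only issue is the missing input_slowmotion_factor scaling):

import numpy as np

input_slowmotion_factor = 3.0              # slomo factor used for this run
t_us = np.array([0, 1_000, 7_780_000])     # event timestamps from events.h5, in us (example values)

# Rescale so the last event lands at the real input-video time (~23.3 s here).
t_video_s = t_us / 1e6 * input_slowmotion_factor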

Thanks

some formulation errors

In emulator.py, line 481:

shotOffProbThisSample = shotNoiseFactor*np.divide(
    self.pos_thres_nominal, self.pos_thres)

should probably be:

shotOffProbThisSample = shotNoiseFactor*np.divide(
    self.neg_thres_nominal, self.neg_thres)

Failed to reproduce tennis.mov

Hi,

While trying to reproduce the tennis.mov example given in the README, I realized the result is far from the one shown on the website and the DVS video lacks stability. I first used the command line options from the README:

python v2e.py -i input/tennis.mov --timestamp_resolution=.005 --dvs_exposure duration 0.005 --output_folder=output/tennis --overwrite --pos_thres=.15 --neg_thres=.15 --sigma_thres=0.03 --dvs_aedat2 tennis.aedat --output_width=346 --output_height=260 --stop_time=3

Then it prompted error:
ERROR: __main__: specify either --overwrite or --unique_output_folder

So I deleted --overwrite then came across this error:
ERROR: __main__:auto_timestamp_resolution=True and timestamp_resolution=0.005: Disable auto_timestamp_resolution

Then I set auto_timestamp_resolution to be False. Overall using the following command line options:
python v2e.py -i input/tennis.mov --timestamp_resolution=.005 --auto_timestamp_resolution=False --dvs_exposure duration 0.005 --output_folder=output/tennis --pos_thres=.15 --neg_thres=.15 --sigma_thres=0.03 --dvs_aedat2 tennis.aedat --output_width=346 --output_height=260 --stop_time=3

The code was able to successfully finish. But the generated DVS video is the following:

dvs-video.mov

Is there anything wrong with my command line options, or is the current code different from the previous version in some way?

Thank you in advance.

Setting Initial Timestamp

Hello :)
First of all, thanks for this amazing event simulator!
I have a query about setting an initial timestamp.
Is there any way I can set the initial timestamp of the events?
Thank you

To get some clarifications about using V2e events ..

Hi,

We are planning to use v2e to generate events for static images and videos (collected as 6 Hz frames by a drone). I am requesting your support to understand the following things related to v2e.

  1. Normally an event camera has a temporal resolution of 1 us (microsecond). Can we have 1 us temporal resolution for a 1-second video with the v2e event synthesizer?

  2. If the timestamp step is 1 ms, does that mean we have 1000 time bins/steps for a one-second video?
    If so, we can have at most 1000 events per pixel for a 1 s video, isn't it?

  3. In the v2e paper, it is mentioned that 'The upsampled frames correspond to timestamps'. So if the timestamp step is 1 ms, there should be 1000 timestamps for a 1-second video, isn't it?
    If so, that means there should be 1000 upsampled frames. But according to the example given in the paper (60 Hz video with a DVS timestamp step of 1 ms), the number of upsampled frames is 17 (not 1000 as I thought it should be). So could you please explain where my understanding went wrong? (See the sketch after this list.)

  4. Is it still possible to generate events when we have only one (still) image, by simulating some kind of virtual camera movement with v2e? Could you please share the code showing how still images are converted to a small video using saccade motion, so that it can be fed in for event generation?

  5. Is there any way we can refer to the code that you used to generate events for the N-Caltech 101 and MVSEC datasets using v2e? With that we could comfortably understand how to access the v2e event data, how to compose voxel grids by organizing the v2e events, and also how to synthesize events for a static image with v2e.
    Thank you so much.
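
Regarding point 3, a sketch of the arithmetic (my own illustration, not the actual v2e code): the upsampling factor is chosen per source frame interval, not per second of video, which is why 17 frames appear rather than 1000.

import math

source_fps = 60.0                    # input video frame rate
frame_interval = 1.0 / source_fps    # ~16.7 ms between source frames
timestamp_resolution = 1e-3          # desired DVS timestamp step, 1 ms

# Interpolated frames needed between two consecutive source frames so that
# they are no more than 1 ms apart:
upsampling_factor = math.ceil(frame_interval / timestamp_resolution)   # = 17

# Over one second of video there are still roughly 1000 interpolated timestamps in total:
total_interp_frames = source_fps * upsampling_factor                   # = 1020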

Rgds,
Udayanga

moving_dot with --synthetic_input argument did not generated any video

Hi,
I executed the following command to generate a synthesized video of a moving dot, but the result folder contains video files that are zero bytes in size. Is there anything that I need to modify?
v2e --leak_rate=0 --shot=0 --cutoff_hz=300 --sigma_thr=.08 --pos_thr=.15 --neg_thr=.15 --dvs_exposure duration .01 --output_folder particles-slightly-less-faint-fast-2-particles --unique_output --dvs_aedat2=particles --output_width=346 --output_height=260 --batch=64 --disable_slomo --synthetic_input=scripts.moving_dot --t_total=3 --contrast=1.15 --radius=.3 --dt=0.001 --num_particles=2 --ignore-gooey

Actually, the process always stops after 5% of the frame generation completes.
INFO:v2ecore.v2e_args:DVS frame expsosure mode ExposureMode.DURATION: frame rate 100.0
INFO:v2ecore.emulator:ON/OFF log_e temporal contrast thresholds: 0.15 / 0.15 +/- 0.08
INFO:v2ecore.emulator:opening AEDAT-2.0 output file D:\DTU\ResearchWork\v2e\particles-slightly-less-faint-fast-2-particles-5\particles.aedat
INFO:root:opening AEDAT-2.0 output file D:\DTU\ResearchWork\v2e\particles-slightly-less-faint-fast-2-particles-5\particles.aedat in binary mode
INFO:v2ecore.output.aedat2_output:opened D:\DTU\ResearchWork\v2e\particles-slightly-less-faint-fast-2-particles-5\particles.aedat for DVS output data for jAER
dvs: 5%|███▉ | 162/3000 [00:01<00:12, 228.19fr/s]

Thank you.

Thanks and Rgds,
Udayanga

--skip_video_output failed

When this parameter is added to the command line, the program still produces video_orig.avi and video_slomo.avi.

Incorrect x,y order in output text file

Hello, Tobi!
The x, y order is incorrect in the output text file.

I ran the following:
v2e.py -i data/input/tennis.mov --overwrite --timestamp_resolution=.003 --auto_timestamp_resolution=False --dvs_exposure duration 0.005 --output_folder=data/output/tennis2 --overwrite --pos_thres=.15 --neg_thres=.15 --sigma_thres=0.03 --dvs_aedat2=None --dvs_h5=dvs.h5 --dvs_text=dvs.txt --output_width=346 --output_height=260
The width and height of my output are [346, 260],
but in the output 'dvs.txt', x.max() is 259 and y.max() is 345.

The CUDA memory keeps growing when converting videos into events repeatedly

Here I'm trying to convert a series of videos into events with v2e. I repeatedly call the main() function of v2e.py to convert around 1000 short videos into events, but the CUDA memory keeps growing until "RuntimeError: CUDA error: out of memory".
My code:

import os
from v2e import *

def main():
    # input_path and output_path are folders defined elsewhere in my script
    video_ls = os.listdir(input_path)
    for i in video_ls:
        video_path = os.path.join(input_path, i)
        event_path = os.path.join(output_path, i[:-4] + ".npz")
        if os.path.exists(event_path):   # skip videos already converted
            continue
        run_v2e(video_path)

where run_v2e() is the main() function from your v2e.py.
Then I see the CUDA memory usage grow from 2500 MiB at the beginning until it runs out of memory.
(nvidia-smi screenshots attached: GPU 2 memory at the beginning and after some runs)
The memory grows up to 4000 MiB for some reason.
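
A possible mitigation sketch (my assumption, not verified against v2e internals): explicitly release cached CUDA memory between runs, for example:

import gc
import torch

def convert_all(video_paths):
    for video_path in video_paths:
        run_v2e(video_path)        # run_v2e() as described above
        gc.collect()               # drop lingering references to emulator/SloMo objects
        torch.cuda.empty_cache()   # return cached CUDA blocks to the driver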

Event time stamps do not match with video

Hello, this is a really awesome tool! Thank you for your wonderful research and development!

After diving deep into the generated events, however, I found that the timestamps of the events do not match the video timeline. Consider an input video with a framerate of 30 and Super SloMo turned on. According to my experiment, the events between 0-33 ms correspond to the motion between the first and second frame, which is 33-67 ms in the video. Conversely, the events of the last 33 ms correspond to the motion between the second-to-last and last frame of the video. In other words, the events are 33 ms behind the video at the beginning, but this gap is gradually filled and disappears by the end. As a result, I cannot cut the events accurately to match the movement of each frame.

Problem while using 'disable_slomo' parameter

Hi.

I've run into an issue while testing the simulator when the 'disable_slomo' parameter is used. I tried to run with the following set of parameters for one of the example videos:

--input=input/tennis.mov --output_folder=output/v2e-test --cutoff=0 --leak_rate=0 --shot=0 --dvs_exposure duration .03 --dvs_aedat2=v2e.aedat --output_width=346 --output_height=260 --batch=16 --disable_slomo --input_slowmotion_factor 1 --stop_time 0.5 --ignore-gooey --overwrite --no_preview

Getting the following output error:

...
Traceback (most recent call last):
  File "C:/Users/Usuario/Desktop/Sistemas Inteligentes UJI/Asignaturas/ReAViPeRo/v2e/v2e/v2e.py", line 706, in <module>
    main()
  File "C:/Users/Usuario/Desktop/Sistemas Inteligentes UJI/Asignaturas/ReAViPeRo/v2e/v2e/v2e.py", line 559, in main
    interpTimes, avgUpsamplingFactor = slomo.interpolate(
AttributeError: 'NoneType' object has no attribute 'interpolate'

Searching for a possible solution, I found out that the problem could reside on v2e.py (line 545):

# v2e.py - line 545:
if auto_timestamp_resolution or slowdown_factor != NO_SLOWDOWN:
    # interpolated frames are stored to tmpfolder as
    # 1.png, 2.png, etc

    logger.info(
        f'*** Stage 2/3: SloMo upsampling from '
        f'{source_frames_dir}')
    interpTimes, avgUpsamplingFactor = slomo.interpolate(
        source_frames_dir, interpFramesFolder,
        (output_width, output_height))

If 'disable_slomo' is used (setting slowdown_factor to the NO_SLOWDOWN value, which equals 1.0), the 'slomo' object is never built, so this statement should not be reached.

I solved it by changing the 'or' operator in the if statement for an 'and' operator:

if auto_timestamp_resolution and slowdown_factor != NO_SLOWDOWN:

This way, if the user does not use the SloMo algorithm (keeping the input video's timestamp resolution), the else branch runs instead:

else:
    logger.info(
        f'*** Stage 2/3:turning npy frame files to png '
        f'from {source_frames_dir}')
    interpFramesFilenames = []
    n = 0
    src_files = sorted(
        glob.glob("{}".format(source_frames_dir) + "/*.npy"))
    for frame_idx, src_file_path in tqdm(
            enumerate(src_files), desc='npy2png', unit='fr'):
        src_frame = np.load(src_file_path)
        tgt_file_path = os.path.join(
            interpFramesFolder, str(frame_idx) + ".png")
        interpFramesFilenames.append(tgt_file_path)
        n += 1
        cv2.imwrite(tgt_file_path, src_frame)
    interpTimes = np.array(range(n))

Is that a possible issue? Or have I misunderstood the use of the 'disable_slomo' parameter?

Using HDR, PNG issue

Hello,
I am trying to use the HDR functionality of v2e. It seems that saving frame files as PNG loses the benefit of HDR: in "Stage 2/3: turning npy frame files to png" all input frames are converted to PNG, which compresses the frames to integer values. This creates the problem that, with an event threshold of 0.3, no events are output until the input signal has increased by an integer step, and then three events are output at once. My simple workaround was to use the .npy frame files instead. Attached is a plot of the accumulated events using .npy frames vs .png frames for a single pixel. The ASSET signal is calculated by taking the log of the input frame signal, subtracting the log of the first frame, and dividing by the event threshold to get the expected cumulative event count. With the current PNG-based implementation we can see clumping of events; when .npy frame files are used in stage 2/3 of v2e, the output matches what is expected. Please let me know if this is not a bug but rather user error. Thanks!
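
A small sketch of the comparison described above (my own illustration; the threshold and pixel values are made up, assuming the PNG path quantizes intensities to integers while the .npy path keeps floats):

import numpy as np

threshold = 0.3                         # event threshold in log-intensity units
I_float = np.linspace(10.0, 30.0, 50)   # one pixel's intensity over time (.npy / HDR path)
I_png = np.round(I_float)               # same signal quantized to integers (PNG path)

def expected_events(I, thr):
    # cumulative event count: (log I_t - log I_0) / threshold
    return (np.log(I) - np.log(I[0])) / thr

smooth = expected_events(I_float, threshold)   # rises smoothly
clumped = expected_events(I_png, threshold)    # rises in steps, i.e. clumped events
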
(plot attached: accumulated events from .npy frames vs .png frames for a single pixel)

128x128 output not supported

Hello,

I'm trying to generate events from a sequence of images. When I tried to use the --dvs128 option that is specified in the readme I got the following error message:

ValueError: AEDAT-2.0 output width=128 height=128 not supported; add your camera to v2ecore.output.aedat2_output or use one of the predefined DVS cameras, e.g. --dvs346 or --dvs240

Using --dvs240 and --dvs346 works. I'm wondering, is the --dvs128 option not supported?

Thanks!

It seems like the code does not support --dvs640

It seems like the code does not support --dvs640. Whenever I pass --dvs640, it outputs: ValueError: AEDAT-2.0 output width=640 height=480 not supported; add your camera to v2ecore.output.aedat2_output or use one of the predefined DVS cameras, e.g. --dvs346 or --dvs240 that have sizes ((346, 260), (240, 180))

File AEDAT can't be converted with Dynamic Vision Viewer

Hello,

I have recently been working on an internship project that consists of creating a database of DVS images, so I looked into this tool and tested it following the instructions in this GitHub repo. However, I encountered a problem with the generated "aedat" file.

Since this version of aedat is no longer up to date, there is a tool in the Dynamic Vision Viewer to convert the "aedat" file into an "aedat4" file so it can be read afterwards, but there is an error during the conversion. Do you have any idea why this wouldn't work?

Here is the command I used:
python v2e.py -i input/vid.avi --overwrite --timestamp_resolution=.003 --auto_timestamp_resolution=False --dvs_exposure duration 0.005 --output_folder=output --overwrite --pos_thres=.15 --neg_thres=.15 --sigma_thres=0.03 --dvs_aedat2 vid.aedat --output_width=240 --output_height=180 --stop_time=3 --cutoff_hz=15 --ignore-gooey

Thank you very much for your time.
Cordially.

AttributeError: 'str' object has no attribute 'appendEvents'

When I run the following command with the tennis.mov video as input:

python v2e.py -i input/tennis.mov --overwrite --timestamp_resolution=.003 --auto_timestamp_resolution=False --dvs_exposure duration 0.005 --output_folder=output/tennis --overwrite --pos_thres=.15 --neg_thres=.15 --sigma_thres=0.03 --dvs_aedat2 tennis.aedat --output_width=346 --output_height=260 --stop_time=3 --cutoff_hz=15

I get the following error:

INFO:__main__:*** Stage 3/3: emulating DVS events from 1551 frames

dvs:   0%|          | 0/1551 [00:00<?, ?fr/s]
dvs:   0%|          | 2/1551 [00:00<01:52, 13.74fr/s]
dvs:   0%|          | 2/1551 [00:00<07:24,  3.48fr/s]
Traceback (most recent call last):
  File "v2e.py", line 719, in <module>
    main()
  File "v2e.py", line 658, in main
    fr, interpTimes[i])
  File "C:\Users\dabigioi\Documents\v2e\v2ecore\emulator.py", line 624, in generate_events
    self.dvs_aedat2.appendEvents(events)
AttributeError: 'str' object has no attribute 'appendEvents'
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "C:\Users\dabigioi\Documents\v2e\v2ecore\emulator.py", line 217, in cleanup
    self.dvs_aedat2.close()
AttributeError: 'str' object has no attribute 'close'

Any clue what is happening?

Setting arbitrary output size for v2e output dimension

Hi,
Is it possible to set the output size of v2e to 100x100 instead of the --dvs346 dimensions?
I set --output_width and --output_height to 100, but it gave an error saying to set --dvs240 or --dvs346.
I actually need to generate events at 100x100 resolution.
Thanks !

Many warnings when changing timestamp_resolution with --auto_timestamp_resolution set to true

Hi,

For the given sample video (tennis.mov) I used the following command to generate events. Here I set the timestamp_resolution to 5 ms and set auto_timestamp_resolution to true (in the example command given in the README, timestamp_resolution was 3 ms and auto_timestamp_resolution was set to False). In that case there are many warnings, as shown below. So for our own video, how do we decide the exact timestamp resolution to generate events without this kind of warning? (With the 3 ms timestamp these warnings did not appear.) Thank you.
Command:
python v2e.py -i inputs/tennis.mov --overwrite --slomo_model inputs/SuperSloMo39.ckpt --timestamp_resolution=.005 --auto_timestamp_resolution=True --dvs_exposure duration 0.005 --output_folder=output/tennis --overwrite --pos_thres=.15 --neg_thres=.15 --sigma_thres=0.03 --dvs_aedat2 tennis.aedat --output_width=346 --output_height=260 --stop_time=5 --cutoff_hz=15

Warnings:

WARNING:v2ecore.emulator:no events generated for frame, generating any sampled temporal noise for this frame with 1 iteration
WARNING:v2ecore.emulator_utils:IIR lowpass filter update has large maximum update eps=0.39 from delta_time/tau=0.00417/0.0106
WARNING:v2ecore.emulator_utils:IIR lowpass filter update has large maximum update eps=0.39 from delta_time/tau=0.00417/0.0106

Blank (gray) event frames in the output video

Hi,

I've been using the latest version of v2e and it seems that some frames are either blank (gray) or the event pixels are all less intense than the last frame. The object in the input video is moving consistently. The last good commit for me is 1edd9e3 where the event frames look consistent and I get no blank frames.

My run command is:
v2e -i frames --overwrite --input_frame_rate=10
--timestamp_resolution=0.1 --disable_slomo --auto_timestamp_resolution=False
--dvs_exposure duration 0.1 --output_folder=event_data --overwrite --pos_thres=.15 --neg_thres=.15
--sigma_thres=0.3 --dvs_text events.csv --output_width=1280 --output_height=720 --cutoff_hz=30 --avi_frame_rate=10
--refractory_period 0 --dvs_aedat2 None

What was changed after that commit that could cause the blank frames?

Thanks

!$final_v2e_command gives error

v2e -i /content/v2e_tutorial_video.avi -o /content/v2e-output --overwrite --unique_output_folder false --dvs_h5 events.h5 --dvs_aedat2 None --dvs_text None --no_preview --dvs_exposure duration .033 --input_frame_rate 30 --input_slowmotion_factor 1 --disable_slomo --auto_timestamp_resolution false --pos_thres 0.2 --neg_thres 0.2 --sigma_thres 0.03 --cutoff_hz 30 --leak_rate_hz 0.1 --shot_noise_rate_hz 5 --dvs346

ERROR:v2ecore.emulator:Output file exception "Unable to create file (unable to lock file, errno = 11, error message = 'Resource temporarily unavailable')" (maybe you need to specify a supported DVS camera type?)
Traceback (most recent call last):
File "/usr/local/bin/v2e", line 8, in
sys.exit(main())
File "/usr/local/bin/v2e.py", line 530, in main
emulator = EventEmulator(
File "/usr/local/lib/python3.8/dist-packages/v2ecore/emulator.py", line 301, in init
raise e
File "/usr/local/lib/python3.8/dist-packages/v2ecore/emulator.py", line 274, in init
self.dvs_h5 = h5py.File(path, "w")
File "/usr/local/lib/python3.8/dist-packages/h5py/_hl/files.py", line 424, in init
fid = make_fid(name, mode, userblock_size,
File "/usr/local/lib/python3.8/dist-packages/h5py/_hl/files.py", line 196, in make_fid
fid = h5f.create(name, h5f.ACC_TRUNC, fapl=fapl, fcpl=fcpl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 116, in h5py.h5f.create
OSError: Unable to create file (unable to lock file, errno = 11, error message = 'Resource temporarily unavailable')

Error running code

Hi, I have encountered an error running the code.
I am running on Win10 with Python 3.7. The error is as follows:
............................
WARNING:__main__:Gooey GUI builder not available, will use command line arguments.
Install with "pip install Gooey". See README
WARNING:__main__:Gooey GUI not available, using command line arguments.
You can try to install with "pip install Gooey"
ERROR:__main__:specify either --overwrite or --unique_output_folder
..............................

No module named 'dv_processing'

Hello, I am trying to run the Colab version of v2e. However, when I get to the cell with the command !$final_v2e_command, I encounter the error below. Can you please help me fix this?

Traceback (most recent call last):
File "/usr/local/bin/v2e", line 5, in <module>
from v2e import main
File "/usr/local/bin/v2e.py", line 34, in <module>
from v2ecore.v2e_args import v2e_args, write_args_info, SmartFormatter
File "/usr/local/lib/python3.10/dist-packages/v2ecore/v2e_args.py", line 6, in <module>
from v2ecore.emulator import EventEmulator
File "/usr/local/lib/python3.10/dist-packages/v2ecore/emulator.py", line 27, in <module>
from v2ecore.output.aedat4_output import AEDat4Output
File "/usr/local/lib/python3.10/dist-packages/v2ecore/output/aedat4_output.py", line 10, in <module>
import dv_processing as dv
ModuleNotFoundError: No module named 'dv_processing'

Timestamps of the generated event stream

Hi, thanks for this great simulator! It is really convenient to use and the noise models look cool.

One problem is that it seems all the events between two interpolated frames have the same timestamp.
For example, when I set --timestamp_resolution=0.0005 (1920 fps), then within each 0.0005 s interval all the events have the same timestamp.
Is that right? Or am I not using it correctly?

I suppose that to generate events, it first interpolates the video to 1920 fps, and events are then generated by comparing the intensity difference between two frames.
My point is that these events between two frames could still be given different timestamps by linear interpolation, which would make the simulator more realistic.
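
A minimal sketch of the idea (my own illustration, not v2e's actual implementation): when a pixel must emit several events between two frame times t0 and t1, spread their timestamps linearly instead of stamping them all with t1.

import numpy as np

def spread_timestamps(t0, t1, n_events):
    # place n_events timestamps evenly inside (t0, t1] rather than all at t1
    return t0 + (t1 - t0) * np.arange(1, n_events + 1) / n_events

# e.g. 4 events between 10.0 ms and 10.5 ms -> 10.125, 10.25, 10.375, 10.5 ms
print(spread_timestamps(10.0e-3, 10.5e-3, 4))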

DVS .npy file not generated as expected

Hello,

I am trying to convert a frame-based video and generate the DVS in numpy .npy format in the output folder. The command line option I am using is the following:

python v2e.py --slomo_model=input/SuperSloMo39.ckpt --input input/tennis.mov --output_folder=output/tennis --pos_thres=.15 --neg_thres=.15 --sigma_thres=0.05 --cutoff=30 --leak_rate=.1 --shot=.1 --dvs_exposure duration 0.005 --dvs_aedat2=None --dvs_numpy=dvs-numpy.npy --output_width=346 --output_height=260 --batch=32 --stop_time=1

However, in the output folder I do not see the expected dvs-numpy.npy file, and there is no warning about the .npy file or anything like that. Any help would be much appreciated.

AttributeError: 'NoneType' object has no attribute 'interpolate'

Hi,

I was trying to generate simulated events with the SloMo step disabled (i.e. by using the --disable_slomo argument). However, I run into an error when I do that. The same video works fine when I don't use the --disable_slomo argument.

I want to disable SloMo because I want to do some custom experiments with the slow-motion part, and hence I had already upsampled the video before sending it to the simulator.

The error message and some lines preceding it are given below:

INFO:__main__:*** Stage 2/3: SloMo upsampling from /tmp/tmp9h82o98d
Traceback (most recent call last):
  File "/home/amoghtiwari/v2e/v2e.py", line 805, in <module>
    main()
  File "/home/amoghtiwari/v2e/v2e.py", line 657, in main
    interpTimes, avgUpsamplingFactor = slomo.interpolate(
AttributeError: 'NoneType' object has no attribute 'interpolate'
INFO:v2ecore.output.aedat2_output:Closing /home/amoghtiwari/v2e/output/airboard_1_30_my_slomo/tennis.aedat after writing 0 events (0 on, 0 off)

error launching DDD example

Hello,

I am trying to run the DDD example and I get the following error:

(v2e) $ python -m dataset_scripts.ddd.ddd-v2e.py --input DavisDrivingDataset/rec1501902136.hdf5 --slomo_model input/SuperSloMo39.ckpt --slowdown_factor 20 --start 70 --stop 73 --output_folder v2e-output/ddd20-v2e-short --dvs_aedat dvs --pos_thres=.2 --neg_thres=.2 --overwrite --dvs_vid_full_scale=2 --frame_rate=100
Traceback (most recent call last):
File "v2e/lib/python3.7/runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "v2e/lib/python3.7/runpy.py", line 109, in _get_module_details
__import__(pkg_name)
File "v2e/dataset_scripts/ddd/ddd-v2e.py", line 27, in
from v2e.v2e_utils import OUTPUT_VIDEO_FPS, all_images,
ImportError: cannot import name 'OUTPUT_VIDEO_FPS' from 'v2e.v2e_utils' (v2e/v2e/v2e_utils.py)

Then I tried hardcoding OUTPUT_VIDEO_FPS = 30 in the code, and I get a new error:
(v2e) $ python -m dataset_scripts.ddd.ddd-v2e.py --input DavisDrivingDataset/rec1501902136.hdf5 --slomo_model input/SuperSloMo39.ckpt --slowdown_factor 20 --start 70 --stop 73 --output_folder v2e-output/ddd20-v2e-short --dvs_aedat dvs --pos_thres=.2 --neg_thres=.2 --overwrite --dvs_vid_full_scale=2 --frame_rate=100
usage: -m [-h] [-o OUTPUT_FOLDER] [--overwrite]
[--unique_output_folder UNIQUE_OUTPUT_FOLDER] [--no_preview]
[--avi_frame_rate AVI_FRAME_RATE]
[--auto_timestamp_resolution AUTO_TIMESTAMP_RESOLUTION]
[--timestamp_resolution TIMESTAMP_RESOLUTION]
[--output_height OUTPUT_HEIGHT] [--output_width OUTPUT_WIDTH]
[--dvs_params DVS_PARAMS] [--pos_thres POS_THRES]
[--neg_thres NEG_THRES] [--sigma_thres SIGMA_THRES]
[--cutoff_hz CUTOFF_HZ] [--leak_rate_hz LEAK_RATE_HZ]
[--shot_noise_rate_hz SHOT_NOISE_RATE_HZ]
[--dvs128 | --dvs240 | --dvs346 | --dvs640 | --dvs1024]
[--disable_slomo] [--slomo_model SLOMO_MODEL]
[--batch_size BATCH_SIZE] [--vid_orig VID_ORIG]
[--vid_slomo VID_SLOMO] [--slomo_stats_plot] [-i INPUT]
[--input_slowmotion_factor INPUT_SLOWMOTION_FACTOR]
[--start_time START_TIME] [--stop_time STOP_TIME]
[--dvs_exposure DVS_EXPOSURE [DVS_EXPOSURE ...]] [--dvs_vid DVS_VID]
[--dvs_vid_full_scale DVS_VID_FULL_SCALE] [--dvs_h5 DVS_H5]
[--dvs_aedat2 DVS_AEDAT2] [--dvs_text DVS_TEXT]
[--dvs_numpy DVS_NUMPY] [--rotate180 ROTATE180] [--numpy_output]
-m: error: unrecognized arguments: --slowdown_factor 20 --frame_rate=100

Then I tried without these arguments, and here is the result:
(v2e) $ python -m dataset_scripts.ddd.ddd-v2e.py --input DavisDrivingDataset/rec1501902136.hdf5 --slomo_model input/SuperSloMo39.ckpt --start 70 --stop 73 --output_folder v2e-output/ddd20-v2e-short --dvs_aedat dvs --pos_thres=.2 --neg_thres=.2 --overwrite --dvs_vid_full_scale=2
v2e/bin/python: Error while finding module specification for 'dataset_scripts.ddd.ddd-v2e.py' (ModuleNotFoundError: path attribute not found on 'dataset_scripts.ddd.ddd-v2e' while trying to find 'dataset_scripts.ddd.ddd-v2e.py')

Do you plan to update these scripts?

Regards

Slomo frame insertion problem

Thanks for your nice work!
I tried to use your code to generate events from a video; however, the frame interpolation result looks like the video below.

7.25.1.mov

When the camera motion accelerates, the video obtained by interpolating frames shows serious blurring.
I wonder if the problem is caused by the optical-flow frame interpolation or by my parameters not being set properly, and how I can improve the results.
Thanks!

Hi, which algorithm should I use to parse the converted *.aedat file?

Hi, thanks for your wonderful work. My problem is: which algorithm should I use to parse the converted *.aedat file? I find that the obtained aedat file can't be parsed via:

import numpy as np
from dv import AedatFile

with AedatFile(input_filename) as f:
    events = np.hstack([event for event in f['events'].numpy()])

But the aforementioned code can be used for recordings captured from DVS346 sensors. So, would you please tell me how to load the converted aedat files? Thank you very much!

Best regards,
Xiao Wang

avi_frame_rate parameter not working as expected

Hi.

I've been using this simulator (in my opinion, very useful and easy to use) to transform conventional video frames into DVS events. I'm trying to understand the usage of some parameters, and I think I'm not using the avi_frame_rate parameter correctly.

Reading its description, I assumed that the frame rate of the generated dvs-video.avi would be set by this parameter, but in my case that file is always generated with a 30 fps frame rate. I've tried setting this parameter to different values (60 fps, 120 fps and 10 fps), and in all cases the result is a video with the same frame rate.

Have I misunderstood how this parameter works? I expected the resulting video to play back at a different speed because of this frame rate, but all videos look the same. How can I observe any change produced by this parameter?

Thanks in advance.

DAVIS camera conversion Dataset error when run the command "python -m dataset_scripts.ddd.ddd_extract_data.py -h"

I've tried running all the commands in this section, but the console gives me the following error:

(v2e) PS C:\Users\laloh\v2e> cd input
(v2e) PS C:\Users\laloh\v2e\input> mkdir -p input

Directorio: C:\Users\laloh\v2e\input

Mode LastWriteTime Length Name


d----- 30/11/2021 12:39 p. m. input

(v2e) PS C:\Users\laloh\v2e\input> mv rec1501902136.hdf5 ./input
(v2e) PS C:\Users\laloh\v2e\input> python -m dataset_scripts.ddd.ddd_extract_data.py -h
Traceback (most recent call last):
File "C:\Users\laloh\anaconda3\envs\v2e\lib\runpy.py", line 188, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "C:\Users\laloh\anaconda3\envs\v2e\lib\runpy.py", line 111, in _get_module_details
__import__(pkg_name)
File "c:\users\laloh\v2e\dataset_scripts\ddd\ddd_extract_data.py", line 16, in
from v2e.ddd20_utils import ddd_h5_reader
ModuleNotFoundError: No module named 'v2e.ddd20_utils'; 'v2e' is not a package
(v2e) PS C:\Users\laloh\v2e\input>

Event timestamps do not match the original video's fps

Thank you for developing such a brilliant tool!!

The timestamps do not match the original video, even though slow motion is disabled.
Even though the video is at 100 fps, the event timestamps are not every 0.01s.
I would like to know why this is so.

The command I used is as follows.
python v2e.py -i input\0.avi --output_folder=output/tennis --dvs_text DVS_TEXT --dvs_exposure duration 0.01 --overwrite --timestamp_resolution=0.01 --auto_timestamp_resolution=False --output_folder=output/tennis --overwrite --pos_thres=.15 --neg_thres=.15 --sigma_thres=0.03 --dvs_aedat2 tennis.aedat --output_width=240 --output_height=180 --cutoff_hz=15 --disable_slomo --timestamp_resolution=0.01 --input_slowmotion_factor 1

The data output to DVS_TEXT.txt is as follows:
...
0.044999998062849045 149 170 1
0.044999998062849045 37 82 1
0.044999998062849045 85 83 0
0.044999998062849045 58 67 1
0.05000000074505806 61 64 1
0.06000000238418579 112 120 0
0.06000000238418579 72 67 0
0.06000000238418579 156 179 1
...

final_v2e_command gives error

Hello,

I have tried to run the final_v2e_command command for the same video given in the Google Colab, but it always gives the following error.
Could you please help to resolve this?

WARNING:v2e:Either output_width is None or output_height is None,or both. Setting both of them to None.Actualy dimension will be set automatically.
Traceback (most recent call last):
File "/usr/local/bin/v2e", line 8, in
sys.exit(main())
File "/usr/local/bin/v2e.py", line 156, in main
num_pixels=output_width*output_height
TypeError: unsupported operand type(s) for *: 'NoneType' and 'NoneType'

Thanks and Rgds,
Udayanga

The differences between the generated "clean" events and "noisy" events are not as expected

Hi, I wanted to investigate the influence of DVS noise, so I converted the provided sample video (pendulum) to event streams with v2e. I conducted two experiments with all arguments the same except the ones controlling the noise. The commands are as follows.

  1. I expected this one to generate clean events without noise.
python v2e.py -i input/pendulum_trim.mp4  --overwrite  --auto_timestamp_resolution=False --timestamp_resolution=0.001  --dvs_exposure duration 0.005 --output_width=346 --output_height=260   --output_folder=output/pendulum_trim_wo_noise  --pos_thres=.2 --neg_thres=.2 --sigma_thres=0 --leak_rate_hz=0 --shot_noise_rate_hz=0 --cutoff_hz=0 --dvs_aedat2 pendulum.aedat   --dvs_params=None  --no_preview --ignore-gooey
  2. I intended to generate noisy events.
python v2e.py -i input/pendulum_trim.mp4  --overwrite  --auto_timestamp_resolution=False --timestamp_resolution=0.001  --dvs_exposure duration 0.005 --output_width=346 --output_height=260   --output_folder=output/pendulum_trim_noise1  --pos_thres=.2 --neg_thres=.2 --sigma_thres=0 --leak_rate_hz=0.01 --shot_noise_rate_hz=0.001 --cutoff_hz=0 --dvs_aedat2 pendulum.aedat   --dvs_params=None  --no_preview --ignore-gooey

Since the only difference between the two commands is the --leak_rate_hz and --shot_noise_rate_hz parameters, I assumed that the streams generated by the two commands would have many events in common and that the extra data generated by the 2nd command would all be noise events. However, the results show that the generated data are very different, and the events generated with the 1st command can hardly find their counterparts in the 2nd one.

Can anyone help me figure out the problem?

The following figure shows part of the generated data; the left column corresponds to the 1st command and the right column to the 2nd.
(snapshot attached)

Problem with v2e gooey GUI

When I run the instruction python -m v2e in the console, the GUI doesn't appear; only a window requesting the file to open appears. After I chose the file, the following error appeared:

(v2e) PS C:\Users\laloh\v2e> python -m v2e

INFO:__main__:Use --ignore-gooey to disable GUI and run with command line arguments
WARNING:__main__:Either output_width is None or output_height is None,or both. Setting both of them to None.Actualy dimension will be set automatically.
INFO:v2ecore.v2e_utils:using output folder C:\Users\laloh\v2e\v2e-output-1
INFO:__main__:output_in_place==False so made output_folder=C:\Users\laloh\v2e\v2e-output-1
INFO:v2ecore.v2e_args:arguments:
auto_timestamp_resolution: True
avi_frame_rate: 30
batch_size: 8
crop: None
cutoff_hz: 300
davis_output: False
disable_slomo: False
dvs1024: False
dvs128: False
dvs240: False
dvs346: False
dvs640: False
dvs_aedat2: v2e-dvs-events.aedat
dvs_emulator_seed: 0
dvs_exposure: ['duration', '0.01']
dvs_h5: None
dvs_params: None
dvs_text: None
dvs_vid: dvs-video.avi
dvs_vid_full_scale: 2
input: None
input_frame_rate: None
input_slowmotion_factor: 1.0
leak_jitter_fraction: 0.1
leak_rate_hz: 0.01
neg_thres: 0.2
no_preview: False
noise_rate_cov_decades: 0.1
output_folder: C:\Users\laloh\v2e\v2e-output
output_height: None
output_in_place: True
output_width: None
overwrite: False
pos_thres: 0.2
refractory_period: 0.0005
shot_noise_rate_hz: 0.001
show_dvs_model_state: None
sigma_thres: 0.03
skip_video_output: False
slomo_model: C:\Users\laloh\v2e\input\SuperSloMo39.ckpt
slomo_stats_plot: False
start_time: None
stop_time: None
synthetic_input: None
timestamp_resolution: None
unique_output_folder: True
vid_orig: video_orig.avi
vid_slomo: video_slomo.avi

INFO:__main__:Command line:
C:\Users\laloh\v2e\v2e.py
INFO:v2ecore.v2e_args:DVS frame expsosure mode ExposureMode.DURATION: frame rate 100.0
INFO:__main__:opening video input file C:/Users/laloh/v2e/input/tennis.mov
INFO:__main__:--auto_timestamp_resolution=True and timestamp_resolution is not set: source video will be automatically upsampled to limit maximum interframe motion to 1 pixel
WARNING:v2ecore.slomo:CUDA not available, will be slow :-(
INFO:v2ecore.slomo:Using auto_upsample and upsampling_factor; setting minimum upsampling to 1
INFO:v2ecore.slomo:using automatic upsampling mode
INFO:__main__:Source video C:/Users/laloh/v2e/input/tennis.mov has total 1551 frames with total duration 25.86s.
Source video is 59.94fps with slowmotion_factor 1 (frame interval 16.68ms),
Will convert 1551 frames 0 to 1550
(From 0.0s to 25.859161114693027s, duration 25.859161114693027s)
INFO:__main__:v2e DVS video will have constant-duration frames
at 100fps (accumulation time 10ms),
DVS video will have 2585 frames with duration 25.85s and playback duration 86.17s

INFO:v2ecore.emulator:ON/OFF log_e temporal contrast thresholds: 0.2 / 0.2 +/- 0.03
INFO:v2ecore.emulator:opening AEDAT-2.0 output file C:\Users\laloh\v2e\v2e-output-1\v2e-dvs-events.aedat
ERROR:v2ecore.emulator:Output file exception "AEDAT-2.0 output width=None height=None not supported; add your camera to v2ecore.output.aedat2_output or use one of the predefined DVS cameras, e.g. --dvs346 or --dvs240 that have sizes ((346, 260), (240, 180))" (maybe you need to specify a supported DVS camera type?)
Traceback (most recent call last):
File "C:\Users\laloh\anaconda3\envs\v2e\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\laloh\anaconda3\envs\v2e\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\laloh\v2e\v2e.py", line 805, in
main()
File "C:\Users\laloh\v2e\v2e.py", line 462, in main
emulator = EventEmulator(
File "C:\Users\laloh\v2e\v2ecore\emulator.py", line 181, in init
raise e
File "C:\Users\laloh\v2e\v2ecore\emulator.py", line 171, in init
self.dvs_aedat2 = AEDat2Output(
File "C:\Users\laloh\v2e\v2ecore\output\aedat2_output.py", line 59, in init
raise ValueError(f'AEDAT-2.0 output width={output_width} height={output_height} not supported; add your camera to {name} or use one of the predefined DVS cameras, e.g. --dvs346 or --dvs240 that have sizes {self.SUPPORTED_SIZES}')
ValueError: AEDAT-2.0 output width=None height=None not supported; add your camera to v2ecore.output.aedat2_output or use one of the predefined DVS cameras, e.g. --dvs346 or --dvs240 that have sizes ((346, 260), (240, 180))
