
opentraj's Introduction

OpenTraj

Human Trajectory Prediction Dataset Benchmark

We introduce the existing datasets for the Human Trajectory Prediction (HTP) task and provide tools to load, visualize, and analyze them. So far, multiple datasets are supported.

Publicly Available Datasets

| Name | Description | #Traj | Coord | FPS | Ref |
|---|---|---|---|---|---|
| ETH | 2 top-view scenes containing walking pedestrians | Peds=750 | world-2D | 2.5 | website, paper |
| UCY | 3 scenes (Zara / Arxiepiskopi / University). Zara and University are close to top view; Arxiepiskopi is more inclined. | Peds=786 | world-2D | 2.5 | website, paper |
| PETS 2009 | Different crowd activities | ? | image-2D | 7 | website, paper |
| SDD | 8 top-view scenes recorded by drone, containing various types of agents | Bikes=4,210; Peds=5,232; Skates=292; Carts=174; Cars=316; Buses=76; Total=10,300 | image-2D | 30 | website, paper, dropbox |
| GC | Grand Central Train Station dataset: 1 scene of 33:20 minutes of crowd trajectories | Peds=12,684 | image-2D | 25 | dropbox, paper |
| HERMES | Controlled experiments on pedestrian dynamics (unidirectional and bidirectional flows) | ? | world-2D | 16 | website, data |
| Waymo | High-resolution sensor data collected by Waymo self-driving cars | ? | 2D and 3D | ? | website, github |
| KITTI | 6 hours of traffic scenarios, recorded with various sensors | ? | image-3D + calib | 10 | website |
| inD | Naturalistic trajectories of vehicles and vulnerable road users recorded at German intersections | Total=11,500 | world-2D | 25 | website, paper |
| L-CAS | Multisensor people dataset collected by a Pioneer 3-AT robot | ? | ? | ? | website |
| Edinburgh | People walking through the Informatics Forum (University of Edinburgh) | Peds=92,000+ | ? | ? | website |
| Town Center | CCTV video of pedestrians in a busy downtown area in Oxford | Peds=2,200 | ? | ? | website |
| Wild Track | Surveillance video dataset of students recorded outside the ETH main building in Zurich | Peds=1,200 | ? | ? | website |
| ATC | 92 days of pedestrian trajectories in a shopping center in Osaka, Japan | ? | world-2D + range data | ? | website |
| VIRAT | Natural scenes showing people performing normal actions | ? | ? | ? | website |
| Forking Paths Garden | Multi-modal synthetic dataset, created in CARLA (3D simulator) based on real-world trajectory data extrapolated by human annotators | ? | ? | ? | website, github, paper |
| DUT | Natural vehicle-crowd interactions on a crowded university campus | Peds=1,739; Vehicles=123; Total=1,862 | world-2D | 23.98 | github, paper |
| CITR | Fundamental vehicle-crowd interaction scenarios in controlled experiments | Peds=340 | world-2D | 29.97 | github, paper |
| nuScenes | Large-scale autonomous driving dataset | Peds=222,164; Vehicles=662,856 | world + 3D range data | 2 | website |
| VRU | Pedestrian and cyclist trajectories recorded at an urban intersection using cameras and LiDARs | Peds=1,068; Bikes=464 | world (meters) | 25 | website |
| Cityscapes | 25,000 annotated images (semantic / instance-wise / dense pixel annotations) | ? | ? | ? | website |
| Argoverse | 320 hours of self-driving data | Objects=11,052 | 3D | 10 | website |
| Ko-PER | Trajectories of people and vehicles at urban intersections (laser scanner + video) | Peds=350 | world-2D | ? | paper |
| TRAF | Small dataset of dense and heterogeneous traffic videos in India (22 clips) | Cars=33; Bikes=20; Peds=11 | image-2D | 10 | website, gDrive, paper |
| ETH-Person | Multi-person data collected from mobile platforms | ? | ? | ? | website |

Human Trajectory Prediction Benchmarks

Toolkit

To download the toolkit separately as a zip file, click here.

1. Benchmarks

Using the Python scripts in the benchmarking/indicators directory, you can generate the results for each of the indicators presented in the article; see the example invocation below. For more information about each script, see the documentation in the toolkit.
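As a concrete invocation (taken verbatim from a user report in the issues section below; [output_dir] is a placeholder), the suite can be run from the repository root:

python toolkit/benchmarking . [output_dir]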

2. Loaders

Using the Python scripts in the loaders directory, you can load a dataset into a dataset object, which stores the data in Pandas DataFrames. This makes it easy to retrieve trajectories using different queries (by agent_id, timestamp, ...), as sketched below.
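For example, loading an ETH scene and querying it might look like the following minimal sketch; the loader module, function, and column names here are assumptions based on the toolkit's layout and should be checked against the code:

```python
from toolkit.loaders.loader_eth import load_eth  # loader name/path: verify in the loaders dir

# Load one scene; the dataset object keeps its data in a Pandas DataFrame
dataset = load_eth("datasets/ETH/seq_eth/obsmat.txt")
df = dataset.data  # columns assumed: frame_id, agent_id, pos_x, pos_y, ...

# Query by agent_id: the full trajectory of one pedestrian
traj = df[df["agent_id"] == 7]

# Query by timestamp: everyone present in the first annotated frame
first_frame = df[df["frame_id"] == df["frame_id"].min()]
```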

3. Visualization

A simple script, play.py, is included and can be used to visualize a given dataset.
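For a quick look without play.py, a few lines of matplotlib over the loaded DataFrame also work (a sketch reusing the hypothetical df and column names from the Loaders example above):

```python
import matplotlib.pyplot as plt

# Draw each agent's trajectory as a polyline in world coordinates
for agent_id, traj in df.groupby("agent_id"):
    plt.plot(traj["pos_x"], traj["pos_y"], linewidth=0.8)
plt.xlabel("x (m)")
plt.ylabel("y (m)")
plt.axis("equal")  # keep meters square on screen
plt.show()
```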

References: an awesome list of trajectory prediction references can be found here

Contributions: Have an idea to improve the code? Fork the project, update it, and submit a pull request.

  • Feel free to open new issues.

If you find this work useful in your research, then please cite:

@inproceedings{amirian2020opentraj,
      title={OpenTraj: Assessing Prediction Complexity in Human Trajectories Datasets}, 
      author={Javad Amirian and Bingqing Zhang and Francisco Valente Castro and Juan Jose Baldelomar and Jean-Bernard Hayet and Julien Pettre},
      booktitle={Asian Conference on Computer Vision (ACCV)},
      year={2020},
      organization={Springer}
}

opentraj's People

Contributors

abduallahmohamed, akaimody123, amiryanj, franciscovalentecastro, jbhayet, juanbaldelomar98


opentraj's Issues

H.txt files for Students3

Hello, thanks for your work on the H.txt files for each scene. However, when I use the H.txt for students3 and Zara, the results are not right, while ETH and Hotel are right. Could you offer some suggestions? Thank you very much.

script for downloading datasets

As we are (apparently) not allowed to store the datasets here, we need to remove them and instead write scripts to download, unzip, and use them.

  1. This could be done through a PyQt user interface.
  2. For SDD/GC at least, we might store the annotations separately somewhere, as downloading all the videos would be too much.

questions regarding annotation file in SDD dataset

Hi, I have some confusion regarding the columns in the annotation files. Could you answer the following?

1) You have provided the reference images along with the annotations. Does that mean all the trajectories happen in still images?

2) What does "the same path" mean with respect to the trajectories?
Track ID. All rows with the same ID belong to the same path.

3) Does "annotation is outside" mean that the bounding box goes outside the image?
lost. If 1, the annotation is outside of the view screen.

4) I couldn't understand this line:
generated. If 1, the annotation was automatically interpolated.
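For context, each line of an SDD annotations.txt file carries ten whitespace-separated fields; a minimal parsing sketch (column names follow the dataset's own README; the file path is an example):

```python
import pandas as pd

# SDD annotation columns, in file order (per the dataset's README)
cols = ["track_id", "xmin", "ymin", "xmax", "ymax",
        "frame", "lost", "occluded", "generated", "label"]
df = pd.read_csv("annotations.txt", sep=" ", header=None, names=cols)

# Keep only visible, human-annotated boxes: drop rows that left the
# view screen (lost == 1) or were interpolated (generated == 1)
visible = df[(df["lost"] == 0) & (df["generated"] == 0)]
```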

Question on play.py and trajdataset.__t_p_dict

Dear Authors

Thanks for your sharing. Great work.

I have a question about play.py, line 82: trajdataset.t_p_dict turns out to be None in my local run. I also cannot find it in the toolkit/core/trajdataset.py file. Am I missing anything?

More generally, can you give me a simple example of how to convert world coordinates to image coordinates? (How do I use the 3x3 Hinv parameter for that?)

Thanks,
He
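For anyone hitting the same question: in the ETH/UCY files, H.txt conventionally maps image (pixel) coordinates to world coordinates, so going the other way is a homogeneous multiply by its inverse. A minimal sketch (file name and convention per the ETH setup; verify against your scene):

```python
import numpy as np

# H maps image (pixel) coords to world coords in the ETH/UCY convention,
# so Hinv = inv(H) maps world coords back to pixels.
H = np.loadtxt("H.txt")   # the 3x3 matrix shipped with each scene
Hinv = np.linalg.inv(H)

def world_to_image(x, y, Hinv):
    """Project a world point (meters) to pixel coordinates."""
    p = Hinv @ np.array([x, y, 1.0])  # lift to homogeneous coords and map
    return p[0] / p[2], p[1] / p[2]   # divide out the scale component
```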

ModuleNotFoundError: No module named 'benchmarking'

Hi,
Is anyone having problems with imports? I was following the setup process:

python toolkit/benchmarking . [output_dir]

and output was

Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "toolkit/benchmarking/main.py", line 4, in <module>
    from benchmarking.load_all_datasets import get_datasets, get_trajlets, all_dataset_names
ModuleNotFoundError: No module named 'benchmarking'

my bashrc path:
export PATH=/usr/bin/python3:$PATH
export PYTHONPATH=$PYTHONPATH:/home/vsproject/OpenTraj
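A likely cause, deduced from the traceback above rather than from any documentation: main.py imports benchmarking as a top-level package, but benchmarking/ lives inside toolkit/, so the toolkit directory itself (not just the repository root) needs to be on PYTHONPATH:

export PYTHONPATH=$PYTHONPATH:/home/vsproject/OpenTraj/toolkit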

Develop a Dataset Explorer GUI

The purpose of the GUI is to provide information about various datasets and links to download them.

TODO:

  • Create a config file containing details about each dataset.

  • Create a GUI/parser that reads the config file and displays the details.

Homography matrix for student01

Hi,

I notice that the homography matrix for student01 is not provided. Does the homography matrix for student03 apply to student01 as well?

Images for each frame?

Hi there,

I'm currently working with the UCY dataset using your loaders, which have been very convenient. In the resulting Pandas DataFrame, each row is associated with a frame_id, which (from my understanding) corresponds to the i-th annotated frame in the video. I would like to work with these individual frames, and I was wondering what might be the best way to extract them. The dataset folder has the .avi files, but the corresponding individual frames are not available.

My current idea is to use a program like VLC to go through the video and extract these individual frames every 0.4 seconds, which should be manageable, but I was wondering if there is a better way to do this. I need the frames to correctly correspond to each row; ideally, if a folder of these frames is available, I would use these. Otherwise, my method should work, but I just need to make sure that I am working with the correct frames.

Apologies if this question has been asked and answered elsewhere - if so, I haven't found it yet. If you have any suggestions or comments, they would be greatly appreciated! Thank you for your time, and I hope to hear from you soon.
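One way to do this programmatically instead of with VLC is to seek by frame index with OpenCV; a sketch, assuming frame_id indexes the frames of the corresponding .avi directly (worth verifying on a couple of frames), with df as the loader's DataFrame and an example video path:

```python
import cv2

# Seek to each annotated frame_id in the video and dump it as an image
cap = cv2.VideoCapture("students003.avi")        # example path
for fid in sorted(df["frame_id"].unique()):
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(fid))   # jump to that frame
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(f"frames/{int(fid):06d}.png", frame)
cap.release()
```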

Number of trajs in SDD?

For each of the following classes, how many trajectories exist in the whole SDD dataset?

  • Pedestrian
  • Bicyclist
  • Skateboarder
  • Cart
  • Car
  • Bus

Which loaders are implemented already?

Hi,

Thanks for the great work. I was wondering if there is documentation anywhere tracking which loader implementations in the toolkit are complete? I understand that this is still a work in progress, but it would be great to be able to see more easily what has been done already and which loaders aren't ready yet.

Thank you
