
rnnexp's Introduction

RNNexp

Prerequisites

Install NeuralModels

Description

This repo contains the source code for the following papers:

  • Structural-RNN (S-RNN) for deep learning over spatio-temporal graphs. Check out the srnn branch; the source code is in the structural_rnn directory. It takes as input a spatio-temporal graph written in a text file and creates the S-RNN architecture. The directory currently contains the code for human motion modeling on the H3.6m dataset. Our paper can be downloaded from here

  • Sensory-fusion recurrent neural architecture for driver activity anticipation (http://brain4cars.com). The source code is in the anticipatory-rnn/maneuver-rnn directory. Check out the 'icraversion' branch of NeuralModels to reproduce our results.

  • S-RNN for modeling human-object interaction (human activity detection and anticipation). The code is in anticipatory-rnn/activity-anticipation. Our paper can be downloaded from here


rnnexp's Issues

nodeRNN input

Hello, great paper and code, thank you! I have a question about the input of the nodeRNN (torso, for instance). The paper says the nodeRNN(torso) concatenates the nodeFeatures(torso) with the outputs of the corresponding edgeRNNs (torso_input, torso_arm, torso_leg), as seen in Fig. 4. But in the implementation (configuration: srnn), I found that no nodeFeatures(torso) are fed into the nodeRNN(torso). Is that right? Also, in the code, does 'torso_input' mean the torso_torso temporal edgeRNN? Very much looking forward to your reply!
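For reference, here is a minimal numpy sketch of the concatenation that Fig. 4 describes; the shapes and variable names below are hypothetical, chosen only to illustrate the wiring:

import numpy as np

# Hypothetical shapes, purely for illustration of the Fig. 4 wiring.
T_steps, D_node, D_edge = 100, 21, 128
nodeFeatures_torso = np.random.randn(T_steps, D_node)  # raw torso features
h_torso_torso = np.random.randn(T_steps, D_edge)       # temporal edgeRNN output
h_torso_arm = np.random.randn(T_steps, D_edge)         # spatial edgeRNN output
h_torso_leg = np.random.randn(T_steps, D_edge)         # spatial edgeRNN output

# Per the paper, the nodeRNN input at each step concatenates the node's
# own features with the outputs of its incident edgeRNNs.
nodeRNN_input = np.concatenate(
    [nodeFeatures_torso, h_torso_torso, h_torso_arm, h_torso_leg], axis=1)
print(nodeRNN_input.shape)  # (100, 21 + 3*128)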

Pik file format

Hi,
I want to ask about the input and output files (.pik format). How can I access these files correctly?
I would like to see what the input features actually look like.
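The .pik files in this repo are Python pickles (the scripts load them with cPickle), so a minimal sketch like the following should let you inspect one; the path is a placeholder:

import cPickle  # the repo targets Python 2; use pickle on Python 3

# Load a .pik file and inspect its structure (path is a placeholder).
with open('/path/to/test_data.pik', 'rb') as f:
    data = cPickle.load(f)

print(type(data))
if isinstance(data, dict):
    print(data.keys())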

How to run S-RNN

Hi Ashesh,
How to run S-RNN on the H3.6m dataset?
Running generateMotionData.py or processdata.py gave me the error:
ImportError: No module named readCRFGraph
Thanks,
Val.
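A guess, not a confirmed fix: readCRFGraph may ship with the NeuralModels prerequisite, in which case putting a cloned NeuralModels on the module search path before running the scripts would resolve the import:

import sys

# Assumption: readCRFGraph lives in the NeuralModels checkout.
# Add it to sys.path (or export PYTHONPATH) before running the scripts.
sys.path.insert(0, '/path/to/NeuralModels')  # placeholder path
import readCRFGraph  # should succeed if the module is found there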

Missing dataset and checkpoints for activity-anticipation

Hi,

Could you please provide the dataset and checkpoints (or at least instructions) for the RNN implementation of activity-anticipation?

I can't find the dataset folders in the repository, or instructions in the README file.

path_to_dataset = '/scr/ashesh/activity-anticipation/dataset/{0}'.format(fold)
path_to_checkpoints = '/scr/ashesh/activity-anticipation/checkpoints/{0}'.format(fold)

Thank you so much.
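For anyone else hitting these hard-coded paths, a minimal workaround sketch (the local directories below are placeholders, not the repo's layout) is to repoint and create them:

import os

# Replace the hard-coded /scr/ashesh/... paths with local placeholders.
fold = 'fold_1'
path_to_dataset = os.path.expanduser('~/activity-anticipation/dataset/{0}'.format(fold))
path_to_checkpoints = os.path.expanduser('~/activity-anticipation/checkpoints/{0}'.format(fold))
for d in (path_to_dataset, path_to_checkpoints):
    if not os.path.exists(d):
        os.makedirs(d)  # create the folders before running the script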

How to display the generated sequence?

Hi,
Is there a way to display an animation of the generated or training sequences?
I found a script called 'generateAndSaveVideos.m', but I don't know how it works.
Is it possible to save a cdf file to a bvh file given the skeleton information? If so, how, and where can I get the skeleton information? I didn't see any skeleton information in the cdf files. Or maybe there is a direct way to display a cdf file without converting to bvh format.
Thanks!
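On reading the .cdf files directly: one commonly used option (an assumption, not something this repo documents) is spacepy's pycdf module, which can at least list what each file contains:

from spacepy import pycdf  # assumes the spacepy package is installed

# Open a Human3.6M .cdf file and list the variables it stores; the
# path and the variable layout are assumptions to verify per file.
cdf = pycdf.CDF('/path/to/sequence.cdf')
print(cdf)  # prints a summary of the variables in the file
cdf.close()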

How can I run maneuver-rnn.py with index and fold?

I just downloaded the dataset from https://www.dropbox.com/sh/yndzlk3o90ooq2j/AACWUT8xjabmILM6-rm1_gNAa?dl=0 and then ran maneuver-rnn.py as:

~/RNNexp/anticipatory-rnn/maneuver-anticipation$ python maneuver-rnn.py 960453 fold_1

The error info:
File "maneuver-rnn.py", line 23, in
test_data = cPickle.load(open('{1}/test_data_{0}.pik'.format(index,path_to_dataset)))
IOError: [Errno 2] No such file or directory: '/home/dong/brain4car/brain4cars_data/fold_1/test_data_960453.pik'

How can I get the correct folder path and .pik files?
Thanks for your help.
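One way to see which indices are actually available (a hypothetical helper, not part of the repo) is to list the test_data_<index>.pik files present in the downloaded fold:

import glob, os

# Path taken from the error message above; adjust to your download.
path_to_dataset = '/home/dong/brain4car/brain4cars_data/fold_1'
for f in sorted(glob.glob(os.path.join(path_to_dataset, 'test_data_*.pik'))):
    print(f)  # pass the <index> embedded in one of these filenames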

Error in MultipleRNNsCombined

I tried to run the simulation using the 'all' maneuver type on fold_1. I modified maneuver-rnn.py so it takes a maneuver-type argument rather than an index argument. I'm getting an error during the execution of MultipleRNNsCombined; it says 'too many values to unpack'. Any thoughts on what might be the problem?

python maneuver-rnn.py 'all' 1
(7, 1888, 13)
(7, 1888)
<type 'numpy.float32'>
Number of classes 6
Feature dimension 13
Traceback (most recent call last):
File "maneuver-rnn.py", line 107, in
rnn = MultipleRNNsCombined([layers_1,layers_2],output_layer,softmax_decay_loss,trY,step_size,Adagrad())
File "/home/oolabiyi/brain4cars/NeuralModels/neuralmodels/models/MultipleRNNsCombined.py", line 59, in init
self.train = theano.function([self.X[0],self.X[1],self.Y],self.cost,updates=self.updates)
File "/home/oolabiyi/anaconda/lib/python2.7/site-packages/theano/compile/function.py", line 266, in function
profile=profile)
File "/home/oolabiyi/anaconda/lib/python2.7/site-packages/theano/compile/pfunc.py", line 489, in pfunc
no_default_updates=no_default_updates)
File "/home/oolabiyi/anaconda/lib/python2.7/site-packages/theano/compile/pfunc.py", line 191, in rebuild_collect_shared
for (store_into, update_val) in iter_over_pairs(updates):
ValueError: too many values to unpack
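For context, theano.function expects updates to be an OrderedDict or a list of (shared_variable, new_expression) 2-tuples; anything else unpacks incorrectly inside rebuild_collect_shared and raises exactly this ValueError. A minimal self-contained sketch of the expected format (not this repo's code):

import numpy as np
import theano
import theano.tensor as T

w = theano.shared(np.zeros(3, dtype=theano.config.floatX), name='w')
x = T.vector('x')
cost = T.sum((w - x) ** 2)
grad = T.grad(cost, w)

updates = [(w, w - 0.01 * grad)]  # correct: a list of 2-tuples
train = theano.function([x], cost, updates=updates)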

Pre-trained models not available

Hi,

I am trying to download the pre-trained S-RNN and LSTM-3LR models as instructed in the README ("Pre-trained models of S-RNN, ERD, and LSTM-3LR can be downloaded from here"), but I cannot download them. Clicking the links brings me back to the same page and no download starts. It would be great if you could have a look at it.

nodeFeatureRanges

In processdata.py, what does nodeFeaturesRanges mean? It seems that the features are just numbers?

nodeFeaturesRanges = {}
nodeFeaturesRanges['torso'] = range(6)
nodeFeaturesRanges['torso'].extend(range(36, 51))
nodeFeaturesRanges['right_arm'] = range(75, 99)
nodeFeaturesRanges['left_arm'] = range(51, 75)
nodeFeaturesRanges['right_leg'] = range(6, 21)
nodeFeaturesRanges['left_leg'] = range(21, 36)
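One thing the ranges do tell us: together they partition the 99 feature columns among the five body parts, which a short check confirms (a hypothetical snippet, not from the repo):

nodeFeaturesRanges = {
    'torso': list(range(6)) + list(range(36, 51)),
    'right_arm': list(range(75, 99)),
    'left_arm': list(range(51, 75)),
    'right_leg': list(range(6, 21)),
    'left_leg': list(range(21, 36)),
}
all_indices = sorted(i for r in nodeFeaturesRanges.values() for i in r)
assert all_indices == list(range(99))  # each feature column belongs to exactly one node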

Pre-trained models do not reproduce paper results

Hi!

I'm using the pre-trained models available at https://drive.google.com/open?id=0B7lfjqylzqmMZlI3TUNUUEFQMXc and running generateMotionForecast.py. This produces motion predictions for different activities and models, but I've found that these do not correspond to what is reported in the paper. For reference, here's the figure from the paper that I'm talking about:

[screenshot of the figure from the paper]

But, for example, using the lstm3lr_walking model, the checkpoint.pikforecast_error file contains the following values:

T=0 2.87922000885, 0.053318887949
T=1 3.31045722961, 0.0613047629595
T=2 3.72076749802, 0.0689031034708
T=3 4.20972061157, 0.0779577866197
T=4 4.62205123901, 0.0855935439467
T=5 4.9056763649, 0.0908458605409
T=6 5.15456962585, 0.0954549908638
T=7 5.68943977356, 0.105359993875
T=8 6.14819526672, 0.113855466247
T=9 6.47697734833, 0.119944028556
T=10 6.86927509308, 0.127208799124
T=11 7.25948381424, 0.134434878826
T=12 7.56049823761, 0.140009224415
T=13 7.60584354401, 0.140848949552
T=14 7.81918954849, 0.144799813628
T=15 7.99432945251, 0.148043140769
T=16 8.21197509766, 0.152073606849
T=17 8.22490978241, 0.152313143015
T=18 8.21773910522, 0.152180358768
T=19 8.20940303802, 0.152025982738
T=20 8.21308326721, 0.152094140649
T=21 8.08870410919, 0.14979082346
T=22 7.9909658432, 0.147980853915
T=23 7.93785572052, 0.146997332573
T=24 8.08372688293, 0.149698644876
T=25 8.17058372498, 0.151307106018
T=26 8.29908180237, 0.153686702251
T=27 8.29321861267, 0.15357811749
T=28 8.33865356445, 0.154419511557
T=29 8.29992961884, 0.153702393174
T=30 8.31999206543, 0.154073923826
T=31 8.37398910522, 0.155073866248
T=32 8.47292232513, 0.156905964017
T=33 8.59246826172, 0.159119784832
T=34 8.65988731384, 0.160368278623
T=35 8.66351318359, 0.160435423255
T=36 8.65542507172, 0.160285651684
T=37 8.70272254944, 0.161161527038
T=38 8.90265083313, 0.16486389935
T=39 9.08981990814, 0.168329998851
T=40 9.22410964966, 0.170816838741
T=41 9.25332164764, 0.171357810497
T=42 9.3009595871, 0.172239989042
T=43 9.29813861847, 0.172187745571
T=44 9.26357460022, 0.171547681093
T=45 9.19590568542, 0.170294553041
T=46 9.15723419189, 0.169578418136
T=47 9.24366569519, 0.171178996563
T=48 9.30495262146, 0.172313943505
T=49 9.25953674316, 0.171472907066
T=50 9.24114990234, 0.171132400632
T=51 9.26937294006, 0.171655058861
T=52 9.3104429245, 0.172415614128
T=53 9.19757270813, 0.170325413346
T=54 9.04441356659, 0.167489141226
T=55 8.96823406219, 0.166078403592
T=56 9.00592136383, 0.166776314378
T=57 9.09947776794, 0.168508842587
T=58 9.06608009338, 0.167890369892
T=59 9.1175775528, 0.168844029307
T=60 9.23169708252, 0.170957356691
T=61 9.25059127808, 0.171307250857
T=62 9.23868370056, 0.171086728573
T=63 9.21300506592, 0.170611202717
T=64 9.20988750458, 0.170553475618
T=65 9.30304908752, 0.172278687358
T=66 9.30745029449, 0.17236019671
T=67 9.29339599609, 0.172099933028
T=68 9.21964550018, 0.170734182
T=69 9.22905826569, 0.170908480883
T=70 9.11111068726, 0.168724268675
T=71 9.0918712616, 0.168367981911
T=72 8.92658901215, 0.165307208896
T=73 8.91659736633, 0.165122166276
T=74 8.82111263275, 0.163353934884
T=75 8.90966320038, 0.16499376297
T=76 9.02032756805, 0.167043104768
T=77 9.09782981873, 0.168478325009
T=78 9.22392463684, 0.170813426375
T=79 9.33905029297, 0.172945380211
T=80 9.31301212311, 0.172463193536
T=81 9.44260978699, 0.174863144755
T=82 9.45653438568, 0.17512100935
T=83 9.52670955658, 0.176420554519
T=84 9.64883327484, 0.178682103753
T=85 9.83387374878, 0.182108774781
T=86 9.95151329041, 0.184287279844
T=87 9.91870689392, 0.183679759502
T=88 9.91715335846, 0.18365098536
T=89 10.0150337219, 0.18546359241
T=90 9.95522022247, 0.184355929494
T=91 9.70408630371, 0.179705306888
T=92 9.56737327576, 0.1771735847
T=93 9.58298301697, 0.177462652326
T=94 9.52612495422, 0.176409721375
T=95 9.55842971802, 0.177007958293
T=96 9.53139877319, 0.176507383585
T=97 9.50600910187, 0.176037207246
T=98 9.59951972961, 0.177768886089
T=99 9.80951976776, 0.181657776237

where the left and right columns correspond to skel_err and err_per_dof, as computed in forecastTrajectories.py#L124:

# per-timestep Euclidean skeleton error, averaged over the test sequences
# (forecasted_motion and trY_forecasting have shape (T, N, D))
skel_err = np.mean(np.sqrt(np.sum(np.square((forecasted_motion - trY_forecasting)),axis=2)),axis=1)
# the same error normalized by the number of degrees of freedom D
err_per_dof = skel_err / trY_forecasting.shape[2]

I find one value to be much worse, and the other to be about an order of magnitude better. Do you have any pointers as to what I could be doing wrong?

Checkpoint path does not exist. Exiting!!

Sorry to bother you, but I cannot successfully run 'python generateMotionForecast.py model path' and I don't know why:

[screenshot of the error]

I have read forecastTrajectories.py and found:

[screenshot of the relevant code]

I created a base directory containing the h3.6m data, and the checkpoint.pik file is also in there. I still do not know how to run the file. Thank you!

How to save model

Hi, I'm using 'hyperParameterTuning.py lstm3lr' to train an lstm3lr model. However, I noticed that the checkpoints directory doesn't contain the model I just trained.
If I want access to the weights of the model I just trained, how can I do that?

Thanks!
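A generic fallback sketch for persisting weights, assuming the trained model exposes its Theano shared variables in a params list (the attribute name is an assumption; check the NeuralModels source for the actual API):

import cPickle

def save_weights(model, path):
    # model.params is assumed to be a list of theano.shared variables.
    weights = [p.get_value() for p in model.params]
    with open(path, 'wb') as f:
        cPickle.dump(weights, f, protocol=cPickle.HIGHEST_PROTOCOL)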

Missing dataset and checkpoints

Hi,

Can you please provide the dataset and checkpoints for the RNN implementation?

I can't find the dataset folders in the repo. I need to correct the following lines in maneuver-rnn.py:

path_to_dataset = '/scr/ashesh/brain4cars/dataset/{0}'.format(fold)
path_to_checkpoints = '/scr/ashesh/brain4cars/checkpoints/{0}'.format(fold)

Thanks.

Running /RNNexp/anticipatory-rnn/activity-anticipation/readData.py gives: ImportError: No module named math

The math module is part of the standard library, not a third-party package, so I don't know why this happens. I would appreciate any help.

$ python readData.py
Traceback (most recent call last):
  File "readData.py", line 1, in <module>
    import numpy as np
  File "/usr/lib/python2.7/dist-packages/numpy/__init__.py", line 180, in <module>
    from . import add_newdocs
  File "/usr/lib/python2.7/dist-packages/numpy/add_newdocs.py", line 13, in <module>
    from numpy.lib import add_newdoc
  File "/usr/lib/python2.7/dist-packages/numpy/lib/__init__.py", line 3, in <module>
    import math
ImportError: No module named math
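Since math is built into the standard library, a failed import usually points at a broken or mismatched Python installation rather than a missing package. A small diagnostic sketch:

import sys

print(sys.executable)  # which interpreter is actually running?
print(sys.path)        # does it include the expected stdlib directories?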
