crowdbotp / socialways
Social Ways: Learning Multi-Modal Distributions of Pedestrian Trajectories with GANs (CVPR 2019)
How can I train on the UCY dataset?
I can't find the script to create the UCY dataset the way it is done for ETH. Would you mind sharing the code? Thank you very much!
for dirpath, dirnames, filenames in sorted(os.walk(preds_dir)):
Is it enough to just put the image to be tested into the preds_dir folder and then run it?
Where can I get the videos of the datasets that have been used in this project?
Hi,
So I am trying to understand how the dataset is used in the training process. While studying the create_dataset function, I noticed that the velocity data is not used at all (the function just uses a position value from a future timestamp).
So my question is: does the velocity data have no value for trajectory prediction, or am I missing something?
Is this the final version of the code? Why are the settings of many variables inconsistent with those given in the paper? Can you provide the final version of the code? I cannot reproduce the prediction error in the paper.
@amiryanj I realized that in the script train.py, at line 577 and onwards, you divide ade_avg etc. by n_test_samples, which does not match the number of times those errors are accumulated in the for loop at line 532 (check the ii index). Do you have an explanation for this?
When uncommenting lines 604-607:
NameError: name 'epoch' is not defined
Could the author release the pretrained model weights? Training the model from scratch takes a lot of time. I'd appreciate it!
When I run "python train.py"
I get the error: "FileNotFoundError: [Errno 2] No such file or directory: '../trained_models/socialWays-hotel.pt'"
Can anyone tell me where "socialWays-hotel.pt" is, or how I can get it?
Thank you for your awesome work. I've noticed that, unlike the training procedure, the testing part does not consider the other agents' social features: the network doesn't calculate the social pooling (sub_batches = []). Is there any purpose in doing so?
Traceback (most recent call last):
File "train.py", line 614, in
test(128, write_to_file=wr_dir, just_one=True)
File "train.py", line 533, in test
pred_hat_4d = predict(obsv, noise, n_next)
File "train.py", line 374, in predict
new_v = decoder(encoder.lstm_h[0].view(bs, -1), weighted_features, noise).view(bs, 2)
File "/home/yuanfu/workspace/socialways/env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "train.py", line 290, in forward
inp = torch.cat([h, s, z], dim=1)
RuntimeError: invalid argument 0: Tensors must have same number of dimensions: got 2 and 1 at /pytorch/aten/src/THC/generic/THCTensorMath.cu:102
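For what it's worth, the error can be reproduced in isolation. The sketch below assumes the noise tensor z was sampled without a batch dimension; the names h, s, z mirror the traceback, but the sizes are made up:

```python
import torch

bs = 4
h = torch.randn(bs, 64)  # decoder hidden state, 2-D
s = torch.randn(bs, 32)  # weighted social features, 2-D
z = torch.randn(8)       # noise sampled without a batch dim, 1-D

# torch.cat requires all inputs to have the same number of dimensions,
# so concatenating the 1-D z with the 2-D h and s raises the RuntimeError
# above. Giving z a batch dimension makes the concatenation valid:
z_batched = z.unsqueeze(0).expand(bs, -1)  # shape (4, 8)
inp = torch.cat([h, s, z_batched], dim=1)  # shape (4, 104)
```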
Hi,
I find there is a critical bug in the prediction error calculation of this implementation.
Based on this line https://github.com/amiryanj/socialways/blob/master/train.py#L532, it only loops over the first 20 test batches to aggregate the prediction errors.
However, at this line https://github.com/amiryanj/socialways/blob/master/train.py#L575, the code uses n_test_samples as the denominator to compute the Avg ADE/FDE (12) values. This n_test_samples is the total number of test samples, not the number of test samples that were actually used to aggregate the prediction errors.
This bug will make the computed Avg ADE/FDE (12) values smaller than the true values.
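A minimal sketch of the inconsistency and its fix (the helper below is hypothetical and does not use the repo's actual function names): the denominator should be the number of samples actually visited in the loop, not the full test-set size.

```python
def average_errors(batches, n_eval_batches):
    """Average per-sample errors over only the batches that were evaluated."""
    err_sum, n_evaluated = 0.0, 0
    for ii, batch in enumerate(batches):
        if ii >= n_eval_batches:   # the loop in train.py stops early
            break
        err_sum += sum(batch)      # stand-in for per-sample ADE/FDE values
        n_evaluated += len(batch)
    # Dividing by n_evaluated (NOT by the total number of test samples)
    # gives the true average over the evaluated subset.
    return err_sum / n_evaluated

# With 5 batches of 10 samples each but only 2 batches evaluated,
# the correct denominator is 20, and an error of 1.0 per sample averages to 1.0:
batches = [[1.0] * 10 for _ in range(5)]
print(average_errors(batches, 2))  # → 1.0
```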
Amazing work. How do you implement the multimodality in your decoder?
When I run 'python train.py', I get 'FileNotFoundError: [Errno 2] No such file or directory: '../trained_models/socialWays-hotel.pt''.
So I am wondering how I can get '../trained_models/socialWays-hotel.pt'.
Hi, do you train on 4 datasets and test on the remaining one? From your code, it seems you are using 70% of the data for training and 30% for testing. Also, your position normalisation uses the same minimum and maximum values across all scenes.
If not, can you describe how you normalise the data for mixed scenes? Thanks.
Can you explain how to run this code?
According to InfoGAN, the latent codes are really useful.
But in your code, the latent code just acts as an "int". I think the code is wrong, and thus the result is wrong: it cannot reproduce the ADE and FDE you report in your paper.
I hope to get your response.
hotel dataset for draw_gt_data ?
First of all, thank you very much for providing such wonderful work. I have a question: what do the following variables in the code mean?
dataset_obsv, dataset_pred, dataset_t, the_batches
I look forward to your reply.
Can anyone successfully run the visualization code with the video dataset instead of the toy one? Several issues have come up during the testing procedure.
I ran the network on the ETH dataset, but by adjusting the batch size, I can only get an error of about 0.96 / 1.55, and cannot achieve an accuracy of 0.39 / 0.64. In addition to changing the LeakyReLU slope in the code to 0.1, what other parameters need to be modified?
In the README file, How To Setup section, shouldn't it be
pip install seaborn opencv-python
?
Hi,
The paper mentions agent i (the agent whose trajectory is to be predicted) and other agents j (the neighbors). However, I couldn't find such a distinction in the code. Can you please explain?
Also, how is the information about the neighbors stored in the dataset_obsv?
dim[dataset_obsv] = NxTx2 (line 89, train.py)
So does N here refer to the sample size?
I am confused because the SocialFeatures() function mentions N as well, and there it seems N = number of agents in the scene.
Thanks.
Hi, I have a question about the dataset. I want to visualize the results on ETH Hotel. However, I can't find the corresponding time offset to correctly extract the image data from the video. Would you mind sharing how to extract the image data for Hotel? Thank you!
sb[1] = sb[1] - sb[0] + last_ind
sb[0] = last_ind
last_ind = sb[1]
If I am right, these lines of code don't change anything when using the BIWI dataset. Are they here for the purpose of some other dataset?
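One plausible reading of the snippet (a standalone sketch, not the repo's actual data): each sb is a [start, end) index pair for a batch, and the three lines shift every pair into one contiguous cumulative index range. For a single dataset whose batches already start at 0, the shift is the identity, which would explain the observation; the purpose shows when a second dataset whose indices restart at 0 is appended.

```python
# Batches from a single dataset: indices are already contiguous,
# so the remapping is a no-op.
batches = [[0, 32], [32, 64], [64, 96]]
last_ind = 0
for sb in batches:
    sb[1] = sb[1] - sb[0] + last_ind  # keep the batch length, shift the end
    sb[0] = last_ind                  # start where the previous batch ended
    last_ind = sb[1]
print(batches)  # → [[0, 32], [32, 64], [64, 96]]

# Appending a second dataset whose indices restart at 0 shows the purpose:
# its batches are remapped to follow the first dataset's index range.
more = [[0, 16], [16, 40]]
for sb in more:
    sb[1] = sb[1] - sb[0] + last_ind
    sb[0] = last_ind
    last_ind = sb[1]
print(more)  # → [[96, 112], [112, 136]]
```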
Dear @amiryanj, thanks again for sharing your code. I also ran into issues when training on the st dataset. Can you provide the parameters that worked for training your model on this dataset?
Best,
Bruno
"[Errno 2] No such file or directory: '../eth-8-12.npz'"
Is this error because the dataset has not been uploaded to your project?
The "data" container is written specifically for the toy dataset and does not have a mapping to other real-world datasets. Is there any way to transform real data into this form?
Hi, thank you for publishing code for your paper.
Currently, I am trying to train with a custom dataset (dataset.zip). It is an intersection with 3 modes. There are 8 input points and 12 predicted points.
Samples from the dataset look like this:
When I try to train with the InfoGAN objective only, I get really bad results.
I switched off the social features, and the hyperparameters are the defaults. Do you have an explanation for these results, or do you think there is a bug somewhere?