
iclr2017mcnet's People

Contributors

rubenvillegas


iclr2017mcnet's Issues

pre_trained model

Hello, I think the pre-trained model has already been removed. Every time I run "./models/paper_models/download.sh", it returns a 403 Forbidden error.

Evaluating trained model with KTH fails

I am getting the following error when trying to evaluate a model that I trained:

Traceback (most recent call last):
  File "/home/szetor/Documents/video_prediction/iclr2017mcnet/src/test_kth.py", line 182, in <module>
    main(**vars(args))
  File "/home/szetor/Documents/video_prediction/iclr2017mcnet/src/test_kth.py", line 51, in main
    save_path = quant_dir+"results_model="+best_model+".npz"
TypeError: cannot concatenate 'str' and 'NoneType' objects

I assume that at some point before that line, best_model is supposed to be set to "MCNET.model-<iter_num>", but that never happens. I suspect the same bug is in test_ucf101.py.
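As a temporary workaround until the selection logic is fixed, best_model could be set explicitly before that line. A minimal sketch (the directory and checkpoint name below are illustrative, not the script's actual values):

```python
# Workaround sketch: fall back to an explicit checkpoint name when the
# model-selection code never assigns best_model. Paths are illustrative.
quant_dir = "../results/quantitative/KTH/"
best_model = None  # as left by the original script

if best_model is None:
    best_model = "MCNET.model-102502"  # checkpoint you want to evaluate

save_path = quant_dir + "results_model=" + best_model + ".npz"
print(save_path)
```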

Documentation suggestions

Hi Ruben,

I would like to recommend adding a few dependencies that were not mentioned in the main README file:

# Needed to run test code
pip install opencv-python
# Needed to download Sports1M
pip install pytube
# Needed to extract UCF-101
sudo apt-get install unrar

The command to extract the UCF-101 .rar file may be helpful as well:

unrar x UCF101.rar

Finally, instructions on reading the .npz files would be useful:

import numpy as np
a = np.load('results_model=MCNET.model-102502.npz')
print(a['psnr'])
print(a['ssim'])
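If the stored arrays hold one row per test clip and one column per predicted frame (an assumption; check the shapes in your own .npz files), per-timestep curves can be obtained by averaging over clips:

```python
import numpy as np

# Stand-in for np.load('results_model=...npz'); assumes arrays of shape
# (num_clips, T), one PSNR/SSIM value per clip per predicted frame.
results = {"psnr": np.ones((5, 20)), "ssim": np.ones((5, 20))}

mean_psnr = results["psnr"].mean(axis=0)  # average PSNR per timestep
mean_ssim = results["ssim"].mean(axis=0)  # average SSIM per timestep
print(mean_psnr.shape)  # (20,)
```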

preprocessing for KTH Dataset

Hi,
The paper says that "Sub-clips for running, jogging and walking were manually trimmed to ensure humans are always present in the frames."
Does this mean that frames containing a partially visible human (mostly the beginning and end frames) were trimmed too?

Thanks

About another paper

It is wonderful work! I notice that another paper, "Learning to Generate Long-term Future via Hierarchical Prediction", is also yours, but you haven't released code for it. Will you release it in the future? If so, could you release it as soon as possible? Or could you tell me how you processed the Penn Action dataset in that paper?

Pretrained model which can be trained further

Hi,
I'm trying to load the pretrained model (on the S1M dataset) you've provided and train it further on another dataset (PENN) instead of starting from scratch. But when creating the MCNET model, if I pass is_train=True, I get an error that the checkpoint doesn't have all the variables:
NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint.

Could you kindly provide a pretrained model that can be loaded and trained further? Or can I make some changes to the code to achieve this?
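One possible workaround (a sketch, not the author's method): restore only the intersection of the graph's variables with those stored in the checkpoint, so that the optimizer variables created under is_train=True are simply skipped. In TF 1.x, the checkpoint contents can be listed with tf.train.NewCheckpointReader(ckpt_path).get_variable_to_shape_map() and the filtered list passed as var_list to tf.train.Saver. The name-filtering step looks like this (the dicts below are illustrative stand-ins for the TF calls):

```python
# In a real script these would come from TensorFlow 1.x:
#   ckpt_vars = tf.train.NewCheckpointReader(ckpt_path).get_variable_to_shape_map()
#   graph_var_names = [v.op.name for v in tf.global_variables()]
# Illustrative stand-ins so the filtering logic can be shown self-contained:
ckpt_vars = {"motion_enc/conv1/w": [5, 5, 1, 64],
             "content_enc/conv1/w": [3, 3, 3, 64]}
graph_var_names = ["motion_enc/conv1/w", "content_enc/conv1/w",
                   "Adam/beta1_power"]  # optimizer slot absent from checkpoint

# Keep only variables present in both the graph and the checkpoint; a
# Saver built with this var_list will not raise NotFoundError on restore.
restore_names = [n for n in graph_var_names if n in ckpt_vars]
print(restore_names)
```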

RuntimeError

When I ran this code, I encountered the following problem: RuntimeError: The ffmpeg plugin does not work on Python 2.x. But when I start python and run import ffmpeg, there is no problem. I can't figure out the reason; can someone help me? Thanks!

Training/Testing mcnet for different values of K and T

@rubenvillegas, is it possible to train and test MCnet with different values of K and T? I have trained and tested on an RGB dataset with the given parameters K=4 and T=1, but when I change the K and T parameters, the model does not build. Do you suggest any configuration changes to the model?
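For context, K and T control how each training clip is split: the first K frames condition the model and the next T frames are the prediction targets, so the data loader must produce clips of K+T frames. A minimal sketch of that split (the frame size and the K=10, T=10 values are illustrative, taken from the paper's KTH setting):

```python
import numpy as np

K, T = 10, 10                          # e.g. the KTH setting from the paper
clip = np.zeros((K + T, 128, 128, 1))  # illustrative clip of K+T gray frames
cond, target = clip[:K], clip[K:]      # condition on K frames, predict T

print(cond.shape[0], target.shape[0])  # 10 10
```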

Input data

Hi, according to the paper, the inputs are the feature maps up to the 3rd pooling layer of VGG-16, and I understand motion_enc serves this purpose. I am wondering whether the pre-trained model includes weights from VGG-16 pretrained on ImageNet, or whether you train it from scratch. Thanks for your time.
