
wsdec's People

Contributors

xgduan


wsdec's Issues

Segmentation fault (core dumped) even with Cuda-9.0

Thanks for sharing the code!

I ran the training code with CUDA 9.0 under PyTorch 0.3.1 (cuda90 build), but I still hit the bug. Can you tell me which part of the code causes it? I would like to try to fix it.

Thanks.

third_party evaluation script

Hi,

Thank you for sharing your code in such an organized way; it is really helpful! I was trying to run your code and evaluate the result with the provided third-party scripts. However, I encountered an error during evaluation and I do not know how to fix it. Did you encounter the same problem while running the evaluate.py provided by the third party? And if so, how did you solve it?

Here is the error I got while running the evaluate.py script in densevid_eval:
[screenshot: KeyError during evaluation]
Before this, I added `shell=True` to the `subprocess.Popen` calls in both ptbtokenizer.py and meteor.py in order to get past the following two errors:
[screenshot: ptbtokenizer.py error]
[screenshot: meteor.py error]

I am not sure whether my changes to those two files caused the KeyError, so I am wondering whether you encountered this as well. Thanks so much in advance :)
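For reference, a minimal self-contained illustration of the change described above, using `echo` as a stand-in for the Java commands that the real ptbtokenizer.py and meteor.py launch: with `shell=True`, the command should be passed as a single string rather than an argument list, otherwise only the first element is executed on POSIX systems.

```python
import subprocess

# Stand-in command; the real scripts invoke a Java jar for tokenization/METEOR.
cmd = ["echo", "hello"]

# Original style: argument list, shell=False (the default).
out_list = subprocess.Popen(cmd, stdout=subprocess.PIPE).communicate()[0]

# With shell=True, join the list into one string first; passing a list here
# would silently drop the arguments on POSIX systems.
out_shell = subprocess.Popen(" ".join(cmd), shell=True,
                             stdout=subprocess.PIPE).communicate()[0]

print(out_list.decode().strip())
print(out_shell.decode().strip())
```

Both calls print the same output; the difference is only in how the command is handed to the OS, which is why adding `shell=True` without joining the list can itself introduce new errors.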

translator.pkl

Hello, thank you very much for sharing your work. Could you please tell me where to download translator.pkl, the captioning dictionary that maps between words and indexes? Looking forward to your reply.
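For anyone blocked on this, here is a hypothetical sketch of how such a word/index dictionary could be rebuilt from training captions. The structure and key names (`word_to_index`, `index_to_word`) and the special tokens are assumptions; the real translator.pkl shipped with the repo may be organized differently.

```python
import pickle
from collections import Counter

# Toy stand-in for the training captions.
captions = ["a man is cooking", "a man plays guitar"]

# Count words, then build index<->word tables with assumed special tokens.
counts = Counter(w for c in captions for w in c.lower().split())
itow = ["<pad>", "<bos>", "<eos>", "<unk>"] + sorted(counts)
wtoi = {w: i for i, w in enumerate(itow)}

# Persist in the same spirit as translator.pkl (layout is a guess).
with open("translator_demo.pkl", "wb") as f:
    pickle.dump({"word_to_index": wtoi, "index_to_word": itow}, f)

print(len(itow))
```

Rebuilding the dictionary yourself only helps if you also retrain; a pretrained checkpoint expects the exact vocabulary it was trained with.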

Hi! A question about generating captions for full, trimmed videos

Hi!
First, thank you for your awesome and generous GitHub repo!

I am researching caption-to-video generation and need a dataset of paired videos and captions.

This repo gives me hope, but I have a problem.

I have already extracted 500-dim C3D features via C3D and PCA (n_samples * 1 * 500), but I don't have any video_length or video_mask because I used trimmed videos.

So how can I run just the captioning model with your pretrained model?

Thank you for reading :)
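A possible workaround, assuming video_mask is simply a binary validity mask over feature timesteps and video_length the number of valid steps (the shapes below are illustrative, not necessarily the repo's exact tensor layout): for an already-trimmed clip every timestep is valid, so the mask can be all ones.

```python
import numpy as np

# Dummy C3D features following the (n_samples, 1, 500) layout mentioned above;
# here n_samples = 12 feature steps for one trimmed clip.
features = np.random.randn(12, 1, 500).astype(np.float32)

n_steps = features.shape[0]
video_length = np.array([n_steps])                    # whole clip is used
video_mask = np.ones((n_steps, 1), dtype=np.float32)  # every step is valid

print(int(video_length[0]), float(video_mask.sum()))
```

If the model batches clips of different lengths, padded steps beyond `video_length` would be zeroed in the mask; for a single trimmed video no padding is needed.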

Can't reproduce the reported results

Hi,
Thanks for providing the code on GitHub. I am able to run it, but unfortunately I could not reproduce your reported results. Even when I used your pretrained model from OneDrive, the scores (especially CIDEr) are far from the reported numbers. I followed the approach described in the README. Can you please tell me how I can reproduce the results?

What's the train_script/train_cg_pretrain.py?

In the "Training" section of your README, it says to run "python train_script/train_cg_pretrain.py" first. However, there is no file named "train_cg_pretrain.py" in the "train_script" folder. Which file should we run: "train_cg.py" or "train_captionmodel_pretrain.py"?

Evaluation

I have trained the supervised video captioning model using train_sl.py. Which script should I use to evaluate the resulting .ckp checkpoint?

Data question

My goodness! Does this dataset have to be downloaded from outside China?

Questions encountered while reproducing your experiments

Hi! Thanks for releasing your excellent work.
While reproducing your experiments, I encountered several problems.
1) Should the default --translator_path be changed from ./data/translator6000.pkl to ./data/translator.pkl to stay consistent with the dictionary generated in captioning_preprocessing.py? And if so, do I need to change vocab_size accordingly?
2) I did not find a download.sh under the folder ./third_party/densevid_eval as stated in the README. Would you mind giving me a hint about how to get it?
Thanks again for releasing the code. It's really interesting work.
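On question 1), a quick self-contained sanity check one could run: whatever translator pickle is used, vocab_size should match the size of the dictionary inside it. The `word_to_index` key below is hypothetical; inspect the actual ./data/translator.pkl to find its real layout.

```python
import io
import pickle

# Toy translator standing in for ./data/translator.pkl (layout is a guess).
translator = {"word_to_index": {"<pad>": 0, "<unk>": 1, "man": 2, "cooking": 3}}
buf = io.BytesIO()
pickle.dump(translator, buf)
buf.seek(0)

# The vocab_size flag passed to the model should equal this dictionary size.
vocab_size = len(pickle.load(buf)["word_to_index"])
print(vocab_size)
```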
