Comments (12)

dyelax avatar dyelax commented on September 24, 2024

@AdarshMJ Each of those eight folders holds the images for a single sample (ie. the four input images + the generated / ground-truth next frame at each scale for those inputs)

from adversarial_video_generation.
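The per-sample directory layout described above can be sketched in a few lines. This is illustrative only: the filenames and the default of 4 scales are assumptions, not the repo's exact naming.

```python
def expected_sample_files(num_inputs=4, num_scales=4):
    """Sketch of what one sample directory holds: the input frames,
    plus a generated and a ground-truth next frame at each scale.
    Filenames here are hypothetical, not the repo's actual ones."""
    files = [f"input_{i}.png" for i in range(num_inputs)]
    for s in range(num_scales):
        files.append(f"gen_scale_{s}.png")
        files.append(f"gt_scale_{s}.png")
    return files

print(len(expected_sample_files()))  # 4 inputs + 2 files per scale x 4 scales = 12
```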

AdarshMJ avatar AdarshMJ commented on September 24, 2024

But my input samples are around 200 images. So for all 200 images, is it going to generate 8 output folders?


dyelax avatar dyelax commented on September 24, 2024

It just saves each sample in the test batch (which is of size 8 in this case) as its own directory. Look at test_batch in g_model.py and where it is called in avg_runner.py.


AdarshMJ avatar AdarshMJ commented on September 24, 2024

Thank you so much for your patience and for taking the time to answer my questions. I'm sorry if I keep asking trivial questions.

So say I have 36 sub-folders in my Test main folder, and each sub-folder has over 200 image frames. That means I have over 7,200 images as my test set. So it keeps taking 8 images as its subset from those 7,200 images and performs frame prediction?

Also, how many frames does it predict? By taking 4 input frames, is it able to predict the next frame, i.e. the fifth frame?


AdarshMJ avatar AdarshMJ commented on September 24, 2024

[Screenshot: training output, 2018-03-17 8:35:04 AM]

Also, I want to know whether these steps that I'm saving correspond to just one image of the test set, or whether it takes 8 different images every time?


AdarshMJ avatar AdarshMJ commented on September 24, 2024

When running the code in test-only mode, the model takes 4 input images and generates 1 next frame. Is there a way to generate more than one next frame?


dyelax avatar dyelax commented on September 24, 2024

If you have 36 folders in your Test folder, that means you have 36 videos. The test_batch function will pick test_batch_size (in this case, 8) of those videos at random, and pull one input sample (4 consecutive input images) from each. It then uses each sample to predict the next (fifth) frame for each. It writes the inputs and predicted outputs for each sample to one of those 0, 1, ..., 7 directories.


dyelax avatar dyelax commented on September 24, 2024

You can specify how many next frames to predict using the --recursions flag.

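Recursive prediction of this kind can be sketched generically: the model's own output is fed back in as the newest input frame. The `model` callable below is hypothetical (any function mapping 4 frames to 1 predicted frame), not the repo's API.

```python
def predict_recursively(model, frames, recursions):
    """Sketch of multi-frame prediction via recursion: after each
    prediction, append it to the history and predict again from the
    most recent 4 frames."""
    history = list(frames)
    predictions = []
    for _ in range(recursions):
        next_frame = model(history[-4:])  # predict from the last 4 frames
        predictions.append(next_frame)
        history.append(next_frame)        # feed the prediction back in
    return predictions
```

Note that quality typically degrades with each recursion, since errors in earlier predictions compound.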

AdarshMJ avatar AdarshMJ commented on September 24, 2024

Thank you so much. I have a much clearer understanding now. But I wanted to ask: how will the model converge? I mean, should it run for some 50,000 steps and then stop, or what is the criterion for convergence?


dyelax avatar dyelax commented on September 24, 2024

On the Ms. Pac-Man dataset that I used, it took about 500,000 steps to converge. It looks like you are using real-world video, so I imagine convergence will be different (and you'll probably need different hyperparameters than I used). There's no set criterion for convergence; just watch the TensorBoard loss graph and the test image outputs, and stop training when it looks like it isn't improving anymore.

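The "stop when it isn't improving anymore" judgment can be complemented with a simple plateau check on logged loss values. This is a generic sketch, not anything the repo implements; the patience and min_delta thresholds are arbitrary assumptions.

```python
def should_stop(losses, patience=5, min_delta=1e-4):
    """Sketch of a plateau test: stop when the best loss over the last
    `patience` recorded values is not at least min_delta better than
    the best loss seen before that window."""
    if len(losses) <= patience:
        return False
    best_before = min(losses[:-patience])
    recent_best = min(losses[-patience:])
    return best_before - recent_best < min_delta
```

For a GAN like this one, the loss curves alone can be misleading, so inspecting the generated test images remains the more reliable signal.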

AdarshMJ avatar AdarshMJ commented on September 24, 2024

I wanted to know whether future frames will be predicted for any given frame, or whether it depends on the training data. For example, if my training set has only normal frames, and for testing I give the network anomalous frames, will the network predict a normal version of the anomalous frames, or is it capable of generating future frames of the anomaly-ridden frames?


dyelax avatar dyelax commented on September 24, 2024

As with all machine learning, performance on new data (your test set) is completely dependent on what is able to be learned from your training data. It's hard to say without knowing what the differences between your normal and anomaly data are. The closer the anomalies are to what the model has seen before, the more accurate the test results will be. The best way to find out is to test it yourself.

I'm going to close this issue, but feel free to email me ([email protected]) with any other questions.
