Comments (12)
@AdarshMJ Each of those eight folders holds the images for a single sample (i.e., the four input images plus the generated and ground-truth next frames at each scale for those inputs).
from adversarial_video_generation.
But the input samples are around 200 images. So for all 200 images, is it going to generate 8 output files?
It just saves each sample in the test batch (which is size 8 in this case) to its own directory. Look at test_batch in g_model.py and where it is called in avg_runner.py.
Thank you so much for your patience and for taking the time to answer my questions. I'm sorry if I keep asking trivial questions.
So say I have 36 sub-folders in my Test main folder, and each sub-folder has over 200 image frames. That means I have 7200 images in my test set. Does it keep taking 8 images as a subset of those 7200 images and performing frame prediction?
Also, how many frames does it predict? By taking 4 input frames, is it able to predict the next frame, i.e., the fifth frame?
Also, I want to know whether the steps I'm saving correspond to just one image of the test set, or whether it takes 8 different images every time.
When running the code in test-only mode, the model takes 4 input images and generates 1 next frame. Is there a way to generate more than one future frame?
If you have 36 folders in your Test folder, that means you have 36 videos. The test_batch function will pick test_batch_size (in this case, 8) of those videos at random, and pull one input sample (4 consecutive input images) from each. It then uses each sample to predict the next (fifth) frame. It writes the inputs and predicted outputs for each sample to one of those 0, 1, ..., 7 directories.
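The sampling logic described above can be sketched roughly like this (illustrative code, not the repo's actual test_batch implementation; it assumes one sub-folder of frame files per video, with lexicographically sortable filenames):

```python
import os
import random

def sample_test_batch(test_dir, batch_size=8, hist_len=4):
    """Pick batch_size videos at random from test_dir, and pull one
    sample from each: hist_len consecutive input frames plus the
    immediately following frame as the ground-truth next frame."""
    videos = [os.path.join(test_dir, d) for d in sorted(os.listdir(test_dir))]
    batch = []
    for video in random.sample(videos, batch_size):
        frames = sorted(os.listdir(video))
        # Choose a random start so that hist_len inputs plus one
        # ground-truth frame all fit within the video.
        start = random.randint(0, len(frames) - hist_len - 1)
        inputs = frames[start:start + hist_len]
        gt = frames[start + hist_len]
        batch.append((inputs, gt))
    return batch
```

So with 36 videos of 200+ frames each, every call draws 8 fresh (4-input, 1-ground-truth) samples; the 7200 test images are never all used at once.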
You can specify how many next frames to predict using the --recursions flag.
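Recursive prediction amounts to feeding each predicted frame back in as the newest input. A minimal sketch, where generator stands in for the trained model (any callable mapping 4 frames to the next frame; not the repo's actual API):

```python
def predict_recursively(generator, input_frames, recursions):
    """Generate `recursions` future frames by sliding a fixed-length
    window: each prediction replaces the oldest input frame."""
    window = list(input_frames)            # e.g. the 4 most recent frames
    preds = []
    for _ in range(recursions):
        next_frame = generator(window)     # predict one step ahead
        preds.append(next_frame)
        window = window[1:] + [next_frame] # slide the window forward
    return preds
```

Note that errors compound: each recursion conditions on previously generated frames, so quality typically degrades as you predict further into the future.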
Thank you so much. I have a much clearer understanding now. But I wanted to ask: how will the model converge? Should it run some 50,000 steps and then stop, or what is the criterion for convergence?
On the Ms. Pac-Man dataset that I used, it took about 500,000 steps to converge. It looks like you are using real-world video, so I imagine convergence will be different (and you'll probably need different hyperparameters than I used). There's no set criterion for convergence. Just watch the TensorBoard loss graph and the test image outputs, and stop training when it looks like it isn't improving anymore.
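If you want to automate the "stop when it isn't improving" judgment, a simple patience-based heuristic is one option. This is a generic sketch, not part of the repo:

```python
def should_stop(loss_history, patience=10, min_delta=1e-4):
    """Return True when the loss has not improved by at least
    min_delta over the last `patience` recorded checks, compared
    with the best loss seen before that window."""
    if len(loss_history) <= patience:
        return False  # not enough history to judge yet
    best_before = min(loss_history[:-patience])
    recent_best = min(loss_history[-patience:])
    return recent_best > best_before - min_delta
```

For GANs in particular, the adversarial losses oscillate rather than decrease monotonically, so eyeballing the generated test frames (as suggested above) is often more reliable than any single loss curve.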
I wanted to know whether the future frames will be predicted for any given frame, or whether that depends on the training data. For example, if my training set has only normal frames and I test the network on anomalous frames, will the network predict a normal version of the anomalous frames, or is it capable of generating future frames for the anomaly-ridden frames?
As with all machine learning, performance on new data (your test set) depends entirely on what can be learned from your training data. It's hard to say without knowing the differences between your normal and anomaly data. The closer the anomalies are to what the model has seen before, the more accurate the test results will be. The best way to find out is to test it yourself.
I'm going to close this issue, but feel free to email me ([email protected]) with any other questions.