
Video Frame Prediction

Video Frame Prediction using a Spatio-Temporal Convolutional LSTM
Explore the repository»
View Report

tags : video prediction, frame prediction, spatio temporal, convlstms, generative networks, discriminative networks, movingmnist, deep learning, pytorch

About The Project

Video Frame Prediction is the task of predicting future frames given a set of past frames. Although humans can solve the future frame prediction problem easily and effortlessly, it is extremely challenging for a machine: complexities such as occlusions, camera movement, lighting conditions, and clutter make the task difficult. Predicting the next frames requires accurately learning a representation of the input frame sequence or video. The task is of high interest as it caters to many applications such as autonomous navigation and self-driving. We present a novel Adversarial Spatio-Temporal Convolutional LSTM architecture to predict the future frames of the Moving MNIST Dataset. We evaluate the model on long-term future frame prediction and on out-of-domain inputs by providing sequences on which the model was not trained. A detailed description of the algorithms and an analysis of the results are available in the Report.

Built With

This project was built with

  • python v3.8.5
  • PyTorch v1.7
  • The environment used for developing this project is available at environment.yml.

Getting Started

Clone the repository onto your local machine and enter the src directory using

git clone https://github.com/vineeths96/Video-Frame-Prediction
cd Video-Frame-Prediction/src

Prerequisites

Create a new conda environment and install all the libraries by running the following command

conda env create -f environment.yml

The dataset used in this project (Moving MNIST) will be automatically downloaded and set up in the data directory during execution.
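Moving MNIST sequences are 20-frame, 64x64 grayscale videos of digits bouncing off the frame borders. As a rough illustration of that data format (not the project's actual loader), the sketch below generates a Moving MNIST-style sequence with a synthetic bright patch standing in for a digit:

```python
import numpy as np

def make_moving_sequence(num_frames=20, size=64, patch=8, seed=0):
    """Generate a Moving MNIST-style sequence: a bright patch (a stand-in
    for an MNIST digit) bounces off the frame borders."""
    rng = np.random.default_rng(seed)
    frames = np.zeros((num_frames, size, size), dtype=np.float32)
    # Random initial position and velocity.
    x, y = rng.integers(0, size - patch, size=2)
    vx, vy = rng.choice([-2, -1, 1, 2], size=2)
    for t in range(num_frames):
        frames[t, y:y + patch, x:x + patch] = 1.0
        x, y = x + vx, y + vy
        # Bounce off the borders, as in Moving MNIST.
        if x < 0 or x > size - patch:
            vx, x = -vx, np.clip(x, 0, size - patch)
        if y < 0 or y > size - patch:
            vy, y = -vy, np.clip(y, 0, size - patch)
    return frames

seq = make_moving_sequence()
print(seq.shape)  # (20, 64, 64)
```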

Instructions to run

To train the model on m nodes with g GPUs per node, run the following on each node (with n set to that node's rank),

python -m torch.distributed.launch --nnode=m --node_rank=n --nproc_per_node=g main.py --local_world_size=g

This trains the frame prediction model and saves it in the model directory.

This also generates folders in the results directory at every logging interval. Each folder contains the ground-truth and predicted frames for the train and test datasets. These outputs, along with the loss and metrics, are written to TensorBoard as well.
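torch.distributed.launch spawns one worker process per GPU on each node and forwards each worker its local rank alongside the --local_world_size flag shown above. A minimal sketch of the per-process bookkeeping this implies (the function names and CLI parsing here are illustrative, not the project's actual main.py):

```python
import argparse

def parse_worker_args(argv):
    """Parse the per-process arguments that torch.distributed.launch
    forwards to main.py (illustrative, not the project's exact CLI)."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--local_rank", type=int, default=0)
    parser.add_argument("--local_world_size", type=int, default=1)
    return parser.parse_args(argv)

def devices_for_process(local_rank, local_world_size, gpus_per_node):
    """Partition the node's GPUs evenly among the local processes.
    With one process per GPU (the usual setup), each process gets one device."""
    per_proc = gpus_per_node // local_world_size
    return list(range(local_rank * per_proc, (local_rank + 1) * per_proc))

args = parse_worker_args(["--local_rank", "1", "--local_world_size", "4"])
print(devices_for_process(args.local_rank, args.local_world_size, 4))  # [1]
```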

Model overview

The architecture of the model is shown below. The frame predictor model takes in the first ten frames as input and predicts the future ten frames. The discriminator model tries to classify between the true future frames and predicted future frames. For the first ten time instances, we use the ground-truth past frames as input, whereas for the future time instances, we use the past predicted frames as input.

[Model architecture diagram]
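The building block behind this predictor is the ConvLSTM cell, in which the LSTM gates are computed with convolutions so the hidden state keeps its spatial layout. The sketch below shows one such cell in PyTorch (a generic ConvLSTM cell for illustration, not the project's exact model):

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A single ConvLSTM cell: LSTM gating computed with a convolution,
    so hidden and cell states are feature maps rather than flat vectors."""

    def __init__(self, in_ch, hidden_ch, kernel=3):
        super().__init__()
        self.hidden_ch = hidden_ch
        # One convolution produces all four gates at once.
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch,
                               kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# One step on a batch of two 64x64 single-channel frames.
cell = ConvLSTMCell(in_ch=1, hidden_ch=8)
x = torch.zeros(2, 1, 64, 64)
h = c = torch.zeros(2, 8, 64, 64)
h, c = cell(x, (h, c))
print(h.shape)  # torch.Size([2, 8, 64, 64])
```

Stacking such cells over time gives the spatio-temporal predictor; the discriminator then scores the resulting frame sequences.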

Results

Detailed results and inferences are available in the Report.

We evaluate the performance of the model for long-term predictions to reveal its generalization capabilities. We provide the first 20 frames as input and let the model predict for the next 100 frames.
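Because the predictor is autoregressive, rolling it out beyond the training horizon simply means feeding each predicted frame back in as input. A sketch of that rollout loop, with a stand-in one-step predictor where the trained model would go (`predict_next` is a placeholder, not the project's API):

```python
import numpy as np

def predict_next(context):
    """Stand-in one-step model: returns the last frame unchanged.
    In the real setup this is the trained ConvLSTM predictor."""
    return context[-1]

def rollout(frames, num_future):
    """Extend a sequence by feeding the model its own predictions,
    e.g. 20 context frames -> 100 predicted frames."""
    context = list(frames)
    predictions = []
    for _ in range(num_future):
        nxt = predict_next(context)
        predictions.append(nxt)
        context.append(nxt)  # the prediction becomes the next input
    return np.stack(predictions)

context = np.random.rand(20, 64, 64).astype(np.float32)
future = rollout(context, num_future=100)
print(future.shape)  # (100, 64, 64)
```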

Ground truth frames (1-10):

[Ground-truth frame sequence]

Predicted frames (2-101):

[Predicted frame sequence]

We evaluate the performance of the model on out-of-domain inputs which the model has not seen during the training. We provide a frame sequence with one moving digit as input and observe the outputs from the model.

Ground truth frames (1-10):

[Ground-truth frame sequence]

Predicted frames (2-41):

[Predicted frame sequence]

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Vineeth S - [email protected]

Project Link: https://github.com/vineeths96/Video-Frame-Prediction

Acknowledgments

Base code is taken from:

https://github.com/JaMesLiMers/Frame_Video_Prediction_Pytorch

