SAVFI - Meta-Learning for Video Frame Interpolation

Myungsub Choi, Janghoon Choi, Sungyong Baik, Tae Hyun Kim, Kyoung Mu Lee

Source code for CVPR 2020 paper "Scene-Adaptive Video Frame Interpolation via Meta-Learning"

Project | Paper-CVF | Paper-ArXiv | Supp


Requirements

  • Ubuntu 18.04
  • Python==3.7
  • numpy==1.18.1
  • PyTorch==1.4.0, cudatoolkit==10.1
  • opencv==3.4.2
  • cupy==7.3 (recommended: conda install cupy -c conda-forge)
  • tqdm==4.44.1

For [DAIN], the environment is different; please check dain/dain_env.yml for the requirements.
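
For reference (excluding DAIN, which uses its own environment file), an environment matching the versions above can be set up roughly as follows. This is only a sketch: the conda channels and exact package names are assumptions and may need adjusting for your CUDA/OS setup.

    # Sketch: create an environment matching the requirements listed above
    conda create -n savfi python=3.7
    conda activate savfi
    conda install pytorch==1.4.0 cudatoolkit=10.1 -c pytorch
    conda install numpy==1.18.1 tqdm==4.44.1 opencv==3.4.2
    conda install cupy -c conda-forge   # recommended way to install cupy (see above)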

Usage

Disclaimer: This code has been reorganized so that multiple different models can be run from this single codebase. Due to many version and environment changes, the numbers obtained from this code may differ (usually for the better) from those reported in the paper. The original code modified the main training script of each frame interpolation GitHub repo ([DVF (voxelflow)], [SuperSloMo], [SepConv], [DAIN]); those modified scripts are kept in ./legacy/*.py. If you want to reproduce the numbers reported in our paper exactly, please contact @myungsub for the legacy experimental settings.

Dataset Preparation

  • We use the [ Vimeo90K Septuplet dataset ] for training and testing
    • After downloading the full dataset, make symbolic links in data/ folder:
      • ln -s /path/to/vimeo_septuplet_data/ ./data/vimeo_septuplet
  • For further evaluation, the Middlebury-OTHERS and HD datasets can also be used (see the Results section below)
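
Concretely, the linking step looks like this (the source path is a placeholder; the layout in the comments is the standard Vimeo90K septuplet release and is shown only as a sanity check):

    # Link the downloaded septuplet data into ./data/
    mkdir -p data
    ln -s /path/to/vimeo_septuplet_data/ ./data/vimeo_septuplet
    # Expected contents (assumed standard Vimeo90K septuplet layout):
    #   data/vimeo_septuplet/sequences/<video>/<clip>/im1.png ... im7.png
    #   data/vimeo_septuplet/sep_trainlist.txt and sep_testlist.txt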

Frame Interpolation Model Preparation

  • Download pretrained models from [Here], and save them to ./pretrained_models/*.pth
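
For example (the download location and checkpoint file names are placeholders; only the target directory comes from the instruction above):

    # Place the downloaded checkpoints under ./pretrained_models/
    mkdir -p pretrained_models
    mv /path/to/downloaded_checkpoints/*.pth ./pretrained_models/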

Training / Testing with Vimeo90K-Septuplet dataset

  • For training, simply run: ./scripts/run_{VFI_MODEL_NAME}.sh
    • Currently supports: sepconv, voxelflow, superslomo, cain, and rrin
    • Other models are coming soon!
  • For testing, uncomment the two lines containing --mode val and --pretrained_model {MODEL_NAME} in the same script (see the example below)
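
For example, to train and then evaluate the SepConv-based model (sepconv is one of the supported model names above; the lines to uncomment live inside the script itself):

    # Training
    ./scripts/run_sepconv.sh

    # Testing: edit scripts/run_sepconv.sh, uncomment the two lines containing
    #   --mode val
    #   --pretrained_model {MODEL_NAME}
    # and run the script again
    ./scripts/run_sepconv.sh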

Testing with custom data

  • See scripts/run_test.sh for details; a sketch of a full invocation follows this list
  • Things to change:
    • Point --data_root to the directory containing your video frames
    • Make sure to match the image format --img_fmt (defaults to png)
    • Change --model, --loss, and --pretrained_model to what you want
      • For SepConv, --model should be sepconv and --loss should be 1*L1
      • For VoxelFlow, --model should be voxelflow and --loss should be 1*MSE
      • For SuperSloMo, --model should be superslomo and --loss should be 1*Super
      • For DAIN, --model should be dain and --loss should be 1*L1
      • For CAIN, --model should be cain and --loss should be 1*L1
      • For RRIN, --model should be rrin and --loss should be 1*L1
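
Putting the options together, a custom-data evaluation of CAIN might look like the sketch below. The entry point (main.py) and the checkpoint path are assumptions; treat scripts/run_test.sh as the authoritative command and only change the flags listed above.

    # Hypothetical invocation for CAIN on a folder of custom frames
    # (main.py as the entry point is an assumption; see scripts/run_test.sh)
    python main.py \
        --mode val \
        --data_root /path/to/your/frames/ \
        --img_fmt png \
        --model cain \
        --loss 1*L1 \
        --pretrained_model ./pretrained_models/cain.pth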

Using Other Meta-Learning Algorithms

  • The current code supports more advanced meta-learning algorithms than vanilla MAML, e.g. MAML++, L2F, or Meta-SGD.
    • For MAML++, you can explore many different hyperparameters by adding additional options (see config.py)
    • For L2F, just uncomment --attenuate in scripts/run_{VFI_MODEL_NAME}.sh
    • For Meta-SGD, just uncomment --metasgd (this usually gives the best performance; see the sketch below)
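
In other words, switching between variants only requires toggling a flag in the training script. The sketch below lists the relevant flags; the surrounding script layout is an assumption, and the flag names are the ones given above.

    # Inside scripts/run_{VFI_MODEL_NAME}.sh, enable at most one of:
    #   --attenuate    # L2F
    #   --metasgd      # Meta-SGD (usually the best performance)
    # With neither flag, vanilla MAML is used; MAML++ behaviour is controlled
    # by the additional hyperparameter options in config.py.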

Framework Overview

Results

  • Qualitative results for VimeoSeptuplet dataset

  • Qualitative results for Middlebury-OTHERS dataset

  • Qualitative results for HD dataset

Additional Results Video


Citation

If you find this code useful for your research, please consider citing the following paper:

@inproceedings{choi2020meta,
    author = {Choi, Myungsub and Choi, Janghoon and Baik, Sungyong and Kim, Tae Hyun and Lee, Kyoung Mu},
    title = {Scene-Adaptive Video Frame Interpolation via Meta-Learning},
    booktitle = {CVPR},
    year = {2020}
}

Acknowledgement

The main structure of this code is based on MAML++. Training scripts for each of the frame interpolation methods are adapted from: [DVF], [SuperSloMo], [SepConv], [DAIN], [CAIN], [RRIN]. We thank the authors for sharing the code for their great work.
