

Awesome Video Domain Adaptation

MIT License

This repo is a comprehensive collection of awesome research (papers, codes, etc.) and other items about video domain adaptation.

Our comprehensive survey on Video Unsupervised Domain Adaptation with Deep Learning is now available. Please check our paper on arXiv.

Domain adaptation has been a focus of research in transfer learning: it improves model robustness, which is crucial for deploying models in real-world applications. Despite a long history of domain adaptation research, there has been limited discussion of video domain adaptation. This repo aims to present a collection of research on video domain adaptation, including papers, code, etc.
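Many video domain adaptation methods work by aligning the feature distributions of the source and target domains. One common discrepancy measure used for such alignment is the maximum mean discrepancy (MMD). The sketch below is illustrative only (the feature shapes and names are hypothetical, not taken from any paper listed here); it shows how MMD with an RBF kernel shrinks as two feature distributions get closer:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared MMD between source features X and target features Y, RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 16))       # hypothetical clip-level source features
tgt_near = rng.normal(0.1, 1.0, size=(64, 16))  # target domain close to the source
tgt_far = rng.normal(2.0, 1.0, size=(64, 16))   # target domain far from the source

# A smaller MMD indicates better-aligned domains; DA methods minimize such a term.
print(rbf_mmd2(src, tgt_near) < rbf_mmd2(src, tgt_far))
```

In adaptation methods, a term like `rbf_mmd2` is typically minimized jointly with the task loss so that source and target video features become indistinguishable.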

Feel free to star, fork, or raise an issue to include your research or to suggest more categories! Discussion is most welcome!

Contents

Explanatory Notes

This repository categorizes video domain adaptation papers according to the domain adaptation scenario (i.e., closed-set, partial-set, source-free, etc.), sorted by date of publication/public appearance. These include semi-supervised, weakly-supervised, and unsupervised DA. By default, VDA research focuses on action recognition; papers addressing other tasks are annotated accordingly.

Note: This repository is inspired by the ADA repository, a repository with awesome domain adaptation papers. For more research on domain adaptation (with images/point cloud etc.), you may check out that repository.

Papers

Closed-set VDA

Conference

Journal

ArXiv and Workshops

Partial-set VDA

Conference

Open-set VDA

Conference

Journal

Multi-Source VDA

ArXiv and Workshops

Source-Free or Test-time VDA

Conference

Target-Free VDA

Conference

Few-shot VDA

Conference

Continual VDA

Conference

ArXiv and Workshops

Zero-shot VDA (Video Domain Generalization)

Conference

Journal

Multi-Modal VDA

The modalities used are noted for each entry.

Conference

Other Topics in Video Transfer Learning

Conference

Journal

ArXiv

Datasets and Benchmarks

We collect relevant datasets designed for video domain adaptation. By default, the datasets target closed-set video domain adaptation for action recognition. Note that downloading some datasets may require permission. You are advised to first download the common action recognition datasets (e.g., HMDB51, UCF101, Kinetics) on which these cross-domain video datasets are built.

2024

2023

2021-2022

2018-2020

Before 2015

Useful Tools and Other Resources

Challenges for Video Domain Adaptation

Note: these are the latest editions of the respective challenges; please check their previous editions through the respective websites.


