

VGT: Video Graph Transformer for Video Question Answering

Introduction

Existing transformer-style models have only demonstrated success in answering questions that involve coarse recognition or scene-level description of video content. Their performance remains either unknown or weak on questions that emphasize fine-grained visual relation reasoning, especially the causal and temporal relations that characterize video dynamics at the action and event level. In this paper, we propose the Video Graph Transformer (VGT) model to advance VideoQA from coarse recognition and scene-level description to fine-grained visual relation reasoning and cognition-level understanding of dynamic visual content. Specifically, we make the following contributions:

  • We design dynamic graph transformer (DGT) to encode visual graph dynamics for relation reasoning in space-time.
  • We demonstrate that supervised contrastive learning significantly outperforms classification for multi-choice cross-modal video understanding. Also, a fine-grained cross-modal interaction can help improve performance.
  • We demonstrate that pretraining visual graph transformer can benefit video-language understanding towards a more data-efficient and fine-grained direction.
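The multi-choice contrastive objective mentioned in the second bullet can be illustrated with a small sketch: score each candidate answer against the fused video-question representation, then apply a temperature-scaled softmax cross-entropy that pulls the correct answer's similarity up and pushes the distractors down. This is a stdlib-only illustration of the objective, not the authors' implementation; the similarity values and temperature below are made up.

```python
import math

def contrastive_loss(sims, correct_idx, temperature=0.1):
    """Softmax cross-entropy over candidate-answer similarities.

    sims: illustrative cosine similarities between the fused
    video-question embedding and each candidate answer. The loss is
    -log p(correct), where p is a temperature-scaled softmax over all
    candidates, so the correct answer acts as the positive and the
    distractors as negatives.
    """
    logits = [s / temperature for s in sims]
    m = max(logits)                      # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    p_correct = exps[correct_idx] / sum(exps)
    return -math.log(p_correct)

# Toy example: 5 candidate answers, the 2nd (index 1) is correct.
sims = [0.31, 0.62, 0.28, 0.40, 0.05]
loss = contrastive_loss(sims, correct_idx=1)
```

Minimizing this loss widens the similarity gap between the correct answer and the distractors, which is the behavior the contrastive formulation exploits over plain classification.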

See our poster at ECCV'22 for a quick overview of the work. An extended version, CoVGT, is also available.

(Figure: VGT vs. VGT without DGT.)

Todo

  1. Release the features of MSRVTT-QA.

Environment

Assuming you have installed Anaconda, do the following to set up the environment:

>conda create -n videoqa python==3.8.8
>conda activate videoqa
>git clone https://github.com/sail-sg/VGT.git
>pip install -r requirements.txt

Preparation

Please create a data folder outside this repo, so that your workspace contains two folders: 'workspace/data/' and 'workspace/VGT/'.

Below we use NExT-QA as an example to get you familiar with the code. Please download the related video features and QA annotations via the links provided in the Results and Resources section. Extract the QA annotations into workspace/data/datasets/nextqa/, the video features into workspace/data/feats/nextqa/, and the checkpoint files into workspace/data/save_models/nextqa/.
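The layout above can be created in one step; here is a small sketch (the paths mirror those listed above, but the annotation, feature, and checkpoint files themselves still need to be downloaded and extracted manually):

```python
from pathlib import Path

# Workspace root that holds both 'data/' and the cloned 'VGT/' repo.
workspace = Path("workspace")

# Directories expected by the NExT-QA example in this README.
for sub in ("datasets/nextqa", "feats/nextqa", "save_models/nextqa"):
    (workspace / "data" / sub).mkdir(parents=True, exist_ok=True)
```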

Inference

./shell/next_test.sh 0

Evaluation

python eval_next.py --folder VGT --mode val

Results and Resources

Table 1. VideoQA Accuracy (%).

Cross-Modal Pretrain   NExT-QA   TGIF-QA (Action)   TGIF-QA (Trans)   TGIF-QA (FrameQA)   TGIF-QA-R* (Action)   TGIF-QA-R* (Trans)   MSRVTT-QA
-                      53.7      95.0               97.6              61.6                59.9                  70.5                 39.7
WebVid0.18M            55.7      -                  -                 -                   60.5                  71.5                 -
Downloads              feats     feats              feats             feats               feats                 feats                feats
                       videos    videos             videos            videos              videos                videos               videos
                       Q&A       Q&A                Q&A               Q&A                 Q&A                   Q&A                  Q&A
(We have merged some files of the same dataset to avoid too many links. *: TGIF-QA-R resolves the answer-bias issue in TGIF-QA by regenerating the distractor answers.)

Train

We have provided all the training scripts in the 'shells' folder; start training by appending the GPU ID to the script. (If you have multiple GPUs, separate the IDs with commas: ./shell/nextqa_train.sh 0,1)

./shell/nextqa_train.sh 0

This will train the model and save it to the folder 'save_models/nextqa/'. Please follow a two-stage training scheme: first train the full model, then freeze the language model and finetune from the best checkpoint obtained in the first stage.
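The two-stage scheme above boils down to toggling which parameters stay trainable between stages. The sketch below illustrates the idea with stdlib stand-ins; the parameter names ("lang." prefix, "dgt.", "head.") are made up for illustration and are not the repo's actual module names.

```python
from dataclasses import dataclass

@dataclass
class Param:
    """Toy stand-in for a model parameter with a trainable flag."""
    name: str
    requires_grad: bool = True

# Toy parameter list standing in for the full model's parameters.
params = [Param("lang.embed"), Param("lang.layer0"),
          Param("dgt.graph"), Param("head.fc")]

def trainable(ps):
    return [p.name for p in ps if p.requires_grad]

# Stage 1: train everything end to end.
stage1 = trainable(params)

# Stage 2: freeze the language model, then finetune the remaining
# parameters starting from the best stage-1 checkpoint.
for p in params:
    if p.name.startswith("lang."):
        p.requires_grad = False
stage2 = trainable(params)
```

In a PyTorch model the stage-2 step would correspond to setting `requires_grad = False` on the language encoder's parameters before building the stage-2 optimizer.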

Result Visualization (NExT-QA)

(Figure: VGT vs. VGT without DGT.)

Citation

@inproceedings{xiao2022video,
  title={Video Graph Transformer for Video Question Answering},
  author={Xiao, Junbin and Zhou, Pan and Chua, Tat-Seng and Yan, Shuicheng},
  booktitle={European Conference on Computer Vision},
  pages={39--58},
  year={2022},
  organization={Springer}
}

Acknowledgements

Some code is taken from VQA-T, and our video feature extraction is inspired by HQGA. We thank the authors for their great work and code.

Notes

If you use any resources (feature & code & models) from this repo, please kindly cite our paper and acknowledge the source.

License

This repository is released under the Apache 2.0 license as found in the LICENSE file.


Issues

Val Accuracy

Thank you for sharing your program. When I trained the model without CM-pretrain under the default settings, I got a val accuracy of 54.22. How can I reproduce the 55.02 presented in the paper? Thank you for your help!


Low Test Accuracy

After downloading all the required files for nextqa, I executed the command:

./shells/next_test.sh 0

but found that the accuracy is only 19.47%.

And when I removed the "pretrain_path" line in next_test.sh and ran the command again, I got an accuracy of 20.04%.

So I wonder whether fine-tuning is required for the given model "VGT_B5" on nextqa.

Object alignment

Hi, thank you for your great work.
Have you aligned objects across frames (4 frames per clip, 8 clips per video) in your nextqa feature file?

features of TGIF dataset

Hello, I need the feature files for TGIF-QA.
Could you provide the feature files in h5 format, like those for the nextqa dataset, please?
