
mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023)

https://arxiv.org/abs/2302.00402

Introduction

We present mPLUG-2, a new unified paradigm with a modularized design for multi-modal pre-training, which benefits from modality collaboration while addressing the problem of modality entanglement. In contrast to the predominant paradigms that rely solely on sequence-to-sequence generation or encoder-based instance discrimination, mPLUG-2 introduces a multi-module composition network that shares common universal modules for modality collaboration and disentangles modality-specific modules to deal with modality entanglement. Different modules can be flexibly selected for different understanding and generation tasks across all modalities, including text, image, and video. mPLUG-2 achieves state-of-the-art or competitive results on a broad range of over 30 downstream tasks, spanning multi-modal image-text and video-text understanding and generation, as well as uni-modal text-only, image-only, and video-only understanding.
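The sketch below only illustrates the module-composition idea in simplified form; the class and attribute names are illustrative and do not correspond to mPLUG-2's actual implementation.

```python
# Conceptual sketch only: names are illustrative, not mPLUG-2's actual classes.
# Each task composes shared "universal" layers with modality-specific encoders,
# plus an optional fusion module and an optional text decoder for generation tasks.
import torch.nn as nn

class ComposedModel(nn.Module):
    def __init__(self, visual_enc, text_enc, universal, fusion=None, text_dec=None):
        super().__init__()
        self.visual_enc = visual_enc   # modality-specific (image/video)
        self.text_enc = text_enc       # modality-specific (text)
        self.universal = universal     # shared across modalities
        self.fusion = fusion           # used only for multi-modal tasks
        self.text_dec = text_dec       # used only for generation tasks

    def forward(self, video=None, text=None):
        feats = []
        if video is not None:
            feats.append(self.universal(self.visual_enc(video)))
        if text is not None:
            feats.append(self.universal(self.text_enc(text)))
        fused = self.fusion(*feats) if self.fusion is not None else feats[0]
        return self.text_dec(fused) if self.text_dec is not None else fused
```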

News

  • 2023.07.21: Released the mPLUG-2 pre-trained model and downstream task models!

Models and Datasets

Pre-trained Models

| Model | Visual Backbone | Text Enc Layers | Universal Layers | Fusion Layers | Text Dec Layers | #params | Download |
| --- | --- | --- | --- | --- | --- | --- | --- |
| mPLUG-2 | ViT-L-14 | 24 | 2 | 6 | 12 | 0.9B | mPLUG-2 |

Pre-train Datasets

| | COCO | VG | SBU | CC3M | CC13M | Webvid2M | WikiCorpus |
| --- | --- | --- | --- | --- | --- | --- | --- |
| image | 113K | 100K | 860K | 3M | 10M | 2M | 20G |
| text | 567K | 769K | 860K | 3M | 10M | 2M | 350G |

Downstream Models

VideoQA

| Model | Dataset | Accuracy | Download |
| --- | --- | --- | --- |
| mPLUG-2 | MSRVTT-QA | 48.0 | mPLUG-2 |
| mPLUG-2 | MSVD-QA | 58.1 | mPLUG-2 |

Video Caption

| Model | Dataset | CIDEr | Download |
| --- | --- | --- | --- |
| mPLUG-2 | MSRVTT | 80.3 | mPLUG-2 |
| mPLUG-2 | MSVD | 165.8 | mPLUG-2 |

Requirements

  • PyTorch version >= 1.11.0

  • Install other libraries via

pip install -r requirements.txt
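A quick way to confirm the PyTorch requirement before launching any script (a minimal sketch; `packaging` is usually available alongside pip/setuptools):

```python
# Sanity-check the installed PyTorch against the stated requirement.
import torch
from packaging import version

installed = version.parse(torch.__version__.split("+")[0])  # drop local build tags such as "+cu113"
assert installed >= version.parse("1.11.0"), f"PyTorch >= 1.11.0 required, found {torch.__version__}"
```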

Pre-training

Coming soon.

Fine-tuning

Video Question Answering

  1. Download the MSRVTT-QA / MSVD-QA / TGIF datasets from their original websites.
  2. In configs_video/VideoQA_msrvtt_large.yaml, set the paths to the annotation JSON files and the videos (see the sketch after this list).
  3. To perform evaluation, run:
sh scripts/inference_videoqa.sh
  4. To perform fine-tuning, run:
sh scripts/run_videoqa.sh
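If you prefer to patch the config programmatically, the sketch below shows one way to do it. The key names (`train_file`, `test_file`, `videos_root`) and paths are assumptions for illustration only; use the field names found in the shipped YAML.

```python
# Minimal sketch: rewrite dataset paths in the VideoQA config before fine-tuning.
# NOTE: the keys below are hypothetical; check the YAML itself for the real field names.
import yaml

cfg_path = "configs_video/VideoQA_msrvtt_large.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["train_file"] = ["/data/msrvtt_qa/train.json"]   # hypothetical key and path
cfg["test_file"] = ["/data/msrvtt_qa/test.json"]     # hypothetical key and path
cfg["videos_root"] = "/data/msrvtt/videos"           # hypothetical key and path

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f)
```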

Video Captioning

  1. Download the MSRVTT / MSVD datasets from their original websites.
  2. In configs_video/VideoCaption_msrvtt_large.yaml, set the paths to the annotation JSON files and the videos.
  3. To perform evaluation (see the CIDEr sketch after these steps), run:
sh scripts/inference_videocaption.sh
  4. To perform fine-tuning, run:
sh scripts/run_videocaption.sh
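The tables above report CIDEr. If you want to score generated captions offline, the sketch below uses the widely available pycocoevalcap package rather than this repository's own evaluation code; the IDs and caption strings are illustrative.

```python
# Not the repo's evaluation pipeline: a minimal CIDEr computation with
# pycocoevalcap (pip install pycocoevalcap). IDs and captions are made up.
from pycocoevalcap.cider.cider import Cider

# id -> list of captions; each hypothesis list must contain exactly one caption.
refs = {
    "video0": ["a man is cooking in a kitchen", "a person prepares food"],
    "video1": ["a dog runs across a field", "a dog is running outside"],
}
hyps = {
    "video0": ["a man is cooking food in a kitchen"],
    "video1": ["a dog runs through a field"],
}

score, per_video = Cider().compute_score(refs, hyps)
print(f"CIDEr: {score * 100:.1f}")  # papers usually report CIDEr scaled by 100
```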

Citation

If you find this work useful, please consider giving this repository a star and citing our paper as follows:

@article{Xu2023mPLUG2AM,
  title={mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video},
  author={Haiyang Xu and Qinghao Ye and Ming Yan and Yaya Shi and Jiabo Ye and Yuanhong Xu and Chenliang Li and Bin Bi and Qi Qian and Wei Wang and Guohai Xu and Ji Zhang and Songfang Huang and Fei Huang and Jingren Zhou},
  journal={ArXiv},
  year={2023},
  volume={abs/2302.00402}
}

Acknowledgement

The implementation of mPLUG-2 relies on resources from ALBEF, BLIP, and timm. We thank the original authors for open-sourcing their work.


mplug-2's Issues

Does inference need to run on 8 A100s?

I tried to run inference for video captioning on one A100 but got out-of-memory errors. Does inference require 8 A100s, or can it run on a single A100? Thanks in advance.

Localizing positions of objects in a scene

Hello, and thanks for contributing this project.
In general, a scene contains more than one object. So if there are multiple objects in a scene and their actions differ (e.g., several people doing different things), we need to localize each object's position.
Is it possible to localize the positions of all objects for one video caption?
If that is not possible right now, do you know of any solution or method for this purpose?

Code request

Hi, will you open-source the pre-training code for the model, and roughly when will it be available?

Where can I find the universal layer module?

I checked model_video_caption_mplug.py but could not find a universal layer module in the code; I only see that images are first fed into the visual encoder and then into the text encoder. Does this mean the universal layer is actually the text encoder?

How to perform CIDEr optimization?

Thank you for your excellent work! The paper mentions that CIDEr optimization is performed for an extra 5 epochs in the video captioning task. How can I run CIDEr optimization with your code?

Can't find "language_evaluation"

Hello, when I run sh scripts/inference_videocaption.sh for the video captioning task, I encounter an error related to "language_evaluation". I could not find "language_evaluation" anywhere; where can this package be obtained?

Same issue: garbled captions

My pred_captions are garbled and always the same for a given video. I tried changing the model, but two different models still produce the same caption.

Double usage of local temporal modeling

Congrats on the awesome work and thanks for sharing it!

I am particularly interested in your temporal modeling. Looking at the code, it seems you use your temporal layer twice in each TemporalBlock: the class has two instances of LocalTemporal, self.lmhra1 and self.lmhra2, one called before and one called after the self-attention layer. Maybe this is related to an old flag, --double_lmhra, that is now commented out of the code?

I might be missing something in the paper, but I don't think that it mentions this double usage of a temporal layer. Could you please confirm whether your numbers come from using one or two temporal layers per block? Could you also please share some insights into the impact of using the temporal layer once vs twice?

Thank you!
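For readers following this thread, the simplified sketch below only illustrates the pattern the issue describes (a local temporal module applied both before and after self-attention); it is not the repository's actual implementation, and LocalTemporal here is a stand-in.

```python
# Simplified illustration of the "double lmhra" pattern described above;
# not the actual repo code. LocalTemporal is a stand-in (depthwise temporal conv).
import torch.nn as nn

class LocalTemporal(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, x):                      # x: (time, batch, dim)
        return self.conv(x.permute(1, 2, 0)).permute(2, 0, 1)

class TemporalBlock(nn.Module):
    def __init__(self, dim, num_heads, double_lmhra=True):
        super().__init__()
        self.lmhra1 = LocalTemporal(dim)                            # before self-attention
        self.lmhra2 = LocalTemporal(dim) if double_lmhra else None  # after self-attention
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads)

    def forward(self, x):                      # x: (time, batch, dim)
        x = x + self.lmhra1(x)
        y = self.norm(x)
        x = x + self.attn(y, y, y, need_weights=False)[0]
        if self.lmhra2 is not None:            # second temporal layer only in the "double" variant
            x = x + self.lmhra2(x)
        return x
```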
