This repository includes implementations of the following method:
The goal of Position-guided Text Prompt (PTP) is to bring position information into conventional Vision-Language Pre-training (VLP) models, as current mainstream end-to-end VLP models ignore these important cues.
We observe that position information is missing even in a well-trained ViLT model.
Our method provides a good alternative to existing object-feature-based methods (BUTD and follow-up works).
- We have uploaded the pre-trained and fine-tuned weights to Hugging Face for fast download.
- The first version of the downstream evaluation code, based on BLIP, is released together with the pre-trained and downstream weights! The pre-training code is being cleaned up.
Please find installation instructions for PyTorch in INSTALL.md.
You may follow the instructions in DATASET.md to prepare the datasets. Since dataset preparation is very time-consuming, we provide detailed guidance along with our pre-built corpus.
We provide a large set of baseline results, pre-trained models, and fine-tuned models in the PTP MODEL_ZOO.
Follow the example in GETTING_STARTED.md to start experimenting with VLP models using PTP.
PTP can be transferred to other architectures without much effort. Specifically, adapt your base code in the following two steps:
- Download or generate corpus in the same format as ours.
- Modify `dataset.py` to load the new corpus.
Then train the model with original objectives.
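The two steps above can be sketched as follows. This is a minimal, hypothetical illustration of building a position-guided text prompt from an object annotation and appending it to a caption; the grid size, prompt template, and function names are assumptions for illustration, not the repository's exact code.

```python
# Hypothetical sketch of a PTP-style corpus entry (assumed format:
# object name + bounding box per image region).

def box_to_block(box, img_w, img_h, grid=3):
    """Map a bounding box center onto a grid x grid block layout,
    returning a flat block index in [0, grid*grid)."""
    x0, y0, x1, y1 = box
    cx = (x0 + x1) / 2 / img_w  # normalized center x
    cy = (y0 + y1) / 2 / img_h  # normalized center y
    col = min(int(cx * grid), grid - 1)
    row = min(int(cy * grid), grid - 1)
    return row * grid + col

def make_ptp_prompt(obj_name, box, img_w, img_h, grid=3):
    """Build a position-guided text prompt (template is an assumption)."""
    block = box_to_block(box, img_w, img_h, grid)
    return f"The block {block} has a {obj_name}."

# Example: in dataset.py, append the prompt to the original caption
# before tokenization, then train with the original objectives.
caption = "A dog runs on the grass."
prompt = make_ptp_prompt("dog", (10, 40, 110, 140), img_w=300, img_h=300)
text = caption + " " + prompt
```

After this change, no new loss or architecture modification is needed; the model is trained with its original objectives on the prompt-augmented text.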
This work is mainly based on BLIP and ViLT; we thank the authors for these strong baselines. We also refer to OSCAR for the ablation study and dataset preparation.
PTP is released under the Apache 2.0 license.
Email: awinyimgprocess at gmail dot com
If you have any questions, please email me or open a new issue.
If you find our work helpful, please use the following BibTeX entry for citation.
@article{wang2022ptp,
  title={Position-guided Text Prompt for Vision Language Pre-training},
  author={Wang, Alex Jinpeng and Zhou, Pan and Shou, Mike Zheng and Yan, Shuicheng},
  journal={arXiv preprint arXiv:2212.09737},
  year={2022}
}