
PTP


This repository contains the implementation of the following method:

Introduction

The goal of Position-guided Text Prompt (PTP) is to bring position information into conventional Vision-Language Pre-training (VLP) models, since current mainstream end-to-end VLP models ignore this important cue.

We observe that position information is missing even in well-trained ViLT models.

Our method provides a good alternative to existing object-feature-based methods (BUTD and follow-up works).
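
For intuition, the core idea can be illustrated with a tiny sketch of how a position-guided prompt might be generated for one object. This is not the repository's actual corpus-generation code; the K×K block layout and the prompt wording below are assumptions for illustration only.

```python
# Minimal sketch: turn an object's location into a position-guided text prompt.
# Hypothetical helper, not the repository's implementation; the prompt template
# and the k x k block layout are illustrative assumptions.

def ptp_prompt(obj_name: str, cx: float, cy: float, img_w: int, img_h: int, k: int = 3) -> str:
    """Map the object's center (cx, cy) onto a k x k grid of image blocks
    and phrase its position as a text prompt."""
    col = min(int(cx / img_w * k), k - 1)   # block column index
    row = min(int(cy / img_h * k), k - 1)   # block row index
    block_id = row * k + col                # row-major block index
    return f"The block {block_id} has a {obj_name}."

# Example: a dog centered in a 600x400 image on a 3x3 grid -> "The block 4 has a dog."
print(ptp_prompt("dog", cx=300, cy=200, img_w=600, img_h=400))
```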

Updates

  • We have uploaded the pretrained and fine-tuned weights to Hugging Face for fast download.
  • The first version of the downstream evaluation code based on BLIP, together with the pretrained/downstream weights, has been released! The pre-training code is being cleaned up now.
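
For convenience, checkpoints hosted on the Hugging Face Hub can usually be fetched programmatically with the `huggingface_hub` package. The repository id and filename below are placeholders, not our actual upload names; please use the ids listed in the model zoo.

```python
# Sketch of downloading a checkpoint from the Hugging Face Hub.
# "example-org/ptp" and "ptp_pretrain.pth" are placeholder names; substitute the
# repository id and filename given in MODEL_ZOO.md.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(repo_id="example-org/ptp", filename="ptp_pretrain.pth")
print(f"checkpoint saved to {ckpt_path}")
```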

Installation

Please find installation instructions for PyTorch in INSTALL.md.

Dataset Preparation

You may follow the instructions in DATASET.md to prepare the datasets. Since dataset preparation is very time-consuming, we provide detailed guidance and release our generated corpus.

Model Zoo

We provide a large set of baseline results, pre-trained models, and fine-tuned models in the PTP MODEL_ZOO.

Quick Start

Follow the example in GETTING_STARTED.md to start playing with VLP models using PTP.

Transfer To Other Architectures

PTP can be transferred to other architectures with little effort. Specifically, adapt your codebase with the following two steps (a sketch follows the list):

  • Download or generate a corpus in the same format as ours.
  • Modify dataset.py accordingly.

Then train the model with its original objectives.
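
As a rough illustration of these two steps, the change usually amounts to concatenating the pre-generated PTP prompt with the original caption inside the dataset class. The annotation keys below ("image_path", "caption", "ptp_prompt") are assumed field names for illustration; adapt them to the corpus format described in DATASET.md.

```python
# Sketch of a dataset.py change: attach the position-guided prompt to each caption.
# Field names are assumptions, not the repository's actual schema.
from PIL import Image
from torch.utils.data import Dataset

class PTPCaptionDataset(Dataset):
    def __init__(self, annotations, transform):
        self.annotations = annotations  # list of dicts loaded from the corpus file
        self.transform = transform

    def __len__(self):
        return len(self.annotations)

    def __getitem__(self, idx):
        ann = self.annotations[idx]
        image = self.transform(Image.open(ann["image_path"]).convert("RGB"))
        # The only PTP-specific change: append the position prompt to the caption,
        # then train with the model's original objectives.
        text = ann["caption"] + " " + ann["ptp_prompt"]
        return image, text
```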

Acknowledgement

This work is mainly based on BLIP and ViLT; thanks to the authors for these strong baselines. We also refer to OSCAR for the ablation study and dataset preparation.

License

PTP is released under the Apache 2.0 license.

Contact

Email: awinyimgprocess at gmail dot com

If you have any questions, please email me or open a new issue.

Citation

If you find our work helpful, please use the following BibTeX entry for citation.

@article{wang2022ptp,
  title={Position-guided Text Prompt for Vision Language Pre-training},
  author={Wang, Alex Jinpeng and Zhou, Pan and Shou, Mike Zheng and Yan, Shuicheng},
  journal={arXiv preprint arXiv:2212.09737},
  year={2022}
}

Contributors

fingerrec, panzhous
