Training for Diversity in Image Paragraph Captioning

This repository includes a PyTorch implementation of Training for Diversity in Image Paragraph Captioning. Our code is based on Ruotian Luo's implementation of Self-Critical Sequence Training for Image Captioning, available here.

Requirements

  • Python 2.7 (because coco-caption does not support Python 3)
  • PyTorch 0.4 (with torchvision)
  • cider (already included as a submodule)
  • coco-caption (already included as a submodule)

If training from scratch, you also need:

  • spacy (to tokenize words)
  • h5py (to store features)
  • scikit-image (to process images)

To clone this repository with submodules, use:

  • git clone --recurse-submodules https://github.com/lukemelas/image-paragraph-captioning

Train your own network

Download and preprocess captions

  • Download captions:
    • Run download.sh in data/captions
  • Preprocess captions for training (part 1):
    • Download the spaCy English tokenizer with python -m spacy download en
    • First, convert the text into tokens: cd scripts && python prepro_text.py
    • Next, preprocess the tokens into a vocabulary (and map infrequent words to an UNK token) with the following command. Note that image/vocab information is stored in data/paratalk.json and caption data is stored in data/paratalk_label.h5
python scripts/prepro_labels.py --input_json data/captions/para_karpathy_format.json --output_json data/paratalk.json --output_h5 data/paratalk
  • Preprocess captions into a coco-captions format for calculating CIDER/BLEU/etc:
    • Run scripts/prepro_captions.py
    • There should be 14,575/2,487/2,489 images and annotations in the train/val/test splits
    • Comment out line 44 ((Spice(), "SPICE")) in coco-caption/pycocoevalcap/eval.py to disable SPICE testing
  • Preprocess ngrams for self-critical training:
python scripts/prepro_ngrams.py --input_json data/captions/para_karpathy_format.json --dict_json data/paratalk.json --output_pkl data/para_train --split train
  • Extract image features using an object detector
    • We provide pre-processed features for download:
      • Download and extract parabu_fc and parabu_att from here into data/bu_data
    • Or generate the features yourself:
      • Download the Visual Genome Dataset
      • Apply the bottom-up attention object detector by Peter Anderson, available here
      • Use scripts/make_bu_data.py to convert the image features to .npz files for faster data loading
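The vocabulary step above maps infrequent words to an UNK token. A minimal pure-Python sketch of that idea (the threshold value and function names are illustrative, not the exact ones in scripts/prepro_labels.py):

```python
from collections import Counter

def build_vocab(tokenized_captions, threshold=5):
    # Count word occurrences across all captions; keep only frequent words.
    counts = Counter(w for caption in tokenized_captions for w in caption)
    vocab = [w for w, c in counts.items() if c >= threshold]
    # Reserve a single UNK token for everything below the threshold.
    if any(c < threshold for c in counts.values()):
        vocab.append('UNK')
    return vocab

def encode(caption, vocab):
    # Replace out-of-vocabulary words with UNK before writing labels to HDF5.
    known = set(vocab)
    return [w if w in known else 'UNK' for w in caption]
```

The real script additionally assigns integer indices and stores the encoded labels in data/paratalk_label.h5.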

Train the network

As explained in Self-Critical Sequence Training, training occurs in two steps:

  1. The model is trained with a cross-entropy loss (~30 epochs)
  2. The model is trained with a self-critical loss (30+ epochs)
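The self-critical step uses a REINFORCE-style loss whose baseline is the reward of the greedily decoded caption. A minimal pure-Python sketch (the repository's implementation operates on PyTorch tensors and uses CIDEr as the reward; all names here are illustrative):

```python
def self_critical_loss(sample_logprobs, sample_rewards, greedy_rewards):
    """REINFORCE with a greedy-decoding baseline (SCST).

    sample_logprobs: log-probabilities of sampled captions
    sample_rewards:  reward (e.g. CIDEr) of each sampled caption
    greedy_rewards:  reward of the corresponding greedy caption (the baseline)
    """
    # Advantage = sampled reward minus greedy baseline; negate for a loss
    # that, when minimized, raises the log-prob of above-baseline samples.
    losses = [-(sr - gr) * lp
              for lp, sr, gr in zip(sample_logprobs, sample_rewards, greedy_rewards)]
    return sum(losses) / len(losses)
```

Captions that beat the greedy baseline have their log-probability pushed up; captions that fall below it are pushed down.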

Training hyperparameters may be accessed with python train.py --help.

A reasonable set of hyperparameters is provided in train_xe.sh (for cross-entropy) and train_sc.sh (for self-critical).

mkdir log_xe
./train_xe.sh 

You can then copy the model:

./scripts/copy_model.sh xe sc

And train with self-critical:

mkdir log_sc
./train_sc.sh

Pretrained Network

You can download a pretrained captioning model here.

Citation

If you would like to cite our paper or code (no obligation at all):

@inproceedings{melaskyriazi2018paragraph,
  title={Training for diversity in image paragraph captioning},
  author={Melas-Kyriazi, Luke and Rush, Alexander and Han, George},
  booktitle={EMNLP},
  year={2018}
}

And Ruotian Luo's code, on which this repo is built:

@inproceedings{luo2018discriminability,
  title={Discriminability objective for training descriptive captions},
  author={Luo, Ruotian and Price, Brian and Cohen, Scott and Shakhnarovich, Gregory},
  booktitle={CVPR},
  year={2018}
}

Contributors

ruotianluo, lukemelas, hexiang-hu, clu8, raoyongming, gujiuxiang
