Conditional Text Generation

The main problem of conditional text generation is that the output is largely driven by the content of an input set of examples, which leads to little diversity in the generated text. To overcome this shortcoming, we fine-tuned CTRL on three different datasets. The first model serves as a baseline for comparison, while the other two are used to obtain more formal and more informal text. The BART model has been employed for text classification to gauge formality.
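
BART itself is a sequence-to-sequence model, so one common way to turn it into a formality classifier is zero-shot classification with an MNLI-fine-tuned checkpoint. A minimal sketch of that idea, where the facebook/bart-large-mnli checkpoint and the label set are our assumptions rather than the exact setup used in this repository:

# Hypothetical sketch: gauging formality with BART via zero-shot classification.
# The checkpoint and labels are assumptions; the repository's own classifier is in metrics.py.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

caption = "A man is riding a horse on the beach."
result = classifier(caption, candidate_labels=["formal", "informal"])
print(dict(zip(result["labels"], result["scores"])))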

The dataset

We used three different datasets:

  1. COCO captions from the COCO dataset,
  2. COCO captions mixed with Wikipedia articles,
  3. COCO captions mixed with Reddit comments.

The model

We used the CTRL model, which can generate text conditioned on control codes that specify domain, style, topics, dates, entities, relationships between entities, plot points, and task-related behavior.
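
To illustrate what conditioning on a control code looks like, here is a sketch using the Hugging Face port of CTRL rather than this repository's TensorFlow scripts; the Salesforce/ctrl checkpoint name and the generation settings are our assumptions:

# Illustration only: prepending a control code ("Wikipedia") to steer CTRL's output.
# Uses the Hugging Face port; the repository itself builds on Salesforce's TF implementation.
from transformers import CTRLLMHeadModel, CTRLTokenizer

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

inputs = tokenizer("Wikipedia A horse is a", return_tensors="pt")
outputs = model.generate(**inputs, max_length=40, repetition_penalty=1.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))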

Fine tuning

We focused on fine-tuning CTRL on the COCO captions dataset. The aim was to obtain a model able to complete the initial part of a test caption in a convincing way.

Fine-tuning can be used to augment existing control codes or to add new ones. There are seven steps, elaborated in our report:

  1. Create the dataset
  2. Split data
  3. Convert this text data into TFRecords
  4. Fine-tuning the model on these TFRecords files
  5. Generation of the captions
  6. Evaluate with metrics
  7. Use Wikipedia articles and Reddit comments

Step 1 - Create the dataset

Reddit and Wikipedia sentences were extracted from available data dumps via Kaggle and the pushshift API, while COCO captions were taken directly from the COCO website.
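
For reference, a minimal sketch of pulling Reddit comments through the pushshift API; the endpoint is pushshift's public one, but the subreddit and query parameters are purely illustrative:

# Hypothetical sketch: fetching Reddit comments from the pushshift API.
# The subreddit and parameters are illustrative, not the queries used to build our dataset.
import requests

url = "https://api.pushshift.io/reddit/search/comment/"
params = {"subreddit": "AskReddit", "size": 100}
comments = requests.get(url, params=params).json()["data"]

for comment in comments[:3]:
    print(comment["body"][:80])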

Step 2 - Split data

We used the script split_data.py to split each caption into two parts and divide the entire dataset into training, validation and test sets.

python split_data.py
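
split_data.py is not reproduced here, but the gist can be sketched as follows; the input file name, the mid-caption split point and the 80/10/10 ratios are our assumptions:

# Hypothetical sketch of the splitting step: cut each caption in two
# (prompt + continuation) and partition the whole set into train/validation/test.
import random

with open("captions.txt") as f:  # illustrative file name
    captions = [line.strip() for line in f if line.strip()]

random.seed(0)
random.shuffle(captions)

pairs = []
for caption in captions:
    words = caption.split()
    half = len(words) // 2  # split the caption into prompt and continuation
    pairs.append((" ".join(words[:half]), " ".join(words[half:])))

n = len(pairs)
train = pairs[: int(0.8 * n)]
valid = pairs[int(0.8 * n) : int(0.9 * n)]
test = pairs[int(0.9 * n) :]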

Step 3 - Convert this text data into TFRecords

We use the file make_tf_records.py to convert the data into TFRecords.

python make_tf_records.py --text_file <text_file> --control_code caption --sequence_len 256

The script takes three arguments: text_file, the name of the file to convert; control_code, a single token (which must be in the vocabulary) appended to each example; and sequence_len, the sequence length used to create the data, which must match the sequence length of the model being trained.
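
Under the hood, the conversion boils down to tokenizing the text, chunking it to sequence_len, and serializing tf.train.Example protos. A rough sketch of the serialization step; the feature name is an assumption, not necessarily what make_tf_records.py uses:

# Rough sketch of TFRecord serialization (illustrative; not the actual make_tf_records.py).
import tensorflow as tf

def write_tfrecords(examples, path):
    # examples: lists of token ids, each already padded/truncated to sequence_len
    with tf.io.TFRecordWriter(path) as writer:
        for ids in examples:
            feature = {"input": tf.train.Feature(int64_list=tf.train.Int64List(value=ids))}
            proto = tf.train.Example(features=tf.train.Features(feature=feature))
            writer.write(proto.SerializeToString())

write_tfrecords([[1, 2, 3, 4], [5, 6, 7, 8]], "captions.tfrecords")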

Step 4 - Fine-tuning the model on these TFRecords files

Simply run the script training.py:

python training.py --model_dir <model_directory> --iterations <number_of_iterations>

The script picks up all TFRecords in the current folder and fine-tunes the model provided in the --model_dir flag.

We fine-tuned the model three times: with 250 iterations (corresponding to 21K captions), with 500 iterations (42K captions), and with 1000 iterations (84K captions); in each case an iteration covers roughly 84 captions.

We chose the model with 500 iterations.

Step 5 - Generation of the captions

We use the file generate_from_prompts.py to generate the remaining parts of the captions.

python generate_from_prompts.py --model_dir <model_directory> --prompts <prompts_file> --control_code caption

This script is a modified version of the generation.py script from Salesforce's repository. It takes as input a file containing one prompt per line, generates a complete caption from each prompt, and saves the output to an output.txt file. The models we trained can generate COCO-style captions as well as more formal or informal captions. For more insights, refer to the report.
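
For example, a prompts file might look like this (illustrative prompts, not taken from our test set):

A man riding a
Two dogs playing in
A plate of food on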

Step 6 - Evaluate with metrics

We use the file metrics.py to evaluate the generated text. It contains ready-to-use functions to compute the BLEU, Self-BLEU and POS-BLEU metrics. On top of that, a text classifier is provided.

python metrics.py
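
For orientation, Self-BLEU scores each generated caption against all the others as references, so lower values indicate more diverse output. A minimal sketch with NLTK; metrics.py may differ in tokenization and smoothing:

# Minimal Self-BLEU sketch using NLTK (illustrative; metrics.py may differ).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(sentences):
    smooth = SmoothingFunction().method1
    scores = []
    for i, sentence in enumerate(sentences):
        hypothesis = sentence.split()
        references = [s.split() for j, s in enumerate(sentences) if j != i]
        scores.append(sentence_bleu(references, hypothesis, smoothing_function=smooth))
    return sum(scores) / len(scores)

generated = ["a man riding a horse", "a man riding a bike", "two dogs in the park"]
print(self_bleu(generated))  # lower = more diverse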

Step 7 - Use Wikipedia articles and Reddit comments

Starting from the model chosen in Step 4, we repeated the fine-tuning procedure with the Wikipedia-article and Reddit-comment datasets.

conditional_text_generation's People

Contributors

francescosaracco, luca-bajardi, ludoro


Forkers

thehunt100

conditional_text_generation's Issues

Questions to prepare the dataset

Hey Luca,

I have run into problems preparing the dataset. According to the README, the original dataset can be downloaded directly from COCO or Kaggle. However, when tracing through the split_data.py file, I found that lines 5 and 6 read the original train_validate and test .npy files. How did you prepare these .npy files? Did you download them directly from COCO, or did you do additional preprocessing after getting the original train, validation and test files from the COCO website?

Best Regards,
Peng
