glge's Introduction

GLGE: A New General Language Generation Evaluation Benchmark

Tasks | Dataset | Leaderboard | Baselines | Paper

Updates

  • 2021-12-01: To make it easier for researchers to evaluate their NLG models, we have made the GLGE test sets public. The evaluation script can be found here.
  • 2021-05-11: We fixed an issue where inconsistent post-processing led to low ROUGE scores for our baselines on the Gigaword test set.

Introduction

This repository contains information about the general language generation evaluation benchmark GLGE, which is composed of 8 language generation tasks, including Abstractive Text Summarization (CNN/DailyMail, Gigaword, XSUM, MSNews), Answer-aware Question Generation (SQuAD 1.1, MSQG), Conversational Question Answering (CoQA), and Personalizing Dialogue (PersonaChat). To provide more diversified difficulty challenges, we provide 3 difficulty versions (easy, medium, and hard) for each task.

An overview figure of the 8 NLG tasks in GLGE is provided in the repository.

Tasks

The 8 tasks in GLGE can be categorized into 4 groups: Abstractive Text Summarization tasks, Answer-aware Question Generation tasks, Conversational Question Answering task, and Personalizing Dialogue task.

Abstractive Text Summarization Tasks

CNN/DailyMail

The CNN/DailyMail \cite{hermann2015cnndm} dataset contains 220K articles from the Daily Mail newspaper and 93K articles from CNN. Each article comes with a bullet-point summary. GLGE uses the non-anonymized variant used in \cite{see2017get}. After the pre-processing, there are 311,971 <article, summary> data pairs, where the source input is the article and the target output is the summary, which consists of multiple sentences. ROUGE-1, ROUGE-2, and ROUGE-L are used as the metrics.
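
For a rough local check of these ROUGE numbers, a minimal sketch using the open-source rouge-score package is shown below. This is an assumption on tooling, not the official GLGE evaluation script, which may apply different tokenization and post-processing; leaderboard numbers should always come from the official script.

# Minimal ROUGE-1/2/L sketch using the `rouge-score` package (pip install rouge-score).
# Not the official GLGE evaluation script; tokenization/post-processing may differ.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "police arrested the suspect after a downtown robbery ."
prediction = "police arrest robbery suspect in downtown area ."

scores = scorer.score(reference, prediction)  # score(target, prediction)
for name, s in scores.items():
    print(f"{name}: P={s.precision:.3f} R={s.recall:.3f} F1={s.fmeasure:.3f}")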

Gigaword

Gigaword \cite{rush2015neural} contains 4M examples extracted from the news articles of the Gigaword corpus \cite{graff2003gigaword}. After the pre-processing, there are 3,995,559 <passage, summary> data pairs, where the source input is the first sentence of the article and the target output is the headline, which often contains only a single sentence. ROUGE-1, ROUGE-2, and ROUGE-L are used as the metrics.

XSUM

XSUM \cite{narayan2018don} consists of 227K online articles from the British Broadcasting Corporation (BBC), each with a professionally written single-sentence summary. After the pre-processing, there are 226,677 <article, summary> data pairs, where the source input is the news article and the target output is a single summary sentence. ROUGE-1, ROUGE-2, and ROUGE-L are used as the metrics.

MSNews

Microsoft News headline generation (MSNews) is a new dataset we collected. We randomly selected 151K online news articles published between 2012-01-01 and 2020-09-01 from a real-world news search engine. Each article contains a professionally written single-sentence headline. After the pre-processing, there are 151,140 <article, headline> data pairs, where the source input is the news article and the target output is the news headline.

Answer-aware Question Generation Tasks

SQuAD 1.1

The SQuAD 1.1 \cite{rajpurkar2016squad} dataset contains 536 Wikipedia articles and over 100K questions created by Amazon Mechanical Turk crowd-workers about these articles, each with a corresponding answer span. Since the original test set of SQuAD 1.1 is hidden, we re-split the dataset using the examples from the original training and dev sets. After the pre-processing, there are 98,169 <answer, passage, question> data triples, where the source input is a Wikipedia passage along with an answer span, and the target output is a question. ROUGE-L, BLEU-4, and METEOR are used as the metrics.

MSQG

Microsoft Question Generation (MSQG) is another new challenge dataset we collected, in which the questions are freely edited by everyday users. For MSQG, we collected 220K passages from a real-world search engine. Each passage contains a highlight span and a related query; we regard the queries as questions in this dataset. After the pre-processing, there are 220,088 <highlight span, passage, question> data triples, where the source input is a news passage along with a highlight span, and the target output is a user question. ROUGE-L, BLEU-4, and METEOR are used as the metrics.
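
For both question generation tasks, a rough sentence-level BLEU-4 check can be sketched with NLTK as below. This only illustrates the metric itself; the example strings are made up, and the official evaluation may compute corpus-level BLEU with different tokenization.

# Illustrative sentence-level BLEU-4 with NLTK (pip install nltk).
# The official GLGE evaluation may use corpus-level BLEU and different tokenization.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "in what year did the university open ?".split()
hypothesis = "when did the university open ?".split()

smooth = SmoothingFunction().method1  # avoids zero scores on short sentences
bleu4 = sentence_bleu(
    [reference], hypothesis,
    weights=(0.25, 0.25, 0.25, 0.25),
    smoothing_function=smooth,
)
print(f"BLEU-4: {bleu4:.4f}")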

Conversational Question Answering Task

CoQA

The CoQA \cite{reddy2019coqa} dataset contains 127K questions with answers, obtained from 8K conversations about text passages from seven diverse domains. After the pre-processing, there are 116,630 <conversation history, passage, question, answer> data 4-tuples, where the source input is a sequence of conversation history along with a given question and a given passage, and the target output is a free-form answer text. F1-score is used as the metric.
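
As a rough illustration of the metric (not the official CoQA evaluation script, which additionally normalizes punctuation and articles and averages over multiple references), token-overlap F1 can be sketched as follows.

# Hypothetical sketch of token-overlap F1 between a predicted and a gold answer.
# The official CoQA evaluation adds normalization (lowercasing, removing
# punctuation and articles) and handles multiple references.
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("in the garden", "the garden"))  # 0.8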

Personalizing Dialogue Task

PersonaChat

The PersonaChat \cite{zhang2018personalizing} dataset consists of 162,064 utterances and requires models to generate responses conditioned on a multi-turn conversation and a persona profile. After the pre-processing, there are 151,157 <persona profile description text, conversation history, response> data triples, where the source input is a sequence of conversation history along with several sentences of persona profile description text, and the target output is a response. BLEU-1, BLEU-2, Distinct-1, and Distinct-2 are used as the metrics.
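
Distinct-n is the ratio of unique n-grams to total n-grams over the generated responses. A minimal sketch is shown below; whitespace tokenization is an assumption, and the official GLGE script may tokenize differently.

# Minimal Distinct-n sketch: unique n-grams / total n-grams over all responses.
# Whitespace tokenization is an assumption; the official GLGE script may differ.
def distinct_n(responses, n):
    total, unique = 0, set()
    for resp in responses:
        tokens = resp.split()
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

responses = ["i love dogs too", "i have a dog"]
print(distinct_n(responses, 1))  # 0.875
print(distinct_n(responses, 2))  # 1.0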

Get Dataset

To use our dataset, please navigate to the GLGE Leaderboard and agree to our terms of service. After you do so, a download link will be made available.

Get Baselines

We provide the baselines in the ProphetNet repo, which contains the pre-trained models, fine-tuning scripts, and evaluation scripts for GLGE.

For the implementation of BART, please refer to this repo.

For the implementation of MASS, please refer to this repo.

It should be noted that, considering the computational cost, we have not carefully tuned the hyperparameters of the baseline methods; better results may be obtained by adjusting the default hyperparameters.

Leaderboard Submission

Submissions

To submit your predictions for evaluation, please create a single folder which contains the prediction files (see submission_examples for an example). Each prediction file should be named with the following format: {task}.{version}.test, where {version} is the difficulty version (easy, medium, or hard) and {task} is the task name (cnndm, gigaword, xsum, msnews, squadqg, msqg, coqa, or personachat). Please validate that you have done this correctly by evaluating against the development file. Once that is done, email your submission to us. We will reply with your model performance.
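
As a rough illustration of this naming convention, the sketch below writes one prediction file per task/version pair. The folder name and helper are hypothetical, and the assumption of one prediction per line in test-set order is not taken from the official instructions; please follow submission_examples for the authoritative format.

# Hypothetical helper that writes prediction files following the
# {task}.{version}.test naming convention. The one-prediction-per-line,
# test-set-order format is an assumption; see submission_examples.
from pathlib import Path

def write_predictions(out_dir, task, version, predictions):
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    out_file = out_dir / f"{task}.{version}.test"
    out_file.write_text("\n".join(predictions) + "\n", encoding="utf-8")
    return out_file

write_predictions("my_submission", "gigaword", "easy",
                  ["generated headline one", "generated headline two"])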

We allow partial submission of results, such as the result of a single task on the GLGE-easy.

To facilitate researchers in evaluating their NLG models, we have made the GLGE test sets public. The evaluation script can be found here.

Paper

If you use our benchmark or dataset, please cite our paper \cite{Liu2020GLGE}.

@article{Liu2020GLGE,
  title={GLGE: A New General Language Generation Evaluation Benchmark},
  author={Dayiheng Liu and Yu Yan and Yeyun Gong and Weizhen Qi and Hang Zhang and Jian Jiao and Weizhu Chen and Jie Fu and Linjun Shou and Ming Gong and Pengcheng Wang and Jiusheng Chen and Daxin Jiang and Jiancheng Lv and Ruofei Zhang and Winnie Wu and Ming Zhou and Nan Duan},
  journal={arXiv},
  year={2020},
  volume={abs/2011.11928}
}

Additionally, since GLGE is also built from 6 existing datasets, please ensure you cite all of them.

An example: We evaluate our model using the GLGE benchmark \cite{Liu2020GLGE}, a general language generation evaluation benchmark consisting of CNN/DailyMail \cite{hermann2015cnndm} \cite{see2017get}, Gigaword \cite{rush2015neural} \cite{graff2003gigaword}, XSum \cite{narayan2018don}, MSNews, SQuAD 1.1 \cite{rajpurkar2016squad}, MSQG, CoQA \cite{reddy2019coqa}, and PersonaChat \cite{zhang2018personalizing}.

Bibtex for external datasets used in GLGE:

@inproceedings{hermann2015cnndm,
  title={Teaching machines to read and comprehend},
  author={Hermann, Karl Moritz and Kocisky, Tomas and Grefenstette, Edward and Espeholt, Lasse and Kay, Will and Suleyman, Mustafa and Blunsom, Phil},
  booktitle={NIPS},
  pages={1693--1701},
  year={2015}
}

@inproceedings{see2017get,
  title={Get to the point: Summarization with pointer-generator networks},
  author={See, Abigail and Liu, Peter J and Manning, Christopher D},
  booktitle={ACL},
  pages={1073--1083},
  year={2017}
}

@inproceedings{rush2015neural,
  title={A neural attention model for abstractive sentence summarization},
  author={Rush, Alexander M and Chopra, Sumit and Weston, Jason},
  booktitle={EMNLP},
  pages={379--389},
  year={2015}
}

@article{graff2003gigaword,
  title={English gigaword},
  author={Graff, David and Kong, Junbo and Chen, Ke and Maeda, Kazuaki},
  journal={Linguistic Data Consortium, Philadelphia},
  volume={4},
  number={1},
  pages={34},
  year={2003}
}

@inproceedings{narayan2018don,
  title={Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization},
  author={Narayan, Shashi and Cohen, Shay B and Lapata, Mirella},
  booktitle={EMNLP},
  pages={1797--1807},
  year={2018}
}

@inproceedings{rajpurkar2016squad,
  title={Squad: 100,000+ questions for machine comprehension of text},
  author={Rajpurkar, Pranav and Zhang, Jian and Lopyrev, Konstantin and Liang, Percy},
  booktitle={EMNLP},
  pages={2383--2392},
  year={2016}
}

@article{reddy2019coqa,
  title={Coqa: A conversational question answering challenge},
  author={Reddy, Siva and Chen, Danqi and Manning, Christopher D},
  journal={TACL},
  volume={7},
  pages={249--266},
  year={2019}
}

@inproceedings{zhang2018personalizing,
  title={Personalizing dialogue agents: I have a dog, do you have pets too?},
  author={Zhang, Saizheng and Dinan, Emily and Urbanek, Jack and Szlam, Arthur and Kiela, Douwe and Weston, Jason},
  booktitle={ACL},
  pages={2204--2213},
  year={2018}
}

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Legal Notices

Microsoft and any contributors grant you a license to the Microsoft documentation and other content in this repository under the Creative Commons Attribution 4.0 International Public License, see the LICENSE file, and grant you a license to any code in the repository under the MIT License, see the LICENSE-CODE file.

Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the documentation may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries. The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks. Microsoft's general trademark guidelines can be found at http://go.microsoft.com/fwlink/?LinkID=254653.

Privacy information can be found at https://privacy.microsoft.com/en-us/

Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents, or trademarks, whether by implication, estoppel or otherwise.

glge's Issues

A question about the Persona Chat dataset.

Hello, I have downloaded the Persona Chat dataset from the GLGE website and used the easy version (full data). When testing my own BART-base code with basic seq2seq settings, the BLEU results on Persona Chat are far below those of BART-large reported in your paper.
BLEU-1: 12.79, BLEU-2: 7.81

When I use the same code on the Gigaword dataset, the results are as expected.
ROUGE-1: 37.98, ROUGE-2: 19.18

So I am confused about the evaluation of Persona Chat; I understand that this is a multi-turn dialogue dataset. Hence, I want to know whether there is any difference in the training or evaluation settings.

❓ Why is the CNN_DM data already tokenized?

When downloading the data, it appears the CNN_DM dataset is already tokenized (by StanfordNLP tokenizer).

Shouldn't the data be available in raw (untokenized) format, since each architecture has its own way of tokenizing data?

For example, in the HuggingFace datasets package, CNN_DM data is not tokenized.

Reproduce the result of LSTM on PersonaChat

Hi, I followed the instructions from this page.
I tested the LSTM model on the easy PersonaChat dataset.
I used the original script to preprocess the dataset, combined with the vocab.txt from this page (because I could not find it on the GLGE page).

./preprocessed.sh personachat easy

Then I used the original script to train and evaluate the LSTM model without modifying anything.

./run.sh personachat easy lstm test

However, there is a big gap between my results and the ones in the paper.
The results in the paper are 42.2/35.9/0.2/0.7.
My results are 31.75/19.63/0.3/1.2.

So, I want to know how to obtain the results in the paper, or where I went wrong.
