
rc-data's Introduction

Question Answering Corpus

This repository contains a script to generate question/answer pairs using CNN and Daily Mail articles downloaded from the Wayback Machine.

For a detailed description of this corpus please read: Teaching Machines to Read and Comprehend, Hermann et al., NIPS 2015. Please cite the paper if you use this corpus in your work.

Bibtex

@inproceedings{nips15_hermann,
author = {Karl Moritz Hermann and Tom\'a\v{s} Ko\v{c}isk\'y and Edward Grefenstette and Lasse Espeholt and Will Kay and Mustafa Suleyman and Phil Blunsom},
title = {Teaching Machines to Read and Comprehend},
url = {http://arxiv.org/abs/1506.03340},
booktitle = "Advances in Neural Information Processing Systems (NIPS)",
year = "2015",
}

Download Processed Version

In case the script does not work, you can also download the processed data sets from http://cs.nyu.edu/~kcho/DMQA/. This should help in situations where the underlying data is not accessible (e.g. the Wayback Machine is partially down).

Running the Script

Prerequisites

Python 2.7, wget, libxml2, libxslt, python-dev and virtualenv. libxml2 must be version 2.9.1. You can install libxslt from here: http://xmlsoft.org/libxslt/downloads.html

sudo pip install virtualenv
sudo apt-get install python-dev

Download Script

mkdir rc-data
cd rc-data
wget https://github.com/deepmind/rc-data/raw/master/generate_questions.py

Download and Extract Metadata

wget https://storage.googleapis.com/deepmind-data/20150824/data.tar.gz -O - | tar -xz --strip-components=1

The news article metadata is ~1 GB.

Enter Virtual Environment and Install Packages

virtualenv venv
source venv/bin/activate
wget https://github.com/deepmind/rc-data/raw/master/requirements.txt
pip install -r requirements.txt

You may need to install libxml2 development packages to install lxml:

sudo apt-get install libxml2-dev libxslt-dev

Download URLs

python generate_questions.py --corpus=[cnn/dailymail] --mode=download

This will download news articles from the Wayback Machine. Some URLs may be unavailable. The script can be run again; URLs that have already been downloaded are cached and skipped. Question generation can run even if not all URLs were downloaded successfully.
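For illustration, the caching behavior looks roughly like the sketch below. The cache directory and file-naming scheme here are hypothetical, not the script's actual layout.

# Minimal sketch of cache-aware downloading (hypothetical paths and
# naming; the real script's layout may differ). Re-running skips any
# URL whose response was already saved.
import hashlib
import os
import requests

def fetch_cached(url, cache_dir='downloads'):
    if not os.path.isdir(cache_dir):
        os.makedirs(cache_dir)
    path = os.path.join(cache_dir, hashlib.sha1(url.encode('utf-8')).hexdigest())
    if os.path.exists(path):  # downloaded on a previous run
        with open(path, 'rb') as f:
            return f.read()
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # unavailable URLs raise; callers may skip them
    with open(path, 'wb') as f:
        f.write(response.content)
    return response.content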

Generate Questions

python generate_questions.py --corpus=[cnn/dailymail] --mode=generate

Note: this will generate ~1,000,000 small files for the Daily Mail, so an SSD is preferred.

Questions are stored in [cnn/dailymail]/questions/ in the following format:

[URL]

[Context]

[Question]

[Answer]

[Entity mapping]
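A question file can then be read with a few lines of Python. This sketch assumes the five sections are separated by blank lines, as the layout above suggests, and that entity-mapping lines look like '@entity0:Some Name' (assumptions to verify against your generated files):

# Minimal reader for one generated question file (a sketch based on the
# layout above, not an official parser).
def read_question_file(path):
    with open(path) as f:
        url, context, question, answer, mapping = f.read().strip().split('\n\n')
    # Entity-mapping lines are assumed to look like '@entity0:Some Name'.
    entities = dict(line.split(':', 1) for line in mapping.splitlines())
    return url, context, question, answer, entities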

Deactivate Virtual Environment

deactivate

Verifying Test Sets

wget https://github.com/deepmind/rc-data/raw/master/expected_[cnn/dailymail]_test.txt
comm -3 <(cat expected_[cnn/dailymail]_test.txt) <(ls [cnn/dailymail]/questions/test/)

Filenames appearing in the first column are expected test questions that were not generated. No output means everything was downloaded and generated correctly.
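If comm is unavailable, the same check can be done in Python (illustrative; shown for the CNN corpus, swap in dailymail as needed):

# Report expected test questions that were not generated; no output
# means everything was downloaded and generated correctly.
import os

expected = set(open('expected_cnn_test.txt').read().split())
generated = set(os.listdir('cnn/questions/test'))
for name in sorted(expected - generated):
    print(name)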


rc-data's Issues

Missing URLs after multiple tries

Hi,

I followed the instructions to download the data set, but there are still dozens of missing URLs after running

python generate_questions.py --corpus=[cnn/dailymail] --mode=download

multiple times.

How did you generate the metadata?

Thank you for sharing the data!

I'm wondering how you generated the metadata, i.e. the data files that contain entities, tokens and URLs.

Would it be possible to open-source the code that generates the metadata files? Many thanks.

Official train, validation and test split for CNN stories?

Hi there, I'm trying to reproduce summarization papers' results on the CNN/Daily Mail dataset, and I noticed that many papers use the same train, validation and test split of the CNN/Daily Mail story files. My dataset was downloaded from DMQA, which doesn't provide a split for the story files. Is there an official way to do this?
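One convention used by several follow-up summarization repositories (an assumption here, not an official answer from this repository) is that each DMQA story file is named by the SHA-1 hex digest of its article URL, so the wayback_*_urls.txt lists shipped with the metadata define the split. A minimal sketch under those assumptions:

# Sketch of the commonly used split convention (assumptions: story files
# are named by the SHA-1 hex digest of the article URL, and the metadata
# contains wayback_{training,validation,test}_urls.txt).
import hashlib

def url_to_story_name(url):
    return hashlib.sha1(url.encode('utf-8')).hexdigest() + '.story'

with open('cnn/wayback_test_urls.txt') as f:
    test_stories = set(url_to_story_name(line.strip()) for line in f if line.strip())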

Why do the entities share IDs? How are IDs computed?

I noticed that across different documents, entities can have the same ID; why is this the case? How are IDs assigned to entities, given that the same ID can represent different things across documents?
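The Hermann et al. paper describes the entity markers as document-local: entity strings are anonymized within each document and the marker IDs are randomly permuted per document, so @entityN in one document is unrelated to @entityN in another. A minimal sketch of that idea (a hypothetical helper, not this repository's actual code):

# Hypothetical per-document anonymization sketch: each document gets
# its own randomly permuted mapping from entity strings to @entityN
# markers, as described in the paper.
import random

def anonymize(tokens, entities):
    ids = list(range(len(entities)))
    random.shuffle(ids)  # permuted per document
    mapping = dict((e, '@entity%d' % i) for e, i in zip(entities, ids))
    return [mapping.get(t, t) for t in tokens], mapping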

Wayback URLs

Hi,

I noticed that all the news articles are downloaded from web.archive.org, and I'm looking for a way to get more CNN/Daily Mail news webpages like this, so I'm wondering how you got all the CNN/Daily Mail news URLs on this website?

Thanks!

Different types of questions

I was able to run the project successfully and generate the questions and answers. The generated questions are fill-in-the-blank (cloze) style.

Is it possible to generate who/where/when/why/what/which/how types of questions?

TIA.

Checksums

Thank you for sharing this data.

Maybe you could post a file containing checksums after running the script? It could be as simple as "find . -type f -exec md5sum {} +".

Tokenization error

We tried to generate new questions and stories based on the data and code provided here.

However, when we processed this URL, we found something wrong.

http://web.archive.org/web/20111016030125id_/http://www.cnn.com:80/2011/10/13/opinion/iranianplotsamerican-hubris/

In the middle of the page, the 'vis-à-vis' token (starting at position 2,000) caused a problem.

The tokenization metadata gives the following offsets for that area:
... ;2000,2;2003,0;2004,1;2008,0;2009,2;2013,2;2017,5; ...

We interpreted this as follows, reading each pair as (start, byte length - 1):

2000,2 = start 2000, byte length 2+1 = 'vis'
2003,0 = start 2003, byte length 0+1 = '-'
2004,1 = start 2004, byte length 1+1 = 'à'
2008,0 = start 2008, byte length 0+1 = 'i'
2009,2 = start 2009, byte length 2+1 = 's t'
...

Isn't it strange?

If this interpretation is right, should the original have been something like 'vis-à__-vis'?
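One way to sanity-check the (start, byte length - 1) reading is to compute the UTF-8 byte offsets of 'vis-à-vis' directly. This illustrative snippet assumes the pairs index bytes, starting at position 2,000:

# -*- coding: utf-8 -*-
# Byte offsets of each token of u'vis-à-vis' under UTF-8, printed in
# the assumed (start, byte length - 1) form of the metadata.
pos = 2000
for token in [u'vis', u'-', u'à', u'-', u'vis']:
    nbytes = len(token.encode('utf-8'))  # 'à' is 2 bytes in UTF-8
    print('%d,%d' % (pos, nbytes - 1))
    pos += nbytes
# Output: 2000,2  2003,0  2004,1  2006,0  2007,2
# The first three pairs match the metadata; the last two differ by two
# bytes, consistent with the archived page having changed since the
# metadata was built.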

In the Processed Version data, we found the same token correctly tokenized:
cnn/stories/2ac7ec1e2f9c14f641b63b232600fe25f5a49b2b.story:
... vis-à-vis ...

cnn/questions/training/5836ca35e2f847771956c47578e4c2a8c206542e.question:
... vis - @Entity83 - vis ...

We suspect that either the original page was changed or there were other preprocessing operations.

Does anyone know about this?

IOError: [Errno 24] Too many open files

Thanks for the data!

I just wanted to point out that in some cases I get the aforementioned "too many open files" error on OS X, which I seemingly fixed with "ulimit -n 1200".
Just a comment in case anyone else stumbles upon the same problem.

Cheers

Unsolved problems when using a VPN

I'm a researcher from China.
Due to the GFW, we cannot directly access the websites referenced in the script, which is why I need to use a VPN.
When using a VPN with http_proxy set (under Linux), I found that the script does not work properly.

It fails like this:

File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 381, in send
    raise ProxyError(e)
requests.exceptions.ProxyError: Cannot connect to proxy. Socket error: [Errno 110] Connection timed out.

I suppose it's an error associated with the requests package.
I also wonder if there's a place where I can directly download the dataset instead of running the script that generates it.
