
Revisiting Unsupervised Relation Extraction

Source code for Revisiting Unsupervised Relation Extraction in ACL 2020

Environment

pip3 install -r requirements.txt

The experiments were conducted on Nvidia V100 GPUs (16GB GPU RAM). However, these models are small, so they should run on most GPUs.

Datasets

  • NYT: contact Diego Marcheggiani
  • TACRED: the TACRED dataset (distributed via the LDC)
  • Input format: same as sample

  • Both NYT and TACRED are pre-processed (tokenisation, entity typing).
  • We use Stanford CoreNLP to get dependency features for TACRED.
  • Entity types in NYT are a subset of those in TACRED; we map all TACRED entity types that are unseen in NYT to MISC (a minimal sketch of this mapping follows the list).
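
For illustration, a minimal sketch of the type mapping described above. The NYT type set below is an assumption for the example; the actual inventory lives in the repo's dict.enttype.

# Hypothetical mapping sketch: NYT_TYPES is an illustrative assumption.
NYT_TYPES = {"PERSON", "ORGANIZATION", "LOCATION", "MISC"}

def map_entity_type(tacred_type: str) -> str:
    """Map a TACRED entity type onto the NYT inventory, falling back to MISC."""
    return tacred_type if tacred_type in NYT_TYPES else "MISC"

assert map_entity_type("PERSON") == "PERSON"
assert map_entity_type("NATIONALITY") == "MISC"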

Some vocabulary files need to be generated in advance. You can use the script:

bash ure/preprocessing/run.sh
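
For intuition only, a hypothetical sketch of what such a vocabulary builder does; the function name and file format here are assumptions, and run.sh is the authoritative script:

from collections import Counter

def build_vocab(input_file, output_file, threshold=1):
    """Count whitespace-separated tokens and keep those occurring at
    least `threshold` times, one entry per line (cf. dict.word)."""
    counts = Counter()
    with open(input_file, encoding="utf-8") as f:
        for line in f:
            counts.update(line.split())
    with open(output_file, "w", encoding="utf-8") as f:
        for token, count in counts.most_common():
            if count >= threshold:
                f.write(f"{token}\t{count}\n")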

We also provide a script for feature extraction. To generate the lexicon file:

python ure/preprocessing/feature_extractor.py --generate_lexicon --input_file [file] --lexicon_file [file] --output_file [file] --threshold [occurrence threshold]

To generate features:

python ure/preprocessing/feature_extractor.py --input_file [file] --lexicon_file [file] --output_file [file] --threshold [occurrence threshold]
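
For example, a plausible NYT workflow: build the lexicon once from the training split, then reuse it when extracting features for every split. The paths and threshold are illustrative, and reusing the training lexicon for dev/test is our assumption:

python ure/preprocessing/feature_extractor.py --generate_lexicon --input_file data/nyt/train.txt --lexicon_file data/nyt/train.lexicon --output_file data/nyt/train.features --threshold 1

python ure/preprocessing/feature_extractor.py --input_file data/nyt/dev.txt --lexicon_file data/nyt/train.lexicon --output_file data/nyt/dev.features --threshold 1

python ure/preprocessing/feature_extractor.py --input_file data/nyt/test.txt --lexicon_file data/nyt/train.lexicon --output_file data/nyt/test.features --threshold 1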

Usage

Training

EType+: B3 usually achieves 41% after one epoch

python -u -m ure.etypeplus.main --config models/etypeplus.yml

Feature (Marcheggiani and Titov): expect B3 around 32-33% after one epoch

python -u -m ure.feature.main --config models/feature.yml

PCNN (Simon et al.):

python -u -m ure.pcnn.main --config models/pcnn.yml

Evaluation

python -u -m ure.etypeplus.main --config models/etypeplus.yml --mode test
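
The scores reported above are B3 (B-cubed) over the induced relation clusters. For reference, a minimal sketch of the metric; this is not the repo's evaluation code, and it assumes `pred` and `gold` are dicts mapping each instance id to a cluster label:

from collections import defaultdict

def b_cubed(pred, gold):
    """B3 precision/recall/F1 for a clustering."""
    pred_clusters, gold_clusters = defaultdict(set), defaultdict(set)
    for i, c in pred.items():
        pred_clusters[c].add(i)
    for i, c in gold.items():
        gold_clusters[c].add(i)
    precision = recall = 0.0
    for i in pred:
        overlap = len(pred_clusters[pred[i]] & gold_clusters[gold[i]])
        precision += overlap / len(pred_clusters[pred[i]])
        recall += overlap / len(gold_clusters[gold[i]])
    p, r = precision / len(pred), recall / len(pred)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1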

Reproducibility, Bug Fixes & FAQ

L_s coefficient: rel_dist.py is now shared among the three methods, and in it loss_s is scaled down by [B x k_samples]; hence the coefficient of L_s for EType+ is set to 0.01 instead of the 0.0001 reported in the paper (line 91 in /ure/rel_dist.py).
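
A minimal sketch of the scaling involved, assuming L_s is the skewness/entropy term; this is illustrative only, and ure/rel_dist.py is the actual code:

import torch

def skewness_loss(probs, coeff=0.01):
    """probs: [B, k_samples, n_rels] relation posteriors. Taking .mean()
    divides the summed entropy by B * k_samples, which is why the
    coefficient here is 0.01 rather than the paper's 0.0001."""
    entropy = -(probs * (probs + 1e-12).log()).sum(dim=-1)  # [B, k_samples]
    return coeff * entropy.mean()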

Entity type dimension in Table 4 (b, c) of the appendix: there is a mistake; it is the entity dimension in the link predictor, for which we use the same dimension of 10 across all methods. (There is no entity type in PCNN.)

Typos in the paper: in the second paragraph of Appendix A, the number of relation labels in NYT-FB should be 262 (not 253 as printed). The same holds for the caption of Figure 2a: NYT-FB has 262 relation types in total. The last x-axis label of Figure 2a should read "each of the remaining 249 relation types".

Citation

If you plan to use it, please cite the following paper =)

@inproceedings{tran-etal-2020-revisiting,
    title = "Revisiting Unsupervised Relation Extraction",
    author = "Tran, Thy Thy  and
      Le, Phong  and
      Ananiadou, Sophia",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.669",
    pages = "7498--7505"
}


ure's Issues

An error occurs when I train the etypeplus model

The error information is as follows:

RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemmStridedBatched( handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)`
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [5,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.

Strangely, it happened after total_loss had been printed three times in epoch 0. I thought it might be a GPU memory issue, so I trained on the 1K sample and it ran successfully without any problems. My GPU memory is 32GB. Isn't that enough?
Thanks for your help!
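
For context, this particular assertion usually signals an out-of-range index in an embedding/index_select lookup rather than exhausted memory. A minimal repro, illustrative and unrelated to the repo's code:

import torch

# An index >= the embedding's vocabulary size fails on GPU with a
# device-side assert like the one above, not a Python IndexError.
emb = torch.nn.Embedding(num_embeddings=100, embedding_dim=16).cuda()
idx = torch.tensor([150]).cuda()  # 150 >= 100 triggers the assert
out = emb(idx)                    # raises the CUDA assertion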

About dict.features and other questions

Hey, I am interested in your work, and I have some questions about the code.

  1. How can I generate dict.features? (Is it the same as train.features?)
  2. I found that in dataset.py, specifically in the read_from_file function of TSVDataset, the following lines of code can push some entity indices outside the range of config['n_ents'], which then causes an error in LinkPrediction. Is there a fix for this?

Thanks in advance.

head_ent_id = self.vocas['entity'].get_id(head_ent)
if head_ent_id == self.vocas['entity'].unk_id:
    head_ent_id = _i * 2
tail_ent_id = self.vocas['entity'].get_id(tail_ent)
if tail_ent_id == self.vocas['entity'].unk_id:
    tail_ent_id = _i * 2 + 1
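
One possible reading (our assumption, not an official fix): since unknown entities fall back to ids _i * 2 and _i * 2 + 1, the entity embedding in the link predictor has to be sized past the vocabulary, along these lines:

# Hypothetical sizing implied by the fallback ids above (names invented):
def required_n_ents(entity_vocab_size: int, num_examples: int) -> int:
    # unknown head/tail entities receive ids up to 2 * num_examples + 1
    return max(entity_vocab_size, 2 * num_examples + 2)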

About using the dataset NYT-FB

Hi @ttthy ,

Sorry to bother you, but I have a question about the NYT-FB dataset used in your experiments. While the TACRED test set provides a relation type for each sentence, I cannot find a relation type for each sentence in NYT-FB. I obtained the NYT-FB dataset from Diego Marcheggiani, but most sentences come without a relation type, as in https://github.com/diegma/relation-autoencoder/blob/master/data-sample.txt. How do you evaluate your system on NYT-FB without labels?

Thanks for your help!

I can't run EType's code

Hi,
I'm writing a paper related to URE and want to use EType from your paper as a comparison algorithm, but I can't run the code from GitHub right now. I tried to follow the instructions in the README, but I'm confused about the dataset processing: I couldn't find a way to generate nyt/train.txt (or tacred/train.txt) and the other data files. Could you please send more detailed instructions, or the fully processed NYT and TACRED data? Thank you very much!

I can't process the TACRED data into the format required by the code

I'm studying URE recently and want to run your code. Although I have tried many methods, I can't process the TACRED data into the format required by the code, for example the trigger and posPatternPath fields in sample.txt. Could you tell me how to produce them, or send the script to me? Thanks very much.

test function not defined

Hi, I'm trying to run the test phase and got the following error. Besides that, which file is the output of the test phase, and how do I interpret it? Can I get the clustered input lines (similar relations)?

Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/data/users/fmuniz/mp/ure/ure/etypeplus/main.py", line 88, in <module>
    test(model, dataset, config)
NameError: name 'test' is not defined

How to generate dict.features?

Hi Thy Thy,

Thanks for sharing this code!

I'm trying to run the March model, but got an error that `data/nyt/dict.features` does not exist. Could you share more details about how it is generated?

Besides, may I double-check the feature generation process with you? Following the README.md, I used the following commands to generate *.lexicon and *.features for train/dev/test respectively.

python ure/preprocessing/feature_extractor.py --generate_lexicon --input_file data/nyt/train.txt --lexicon_file data/nyt/train.lexicon --output_file data/nyt/train.features --threshold 1

python ure/preprocessing/feature_extractor.py --generate_lexicon --input_file data/nyt/dev.txt.filtered --lexicon_file data/nyt/dev.lexicon --output_file data/nyt/dev.features --threshold 1

python ure/preprocessing/feature_extractor.py --generate_lexicon --input_file data/nyt/test.txt.filtered --lexicon_file data/nyt/test.lexicon --output_file data/nyt/test.features --threshold 1

python ure/preprocessing/feature_extractor.py --input_file data/nyt/train.txt --lexicon_file data/nyt/train.lexicon --output_file data/nyt/train.features --threshold 1

python ure/preprocessing/feature_extractor.py --input_file data/nyt/dev.txt.filtered --lexicon_file data/nyt/dev.lexicon --output_file data/nyt/dev.features --threshold 1

python ure/preprocessing/feature_extractor.py --input_file data/nyt/test.txt.filtered --lexicon_file data/nyt/test.lexicon --output_file data/nyt/test.features --threshold 1

Is it the same process as you did?

Please let me know if I missed something.

Much appreciated for your help!

Which file does lexicon_file refer to?

Following "readme", I got train.txt, dev.txt, test.txt, dict.entity, dict.enttype, dict.ent_wf, dict.relation, dict.word. However, I don't know which one is the lexicon_file in the next step?

Specifically, my problem is in the following step:

We also provide the file for feature extraction
python ure/preprocessing/feature_extractor.py --input_file [file] --lexicon_file [file] --output_file [file] --threshold [occurrence threshold]

Thanks!
