
transformers-ner's Introduction

transformers-ner


Experiments on the NER task using HuggingFace state-of-the-art natural language models

Installation

Prerequisites

  • Python ≥ 3.6

Provision a Virtual Environment

Create and activate a virtual environment (conda)

conda create --name py36_transformers-ner python=3.6
source activate py36_transformers-ner

If pip is configured in your conda environment, install the dependencies from within the project root directory:

pip install -r requirements.txt

Data Pre-processing

Download the data

The BC5CDR dataset is distributed in IOB format. A few small modifications must be applied to the files before BERT NER can process them (space-separated elements, etc.). We will first download the files and then transform them.

Download the files:

mkdir data-input
curl -o data-input/devel.tsv https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-IOB/devel.tsv
curl -o data-input/train.tsv https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-IOB/train.tsv
curl -o data-input/test.tsv https://raw.githubusercontent.com/cambridgeltl/MTL-Bioinformatics-2016/master/data/BC5CDR-IOB/test.tsv

To transform the data into a BERT-NER-compatible format, execute the following command:

python ./preprocess/generate_dataset.py --input_train_data data-input/train.tsv --input_dev_data data-input/devel.tsv --input_test_data data-input/test.tsv --output_dir data-input/

The script outputs two files, train.txt and test.txt, which serve as the input to the NER pipeline.
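For reference, the transformation amounts to converting each tab-separated "token<TAB>label" line into a space-separated "token label" pair, keeping blank lines as sentence boundaries. The sketch below is illustrative only and is not the repository's script; the file paths follow the commands above, but the actual generate_dataset.py may differ in details.

# Illustrative sketch of the IOB transformation; the real generate_dataset.py
# may differ. Each input line is "token<TAB>label"; blank lines separate sentences.
def convert_iob(in_path, out_path):
    with open(in_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            line = line.rstrip("\n")
            if not line.strip():
                fout.write("\n")  # keep the sentence boundary
                continue
            token, label = line.split("\t")
            fout.write(f"{token} {label}\n")  # space-separated, as BERT NER expects

convert_iob("data-input/train.tsv", "data-input/train.txt")
convert_iob("data-input/test.tsv", "data-input/test.txt")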

Download pre-trained model and run the NER task

BERT

Pre-trained BERT models are fetched automatically by HuggingFace's transformers library. To execute the NER pipeline, run the following script:

python ./run_ner.py --data_dir ./data --model_type bert --model_name_or_path bert-base-cased --output_dir ./output --labels ./data/labels.txt --do_train --do_predict --max_seq_length 256 --overwrite_output_dir --overwrite_cache

The script will output the results and predictions in the output directory.
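To sanity-check the fine-tuned weights, a quick inference pass can be run on the saved model. This is a minimal sketch of my own, assuming a reasonably recent transformers version (the example sentence is arbitrary):

# Minimal sketch (assumes a recent transformers version): load the fine-tuned
# model from ./output and print a predicted label per sub-token for one sentence.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./output")
model = AutoModelForTokenClassification.from_pretrained("./output")

inputs = tokenizer("Naloxone reverses the antihypertensive effect of clonidine.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

for token_id, pred in zip(inputs["input_ids"][0].tolist(),
                          logits.argmax(dim=-1)[0].tolist()):
    print(tokenizer.convert_ids_to_tokens(token_id), model.config.id2label[pred])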

SciBERT

Download and unpack the model, its vocab and its config. Rename the config file to config.json, as expected by the script.

curl -OL https://s3-us-west-2.amazonaws.com/ai2-s2-research/scibert/pytorch_models/scibert_scivocab_cased.tar
mkdir scibert_scivocab_cased
tar -xvf scibert_scivocab_cased.tar -C scibert_scivocab_cased
cd scibert_scivocab_cased/
tar -zxvf weights.tar.gz
mv bert_config.json config.json
rm weights.tar.gz
cd ..
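Before launching training, it can be worth verifying that the unpacked directory loads cleanly. This quick check is not part of the repository; it assumes the directory now contains config.json, vocab.txt and the PyTorch weights:

# Quick sanity check (run from the project root): from_pretrained needs
# config.json, vocab.txt and the PyTorch weights in the local directory.
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("scibert_scivocab_cased")
model = BertModel.from_pretrained("scibert_scivocab_cased")
print("vocab size:", model.config.vocab_size)  # SciBERT uses its own scivocab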

To execute the NER pipeline, run the following script:

python ./run_ner.py --data_dir ./data --model_type bert --model_name_or_path scibert_scivocab_cased --output_dir ./output --labels ./data/labels.txt --do_train --do_predict --max_seq_length 256 --overwrite_output_dir --overwrite_cache

The script will output the results and predictions in the output directory.

SpanBERT

Download and unpack the model, its vocab and its config. Rename the config file to config.json, as expected by the script. Note that SpanBERT does not come with its own vocab.txt file; instead it reuses the vocabulary of the BERT-large-cased model.

curl -OL https://dl.fbaipublicfiles.com/fairseq/models/spanbert_hf_base.tar.gz
mkdir spanbert_hf_base
tar -zxvf spanbert_hf_base.tar.gz -C spanbert_hf_base
cd spanbert_hf_base
curl -OL https://raw.githubusercontent.com/pyvandenbussche/transformers-ner/master/data/bert_large_cased_vocab.txt
mv bert_large_cased_vocab.txt vocab.txt
cd ..

To execute the NER pipeline, run the following script:

python ./run_ner.py --data_dir ./data --model_type bert --model_name_or_path spanbert_hf_base --output_dir ./output --labels ./data/labels.txt --do_train --do_predict --max_seq_length 256 --overwrite_output_dir --overwrite_cache

The script will output the results and predictions in the output directory.

transformers-ner's People

Contributors

pyvandenbussche

transformers-ner's Issues

Assertion Error when converting examples to features

Hi,
I am currently trying to train an NER model for new entities, starting from your example.
I preprocessed my data so it looks just like yours.
Unfortunately, when trying to convert examples to features, an assertion fails.

Here is the error log:

06/16/2021 17:17:31 - INFO - utils_ner - Writing example 0 of 6683

06/16/2021 17:17:31 - INFO - utils_ner - *** Error ***

06/16/2021 17:17:31 - INFO - utils_ner - guid: train-54

06/16/2021 17:17:31 - INFO - utils_ner - tokens: [CLS] A framework for research ethics review during public emerge ##ncies [SEP]

06/16/2021 17:17:31 - INFO - utils_ner - input_ids: 101 138 8297 1111 1844 13438 3189 1219 1470 12982 9885 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

06/16/2021 17:17:31 - INFO - utils_ner - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

06/16/2021 17:17:31 - INFO - utils_ner - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

06/16/2021 17:17:31 - INFO - utils_ner - label_ids: -100 232 232 232 167 50 232 232 232 232 -100 232 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100 -100

06/16/2021 17:17:31 - INFO - utils_ner - labels_len: 11

06/16/2021 17:17:31 - INFO - utils_ner - tokens_len: 10

06/16/2021 17:17:31 - INFO - utils_ner - input_len: 12

It happens at the 46th example, which means the first 45 made it that far. I can't understand why, for some examples, labels_len gets a +1 that makes the assertion fail later on.

Any ideas I could dig into?

Thanks
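One plausible cause (speculation, not a confirmed fix): WordPiece tokenizers can return an empty sub-token list for some words (e.g., unusual Unicode characters), which would leave labels_len one ahead of tokens_len, exactly as logged above. A scan like the following, assuming the same tokenizer as the pipeline, can surface such words:

# Hypothetical diagnostic: flag words the tokenizer reduces to an empty list,
# which would desynchronize labels and tokens in convert_examples_to_features.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

with open("data/train.txt", encoding="utf-8") as f:
    for line_no, line in enumerate(f, start=1):
        parts = line.strip().split()
        if parts and not tokenizer.tokenize(parts[0]):
            print(f"line {line_no}: {parts[0]!r} tokenizes to []")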

KeyError when replacing the dataset

Hi, I tried to replace the dataset with my own. As my labels are different, this results in a KeyError. I have already replaced the labels in the labels.txt file. Any idea how I can solve this error?

Thanks!


Traceback (most recent call last):
  File "./run_ner.py", line 518, in <module>
    main()
  File "./run_ner.py", line 445, in main
    train_dataset = load_and_cache_examples(args, tokenizer, labels, pad_token_label_id, mode="train")
  File "./run_ner.py", line 280, in load_and_cache_examples
    pad_token_label_id=pad_token_label_id
  File "/content/drive/My Drive/transformers-ner-master/utils_ner.py", line 121, in convert_examples_to_features
    label_ids.extend([label_map[label]] + [pad_token_label_id] * (len(word_tokens) - 1))
KeyError: 'B-TAXY'
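A KeyError here means a label present in the data files is missing from the label map built from labels.txt. A quick consistency check (my own sketch, assuming the data layout described above):

# Hypothetical check: any label printed here exists in the data but not in
# labels.txt, and would trigger the KeyError in convert_examples_to_features.
with open("data/labels.txt", encoding="utf-8") as f:
    known = {line.strip() for line in f if line.strip()}

seen = set()
with open("data/train.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.strip().split()
        if len(parts) == 2:
            seen.add(parts[1])

print("labels missing from labels.txt:", sorted(seen - known))

If the two sets match, re-run with --overwrite_cache so the features are regenerated with the new label map.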

dyld: lazy symbol binding failed: Symbol not found: _PySlice_Unpack

"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 28996
}

08/25/2020 10:39:20 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt from cache at /Users//.cache/torch/transformers/5e8a2b4893d13790ed4150ca1906be5f7a03d6c4ddf62296c383f6db42814db2.e13dbb970cb325137104fb2e5f36fe865f27746c6b526f6352861b1980eb80b1
08/25/2020 10:39:22 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-pytorch_model.bin from cache at /Users//.cache/torch/transformers/35d8b9d36faaf46728a0192d82bf7d00137490cd6074e8500778afed552a67e5.3fadbea36527ae472139fe84cddaa65454d7429f12d543d80bfc3ad70de55ac2
08/25/2020 10:39:24 - INFO - transformers.modeling_utils - Weights of BertForTokenClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
08/25/2020 10:39:24 - INFO - transformers.modeling_utils - Weights from pretrained model not used in BertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
08/25/2020 10:39:24 - INFO - main - Training/evaluation parameters Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir='./data', device=device(type='cpu'), do_eval=False, do_lower_case=False, do_predict=True, do_train=True, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, labels='./data/labels.txt', learning_rate=5e-05, local_rank=-1, logging_steps=50, max_grad_norm=1.0, max_seq_length=256, max_steps=-1, model_name_or_path='bert-base-cased', model_type='bert', n_gpu=0, no_cuda=False, num_train_epochs=3.0, output_dir='./output', overwrite_cache=True, overwrite_output_dir=True, per_gpu_eval_batch_size=8, per_gpu_train_batch_size=8, save_steps=50, seed=42, server_ip='', server_port='', tokenizer_name='', warmup_steps=0, weight_decay=0.0)
08/25/2020 10:39:24 - INFO - main - Creating features from dataset file at ./data
08/25/2020 10:39:24 - INFO - utils_ner - Writing example 0 of 9141
08/25/2020 10:39:42 - INFO - main - Saving features into cached file ./data/cached_train_bert-base-cased_256
08/25/2020 10:39:46 - INFO - main - ***** Running training *****
08/25/2020 10:39:46 - INFO - main - Num examples = 9141
08/25/2020 10:39:46 - INFO - main - Num Epochs = 3
08/25/2020 10:39:46 - INFO - main - Instantaneous batch size per GPU = 8
08/25/2020 10:39:46 - INFO - main - Total train batch size (w. parallel, distributed & accumulation) = 8
08/25/2020 10:39:46 - INFO - main - Gradient Accumulation steps = 1
08/25/2020 10:39:46 - INFO - main - Total optimization steps = 3429
Epoch: 0%| | 0/3 [00:00<?, ?it/sdyld: lazy symbol binding failed: Symbol not found: _PySlice_Unpack | 0/1143 [00:00<?, ?it/s]
Referenced from: /Users//ghSrc/transformers-ner_01/.venvpy36/lib/python3.6/site-packages/torch/lib/libtorch_python.dylib
Expected in: flat namespace

dyld: Symbol not found: _PySlice_Unpack
Referenced from: /Users//ghSrc/transformers-ner_01/.venvpy36/lib/python3.6/site-packages/torch/lib/libtorch_python.dylib
Expected in: flat namespace

zsh: abort python ./run_ner.py --data_dir ./data --model_type bert --model_name_or_path
(.venvpy36) ghSrc/transformers-ner_01 %
