This system is part of a paper accepted at the NAACL-HLT 2016 conference. See the paper here: http://arxiv.org/pdf/1603.01360v1.pdf
John  Smith  went  to  Pittsburgh  .
PER   PER    O     O   LOC         O
Corresponding sequence of operations (generated by convert-conll2trans.pl):
SHIFT
SHIFT
REDUCE(PER)
OUT
OUT
SHIFT
REDUCE(LOC)
OUT
- buffer - sequence of tokens, read from left to right
- stack - working memory
- output buffer - sequence of labeled segments constructed from left to right
SHIFT
- move one word from the buffer to the top of the stack
REDUCE(X)
- all words on the stack are popped, combined to form a segment, labeled with X, and copied to the output buffer
OUT
- move one token from the buffer directly to the output buffer
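The transition system above can be sketched in a few lines of Python. This is an illustrative simulation, not part of the released code; it uses the simplified per-token labels from the example sentence.

```python
# Simulate the SHIFT / REDUCE(X) / OUT transition system:
# a buffer of tokens, a stack (working memory), and an output
# list of labeled segments.
def apply_transitions(tokens, actions):
    buffer = list(tokens)   # tokens, read left to right
    stack = []              # working memory
    output = []             # labeled segments
    for action in actions:
        if action == "SHIFT":
            # move one word from the buffer to the top of the stack
            stack.append(buffer.pop(0))
        elif action == "OUT":
            # move one token from the buffer directly to the output
            output.append((buffer.pop(0), "O"))
        elif action.startswith("REDUCE("):
            # pop the whole stack, form a segment labeled X
            label = action[len("REDUCE("):-1]
            output.append((" ".join(stack), label))
            stack.clear()
    return output

print(apply_transitions(
    "John Smith went to Pittsburgh .".split(),
    ["SHIFT", "SHIFT", "REDUCE(PER)",
     "OUT", "OUT", "SHIFT", "REDUCE(LOC)", "OUT"]))
# [('John Smith', 'PER'), ('went', 'O'), ('to', 'O'),
#  ('Pittsburgh', 'LOC'), ('.', 'O')]
```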
Datasets are in /usr0/home/kkawakam/conll2003
Convert the CoNLL format to NER actions (convert-conll2trans.pl), then convert the result to a parser-friendly format (conll2parser.py):
perl convert-conll2trans.pl conll2003/train > conll2003/train.trans
python conll2parser.py -f conll2003/train.trans > conll2003/train.parser
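The idea behind the first conversion step can be sketched as follows. This is a rough Python illustration of mapping a per-token label sequence to transitions, using the simplified labels from the example above; the actual convert-conll2trans.pl reads the CoNLL 2003 IOB tag scheme and handles cases this sketch ignores (e.g. adjacent entities with the same label).

```python
# Map a per-token label sequence to SHIFT / REDUCE(X) / OUT actions.
# Illustrative only; assumes simplified labels like "PER" / "O".
def tags_to_transitions(tags):
    actions = []
    prev = None
    for tag in tags:
        if tag == "O":
            if prev not in (None, "O"):
                actions.append("REDUCE(%s)" % prev)  # close open segment
            actions.append("OUT")
        else:
            if prev not in (None, "O") and prev != tag:
                actions.append("REDUCE(%s)" % prev)  # label changed
            actions.append("SHIFT")
        prev = tag
    if prev not in (None, "O"):
        actions.append("REDUCE(%s)" % prev)          # close final segment
    return actions

print(tags_to_transitions(["PER", "PER", "O", "O", "LOC", "O"]))
# ['SHIFT', 'SHIFT', 'REDUCE(PER)', 'OUT', 'OUT',
#  'SHIFT', 'REDUCE(LOC)', 'OUT']
```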
The first time you clone the repository, you need to sync the cnn/ submodule.
git submodule init
git submodule update
mkdir build
cd build
cmake .. -DEIGEN3_INCLUDE_DIR=/path/to/eigen
make -j2
./lstm-parse -T /usr0/home/kkawakam/conll2003/train.parser -d /usr0/home/kkawakam/conll2003/dev.parser --hidden_dim 100 --lstm_input_dim 100 -w /usr3/home/lingwang/chris/sskip.100.vectors --pretrained_dim 100 --rel_dim 20 --action_dim 20 --input_dim 100 -t -S -D 0.3 > logNERYesCharNoPosYesEmbeddingsD0.3.txt &
./lstm-parse -T /usr0/home/kkawakam/conll2003/train.parser -d /usr0/home/kkawakam/conll2003/test.parser --hidden_dim 100 --lstm_input_dim 100 -w /usr3/home/lingwang/chris/sskip.100.vectors --pretrained_dim 100 --rel_dim 20 --action_dim 20 --input_dim 100 -m latest_model -S > output.txt
python attach_prediction.py -p output.txt -t /usr0/home/kkawakam/conll2003/test -o evaloutput.txt
Attach your predictions to the test file and evaluate with conlleval:
python attach_prediction.py -p (prediction) -t /path/to/conll2003/test -o (output file)
./conlleval < (output file)
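For reference, the entity-level scoring that conlleval performs can be sketched like this. It is an illustrative simplification using the per-token labels from the example above (the official conlleval script, which handles the IOB scheme, remains the authoritative scorer): extract labeled spans, then compute precision, recall, and F1 over exact span matches.

```python
# Extract (start, end, label) spans from a per-token label sequence.
def spans(tags):
    out, i = set(), 0
    while i < len(tags):
        if tags[i] == "O":
            i += 1
            continue
        j = i
        while j < len(tags) and tags[j] == tags[i]:
            j += 1
        out.add((i, j, tags[i]))
        i = j
    return out

# Entity-level F1: a predicted span counts only if it exactly
# matches a gold span in both boundaries and label.
def f1(gold, pred):
    g, p = spans(gold), spans(pred)
    tp = len(g & p)
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = ["PER", "PER", "O", "O", "LOC", "O"]
print(f1(gold, gold))                                  # 1.0
print(f1(gold, ["PER", "PER", "O", "O", "O", "O"]))    # misses LOC
```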