dirkneuhaeuser / preposition-sense-disambiguation

Learning the meanings behind words is a key element of NLP. This project concentrates on the disambiguation of preposition senses: we train a BERT transformer model and surpass the state of the art.

License: MIT License

nlp natural-language-processing prepositions disambiguation huggingface huggingface-transformers sense-disambiguation machine-learning

preposition-sense-disambiguation's Introduction

New State-of-the-Art in Preposition Sense Disambiguation

Supervisor:

Institutions:

Project Description

The disambiguation of words is a central part of NLP tasks. In particular, the ambiguity of prepositions has been a problem in NLP for over a decade and still is. For example, the preposition 'in' can have a temporal (e.g. in 2021) or a spatial (e.g. in Frankfurt) meaning. A strong motivation behind learning these meanings is current research that attempts to transfer text to artificial scenes: a good understanding of the real meaning of prepositions is crucial for the machine to create matching scenes.

With the birth of the transformer models in 2017 [1], attention-based models have been pushing boundaries in many NLP disciplines. In particular, BERT, a transformer model by Google pre-trained on more than 3,000M words, obtained state-of-the-art results on many NLP tasks and corpora.

The goal of this project is to use modern transformer models to tackle the problem of preposition sense disambiguation. To that end, we trained a simple BERT model on the SemEval 2007 dataset [2], a central benchmark dataset for this task. To the best of our knowledge, the best proposed model for disambiguating the meanings of prepositions on SemEval achieves an accuracy of up to 88% [3], and more recent approaches have not surpassed this mark [4][5]. Our model achieves an accuracy of 90.84%, outperforming the current state of the art.

How to train

To meet our goals, we cleaned the SemEval 2007 dataset so that it only contains the needed information. It is included in the repository and can be found in ./data/training-data.tsv.

Train a BERT model:
First, install the dependencies from requirements.txt. Afterwards, you can train the BERT model with:

python3 trainer.py --batch-size 16 --learning-rate 1e-4 --epochs 4 --data-path "./data/training_data.tsv"

The hyper-parameters in the above example are tuned and already set by default. After training, the script saves the weights and config to a new folder ./model_save/. Feel free to skip this training step and use our trained weights directly.
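The repository's trainer.py is the authoritative training script; purely as an illustration, the following is a minimal sketch of how such a fine-tuning step could look with HuggingFace Transformers. The column names "sentence" and "label" are assumptions about the TSV layout, not taken from the repository, and the actual preprocessing in trainer.py may differ.

# Sketch only: fine-tune BERT for preposition sense classification.
# Assumed (hypothetical) TSV schema: a "sentence" column containing the
# <head>...</head> markup and a "label" column with the sense id.
import pandas as pd
import torch
from sklearn.preprocessing import LabelEncoder
from transformers import (BertForSequenceClassification, BertTokenizerFast,
                          Trainer, TrainingArguments)

df = pd.read_csv("./data/training_data.tsv", sep="\t")
labels = LabelEncoder().fit_transform(df["label"])

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encodings = tokenizer(list(df["sentence"]), truncation=True, padding=True)

class SenseDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(set(labels)))

args = TrainingArguments(output_dir="./model_save", num_train_epochs=4,
                         per_device_train_batch_size=16, learning_rate=1e-4)
Trainer(model=model, args=args,
        train_dataset=SenseDataset(encodings, labels)).train()
model.save_pretrained("./model_save")
tokenizer.save_pretrained("./model_save")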

Examples

We provide an example tagger, which can be used interactively: python3 -i tagger.py

Surround the preposition whose meaning you want to know with <head>...</head> and feed the sentence to the tagger:

>>> tagger.tag("I am <head>in</head> big trouble")
Predicted Meaning: Indicating a state/condition/form, often a mental/emotional one that is being experienced 

>>> tagger.tag("I am speaking <head>in</head> portuguese.")
Predicted Meaning: Indicating the language, medium, or means of encoding (e.g., spoke in German)

>>> tagger.tag("He is swimming <head>with</head> his hands.")
Predicted Meaning: Indicating the means or material used to perform an action or acting as the complement of similar participle adjectives (e.g., crammed with, coated with, covered with)

>>> tagger.tag("She blinked <head>with</head> confusion.")
Predicted Meaning: Because of / due to (the physical/mental presence of) (e.g., boiling with anger, shining with dew)
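For illustration only, the following is a minimal sketch of what a tag() call could do under the hood; the actual tagger.py may be implemented differently, and the ID2SENSE mapping shown here is a hypothetical placeholder rather than the real sense inventory.

# Sketch only: load the fine-tuned model from ./model_save/ and map the
# predicted class id to its sense gloss (placeholder mapping below).
import torch
from transformers import BertForSequenceClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("./model_save")
model = BertForSequenceClassification.from_pretrained("./model_save")
model.eval()

ID2SENSE = {0: "Indicating a state/condition/form ...", 1: "..."}  # placeholder

def tag(sentence: str) -> str:
    # The sentence already contains <head>...</head> around the target preposition.
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return ID2SENSE[int(logits.argmax(dim=-1))]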

References

[1] Vaswani, Ashish et al. (2017). Attention is all you need. Advances in neural information processing systems. P. 5998--6008.

[2] Litkowski, Kenneth C and Hargraves, Orin (2007). SemEval-2007 Task 06: Word-sense disambiguation of prepositions. Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007). P. 24--29

[3] Litkowski, Ken. (2013). Preposition disambiguation: Still a problem. CL Research, Damascus, MD.

[4] Gonen, Hila and Goldberg, Yoav. (2016). Semi supervised preposition-sense disambiguation using multilingual data. Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. P. 2718--2729

[5] Gong, Hongyu and Mu, Jiaqi and Bhat, Suma and Viswanath, Pramod (2018). Preposition Sense Disambiguation and Representation. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. P. 1510--1521


preposition-sense-disambiguation's Issues

Could you improve the state of the art once again?

@dirkneuhaeuser Thanks for making the world a better place; your classifier is extremely helpful for natural language understanding.
Unfortunately, 91% accuracy is still not really great for widespread use. I actively follow the evolution of transformers.
Your use of BERT was a great choice at the time since it is a strong baseline (I'll assume you already use BERT-large),
but there are now significantly better transformers than BERT, which generally bring a few percentage points of accuracy gain, and that difference can be major for enabling real-world use.
As such, I would love it if you could replace your BERT implementation with an XLNet one (best transformer out there) or with https://github.com/microsoft/MPNet (MPNet is an evolution of XLNet, although it might be significantly slower in terms of training time; XLNet, on the other hand, is relatively comparable to BERT regarding training time).

Another, lesser-known and complementary way to reach higher accuracy would be to use a better activation function (Mish), a better optimizer (RAdam), possibly with fallback optimizers (such as Lookahead), and methods such as gradient centralization. Each of those generally brings ~1-2% accuracy gains.
cf. https://github.com/lessw2020/Best-Deep-Learning-Optimizers
As for XLNet, it can in many cases bring a +5% accuracy gain over BERT-large.
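Swapping the backbone is fairly mechanical with HuggingFace's Auto classes; the minimal sketch below shows the idea. The checkpoint names in the comment are standard Hub identifiers rather than anything from this repository, and whether XLNet or MPNet actually improves accuracy on this task would have to be verified experimentally.

# Sketch only: build a sequence-classification model from any Hub checkpoint.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def build_model(checkpoint: str, num_senses: int):
    # e.g. checkpoint = "xlnet-base-cased" or "microsoft/mpnet-base"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=num_senses)
    return tokenizer, model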
