
ntua-slp-semeval2018's Introduction

Overview

This repository contains the source code of the models submitted by NTUA-SLP team in SemEval 2018 tasks 1, 2 and 3.

Prerequisites

Please follow the steps below in order to be able to train our models:

1 - Install Requirements

pip install -r ./requirements.txt

2 - Download our pre-trained word embeddings

The models were trained on top of word2vec embeddings pre-trained on a large collection of Twitter messages. We collected a dataset of 550M English Twitter messages posted from 12/2014 to 06/2017. We used Gensim's implementation of word2vec to train the embeddings and ekphrasis to preprocess the tweets. Finally, we used the following parameters for training the embeddings: window_size = 6, negative_sampling = 5 and min_count = 20.
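For reference, here is a minimal sketch of training such embeddings with Gensim, using the parameters above. The corpus file name and the 300-dimensional vector size are assumptions, not the actual training script:

from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# one preprocessed (ekphrasis-tokenized) tweet per line -- hypothetical file name
sentences = LineSentence("tweets_tokenized.txt")

model = Word2Vec(
    sentences,
    vector_size=300,  # named "size" in Gensim < 4.0; dimensionality is an assumption
    window=6,         # window_size = 6
    negative=5,       # negative_sampling = 5
    min_count=20,     # min_count = 20
    workers=4,
)
model.wv.save_word2vec_format("ntua_twitter_300.txt")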

We freely share our pre-trained embeddings:

Once downloaded, put the embeddings file in the /embeddings folder.

3 - Update model configs

Our model definitions are stored in a Python configuration file. Each config contains the model parameters and training settings such as the batch size, the number of epochs and the embeddings file. You should update the embeddings_file parameter in the model's configuration in model/params.py.
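For illustration, a config entry might look like the sketch below; the exact keys and values here are assumptions, so check model/params.py for the real names:

TASK1_CONFIG = {
    "name": "task1_model",
    "embeddings_file": "ntua_twitter_300.txt",  # the file you placed in /embeddings
    "batch_size": 64,
    "epochs": 50,
}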

Example - Sentiment Analysis on SemEval 2017 Task 4A

You can test that you have a working setup by training a sentiment analysis model on SemEval 2017 Task 4A, which is used as a source task for transfer learning in Task 1.

First, start the visdom server, which is needed for visualizing the training progress.

python -m visdom.server

Then just run the experiment.

python model/pretraining/sentiment2017.py

Documentation

TL;DR

  • If you only care about the source code of our deep-learning models, then look at the PyTorch modules in modules/nn/.
  • In particular, modules/nn/attention.py contains an implementation of a self-attention mechanism which supports multi-layer attention (see the sketch after this list).
  • The scripts for running an experiment are stored in model/taskX.
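Below is a condensed sketch of such a self-attention layer, in the spirit of modules/nn/attention.py but not the verbatim implementation. It scores each timestep with a small MLP (one Linear/Tanh/Dropout block per attention layer), masks the padded positions, and pools the RNN outputs with the normalized weights:

import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, hidden_size, layers=1, dropout=0.3):
        super().__init__()
        blocks = []
        for _ in range(layers - 1):  # extra blocks enable multi-layer attention
            blocks += [nn.Linear(hidden_size, hidden_size), nn.Tanh(), nn.Dropout(dropout)]
        blocks += [nn.Linear(hidden_size, 1), nn.Tanh(), nn.Dropout(dropout)]
        self.attention = nn.Sequential(*blocks)

    def forward(self, sequence, lengths):
        # sequence: (batch, timesteps, hidden_size), lengths: (batch,)
        scores = self.attention(sequence).squeeze(-1)  # (batch, timesteps)
        scores = torch.softmax(scores, dim=-1)
        # zero the attention weights of padded timesteps and re-normalize
        mask = (torch.arange(sequence.size(1), device=sequence.device).unsqueeze(0)
                < lengths.unsqueeze(1)).float()
        scores = scores * mask
        scores = scores / scores.sum(dim=-1, keepdim=True)
        # weighted sum of the timestep representations
        representations = (sequence * scores.unsqueeze(-1)).sum(dim=1)
        return representations, scores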

Project Structure

In order to make our codebase more accessible and easier to extend, we provide an overview of the structure of our project. The most important parts will be covered in greater detail.

  • datasets: contains the datasets for the pretraining (SemEval 2017 - Task 4A) and for tasks 1, 2 and 3
  • dataloaders: contains scripts for loading the datasets
  • embeddings: the folder where you should put the word embedding files.
  • logger: contains the source code for the Trainer class and the accompanying helper functions for experiment management, including experiment logging, checkpointing, an early-stopping mechanism and visualization of the training process (see the early-stopping sketch after this list).
  • model: experiment runner scripts (dataset loading, training pipeline etc).
    • pretraining: the scripts for training the TL models
    • task1: the scripts for running the models for Task 1
    • task2: the scripts for running the models for Task 2
    • task3: the scripts for running the models for Task 3
  • modules: the source code of the PyTorch deep-learning models and the baseline models.
    • nn: the source code of the PyTorch modules
    • sklearn: scikit-learn Transformers for implementing the baseline bag-of-words and neural bag-of-words models (a minimal NBOW sketch follows the note below)
  • out: this directory contains the generated model predictions and their corresponding attention files
  • predict: scripts for generating predictions from saved models.
  • trained: this is where all the model checkpoints are saved.
  • utils: contains helper functions
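For illustration, the early-stopping behaviour visible in the training logs can be sketched as follows; the class name and API are illustrative, not the actual Trainer interface:

class EarlyStopping:
    """Stop training when the monitored score has not improved for `patience` epochs."""
    def __init__(self, patience=10, mode="max"):
        self.patience = patience
        self.patience_left = patience
        self.best = None
        self.mode = mode

    def step(self, score):
        # returns True when training should stop
        improved = (self.best is None
                    or (self.mode == "max" and score > self.best)
                    or (self.mode == "min" and score < self.best))
        if improved:
            self.best = score
            self.patience_left = self.patience
        else:
            self.patience_left -= 1
        return self.patience_left < 0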

Note: Full documentation of the source code will be posted soon.
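In the meantime, here is a minimal sketch of the neural bag-of-words (NBOW) baseline mentioned above, written as a scikit-learn transformer; the class name and details are assumptions rather than the repository's actual code:

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class NBOWVectorizer(BaseEstimator, TransformerMixin):
    """Represent each tokenized text as the average of its word embeddings."""
    def __init__(self, word2vec, dim):
        self.word2vec = word2vec  # dict mapping word -> vector of length dim
        self.dim = dim

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # X: iterable of token lists; unknown words are simply skipped
        return np.array([
            np.mean([self.word2vec[w] for w in tokens if w in self.word2vec]
                    or [np.zeros(self.dim)], axis=0)
            for tokens in X
        ])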

ntua-slp-semeval2018's People

Contributors

agn-7, cbaziotis


ntua-slp-semeval2018's Issues

Testing working setup does not work

After following all prerequisites, starting the visdom server and trying to run python model/pretraining/sentiment2017.py I get the following error

Traceback (most recent call last):
  File "model/pretraining/sentiment2017.py", line 9, in <module>
    from config import DATA_DIR
ModuleNotFoundError: No module named 'config'

which makes complete sense, as there is no config.py file in that folder. I've tried moving the config.py file found in the root directory to that folder, and after running it again, it gives the following error

Running on:cpu
Traceback (most recent call last):
  File "model/pretraining/sentiment2017.py", line 10, in <module>
    from dataloaders.rest import load_data_from_dir
ModuleNotFoundError: No module named 'dataloaders'

I've then also moved sentiment2017.py to the root folder, and then run it from there with python sentiment2017.py, which sort of works. It starts up, takes about 5 minutes to get to ~4%, and then just hangs there as far as I can tell.

PreProcessing dataset SEMEVAL_2017_word_train...:   4%| | 2240/58761 [05:10<2:10:32,  7.22it/s]

Any idea what is happening? I am using Python 3.6 (and also had to install some packages that aren't included in requirements.txt); with Python 2.7 I get a different error. I'm also not sure why the instructions in the README don't work for me and I have to copy and move files around.
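A likely workaround, given how Python resolves imports, is to run the script as a module from the repository root, which puts the root on the path without moving any files:

python -m model.pretraining.sentiment2017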

Code takes too long to preprocess train data

No cache file for TASK3_A_word_train ...
PreProcessing dataset TASK3_A_word_train...: 2%|▏ | 63/3834 [00:00<00:40, 93.01it/s]

It keeps waiting at 2%.

This happens whether I run on GPU or CPU only. The gold set was very fast and took 3 minutes to process.

models are not saved

Hi,

Models for tasks 1 and 3 are not saved. The output is:


Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/visdom/__init__.py", line 388, in _send
    data=json.dumps(msg),
  File "/usr/lib/python3/dist-packages/requests/api.py", line 107, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 53, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 437, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /update (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f8b80177b00>: Failed to establish a new connection: [Errno 111] Connection refused',))

patience left:-1, best(0.768163003566172)
Early stopping...

How to train and evaluate the TASK2_A?

Hello

I've run TASK2_A with several embeddings files via its configuration (model/params.py) using the command below:

python3 model.task2.train.py

And I've got the following result:

Running on:cuda
loading word embeddings...
Didn't find embeddings cache file /content/gdrive/My Drive/app/sentiment/glove.twitter.27B.200d.txt
Indexing file /content/gdrive/My Drive/app/sentiment/glove.twitter.27B.200d.txt ...
{200}
Found 1193516 word vectors.
Reading twitter_2018 - 1grams ...
Reading twitter_2018 - 2grams ...
Reading twitter_2018 - 1grams ...
Building word-level datasets...
Loading TASK2_A_train from cache!
Total words: 8286649, Total unks:918613 (11.09%)
Unique words: 160897, Unique unks:81783 (50.83%)
Labels statistics:
{0: '21.72%', 1: '10.47%', 2: '10.28%', 3: '5.53%', 4: '4.98%', 5: '4.71%', 6: '4.31%', 7: '3.70%', 8: '3.44%', 9: '3.26%', 10: '3.25%', 11: '3.10%', 12: '2.80%', 13: '2.62%', 14: '2.73%', 15: '2.71%', 16: '2.64%', 17: '2.59%', 18: '2.68%', 19: '2.49%'}

Loading TASK2_A_trial from cache!
Total words: 814679, Total unks:77013 (9.45%)
Unique words: 40332, Unique unks:10207 (25.31%)
Labels statistics:
{0: '21.52%', 1: '10.56%', 2: '10.48%', 3: '5.77%', 4: '5.03%', 5: '4.63%', 6: '4.10%', 7: '3.79%', 8: '3.59%', 9: '3.34%', 10: '3.09%', 11: '3.06%', 12: '2.92%', 13: '2.69%', 14: '2.75%', 15: '2.50%', 16: '2.61%', 17: '2.56%', 18: '2.57%', 19: '2.43%'}

Loading TASK2_A_gold from cache!
Total words: 895048, Total unks:103962 (11.62%)
Unique words: 42546, Unique unks:11668 (27.42%)
Labels statistics:
{0: '21.60%', 1: '9.66%', 2: '9.07%', 3: '5.21%', 4: '7.43%', 5: '3.23%', 6: '3.99%', 7: '5.50%', 8: '3.10%', 9: '2.35%', 10: '2.86%', 11: '3.90%', 12: '2.53%', 13: '2.23%', 14: '2.61%', 15: '2.49%', 16: '2.31%', 17: '3.09%', 18: '4.83%', 19: '2.02%'}

Where is the F1-score result?
What did I miss?

problems with ekphrasis

Hi,

I'm trying to run your code on GPU by running the following command:
python ./model/pretraining/sentiment2017.py

but the following errors occur. Can you help me with that ?

Running on:cuda
Traceback (most recent call last):
  File "./model/pretraining/sentiment2017.py", line 17, in <module>
    from utils.train import define_trainer, model_training
  File "/root/ntua-slp-semeval2018/utils/train.py", line 21, in <module>
    from modules.nn.dataloading import WordDataset, CharDataset
  File "/root/ntua-slp-semeval2018/modules/nn/dataloading.py", line 7, in <module>
    from ekphrasis.classes.preprocessor import TextPreProcessor
  File "/usr/local/lib/python2.7/dist-packages/ekphrasis/classes/preprocessor.py", line 6, in <module>
    from ekphrasis.classes.exmanager import ExManager
  File "/usr/local/lib/python2.7/dist-packages/ekphrasis/classes/exmanager.py", line 19
    {print(k.lower(), ":", self.expressions[k])
         ^
SyntaxError: invalid syntax

Full Documentation of Source Code

Thank you for your contribution. Are you still planning to add the full documentation of the source code?

In particular, I am interested in understanding the difference between the scripts for task 1.

Pretraining Getting Stuck

I am running the pretraining code the way you suggested but it has been stuck at this point for 2 hours now. Is this supposed to take this long?

neilpaul77@NeilRig77:~/Downloads/ntua-slp-semeval2018$ python sentiment2017.py 
/home/neilpaul77/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
Running on:cuda
loading word embeddings...
Loaded word embeddings from cache.
Reading twitter_2018 - 1grams ...
Reading twitter_2018 - 2grams ...
Reading twitter_2018 - 1grams ...
Building word-level datasets...
Loading SEMEVAL_2017_word_train from cache!
Total words: 1435889, Total unks:9700 (0.68%)
Unique words: 45397, Unique unks:3602 (7.93%)
Labels statistics:
{'negative': '18.91%', 'neutral': '45.47%', 'positive': '35.62%'}

Loading SEMEVAL_2017_word_val from cache!
Total words: 75465, Total unks:521 (0.69%)
Unique words: 9191, Unique unks:198 (2.15%)
Labels statistics:
{'negative': '18.91%', 'neutral': '45.46%', 'positive': '35.63%'}

Initializing Embedding layer with pre-trained weights!
ModelWrapper(
  (feature_extractor): FeatureExtractor(
    (embedding): Embed(
      (embedding): Embedding(804871, 310)
      (dropout): Dropout(p=0.1)
      (noise): GaussianNoise (mean=0.0, stddev=0.2)
    )
    (encoder): RNNEncoder(
      (rnn): LSTM(310, 150, num_layers=2, batch_first=True, dropout=0.3, bidirectional=True)
      (drop_rnn): Dropout(p=0.3)
    )
    (attention): SelfAttention(
      (attention): Sequential(
        (0): Linear(in_features=300, out_features=300, bias=True)
        (1): Tanh()
        (2): Dropout(p=0.3)
        (3): Linear(in_features=300, out_features=1, bias=True)
        (4): Tanh()
        (5): Dropout(p=0.3)
      )
      (softmax): Softmax()
    )
  )
  (linear): Linear(in_features=300, out_features=3, bias=True)
)

IndexError: tuple index out of range

Hi, I get this error when running sentiment_2017.py
The embeddings loaded are those from ntua_twitter_300.txt

Could someone help me with that?
Here the error message:

Traceback (most recent call last):
  File "/home/ignacioalvarez/PycharmProjects/ntua-slp-semeval2018/model/pretraining/sentiment2017.py", line 39, in <module>
    monitor="val", label_transformer=transformer)
  File "/home/ignacioalvarez/PycharmProjects/ntua-slp-semeval2018/utils/train.py", line 338, in define_trainer
    **_config)
  File "/home/ignacioalvarez/PycharmProjects/ntua-slp-semeval2018/modules/nn/models.py", line 184, in __init__
    **kwargs)
  File "/home/ignacioalvarez/PycharmProjects/ntua-slp-semeval2018/modules/nn/models.py", line 85, in __init__
    embedding_dim=embeddings.shape[1],
IndexError: tuple index out of range

prediction of Task 3

Hi, I want to reproduce the results of Task 3.
I can now run the base_experiment of Task 3, but I don't know how to train and test.
Could you give me some instructions?
Thanks a lot.

Error when training the model for Task2A

I am trying to train the neural network model for Task2A (i.e., emoji prediction in English), from SemEval2018. However, when I run the file neural_experiment.py, located in model/task2/, I get the following error message:

File "./model/task2/neural_experiment.py", line 52, in <module>
    model_training(trainer, model_config["epochs"])

  File "../..\utils\train.py", line 387, in model_training
    trainer.train()

  File "../..\logger\training.py", line 520, in train
    self.train_loader(self.loaders["train"])

  File "../..\logger\training.py", line 488, in train_loader
    sample_batched)

  File "../..\utils\train.py", line 143, in pipeline
    outputs, attentions = nn_model(inputs, lengths)

  File "D:\WinPython\python-3.7.2.amd64\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)

  File "../..\modules\nn\models.py", line 203, in forward
    representations, attentions = self.feature_extractor(x, lengths)

  File "D:\WinPython\python-3.7.2.amd64\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)

  File "../..\modules\nn\models.py", line 140, in forward
    embeddings = self.embedding(x)

  File "D:\WinPython\python-3.7.2.amd64\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)

  File "../..\modules\nn\modules.py", line 148, in forward
    embeddings = self.embedding(x)

  File "D:\WinPython\python-3.7.2.amd64\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)

  File "D:\WinPython\python-3.7.2.amd64\lib\site-packages\torch\nn\modules\sparse.py", line 118, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)

  File "D:\WinPython\python-3.7.2.amd64\lib\site-packages\torch\nn\functional.py", line 1454, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)

RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.IntTensor instead (while checking arguments for embedding)

Do you have any idea what I should do next?
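A likely fix, given the error, is to cast the index tensor to long before the embedding lookup, since nn.Embedding expects LongTensor (int64) indices. A sketch of the workaround, not the repository's actual fix:

inputs = inputs.long()  # nn.Embedding requires int64 (Long) indices
outputs, attentions = nn_model(inputs, lengths)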

Error with load_embeddings.py "can't fix"

Kindly help me solve the problem with this error, which I receive when following the steps and running
python -m model.pretraining.sentiment2017

File "/utils/load_embeddings.py", line 93, in load_word_vectors
embeddings = numpy.array(embeddings, dtype=numpy.float32)
ValueError: setting an array element with a sequence.

Thanks
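A plausible cause, given the error, is a ragged embeddings list: some lines of the embeddings file carry a different number of values (e.g. a word2vec header line, or words containing spaces), so numpy cannot build a rectangular float array. A quick diagnostic sketch, assuming a word2vec-style text file (the path is an example):

expected_dim = None
with open("embeddings/ntua_twitter_300.txt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        dim = len(line.rstrip().split(" ")) - 1  # number of values minus the word itself
        if expected_dim is None:
            expected_dim = dim
        elif dim != expected_dim:
            print("line", i, "has", dim, "values, expected", expected_dim)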
