
Emotion Recognition in Conversation (ERC)


At the moment, we only use the text modality to classify the emotion of each utterance. The experiments were carried out on two datasets (i.e., MELD and IEMOCAP).

Watch a demo video!

Prerequisites

  1. An x86-64 Unix or Unix-like machine
  2. Python 3.8 or higher
  3. Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't interfere with the system Python.
  4. multimodal-datasets repo (submodule)
  5. pip install -r requirements.txt

EmoBERTa training

First configure the hyperparameters and the dataset in train-erc-text.yaml, and then run the command below in this directory. Running it in a virtual environment is recommended.

python train-erc-text.py

This will subsequently call train-erc-text-hp.py and train-erc-text-full.py.
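The real settings live in train-erc-text.yaml in this repo. As a purely hypothetical illustration of the kind of options that file holds (every key below is invented for this sketch, not taken from the actual file), it might look like:

```yaml
# Hypothetical sketch only -- consult train-erc-text.yaml for the real keys.
dataset: MELD              # or IEMOCAP
model: roberta-base        # or roberta-large
speaker_mode: upper        # prepend speaker names to each utterance
num_past_utterances: 1000  # how much past context to include
num_future_utterances: 1000
seeds: [0, 1, 2, 3, 4]     # results are averaged over five random seeds
```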

Results on the test split (weighted F1 scores)

| Model                                      | MELD  | IEMOCAP |
|--------------------------------------------|-------|---------|
| EmoBERTa, no past and future utterances    | 63.46 | 56.09   |
| EmoBERTa, only past utterances             | 64.55 | 68.57   |
| EmoBERTa, only future utterances           | 64.23 | 66.56   |
| EmoBERTa, both past and future utterances  | 65.61 | 67.42   |
| ↳ both, without speaker names              | 65.07 | 64.02   |

The numbers above are the means of five random-seed runs.

If you want to see more training and test details, check out ./results/.

If you want the trained checkpoints and associated artifacts, they are available for download as one large zip file.

Deployment

Huggingface

We have released two models on Hugging Face: emoberta-base and emoberta-large.

They are based on RoBERTa-base and RoBERTa-large, respectively, and were trained on both the MELD and IEMOCAP datasets. The deployed models are neither speaker-aware nor context-aware: they classify one utterance at a time (e.g., "I love you") without speaker information or previous utterances.
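For a quick local sanity check, the released checkpoints can be loaded through the Hugging Face transformers text-classification pipeline. This is a sketch, not code from this repo: the model id "tae898/emoberta-base" is assumed to mirror the Docker image names below, and the label set is taken from the example server response later in this README.

```python
# Sketch (not from the repo): classifying a single utterance with the
# released checkpoints via the `transformers` text-classification pipeline.
from typing import Dict

# The seven emotion labels the deployed models predict
# (taken from the example server response shown later in this README).
EMOTIONS = ["neutral", "joy", "surprise", "anger", "sadness", "disgust", "fear"]


def classify(text: str, model_id: str = "tae898/emoberta-base") -> Dict[str, float]:
    """Return a label -> probability mapping for one utterance."""
    from transformers import pipeline  # requires `pip install transformers`

    # top_k=None asks the pipeline for scores over all labels, not just the top one.
    clf = pipeline("text-classification", model=model_id, top_k=None)
    return {item["label"]: item["score"] for item in clf(text)[0]}
```

Note that, as described above, this scores one utterance in isolation; no speaker names or conversational context are fed to the model.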

Flask app

You can run the Flask RESTful server app either as a Docker container or directly as a Python script.

  1. Running the app as a Docker container (recommended).

    There are four images; pull the one you need:

    • docker run -it --rm -p 10006:10006 tae898/emoberta-base
    • docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-base-cuda
    • docker run -it --rm -p 10006:10006 tae898/emoberta-large
    • docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-large-cuda
  2. Running the app in your Python environment:

    This method is less recommended than the Docker one.

    First run pip install -r requirements-deploy.txt.
    app.py is a Flask RESTful server; its usage is:

    app.py [-h] [--host HOST] [--port PORT] [--device DEVICE] [--model-type MODEL_TYPE]

    For example:

    python app.py --host 0.0.0.0 --port 10006 --device cpu --model-type emoberta-base

Client

Once the app is running, you can send text to the server. First install the necessary packages (pip install -r requirements-client.txt) and then run client.py. The usage is:

client.py [-h] [--url-emoberta URL_EMOBERTA] --text TEXT

For example:

python client.py --text "Emotion recognition is so cool\!"

will give you:

{
    "neutral": 0.0049800905,
    "joy": 0.96399665,
    "surprise": 0.018937444,
    "anger": 0.0071516023,
    "sadness": 0.002021492,
    "disgust": 0.001495996,
    "fear": 0.0014167271
}
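What client.py does can be sketched in a few lines. Assumptions (not confirmed by this README): the server accepts a JSON POST with a "text" field at its root URL and replies with the emotion-to-score mapping shown above.

```python
# Sketch of a minimal client, assuming the server takes a JSON POST with a
# "text" field at its root URL and returns an emotion -> probability mapping.

def top_emotion(scores):
    """Return the (label, score) pair with the highest probability."""
    label = max(scores, key=scores.get)
    return label, scores[label]


def query_server(text, url="http://127.0.0.1:10006/"):
    import requests  # requires `pip install requests`

    response = requests.post(url, json={"text": text})
    response.raise_for_status()
    return response.json()


# Applied to the example response shown above:
example = {
    "neutral": 0.0049800905, "joy": 0.96399665, "surprise": 0.018937444,
    "anger": 0.0071516023, "sadness": 0.002021492, "disgust": 0.001495996,
    "fear": 0.0014167271,
}
print(top_emotion(example))  # ('joy', 0.96399665)
```

The scores form a probability distribution over the seven emotions, so picking the argmax gives the predicted label.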

Troubleshooting

The best way to find and solve problems is to check the GitHub issues tab. If you can't find what you need, feel free to open an issue. We are pretty responsive.

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Run make style && make quality in the repo root directory to ensure code quality.
  4. Commit your Changes (git commit -m 'Add some AmazingFeature')
  5. Push to the Branch (git push origin feature/AmazingFeature)
  6. Open a Pull Request

Cite our work

Check out the paper.

@misc{kim2021emoberta,
      title={EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa}, 
      author={Taewoon Kim and Piek Vossen},
      year={2021},
      eprint={2108.12009},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}


License

MIT

Contributors

tae898, khanhvu207
