
HiGRU: Hierarchical Gated Recurrent Units for Utterance-level Emotion Recognition

This is the PyTorch implementation of HiGRU: Hierarchical Gated Recurrent Units for Utterance-level Emotion Recognition, published at NAACL 2019. [paper]

Brief Introduction

In this paper, we address three challenges in utterance-level emotion recognition for dialogue systems: (1) the same word can convey different emotions in different contexts; (2) some emotions are rarely seen in general dialogues; (3) long-range contextual information is hard to capture effectively. We therefore propose a hierarchical Gated Recurrent Unit (HiGRU) framework with a lower-level GRU that models the word-level inputs and an upper-level GRU that captures the contexts of utterance-level embeddings. Moreover, we extend the framework to two variants, HiGRU with individual features fusion (HiGRU-f) and HiGRU with self-attention and features fusion (HiGRU-sf), so that the word/utterance-level individual inputs and the long-range contextual information can be sufficiently utilized. Experiments on three dialogue emotion datasets, IEMOCAP, Friends, and EmotionPush, demonstrate that our proposed HiGRU models attain at least 8.7%, 7.5%, and 6.0% improvement over the state-of-the-art methods on each dataset, respectively. In particular, by utilizing only the textual feature in IEMOCAP, our HiGRU models gain at least 3.8% improvement over the state-of-the-art conversational memory network (CMN), which uses the trimodal features of text, video, and audio.

Figure 1: The framework of HiGRU.
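
For orientation only, the two-level structure can be sketched in PyTorch roughly as below. This is our simplified illustration (the module names, the pooling step, and the hyper-parameters are our own choices), not the repository's actual model code; the HiGRU-f and HiGRU-sf variants additionally fuse the individual word/utterance inputs and apply self-attention, which is omitted here.

import torch.nn as nn

class HiGRUSketch(nn.Module):
    """Two-level GRU sketch: a word-level GRU encodes each utterance,
    an utterance-level GRU encodes the dialogue context (illustrative only)."""
    def __init__(self, vocab_size, d_word=300, d_h1=300, d_h2=300, n_classes=6):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, d_word)
        # Lower-level GRU over the words of one utterance
        self.word_gru = nn.GRU(d_word, d_h1, batch_first=True, bidirectional=True)
        # Upper-level GRU over the utterance embeddings of one dialogue
        self.utt_gru = nn.GRU(2 * d_h1, d_h2, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * d_h2, n_classes)

    def forward(self, dialogue):
        # dialogue: LongTensor of word ids, shape (n_utterances, max_words)
        words = self.embedding(dialogue)                     # (n_utt, max_words, d_word)
        word_states, _ = self.word_gru(words)                # (n_utt, max_words, 2*d_h1)
        utt_embs = word_states.max(dim=1).values             # pool words into utterance embeddings
        utt_states, _ = self.utt_gru(utt_embs.unsqueeze(0))  # (1, n_utt, 2*d_h2)
        return self.classifier(utt_states.squeeze(0))        # per-utterance emotion logits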

Code Base

Dataset

Please find the datasets via the following links:

  • Friends: Friends comes from the transcripts of the Friends TV sitcom, where each dialogue in the dataset corresponds to a scene with multiple speakers.
  • EmotionPush: EmotionPush comes from private conversations between friends on Facebook Messenger, collected by an app called EmotionPush.
  • IEMOCAP: IEMOCAP contains approximately 12 hours of audiovisual data, including video, speech, facial motion capture, and text transcriptions.

The HiGRUs storage also provides a full collection of the three datasets in .json format, preprocessed by us.

Prerequisites

  • Python v3.6
  • PyTorch v0.4.0–v0.4.1
  • Pickle

Data Preprocessing

Each dataset needs to be preprocessed with the Preprocess.py script, for example:

python Preprocess.py -emoset Friends -min_count 2 -max_seq_len 60

The arguments -emoset, -min_count, and -max_seq_len specify the dataset name, the minimum word frequency used when building the vocabulary, and the maximum sequence length for padding or truncating sentences.
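
As a rough illustration of how -min_count and -max_seq_len typically behave (this is our own sketch, not the actual code in Preprocess.py):

from collections import Counter

def build_vocab(tokenized_utterances, min_count=2):
    # Keep only words that appear at least min_count times; everything else maps to <unk>.
    counts = Counter(w for utt in tokenized_utterances for w in utt)
    vocab = {"<pad>": 0, "<unk>": 1}
    for word, c in counts.items():
        if c >= min_count:
            vocab[word] = len(vocab)
    return vocab

def pad_or_truncate(utterance, vocab, max_seq_len=60):
    # Convert words to ids, then cut or pad the sequence to exactly max_seq_len.
    ids = [vocab.get(w, vocab["<unk>"]) for w in utterance][:max_seq_len]
    return ids + [vocab["<pad>"]] * (max_seq_len - len(ids))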

Pre-trained Word Embeddings

To reproduce the results reported in the paper, please adopt the pre-trained word embeddings for initialization. You can download the 300-dimensional embeddings from the links below:

Decompress the files and rename them word2vec300.bin and glove300.txt, respectively.
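
As a hedged example of how 300-dimensional GloVe vectors are commonly loaded into an embedding matrix for initialization (the function below is our own illustration, not the repository's code):

import numpy as np

def load_glove(path, vocab, dim=300):
    # Random initialization; rows of words found in glove300.txt are overwritten.
    matrix = np.random.uniform(-0.1, 0.1, (len(vocab), dim)).astype("float32")
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, values = parts[0], parts[1:]
            if word in vocab and len(values) == dim:
                matrix[vocab[word]] = np.asarray(values, dtype="float32")
    return matrix  # e.g., nn.Embedding.from_pretrained(torch.from_numpy(matrix))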

Train

You can run the exec_emo.sh file in Bash as:

bash exec_emo.sh

Or you can set up the model parameters yourself:

python EmoMain.py \
-lr 2e-4 \
-gpu 0 \
-type higru-sf \
-d_h1 300 \
-d_h2 300 \
-report_loss 720 \
-data_path Friends_data.pt \
-vocab_path Friends_vocab.pt \
-emodict_path Friends_emodict.pt \
-tr_emodict_path Friends_tr_emodict.pt \
-dataset Friends \
-embedding Friends_embedding.pt

More Details:

  • The implementation supports both CPU and GPU (but only a single GPU). You need to specify the GPU device number in the arguments; otherwise the model is trained on the CPU (see the sketch after this list).
  • There are three modes in this implementation, i.e., higru, higru-f, and higru-sf, as described in the paper. You can select one of them with the argument -type.
  • The default sizes of the hidden states in the GRUs are 300, but smaller values also work well (larger ones may lead to over-fitting).
  • You need to load the data produced by Preprocess.py, e.g., Friends_data.pt, Friends_vocab.pt, Friends_emodict.pt, and Friends_tr_emodict.pt, as well as specify the dataset name Friends.
  • The argument -embedding is optional: you can load the embeddings saved by the first run; otherwise the implementation initializes them every time, which is time-consuming (see the sketch after this list).
  • There are some other arguments in EmoMain.py, e.g., the decay rate of the learning rate and the patience for early stopping. You can find and change them if necessary.
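
As mentioned in the list above, the following sketch shows the kind of device-selection and embedding-caching logic these options imply. It is our own illustration (including the hypothetical build_embedding_from_pretrained helper), not the actual code in EmoMain.py:

import os
import torch

def prepare(args, vocab):
    # Single-GPU selection; fall back to CPU if no GPU id is given.
    if args.gpu is not None:
        os.environ["CUDA_VISIBLE_DEVICES"] = str(args.gpu)  # os.environ requires a string
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")

    # Reuse cached embeddings if available; otherwise build and save them once.
    if args.embedding and os.path.exists(args.embedding):
        embedding = torch.load(args.embedding)
    else:
        embedding = build_embedding_from_pretrained(vocab)  # hypothetical helper
        torch.save(embedding, args.dataset + "_embedding.pt")
    return device, embedding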

Public Impact

Citation

Please kindly cite our paper:

@inproceedings{jiao2019higru,
  title     = {HiGRU: Hierarchical Gated Recurrent Units for Utterance-Level Emotion Recognition},
  author    = {Jiao, Wenxiang and Yang, Haiqin and King, Irwin and Lyu, Michael R},
  booktitle = {Proceedings of the 2019 Conference of the North American Chapter 
               of the Association for Computational Linguistics: 
               Human Language Technologies, Volume 1 (Long and Short Papers)},
  pages     = {397--406},
  year      = {2019}
}

Interesting Variants of HiGRUs

Below we list a selection of HiGRU variants developed by other researchers:

  • Bert-HiGRU: Keeping Up Appearances: Computational Modeling of Face Acts in Persuasion Oriented Discussions. [paper]
  • HiTransformer: Hierarchical Transformer Network for Utterance-level Emotion Recognition. [paper]
  • HAN-ReGRU: Hierarchical attention network with residual gated recurrent unit for emotion recognition in conversation. [paper]


higrus's Issues

Some questions.

Hi Wenxiang,
I have a small question about your paper.
What does the "(2.0)" in "69.0(2.0)" mean? Does it mean that the WA is in the range [69-2, 69+2]? If so, what causes that range?
Thanks for your help!

TypeError: str expected, not int when running EmoMain

Traceback (most recent call last):
File "C:/Users/18501/Desktop/孙露/HiGRUs-master/EmoMain.py", line 132, in
main()
File "C:/Users/18501/Desktop/孙露/HiGRUs-master/EmoMain.py", line 108, in main
focus_emo=focus_emo)
File "C:\Users\18501\Desktop\孙露\HiGRUs-master\EmoTrain.py", line 60, in emotrain
os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
File "C:\Users\18501\anaconda3\envs\HiGRUs-master\lib\os.py", line 674, in setitem
value = self.encodevalue(value)
File "C:\Users\18501\anaconda3\envs\HiGRUs-master\lib\os.py", line 730, in check_str
raise TypeError("str expected, not %s" % type(value).name)
TypeError: str expected, not int
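
The traceback indicates that args.gpu is parsed as an integer while os.environ values must be strings; a likely workaround (our suggestion, not an official fix from the maintainer) is to cast the value before assignment:

# EmoTrain.py, around the line in the traceback: cast the GPU id to a string
os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu)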

About version

Hi, can you tell me the versions of all packages, such as PyTorch? Thanks.
