Effi-EMP

This repository contains the code base of three separate GitHub repositories. We have only provided sample data and some of the models, such as BERT, BART, and RoBERTa, to showcase how the model works.

In particular, this repository contains the code and dataset access instructions for the EMNLP 2020 publication on understanding empathy expressed in text-based mental health support.

If this code or dataset helps you in your research, please cite the following publication:

@inproceedings{sharma2020empathy,
    title={A Computational Approach to Understanding Empathy Expressed in Text-Based Mental Health Support},
    author={Sharma, Ashish and Miner, Adam S and Atkins, David C and Althoff, Tim},
    year={2020},
    booktitle={EMNLP}
}

Introduction

We present a computational approach to understanding how empathy is expressed in online mental health platforms. We develop a novel unifying theoretically-grounded framework for characterizing the communication of empathy in text-based conversations. We collect and share a corpus of 10k (post, response) pairs annotated using this empathy framework with supporting evidence for annotations (rationales). We develop a multi-task RoBERTa-based bi-encoder model for identifying empathy in conversations and extracting rationales underlying its predictions. Experiments demonstrate that our approach can effectively identify empathic conversations. We further apply this model to analyze 235k mental health interactions and show that users do not self-learn empathy over time, revealing opportunities for empathy training and feedback.

For a quick overview, check out bdata.uw.edu/empathy. For a detailed description of our work, please read our EMNLP 2020 publication.
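
To make the modeling idea above concrete, here is a minimal, illustrative sketch of a multi-task RoBERTa-based bi-encoder. It is not the implementation in src/ (the class name, head design, and the assumed three empathy levels are ours), but it shows how a seeker-post encoder and a response encoder can jointly feed an empathy-level head and a per-token rationale head:

import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizerFast

class BiEncoderEmpathySketch(nn.Module):
    """Illustrative sketch only -- not the code shipped in src/."""

    def __init__(self, num_levels=3, hidden=768):
        super().__init__()
        # Bi-encoder: one RoBERTa encoder for the seeker post, one for the response.
        self.seeker_encoder = RobertaModel.from_pretrained("roberta-base")
        self.response_encoder = RobertaModel.from_pretrained("roberta-base")
        # Response tokens attend to the encoded seeker post for context.
        self.attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        # Task 1: empathy identification (num_levels=3 assumes levels 0-2).
        self.level_head = nn.Linear(hidden, num_levels)
        # Task 2: rationale extraction as per-token binary tagging.
        self.rationale_head = nn.Linear(hidden, 2)

    def forward(self, seeker_ids, seeker_mask, resp_ids, resp_mask):
        s = self.seeker_encoder(seeker_ids, attention_mask=seeker_mask).last_hidden_state
        r = self.response_encoder(resp_ids, attention_mask=resp_mask).last_hidden_state
        # Contextualize the response with the seeker post (padding positions masked out).
        ctx, _ = self.attn(query=r, key=s, value=s, key_padding_mask=(seeker_mask == 0))
        level_logits = self.level_head(ctx[:, 0])    # first response token as summary
        rationale_logits = self.rationale_head(ctx)  # one score pair per response token
        return level_logits, rationale_logits

if __name__ == "__main__":
    tok = RobertaTokenizerFast.from_pretrained("roberta-base")
    seeker = tok(["I feel like nobody understands me."], return_tensors="pt",
                 padding=True, truncation=True, max_length=64)
    resp = tok(["That sounds really hard. What happened?"], return_tensors="pt",
               padding=True, truncation=True, max_length=64)
    model = BiEncoderEmpathySketch()
    levels, rationales = model(seeker["input_ids"], seeker["attention_mask"],
                               resp["input_ids"], resp["attention_mask"])
    print(levels.shape, rationales.shape)  # (1, 3) and (1, response_length, 2)

During training, the two tasks would be combined as lambda_EI * (empathy identification loss) + lambda_RE * (rationale extraction loss), which is what the lambda_EI and lambda_RE arguments listed below control.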

Quickstart

1. Prerequisites

Our framework runs in a Python 3 environment. The modules used in our code can be installed using:

$ pip install -r requirements.txt

2. Prepare dataset

A sample raw input data file is available in dataset/sample_input_ER.csv. This file (and the other raw input files in the dataset folder) can be converted into a format recognized by the model using the following command:

$ python3 src/process_data.py --input_path dataset/sample_input_ER.csv --output_path dataset/sample_input_model_ER.csv
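
To sanity-check the conversion, the raw and processed files can be compared quickly with pandas (this assumes pandas is available in the environment; the paths match the command above):

import pandas as pd

raw = pd.read_csv("dataset/sample_input_ER.csv")
processed = pd.read_csv("dataset/sample_input_model_ER.csv")

# Compare the column layout before and after processing, and preview a few rows.
print("raw columns:      ", list(raw.columns))
print("processed columns:", list(processed.columns))
print(processed.head())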

3. Training the model

For training our model on the sample input data, run the following command:

$ python3 src/train.py \
	--train_path=dataset/sample_input_model_ER.csv \
	--lr=2e-5 \
	--batch_size=32 \
	--lambda_EI=1.0 \
	--lambda_RE=0.5 \
	--save_model \
	--save_model_path=output/sample_ER.pth

Note: You may need to create an output folder in the main directory before running this command.

4. Testing the model

For testing our model on the sample test input, run the following command:

$ python3 src/test.py \
	--input_path dataset/sample_test_input.csv \
	--output_path dataset/sample_test_output.csv \
	--ER_model_path output/sample_ER.pth \
	--IP_model_path output/sample_IP.pth \
	--EX_model_path output/sample_EX.pth

Training Arguments

The training script accepts the following arguments:

Argument           Type      Default value   Description
lr                 float     2e-5            Learning rate
lambda_EI          float     0.5             Weight of the empathy identification loss
lambda_RE          float     0.5             Weight of the rationale extraction loss
dropout            float     0.1             Dropout rate (changing it might vary the initial outcome)
max_len            int       64              Maximum sequence length
batch_size         int       32              Batch size
epochs             int       4               Number of epochs
seed_val           int       12              Seed value
train_path         str       ""              Path to the input training data
dev_path           str       ""              Path to the input validation data
test_path          str       ""              Path to the input test data
do_validation      boolean   False           If set to True, compute results on the validation data
do_test            boolean   False           If set to True, compute results on the test data
save_model         boolean   False           If set to True, save the trained model
save_model_path    str       ""              Path for saving the trained model
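
For example, to also evaluate on a held-out split during training, the do_validation and dev_path arguments can be added to the step 3 command. The short Python wrapper below is only a sketch: the dev-file name is hypothetical, and the boolean flags are passed the same way as --save_model above (adjust if train.py expects an explicit value).

import subprocess

cmd = [
    "python3", "src/train.py",
    "--train_path=dataset/sample_input_model_ER.csv",
    "--dev_path=dataset/sample_dev_model_ER.csv",  # hypothetical validation file
    "--do_validation",                             # assumed flag style, as with --save_model
    "--lr=2e-5",
    "--batch_size=32",
    "--lambda_EI=1.0",
    "--lambda_RE=0.5",
    "--save_model",
    "--save_model_path=output/sample_ER.pth",
]
subprocess.run(cmd, check=True)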

Dataset Access Instructions

The Reddit portion of our collected dataset is available inside the dataset folder. The csv files with annotations on the three empathy communication mechanisms are emotional-reactions-reddit.csv, interpretations-reddit.csv, and explorations-reddit.csv. Each csv file contains six columns:

sp_id: Seeker post identifier
rp_id: Response post identifier
seeker_post: A support seeking post from an online user
response_post: A response/reply posted in response to the seeker_post
level: Empathy level of the response_post in the context of the seeker_post
rationales: Portions of the response_post that serve as supporting evidence (rationales) for the identified empathy level. Multiple portions are delimited by '|'
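
A minimal sketch of loading one of these files and splitting the '|'-delimited rationales (assuming pandas is installed; the column names are the ones listed above):

import pandas as pd

# Any of the three annotation files can be read the same way.
df = pd.read_csv("dataset/emotional-reactions-reddit.csv")

for _, row in df.head(3).iterrows():
    print("Seeker post:  ", row["seeker_post"])
    print("Response post:", row["response_post"])
    print("Empathy level:", row["level"])
    # Rationales are '|'-delimited spans of the response post; the field can be empty.
    raw = row["rationales"]
    rationales = [] if pd.isna(raw) else [r for r in str(raw).split("|") if r.strip()]
    print("Rationales:   ", rationales)
    print()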

For access to the TalkLife portion of our dataset for non-commercial use, please contact the TalkLife team.

Training and Testing Instructions

An output folder for the trained models needs to be created first:

$ mkdir output

Note: All three models (ER, IP, and EX) need to be trained in order to get test results. A sample test input is also provided to show how testing works.

To run the provided train.sh / test.sh scripts, the file name in the script needs to be changed for each of the three models; a sketch of one way to script all three runs is given below.
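
The Python driver below prepares the data, trains each of the three models, and then runs a single test pass. This is only a sketch: it assumes the raw IP and EX input files follow the same naming pattern as dataset/sample_input_ER.csv, which may need adjusting to the actual file names in the dataset folder.

import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# ER = emotional reactions, IP = interpretations, EX = explorations.
for mech in ["ER", "IP", "EX"]:
    raw = f"dataset/sample_input_{mech}.csv"           # assumed naming pattern
    prepared = f"dataset/sample_input_model_{mech}.csv"
    model = f"output/sample_{mech}.pth"

    run(["python3", "src/process_data.py",
         "--input_path", raw, "--output_path", prepared])
    run(["python3", "src/train.py",
         f"--train_path={prepared}", "--lr=2e-5", "--batch_size=32",
         "--lambda_EI=1.0", "--lambda_RE=0.5",
         "--save_model", f"--save_model_path={model}"])

# Test with all three trained models, as in step 4 of the Quickstart.
run(["python3", "src/test.py",
     "--input_path", "dataset/sample_test_input.csv",
     "--output_path", "dataset/sample_test_output.csv",
     "--ER_model_path", "output/sample_ER.pth",
     "--IP_model_path", "output/sample_IP.pth",
     "--EX_model_path", "output/sample_EX.pth"])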
