
HET-MC

This is the implementation of Summarizing Medical Conversations via Identifying Important Utterances at COLING 2020.

If you have any questions, you can e-mail Yuanhe Tian at [email protected] or Guimin Chen at [email protected].

Citation

If you use or extend our work, please cite our paper at COLING 2020.

@inproceedings{song-etal-2020-summarizing,
    title = "Summarizing Medical Conversations via Identifying Important Utterances",
    author = "Song, Yan and Tian, Yuanhe and Wang, Nan and Xia, Fei",
    booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
    month = dec,
    year = "2020",
    address = "Barcelona, Spain (Online)",
    pages = "717--729",
}

Requirements

Our code works with the following environment.

  • python=3.7
  • pytorch=1.3
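
If it helps, here is a minimal environment setup sketch (assuming conda; adjust the PyTorch build to your CUDA version):

    # create and activate a Python 3.7 environment
    conda create -n het-mc python=3.7
    conda activate het-mc
    # install PyTorch 1.3 (choose the build that matches your CUDA version)
    conda install pytorch=1.3 -c pytorch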

Dataset

To obtain the data, please refer to the data_preprocessing directory for details.

Downloading BERT, ZEN and HET-MC

In our paper, we use BERT and ZEN as the encoder.

For BERT, please download pre-trained BERT-Base Chinese from Google or from HuggingFace. If you download it from Google, you need to convert the model from the TensorFlow format to the PyTorch format (a conversion sketch is shown below).
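
One way to do the conversion is the checkpoint conversion utility shipped with HuggingFace Transformers. The sketch below assumes the standard chinese_L-12_H-768_A-12 archive from Google and that the transformers package is installed; paths are placeholders:

    # convert the Google TensorFlow checkpoint to a PyTorch checkpoint
    export BERT_BASE_DIR=./chinese_L-12_H-768_A-12
    transformers-cli convert --model_type bert \
      --tf_checkpoint $BERT_BASE_DIR/bert_model.ckpt \
      --config $BERT_BASE_DIR/bert_config.json \
      --pytorch_dump_output $BERT_BASE_DIR/pytorch_model.bin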

For ZEN, you can download the pre-trained model from here.

For HET-MC, you can download the models we trained in our experiments from here (passcode: b1w1).

Run on Sample Data

Run run_sample.sh to train a model on the small sample data under the sample_data directory.
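
For example, from the repository root:

    # train and evaluate on the toy data under the sample_data directory
    bash run_sample.sh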

Training and Testing

You can find the command lines to train and test models in run.sh.

Here are some important parameters (an example command combining them is sketched after the list):

  • --do_train: train the model.
  • --do_test: test the model.
  • --use_bert: use BERT as token encoder.
  • --use_zen: use ZEN as token encoder.
  • --bert_model: the directory of pre-trained BERT/ZEN model.
  • --use_memory: use memories.
  • --utterance_encoder: the utterance encoder to be used (should be one of none, LSTM, and biLSTM).
  • --lstm_hidden_size: the size of hidden state in the LSTM/biLSTM utterance encoder.
  • --decoder: the decoder to be used (can be either crf or softmax).
  • --use_party: use the speaker role information.
  • --use_department: use the department information.
  • --use_disease: use the disease information.
  • --model_name: the name of model to save.
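
Below is a sketch of how these flags might be combined. The script name main.py and the concrete values are placeholders, so please copy the real invocations from run.sh:

    # training with BERT, memories, a biLSTM utterance encoder, and a CRF decoder
    # (main.py and the values below are illustrative; see run.sh for the actual commands)
    python main.py --do_train \
        --use_bert --bert_model /path/to/bert-base-chinese \
        --use_memory --utterance_encoder biLSTM --lstm_hidden_size 200 \
        --decoder crf --use_party --use_department --use_disease \
        --model_name het_mc_sample

    # testing the saved model
    python main.py --do_test \
        --use_bert --bert_model /path/to/bert-base-chinese \
        --model_name het_mc_sample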

To-do List

  • Release the code to get the data.
  • Regular maintenance.

If you want us to implement any additional functions, you can leave comments in the Issues section.
