
Slot filling and intent detection tasks of spoken language understanding

  • An implementation of the "focus" part of the paper "Encoder-decoder with focus-mechanism for sequence labelling based spoken language understanding".
  • An implementation of BLSTM-CRF based on jiesutd/NCRFpp.
  • An implementation of joint training of the slot filling and intent detection tasks (Bing Liu and Ian Lane, 2016).
  • Tutorials on ATIS and SNIPS datasets.
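The joint training mentioned above follows the Liu and Lane (2016) recipe: one shared encoder feeds an intent classifier and a slot tagger, and the two cross-entropy losses are summed. A pure-Python toy sketch of that objective (the repository's real models are PyTorch networks; function names here are illustrative assumptions):

```python
# Hedged sketch of a joint-training objective (Liu and Lane, 2016 style):
# total loss = intent cross-entropy + per-token slot cross-entropies.
import math

def cross_entropy(probs, gold_index):
    """Negative log-probability assigned to the gold class."""
    return -math.log(probs[gold_index])

def joint_loss(intent_probs, gold_intent, slot_probs_per_token, gold_slots):
    """Sum the intent loss with one slot loss per input token."""
    loss = cross_entropy(intent_probs, gold_intent)
    for probs, gold in zip(slot_probs_per_token, gold_slots):
        loss += cross_entropy(probs, gold)
    return loss
```

Because the two losses share the encoder, gradients from both tasks shape the same representation, which is the point of joint training.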

(Figure: data annotation example)

Setup

  • PyTorch 1.0
  • Python 3.6.x
  • pip install gpustat (if a GPU is used)

About the evaluation of intent detection on the ATIS and SNIPS datasets

The two datasets differ in their intent labelling: an ATIS utterance may carry multiple intents, while each SNIPS utterance has exactly one. For example, "show me all flights and fares from denver to san francisco" <=> atis_flight && atis_airfare. Therefore, a commonly used trick is applied in the training and evaluation stages for intent detection on ATIS.

NOTE: Following the paper "What is left to be understood in ATIS?", almost all work on ATIS takes the first intent as the label to train a "softmax" intent classifier. In the evaluation stage, a prediction is counted as correct if the predicted intent is one of the multiple gold intents.
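The trick above can be sketched in a few lines. The `&&` separator follows the example "atis_flight && atis_airfare"; the function names are illustrative assumptions, not the repository's actual code:

```python
# Hedged sketch of the ATIS multi-intent training/evaluation trick.

def first_intent(gold_label: str) -> str:
    """Training label: keep only the first intent of a multi-intent utterance."""
    return gold_label.split("&&")[0].strip()

def intent_is_correct(predicted: str, gold_label: str) -> bool:
    """Evaluation: a prediction counts as correct if it matches ANY gold intent."""
    gold_intents = {i.strip() for i in gold_label.split("&&")}
    return predicted in gold_intents
```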

TODO:

  • Add char-embeddings

Tutorials A: Slot filling and intent detection with pretrained word embeddings

  1. Pretrained word embeddings come from the CNN-BLSTM language models of ELMo, in which word embeddings are modelled by char-CNNs. We extract the pretrained word embeddings for the ATIS and SNIPS datasets by:

  python3 scripts/get_ELMo_word_embedding_for_a_dataset.py \
          --in_files data/atis-2/{train,valid,test} \
          --output_word2vec local/word_embeddings/elmo_1024_cased_for_atis.txt
  python3 scripts/get_ELMo_word_embedding_for_a_dataset.py \
          --in_files data/snips/{train,valid,test} \
          --output_word2vec local/word_embeddings/elmo_1024_cased_for_snips.txt
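The `--output_word2vec` flag suggests the extracted files use word2vec text format (a header line with vocabulary size and dimension, then one "word v1 v2 ... vN" line per word); that format is an assumption here, since the repository does not spell it out. A minimal stdlib sketch of loading such a file into a lookup dict:

```python
# Hedged sketch: load word2vec-text embeddings into a {word: vector} dict.
from io import StringIO

def load_word2vec_text(fh):
    vocab_size, dim = map(int, fh.readline().split())
    embeddings = {}
    for line in fh:
        parts = line.rstrip().split()
        word, vec = parts[0], [float(x) for x in parts[1:]]
        assert len(vec) == dim, "dimension mismatch against the header"
        embeddings[word] = vec
    assert len(embeddings) == vocab_size, "vocab size mismatch against the header"
    return embeddings

# Tiny in-memory example instead of the real 1024-dim ELMo file:
sample = StringIO("2 3\nflight 0.1 0.2 0.3\ndenver 0.4 0.5 0.6\n")
emb = load_word2vec_text(sample)
```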
  2. Run scripts of training and evaluation at each epoch.
  • BLSTM model:
bash run/atis_with_pretrained_word_embeddings.sh slot_tagger
bash run/snips_with_pretrained_word_embeddings.sh slot_tagger
  • BLSTM-CRF model:
bash run/atis_with_pretrained_word_embeddings.sh slot_tagger_with_crf
bash run/snips_with_pretrained_word_embeddings.sh slot_tagger_with_crf
  • Enc-dec focus model (BLSTM-LSTM), the same as the Encoder-Decoder NN (with aligned inputs) of Liu and Lane (2016):
bash run/atis_with_pretrained_word_embeddings.sh slot_tagger_with_focus
bash run/snips_with_pretrained_word_embeddings.sh slot_tagger_with_focus
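The per-epoch evaluation in these scripts reports slot F1 over BIO-tagged spans. A stdlib sketch of span-level F1 in the conlleval style, which the scripts' metric is assumed to follow (the repository's own evaluation code may differ in details):

```python
# Hedged sketch of span-level slot F1 over BIO tag sequences.

def extract_spans(tags):
    """Return the set of (start, end, label) chunks in a BIO sequence."""
    spans, start, label = set(), None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" flushes the last span
        if tag.startswith("B-") or tag == "O":
            if label is not None:
                spans.add((start, i, label))
                start, label = None, None
            if tag.startswith("B-"):
                start, label = i, tag[2:]
        elif tag.startswith("I-") and label != tag[2:]:
            # An I- tag with no matching open span starts a new one.
            if label is not None:
                spans.add((start, i, label))
            start, label = i, tag[2:]
    return spans

def slot_f1(gold_seqs, pred_seqs):
    """Micro-averaged F1: a span counts only if boundaries AND label match."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_seqs, pred_seqs):
        g, p = extract_spans(gold), extract_spans(pred)
        tp += len(g & p); fp += len(p - g); fn += len(g - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```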

Tutorials B: Slot filling and intent detection with ELMo

  1. Run scripts of training and evaluation at each epoch.
  • BLSTM model:
bash run/atis_with_elmo.sh slot_tagger
bash run/snips_with_elmo.sh slot_tagger
  • BLSTM-CRF model:
bash run/atis_with_elmo.sh slot_tagger_with_crf
bash run/snips_with_elmo.sh slot_tagger_with_crf
  • Enc-dec focus model (BLSTM-LSTM), the same as the Encoder-Decoder NN (with aligned inputs) of Liu and Lane (2016):
bash run/atis_with_elmo.sh slot_tagger_with_focus
bash run/snips_with_elmo.sh slot_tagger_with_focus

Tutorials C: Slot filling and intent detection with BERT

  1. Model architectures:

  (Figure: bert_SLU_simple)

  • Our BERT + BLSTM (BLSTM-CRF / Enc-dec focus):

  (Figure: bert_SLU_complex)

  2. Run scripts of training and evaluation at each epoch.
  • BLSTM model:
bash run/atis_with_bert.sh slot_tagger
bash run/snips_with_bert.sh slot_tagger
  • BLSTM-CRF model:
bash run/atis_with_bert.sh slot_tagger_with_crf
bash run/snips_with_bert.sh slot_tagger_with_crf
  • Enc-dec focus model (BLSTM-LSTM), the same as the Encoder-Decoder NN (with aligned inputs) of Liu and Lane (2016):
bash run/atis_with_bert.sh slot_tagger_with_focus
bash run/snips_with_bert.sh slot_tagger_with_focus

Tutorials D: Slot filling and intent detection with XLNet [ToDo]

Results:

  • For the "NLU + BERT" models, hyper-parameters are not carefully tuned.
  1. Results of ATIS:

    | models | intent Acc (%) | slot F1-score (%) |
    |---|---|---|
    | Atten. enc-dec NN with aligned inputs (Liu and Lane, 2016) | 98.43 | 95.87 |
    | Atten.-BiRNN (Liu and Lane, 2016) | 98.21 | 95.98 |
    | Enc-dec focus (Zhu and Yu, 2017) | - | 95.79 |
    | Slot-Gated (Goo et al., 2018) | 94.1 | 95.2 |
    | Intent Gating & self-attention | 98.77 | 96.52 |
    | BLSTM-CRF + ELMo | 97.42 | 95.62 |
    | Joint BERT | 97.5 | 96.1 |
    | Joint BERT + CRF | 97.9 | 96.0 |
    | BLSTM (A. Pre-train word emb.) | 98.10 | 95.67 |
    | BLSTM-CRF (A. Pre-train word emb.) | 98.54 | 95.39 |
    | Enc-dec focus (A. Pre-train word emb.) | 98.43 | 95.78 |
    | BLSTM (B. +ELMo) | 98.66 | 95.52 |
    | BLSTM-CRF (B. +ELMo) | 98.32 | 95.62 |
    | Enc-dec focus (B. +ELMo) | 98.66 | 95.70 |
    | BLSTM (C. +BERT) | 99.10 | 95.94 |
  2. Results of SNIPS:

  • The cased BERT-base model gives better results than the uncased model.

    | models | intent Acc (%) | slot F1-score (%) |
    |---|---|---|
    | Slot-Gated (Goo et al., 2018) | 97.0 | 88.8 |
    | BLSTM-CRF + ELMo | 99.29 | 93.90 |
    | Joint BERT | 98.6 | 97.0 |
    | Joint BERT + CRF | 98.4 | 96.7 |
    | BLSTM (A. Pre-train word emb.) | 99.14 | 95.75 |
    | BLSTM-CRF (A. Pre-train word emb.) | 99.00 | 96.92 |
    | Enc-dec focus (A. Pre-train word emb.) | 98.71 | 96.22 |
    | BLSTM (B. +ELMo) | 98.71 | 96.32 |
    | BLSTM-CRF (B. +ELMo) | 98.57 | 96.61 |
    | Enc-dec focus (B. +ELMo) | 99.14 | 96.69 |
    | BLSTM (C. +BERT) | 98.86 | 96.92 |
    | BLSTM-CRF (C. +BERT) | 98.86 | 97.00 |
    | Enc-dec focus (C. +BERT) | 98.71 | 97.17 |

Reference

  • Su Zhu and Kai Yu, "Encoder-decoder with focus-mechanism for sequence labelling based spoken language understanding," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 5675-5679.
