sz128 / slot_filling_and_intent_detection_of_slu

slot filling, intent detection, joint training, ATIS & SNIPS datasets, the Facebook’s multilingual dataset, MIT corpus, E-commerce Shopping Assistant (ECSA) dataset, CoNLL2003 NER, ELMo, BERT, XLNet

License: Apache License 2.0

Python 84.36% Shell 15.64%
crf sequence-labeling encoder-decoder spoken-language bert-bilstm-crf xlnet atis-dataset snips-dataset slot-filling intent-detection

slot_filling_and_intent_detection_of_slu's Introduction

Slot filling and intent detection tasks of spoken language understanding

data annotation

| Section | Description |
| --- | --- |
| Setup | Required packages |
| Evaluation of intent detection for multiple intents | How to report performance of intent detection on the ATIS dataset |
| Tutorials A: with pretrained word embeddings | Slot filling and intent detection with pretrained word embeddings |
| Tutorials B: with ELMo | Slot filling and intent detection with ELMo |
| Tutorials C: with BERT | Slot filling and intent detection with BERT |
| Tutorials D: with XLNET | Slot filling and intent detection with XLNET |
| Results | Results of different methods on certain datasets |
| Inference Mode | Inference Mode |
| Reference | How to cite? |

Setup

About the evaluations of intent detection on ATIS and SNIPS datasets

As can be seen from the datasets, an ATIS utterance may have multiple intents, while a SNIPS utterance has exactly one. For example, "show me all flights and fares from denver to san francisco" <=> "atis_flight && atis_airfare". Therefore, a widely used trick is applied in the training and evaluation stages of intent detection on the ATIS dataset.

NOTE: Following the paper "What is left to be understood in ATIS?", almost all work on ATIS takes the first intent as the label to train a "softmax" intent classifier. In the evaluation stage, a prediction is viewed as correct if the predicted intent is one of the multiple gold intents.
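This trick can be sketched as follows (the `&&` separator follows the example above; the function names are illustrative, not from this repo):

```python
def first_intent(gold_label):
    """Training label for the softmax classifier: the first intent only."""
    return gold_label.split("&&")[0].strip()

def intent_is_correct(predicted, gold_label):
    """Evaluation: correct if the prediction is any one of the gold intents."""
    return predicted in [i.strip() for i in gold_label.split("&&")]
```

For the utterance above, `first_intent("atis_flight && atis_airfare")` yields the training label `"atis_flight"`, while at evaluation time predicting either `"atis_flight"` or `"atis_airfare"` counts as correct.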

TODO:

  • Add char-embeddings

Tutorials A: Slot filling and intent detection with pretrained word embeddings

  1. Pretrained word embeddings are borrowed from the CNN-BLSTM language models of ELMo, where word embeddings are modelled by char-CNNs. We extract pretrained word embeddings for the ATIS, SNIPS, Facebook multilingual and MIT_Restaurant_Movie_corpus (w/o intent) datasets by:
  python3 scripts/get_ELMo_word_embedding_for_a_dataset.py \
          --in_files data/atis-2/{train,valid,test} \
          --output_word2vec local/word_embeddings/elmo_1024_cased_for_atis.txt
  python3 scripts/get_ELMo_word_embedding_for_a_dataset.py \
          --in_files data/snips/{train,valid,test} \
          --output_word2vec local/word_embeddings/elmo_1024_cased_for_snips.txt
  python3 scripts/get_ELMo_word_embedding_for_a_dataset.py \
          --in_files data/MIT_corpus/{movie_eng,movie_trivia10k13,restaurant}/{train,valid,test} \
          --output_word2vec local/word_embeddings/elmo_1024_cased_for_MIT_corpus.txt

Alternatively, use GloVe and KazumaChar embeddings, which are also exploited in the TRADE dialogue state tracker:

  python3 scripts/get_Glove-KazumaChar_word_embedding_for_a_dataset.py \
          --in_files data/atis-2/{train,valid,test} \
          --output_word2vec local/word_embeddings/glove-kazumachar_400_cased_for_atis.txt
  python3 scripts/get_Glove-KazumaChar_word_embedding_for_a_dataset.py \
          --in_files data/snips/{train,valid,test} \
          --output_word2vec local/word_embeddings/glove-kazumachar_400_cased_for_snips.txt
  python3 scripts/get_Glove-KazumaChar_word_embedding_for_a_dataset.py \
          --in_files data/multilingual_task_oriented_data/en/{train,valid,test} \
          --output_word2vec local/word_embeddings/glove-kazumachar_400_cased_for_multilingual_en.txt
  python3 scripts/get_Glove-KazumaChar_word_embedding_for_a_dataset.py \
          --in_files data/MIT_corpus/{movie_eng,movie_trivia10k13,restaurant}/{train,valid,test} \
          --output_word2vec local/word_embeddings/glove-kazumachar_400_cased_for_MIT_corpus.txt

Alternatively, use word embeddings from the input layer of a pretrained BERT model:

  python3 scripts/get_BERT_word_embedding_for_a_dataset.py \
          --in_files data/multilingual_task_oriented_data/es/{train,valid,test} \
          --output_word2vec local/word_embeddings/bert_768_cased_for_multilingual_es.txt \
          --pretrained_tf_type bert --pretrained_tf_name bert-base-multilingual-cased
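Judging from the `--output_word2vec` flag, the extracted files use the word2vec text format: one token per line, followed by its whitespace-separated vector components, with an optional `<vocab_size> <dim>` header line. A minimal, hypothetical loader sketch (not part of the repo's code):

```python
def load_word2vec_txt(path):
    """Load a word2vec-format text file into a {token: [float, ...]} dict.

    Each line holds a token followed by its vector components; an
    optional first line may hold "<vocab_size> <dim>" and is skipped.
    """
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            parts = line.rstrip().split()
            if i == 0 and len(parts) == 2:
                continue  # optional header line
            if not parts:
                continue  # skip blank lines
            embeddings[parts[0]] = [float(x) for x in parts[1:]]
    return embeddings
```

The resulting dict can then be used to build an embedding matrix over a dataset's vocabulary.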
  2. Run scripts of training and evaluation at each epoch.
  • BLSTM model (slot_tagger)
  • BLSTM-CRF model (slot_tagger_with_crf)
  • Enc-dec focus model (BLSTM-LSTM) (slot_tagger_with_focus), the same as Encoder-Decoder NN (with aligned inputs) (Liu and Lane, 2016)
slot_intent_model=slot_tagger # slot_tagger, slot_tagger_with_crf, slot_tagger_with_focus
bash run/atis_with_pretrained_word_embeddings.sh --task_slot_filling ${slot_intent_model}
bash run/snips_with_pretrained_word_embeddings.sh --task_slot_filling ${slot_intent_model}
bash run/MIT_corpus_with_pretrained_word_embeddings.sh --task_slot_filling ${slot_intent_model} --dataroot data/MIT_corpus/movie_eng --dataset mit_movie_eng
bash run/multilingual_en_with_pretrained_word_embeddings.sh --task_slot_filling ${slot_intent_model}
bash run/multilingual_es_with_pretrained_word_embeddings.sh --task_slot_filling ${slot_intent_model}
bash run/multilingual_th_with_pretrained_word_embeddings.sh --task_slot_filling ${slot_intent_model}

Tutorials B: Slot filling and intent detection with ELMo

  1. Run scripts of training and evaluation at each epoch.
  • ELMo + BLSTM/BLSTM-CRF/Enc-dec focus model (BLSTM-LSTM) models:
slot_intent_model=slot_tagger # slot_tagger, slot_tagger_with_crf, slot_tagger_with_focus
bash run/atis_with_elmo.sh --task_slot_filling ${slot_intent_model}
bash run/snips_with_elmo.sh --task_slot_filling  ${slot_intent_model}
bash run/MIT_corpus_with_elmo.sh --task_slot_filling  ${slot_intent_model} --dataroot data/MIT_corpus/movie_eng --dataset mit_movie_eng

Tutorials C: Slot filling and intent detection with BERT

  1. Model architectures:

(figure: bert_SLU_simple)

  • Our BERT + BLSTM (BLSTM-CRF/Enc-dec focus):

(figure: bert_SLU_complex)

  2. Run scripts of training and evaluation at each epoch.
  • Pure BERT (without or with crf) model:
slot_model=NN # NN, NN_crf
intent_input=CLS # none, CLS, max, CLS_max
bash run/atis_with_pure_bert.sh --task_slot_filling ${slot_model} --task_intent_detection ${intent_input}
bash run/snips_with_pure_bert.sh --task_slot_filling ${slot_model} --task_intent_detection ${intent_input}
bash run/MIT_corpus_with_pure_bert.sh --task_slot_filling ${slot_model} --task_intent_detection ${intent_input} --dataroot data/MIT_corpus/movie_eng --dataset mit_movie_eng
bash run/multilingual_en_with_pure_bert.sh --task_slot_filling ${slot_model} --task_intent_detection ${intent_input}
bash run/multilingual_es_with_pure_bert.sh --task_slot_filling ${slot_model} --task_intent_detection ${intent_input}
bash run/multilingual_th_with_pure_bert.sh --task_slot_filling ${slot_model} --task_intent_detection ${intent_input}
bash run/ECSA_with_pure_bert.sh --task_slot_filling ${slot_model} --task_intent_detection ${intent_input}
bash run/CoNLL2003_NER_with_pure_bert.sh --task_slot_filling ${slot_model} --task_intent_detection ${intent_input}
  • BERT + BLSTM/BLSTM-CRF/Enc-dec focus model (BLSTM-LSTM) models:
slot_intent_model=slot_tagger # slot_tagger, slot_tagger_with_crf, slot_tagger_with_focus
bash run/atis_with_bert.sh --task_slot_filling ${slot_intent_model}
bash run/snips_with_bert.sh --task_slot_filling ${slot_intent_model}
bash run/MIT_corpus_with_bert.sh --task_slot_filling ${slot_intent_model} --dataroot data/MIT_corpus/movie_eng --dataset mit_movie_eng
  3. For the optimizer, you can try BertAdam or AdamW. In my experiments, I chose BertAdam.
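A common AdamW setup for BERT fine-tuning disables weight decay for biases and LayerNorm weights. A framework-agnostic sketch of that parameter grouping (the function name is illustrative, not from this repo):

```python
def split_decay_groups(named_params, weight_decay=0.01,
                       no_decay=("bias", "LayerNorm.weight")):
    """Split (name, param) pairs into two optimizer param groups:
    weight decay everywhere except biases and LayerNorm weights."""
    decay, skip = [], []
    for name, param in named_params:
        if any(nd in name for nd in no_decay):
            skip.append(param)
        else:
            decay.append(param)
    return [
        {"params": decay, "weight_decay": weight_decay},
        {"params": skip, "weight_decay": 0.0},
    ]
```

The returned groups can be fed directly to an optimizer, e.g. `torch.optim.AdamW(split_decay_groups(model.named_parameters()), lr=3e-5)`.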

Tutorials D: Slot filling and intent detection with XLNET

  1. Run scripts of training and evaluation at each epoch.
  • Pure XLNET (without or with crf) model:
slot_model=NN # NN, NN_crf
intent_input=CLS # none, CLS, max, CLS_max
bash run/atis_with_pure_xlnet.sh --task_slot_filling ${slot_model} --task_intent_detection ${intent_input}
bash run/snips_with_pure_xlnet.sh --task_slot_filling ${slot_model} --task_intent_detection ${intent_input}
bash run/MIT_corpus_with_pure_xlnet.sh --task_slot_filling ${slot_model} --task_intent_detection ${intent_input} --dataroot data/MIT_corpus/movie_eng --dataset mit_movie_eng
  • XLNET + BLSTM/BLSTM-CRF/Enc-dec focus model (BLSTM-LSTM) models:
slot_intent_model=slot_tagger # slot_tagger, slot_tagger_with_crf, slot_tagger_with_focus
bash run/atis_with_xlnet.sh --task_slot_filling ${slot_intent_model}
bash run/snips_with_xlnet.sh --task_slot_filling ${slot_intent_model}
bash run/MIT_corpus_with_xlnet.sh --task_slot_filling ${slot_intent_model} --dataroot data/MIT_corpus/movie_eng --dataset mit_movie_eng
  2. For the optimizer, you can try BertAdam or AdamW.

Results:

  • For "NLU + BERT/XLNET" models, hyper-parameters are not tuned carefully.
  1. Results of ATIS:

| models | intent Acc (%) | slot F1-score (%) |
| --- | --- | --- |
| Atten. enc-dec NN with aligned inputs (Liu and Lane, 2016) | 98.43 | 95.87 |
| Atten.-BiRNN (Liu and Lane, 2016) | 98.21 | 95.98 |
| Enc-dec focus (Zhu and Yu, 2017) | - | 95.79 |
| Slot-Gated (Goo et al., 2018) | 94.1 | 95.2 |
| Intent Gating & self-attention | 98.77 | 96.52 |
| BLSTM-CRF + ELMo | 97.42 | 95.62 |
| Joint BERT | 97.5 | 96.1 |
| Joint BERT + CRF | 97.9 | 96.0 |
| BLSTM (A. Pre-train word emb. of ELMo) | 98.10 | 95.67 |
| BLSTM-CRF (A. Pre-train word emb. of ELMo) | 98.54 | 95.39 |
| Enc-dec focus (A. Pre-train word emb. of ELMo) | 98.43 | 95.78 |
| BLSTM (A. Pre-train word emb. of Glove & KazumaChar) | 98.66 | 95.55 |
| BLSTM-CRF (A. Pre-train word emb. of Glove & KazumaChar) | 98.21 | 95.74 |
| Enc-dec focus (A. Pre-train word emb. of Glove & KazumaChar) | 98.66 | 95.86 |
| BLSTM (B. +ELMo) | 98.66 | 95.52 |
| BLSTM-CRF (B. +ELMo) | 98.32 | 95.62 |
| Enc-dec focus (B. +ELMo) | 98.66 | 95.70 |
| BLSTM (C. +BERT) | 99.10 | 95.94 |
| BLSTM (D. +XLNET) | 98.77 | 96.08 |
  2. Results of SNIPS:

  • The cased BERT-base model gives better results than the uncased model.

| models | intent Acc (%) | slot F1-score (%) |
| --- | --- | --- |
| Slot-Gated (Goo et al., 2018) | 97.0 | 88.8 |
| BLSTM-CRF + ELMo | 99.29 | 93.90 |
| Joint BERT | 98.6 | 97.0 |
| Joint BERT + CRF | 98.4 | 96.7 |
| BLSTM (A. Pre-train word emb. of ELMo) | 99.14 | 95.75 |
| BLSTM-CRF (A. Pre-train word emb. of ELMo) | 99.00 | 96.92 |
| Enc-dec focus (A. Pre-train word emb. of ELMo) | 98.71 | 96.22 |
| BLSTM (A. Pre-train word emb. of Glove & KazumaChar) | 99.14 | 96.24 |
| BLSTM-CRF (A. Pre-train word emb. of Glove & KazumaChar) | 98.86 | 96.31 |
| Enc-dec focus (A. Pre-train word emb. of Glove & KazumaChar) | 98.43 | 96.06 |
| BLSTM (B. +ELMo) | 98.71 | 96.32 |
| BLSTM-CRF (B. +ELMo) | 98.57 | 96.61 |
| Enc-dec focus (B. +ELMo) | 99.14 | 96.69 |
| BLSTM (C. +BERT) | 98.86 | 96.92 |
| BLSTM-CRF (C. +BERT) | 98.86 | 97.00 |
| Enc-dec focus (C. +BERT) | 98.71 | 97.17 |
| BLSTM (D. +XLNET) | 98.86 | 97.05 |
  3. Results of the Facebook multilingual dataset (note: the cased BERT-base model gives better results than the uncased model):

  • English (en):

| models | intent Acc (%) | slot F1-score (%) |
| --- | --- | --- |
| Cross-Lingual Transfer (only target) | 99.11 | 94.81 |
| BLSTM (no Pre-train word emb.) | 99.19 | 95.37 |
| Enc-dec focus (A. Pre-train word emb. of Glove & KazumaChar) | 99.28 | 96.04 |
| Pure BERT | 99.34 | 96.23 |

  • Spanish (es):

| models | intent Acc (%) | slot F1-score (%) |
| --- | --- | --- |
| Cross-Lingual Transfer (only target) | 97.26 | 80.95 |
| Cross-Lingual Transfer (Cross-lingual + ELMo) | 97.51 | 83.38 |
| BLSTM (no Pre-train word emb.) (only target) | 97.63 | 86.05 |
| Enc-dec focus (A. Pre-train word emb. of BERT input layer) (only target) | 97.67 | 88.67 |
| Pure BERT (only target) | 98.85 | 89.26 |

  • Thai (th) (it seems "Enc-dec focus" gives better results than BLSTM):

| models | intent Acc (%) | slot F1-score (%) |
| --- | --- | --- |
| Cross-Lingual Transfer (only target) | 95.13 | 87.26 |
| Cross-Lingual Transfer (Cross-lingual + Mult. CoVe + auto) | 96.87 | 91.51 |
| BLSTM (no Pre-train word emb.) (only target) | 96.99 | 89.17 |
| Enc-dec focus (no Pre-train word emb.) (only target) | 96.75 | 91.31 |
| Enc-dec focus (A. Pre-train word emb. of BERT input layer) (only target) | 96.87 | 91.05 |
| Pure BERT (only target) | 97.34 | 92.51 |
  4. Slot F1-scores of MIT_Restaurant_Movie_corpus (w/o intent):

| models | Restaurant | Movie_eng | Movie_trivia10k13 |
| --- | --- | --- | --- |
| Dom-Gen-Adv | 74.25 | 83.03 | 63.51 |
| Joint Dom Spec & Gen-Adv | 74.47 | 85.33 | 65.33 |
| Data Augmentation via Joint Variational Generation | 73.0 | 82.9 | 65.7 |
| BLSTM (A. Pre-train word emb. of ELMo) | 77.54 | 85.37 | 67.97 |
| BLSTM-CRF (A. Pre-train word emb. of ELMo) | 79.77 | 87.36 | 71.83 |
| Enc-dec focus (A. Pre-train word emb. of ELMo) | 78.77 | 86.68 | 70.85 |
| BLSTM (A. Pre-train word emb. of Glove & KazumaChar) | 78.02 | 86.33 | 68.55 |
| BLSTM-CRF (A. Pre-train word emb. of Glove & KazumaChar) | 79.84 | 87.61 | 71.90 |
| Enc-dec focus (A. Pre-train word emb. of Glove & KazumaChar) | 79.98 | 86.82 | 71.10 |
  5. Slot F1-scores of the E-commerce Shopping Assistant (ECSA) dataset from Alibaba (w/o intent, in Chinese):

| models | slot F1-score (%) |
| --- | --- |
| Basic BiLSTM-CRF | 43.02 |
| Pure BERT | 46.96 |
| Pure BERT-CRF | 47.75 |
  6. Entity F1-scores of CoNLL-2003 NER (w/o intent):

| models | F1-score (%) |
| --- | --- |
| Pure BERT | 91.36 |
| Pure BERT-CRF | 91.55 |
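The entity F1-scores above are span-level, conlleval-style: an entity counts as correct only if both its label and its exact boundaries match. A minimal sketch of this metric for a single BIO-tagged sequence (illustrative; the repo's own evaluation scripts may differ in detail):

```python
def bio_spans(tags):
    """Extract (label, start, end) spans from a BIO tag sequence."""
    spans, label, start = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if label is not None:        # close any open span
                spans.append((label, start, i))
            label, start = tag[2:], i
        elif tag.startswith("I-") and label == tag[2:]:
            continue                     # span continues
        else:                            # "O" or label mismatch ends the span
            if label is not None:
                spans.append((label, start, i))
            label, start = None, None
            if tag.startswith("I-"):     # ill-formed I- starts a new span
                label, start = tag[2:], i
    if label is not None:
        spans.append((label, start, len(tags)))
    return spans

def entity_f1(gold_tags, pred_tags):
    """Span-level F1: a span is a true positive only on exact match."""
    gold, pred = set(bio_spans(gold_tags)), set(bio_spans(pred_tags))
    tp = len(gold & pred)
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

For example, against gold `["B-LOC", "I-LOC", "O", "B-PER"]`, a prediction that recovers only the LOC span scores precision 1.0, recall 0.5, and F1 2/3.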

Inference Mode

An example here:

bash run/atis_with_pretrained_word_embeddings_for_inference_mode__an_example.sh

Reference

  • Su Zhu and Kai Yu, "Encoder-decoder with focus-mechanism for sequence labelling based spoken language understanding," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 5675-5679.

slot_filling_and_intent_detection_of_slu's People

Contributors: sz128, vzxxbacq

slot_filling_and_intent_detection_of_slu's Issues

Inference Mode

Hello. After training the model, how can I deploy it and use it in inference mode?

The difference between Tutorial A ("use word embeddings in the pretrained BERT model") and Tutorial C ("BERT + BLSTM/BLSTM-CRF/Enc-dec focus models")

Hello, your work is excellent. However, due to my limited understanding, I have some questions while reading the code and would like to ask for your help. BERT is used in Tutorial A to obtain word embeddings, while the BERT + BLSTM model in Tutorial C also uses BERT to obtain word embeddings. What is the difference between the two? And if I want to use BERT to get word embeddings and use a BLSTM as the basic model, should I follow Tutorial A or Tutorial C? I would be very happy if you could reply.

Reproduced results of the Joint BERT model, and how the slot filling F1-score is computed

Hello, I would like to ask whether the Results section reports reproduced results. I noticed that the numbers in the tables seem identical to those in the original papers, so I am a bit confused. Specifically, I am interested in the Joint BERT model: in your implementation, how is the slot F1-score computed? (I am not very familiar with PyTorch and have only just started reading your code; I hope you can answer.)

Error "Connection broken" when running python3 scripts/get_ELMo_word_embedding_for_a_dataset.py

I run the command:

python3 scripts/get_ELMo_word_embedding_for_a_dataset.py \
        --in_files data/atis-2/{train,valid,test} \
        --output_word2vec local/word_embeddings/elmo_1024_cased_for_atis.txt

2%|████▍ | 8773632/374434792 [01:56<1:33:25, 65227.37B/s]

Traceback (most recent call last):
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/urllib3/contrib/pyopenssl.py", line 294, in recv_into
return self.connection.recv_into(*args, **kwargs)
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/OpenSSL/SSL.py", line 1822, in recv_into
self._raise_ssl_error(self._ssl, result)
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/OpenSSL/SSL.py", line 1639, in _raise_ssl_error
raise SysCallError(errno, errorcode.get(errno))
OpenSSL.SSL.SysCallError: (104, 'ECONNRESET')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/urllib3/response.py", line 360, in _error_catcher
yield
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/urllib3/response.py", line 442, in read
data = self._fp.read(amt)
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/http/client.py", line 449, in read
n = self.readinto(b)
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/http/client.py", line 493, in readinto
n = self.fp.readinto(b)
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/socket.py", line 586, in readinto
return self._sock.recv_into(b)
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/urllib3/contrib/pyopenssl.py", line 299, in recv_into
raise SocketError(str(e))
OSError: (104, 'ECONNRESET')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/requests/models.py", line 750, in generate
for chunk in self.raw.stream(chunk_size, decode_content=True):
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/urllib3/response.py", line 494, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/urllib3/response.py", line 459, in read
raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/urllib3/response.py", line 378, in _error_catcher
raise ProtocolError('Connection broken: %r' % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: OSError("(104, 'ECONNRESET')",)', OSError("(104, 'ECONNRESET')",))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "scripts/get_ELMo_word_embedding_for_a_dataset.py", line 97, in <module>
to_get_elmo_embeddings = elmo_embeddings(options_file, weight_file)
File "scripts/get_ELMo_word_embedding_for_a_dataset.py", line 22, in __init__
vocab_to_cache=None)
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/allennlp/modules/elmo.py", line 524, in __init__
self._token_embedder = _ElmoCharacterEncoder(options_file, weight_file, requires_grad=requires_grad)
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/allennlp/modules/elmo.py", line 310, in __init__
self._load_weights()
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/allennlp/modules/elmo.py", line 398, in _load_weights
self._load_char_embedding()
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/allennlp/modules/elmo.py", line 404, in _load_char_embedding
with h5py.File(cached_path(self._weight_file), 'r') as fin:
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/allennlp/common/file_utils.py", line 98, in cached_path
return get_from_cache(url_or_filename, cache_dir)
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/allennlp/common/file_utils.py", line 217, in get_from_cache
http_get(url, temp_file)
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/allennlp/common/file_utils.py", line 174, in http_get
for chunk in req.iter_content(chunk_size=1024):
File "/opt/anaconda2/envs/tensorflow36_wb/lib/python3.6/site-packages/requests/models.py", line 753, in generate
raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: ('Connection broken: OSError("(104, 'ECONNRESET')",)', OSError("(104, 'ECONNRESET')",))
