
lassl's Introduction

Easy Language Model Pretraining leveraging Huggingface's Transformers and Datasets




What is LASSL

LASSL is a LAnguage framework for Self-Supervised Learning. LASSL aims to provide an easy-to-use framework for pretraining language models using only Huggingface's Transformers and Datasets.

Environment setting

First of all, you must install a version of PyTorch that matches your computing environment. Next, you can install lassl and its required packages as follows.

pip3 install .

How to Use

  • Language model pretraining can be divided into three steps: 1. Train Tokenizer, 2. Serialize Corpus, 3. Pretrain Language Model.
  • After preparing a corpus in one of the supported corpus types, you can pretrain your own language model.

1. Train Tokenizer

python3 train_tokenizer.py \
    --corpora_dirpath $CORPORA_DIR \
    --corpus_type $CORPUS_TYPE \
    --sampling_ratio $SAMPLING_RATIO \
    --model_type $MODEL_TYPE \
    --vocab_size $VOCAB_SIZE \
    --min_frequency $MIN_FREQUENCY
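A rough intuition for the two vocabulary flags: min_frequency drops tokens seen too rarely, and vocab_size caps how many of the remaining tokens are kept. The toy whitespace-level builder below only illustrates that interaction; it is not lassl's or Huggingface tokenizers' actual algorithm (real subword trainers such as BPE merge characters instead of keeping whole words):

```python
from collections import Counter

def build_vocab(corpus_lines, vocab_size, min_frequency):
    """Toy whitespace vocab builder: drop tokens below min_frequency,
    then keep only the vocab_size most frequent survivors."""
    counts = Counter(tok for line in corpus_lines for tok in line.split())
    kept = [(tok, n) for tok, n in counts.most_common() if n >= min_frequency]
    return [tok for tok, _ in kept[:vocab_size]]

corpus = ["the cat sat", "the cat ran", "a dog ran"]
print(build_vocab(corpus, vocab_size=3, min_frequency=2))
# → ['the', 'cat', 'ran']  ("sat", "a", "dog" fall below min_frequency)
```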

2. Serialize Corpora

python3 serialize_corpora.py \
    --model_type $MODEL_TYPE \
    --tokenizer_dir $TOKENIZER_DIR \
    --corpora_dir $CORPORA_DIR \
    --corpus_type $CORPUS_TYPE \
    --max_length $MAX_LENGTH \
    --num_proc $NUM_PROC
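Serialization packs tokenized text into fixed-length training examples. As a sketch (a common pretraining layout, assumed here rather than taken from lassl's source), concatenating tokenized documents and cutting them into max_length blocks looks like:

```python
def chunk_examples(token_id_lists, max_length):
    """Concatenate tokenized documents and split into fixed-size blocks,
    dropping the trailing remainder (a common pretraining layout)."""
    flat = [tid for ids in token_id_lists for tid in ids]
    n_blocks = len(flat) // max_length
    return [flat[i * max_length:(i + 1) * max_length] for i in range(n_blocks)]

docs = [[1, 2, 3, 4, 5], [6, 7, 8], [9, 10]]
print(chunk_examples(docs, max_length=4))
# → [[1, 2, 3, 4], [5, 6, 7, 8]]  (the leftover [9, 10] is dropped)
```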

3. Pretrain Language Model

python3 pretrain_language_model.py --config_path $CONFIG_PATH
# When using TPU, use the command below. (Poetry environment does not provide PyTorch XLA as default.)
python3 xla_spawn.py --num_cores $NUM_CORES pretrain_language_model.py --config_path $CONFIG_PATH

Contributors

Boseop Kim · Minho Ryu · Inje Ryu · Jangwon Park · Hyoungseok Kim

Acknowledgements

LASSL is built with Cloud TPU support from the TensorFlow Research Cloud (TFRC) program.

lassl's People

Contributors

bzantium, daehankim, hyunwoongko, iron-ij, monologg, seopbo, wavy-jung


lassl's Issues

Support training BART

Is your feature request related to a problem? Please describe.
Add a BART processor and collator.

Describe the solution you'd like
Add the text_infilling method as a collator.
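For discussion, a toy sketch of text infilling: contiguous spans are replaced by a single mask token, so the model must also infer how many tokens are missing. This is only an illustration of the idea, not BART's exact noising scheme or lassl's implementation:

```python
import random

MASK = "<mask>"

def text_infill(tokens, mask_ratio=0.3, max_span=3, rng=None):
    """Toy BART-style text infilling: replace contiguous spans with a
    single <mask> token until roughly mask_ratio of tokens are covered."""
    rng = rng or random.Random(0)
    out, i, budget = [], 0, int(len(tokens) * mask_ratio)
    while i < len(tokens):
        if budget > 0 and rng.random() < mask_ratio:
            # Span length is capped by the remaining budget and input length.
            span = min(rng.randint(1, max_span), budget, len(tokens) - i)
            out.append(MASK)
            i += span
            budget -= span
        else:
            out.append(tokens[i])
            i += 1
    return out
```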

load_corpora 함수 개선

TODO

Improve the load_corpora function to additionally support the following format.

  • sentence per line
document 0
sentence 0,0
sentence 0,1
sentence 0,2
...
sentence 0,N


document 1
sentence 1,0
sentence 1,1
sentence 1,2
...
sentence 1,M
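A minimal loader for this sentence-per-line layout (one sentence per line, blank lines separating documents) could look like the following. This is only a sketch; lassl's actual load_corpora may differ:

```python
def load_sent_text(raw):
    """Split a sent_text corpus (one sentence per line, documents
    separated by blank lines) into a list of sentence lists."""
    docs, current = [], []
    for line in raw.splitlines():
        line = line.strip()
        if line:
            current.append(line)
        elif current:
            docs.append(current)
            current = []
    if current:  # flush the last document if the file lacks a trailing blank
        docs.append(current)
    return docs

raw = "sent 0,0\nsent 0,1\n\n\nsent 1,0\nsent 1,1\nsent 1,2\n"
print(load_sent_text(raw))
# → [['sent 0,0', 'sent 0,1'], ['sent 1,0', 'sent 1,1', 'sent 1,2']]
```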

Refactor load_corpora function

TODO

  • (docu_text, DocuSent)
  • (docu_json, DocuJson)
  • (sent_text, SentText)
  • (sent_json, SentJson)
  • text_type_per_line -> corpus_type
  • scripts -> loading

Fix bugs in GPT2 processor, collator

Describe the bug

  • GPT2Processor does not need special_tokens
  • DataCollatorForGpt2 inherits from DataCollatorForLanguageModeling, which requires a pad_token_id that the GPT-2 tokenizer does not have.
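For reference, a common workaround for GPT-2's missing pad token is to reuse eos_token_id as the pad id (with padded label positions set to -100 so the loss ignores them). The plain-Python collator below just illustrates the padding mechanics; it is not lassl's DataCollatorForGpt2:

```python
def pad_batch(sequences, pad_token_id):
    """Right-pad variable-length token-id lists to the batch max length,
    returning input_ids plus an attention mask that zeroes the padding."""
    max_len = max(len(s) for s in sequences)
    input_ids = [s + [pad_token_id] * (max_len - len(s)) for s in sequences]
    attention_mask = [[1] * len(s) + [0] * (max_len - len(s)) for s in sequences]
    return {"input_ids": input_ids, "attention_mask": attention_mask}

batch = pad_batch([[5, 6, 7], [8]], pad_token_id=0)
print(batch["input_ids"])       # → [[5, 6, 7], [8, 0, 0]]
print(batch["attention_mask"])  # → [[1, 1, 1], [1, 0, 0]]
```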

Sync dependencies

TODO

Resolve the version mismatch between poetry.lock, pyproject.toml, and requirements.txt.

Add keep_in_memory option in load_dataset

Is your feature request related to a problem? Please describe.

  • When training on a TPU VM, caching fills up the disk even though memory is sufficient.

Describe the solution you'd like

  • Resolve this by adding the keep_in_memory option at the load_dataset step.
  • Data that has finished serialization is stored on disk, so the option is unnecessary during training; add it only in the tokenizer and serialization steps.

Add UL2 Language Modeling

As mentioned in Slack as well, the objective function based on the Mixture of Denoisers introduced in the Universal Language Learning Paradigm (UL2) paper is reported to outperform the existing span corruption, MLM, and CLM objectives overall. I also happen to be planning to use it at work, so I would like to implement a collator and processor for it in lassl. What do you think?
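For discussion, a heavily simplified sketch of what a mixture-of-denoisers collator could do: sample one denoiser per example (regular or extreme span corruption, or a sequential prefix-LM split) and corrupt accordingly. The names, span lengths, and rates below are illustrative placeholders, not the paper's exact hyperparameters:

```python
import random

# Illustrative denoiser settings (placeholders, not UL2's exact values).
DENOISERS = [
    {"name": "R", "span": 3, "rate": 0.15},   # regular: short spans, low rate
    {"name": "X", "span": 12, "rate": 0.5},   # extreme: long spans, high rate
    {"name": "S", "span": None, "rate": None} # sequential: prefix-LM split
]

def corrupt(tokens, rng=None):
    """Pick one denoiser and apply its corruption to a token list."""
    rng = rng or random.Random(0)
    d = rng.choice(DENOISERS)
    if d["name"] == "S":  # keep a prefix, predict the suffix
        cut = rng.randint(1, len(tokens) - 1)
        return d["name"], tokens[:cut] + ["<extra_id_0>"]
    out, i, sentinel = [], 0, 0
    while i < len(tokens):
        # Mask a whole span with one sentinel, or copy a single token.
        if rng.random() < d["rate"] / d["span"]:
            out.append(f"<extra_id_{sentinel}>")
            sentinel += 1
            i += d["span"]
        else:
            out.append(tokens[i])
            i += 1
    return d["name"], out
```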

ko-roberta-small training

Environment

# /home/iron/mnt
$ git clone https://github.com/lassl/lassl.git


$ pip3 install -r requirements.txt

Add examples configs

TODO

Add examples configs (bert-small.yaml, roberta-small.yaml, gpt2-small.yaml, albert-small.yaml)

KoRobertaSmall training

TODO

Training tokenizer

poetry run python3 train_tokenizer.py --corpora_dir corpora \
--corpus_type sent_text \
--model_type roberta \
--vocab_size 51200 \
--min_frequency 2

Serializing corpora

poetry run python3 serialize_corpora.py --model_type roberta \
--tokenizer_dir tokenizers/roberta \
--corpora_dir corpora \
--corpus_type sent_text \
--max_length 512 \
--num_proc 96 \
--batch_size 1000 \
--writer_batch_size 1000

ref:

Ready to release v0.1.0

Summary

The overall structure is basically in place. Before releasing v0.1.0, the following items should be discussed:

  • There is a mismatch between the model_type values supported by serialize_corpora.py and train_tokenizer.py.
    • serialize_corpora.py: roberta, gpt2, albert
    • train_tokenizer.py: bert-uncased, bert-cased, gpt2, roberta, albert, electra
  • README.md

Refactor pretrain_language_model.py

TODO: Collator

  • As the number of supported models grows, collators will also have to be implemented continuously, so I suggest renaming each collator after its model.
