Comments (5)
Modify create_tfrecords.py at the top so the filename and path declarations point to your text files. Make sure the line:

```python
files = glob.glob(os.path.join(base_dir, "*.txt"))
```

actually finds and indexes the source text files. You also need a copy of the existing model's encoder.

Set files_per to the number of text files to use for each chunk of tfrecords. I had already split mine into chunks of ~300 books each, separated by <|EndOfText|>. The .py doesn't add an end-of-text token itself, so if you didn't already put them in your txt files and want them, you'll need to modify the code further (see the sketch below).
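A minimal sketch of the kind of edits meant above, assuming the variable layout described in the comment (base_dir and files_per are the names the comment uses; the end-of-text helper is hypothetical, since the stock script does not add the token):

```python
import os
import glob

# Point this at the directory holding your source .txt files.
base_dir = "/path/to/your/txt/files"
files = glob.glob(os.path.join(base_dir, "*.txt"))
assert files, "glob found no .txt files -- check base_dir"

# Number of .txt files bundled into each .tfrecords chunk.
files_per = 300

# Hypothetical helper: append <|endoftext|> to each document before
# encoding, if your files don't already contain separators and you want them.
def read_with_eot(path):
    with open(path, encoding="utf-8") as f:
        return f.read() + "<|endoftext|>"
```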
I have been able to train on 4 GB of text on a Colab TPU over a number of days, out to 60k iterations. Results aren't as good as finetuning existing models (yet). The biggest model that still fit on Colab was:
"n_head": 17,
"lr": 0.00025,
"warmup_steps": 2000,
"beta1": 0.0,
"decay_exponent": 0.8,
"opt_name": "adafactor",
"decay_type": "pow",
"train_batch_size": 8,
"max_steps": 200000,
"predict_batch_size": 1,
"eval_batch_size": 8,
"iterations": 100,
"n_embd": 1020,
"n_ctx": 1024,
"n_layer": 34,
With such small batch sizes, I'm not sure it will ever work well, and without bfloat16 working at inference I can't fit bigger models on Colab. The author says he trained his models on a TPU pod (which, at 'evaluation' pricing, would cost tens of thousands of dollars).
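For scale, here is a rough parameter count for the config above, using the standard back-of-envelope approximation for a GPT-2-style decoder (this estimate is mine, not a figure from the thread, and the vocabulary size is assumed to match GPT-2's):

```python
# Each transformer layer has ~12 * n_embd^2 parameters (attention + MLP),
# plus the token-embedding matrix of vocab_size * n_embd.
n_layer, n_embd, vocab_size = 34, 1020, 50257  # vocab assumed = GPT-2's

blocks = 12 * n_layer * n_embd ** 2   # ~424M
embeddings = vocab_size * n_embd      # ~51M
print(f"~{(blocks + embeddings) / 1e6:.0f}M parameters")  # ~476M
```

So this config is roughly a ~480M-parameter model, between the 345M and 774M GPT-2 sizes.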
From the original GPT-2 paper, the authors claimed that variation in the training text was important for bigger models, so it might be that training a model from scratch on a domain-specific 4 GB corpus won't ever do as well as training on a general 40 GB corpus and then finetuning on the domain.
I have been able to get some really good results from the original 345M GPT-2 model by finetuning on domain-specific content; the finetuned models maintain context well through multiple paragraphs.
Thanks for the information. There's a lot to test, and as you say it's likely true that "... a domain-specific 4 GB corpus won't ever do as well as training on a general 40 GB corpus and then finetuning on the domain."
I'm having good results too with genre-specific models based on the OpenAI 345M. I can only hope they decide to release their larger models within 6 months.
Closing this now
I'm sorry the scripts are pretty poorly documented; I'm planning to make a better custom-dataset setup when I get the time. You basically just want to use create_tfrecords.py, as kizinfo said, to generate the .tfrecords files from your txt files.
You do NOT have to add <|endoftext|> manually! If you use my bpe_text function (in inputs.py) as the input function, it automatically samples "stitch" texts from your dataset, concatenates them with <|endoftext|> in between, and then samples n_ctx tokens from the final result. Make sure that "stitch" is set so that (your minimal text length * stitch) >= n_ctx; see the sketch below.
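This is not the repo's actual bpe_text implementation, just a minimal sketch of the sampling logic described above (the encoder object and its encode method are assumptions standing in for the real BPE encoder):

```python
import random

def stitch_sample(texts, enc, stitch, n_ctx):
    """Sample `stitch` texts, join them with <|endoftext|>, then take a
    random n_ctx-token window from the concatenation."""
    eot = enc.encode("<|endoftext|>")          # assumed encoder API
    tokens = []
    for text in random.sample(texts, stitch):  # sample `stitch` texts
        tokens += enc.encode(text) + eot       # concatenate with separator
    # This is why (minimal text length * stitch) >= n_ctx must hold:
    # otherwise the concatenation can be shorter than one context window.
    start = random.randint(0, len(tokens) - n_ctx)
    return tokens[start : start + n_ctx]
```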
I plan on releasing the 1.5B model; see my blog posts about it here and here.
Related Issues (20)
- when reading metadata of gs://openwebtext/stuff/encoder/encoder.json HOT 1
- Your 1.5B model HOT 2
- error when using create_tfrecords.py HOT 3
- Are there some research papers about text-to-set generation? HOT 1
- How can I create a smaller-sized file for inference of the 1.5B model HOT 1
- I figured out how to cram GPT-2 1.5B onto a single TPU core with Adam optimizer HOT 3
- Training on artificial language data (server logs, medical records, etc.) HOT 1
- Docker documentation for CUDA
- DOCKER: Web interface doesn't work
- about encoder.json HOT 4
- character-level HOT 1
- 117M/model.ckpt.index is corrupted?
- GPT vs BERT, under same computation and data resource, which one is better for downstream tasks like GLUE?
- Error on output HOT 1
- Retraining a new model, only gpu 0 can be used HOT 1
- Training 1.5B?
- Samples?
- Where is the length of the generated article set? Thank you!
- create_tfrecords.py: dealing with problems with your own dataset
- Question about the metric reported in the paper?