
Comments (5)

kizinfo commented on July 20, 2024

Modify create_tfrecords.py so the filename and path declarations at the top point to your text files. Make sure the line:
files = glob.glob(os.path.join(base_dir, "*.txt"))
properly finds and indexes the source text files.
You also need a copy of the existing model's encoder.
Then set files_per to the number of text files to use for each chunk of tfrecords. I already had my texts split into chunks of ~300 books each, separated by <|EndOfText|>. The script doesn't add an end-of-text token itself, so if you didn't already put the tokens in your txt files and want them, you'll need to modify the code further. A rough sketch of these edits follows.
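For concreteness, here is a minimal sketch of what those top-of-file declarations might look like. base_dir, files_per, and the glob line come from the comment above; the path and the chunk arithmetic are illustrative assumptions, not the script's actual contents:

import glob
import math
import os

base_dir = "/path/to/texts"   # directory holding the source .txt files (assumed path)
files_per = 300               # number of .txt files to pack into each tfrecords chunk

files = glob.glob(os.path.join(base_dir, "*.txt"))
n_chunks = math.ceil(len(files) / files_per)
print(f"Found {len(files)} text files -> {n_chunks} tfrecords chunks")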

I have been able to train 4 GB of text on a Colab TPU over a number of days to 60k iterations. Results aren't as good as finetuning existing models (yet). The biggest model that still fit on Colab was:

"n_head": 17,
"lr": 0.00025,
"warmup_steps": 2000,
"beta1": 0.0,
"decay_exponent": 0.8,
"opt_name": "adafactor",
"decay_type": "pow",
"train_batch_size": 8,
"max_steps": 200000,
"predict_batch_size": 1,
"eval_batch_size": 8,
"iterations": 100,
"n_embd": 1020,
"n_ctx": 1024,
 "n_layer": 34,

With such small batch sizes, I'm not sure it will ever work well, and without bfloat16 working for inference I can't fit bigger models on Colab. The author says he trained models on a TPU pod (which, at 'evaluation' pricing, would cost tens of thousands of dollars).

In the original GPT-2 paper, the authors claimed that variation in the training text was important for bigger models, so it may be that training a model from scratch on a domain-specific 4 GB corpus will never do as well as training on a general 40 GB corpus and then finetuning on the domain.

I have been able to get some really good results from the original 345M GPT-2 model by finetuning on domain-specific content; the results maintain context well through multiple paragraphs.


GenTxt commented on July 20, 2024

Thanks for the information. There's a lot to test, and as you say it's likely true that "... a domain-specific 4 GB corpus will never do as well as training on a general 40 GB corpus and then finetuning on the domain."

I'm having good results too with genre-specific models based on the OpenAI 345M. I can only hope they decide to release their larger models within 6 months.

Closing this now.


ConnorJL commented on July 20, 2024

I'm sorry the scripts are pretty poorly documented; I'm planning on making a better custom-dataset setup when I get the time. You basically just want to use create_tfrecords.py, as kizinfo said, to generate the .tfrecords files from your txt files.

You do NOT have to add <|endoftext|> manually! If you use my bpe_text function (in inputs.py) as input, it automatically samples "stitch" texts from your dataset, concatenates them with <|endoftext|> in between, and then samples n_ctx tokens from the result. Make sure "stitch" is set so that (your minimal text length * stitch) >= n_ctx. The sketch below illustrates the idea.
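To make the stitching behavior concrete, here is a minimal sketch of the idea (not the actual bpe_text implementation in inputs.py; token_docs, eot_id, and sample_window are names invented for illustration):

import random

def sample_window(token_docs, eot_id, stitch, n_ctx):
    # Pick `stitch` tokenized documents at random and join them,
    # inserting the end-of-text token id between documents.
    joined = []
    for doc in random.sample(token_docs, stitch):
        joined.extend(doc)
        joined.append(eot_id)
    # Take a random n_ctx-token window. If (min doc length * stitch) < n_ctx,
    # the window can come up short, hence the constraint above.
    start = random.randint(0, max(0, len(joined) - n_ctx))
    return joined[start:start + n_ctx]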

I plan on releasing the 1.5B model; see my blog posts about it here and here.


