Comments (4)
This repo is pretty sparse and I don't currently have plans to work on it more, so there is no fancy support for custom encoders. If your encoder.json is in the same format as the one OpenAI uses, you can simply drop it in and train your model from scratch with it. You will have to encode your dataset with the new encoder.json, of course. You might also have to change the model's vocabulary size if yours differs from the default. Unfortunately, I don't know whether SentencePiece uses the same format as OpenAI. I think its BPE encoding setting might, but I'm not sure.
To explain roughly what's going on: the encoder.json assigns a unique number to each piece of input text, basically a dictionary mapping symbols/words to numbers. The model is trained on text encoded into those numbers, and it outputs such numbers again, which you translate back into text. The model's vocabulary size parameter is just the number of word-to-number entries in your encoder.json. You will have to retrain the model from scratch if you switch to a different encoder.json.
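For illustration, here is a minimal sketch (assuming the file is named encoder.json, as in OpenAI's release) of reading that dictionary and using it as the lookup table described above. Note that real GPT-2 encoding first splits text into BPE tokens before the lookup; this toy round trip uses single characters, which are always present in a byte-level BPE vocabulary.

```python
import json

# encoder.json is a flat JSON object mapping token strings to integer ids.
with open("encoder.json") as f:
    encoder = json.load(f)                    # token -> id
decoder = {i: t for t, i in encoder.items()}  # id -> token

# The model's vocabulary size parameter must equal the number of entries.
n_vocab = len(encoder)
print("model vocab size should be", n_vocab)

# Toy round trip; real encoding applies BPE merges before this lookup.
ids = [encoder[c] for c in "hi"]              # text -> numbers the model sees
text = "".join(decoder[i] for i in ids)       # numbers -> text again
```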
@fnyhy did you manage to build an encoder.json for your own dataset? If so, how? :) Thanks a lot in advance!
@mananeau No, I failed again. I generated the encoder.json and vocab file from my dataset, but the model threw an error when it used them in the encode/decode step :(
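One approach that might work here: train a byte-level BPE on your corpus with the Hugging Face tokenizers library, which writes a vocab.json / merges.txt pair in the same shape as OpenAI's encoder.json / vocab.bpe (you would rename the files). A minimal sketch, with my_corpus.txt as a hypothetical path; drop-in compatibility with this repo's encode/decode step isn't guaranteed.

```python
from tokenizers import ByteLevelBPETokenizer

# Byte-level BPE is the same tokenizer family GPT-2 uses.
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["my_corpus.txt"],           # hypothetical path to your dataset
    vocab_size=50257,                  # must match the model's vocab size
    special_tokens=["<|endoftext|>"],
)

# Writes vocab.json (token -> id map, shaped like encoder.json)
# and merges.txt (BPE merge rules, shaped like vocab.bpe).
tokenizer.save_model(".")
```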
@fnyhy thanks for your reply, and sorry to hear it didn't work. Did you have a look at this repo? It might be another way to train GPT-2 from scratch on your own data. For the vocab, they seem to use one similar to BERT's (WordPiece).
Related Issues (20)
- when reading metadata of gs://openwebtext/stuff/encoder/encoder.json HOT 1
- Your 1.5B model HOT 2
- error when using create_tfrecords.py HOT 3
- Are there some research papers about text-to-set generation? HOT 1
- How can i create smaller sized file for inference of 1.5B model HOT 1
- I figured out how to cram GPT-2 1.5B onto a single TPU core with Adam optimizer HOT 3
- Training on artificial language data (server logs, medical records, etc.) HOT 1
- Docker documentation for CUDA
- DOCKER: Web interface doesn't work
- character-level HOT 1
- 117M/model.ckpt.index is corrupted?
- GPT vs BERT, under same computation and data resource, which one is better for downstream tasks like GLUE?
- Error on output HOT 1
- Retraining a new model, only gpu 0 can be used HOT 1
- Training 1.5B?
- Samples?
- where is the length of the forecast article set? Thank you!
- create_tfrecords.py: dealing with problems with your own data set
- Question about the metric reported in the paper?