Comments (3)
Hmm, it'd be helpful if you could tell me a bit more about what you were doing, what the files you were trying to convert look like, etc.
Off the top of my head, I'd say try changing the processes value at the top of the file to 1 and files_per to 10. Also double-check that your output dir is somewhere you can write to.
from gpt2.
Nope, still the same problem. I am trying to turn TV show scripts into tfrecords. The first time I tried it, it worked (on 2 files), but it never worked after that.
Also, I forgot to mention that I am doing this in Google Colab with the runtime set to Python 3 and TPU.
Here are my settings:
import os
import glob

base_dir = "/content/gdrive/My Drive/gpt2/text2" # Path to where your .txt files are located
files_per = 175000 # 175000 ~ 200-300MB
name = "tv-scripts" # Name of output files will be name_i.tfrecords where i is the number of the file
output_dir = "/content/gdrive/My Drive/gpt2/textout"
log_dir = "logs"
files = glob.glob(os.path.join(base_dir, "*.txt"))
processes = 64 # Number of encoding processes to run
encoder_path = "/content/gdrive/My Drive/gpt2/encoder" # Path to encoder files
minimum_size = 25
I had to change the 'files =' line because "**/*.txt" was not finding the files.
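The "**/*.txt" behavior is likely a glob quirk rather than a permissions issue: in Python's glob, "**" only matches directories recursively when recursive=True is passed; without it, "**" acts like a plain "*", so the pattern only matches .txt files one level below base_dir, not in base_dir itself. A minimal sketch (base_dir reused from the settings above):

```python
import glob
import os

base_dir = "/content/gdrive/My Drive/gpt2/text2"

# With recursive=True, "**" matches zero or more directory levels,
# so this finds .txt files in base_dir and all subdirectories.
files = glob.glob(os.path.join(base_dir, "**", "*.txt"), recursive=True)

# Without recursive=True, "**" behaves like a single "*", so this
# only matches .txt files exactly one directory below base_dir.
files_one_level_down = glob.glob(os.path.join(base_dir, "**", "*.txt"))
```

If all the scripts sit directly in base_dir, the plain "*.txt" pattern from the settings above is the simplest fix.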
directories look writable:
$ ls -lsa
total 28
4 drwx------ 2 root root 4096 Dec 8 06:02 encoder
4 drwx------ 2 root root 4096 Dec 8 06:13 GPT2
4 drwx------ 2 root root 4096 Dec 8 00:27 .ipynb_checkpoints
4 drwx------ 2 root root 4096 Dec 8 00:37 logs
4 drwx------ 2 root root 4096 Dec 8 00:27 text
4 drwx------ 2 root root 4096 Dec 8 00:28 textout
OK, I see the problem... It was failing because it couldn't overwrite the logs from the previous try. Deleting the logs made it work.
from gpt2.