eleutherai / pythia
The hub for EleutherAI's work on interpretability and learning dynamics
License: Apache License 2.0
Pretty straightforward: we need to implement the following in the eval harness.
For a discussion of "recognizing bias" vs. "reproducing bias", see here. The primary goal is to examine how the development of a model's ability to recognize bias correlates with the development of its tendency to produce biased content.
It would also be interesting to look at correlations across categories of bias, e.g., does the model learn to reproduce and/or identify all types of bias at an equal rate? And if not, can we identify specific subsets of the Pile that are “biased in how they are biased” so to speak.
Hi there, I've only used Pythia with KoboldAI, particularly 13B and 13B-deduped (or perhaps 12B after the rename), and I'm having generation issues. Since I already have a fairly substantial thread about this on KoboldAI, I'll just link it here: ebolam/KoboldAI#312. I'm also attaching a picture of one of the issues, specifically a spacing issue.
Hi, thanks for the amazing work!
I tried to use the Pythia model on a simple few-shot task. I gave it a few examples of sentiment analysis tasks (as shown in this GPT-J example, which solves the task successfully: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GPT-J-6B/Inference_with_GPT_J_6B.ipynb), and it seems like it did not even understand the task (it generates unrelated tokens).
Is there a reason for Pythia's failure on this simple task, compared to GPT-J's success (even though GPT-J has only about twice the number of parameters)?
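For reference, here is a minimal sketch of how a few-shot sentiment prompt is typically constructed for base LMs like Pythia or GPT-J. The labels, review texts, and the `build_few_shot_prompt` helper are all made up for illustration; only the general prompt-completion pattern is assumed.

```python
def build_few_shot_prompt(examples, query):
    """Format (text, label) pairs into a few-shot completion prompt."""
    blocks = []
    for text, label in examples:
        blocks.append(f"Review: {text}\nSentiment: {label}")
    # The model is expected to complete the final "Sentiment:" line.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

# Hypothetical examples, for illustration only.
examples = [
    ("I loved this movie, an instant classic.", "positive"),
    ("Terrible pacing and a flat ending.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A delightful surprise from start to finish.")
print(prompt)
```

Note that a base model's few-shot ability depends heavily on scale and training data, so smaller Pythia checkpoints may simply fail to pick up the pattern even when the prompt is well formed.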
Hi, thanks for the awesome work.
I notice there are both Pythia models and GPT-NeoX models on EleutherAI's HF hub, and both seem to be very recent. I wonder what the main differences between them are, and whether one of them is preferred.
Thank you.
Is there going to be support for MLMs (not just CLMs)?
Thanks for the amazing work!
In the readme, the table reports some models as being trained with a batch size of 4M, but later on you say
All models were trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens
which of these is correct?
(Also, you say that checkpoints are created every 2B tokens - if batch sizes differ between models, does that mean that some models have only 500 training steps between checkpoints?)
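The arithmetic behind that question can be made explicit. Taking the readme's figures at face value (2B tokens per checkpoint interval, and batch sizes of 2,097,152 or 4,194,304 tokens), the number of optimizer steps between checkpoints follows directly:

```python
# Tokens per checkpoint interval, per the readme's "checkpoints every 2B tokens".
CHECKPOINT_TOKENS = 2_000_000_000

steps_at_2m = CHECKPOINT_TOKENS // 2_097_152   # batch size 2,097,152 tokens
steps_at_4m = CHECKPOINT_TOKENS // 4_194_304   # batch size "4M" (4,194,304) tokens

print(steps_at_2m)  # 953 steps between checkpoints at a 2M-token batch
print(steps_at_4m)  # 476 steps between checkpoints at a 4M-token batch

# Total training tokens implied by 143,000 steps at 2,097,152 tokens/step:
total_tokens = 143_000 * 2_097_152
print(total_tokens)  # 299,892,736,000, i.e. roughly 300B tokens
```

So if some models really did use a 4M-token batch, they would indeed have only about 500 steps between checkpoints, which is presumably what the question is getting at.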
In the readme it says
To download and use the deduplicated Pile training data, run:

```
git lfs clone https://huggingface.co/datasets/EleutherAI/pythia_pile_idxmaps
python utils/unshard_memmap.py --input_file ./pythia_pile_idxmaps/pile_0.87_deduped_text_document-00000-of-00082.bin --num_shards 83 --output_dir ./pythia_pile_idxmaps/
```

But it should actually point to EleutherAI/pythia_deduped_pile_idxmaps.
Todo items include:
Can you please help me with some resources related to fine tuning the model on my custom text corpus?
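Since the checkpoints load as standard transformers models, a plain causal-LM training loop is enough for fine-tuning on a custom corpus. Below is a minimal sketch; to keep it runnable without downloads it uses a tiny, randomly initialized GPT-NeoX config and random token ids, both of which are stand-ins. For a real run you would replace the config with `GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-70m")` and the dummy batch with your tokenized corpus.

```python
import torch
from transformers import GPTNeoXConfig, GPTNeoXForCausalLM

# Tiny random config as a stand-in; swap in from_pretrained(...) for a real run.
config = GPTNeoXConfig(
    vocab_size=128, hidden_size=64, num_hidden_layers=2,
    num_attention_heads=4, intermediate_size=256, max_position_embeddings=64,
)
model = GPTNeoXForCausalLM(config)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Dummy batch of token ids standing in for a tokenized custom corpus.
batch = torch.randint(0, config.vocab_size, (2, 32))

model.train()
for _ in range(3):
    # For causal LM training, transformers shifts the labels internally,
    # so passing input_ids as labels is the standard pattern.
    loss = model(input_ids=batch, labels=batch).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(float(loss))
```

For larger models you would typically wrap the same idea in `Trainer` or `accelerate` rather than a hand-written loop.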
Where did the 19m model go?
Thanks for sharing this amazing work. This will hopefully help in developing a better understanding of how LLMs work.
I had one question. Are the validation perplexities for each of the models available (ideally with every model snapshot) so that we can compare models on equal footing?
I need to upload a utility / sample guide on how to inspect the data ordering / extract a batch at a given timestep. This'll essentially be a cleaned up version of the memorized seq util I'm working on.
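As a rough sketch of what that utility will do, assuming (as in this repo) a flat token memmap in training-shuffled order and a fixed sequence length. The file name, sequence length, and batch size below are made up for the demo; the real values would come from the unsharded `.bin` file and the training config.

```python
import numpy as np

SEQ_LEN = 8       # Pythia uses 2049-token contexts; tiny here for the demo
BATCH_SIZE = 2    # real batch sizes are in the thousands of sequences

# Stand-in for the unsharded .bin memmap of the pre-shuffled, tokenized Pile.
tokens = np.arange(64, dtype=np.uint16)
tokens.tofile("demo_tokens.bin")
mm = np.memmap("demo_tokens.bin", dtype=np.uint16, mode="r")

def get_batch(step):
    """Return the token batch seen at a given training step (0-indexed).

    Because the data is stored in training order, the batch at step t is
    just a contiguous slice of the memmap.
    """
    start = step * BATCH_SIZE * SEQ_LEN
    chunk = mm[start : start + BATCH_SIZE * SEQ_LEN]
    return chunk.reshape(BATCH_SIZE, SEQ_LEN)

print(get_batch(1))
```

The cleaned-up version in the repo will handle the real shard layout and context length, but the core indexing is just this slice-and-reshape.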
Features we want:
When I run the following code to load pythia-13b, I get a bunch of size-mismatch errors.

```python
from transformers import GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-13b",
    revision="step143000",
    cache_dir="./",
)
```
Errors:
Traceback (most recent call last):
File "download_pythia_models.py", line 34, in <module>
model = GPTNeoXForCausalLM.from_pretrained(
File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2379, in from_pretrained
) = cls._load_pretrained_model(
File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2695, in _load_pretrained_model
raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for GPTNeoXForCausalLM:
size mismatch for gpt_neox.embed_in.weight: copying a param with shape torch.Size([50688, 5120]) from checkpoint, the shape in current model is torch.Size([50432, 4096]).
size mismatch for gpt_neox.layers.0.input_layernorm.weight: copying a param with shape torch.Size([5120]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for gpt_neox.layers.0.input_layernorm.bias: copying a param with shape torch.Size([5120]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for gpt_neox.layers.0.post_attention_layernorm.weight: copying a param with shape torch.Size([5120]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for gpt_neox.layers.0.post_attention_layernorm.bias: copying a param with shape torch.Size([5120]) from checkpoint, the shape in current model is torch.Size([4096]).
...
These continue for every layer of the model. When I use `ignore_mismatched_sizes=True` in `GPTNeoXForCausalLM.from_pretrained`, I get this error instead:
Traceback (most recent call last):
File "/om2/user/ericjm/the-everything-machine/experiments/pythia-0/eval.py", line 52, in <module>
model = GPTNeoXForCausalLM.from_pretrained(
File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2379, in from_pretrained
) = cls._load_pretrained_model(
File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2636, in _load_pretrained_model
mismatched_keys += _find_mismatched_keys(
File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2564, in _find_mismatched_keys
and state_dict[checkpoint_key].shape != model_state_dict[model_key].shape
KeyError: 'embed_out.weight'
I imagine that some config just needs to be updated to reflect the actual model sizes? I do not get this error with any of the smaller models.
It seems like the `EleutherAI/pythia-800m` tokenizer includes `'token_type_ids'` values, but these lead to a ValueError when evaluating the following code:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-800m",
    revision="step143000",
    cache_dir="./pythia-800m/step143000",
)
tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-800m",
    revision="step143000",
    cache_dir="./pythia-800m/step143000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
model.generate(**inputs)
```
Here is the stack trace:
Traceback (most recent call last):
File "eval.py", line 76, in <module>
outputs = model.generate(**inputs, temperature=0.0, max_new_tokens=40)
File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/transformers/generation/utils.py", line 1296, in generate
self._validate_model_kwargs(model_kwargs.copy())
File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/transformers/generation/utils.py", line 993, in _validate_model_kwargs
raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list)
I can get around this error by simply using a tokenizer from another one of the models. This tokenizer, for instance, works:
```python
tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-19m",
    revision="step143000",
    cache_dir="./pythia-19m/step143000",
)
```
It seems like the tokenizers are the same for all the models, so this issue is pretty easy to get around, but I just thought I'd report it.
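Another workaround, until the tokenizer config is fixed, is to drop the offending key from the encoded inputs before calling `generate`. A sketch (the dict below is an illustrative stand-in for what `tokenizer("Hello, I am", return_tensors="pt")` returns; real ids would be tensors):

```python
def strip_token_type_ids(encoded):
    """Remove 'token_type_ids' so generate() doesn't reject it as an unused kwarg."""
    encoded = dict(encoded)  # BatchEncoding also supports dict(...) conversion
    encoded.pop("token_type_ids", None)
    return encoded

# Illustrative stand-in for a tokenizer's output.
inputs = {
    "input_ids": [[12092, 13, 309, 717]],
    "attention_mask": [[1, 1, 1, 1]],
    "token_type_ids": [[0, 0, 0, 0]],
}
clean = strip_token_type_ids(inputs)
print(sorted(clean))  # ['attention_mask', 'input_ids']
```

With the real objects you would then call `model.generate(**clean)`.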
We want metrics on the following models and checkpoints:

pythia-19m-deduped
- s3://s-eai-neox/pythia/19M_deduped/global_step{3000, 13000, 23000, ..., 133000, 143000}

pythia-350m-deduped
- s3://s-eai-neox/pythia/350M_dedup/global_step{1500, 6500, 11500, ..., 66500, 71500}
- EleutherAI/pythia-350m-deduped, revision={3000, 13000, 23000, ..., 133000, 143000}

pythia-1.3b-deduped
- s3://s-eai-neox/pythia/1.3B_dedup/global_step{1500, 6500, 11500, ..., 66500, 71500}
- EleutherAI/pythia-1.3b-deduped, revision={3000, 13000, 23000, ..., 133000, 143000}

pythia-6.7b-deduped
- s3://s-eai-neox/pythia/6.7B_deduped_new/global_step{3000, 13000, 23000, ..., 133000, 143000}
- EleutherAI/pythia-6.7b-deduped, revision={3000, 13000, 23000, ..., 133000, 143000}
We want to use the following tasks: "What does p stand for" and "refers_to".
I modified https://github.com/EleutherAI/lm-evaluation-harness/blob/f9eca2c8160be8c20ecc956b7ff545f880160d0e/lm_eval/models/gpt2.py#L50 to add `transformers.GPTNeoXTokenizerFast`.

The command is:

```
python main.py --model gpt2 --model_args pretrained=/work/lm-evaluation-harness/ckpts/pythia-70m/step143000/models--EleutherAI--pythia-70m/snapshots/1c607732430c35e6387a86528d857887e87cae1f --tasks lambada_openai,hellaswag --device 1
```

The traceback is:
Running loglikelihood requests
0%| | 8/45296 [00:01<1:46:22, 7.10it/s]
Traceback (most recent call last):
File "/work/lm-evaluation-harness/main.py", line 108, in <module>
main()
File "/work/lm-evaluation-harness/main.py", line 79, in main
results = evaluator.simple_evaluate(
File "/work/lm-evaluation-harness/lm_eval/utils.py", line 161, in _wrapper
return fn(*args, **kwargs)
File "/work/lm-evaluation-harness/lm_eval/evaluator.py", line 86, in simple_evaluate
results = evaluate(
File "/work/lm-evaluation-harness/lm_eval/utils.py", line 161, in _wrapper
return fn(*args, **kwargs)
File "/work/lm-evaluation-harness/lm_eval/evaluator.py", line 247, in evaluate
resps = getattr(lm, reqtype)([req.args for req in reqs])
File "/work/lm-evaluation-harness/lm_eval/base.py", line 820, in fn
rem_res = getattr(self.lm, attr)(remaining_reqs)
File "/work/lm-evaluation-harness/lm_eval/base.py", line 185, in loglikelihood
return self._loglikelihood_tokens(new_reqs)
File "/work/lm-evaluation-harness/lm_eval/base.py", line 317, in _loglikelihood_tokens
logits = torch.gather(logits, 2, cont_toks.unsqueeze(-1)).squeeze(
RuntimeError: index 50276 is out of bounds for dimension 2 with size 50257
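The error is consistent with a tokenizer/model vocabulary mismatch: the harness's gpt2 adapter assumes the GPT-2 vocabulary of 50,257 entries, while Pythia's GPT-NeoX tokenizer produces ids such as 50276 that only exist in the NeoX vocabulary. The vocab sizes below are documented, but treat the check itself as a sketch of the sanity test, not harness code:

```python
def check_vocab_compat(max_token_id, model_vocab_size):
    """A token id must index into the model's output (logits) dimension."""
    return max_token_id < model_vocab_size

GPT2_VOCAB = 50257     # what the unmodified harness adapter assumes
OFFENDING_ID = 50276   # the id from the traceback, valid for the NeoX tokenizer

print(check_vocab_compat(OFFENDING_ID, GPT2_VOCAB))  # False: the observed crash
print(check_vocab_compat(OFFENDING_ID, 50304))       # True with Pythia's padded vocab
```

In other words, patching in `GPTNeoXTokenizerFast` is only half the fix; the model side must also report the NeoX vocab size rather than GPT-2's.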
Thank you EleutherAI team for this valuable resource!
I noticed that the `vocab_size` changes between different pythia models. Based on the `config.json` files from the models hosted on HuggingFace, the models have the following vocab sizes:
pythia-70m: 50304
pythia-160m: 50304
pythia-410m: 50304
pythia-1b: 50304
pythia-1.4b: 50304
pythia-2.8b: 50304
pythia-6.9b: 50432
pythia-12b: 50688
Strangely, these sizes also don't match the vocab size of the tokenizers for each model. Based on `tokenizer.get_vocab()`, the tokenizer for each model size has a vocab size of 50277. Does anyone know the reason for this vocab size mismatch?
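My understanding (an educated guess, not an official answer) is that GPT-NeoX pads the embedding matrix so that its row count is divisible by 128 times the tensor-parallel degree, for GPU throughput: the tokenizer's true 50,277 entries get padded up, and larger models trained with more tensor parallelism get more padding. The reported config sizes are at least numerically consistent with that:

```python
import math

def padded_vocab(true_vocab, multiple):
    # Round up to the nearest multiple (assumed: 128 * tensor-parallel degree).
    return math.ceil(true_vocab / multiple) * multiple

TRUE_VOCAB = 50277  # size of tokenizer.get_vocab()
print(padded_vocab(TRUE_VOCAB, 128))  # 50304: pythia-70m through pythia-2.8b
print(padded_vocab(TRUE_VOCAB, 256))  # 50432: pythia-6.9b
print(padded_vocab(TRUE_VOCAB, 512))  # 50688: pythia-12b
```

The extra rows are never produced by the tokenizer, so they are effectively dead logits; this is harmless for inference but explains the config/tokenizer mismatch.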
How does training order influence memorization?
Collect zero-shot and 5-shot performance on the Pythia model suite across a selected subset of checkpoints over time.
Tasks:
Hi, I am trying to train a GPT model from scratch using your training script. However, you have only provided your preprocessed data without the preprocessing script. Would it be possible to share the preprocessing scripts to generate the .bin and .idx files?
It might be worth also hosting the Pile `.bin` and `.idx` files, so that people can more easily reproduce our training runs on the same data if they desire. I don't think the deduplicated Pile has been hosted anywhere before.
(Re)train pythia models with the last 7% of training data adjusted to have all female pronouns.

- s3://s-eai-neox/pythia/1.3B_dedup/global_step66500, trained for last 5k steps with intervened data
- s3://s-eai-neox/pythia/19M_deduped/global_step133000, trained for last 10k steps with intervened data
- s3://s-eai-neox/pythia/350M_dedup/global_step66500, trained for last 5k steps with intervened data
- s3://s-eai-neox/pythia/6.7B_deduped_new/global_step133000, trained for last 10k steps with intervened data

All intervened models should be evaluated on the same benchmarks as #16 for all the saved checkpoints post-intervention. All saved intervened checkpoints should also be evaluated on the same benchmarks as chosen in #51.

If we get meaningful numbers from the above and have evaluated all the above models:

- s3://s-eai-neox/pythia/1.3B_dedup/global_step66500, trained for last 10k steps with intervened data

Currently some of the 0- and 5-shot evals I ran appear to be wrong (the 6.7B and 1.3B evals, for sure). Not sure what went wrong, but rerunning is quick.
I'll pull the ones that may be bad from the repo asap! We'll need to rerun these.
Hi,
first of all, thanks for your great contributions to open research!
is there any plan to make the training logs accessible?
I've noticed something really odd while messing around with the pythia models for the `tuned_lens` project. It seems like the input embeddings were not reset from the v0 to the current models. Was that intentional?
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = torch.device('cpu')
model_a = AutoModelForCausalLM.from_pretrained('EleutherAI/pythia-410m-deduped-v0')
model_a = model_a.to(device)
model_b = AutoModelForCausalLM.from_pretrained('EleutherAI/pythia-410m-deduped')
model_b = model_b.to(device)
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/pythia-410m-deduped')
input_ids = tokenizer.encode("it was the best of times, it was the worst of times", return_tensors="pt")

model_b.set_input_embeddings(model_a.get_input_embeddings())
with torch.no_grad():
    print("Outputs with the input embeddings swapped")
    print(tokenizer.decode(model_b(input_ids).logits.argmax(dim=-1)[0].tolist()))

model_a.get_input_embeddings().reset_parameters()
model_b.set_input_embeddings(model_a.get_input_embeddings())
with torch.no_grad():
    print("Sanity check: outputs with the input embeddings reset")
    print(tokenizer.decode(model_b(input_ids).logits.argmax(dim=-1)[0].tolist()))
```
Output:
Outputs with the input embeddings swapped
, a first of the to and was the best of times,
Sanity check: outputs with the input embeddings reset
se-. for-..of-- for.
FYI this does not seem to happen with the output embeddings.
Hi, there seems to be no training time or GPU utilization report in your paper. Could you share some details, such as the approximate TFLOPs per GPU when training the 12B model? That would help estimate training time for models of a similar scale.
Thanks a lot!
What method or library was used to carry out the deduplication of the Pile? I have searched previous issues and this repo and see no other mention of the deduplication methodology.
I don't think that the deduped Pile raw text data is hosted anywhere; I couldn't find it on the Eye. Even if we don't host the deduped Pile `.bin` and `.idx` files somewhere, we definitely need to host the raw deduped Pile data to make these experiments replicable.
Hailey:
0-shot:
EleutherAI/pythia-v1.1-70m
EleutherAI/pythia-v1.1-70m-deduped
EleutherAI/pythia-v1.1-160m
EleutherAI/pythia-v1.1-160m-deduped
EleutherAI/pythia-v1.1-410m
EleutherAI/pythia-v1.1-410m-deduped
EleutherAI/pythia-v1.1-1b
EleutherAI/pythia-v1.1-1b-deduped
EleutherAI/pythia-v1.1-1.4b
EleutherAI/pythia-v1.1-1.4b-deduped
EleutherAI/pythia-v1.1-2.8b
EleutherAI/pythia-v1.1-2.8b-deduped
EleutherAI/pythia-v1.1-6.9b
EleutherAI/pythia-v1.1-6.9b-deduped
EleutherAI/pythia-v1.1-12b (missing 43000, 123000, 133000, 143000)
EleutherAI/pythia-v1.1-12b-deduped (missing 33000, 43000, 53000, 63000, 73000, 83000, 93000)

Sai:
steps equivalent to [3000,13000,....,123000,133000,143000]
EleutherAI/pythia-v1.1-70m-0.25MtokBS (model total steps: 1144000)
EleutherAI/pythia-v1.1-160m-0.5MtokBS (model total steps: 572000)
EleutherAI/pythia-v1.1-410m-0.5MtokBS (model total steps: 572000)
EleutherAI/pythia-v1.1-1b-0.5MtokBS (model total steps: 572000)
EleutherAI/pythia-v1.1-1.4b-1MtokBS (model total steps: 286000)

Aviya:
5-shot, steps [3000,13000,....,123000,133000,143000]:
EleutherAI/pythia-v1.1-70m
EleutherAI/pythia-v1.1-70m-deduped
EleutherAI/pythia-v1.1-160m
EleutherAI/pythia-v1.1-160m-deduped
EleutherAI/pythia-v1.1-410m
EleutherAI/pythia-v1.1-410m-deduped
EleutherAI/pythia-v1.1-1b
EleutherAI/pythia-v1.1-1b-deduped
EleutherAI/pythia-v1.1-1.4b
EleutherAI/pythia-v1.1-1.4b-deduped
EleutherAI/pythia-v1.1-2.8b
EleutherAI/pythia-v1.1-2.8b-deduped
EleutherAI/pythia-v1.1-6.9b
EleutherAI/pythia-v1.1-6.9b-deduped
EleutherAI/pythia-v1.1-12b
EleutherAI/pythia-v1.1-12b-deduped
Herbie?
Task list:
hendrycksTest*
piqa
sciq
lambada_openai
winogrande
wsc
arc_challenge
arc_easy
logiqa
crows_pairs_*
Models missing still:
EleutherAI/intervention-pythia-v1.1-1.4b (MISSING)
EleutherAI/intervention-pythia-v1.1-1.4b-long (MISSING)
EleutherAI/pythia-v1.1-12b (steps after 123000 are missing)
EleutherAI/pythia-v1.1-1b (MISSING)
EleutherAI/pythia-v1.1-410m-0.5MtokBS
EleutherAI/pythia-v1.1-1b-0.5MtokBS
EleutherAI/pythia-v1.1-1.4b-1MtokBS
In the readme you have a note
TODO: forthcoming: more information on how to replicate + relaunch the Pythia training runs, once the data is actually downloaded.
We are trying to reproduce your results and it would be awesome to get some more details here. One particular thing that would be helpful would be to point us to a version/commit of gpt-neox that is similar to the one you used for these training runs, unless you're confident that the newest version of that library will still closely reproduce these results.
Survival analysis is a potential model for understanding whether a model that has memorized a string at time t is likely to still have that string memorized at later checkpoints.
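To make the idea concrete, here is a toy Kaplan-Meier-style sketch: treat "forgetting a memorized string" as the event, and strings still memorized at the final checkpoint as right-censored observations. All data below is invented for illustration; a real analysis would use per-sequence memorization records across checkpoints.

```python
def kaplan_meier(durations, observed):
    """Return (time, survival probability) pairs for right-censored data.

    durations[i]: checkpoint index at which sequence i was forgotten
                  (or last seen memorized, if censored).
    observed[i]:  True if the forgetting event was actually observed.
    """
    at_risk = len(durations)
    surv, curve = 1.0, []
    for t in sorted(set(durations)):
        events = sum(1 for d, o in zip(durations, observed) if d == t and o)
        if events:
            surv *= 1 - events / at_risk
            curve.append((t, surv))
        at_risk -= sum(1 for d in durations if d == t)
    return curve

# Invented data: 5 sequences; forgetting observed for 3, 2 censored at t=10.
durations = [2, 4, 4, 10, 10]
observed = [True, True, True, False, False]
print(kaplan_meier(durations, observed))
```

The resulting curve estimates P(string still memorized at time t), which is exactly the quantity the hypothesis above is about.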
We need to upload all model checkpoints to be hosted somewhere for researchers to access them.
Final checkpoints should go on the HF hub; hosting for intermediate checkpoints is pending.
We hypothesize that the Scatter SDE summary plot of the accuracy distribution is an exponential decay with a bump at acc = 1 corresponding to the sum of the tail probabilities (since the memorization score can't go above 1). Specifically, let p(x) = [the number of sequences in the training data that have accuracy x]. We want to do the following:
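The hypothesized shape can be written as a concrete density: a truncated exponential on [0, 1) plus a point mass at acc = 1 that absorbs the tail probability. A quick numerical sanity check that the two pieces sum to 1 (the rate λ below is arbitrary):

```python
import math

def accuracy_density(lam, n_bins=1000):
    """Discretize p(x) = lam * exp(-lam * x) on [0, 1), plus a bump at x = 1."""
    width = 1.0 / n_bins
    # Midpoint-rule mass in each accuracy bin on [0, 1).
    bins = [lam * math.exp(-lam * (i + 0.5) * width) * width for i in range(n_bins)]
    # The bump at acc = 1 is the tail mass: integral from 1 to inf of lam e^{-lam x} dx.
    bump = math.exp(-lam)
    return bins, bump

bins, bump = accuracy_density(lam=5.0)
print(sum(bins) + bump)  # approximately 1.0: the bump equals the tail probability
```

Fitting λ to the empirical accuracy histogram (and comparing the predicted bump e^{-λ} to the observed mass at acc = 1) would be a direct test of the hypothesis.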
Hi there! Thank you so much for releasing these models! They've already been really valuable for my research. One small question: how were the model names chosen, specifically the parameter number part of the names? By my calculation here are the number of parameters for each of the models, excluding the embed and unembed matrices:
'pythia-19m': 18915328,
'pythia-125m': 85056000,
'pythia-350m': 302311424,
'pythia-800m': 805736448,
'pythia-1.3b': 1208602624,
'pythia-2.7b': 2517652480,
'pythia-6.7b': 6444163072,
'pythia-13b': 11327027200
And here are the number of parameters if I include one, but not both, of the embed/unembed matrices in the calculation:
'pythia-19m': 44670976,
'pythia-125m': 123689472,
'pythia-350m': 353822720,
'pythia-800m': 908759040,
'pythia-1.3b': 1311625216,
'pythia-2.7b': 2646430720,
'pythia-6.7b': 6650732544,
'pythia-13b': 11586549760
I guess for some of the models one embed matrix was included in the parameter count, while others exclude both the embed and unembed matrices from the count? The `13b` model seems like an overestimate whichever way you count it.
Dear EleutherAI team,
I've noticed that the weights associated with the recently added "step0" and "step1" checkpoints are identical for all pythia models:
```python
import sys

import torch
from transformers import GPTNeoXForCausalLM

def main():
    print(f"========== {sys.argv[1]} ==========")
    model_step0 = GPTNeoXForCausalLM.from_pretrained(sys.argv[1], revision="step0", cache_dir="./test")
    model_step1 = GPTNeoXForCausalLM.from_pretrained(sys.argv[1], revision="step1", cache_dir="./test")
    for (name0, param0), (name1, param1) in zip(model_step0.named_parameters(), model_step1.named_parameters()):
        print(name0, name1, name0 == name1, torch.all(param0 == param1))
```
This yields something like the following for all eight pythia models:
========== EleutherAI/pythia-70m ==========
gpt_neox.embed_in.weight gpt_neox.embed_in.weight True tensor(True)
gpt_neox.layers.0.input_layernorm.weight gpt_neox.layers.0.input_layernorm.weight True tensor(True)
...
gpt_neox.final_layer_norm.weight gpt_neox.final_layer_norm.weight True tensor(True)
gpt_neox.final_layer_norm.bias gpt_neox.final_layer_norm.bias True tensor(True)
embed_out.weight embed_out.weight True tensor(True)
Would it be possible for you to clarify whether these identical weights correspond to those from "step0" or "step1?" I've noticed that the conditional probabilities calculated using these weights aren't perfectly uniform, which leads me to believe these are actually weights from "step1."
Thanks!
Byung-Doh
I just wanted to confirm that the 3 versions you have of the deduplicated data have the same data ordering? I was hoping to use the `jsonl` one, but wanted to make sure it accurately replicates the data ordering in the tokenized dataset that you used for training.