
pythia's Issues

Social bias and knowledge of social bias

Pretty straightforward: we need to implement the following in the eval harness:

  1. WinoGender Schemas (also called AX-g under SuperGLUE)
  2. CrowS-Pairs
  3. WinoBias

For a discussion of “recognizing bias” vs. “reproducing bias,” check out here. The primary goal is to look at how the model's developing ability to recognize bias correlates with its developing tendency to produce biased content.

It would also be interesting to look at correlations across categories of bias, e.g., does the model learn to reproduce and/or identify all types of bias at an equal rate? And if not, can we identify specific subsets of the Pile that are “biased in how they are biased,” so to speak?
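
A hedged sketch of how these tasks could be run once they exist in the eval harness, using the harness's Python entry point; the task names and the checkpoint below are illustrative assumptions, not final harness identifiers:

# Hedged sketch: run bias-related tasks through lm-evaluation-harness.
# Task names and the pretrained checkpoint are assumptions for illustration.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="gpt2",  # the HF causal-LM adapter used elsewhere for Pythia evals
    model_args="pretrained=EleutherAI/pythia-1.3b-deduped,revision=step143000",
    tasks=["crows_pairs_english", "wino_bias_type1_pro", "wino_bias_type1_anti"],
    num_fewshot=0,
)
print(results["results"])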

Generation issues when using KoboldAI

Hi there, I've only used Pythia with KoboldAI, particularly 13B and 13B deduped (or maybe 12B due to the rename), and I'm having generation issues. Since I already have a thread on this on KoboldAI, and my post there is already fairly substantial, I'll just link it here: ebolam/KoboldAI#312. I'm also attaching a screenshot of one of the issues, particularly the spacing issue.
[screenshot of the spacing issue]

Pythia few shot capabilities

Hi, thanks for the amazing work!

I tried to use a Pythia model on a simple few-shot task. I gave it a few examples of sentiment analysis (following this GPT-J example, which solves the task successfully: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GPT-J-6B/Inference_with_GPT_J_6B.ipynb), and it seems like Pythia did not even understand the task (it generates unrelated tokens).
Is there a reason for Pythia's failure on this simple task, compared to GPT-J's success (despite GPT-J having only about twice the number of parameters)?
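
For reference, a minimal sketch of the kind of few-shot prompt being described, via the HF generate API; the prompt wording and model size are assumptions, not the exact setup from the notebook:

# Hedged sketch of a few-shot sentiment prompt; prompt format and model
# choice are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-2.8b-deduped"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = (
    "Review: The food was terrible.\nSentiment: negative\n\n"
    "Review: I loved every minute of it.\nSentiment: positive\n\n"
    "Review: The service was slow and the room was dirty.\nSentiment:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=3, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))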

Pythia vs. GPT-NeoX

Hi, thanks for the awesome work.

I noticed there are both Pythia models and GPT-NeoX models on EleutherAI's HF hub, and both seem to be very recent. I wonder what the main differences between them are, and whether there is any reason to prefer one over the other.

Thank you.

Inconsistent batch sizes reported in readme

Thanks for the amazing work!

In the readme, the table reports some models as being trained with a batch size of 4M, but later on you say

All models were trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens

which of these is correct?

(Also, you say that checkpoints are created every 2B tokens - if batch sizes differ between models, does that mean that some models have only 500 training steps between checkpoints?)
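
For reference, the arithmetic behind that last question, as a sketch (assuming a checkpoint interval of exactly 2,097,152,000 tokens):

# Hedged back-of-the-envelope: steps between checkpoints at each batch size,
# assuming checkpoints every ~2B tokens (2,097,152,000 exactly).
checkpoint_interval_tokens = 2_097_152_000

for batch_size_tokens in (2_097_152, 4_194_304):  # the 2M figure vs. the 4M figure
    steps = checkpoint_interval_tokens // batch_size_tokens
    print(f"batch size {batch_size_tokens:>9,} tokens -> {steps} steps per checkpoint")
# -> 1000 steps at a 2M-token batch, 500 steps at a 4M-token batch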

Mistake in readme

In the readme it says

To download and use the deduplicated Pile training data, run:

git lfs clone https://huggingface.co/datasets/EleutherAI/pythia_pile_idxmaps

python utils/unshard_memmap.py --input_file ./pythia_pile_idxmaps/pile_0.87_deduped_text_document-00000-of-00082.bin --num_shards 83 --output_dir ./pythia_pile_idxmaps/

But it should actually point to EleutherAI/pythia_deduped_pile_idxmaps.
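
That is, presumably (directory names inferred from the dataset id):

git lfs clone https://huggingface.co/datasets/EleutherAI/pythia_deduped_pile_idxmaps

python utils/unshard_memmap.py --input_file ./pythia_deduped_pile_idxmaps/pile_0.87_deduped_text_document-00000-of-00082.bin --num_shards 83 --output_dir ./pythia_deduped_pile_idxmaps/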

Improve Pythia repository documentation

Todo items include:

  • upload all NeoX final configs to repository
  • Update table
  • Expand readme documentation with description of model suite
  • Add instructions for downloading model ckpts from HF hub, and from the eye

Validation Perplexities

Thanks for sharing this amazing work. This will hopefully help in developing a better understanding of how LLMs work.

I had one question. Are the validation perplexities for each of the models available (ideally with every model snapshot) so that we can compare models on equal footing?
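
In the meantime, a rough sketch of computing perplexity for a chosen snapshot yourself; the evaluation text, model size, and chunking are assumptions, and the official Pile validation split would be needed for a truly equal footing:

# Hedged sketch: perplexity of one Pythia snapshot on some local validation
# text. The file path, model size, and context length are illustrative
# assumptions, not the official evaluation setup.
import math
import torch
from transformers import AutoTokenizer, GPTNeoXForCausalLM

model_name, revision = "EleutherAI/pythia-410m-deduped", "step143000"
tokenizer = AutoTokenizer.from_pretrained(model_name, revision=revision)
model = GPTNeoXForCausalLM.from_pretrained(model_name, revision=revision).eval()

ids = tokenizer(open("validation_sample.txt").read(), return_tensors="pt").input_ids
nll, n_tokens, ctx = 0.0, 0, 2048

with torch.no_grad():
    for start in range(0, ids.size(1), ctx):
        chunk = ids[:, start : start + ctx]
        if chunk.size(1) < 2:
            break
        out = model(chunk, labels=chunk)           # HF shifts labels internally
        nll += out.loss.item() * (chunk.size(1) - 1)
        n_tokens += chunk.size(1) - 1

print("perplexity:", math.exp(nll / n_tokens))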

Data viewing / Batch Reconstruction utility

I need to upload a utility / sample guide on how to inspect the data ordering / extract a batch at a given timestep. This'll essentially be a cleaned up version of the memorized seq util I'm working on.

Features we want:

  • Verified correct dataloader construction (verify w/ memorization)
  • Tool to extract + save a single timestep's batch to a numpy file
  • Take as argument a YML file from this repo
  • Detach from GPT-NeoX repo fully (or preserve this utility in a separate branch of NeoX)
  • Argparse to select: model, mode, some default statistics?
  • Support stepping through data over all timesteps + recording a statistic
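
As a starting point, a minimal sketch of the batch-extraction step, assuming the tokens have already been unsharded into a flat uint16 array laid out as one pre-shuffled row of 2049 token ids per sequence, consumed sequentially at 1024 sequences per step (all assumptions to verify against the memorization setup):

# Hedged sketch: pull the batch seen at a given training step out of a
# memory-mapped token file. Layout assumptions: flat uint16 ids, rows of
# 2049 tokens (2048 context + 1 shifted label), 1024 rows per step
# (1024 * 2048 = 2,097,152 training tokens per batch), sequential order.
import numpy as np

SEQ_LEN = 2049
BATCH_SIZE = 1024

def batch_at_step(mmap_path: str, step: int) -> np.ndarray:
    data = np.memmap(mmap_path, dtype=np.uint16, mode="r").reshape(-1, SEQ_LEN)
    start = step * BATCH_SIZE
    return np.asarray(data[start : start + BATCH_SIZE])

# e.g. np.save("step_1000_batch.npy", batch_at_step("pile_deduped.bin", 1000))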

pythia-13b size mismatch

When I run the following code to load up pythia-13b, I get a bunch of size mismatch errors.

from transformers import GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-13b",
    revision="step143000",
    cache_dir="./",
)

Errors:

Traceback (most recent call last):
  File "download_pythia_models.py", line 34, in <module>
    model = GPTNeoXForCausalLM.from_pretrained(
  File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2379, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2695, in _load_pretrained_model
    raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for GPTNeoXForCausalLM:
        size mismatch for gpt_neox.embed_in.weight: copying a param with shape torch.Size([50688, 5120]) from checkpoint, the shape in current model is torch.Size([50432, 4096]).
        size mismatch for gpt_neox.layers.0.input_layernorm.weight: copying a param with shape torch.Size([5120]) from checkpoint, the shape in current model is torch.Size([4096]).
        size mismatch for gpt_neox.layers.0.input_layernorm.bias: copying a param with shape torch.Size([5120]) from checkpoint, the shape in current model is torch.Size([4096]).
        size mismatch for gpt_neox.layers.0.post_attention_layernorm.weight: copying a param with shape torch.Size([5120]) from checkpoint, the shape in current model is torch.Size([4096]).
        size mismatch for gpt_neox.layers.0.post_attention_layernorm.bias: copying a param with shape torch.Size([5120]) from checkpoint, the shape in current model is torch.Size([4096]).
...

These continue for every layer of the model. When I use ignore_mismatched_sizes=True in GPTNeoXForCausalLM.from_pretrained, I get this error instead:

Traceback (most recent call last):
  File "/om2/user/ericjm/the-everything-machine/experiments/pythia-0/eval.py", line 52, in <module>
    model = GPTNeoXForCausalLM.from_pretrained(
  File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2379, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2636, in _load_pretrained_model
    mismatched_keys += _find_mismatched_keys(
  File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2564, in _find_mismatched_keys
    and state_dict[checkpoint_key].shape != model_state_dict[model_key].shape
KeyError: 'embed_out.weight'

I imagine that some config just needs to be updated to reflect the actual model sizes? I do not get this error with any of the smaller models.
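
One quick way to check whether the hub config disagrees with the checkpoint shapes (a sketch; the repo name and revision are taken from the snippet above):

# Hedged sketch: compare the hub config against the shapes reported in the
# size-mismatch errors (5120 hidden / 50688 vocab in the traceback).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("EleutherAI/pythia-13b", revision="step143000")
print(config.hidden_size, config.vocab_size, config.num_hidden_layers)
# If this prints 4096 / 50432 while the checkpoint tensors are 5120 / 50688,
# the config on the hub is the piece that needs updating.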

EleutherAI/pythia-800m tokenizer adds unusual kwargs, causes a ValueError when evaluating model

It seems like the EleutherAI/pythia-800m tokenizer includes 'token_type_ids' values, but these lead to a ValueError when evaluating the following code:

from transformers import GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-800m",
  revision="step143000",
  cache_dir=".pythia-800m/step143000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-800m",
  revision="step143000",
  cache_dir="./pythia-800m/step143000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
model.generate(**inputs)

Here is the stack trace:

Traceback (most recent call last):
  File "eval.py", line 76, in <module>
    outputs = model.generate(**inputs, temperature=0.0, max_new_tokens=40)
  File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/transformers/generation/utils.py", line 1296, in generate
    self._validate_model_kwargs(model_kwargs.copy())
  File "/om2/user/ericjm/miniconda3/envs/phase-changes/lib/python3.8/site-packages/transformers/generation/utils.py", line 993, in _validate_model_kwargs
    raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list)

I can get around this error by simply using a tokenizer from another one of the models. This tokenizer, for instance, works:

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-19m",
  revision="step143000",
  cache_dir="./pythia-19m/step143000",
)

It seems like the tokenizers are the same for all the models, so this issue is pretty easy to get around, but I just thought I'd report it.
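
Another workaround that keeps the original tokenizer is to drop the offending key before generating; a sketch, reusing the model and tokenizer from the snippet above:

# Hedged sketch: strip the unused 'token_type_ids' entry added by the
# EleutherAI/pythia-800m tokenizer before passing kwargs to generate().
inputs = tokenizer("Hello, I am", return_tensors="pt")
inputs.pop("token_type_ids", None)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0]))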

Bias metrics on selected Pythia models

We want metrics on the following models and checkpoints:

pythia-19m-deduped

  • Checkpoints at s3://s-eai-neox/pythia/19M_deduped/global_step{3000, 13000, 23000, ...., 133000, 143000}

pythia-350m-deduped

  • Checkpoints at s3://s-eai-neox/pythia/350M_dedup/global_step{1500, 6500, 11500, ...., 66500, 71500}
  • Or at HuggingFace, EleutherAI/pythia-350m-deduped, revision={3000, 13000, 23000, ..., 133000, 143000}

pythia-1.3b-deduped

  • Checkpoints at s3://s-eai-neox/pythia/1.3B_dedup/global_step{1500, 6500, 11500, ...., 66500, 71500}
  • Or at HuggingFace, EleutherAI/pythia-1.3b-deduped, revision={3000, 13000, 23000, ..., 133000, 143000}

pythia-6.7b-deduped

  • Checkpoints at s3://s-eai-neox/pythia/6.7B_deduped_new/global_step{3000, 13000, 23000, ...., 133000, 143000}
  • Or at HuggingFace, EleutherAI/pythia-6.7b-deduped, revision={3000, 13000, 23000, ..., 133000, 143000}

We want to use the following tasks:

  • wino_bias all 4 subsets, templates "What does p stand for" and "refers_to"
  • Other selected bias evaluation datasets???

Pythia ckpts: HF config vocab_size (50304 for the 70m ckpt) and tokenizer.json (50257) are mismatched

I modified https://github.com/EleutherAI/lm-evaluation-harness/blob/f9eca2c8160be8c20ecc956b7ff545f880160d0e/lm_eval/models/gpt2.py#L50 to add transformers.GPTNeoXTokenizerFast.

The command is:
python main.py --model gpt2 --model_args pretrained=/work/lm-evaluation-harness/ckpts/pythia-70m/step143000/models--EleutherAI--pythia-70m/snapshots/1c607732430c35e6387a86528d857887e87cae1f --tasks lambada_openai,hellaswag --device 1

The traceback is:

Running loglikelihood requests
0%| | 8/45296 [00:01<1:46:22, 7.10it/s]
Traceback (most recent call last):
  File "/work/lm-evaluation-harness/main.py", line 108, in <module>
    main()
  File "/work/lm-evaluation-harness/main.py", line 79, in main
    results = evaluator.simple_evaluate(
  File "/work/lm-evaluation-harness/lm_eval/utils.py", line 161, in _wrapper
    return fn(*args, **kwargs)
  File "/work/lm-evaluation-harness/lm_eval/evaluator.py", line 86, in simple_evaluate
    results = evaluate(
  File "/work/lm-evaluation-harness/lm_eval/utils.py", line 161, in _wrapper
    return fn(*args, **kwargs)
  File "/work/lm-evaluation-harness/lm_eval/evaluator.py", line 247, in evaluate
    resps = getattr(lm, reqtype)([req.args for req in reqs])
  File "/work/lm-evaluation-harness/lm_eval/base.py", line 820, in fn
    rem_res = getattr(self.lm, attr)(remaining_reqs)
  File "/work/lm-evaluation-harness/lm_eval/base.py", line 185, in loglikelihood
    return self._loglikelihood_tokens(new_reqs)
  File "/work/lm-evaluation-harness/lm_eval/base.py", line 317, in _loglikelihood_tokens
    logits = torch.gather(logits, 2, cont_toks.unsqueeze(-1)).squeeze(
RuntimeError: index 50276 is out of bounds for dimension 2 with size 50257

Does early memorization predict late memorization?

We currently have the following correlation heat-map, which indicates that the answer is "yes." We should probably also make confusion matrices for a classifier that predicts memorization by the fully trained model by assuming it matches memorization at the 23M-sequence checkpoint.

[correlation heat-map]

Inconsistency in vocab sizes

Thank you EleutherAI team for this valuable resource!

I noticed that the vocab_size changes between different pythia models. Based on the config.json files from the models hosted on HuggingFace, the models have the following vocab sizes:

pythia-70m: 50304
pythia-160m: 50304
pythia-410m: 50304
pythia-1b: 50304
pythia-1.4b: 50304
pythia-2.8b: 50304
pythia-6.9b: 50432
pythia-12b: 50688

Strangely, these sizes also don't match the vocab size of the tokenizers for each model. Based on tokenizer.get_vocab(), the tokenizer for each model size has a vocab size of 50277. Does anyone know the reason for this vocab size mismatch?
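
For reference, a sketch that reproduces the comparison (the larger config values are commonly explained by padding the embedding matrix up to a hardware-friendly multiple, but that is an assumption to confirm):

# Hedged sketch: print config vocab_size vs. actual tokenizer vocab for each
# Pythia model size, reproducing the numbers listed above.
from transformers import AutoConfig, AutoTokenizer

for size in ["70m", "160m", "410m", "1b", "1.4b", "2.8b", "6.9b", "12b"]:
    repo = f"EleutherAI/pythia-{size}"
    config = AutoConfig.from_pretrained(repo)
    tokenizer = AutoTokenizer.from_pretrained(repo)
    print(f"{repo}: config={config.vocab_size}, tokenizer={len(tokenizer.get_vocab())}")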

General zero and few-shot evaluations on model suite

Collect zero-shot and 5-shot performance on the Pythia model suite across a selected subset of checkpoints over time.

Tasks:

  • MMLU (57 tasks)
  • BLiMP
  • PiQA
  • SciQ
  • Lambada
  • Wikitext
  • Winogrande
  • WSC
  • Arc-challenge and ARC-easy
  • Logiqa

What tool do you use for your data preprocessing/binarization?

Hi, I am trying to train a GPT model from scratch using your training script. However, you have only provided your preprocessed data without the preprocessing script. Would it be possible to share the preprocessing scripts to generate the .bin and .idx files?
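
For context, GPT-NeoX inherits Megatron's preprocessing script, which is what produces the .bin and .idx pairs; a hedged example invocation (the paths, tokenizer settings, and even exact flag names vary between versions, so treat this as an assumption and check python tools/preprocess_data.py --help):

python tools/preprocess_data.py --input ./data/pile_train.jsonl --output-prefix ./data/pile --tokenizer-type HFTokenizer --vocab ./20B_tokenizer.json --append-eod --workers 8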

Host Pile pretokenized .bin and .idx megatron files?

It might be worth also hosting the Pile .bin and .idx files, for people to more easily reproduce our training runs on the same data if they desire. I don't think the deduplicated Pile has been hosted anywhere before.

Train M -> F pronoun interventions on selected Pythia models

(Re)train pythia models with the last 7% of training data adjusted to have all female pronouns.

  • s3://s-eai-neox/pythia/1.3B_dedup/global_step66500 trained for last 5k steps with intervened data
  • s3://s-eai-neox/pythia/19M_deduped/global_step133000 trained for last 10k steps with intervened data
  • s3://s-eai-neox/pythia/350M_dedup/global_step66500 trained for last 5k steps with intervened data
  • s3://s-eai-neox/pythia/6.7B_deduped_new/global_step133000 trained for last 10k steps with intervened data

All intervened models should be evaluated on the same benchmarks as #16 for all the saved checkpoints post-intervention. All saved intervened checkpoints should also be evaluated on the same benchmarks as chosen in #51.

If we get meaningful numbers from the above and have evaluated all of the above models:

  • s3://s-eai-neox/pythia/1.3B_dedup/global_step66500 trained for last 10k steps with intervened data

Fix currently uploaded eval-harness numbers for 1.3B ; 6.7B

Currently, some of the 0- and 5-shot evals I ran appear to be wrong (the 6.7B and 1.3B evals, for sure). Not sure what went wrong, but rerunning is quick.

I'll pull the ones that may be bad from the repo ASAP! We'll need to rerun these.

training logs

Hi,

first of all, thanks for your great contributions to open research!

Is there any plan to make the training logs accessible?

Are pythia_v0 and the new pythia_v1 models using the same input embedding matrix?

I've noticed something really odd while messing around with the pythia models for the tuned_lens project.

It seems like the input embeddings were not reset from the v0 to the current models. Was that intentional?

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


device = torch.device('cpu')
model_a = AutoModelForCausalLM.from_pretrained('EleutherAI/pythia-410m-deduped-v0')
model_a = model_a.to(device)
model_b = AutoModelForCausalLM.from_pretrained('EleutherAI/pythia-410m-deduped')
model_b = model_b.to(device)
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/pythia-410m-deduped')

input_ids = tokenizer.encode("it was the best of times, it was the worst of times", return_tensors="pt")
model_b.set_input_embeddings(model_a.get_input_embeddings())
with torch.no_grad():
    print("Outputs with the input embeddings swapped")
    print(tokenizer.decode(model_b(input_ids).logits.argmax(dim=-1)[0].tolist()))

model_a.get_input_embeddings().reset_parameters()
model_b.set_input_embeddings(model_a.get_input_embeddings())
with torch.no_grad():
    print("Sanity check: outputs with the input embeddings reset")
    print(tokenizer.decode(model_b(input_ids).logits.argmax(dim=-1)[0].tolist()))

Output:

Outputs with the input embeddings swapped
, a first of the to and was the best of times,
Sanity check: outputs with the input embeddings reset

se-. for-..of-- for.

FYI this does not seem to happen with the output embeddings.

Training time or approximation of TFLOPs?

Hi, there doesn't seem to be any training time or GPU utilization report in your paper. Could you give us some details, like the approximate TFLOPs per GPU when training the 12B model? That would be useful for approximating training time at a similar model scale.

Thanks a lot!
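
While waiting for official numbers, the standard 6*N*D approximation gives a ballpark; a sketch in which the GPU count, peak throughput, and utilization are illustrative assumptions, not reported values:

# Hedged back-of-the-envelope for the 12B model using FLOPs ≈ 6 * params * tokens.
params = 12e9
tokens = 300e9                      # ~300B Pile tokens
total_flops = 6 * params * tokens   # ≈ 2.16e22 FLOPs

a100_peak = 312e12                  # A100 BF16 peak, FLOP/s
mfu = 0.35                          # assumed model FLOPs utilization
n_gpus = 256                        # assumed GPU count

seconds = total_flops / (a100_peak * mfu * n_gpus)
print(f"~{seconds / 86400:.1f} days total, ~{a100_peak * mfu / 1e12:.0f} TFLOP/s per GPU")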

How was deduplication done?

Specifically, what method or library was used to carry out the deduplication of the Pile?

I have searched previous issues and this repo and see no other mention of the deduplication methodology.

Host Deduped Pile raw jsonls

I don't think that the deduped Pile raw text data is hosted anywhere--I couldn't find it on the eye. Even if we don't host the deduped Pile .bin and .idx files somewhere, we definitely need to host the raw deduped Pile data to make these experiments replicable.

Complete outstanding evals

Hailey:
0-shot:

  • EleutherAI/pythia-v1.1-70m
  • EleutherAI/pythia-v1.1-70m-deduped
  • EleutherAI/pythia-v1.1-160m
  • EleutherAI/pythia-v1.1-160m-deduped
  • EleutherAI/pythia-v1.1-410m
  • EleutherAI/pythia-v1.1-410m-deduped
  • EleutherAI/pythia-v1.1-1b
  • EleutherAI/pythia-v1.1-1b-deduped
  • EleutherAI/pythia-v1.1-1.4b
  • EleutherAI/pythia-v1.1-1.4b-deduped
  • EleutherAI/pythia-v1.1-2.8b
  • EleutherAI/pythia-v1.1-2.8b-deduped
  • EleutherAI/pythia-v1.1-6.9b
  • EleutherAI/pythia-v1.1-6.9b-deduped
  • EleutherAI/pythia-v1.1-12b (missing 43000, 123000, 133000, 143000)
  • EleutherAI/pythia-v1.1-12b-deduped (missing 33000, 43000, 53000, 63000, 73000, 83000, 93000)
  • interventions

Sai:
steps equivalent to [3000,13000,....,123000,133000,143000]

  • EleutherAI/pythia-v1.1-70m-0.25MtokBS (model total steps: 1144000)
  • EleutherAI/pythia-v1.1-160m-0.5MtokBS (model total steps: 572000)
  • EleutherAI/pythia-v1.1-410m-0.5MtokBS (model total steps: 572000)
  • EleutherAI/pythia-v1.1-1b-0.5MtokBS (model total steps: 572000)
  • EleutherAI/pythia-v1.1-1.4b-1MtokBS (model total steps: 286000)

Aviya:
5-shot, steps [3000,13000,....,123000,133000,143000]:

  • EleutherAI/pythia-v1.1-70m
  • EleutherAI/pythia-v1.1-70m-deduped
  • EleutherAI/pythia-v1.1-160m
  • EleutherAI/pythia-v1.1-160m-deduped
  • EleutherAI/pythia-v1.1-410m
  • EleutherAI/pythia-v1.1-410m-deduped
  • EleutherAI/pythia-v1.1-1b
  • EleutherAI/pythia-v1.1-1b-deduped
  • EleutherAI/pythia-v1.1-1.4b
  • EleutherAI/pythia-v1.1-1.4b-deduped
  • EleutherAI/pythia-v1.1-2.8b
  • EleutherAI/pythia-v1.1-2.8b-deduped
  • EleutherAI/pythia-v1.1-6.9b
  • EleutherAI/pythia-v1.1-6.9b-deduped
  • EleutherAI/pythia-v1.1-12b
  • EleutherAI/pythia-v1.1-12b-deduped

Herbie?

  • Winobias on interventions + baseline

Task list:

  • hendrycksTest*
  • piqa
  • sciq
  • lambada_openai
  • winogrande
  • wsc
  • arc_challenge
  • arc_easy
  • logiqa
  • crows_pairs_*

Models missing still:

  • EleutherAI/intervention-pythia-v1.1-1.4b (MISSING)
  • EleutherAI/intervention-pythia-v1.1-1.4b-long (MISSING)
  • EleutherAI/pythia-v1.1-12b (steps after 123000 are missing)
  • EleutherAI/pythia-v1.1-1b (MISSING)
  • EleutherAI/pythia-v1.1-410m-0.5MtokBS
  • EleutherAI/pythia-v1.1-1b-0.5MtokBS
  • EleutherAI/pythia-v1.1-1.4b-1MtokBS

Add more details for reproducing training runs

In the readme you have a note

TODO: forthcoming: more information on how to replicate + relaunch the Pythia training runs, once the data is actually downloaded.

We are trying to reproduce your results and it would be awesome to get some more details here. One particular thing that would be helpful would be to point us to a version/commit of gpt-neox that is similar to the one you used for these training runs, unless you're confident that the newest version of that library will still closely reproduce these results.

Pythia Experiment 3: Gender bias

  • Collect cooccurrence statistics on deduped and non-deduped Pile
  • Cast WinoBias as a multiple choice classification task
  • Convert corpus stats into Pandas dataset + visualize distribution
  • Run WinoBias on all models
  • Collect correlations between WinoBias scores and data statistics
  • Create fine-tuning data points for insertion into last 3 ckpt retraining

Survival model analysis

Survival analysis is a potential framework for understanding whether a model that has memorized a string at time t is likely to still have that string memorized later in training.
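
A minimal sketch of what that could look like with a standard survival-analysis toolkit; the arrays below are placeholders, with "still memorized at the final checkpoint" treated as a right-censored observation:

# Hedged sketch: Kaplan-Meier estimate of how long a memorized sequence stays
# memorized. durations/events are placeholder data, not real measurements.
import numpy as np
from lifelines import KaplanMeierFitter

# duration: checkpoints a sequence remained memorized after first being
# memorized; event = 1 if it was eventually forgotten, 0 if still memorized
# at the last checkpoint (censored).
durations = np.array([3, 10, 14, 14, 2, 14, 7])
events = np.array([1, 1, 0, 0, 1, 0, 1])

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events)
print(kmf.survival_function_)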

Host pythia suite

We need to upload all model checkpoints to be hosted somewhere for researchers to access them.

Final checkpoints should go on the HF hub. Intermediate ckpts pending

Fit the exponential decay curve to accuracy distribution

We hypothesize that the Scatter SDE summary plot of the accuracy distribution is an exponential decay with a bump at acc = 1 corresponding to the sum of the tail probabilities (since the memorization score can't go above 1). Specifically, let p(x) = [the number of sequences in the training data that have accuracy x]. We want to do the following:

  1. Fit an exponential decay curve to p(x) looking only at x in [0, k] for k in [0.25, 0.5, 0.75, 0.9, 0.99]
  2. Check how well the curves agree on [k, infinity)
  3. Check whether the sum from i = 1 to infinity of p(i) according to the fit model equals the observed p(1) value.
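
A sketch of step 1 from the list above (plus the comparison against the observed bump at accuracy 1) using scipy; the accuracy array is a placeholder and the binning is an assumption:

# Hedged sketch: fit A * exp(-lam * x) to the accuracy histogram p(x) on
# x in [0, k] for each truncation point k listed above.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(x, A, lam):
    return A * np.exp(-lam * x)

accuracies = np.random.beta(0.5, 5.0, size=100_000)      # placeholder scores
counts, edges = np.histogram(accuracies, bins=50, range=(0.0, 1.0))
centers = (edges[:-1] + edges[1:]) / 2

for k in (0.25, 0.5, 0.75, 0.9, 0.99):
    mask = centers <= k
    (A, lam), _ = curve_fit(exp_decay, centers[mask], counts[mask], p0=(counts[0], 5.0))
    fitted_tail = exp_decay(1.0, A, lam)   # compare against the observed p(1) bump
    print(f"k={k:.2f}: A={A:.1f}, lambda={lam:.2f}, fitted p(1)={fitted_tail:.1f}, observed={counts[-1]}")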

Question about naming convention for models

Hi there! Thank you so much for releasing these models! They've already been really valuable for my research. One small question: how were the model names chosen, specifically the parameter-count part of the names? By my calculation, here are the parameter counts for each of the models, excluding the embed and unembed matrices:

 'pythia-19m': 18915328,
 'pythia-125m': 85056000,
 'pythia-350m': 302311424,
 'pythia-800m': 805736448,
 'pythia-1.3b': 1208602624,
 'pythia-2.7b': 2517652480,
 'pythia-6.7b': 6444163072,
 'pythia-13b': 11327027200

And here are the number of parameters if I include one, but not both, of the embed/unembed matrices in the calculation:

 'pythia-19m': 44670976,
 'pythia-125m': 123689472,
 'pythia-350m': 353822720,
 'pythia-800m': 908759040,
 'pythia-1.3b': 1311625216,
 'pythia-2.7b': 2646430720,
 'pythia-6.7b': 6650732544,
 'pythia-13b': 11586549760

I guess for some of the models, one embed matrix was included in the parameter count, but others exclude both the embed and unembed matrices from the count? The 13b name seems like an overestimate whichever way you count it.
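
For reference, one way to reproduce the two kinds of counts above; which matrices to exclude is exactly the ambiguity being asked about, and the model chosen here is just an example:

# Hedged sketch: parameter counts with and without the embedding / unembedding
# matrices (attribute names as they appear in the error messages elsewhere in
# this issue tracker: gpt_neox.embed_in and embed_out).
from transformers import GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-70m-deduped")

total = sum(p.numel() for p in model.parameters())
embed_in = model.gpt_neox.embed_in.weight.numel()
embed_out = model.embed_out.weight.numel()

print("total:", total)
print("excluding unembed only:", total - embed_out)
print("excluding embed and unembed:", total - embed_in - embed_out)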

Weights of "step0" and "step1" checkpoints are identical for all pythia models

Dear EleutherAI team,

I've noticed that the weights associated with the recently added "step0" and "step1" checkpoints are identical for all pythia models:

import sys
import torch
from transformers import GPTNeoXForCausalLM

def main():
    print(f"========== {sys.argv[1]} ==========")
    model_step0 = GPTNeoXForCausalLM.from_pretrained(sys.argv[1], revision="step0", cache_dir="./test")
    model_step1 = GPTNeoXForCausalLM.from_pretrained(sys.argv[1], revision="step1", cache_dir="./test")

    for (name0, param0), (name1, param1) in zip(model_step0.named_parameters(), model_step1.named_parameters()):
        print(name0, name1, name0 == name1, torch.all(param0 == param1))

if __name__ == "__main__":
    main()

This yields something like the following for all eight pythia models:

========== EleutherAI/pythia-70m ==========
gpt_neox.embed_in.weight gpt_neox.embed_in.weight True tensor(True)
gpt_neox.layers.0.input_layernorm.weight gpt_neox.layers.0.input_layernorm.weight True tensor(True)
...
gpt_neox.final_layer_norm.weight gpt_neox.final_layer_norm.weight True tensor(True)
gpt_neox.final_layer_norm.bias gpt_neox.final_layer_norm.bias True tensor(True)
embed_out.weight embed_out.weight True tensor(True)

Would it be possible for you to clarify whether these identical weights correspond to those from "step0" or "step1?" I've noticed that the conditional probabilities calculated using these weights aren't perfectly uniform, which leads me to believe these are actually weights from "step1."

Thanks!
Byung-Doh

Ordering of deduplicated datasets?

I just wanted to confirm that the 3 versions you have of the deduplicated data have the same data ordering?

I was hoping to use the jsonl one but wanted to ensure it will accurately replicate the data ordering in your tokenized dataset that you used for training.
