Comments (26)
1. Convert llama-2 from HuggingFace to Megatron-LM:
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>
2. Convert llama-2 from Megatron-LM to HuggingFace:
Step 1. Download this Python script and save it as Megatron-LM/tools/checkpoint/saver_llama2_hf.py
Step 2. Do the conversion
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>
But before converting LLaMA-2 from MGT to HF, you need to ensure that the following parameters in MGT are set to the same default values as in HF during your training process:
- Set --norm-epsilon=1e-6,
- Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
- Neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
- Enable --disable-bias-linear.
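A quick way to sanity-check the HF -> Megatron -> HF round trip is to load the original checkpoint and the converted-back one and compare their logits on the same input. A minimal sketch (the directory names are just the placeholders from above):

# Hypothetical sanity check: compare the original HF llama-2 checkpoint against
# the one that went HF -> Megatron -> HF. Directory names are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

orig_dir = "<HF_MODEL_DIR>"   # original HuggingFace checkpoint
conv_dir = "<SAVE_DIR>"       # output of the Megatron -> HF conversion

tokenizer = AutoTokenizer.from_pretrained(orig_dir)
orig = AutoModelForCausalLM.from_pretrained(orig_dir, torch_dtype=torch.float32).eval()
conv = AutoModelForCausalLM.from_pretrained(conv_dir, torch_dtype=torch.float32).eval()

inputs = tokenizer("Hello, world", return_tensors="pt")
with torch.no_grad():
    max_diff = (orig(**inputs).logits - conv(**inputs).logits).abs().max()
print(f"max logit difference: {max_diff.item()}")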
Was interested in the same questions, @usuyama. See excerpt from Megatron paper. Does look like Megatron<->HF will require some updates on HF side.
To convert a Megatron GPT2 model to HF (HuggingFace Transformers) GPT2, I performed a layer-level parameter conversion and verified it, but the conversion did not work properly.
The following is the core concept of the transformation.
Megatron GPT2 transformer layer and shape
layers.0.input_layernorm.weight, shape: torch.Size([1920])
layers.0.input_layernorm.bias, shape: torch.Size([1920])
layers.0.attention.query_key_value.weight, shape: torch.Size([5760, 1920]) # need transpose
layers.0.attention.query_key_value.bias, shape: torch.Size([5760])
layers.0.attention.dense.weight, shape: torch.Size([1920, 1920])
layers.0.attention.dense.bias, shape: torch.Size([1920])
layers.0.post_attention_layernorm.weight, shape: torch.Size([1920])
layers.0.post_attention_layernorm.bias, shape: torch.Size([1920])
layers.0.mlp.dense_h_to_4h.weight, shape: torch.Size([7680, 1920]) # need transpose
layers.0.mlp.dense_h_to_4h.bias, shape: torch.Size([7680])
layers.0.mlp.dense_4h_to_h.weight, shape: torch.Size([1920, 7680]) # need transpose
layers.0.mlp.dense_4h_to_h.bias, shape: torch.Size([1920])
HF GPT2 transformer layer and shape
transformer.h.0.ln_1.weight, shape: torch.Size([1920])
transformer.h.0.ln_1.bias, shape: torch.Size([1920])
transformer.h.0.attn.bias, shape: torch.Size([1, 1, 1920, 1920])
transformer.h.0.attn.masked_bias, shape: torch.Size([])
transformer.h.0.attn.c_attn.weight, shape: torch.Size([1920, 5760])
transformer.h.0.attn.c_attn.bias, shape: torch.Size([5760])
transformer.h.0.attn.c_proj.weight, shape: torch.Size([1920, 1920])
transformer.h.0.attn.c_proj.bias, shape: torch.Size([1920])
transformer.h.0.ln_2.weight, shape: torch.Size([1920])
transformer.h.0.ln_2.bias, shape: torch.Size([1920])
transformer.h.0.mlp.c_fc.weight, shape: torch.Size([1920, 7680])
transformer.h.0.mlp.c_fc.bias, shape: torch.Size([7680])
transformer.h.0.mlp.c_proj.weight, shape: torch.Size([7680, 1920])
transformer.h.0.mlp.c_proj.bias, shape: torch.Size([1920])
Since attn.bias and masked_bias are the same as the values implemented in Megatron GPT2, they were ignored during conversion and all other parameters were converted, but the outputs generated by HF GPT2 differed from those of Megatron GPT2.
I guess HF GPT2 and Megatron GPT2 differ somewhere in their layer-level implementation. If you have any ideas on this, please let me know.
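For reference, the shape listings above imply a layer-level mapping roughly like the sketch below. This is not a verified script: the transposes follow from HF GPT2 storing Conv1D weights as [in, out], and whether query_key_value also needs a per-head q/k/v reordering is an assumption about the checkpoint layout that is only flagged here.

# Rough sketch of a per-layer Megatron GPT2 -> HF GPT2 state_dict mapping.
# Megatron Linear weights are [out, in]; HF GPT2 Conv1D weights are [in, out],
# hence the transposes. A per-head q/k/v reordering of query_key_value may be
# needed depending on the checkpoint version and is not handled here.
def convert_layer(megatron_sd, hf_sd, i):
    m = f"layers.{i}."
    h = f"transformer.h.{i}."
    hf_sd[h + "ln_1.weight"] = megatron_sd[m + "input_layernorm.weight"]
    hf_sd[h + "ln_1.bias"] = megatron_sd[m + "input_layernorm.bias"]
    hf_sd[h + "attn.c_attn.weight"] = megatron_sd[m + "attention.query_key_value.weight"].t()
    hf_sd[h + "attn.c_attn.bias"] = megatron_sd[m + "attention.query_key_value.bias"]
    hf_sd[h + "attn.c_proj.weight"] = megatron_sd[m + "attention.dense.weight"].t()
    hf_sd[h + "attn.c_proj.bias"] = megatron_sd[m + "attention.dense.bias"]
    hf_sd[h + "ln_2.weight"] = megatron_sd[m + "post_attention_layernorm.weight"]
    hf_sd[h + "ln_2.bias"] = megatron_sd[m + "post_attention_layernorm.bias"]
    hf_sd[h + "mlp.c_fc.weight"] = megatron_sd[m + "mlp.dense_h_to_4h.weight"].t()
    hf_sd[h + "mlp.c_fc.bias"] = megatron_sd[m + "mlp.dense_h_to_4h.bias"]
    hf_sd[h + "mlp.c_proj.weight"] = megatron_sd[m + "mlp.dense_4h_to_h.weight"].t()
    hf_sd[h + "mlp.c_proj.bias"] = megatron_sd[m + "mlp.dense_4h_to_h.bias"]
    # attn.bias / attn.masked_bias are constant causal-mask buffers created in the
    # HF module's __init__, so they can be left out of the converted state_dict.

If the outputs still differ after a mapping like this, the q/k/v interleaving inside query_key_value is the first thing worth checking.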
Have not tried it but this exists: https://github.com/huggingface/transformers/tree/main/src/transformers/models/megatron_gpt2
As @vdabravolski pointed out, Megatron rearranged LayerNorm and residual connection in the transformer block. Maybe that's one difference you observed, @haven-jeon ?
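Schematically, the two orderings differ roughly as follows (a simplified pre-LN vs. post-LN sketch, not the exact forward code of either library):

# Megatron-style block ("pre-LN"): LayerNorm is applied to each sublayer's input,
# and the residual adds the un-normalized input.
def megatron_style_block(x, ln1, attn, ln2, mlp):
    x = x + attn(ln1(x))
    x = x + mlp(ln2(x))
    return x

# Original-BERT-style block ("post-LN", as in HF BertLayer): LayerNorm is applied
# after the residual addition.
def hf_bert_style_block(x, attn, ln1, mlp, ln2):
    x = ln1(x + attn(x))
    x = ln2(x + mlp(x))
    return x

Same weights, different placement, so copying parameters one-to-one will not reproduce the same outputs.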
Hmm, it seems not so straightforward to convert to the HuggingFace format.
At least, I think the LayerNorm locations don't match.
Megatron-LM model structure:
BertModel(
(language_model): TransformerLanguageModel(
(embedding): Embedding(
(word_embeddings): VocabParallelEmbedding()
(position_embeddings): Embedding(512, 768)
(tokentype_embeddings): Embedding(2, 768)
(embedding_dropout): Dropout(p=0.1, inplace=False)
)
(transformer): ParallelTransformer(
(layers): ModuleList(
(0): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(1): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(2): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(3): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(4): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(5): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(6): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(7): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(8): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(9): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(10): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(11): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
(final_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
)
(pooler): Pooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
)
)
(lm_head): BertLMHead(
(dense): Linear(in_features=768, out_features=768, bias=True)
(layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
)
(binary_head): Linear(in_features=768, out_features=2, bias=True)
)
for reference, huggingface BertModel
BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(1): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(2): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(3): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(4): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(5): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(6): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(7): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(8): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(9): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(10): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(11): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
Any thoughts/advice? @jaredcasper @PyxAI @harkous @raulpuric
Any updates?
Thanks, @vdabravolski
Need to check the forward function for details, but the order of weights looks different as you pointed out.
Megatron-LM
(11): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
HuggingFace
(11): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
I have the same question. Any new update?
Curious about this too – I have a GPT2 model trained with Megatron and would love to get it imported into HF.
@usuyama, thanks for reminding me.
I thought that part of the paper only applied to BERT, but looking at the Megatron-LM code, it seems the code is shared with GPT2.
Megatron-LM/megatron/model/transformer.py
Line 445 in 1b3dfa2
Any news on this issue?
Any news on this?
Marking as stale. No activity in 60 days. Remove stale label or comment or this will be closed in 7 days.
Marking as stale. No activity in 60 days.
Hi, are there any updates? I'm mostly interested in converting GPT-2/Bloom checkpoints.
Convert llama-2 from HuggingFace to Megatron-LM:
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL>
Convert llama-2 from Megatron-LM to HuggingFace:
Step 1. Download this file to Megatron-LM/tools/checkpoint/saver_llama2_hf.py
Step 2. Do the conversion
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>
Step 3. Test
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("<SAVE_DIR>")
Works perfectly for me. I just changed --loader=llama2_hf to --loader=megatron since we want to convert a Megatron checkpoint to HF.
Hi, are there any updates? I'm mostly interested in converting GPT-2/Bloom checkpoints.
They have a script for converting GPT-2 in HF's repo, under transformers/models/megatron_gpt2:
https://huggingface.co/docs/transformers/model_doc/megatron_gpt2
Otherwise it should be somewhere in Megatron's repo.
Marking as stale. No activity in 60 days.
Could you provide guidance on how to consolidate the weights of a module—specifically, ParallelMLP and Parallel Attention—into a PyTorch-compatible format? I am utilizing a tensor-parallel size greater than 1, which results in the module's parameters being distributed across different ranks. How can I aggregate these to obtain the complete set of model weights?
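For context, that consolidation usually amounts to loading the per-rank shards and concatenating each partitioned tensor along its split axis: ColumnParallelLinear weights (query_key_value, dense_h_to_4h) are split along the output dimension, RowParallelLinear weights (attention dense, dense_4h_to_h) along the input dimension, and everything else is replicated. A rough sketch under those assumptions, using the layer names shown earlier in this thread (how you obtain a flat per-rank state_dict depends on your Megatron version and checkpoint layout):

import torch

def merge_tp_shards(shards):
    # shards: one flat {parameter_name: tensor} dict per tensor-parallel rank,
    # in rank order. Key names follow the ParallelTransformerLayer naming above.
    column_parallel = (  # split along dim 0 (output features / vocab)
        "attention.query_key_value.weight", "attention.query_key_value.bias",
        "mlp.dense_h_to_4h.weight", "mlp.dense_h_to_4h.bias",
        "word_embeddings.weight",
    )
    row_parallel = (  # split along dim 1 (input features)
        "attention.dense.weight", "mlp.dense_4h_to_h.weight",
    )
    merged = {}
    for name in shards[0]:
        parts = [shard[name] for shard in shards]
        if name.endswith(column_parallel):
            merged[name] = torch.cat(parts, dim=0)
        elif name.endswith(row_parallel):
            merged[name] = torch.cat(parts, dim=1)
        else:
            # layernorms, row-parallel biases, position embeddings, ... are replicated
            merged[name] = parts[0]
    return merged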
1. Convert llama-2 from HuggingFace to Megatron-LM:
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>
2. Convert llama-2 from Megatron-LM to HuggingFace:
Step 1. Download this Python script and save it as Megatron-LM/tools/checkpoint/saver_llama2_hf.py
Step 2. Do the conversion
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>
But before converting LLaMA-2 from MGT to HF, you need to ensure that the following parameters in MGT are set to the same default values as in HF during your training process:
- Set --norm-epsilon=1e-6,
- Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
- Neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
- Enable --disable-bias-linear.
Does this conversion script support GQA?
Marking as stale. No activity in 60 days.
I found a script in transformers
https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py
Has anyone tried this before? It seems to convert a GPT-2 checkpoint from Megatron format to HuggingFace format.
1. Convert llama-2 from HuggingFace to Megatron-LM:
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>
2. Convert llama-2 from Megatron-LM to HuggingFace:
Step 1. Download this Python script and save it as
Megatron-LM/tools/checkpoint/saver_llama2_hf.py
Step 2. Do the conversion
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>
But before converting LLaMA-2 from MGT to HF, you need to ensure that the following parameters in MGT are set to the same default values as in HF during your training process:
- Set --norm-epsilon=1e-6,
- Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
- Neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
- Enable --disable-bias-linear.
Does this support GQA?