Comments (26)

devymex commented on July 21, 2024

1. Convert llama-2 from HuggingFace to Megatron-LM:

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>

2. Convert llama-2 from Megatron-LM to HuggingFace:

Step 1. Download this Python script and save it as Megatron-LM/tools/checkpoint/saver_llama2_hf.py

Step 2. Do the conversion

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>

But before converting LLaMA-2 from Megatron-LM (MGT) to HF, you need to ensure that the following parameters in MGT were set to the same default values as in HF during your training process:

  1. Set --norm-epsilon=1e-6,
  2. Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
  3. Neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
  4. Enable --disable-bias-linear.
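
A quick way to sanity-check two of these against the HF side (a minimal sketch; <HF_MODEL_DIR> is a placeholder, and attention_bias only exists in newer transformers versions, hence the getattr):

from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("<HF_MODEL_DIR>")
assert cfg.rms_norm_eps == 1e-6                    # corresponds to --norm-epsilon=1e-6
assert not getattr(cfg, "attention_bias", False)   # corresponds to --disable-bias-linear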

vdabravolski commented on July 21, 2024

I was interested in the same questions, @usuyama. See the excerpt from the Megatron paper. It does look like Megatron<->HF conversion will require some updates on the HF side.
[Figure: excerpt from the Megatron paper showing the rearranged LayerNorm and residual connection in the transformer block.]

haven-jeon commented on July 21, 2024

In order to convert the Megatron GPT2 model to HF (Hugging Face Transformers) GPT2, I performed a layer-level parameter conversion and verified it, but the conversion did not work properly.

The following is the core concept of the transformation.

Megatron GPT2 transformer layer and shape

layers.0.input_layernorm.weight, shape: torch.Size([1920])
layers.0.input_layernorm.bias, shape: torch.Size([1920])
layers.0.attention.query_key_value.weight, shape: torch.Size([5760, 1920])  # need transpose
layers.0.attention.query_key_value.bias, shape: torch.Size([5760])
layers.0.attention.dense.weight, shape: torch.Size([1920, 1920])
layers.0.attention.dense.bias, shape: torch.Size([1920])
layers.0.post_attention_layernorm.weight, shape: torch.Size([1920])
layers.0.post_attention_layernorm.bias, shape: torch.Size([1920])
layers.0.mlp.dense_h_to_4h.weight, shape: torch.Size([7680, 1920])   # need transpose
layers.0.mlp.dense_h_to_4h.bias, shape: torch.Size([7680])
layers.0.mlp.dense_4h_to_h.weight, shape: torch.Size([1920, 7680])  # need transpose
layers.0.mlp.dense_4h_to_h.bias, shape: torch.Size([1920])

HF GPT2 transformer layer and shape

transformer.h.0.ln_1.weight, shape: torch.Size([1920])
transformer.h.0.ln_1.bias, shape: torch.Size([1920])
transformer.h.0.attn.bias, shape: torch.Size([1, 1, 1920, 1920])
transformer.h.0.attn.masked_bias, shape: torch.Size([])
transformer.h.0.attn.c_attn.weight, shape: torch.Size([1920, 5760])
transformer.h.0.attn.c_attn.bias, shape: torch.Size([5760])
transformer.h.0.attn.c_proj.weight, shape: torch.Size([1920, 1920])
transformer.h.0.attn.c_proj.bias, shape: torch.Size([1920])
transformer.h.0.ln_2.weight, shape: torch.Size([1920])
transformer.h.0.ln_2.bias, shape: torch.Size([1920])
transformer.h.0.mlp.c_fc.weight, shape: torch.Size([1920, 7680])
transformer.h.0.mlp.c_fc.bias, shape: torch.Size([7680])
transformer.h.0.mlp.c_proj.weight, shape: torch.Size([7680, 1920])
transformer.h.0.mlp.c_proj.bias, shape: torch.Size([1920])

In the case of attn.bias and masked_bias, the values were the same as those implemented in Megatron GPT2, so they were ignored during conversion. All other parameters were converted, but the outputs generated by HF GPT2 differed from those of Megatron GPT2.

I guess HF GPT2 and Megatron GPT2 have some layer-level implementation differences. If you have any ideas on this part, please let me know.
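
The transpose-level mapping above can be sketched as follows (convert_layer is a hypothetical helper operating on the two state dicts). Note that a plain transpose may not be enough for the fused QKV tensor: HF's official Megatron-GPT2 converter additionally reorders the per-head [q, k, v] layout (see fix_query_key_value_ordering in convert_megatron_gpt2_checkpoint.py), and skipping that reordering can produce exactly this kind of silent output mismatch.

def convert_layer(mgt, hf, i):
    """Copy Megatron transformer layer i into an HF GPT2 state dict (sketch)."""
    m, h = f"layers.{i}", f"transformer.h.{i}"
    hf[f"{h}.ln_1.weight"] = mgt[f"{m}.input_layernorm.weight"]
    hf[f"{h}.ln_1.bias"] = mgt[f"{m}.input_layernorm.bias"]
    # HF GPT2 uses Conv1D ([in, out]); Megatron uses nn.Linear ([out, in]) -> transpose.
    hf[f"{h}.attn.c_attn.weight"] = mgt[f"{m}.attention.query_key_value.weight"].t()
    hf[f"{h}.attn.c_attn.bias"] = mgt[f"{m}.attention.query_key_value.bias"]
    hf[f"{h}.attn.c_proj.weight"] = mgt[f"{m}.attention.dense.weight"].t()
    hf[f"{h}.attn.c_proj.bias"] = mgt[f"{m}.attention.dense.bias"]
    hf[f"{h}.ln_2.weight"] = mgt[f"{m}.post_attention_layernorm.weight"]
    hf[f"{h}.ln_2.bias"] = mgt[f"{m}.post_attention_layernorm.bias"]
    hf[f"{h}.mlp.c_fc.weight"] = mgt[f"{m}.mlp.dense_h_to_4h.weight"].t()
    hf[f"{h}.mlp.c_fc.bias"] = mgt[f"{m}.mlp.dense_h_to_4h.bias"]
    hf[f"{h}.mlp.c_proj.weight"] = mgt[f"{m}.mlp.dense_4h_to_h.weight"].t()
    hf[f"{h}.mlp.c_proj.bias"] = mgt[f"{m}.mlp.dense_4h_to_h.bias"]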

chrisby commented on July 21, 2024

Have not tried it but this exists: https://github.com/huggingface/transformers/tree/main/src/transformers/models/megatron_gpt2

usuyama commented on July 21, 2024

As @vdabravolski pointed out, Megatron rearranged the LayerNorm and residual connection in the transformer block. Maybe that's one difference you observed, @haven-jeon?
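
Concretely, the two residual arrangements look like this (a schematic sketch, not the actual module code):

# Post-LN (HF BertLayer): add the residual first, then normalize.
def post_ln_block(x, sublayer, ln):
    return ln(x + sublayer(x))

# Pre-LN (Megatron ParallelTransformerLayer): normalize first, then add the residual.
def pre_ln_block(x, sublayer, ln):
    return x + sublayer(ln(x))

Weights copied one-to-one between the two arrangements will not compute the same function, so a pure state-dict conversion cannot reconcile this difference.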

usuyama commented on July 21, 2024

Hmm, it seems not so straightforward to convert to the HuggingFace format.
At the very least, I think the LayerNorm locations don't match.

Megatron-LM model structure:

BertModel(
  (language_model): TransformerLanguageModel(
    (embedding): Embedding(
      (word_embeddings): VocabParallelEmbedding()
      (position_embeddings): Embedding(512, 768)
      (tokentype_embeddings): Embedding(2, 768)
      (embedding_dropout): Dropout(p=0.1, inplace=False)
    )
    (transformer): ParallelTransformer(
      (layers): ModuleList(
        (0): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        ... (1)-(11): ParallelTransformerLayer, identical to layer (0) ...
      )
      (final_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
    )
    (pooler): Pooler(
      (dense): Linear(in_features=768, out_features=768, bias=True)
    )
  )
  (lm_head): BertLMHead(
    (dense): Linear(in_features=768, out_features=768, bias=True)
    (layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
  )
  (binary_head): Linear(in_features=768, out_features=2, bias=True)
)

usuyama commented on July 21, 2024

For reference, the HuggingFace BertModel:

BertModel(
  (embeddings): BertEmbeddings(
    (word_embeddings): Embedding(30522, 768, padding_idx=0)
    (position_embeddings): Embedding(512, 768)
    (token_type_embeddings): Embedding(2, 768)
    (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
    (dropout): Dropout(p=0.1, inplace=False)
  )
  (encoder): BertEncoder(
    (layer): ModuleList(
      (0): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      ... (1)-(11): BertLayer, identical to layer (0) ...
    )
  )
  (pooler): BertPooler(
    (dense): Linear(in_features=768, out_features=768, bias=True)
    (activation): Tanh()
  )
)

usuyama commented on July 21, 2024

Any thoughts/advice? @jaredcasper @PyxAI @harkous @raulpuric

Beomi commented on July 21, 2024

Any updates?

usuyama commented on July 21, 2024

Thanks, @vdabravolski

I still need to check the forward function for details, but the ordering of the weights looks different, as you pointed out.

Megatron-LM

        (11): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )

HuggingFace

      (11): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )

amirj commented on July 21, 2024

I have the same question. Any new update?

moyix commented on July 21, 2024

Curious about this too – I have a GPT2 model trained with Megatron and would love to get it imported into HF.

haven-jeon commented on July 21, 2024

@usuyama, thanks for reminding me.
I thought that part of the paper related only to BERT, but looking at the Megatron-LM code, it seems to be shared with GPT2.

if self.apply_residual_connection_post_layernorm:
This part looks different from HF Transformers. 🤔
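
For reference, the relevant branch in Megatron's ParallelTransformerLayer.forward is roughly the following (paraphrased; it may differ across versions):

if self.apply_residual_connection_post_layernorm:
    residual = layernorm_output   # residual taken after the LayerNorm
else:
    residual = hidden_states      # default: residual taken before the LayerNorm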

malteos commented on July 21, 2024

Any news on this issue?

Symbolk commented on July 21, 2024

Any news on this?

TheRootOf3 commented on July 21, 2024

Hi, are there any updates? I'm mostly interested in converting GPT-2/Bloom checkpoints.

CaesarWWK commented on July 21, 2024

Convert llama-2 from HuggingFace to Megatron-LM:

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL>

Convert a llama-2 Megatron-LM checkpoint back to HuggingFace:

Step 1. Download this file to Megatron-LM/tools/checkpoint/saver_llama2_hf.py

Step 2. Do the conversion

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=megatron --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>

Step 3. Test

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(<SAVE_DIR>)

This works perfectly for me. Note that the loader is --loader=megatron rather than --loader=llama2_hf, since here we convert a Megatron checkpoint to HF.
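
A slightly fuller smoke test (a sketch; <SAVE_DIR> is a placeholder and this assumes the tokenizer was saved alongside the model):

from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("<SAVE_DIR>")
model = AutoModelForCausalLM.from_pretrained("<SAVE_DIR>")
inputs = tok("Hello, my name is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))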

CaesarWWK commented on July 21, 2024

Hi, are there any updates? I'm mostly interested in converting GPT-2/Bloom checkpoints.

They have a script for converting GPT-2 in HF's repo, under transformers/models/megatron-gpt2:
https://huggingface.co/docs/transformers/model_doc/megatron_gpt2

Otherwise, it should be somewhere in Megatron's repo.

chenfengshijie commented on July 21, 2024

Could you provide guidance on how to consolidate the weights of a module—specifically, ParallelMLP and ParallelAttention—into a PyTorch-compatible format? I am using a tensor-parallel size greater than 1, which results in the module's parameters being distributed across different ranks. How can I aggregate these to obtain the complete set of model weights?
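
For reference, ColumnParallelLinear shards the output dimension and RowParallelLinear shards the input dimension, so merging is a concatenation along dim 0 and dim 1 respectively. A minimal sketch, assuming tensor-parallel size 2 and state dicts flattened to dotted parameter names (real checkpoints nest them under language_model/encoder sub-dicts, and exact file and key names vary by Megatron version):

import torch

shards = [
    torch.load(f"mp_rank_0{r}/model_optim_rng.pt", map_location="cpu")["model"]
    for r in range(2)
]

def merge(key, dim):
    # Concatenate one parameter across all tensor-parallel ranks.
    return torch.cat([s[key] for s in shards], dim=dim)

prefix = "language_model.transformer.layers.0"
# ColumnParallelLinear: output dim is sharded -> concat along dim 0.
qkv_w  = merge(f"{prefix}.attention.query_key_value.weight", dim=0)
fc_w   = merge(f"{prefix}.mlp.dense_h_to_4h.weight", dim=0)
# RowParallelLinear: input dim is sharded -> concat along dim 1.
proj_w = merge(f"{prefix}.attention.dense.weight", dim=1)
out_w  = merge(f"{prefix}.mlp.dense_4h_to_h.weight", dim=1)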

sudy-super commented on July 21, 2024

1. Convert llama-2 from HuggingFace to Megatron-LM:

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>

2. Convert llama-2 from Megatron-LM to HuggingFace:

Step 1. Download this Python script and save it as Megatron-LM/tools/checkpoint/saver_llama2_hf.py

Step 2. Do the conversion

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>

But before converting LLaMA-2 from MGT to HF, you need to ensure that the following parameters in MGT were set to the same default values as in HF during your training process:

  1. Set --norm-epsilon=1e-6,
  2. Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
  3. Neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
  4. Enable --disable-bias-linear.

Does this conversion script support GQA?

babu111 commented on July 21, 2024

I found a script in transformers:
https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py
Has anyone tried this before? It seems to convert a GPT-2 checkpoint from Megatron format to HuggingFace format.
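
For anyone trying it: per the HF docs, usage is roughly `python src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py path/to/checkpoint.zip`, run against a released Megatron GPT-2 checkpoint archive; the script produces an HF-format config.json and pytorch_model.bin.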

JiwenJ commented on July 21, 2024

1. Convert llama-2 from HuggingFace to Megatron-LM:

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>

2. Convert llama-2 from Megatron-LM to HuggingFace:

Step 1. Download this Python script and save it as Megatron-LM/tools/checkpoint/saver_llama2_hf.py

Step 2. Do the conversion

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>

But before converting LLaMA-2 from MGT to HF, you need to ensure that the following parameters in MGT were set to the same default values as in HF during your training process:

  1. Set --norm-epsilon=1e-6,
  2. Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
  3. Neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
  4. Enable --disable-bias-linear.

Does this support GQA?
