
Comments (5)

Tomorrowdawn commented on July 28, 2024

I tried commenting out the last line and received the same error again:

Traceback (most recent call last):
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/testbed.py", line 268, in <module>
    draft_model.initialize_cuda_graph(graph_capture_list)  
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Engine.py", line 189, in initialize_cuda_graph
    self.callables[decoding_seqlen] = capture_graph(
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Engine.py", line 141, in capture_graph
    static_logits = engine.model_run(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Engine.py", line 38, in model_run
    logits = self.model(input_ids=input_ids,
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_model.py", line 201, in forward
    outputs = self.model(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_model.py", line 59, in forward
    layer_outputs = decoder_layer(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_modules.py", line 334, in forward
    hidden_states = self.self_attn(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_i
mpl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_modules.py", line 118, in forward
    query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 207, in apply_rotary_pos_emb
    q_embed = (q * cos) + (rotate_half(q) * sin)
RuntimeError: The size of tensor a (12) must match the size of tensor b (384) at non-singleton dimension 1

After a thorough investigation of the source code, I found that inside the attention implementation the query, key, and value states are reshaped as follows:

query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)

I printed the tensors' shape:

query_states:  torch.Size([1, 12, 19, 64])
key_states:  torch.Size([1, 12, 19, 64])
cos:  torch.Size([384, 64])
sin:  torch.Size([384, 64])
position_ids:  torch.Size([1, 19])

apply_rotary_pos_emb multiplies the cosine tensor with the query, and those shapes clearly do not broadcast. I'm unsure about the original intent of the source code, so I'm unable to fix this on my own.
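
To make the mismatch concrete, here is a minimal reproduction of the broadcast failure with dummy tensors using the shapes printed above (with transformers 4.38, apply_rotary_pos_emb only unsqueezes cos/sin instead of indexing them with position_ids, so a full [max_position_embeddings, head_dim] table hits the query directly):

import torch

q = torch.randn(1, 12, 19, 64)   # [bsz, num_heads, q_len, head_dim]
cos = torch.randn(384, 64)       # full table: [max_position_embeddings, head_dim]

cos = cos.unsqueeze(1)           # 4.38-style: only unsqueeze, no cos[position_ids]
try:
    q * cos                      # dim 1: 12 vs 384 cannot broadcast
except RuntimeError as e:
    print(e)                     # "The size of tensor a (12) must match the size of tensor b (384) ..."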


dreaming-panda commented on July 28, 2024

In apply_rotary_pos_emb, position_ids is used to slice the cos and sin tensors so that they align with the query and key tensors:

cos = cos[position_ids].unsqueeze(unsqueeze_dim)
sin = sin[position_ids].unsqueeze(unsqueeze_dim)
q_embed = (q * cos) + (rotate_half(q) * sin)
k_embed = (k * cos) + (rotate_half(k) * sin)
return q_embed, k_embed

So I don't think a shape mismatch should be possible here. Can you go into apply_rotary_pos_emb and print the tensor shapes inside?
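
For reference, a small sketch of how that slicing lines the shapes up, using dummy tensors with the shapes reported above:

import torch

q = torch.randn(1, 12, 19, 64)          # [bsz, num_heads, q_len, head_dim]
cos = torch.randn(384, 64)              # full table: [max_position_embeddings, head_dim]
position_ids = torch.arange(19)[None]   # [1, q_len]

cos = cos[position_ids].unsqueeze(1)    # [1, 19, 64] -> [1, 1, 19, 64]
print((q * cos).shape)                  # torch.Size([1, 12, 19, 64]) after broadcasting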


Tomorrowdawn commented on July 28, 2024

In apply_rotary_pos_emb, position_ids is used to slice the cos and sin tensors so that they align with the query and key tensors:

cos = cos[position_ids].unsqueeze(unsqueeze_dim)
sin = sin[position_ids].unsqueeze(unsqueeze_dim)
q_embed = (q * cos) + (rotate_half(q) * sin)
k_embed = (k * cos) + (rotate_half(k) * sin)
return q_embed, k_embed

So I don't think a shape mismatch should be possible here. Can you go into apply_rotary_pos_emb and print the tensor shapes inside?

I checked apply_rotary_pos_emb, but mine looks a little different:


def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
    """Applies Rotary Position Embedding to the query and key tensors.

    Args:
        q (`torch.Tensor`): The query tensor.
        k (`torch.Tensor`): The key tensor.
        cos (`torch.Tensor`): The cosine part of the rotary embedding.
        sin (`torch.Tensor`): The sine part of the rotary embedding.
        position_ids (`torch.Tensor`, *optional*):
            Deprecated and unused.
        unsqueeze_dim (`int`, *optional*, defaults to 1):
            The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
            sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
            that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
            k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
            cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
            the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
    Returns:
        `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
    """
    cos = cos.unsqueeze(unsqueeze_dim)
    sin = sin.unsqueeze(unsqueeze_dim)
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed
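
For what it's worth, one possible local adaptation to this newer signature is to index the full cos/sin tables with position_ids at the call site before invoking the helper. This is only a sketch with dummy tensors, assuming a 4.38-style transformers is installed (it is not the fix used below, which was rolling back instead):

import torch
from transformers.models.llama.modeling_llama import apply_rotary_pos_emb

bsz, n_heads, q_len, head_dim = 1, 12, 19, 64
q = torch.randn(bsz, n_heads, q_len, head_dim)
k = torch.randn(bsz, n_heads, q_len, head_dim)
cos_table = torch.randn(384, head_dim)     # full [max_position_embeddings, head_dim] tables
sin_table = torch.randn(384, head_dim)
position_ids = torch.arange(q_len)[None]   # [1, q_len]

cos = cos_table[position_ids]              # [1, q_len, head_dim]
sin = sin_table[position_ids]
q_rot, k_rot = apply_rotary_pos_emb(q, k, cos, sin, unsqueeze_dim=1)
print(q_rot.shape)                         # torch.Size([1, 12, 19, 64])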

I've discovered that this is a compatibility issue. I rolled back from transformers==4.38 to transformers==4.36, and that problem has disappeared, but now the error from issue #1 occurs:

File "/data0/xiac/RLHF/Prelim/Sequoia/tests/testbed.py", line 297, in <module>
    simulation_fast(target_model=target_model, draft_model=draft_model, dataloader=dataloader, T=args.T, to
p_p=args.P,
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/testbed.py", line 69, in simulation_fast
    spectree = SpecTree(prefix=input_ids.squeeze(0), device='cuda:0', temperature=T,
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Tree/SpecTree.py", line 68, in __init__
    draft_model_outputs = self.draft_model_engine.inference(input_ids=self.tokens[:self.num_nodes].unsqueeze(0),
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Engine.py", line 244, in inference
    return self.engine.model_run(input_ids=input_ids, storage_ids=storage_ids,
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in 
decorate_context
    return func(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Engine.py", line 40, in model_run
    logits = self.model(input_ids=input_ids,
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in
 _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in
 _call_implreturn forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_model.py", line 201, in forward
    outputs = self.model(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_model.py", line 59, in forward
    layer_outputs = decoder_layer(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_modules.py", line 339, in forward
    hidden_states = self.self_attn(
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/xiac/.conda/envs/rlhf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data0/xiac/RLHF/Prelim/Sequoia/tests/../Engine/Llama_modules.py", line 132, in forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(
RuntimeError: p.attn_bias_ptr is not correctly aligned


dreaming-panda commented on July 28, 2024

Oh, you need to install torch 2.1.2. Currently only that torch version (and maybe 2.1.1) is compatible. I will deal with this later, but for now you can switch to torch 2.1.2.
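
For anyone following along, a quick environment sanity check; torch 2.1.2 and transformers 4.36 are the combination reported to work in this thread, and the exact pins are an assumption:

import torch
import transformers

# Expect torch 2.1.x (ideally 2.1.2) and transformers 4.36.x per this thread.
print(torch.__version__, transformers.__version__)
assert torch.__version__.startswith("2.1")
assert transformers.__version__.startswith("4.36")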


Tomorrowdawn commented on July 28, 2024

Thank you for your response. After reconfiguring the environment, it indeed runs smoothly now.

