Comments (5)
As mentioned in issue #10, I suspect this is related to the fact that the current transformers-neuronx optimized graphs only support the `gelu_new` activation function used in GPT2, whereas the Pythia base models from EleutherAI use `gelu`. Can you confirm?
If I am correct, I can file a more detailed issue listing the GELU flavors that need to be supported to run the most popular models from the Hugging Face hub.
from transformers-neuronx.
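For reference, the two flavors differ only in how the Gaussian CDF is computed: `gelu` uses the exact erf form, while `gelu_new` is GPT-2's tanh approximation. A minimal sketch in plain Python (function names chosen here for illustration, not taken from any library):

```python
import math

def gelu(x):
    # Exact GELU (hidden_act="gelu", used by the Pythia checkpoints):
    # x * Phi(x), where Phi is the standard normal CDF.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_new(x):
    # GPT-2's tanh approximation (hidden_act="gelu_new").
    return 0.5 * x * (1.0 + math.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"x={x:+.1f}  gelu={gelu(x):+.6f}  gelu_new={gelu_new(x):+.6f}")
```

The two curves agree to roughly three decimal places, but a compiled graph hard-wired to one variant will not automatically accept a config requesting the other.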
Hi @dacorvo , thanks for the note. I have confirmed the fix for your issue and it will be available in an upcoming release.
This does not seem related to the GELU flavor. I switched to a locally generated wheel from the mainline branch instead of the 0.4.60 version (which I think comes from the r0.4 branch).
I cannot compile the model in AMP `f32` on that branch, but if I use `f16` instead (which is the actual precision of the HF transformers model), the compilation works and the outputs are correct.
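As context on the `f32`/`f16` distinction above: IEEE binary16 carries only about 11 bits of mantissa, so a float32 value round-tripped through float16 loses precision, which is why matching the AMP setting to the checkpoint's native dtype matters. A stdlib-only sketch of that round-trip error (the random sampling and tolerance here are illustrative choices, not from the issue):

```python
import random
import struct

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(1000)]

def to_f16(v):
    # Round-trip a Python float through IEEE 754 binary16
    # using the struct module's half-precision 'e' format.
    return struct.unpack("<e", struct.pack("<e", v))[0]

max_err = max(abs(v - to_f16(v)) for v in xs)
print(max_err)  # small but nonzero: float16 keeps ~3 significant decimal digits
```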
Can you confirm this is fixed with the latest release?
Hi @dacorvo ,
I have confirmed that GPT-NeoX Pythia is now working with release 2.12:
(aws_neuron_venv_pytorch) ubuntu@ip-10-0-10-149:~$ gptneox_demo --model_name EleutherAI/pythia-1.4B save ./pythia-1.4B; gptneox_demo --model_name EleutherAI/pythia-1.4B run --batch_size 1 --n_positions 20 ./pythia-1.4B
running GPTNeoXForSampling.from_pretrained
/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers_neuronx/gptneox/model.py:40: UserWarning: hidden_act="gelu" ignored in favor of hidden_act="gelu_new"
warnings.warn(f'hidden_act="{self.config.activation_function}" ignored in favor of hidden_act="gelu_new"')
running model.to_neuron
...
Compiler status PASS
running model.sample
generated_sequence= tensor([[12092, 13, 309, 1353, 247, 3448, 1566, 13, 309, 971,
368, 281, 1071, 479, 342, 247, 1071, 20689, 15, 309]])
["Hello, I'm a language model, I want you to test me with a test corpus. I"]
Packages:
(aws_neuron_venv_pytorch) ubuntu@ip-10-0-10-149:~$ pip list | grep neuron
aws-neuronx-runtime-discovery 2.9
libneuronxla 0.5.391
neuronx-cc 2.8.0.25+a3ad0f342
neuronx-distributed 0.1.0
neuronx-hwm 2.8.0.3+2b7c6da39
torch-neuronx 1.13.1.1.9.0
torch-xla 1.13.1+torchneuron8
transformers-neuronx 0.5.58