Comments (5)
dacorvo - Thanks for posting this ticket. We are investigating the issue and believe we have identified a fix. We are testing and will update this ticket with more info.
@aws-donkrets I had time to come back to this issue, and I suspect it is related to the fact that the current transformers-neuronx optimized graphs only support the gelu_new activation function used in GPT2, whereas the GPT-NeoX base models from EleutherAI use gelu_fast. Can you confirm?
If I am correct, I can create a more detailed issue listing the GeLU flavors that need to be supported to be able to run the most popular models from the Hugging Face hub.
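For reference, the mismatch can be seen directly in the Hugging Face configs. This is a quick sketch (not part of the original report); the field names come from the public GPT-2 and GPT-NeoX configs:

```python
# Hypothetical check: inspect the activation function declared by each model's
# Hugging Face config to see which GeLU flavor transformers-neuronx would need.
from transformers import AutoConfig

for model_id in ("gpt2", "EleutherAI/gpt-neox-20b"):
    config = AutoConfig.from_pretrained(model_id)
    # GPT-2 exposes the field as `activation_function`, GPT-NeoX as `hidden_act`.
    act = getattr(config, "activation_function", None) or getattr(config, "hidden_act", None)
    print(f"{model_id}: {act}")

# Expected output (at the time of writing):
#   gpt2: gelu_new
#   EleutherAI/gpt-neox-20b: gelu_fast
```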
This is a duplicate of #12. We will have the fix in an upcoming release.
Can you confirm this is fixed with the latest release?
Hi @dacorvo,
I confirmed that the GPT-NeoX demo is working with release 2.12:
gptneox_demo --amp f16 save gpt-neox-20b; gptneox_demo --amp f16 run --batch_size 1 --tp_degree 4 gpt-neox-20b
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [05:12<00:00, 6.80s/it]
running GPTNeoXForSampling.from_pretrained
/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers_neuronx/gptneox/model.py:40: UserWarning: hidden_act="gelu_fast" ignored in favor of hidden_act="gelu_new"
warnings.warn(f'hidden_act="{self.config.activation_function}" ignored in favor of hidden_act="gelu_new"')
running model.to_neuron
.......
Compiler status PASS
running model.sample
generated_sequence= tensor([[12092, 13, 309, 1353, 247, 3448, 1566, 13, 513, 368,
2564, 604, 309, 1361, 368, 247, 1652, 2372, 865, 187,
187, 2773, 434, 835, 776, 747, 3210, 1705, 275, 15,
1583, 1472, 253, 26101, 4302, 432, 8217, 285, 5559, 326,
1581, 441, 281, 513, 326, 15, 831, 2934, 273, 5145,
4715, 285, 849, 352, 588, 1361, 12823, 1805, 2096, 441,
310, 271, 12302, 581, 15, 733, 434, 271, 2170, 326,
434, 644, 275, 2440, 323, 1142, 1107, 15, 733, 434,
760, 4102, 326, 12823, 452, 4925, 247, 1127, 835, 597,
476, 513, 1633, 4217, 342, 253, 941, 597, 452, 15,
844, 1849, 760, 644, 2104, 281, 513, 5145, 10234, 342,
247, 1943, 4382, 13, 247, 2221, 32948, 13, 323, 247,
1643, 8007, 15, 733, 2335, 3240, 36521, 1078]])
['Hello, I\'m a language model, do you mind if I help you a little bit?"\n\nThat\'s where our new models come in. They\'re the newest technology from Apple and Google that allow us to do that. This idea of machine learning and how it will help computers better understand us is an exciting one. It\'s an area that\'s been in development for many years. It\'s only recently that computers have reached a point where they can do something useful with the data they have. We\'ve only been able to do machine translation with a big computer, a supercomputer, for a few decades. It took quite awhile before']
Packages:
(aws_neuron_venv_pytorch) ubuntu@ip-10-0-10-149:~$ pip list | grep neuron
aws-neuronx-runtime-discovery 2.9
libneuronxla 0.5.391
neuronx-cc 2.8.0.25+a3ad0f342
neuronx-distributed 0.1.0
neuronx-hwm 2.8.0.3+2b7c6da39
torch-neuronx 1.13.1.1.9.0
torch-xla 1.13.1+torchneuron8
transformers-neuronx 0.5.58
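For anyone who prefers the Python API over the gptneox_demo CLI, the same flow appears in the log above (GPTNeoXForSampling.from_pretrained, model.to_neuron, model.sample). Below is a rough sketch of that flow; the keyword arguments (batch_size, tp_degree, amp, sequence_length) are assumed from the demo flags and may differ slightly between releases:

```python
# Sketch of the run shown in the log, via the Python API rather than the CLI.
# Assumes the gpt-neox-20b checkpoint has already been saved/split locally
# (the `gptneox_demo ... save` step above).
import torch
from transformers import AutoTokenizer
from transformers_neuronx.gptneox.model import GPTNeoXForSampling

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

# Load the checkpoint and compile it for the Neuron cores (tp_degree=4, f16).
model = GPTNeoXForSampling.from_pretrained(
    "gpt-neox-20b", batch_size=1, tp_degree=4, amp="f16"
)
model.to_neuron()

# Sample a continuation and decode the token ids back to text.
input_ids = tokenizer("Hello, I'm a language model,", return_tensors="pt").input_ids
with torch.inference_mode():
    generated = model.sample(input_ids, sequence_length=128)
print(tokenizer.decode(generated[0]))
```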