Comments (4)
Example:
One of my fondest memory is of a weekend in late 1999 when I had come home from a conference. While walking through the house on my wife was surprised me my wife was checking on the fridge was sitting on my eyes were talking to turn into my son, the house) weavinga 7w! I I think of thatβ0.
The
One of the end-
One One way back then 24 33.
One on the whole h d b a day I I I I I I I feel fall trees snowfallfriendfriendfriendss.
from transformers-neuronx.
Hello @dacorvo, the LLaMA model is currently in the "prototype" stage, so this behavior is not surprising.
Prototype (Alpha): An initial in-development version of a model that should be considered a preview of future functionality. A prototype may not be fully functional. A prototype model is not expected to perform well and may also have known accuracy issues. Prototype models may not maintain compatibility across versions.
We'll continue to improve the correctness and performance of these models in future releases. If you are able to provide example prompts and model artifacts (https://github.com/aws-neuron/transformers-neuronx#troubleshooting), that will help us debug and reproduce the issue.
As a follow-up, could you also let us know which package versions you're using? We've recently released updates to transformers-neuronx with improvements, and the newest version may help.
@dacorvo, could you please check your installed version and try the newly released one?
Many thanks!
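When filing the follow-up, a quick way to gather the installed versions is Python's stdlib `importlib.metadata` (a sketch; the package names in the example list are assumptions, adjust to your environment):

```python
# Hedged sketch: collect installed versions of packages relevant to a
# transformers-neuronx bug report. importlib.metadata is stdlib (Python 3.8+);
# PackageNotFoundError is raised for packages that aren't installed.
from importlib.metadata import version, PackageNotFoundError


def report_versions(packages):
    """Return a {package: version-or-'not installed'} mapping."""
    out = {}
    for name in packages:
        try:
            out[name] = version(name)
        except PackageNotFoundError:
            out[name] = "not installed"
    return out


if __name__ == "__main__":
    # Example package names — assumptions, not an exhaustive list.
    for pkg, ver in report_versions(
        ["transformers-neuronx", "torch-neuronx", "transformers"]
    ).items():
        print(f"{pkg}: {ver}")
```

Pasting that output into the issue makes it easy for maintainers to see whether the latest release is already in use.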
Related Issues (20)
- How to use generate() with inputs_embeds
- Mixtral config issue -- not handling null well
- Generate Llama 2 from Embeddings
- Infering logits from `model.forward` for the entire batch instead of the last forward's output.
- Support for MPT model
- `stopping_criteria_list(input_ids, probs)` does not check for the correct sequence.
- User feedback when compiling and reloading a large model
- Issue while compiling Mistral 7B 0.2 Instruct
- Backward compatibility with saved llama 2 compiled artifacts
- NaN outputs when masking llama model inputs
- Improve Neuron model loading time
- Add support for `gemma` models
- Add support for Baichuan-13B model
- Latest changes introduced for continuous batching break Mixtral model
- llava support
- Any plan to support Qwen-2 Model
- Neuron model NEFFs are dependent on the python path
- Not able to load llama 3 70b on inf2.24xlarge instance
- Gibberish output for princeton-nlp/Sheared-LLaMA-1.3B with continuous batching
- [Question] BasicTransformerBlock