Comments (5)
https://github.com/qwopqwop200/AutoAWQ-exllama
I succeeded in running exllama in AutoAWQ. Additionally, some minor changes to the exllama kernel were required.
Performance on opt-125m:
AWQ kernel

| Task | Version | Metric | Value | Stderr |
|---|---|---|---|---|
| wikitext | 1 | word_perplexity | 33.9570 | |
| | | byte_perplexity | 1.9333 | |
| | | bits_per_byte | 0.9510 | |
[======] Model summary: opt-125m-awq [======]
Load time: 2.66 seconds
Context speed: 10473.90 tokens/second (0.10 ms/token)
Generation speed: 118.32 tokens/second (8.45 ms/token)
VRAM: 255.58 MB
exllama kernel

| Task | Version | Metric | Value | Stderr |
|---|---|---|---|---|
| wikitext | 1 | word_perplexity | 33.9579 | |
| | | byte_perplexity | 1.9333 | |
| | | bits_per_byte | 0.9510 | |
[======] Model summary: opt-125m-awq [======]
Load time: 2.70 seconds
Context speed: 8750.52 tokens/second (0.11 ms/token)
Generation speed: 131.00 tokens/second (7.63 ms/token)
VRAM: 255.58 MB
It was tested in the following environment:
- WSL (Windows 11)
- CUDA 11.3
- PyTorch 2.0.1 + CUDA 11.7
- RTX 3090 + Ryzen 7 5800X
from autoawq.
This is good work, @qwopqwop200. I was working on the same thing on the exllama branch. Your initial testing suggests a modest speed boost of around 10%.
Do you want to open a PR, or can I copy your work into the exllama branch?
Copy it to the exllama branch. I'm not sure yet, but it seems that exllama and the AWQ kernel have different weight storage methods. This may be why exllama is not working.
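To illustrate why mismatched weight storage breaks things, here is a minimal sketch with two hypothetical 4-bit packing schemes (not the actual AWQ or exllama formats): the same eight weight values packed into one int32 in sequential versus interleaved nibble order produce different buffers, so a kernel expecting one layout cannot read the other's `qweight` without repacking.

```python
def pack_sequential(vals):
    # pack eight 4-bit values into one int32, lowest nibble first
    word = 0
    for i, v in enumerate(vals):
        word |= (int(v) & 0xF) << (4 * i)
    return word

def pack_interleaved(vals, order=(0, 2, 4, 6, 1, 3, 5, 7)):
    # same values, but nibbles written in an interleaved order
    # (hypothetical order; real kernels define their own)
    return pack_sequential([vals[i] for i in order])

def unpack(word):
    # recover the eight nibbles, lowest first
    return [(word >> (4 * i)) & 0xF for i in range(8)]

vals = list(range(8))             # eight 4-bit weight values
a = pack_sequential(vals)
b = pack_interleaved(vals)
print(hex(a), hex(b))             # 0x76543210 0x75316420
assert a != b                     # same weights, incompatible buffers

# converting between kernels = unpack with one layout, re-pack with the other
assert unpack(a) == vals
```

Repacking at load time (unpack with the source layout, re-pack with the target layout) is the usual way to bridge two such formats.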
I have gone through your implementation now, and unfortunately it seems to run into the same issues around the shapes of `in_features` and `out_features`. I have fixed these for now in the exllama branch, but I still need to make the fused modules work.
If you have time to spare, @qwopqwop200, and want to help with the exllama integration, I would appreciate it if you could work from this branch.
https://github.com/casper-hansen/AutoAWQ/tree/exllama
A few issues:
- I tested with a LLaMA 7B model and the generation is just random output; however, there seems to be a ~10% boost in tokens/s.
- The fused modules are not working yet.
- The exllama module only works with linear modules where `in_features == out_features`.
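Given that last restriction, a defensive backend check before swapping a layer over to exllama might look like the sketch below. The spec type and the backend labels are hypothetical, not names from the branch; the point is only the square-layer guard.

```python
from dataclasses import dataclass

@dataclass
class LinearSpec:
    # minimal stand-in for a quantized linear layer's shape
    in_features: int
    out_features: int

def pick_backend(spec: LinearSpec) -> str:
    # the exllama path currently only handles square layers,
    # so fall back to the stock AWQ GEMM kernel otherwise
    if spec.in_features == spec.out_features:
        return "exllama"
    return "awq-gemm"

# e.g. a LLaMA-7B attention projection is square...
print(pick_backend(LinearSpec(4096, 4096)))   # exllama
# ...but the MLP gate projection is not
print(pick_backend(LinearSpec(4096, 11008)))  # awq-gemm
```

A guard like this keeps non-square layers (the MLP projections in LLaMA-style models) on the working kernel while the exllama path is being debugged.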
Draft PR #30 is now open.