
Comments (5)

qwopqwop200 commented on June 26, 2024

https://github.com/qwopqwop200/AutoAWQ-exllama
I succeeded in running exllama in AutoAWQ. Some minor changes to the exllama kernel were required.
Performance on opt-125m:

awq kernel

Task      Version  Metric           Value    Stderr
wikitext  1        word_perplexity  33.9570
                   byte_perplexity   1.9333
                   bits_per_byte     0.9510

[======] Model summary: opt-125m-awq [======]
Load time: 2.66 seconds
Context speed: 10473.90 tokens/second (0.10 ms/token)
Generation speed: 118.32 tokens/second (8.45 ms/token)
VRAM: 255.58 MB

exllama kernel

Task      Version  Metric           Value    Stderr
wikitext  1        word_perplexity  33.9579
                   byte_perplexity   1.9333
                   bits_per_byte     0.9510

[======] Model summary: opt-125m-awq [======]
Load time: 2.70 seconds
Context speed: 8750.52 tokens/second (0.11 ms/token)
Generation speed: 131.00 tokens/second (7.63 ms/token)
VRAM: 255.58 MB
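As a quick consistency check on the two summaries (plain arithmetic, not AutoAWQ code), the ms/token figures are the reciprocals of the tokens/second figures, and the generation numbers imply roughly a 10% speedup for the exllama kernel:

```python
def ms_per_token(tokens_per_second: float) -> float:
    """Convert throughput (tokens/s) to per-token latency (ms)."""
    return 1000.0 / tokens_per_second

# Generation speeds reported above:
awq_gen = ms_per_token(118.32)      # ~8.45 ms/token (AWQ kernel)
exllama_gen = ms_per_token(131.00)  # ~7.63 ms/token (exllama kernel)

print(f"awq:     {awq_gen:.2f} ms/token")
print(f"exllama: {exllama_gen:.2f} ms/token")
print(f"speedup: {131.00 / 118.32 - 1:.1%}")  # ~10.7% faster generation
```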

It was tested in the following environment:

WSL (Windows 11)
CUDA 11.3
PyTorch 2.0.1 + CUDA 11.7
RTX 3090 + R7 5800X

from autoawq.

casper-hansen commented on June 26, 2024

This is good work @qwopqwop200. I was working on the same thing on the exllama branch. It seems there could be a modest boost in speed of around 10% from your initial testing.

Do you want to open a PR or can I copy your work into the exllama branch?


qwopqwop200 commented on June 26, 2024

Copy it to the exllama branch. I'm not sure yet, but it seems that the exllama and AWQ kernels have different weight storage methods. This may be why exllama is not working.

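The storage mismatch suspected above can be illustrated with a toy repacking routine. Both kernel families store eight 4-bit weights per int32, but may place them in different slot orders within the word. The interleaved and sequential orders below are assumptions for illustration only, not the actual AutoAWQ or exllama layouts:

```python
# Illustrative sketch (not AutoAWQ's actual code): two different slot
# orders for packing eight 4-bit weights into a single 32-bit word.
ORDER_A = [0, 2, 4, 6, 1, 3, 5, 7]  # assumed interleaved packing
ORDER_B = [0, 1, 2, 3, 4, 5, 6, 7]  # assumed sequential packing

def pack_int4(vals, order):
    """Pack eight 4-bit values into one int using the given slot order."""
    word = 0
    for slot, idx in enumerate(order):
        word |= (int(vals[idx]) & 0xF) << (4 * slot)
    return word

def unpack_int4(word, order):
    """Invert pack_int4 for the given slot order."""
    vals = [0] * 8
    for slot, idx in enumerate(order):
        vals[idx] = (word >> (4 * slot)) & 0xF
    return vals

def repack(word, src_order, dst_order):
    """Convert a packed word from one layout to the other."""
    return pack_int4(unpack_int4(word, src_order), dst_order)

vals = [1, 2, 3, 4, 5, 6, 7, 8]
w_a = pack_int4(vals, ORDER_A)
w_b = repack(w_a, ORDER_A, ORDER_B)
assert unpack_int4(w_b, ORDER_B) == vals  # round-trips correctly
assert w_a != w_b  # same weights, different bit layout
```

Feeding one layout to a kernel that expects the other decodes the wrong weights silently, which would explain garbage generations rather than a crash.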

casper-hansen commented on June 26, 2024

I have gone through your implementation now and unfortunately, it seems it runs into the same issues around the shapes of the in_features and out_features. I have fixed these for now in the exllama branch, but I still need to make the fused modules work.

If you have time to spare @qwopqwop200 and want to help with the exllama integration, I would appreciate it if you could work from this branch.
https://github.com/casper-hansen/AutoAWQ/tree/exllama

A few issues:

  • I tested with a LLaMA 7B model and the generation is just random output; however, there seems to be a ~10% boost in tokens/s.
  • The fused modules are not working yet.
  • The exllama module only works with linear modules that have in_features == out_features.

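The last limitation suggests a simple interim workaround: route a quantized linear layer to the exllama kernel only when its shape is supported, and fall back to the stock AWQ kernel otherwise. This is a hypothetical sketch; `choose_kernel` is not AutoAWQ's actual API, and the shape constraint is taken from the report above:

```python
def choose_kernel(in_features: int, out_features: int) -> str:
    """Pick a backend for a quantized linear layer of the given shape.

    Hypothetical dispatcher: the exllama path is reported to work only
    when in_features == out_features, so fall back to the AWQ kernel
    for rectangular layers.
    """
    if in_features == out_features:
        return "exllama"
    return "awq"

# LLaMA-7B attention projections are square (4096 x 4096) and could use
# the exllama path; the MLP projections (4096 x 11008) could not yet.
assert choose_kernel(4096, 4096) == "exllama"
assert choose_kernel(4096, 11008) == "awq"
```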

casper-hansen commented on June 26, 2024

Draft PR #30 is now open.

