
Comments (16)

bjoernpl avatar bjoernpl commented on June 19, 2024 24

7B in float16 would be 14GB, and if quantized to uint8 it could be as low as 7GB. But on the graphics card, from what I've tried with other models, it can take 2x the VRAM.

My guess is that 32GB would be the minimum but some clever person may be able to run it with 16GB VRAM.

But the question is, how fast would it be? If it is one character per second then it would not be that useful!

The 7B model generates quickly on a 3090 Ti (~30 seconds for ~500 tokens, ~17 tokens/s), much faster than the ChatGPT interface. It uses ~14GB of VRAM during generation. This is also with batch_size=1, so theoretical throughput is higher than this.
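For reference, here is the back-of-the-envelope arithmetic behind those numbers as a quick sketch (weights only; activations, the KV cache, and framework overhead come on top, which is where the "2x the VRAM" observation comes from):

# Rough weight-memory estimate for a 7B-parameter model, plus the observed throughput above.
params = 7e9

for dtype, bytes_per_param in {"float32": 4, "float16": 2, "uint8": 1}.items():
    print(f"{dtype}: {params * bytes_per_param / 1e9:.0f} GB")   # ~28 / ~14 / ~7 GB

print(f"throughput: {500 / 30:.1f} tokens/s")                    # ~17 tokens/s on a 3090 Ti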

(demo video: Recording.2023-03-02.225512.mp4)

See my fork for the code for rolling generation and the Gradio interface.


fabawi avatar fabawi commented on June 19, 2024 10

I was able to run 7B on two 1080 Tis (inference only). Next, I'll try 13B and 33B. It still needs refining, but it works! I forked LLaMA here:

https://github.com/modular-ml/wrapyfi-examples_llama

and have a readme with the instructions on how to do it:

LLaMA with Wrapyfi

Wrapyfi enables distributing LLaMA (inference only) on multiple GPUs/machines, each with less than 16GB VRAM

Currently distributes across two cards only, using ZeroMQ. Flexible distribution will be supported soon!

This approach has only been tested on the 7B model for now, using Ubuntu 20.04 with two 1080 Tis. Testing the 13B/30B models soon!
UPDATE: Tested on two 3080 Tis as well!

How to?

  1. Replace all instances of <YOUR_IP> and <YOUR CHECKPOINT DIRECTORY> before running the scripts

  2. Download LLaMA weights using the official form below and install this wrapyfi-examples_llama inside a conda or virtual env:

git clone https://github.com/modular-ml/wrapyfi-examples_llama.git
cd wrapyfi-examples_llama
pip install -r requirements.txt
pip install -e .
  3. Install Wrapyfi within the same environment:
git clone https://github.com/fabawi/wrapyfi.git
cd wrapyfi
pip install .[pyzmq]
  4. Start the Wrapyfi ZeroMQ broker from within the Wrapyfi repo:
cd wrapyfi/standalone 
python zeromq_proxy_broker.py --comm_type pubsubpoll
  5. Start the first instance of the Wrapyfi-wrapped LLaMA from within this repo and env (order is important: don't start wrapyfi_device_idx=0 before wrapyfi_device_idx=1):
CUDA_VISIBLE_DEVICES="0" OMP_NUM_THREADS=1 torchrun --nproc_per_node 1 example.py --ckpt_dir <YOUR CHECKPOINT DIRECTORY>/checkpoints/7B --tokenizer_path <YOUR CHECKPOINT DIRECTORY>/checkpoints/tokenizer.model --wrapyfi_device_idx 1
  6. Now start the second instance (within this repo and env):
CUDA_VISIBLE_DEVICES="1" OMP_NUM_THREADS=1 torchrun --master_port=29503 --nproc_per_node 1 example.py --ckpt_dir <YOUR CHECKPOINT DIRECTORY>/checkpoints/7B --tokenizer_path <YOUR CHECKPOINT DIRECTORY>/checkpoints/tokenizer.model --wrapyfi_device_idx 0
  7. You will now see the output on both terminals

  8. EXTRA: To run on different machines, the broker must be running on a specific IP in step 4. Start the ZeroMQ broker by setting the IP, and provide the env variables for steps 5 and 6, e.g.:

### (replace 10.0.0.101 with <YOUR_IP>) ###

# step 4 modification 
python zeromq_proxy_broker.py --socket_ip 10.0.0.101 --comm_type pubsubpoll

# step 5 modification
CUDA_VISIBLE_DEVICES="0" OMP_NUM_THREADS=1 WRAPYFI_ZEROMQ_SOCKET_IP='10.0.0.101' torchrun --nproc_per_node 1 example.py --ckpt_dir <YOUR CHECKPOINT DIRECTORY>/checkpoints/7B --tokenizer_path <YOUR CHECKPOINT DIRECTORY>/checkpoints/tokenizer.model --wrapyfi_device_idx 1

# step 6 modification
CUDA_VISIBLE_DEVICES="1" OMP_NUM_THREADS=1 WRAPYFI_ZEROMQ_SOCKET_IP='10.0.0.101' torchrun --master_port=29503 --nproc_per_node 1 example.py --ckpt_dir <YOUR CHECKPOINT DIRECTORY>/checkpoints/7B --tokenizer_path <YOUR CHECKPOINT DIRECTORY>/checkpoints/tokenizer.model --wrapyfi_device_idx 0


doublebishop avatar doublebishop commented on June 19, 2024 3

Trying to run the 7B model in Colab with a 15GB GPU is failing. Is there a way to configure this to use fp16, or is that already baked into the existing model?
*Update: using batch_size=2 seems to make it work in Colab+ with GPU
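If I'm reading the reference example.py right, fp16 should already be baked in: the model is built with torch.cuda.HalfTensor as the default tensor type before the weights are loaded. For anyone adapting their own loader, a rough sketch of the relevant part (names and paths follow my reading of the reference repo's layout and are assumptions; the small max_batch_size is the knob that seems to matter on a ~15GB Colab GPU):

import json
import torch
from llama.model import ModelArgs, Transformer   # assumed module layout of the reference repo
from llama.tokenizer import Tokenizer

ckpt_dir = "checkpoints/7B"                       # hypothetical paths
tokenizer = Tokenizer(model_path="checkpoints/tokenizer.model")

with open(f"{ckpt_dir}/params.json") as f:
    params = json.load(f)
model_args = ModelArgs(max_seq_len=512, max_batch_size=2, **params)
model_args.vocab_size = tokenizer.n_words

# Build the model with fp16 parameters so the 7B weights take roughly 14GB instead of 28GB.
torch.set_default_tensor_type(torch.cuda.HalfTensor)
model = Transformer(model_args)
model.load_state_dict(torch.load(f"{ckpt_dir}/consolidated.00.pth", map_location="cpu"), strict=False)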


tmgthb avatar tmgthb commented on June 19, 2024 2

Can I use the model on an Intel Iris Xe graphics card?

I'd also appreciate recommendations for which libraries to use, if possible.


kir152 avatar kir152 commented on June 19, 2024 1

FlexGen only supports OPT models.


CyberTimon avatar CyberTimon commented on June 19, 2024 1

With KoboldAI I was able to run GPT-J 6B on my 8GB 3070 Ti by offloading the model to my RAM.
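The offloading trick is conceptually simple: keep the weights in system RAM and move each block onto the GPU only for its forward pass. A minimal PyTorch sketch of the idea (not the KoboldAI implementation):

import torch
import torch.nn as nn

def offloaded_forward(blocks, x, device="cuda"):
    # Run a stack of blocks, staging each one into VRAM only while it computes.
    x = x.to(device)
    for block in blocks:              # blocks live on the CPU in system RAM
        block.to(device)              # copy this block's weights to VRAM
        with torch.no_grad():
            x = block(x)
        block.to("cpu")               # release VRAM before the next block
    return x

# Toy usage: a stack of large layers that would not all fit in VRAM at once.
blocks = nn.ModuleList([nn.Linear(4096, 4096) for _ in range(8)])
out = offloaded_forward(blocks, torch.randn(1, 4096))

The host-to-device copies are the bottleneck, which is why this is usually much slower than keeping everything in VRAM.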


neuhaus avatar neuhaus commented on June 19, 2024 1

See my fork for the code for rolling generation and the Gradio interface.

@bjoernpl Works great, thanks!

Have you tried changing the gradio interface to use the gradio chatbot component?


bjoernpl avatar bjoernpl commented on June 19, 2024 1

Have you tried changing the gradio interface to use the gradio chatbot component?

I think this doesn't quite fit, since LLaMA is not fine-tuned for chatbot-like capabilities. It would definitely be possible (even if it probably wouldn't work too well) to use it as a chatbot with some clever prompting. Might be worth a try; thanks for the idea and the feedback.
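For anyone who wants to try that, the "clever prompting" would look something like a few-shot dialogue transcript that the base model then continues. A sketch (the persona text and the idea of cutting generation at the next "User:" line are arbitrary choices, not something the repo provides):

CHAT_PROMPT = """A chat between a curious user and a helpful, knowledgeable assistant.

User: What is the capital of France?
Assistant: The capital of France is Paris.

User: {question}
Assistant:"""

def build_prompt(question: str) -> str:
    # The base model simply continues text, so we frame the conversation as a transcript
    # and stop the generation once it starts the next "User:" turn.
    return CHAT_PROMPT.format(question=question)

print(build_prompt("How much VRAM does the 7B model need?"))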


dizys avatar dizys commented on June 19, 2024

According to my napkin math, even the smallest model with 7B parameters will probably take close to 30GB of space (7B parameters × 4 bytes in fp32 ≈ 28GB). 8GB is unlikely to suffice. But I have no access to the weights yet; it's just a rough guess.


ekiwi111 avatar ekiwi111 commented on June 19, 2024

Could be possible with https://github.com/FMInference/FlexGen


dizys avatar dizys commented on June 19, 2024

Could be possible with https://github.com/FMInference/FlexGen

This project looks amazing 🤩. However, in its example, it seems like a 6.7B OPT model would still need at least 15GB of GPU memory, so the chances are slim 🥲. I would so wanna run it on my 3080 10GB.


pauldog avatar pauldog commented on June 19, 2024

7B in float16 would be 14GB, and if quantized to uint8 it could be as low as 7GB. But on the graphics card, from what I've tried with other models, it can take 2x the VRAM.

My guess is that 32GB would be the minimum but some clever person may be able to run it with 16GB VRAM.

But the question is, how fast would it be? If it is one character per second then it would not be that useful!


pauldog avatar pauldog commented on June 19, 2024

With KoboldAI I was able to run GPT-J 6B on my 8GB 3070 Ti by offloading the model to my RAM.

How fast was it?


pauldog avatar pauldog commented on June 19, 2024

@fabawi Good work. 👍


robertavram-md avatar robertavram-md commented on June 19, 2024

Thank you! Works great.


jspisak avatar jspisak commented on June 19, 2024

Closing this issue - great work @fabawi !!

