
FlexGen

FlexGen is a high-throughput generation engine for running large language models with limited GPU memory (e.g., a 16GB T4 GPU or a 24GB RTX3090 gaming card!). FlexGen allows high-throughput generation by increasing the effective batch size through IO-efficient offloading and compression.


This is a research project developed by HazyResearch@Stanford, SkyComputing@UC Berkeley, DS3Lab@ETH Zurich, FAIR@Meta, CRFM@Stanford, and TogetherCompute.

           


The high computational and memory requirements of large language model (LLM) inference traditionally make it feasible only with multiple high-end accelerators. FlexGen aims to lower the resource requirements of LLM inference down to a single commodity GPU (e.g., T4, 3090) and allow flexible deployment for various hardware setups. The key idea behind FlexGen is to trade off latency for throughput by increasing the effective batch size.

The key features of FlexGen include:

High-Throughput, Large-Batch Offloading.
Higher-throughput generation than other offloading-based systems (e.g., Hugging Face Accelerate, DeepSpeed Zero-Inference) - sometimes by orders of magnitude. The key innovation is a new offloading technique that can effectively increase the batch size. This can be useful for batch inference scenarios, such as benchmarking (e.g., HELM) and data wrangling.

📦 Extreme Compression.
Compress both the parameters and attention cache of models, such as OPT-175B, down to 4 bits with negligible accuracy loss.

🚀 Scalability.
Comes with a distributed pipeline parallelism runtime that allows scaling when more GPUs are available.

Limitation.
As an offloading-based system running on weak GPUs, FlexGen also has limitations. Its throughput is significantly lower than what you get with enough powerful GPUs to hold the whole model, especially for small-batch cases. FlexGen is mostly optimized for throughput-oriented batch processing settings on a single GPU (e.g., classifying or extracting information from many documents in batches).

| Read Paper | Join Discord |


Benchmark Results

Generation Throughput (token/s)

System                      OPT-6.7B   OPT-30B   OPT-175B
Hugging Face Accelerate        25.12      0.62       0.01
DeepSpeed ZeRO-Inference        9.28      0.60       0.01
Petals*                            -         -       0.05
FlexGen                        25.26      7.32       0.69
FlexGen with Compression       29.12      8.38       1.12
  • Hardware: an NVIDIA T4 (16GB) instance on GCP with 208GB of DRAM and 1.5TB of SSD.
  • Workload: input sequence length = 512, output sequence length = 32. The batch size is tuned to a large value that maximizes the generation throughput for each system (e.g., 256 for OPT-175B on FlexGen).
  • Metric: generation throughput (token/s) = number of generated tokens / (time for processing prompts + time for generation).
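
As a concrete illustration of this metric, here is a minimal sketch (our own function and illustrative numbers, not benchmark data):

# Minimal sketch of the throughput metric above; names and numbers are illustrative.
def generation_throughput(num_prompts, output_len, prefill_seconds, decode_seconds):
    """Generated tokens divided by total time (prompt processing + generation)."""
    generated_tokens = num_prompts * output_len
    return generated_tokens / (prefill_seconds + decode_seconds)

# e.g., a batch of 256 prompts, 32 output tokens each
print(generation_throughput(256, 32, prefill_seconds=600.0, decode_seconds=11000.0))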

How to reproduce.

Latency-throughput Trade-off

Since FlexGen increases throughput by increasing the batch size, it also increases latency - a classic and fundamental trade-off. The figure below shows the latency and throughput trade-off of three offloading-based systems on OPT-175B (left) and OPT-30B (right). FlexGen achieves higher maximum throughput for both models. Other systems cannot further increase throughput due to out-of-memory. "FlexGen(c)" is FlexGen with compression.

(Figure: latency-throughput trade-off of three offloading-based systems on OPT-175B (left) and OPT-30B (right).)

How It Works

FlexGen can be flexibly configured under various hardware resource constraints by aggregating memory and computation from the GPU, CPU, and disk. Through a linear programming optimizer, it searches for the best pattern to store and access the tensors, including weights, activations, and attention key/value (KV) cache. FlexGen further compresses both weights and KV cache to 4 bits with negligible accuracy loss.
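
For intuition about the compression, here is a minimal NumPy sketch of group-wise 4-bit quantization in the spirit of the technique above (our own simplified version; the group size and packing details may differ from the actual implementation):

import numpy as np

def quantize_4bit(x, group_size=64):
    """Group-wise asymmetric 4-bit quantization: each group of `group_size`
    values gets its own min and scale and is rounded to integers in [0, 15]."""
    groups = x.reshape(-1, group_size)
    mn = groups.min(axis=1, keepdims=True)
    scale = (groups.max(axis=1, keepdims=True) - mn) / 15 + 1e-8
    q = np.clip(np.round((groups - mn) / scale), 0, 15).astype(np.uint8)
    return q, mn, scale

def dequantize_4bit(q, mn, scale, shape):
    return (q.astype(np.float32) * scale + mn).reshape(shape)

w = np.random.randn(4, 64).astype(np.float32)   # stand-in for a weight tile
q, mn, scale = quantize_4bit(w)
w_hat = dequantize_4bit(q, mn, scale, w.shape)
print(np.abs(w - w_hat).max())                  # small reconstruction error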

One key idea of FlexGen is to play the latency-throughput trade-off. Achieving low latency is inherently challenging for offloading methods, but the I/O efficiency of offloading can be greatly boosted for throughput-oriented scenarios (see the figure above). FlexGen utilizes a block schedule to reuse weights and overlap I/O with computation, as shown in figure (b) below, while other baseline systems use an inefficient row-by-row schedule, as shown in figure (a) below.

(Figure: (a) the row-by-row schedule used by baseline systems vs. (b) FlexGen's block schedule.)
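
To make the difference concrete, here is a rough sketch of the two loop orders (pseudocode-style Python; load_weights, load_cache, and compute are hypothetical stand-ins for FlexGen's real I/O and compute kernels):

# Hypothetical stand-ins for the real I/O and compute kernels.
def load_weights(layer): return ("W", layer)
def load_cache(batch, layer): return ("KV", batch, layer)
def compute(batch, layer, w, kv): pass

# (a) Row-by-row schedule: every batch reloads every layer's weights,
# so the same weights travel over PCIe once per batch.
def row_by_row(batches, layers):
    for batch in batches:
        for layer in layers:
            w = load_weights(layer)
            kv = load_cache(batch, layer)
            compute(batch, layer, w, kv)

# (b) Block schedule: keep one layer's weights resident and push a block of
# batches through it, so each weight load is amortized over many batches and
# I/O can be overlapped with computation.
def block_schedule(batches, layers):
    for layer in layers:
        w = load_weights(layer)          # loaded once per block of batches
        for batch in batches:
            kv = load_cache(batch, layer)
            compute(batch, layer, w, kv)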

More details can be found in our paper.

Install

Requirements:

Instructions:

git clone https://github.com/FMInference/FlexGen.git
cd FlexGen
pip3 install -e .

# (Optional) Install openmpi for multi-gpu execution
# sudo apt install openmpi-bin

Get Started with a Single GPU

OPT-1.3B

To get started, you can try a small model like OPT-1.3B first. It fits into a single GPU so no offloading is required. FlexGen will automatically download weights from Hugging Face.

python3 -m flexgen.flex_opt --model facebook/opt-1.3b

You should see some text generated by OPT-1.3B and the benchmark results.

OPT-30B

To run large models like OPT-30B, you will need CPU offloading. You can try the commands below. The --percent argument specifies the offloading strategy for parameters, attention cache, and hidden states separately. The exact meaning of this argument can be found here.

python3 -m flexgen.flex_opt --model facebook/opt-30b --percent 0 100 100 0 100 0
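
As a reading aid, the command above can be unpacked as follows (our paraphrase of the linked documentation; check it for the authoritative definition). The six numbers are GPU/CPU percentages for weights, attention (KV) cache, and activations, with the remainder placed on disk:

# Hedged reading of `--percent 0 100 100 0 100 0` for OPT-30B.
# Each pair is (percent on GPU, percent on CPU); whatever is left goes to disk.
percent = [
    0, 100,   # weights:          0% on GPU, 100% on CPU
    100, 0,   # attention cache: 100% on GPU
    100, 0,   # activations (hidden states): 100% on GPU
]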

OPT-175B

To run OPT-175B, you need to download the weights from metaseq and convert the weights into Alpa format. You can then try offloading all weights to disk by running:

python3 -m flexgen.flex_opt --model facebook/opt-175b --percent 0 0 100 0 100 0 --offload-dir YOUR_SSD_FOLDER

How to set the offloading strategy and --percent?

We will release an automatic policy optimizer later, but for now you have to manually try a few strategies. The idea of high-throughput generation is to offload parameters and attention cache to the CPU, and to disk if necessary, as much as possible. You can see the reference strategies in our benchmark here. To avoid out-of-memory errors, you can tune --percent to offload more tensors to the CPU and disk.
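
When tuning by hand, a rough back-of-the-envelope memory estimate can help you decide how much to push to the CPU or disk. Below is a simplified fp16 sketch (our own approximation with approximate OPT-30B configuration numbers; the real cost model in the paper accounts for much more):

def estimate_gb(num_params_billion, num_layers, hidden, batch_size, seq_len, dtype_bytes=2):
    """Very rough fp16 memory estimate (GB) for weights and KV cache."""
    weight_gb = num_params_billion * 1e9 * dtype_bytes / 1e9
    # KV cache: 2 tensors (K and V) per layer, each of shape [batch, seq_len, hidden]
    kv_gb = 2 * num_layers * batch_size * seq_len * hidden * dtype_bytes / 1e9
    return weight_gb, kv_gb

# Roughly OPT-30B (48 layers, hidden size 7168), batch 64, prompt 512 + 32 new tokens:
print(estimate_gb(30, 48, 7168, 64, 512 + 32))   # about 60 GB of weights, ~48 GB of KV cache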

Scaling to Distributed GPUs

If you have more GPUs, FlexGen can combine offloading with pipeline parallelism to allow scaling. For example, if you have 2 GPUs but their aggregated GPU memory is less than the model size, you still need offloading. FlexGen allows you to use pipeline parallelism across these 2 GPUs to accelerate generation. See examples here.

Run Chatbot with OPT Models on a Single GPU

apps/chatbot.py shows how to build a chatbot with FlexGen and OPT models. While FlexGen is mainly optimized for large-batch, throughput-oriented scenarios like dataset evaluation and information extraction, it can also be used for interactive applications such as chatbots, with better performance than other offloading-based systems. Note that FlexGen cannot achieve its best throughput in this single-batch case.

Default Commands

You can use the default commands below. If you do not have enough GPU/CPU memory, see the Handle Out-of-memory section.

# Chat with OPT-6.7B. You need at least 15GB of GPU memory.
python3 chatbot.py --model facebook/opt-6.7b
# Chat with OPT-30B. You need about 90GB of CPU memory.
python3 chatbot.py --model facebook/opt-30b --percent 0 100 100 0 100 0
# Chat with instruction-tuned OPT-IML-MAX-30B. You need about 90GB of CPU memory.
python3 chatbot.py --model facebook/opt-iml-max-30b --percent 0 100 100 0 100 0

Example Output

A chat between a curious human and a knowledgeable artificial intelligence assistant.
Human: Hello! What can you do?
Assistant: As an AI assistant, I can answer questions and chat with you.
Human: What is the name of the tallest mountain in the world?
Assistant: Everest.
Human: I am planning a trip for our anniversary. What things can we do?
Assistant: Well, there are a number of things you can do for your anniversary. First, you can play cards. Second, you can go for a hike. Third, you can go to a museum.

Handle Out-of-memory

If you do not have enough GPU/CPU memory, here are a few things you can try. They save more memory but run slower.

  • Do not pin weights by adding --pin-weight 0. This can reduce the weight memory usage on the CPU by around 20%.
  • Enable weight compression by adding --compress-weight. This can reduce the weight memory usage by around 70%.
  • Offload weights to disk by using --percent 0 0 100 0 100 0. This requires very little CPU and GPU memory.

Roadmap

We plan to work on the following features. Community contributions are welcome.

  • Support Apple silicon M1/M2 deployment
  • Support Colab deployment
  • Add a text summarization application and more throughput-oriented applications.
  • Optimize the latency of the chatbot application
  • Support more models (BLOOM, CodeGen, GLM)
  • Release the cost model and policy optimizer
  • Release a pip installable package

Recent Changes

Thanks to early feedback about this release, we realized that early versions of this README and our paper were a bit unclear about the purpose of FlexGen and why we're excited about it. Our primary contribution is increasing throughput on single GPU instances by effectively increasing the batch size. We're really excited about our techniques for offloading and automatically searching through the design space, and about our results suggesting that it's possible to go down to 4-bit quantization without hurting accuracy. This naturally trades off latency, but we think it's a really interesting direction for future work. We'd like to thank everyone for their feedback - keep it coming!
