
AMD / Radeon 7900XTX 6900XT GPU ROCm install / setup / config

Ubuntu 22.04 / 23.04

ROCm 5.7.3

Automatic1111 Stable Diffusion + ComfyUI ( venv )

Oobabooga - Text Generation WebUI ( conda, Exllama, BitsAndBytes-ROCm-5.6 )

Install notes / instructions

This collection of instructions started as the notes I use to set up my own Linux system with AMD parts. I've refined them over many re-installs to get them right. This is the guide I had hoped to find when I searched for install instructions - so I'm sharing it in the hope that it saves other people time. There may be extra parts in here that aren't strictly needed, but this works for me. Originally plain text, with comments, like a shell script to cut and paste - 2023-07 - nktice

2023-09-09 - I had a report that this doesn't work in virtual machines ( VirtualBox ), as the system there cannot see the hardware and so can't load the drivers. While this is not a guide about Windows, Windows users may find it more helpful to try DirectML - https://rocm.docs.amd.com/en/latest/deploy/windows/quick_start.html / https://github.com/lshqqytiger/stable-diffusion-webui-directml

2023-09-30 - Updated to use ROCm 5.7 - As it is out now, and does appear to be working much like 5.6...

  • On my first attempt, I needed a reboot to get it working... ( I've since re-installed, and it works as expected following this guide. )
  • I've made this a separate file to start with, as these features aren't yet listed as supported by the dependent packages.
  • Added notes for exllamav2 and fast-attention - they're not working yet... but exllamav2 is under active development, and worth following.
  • I'll also note that the issue where dual-GPU loading of models appears to work but outputs gibberish has now been addressed - alas, the patch has not yet made it into packages... here is the bug thread : ROCm/rocBLAS#1346

2023-11-28 - Update for ROCm 5.7.2. Revised how Stable Diffusion and ComfyUI are handled ( using venv now ). Revised handling for Oobabooga... it now uses most of their requirements_amd.txt, and Flash Attention! I'll note I first attempted this with Ubuntu 23.10 and found obstacles there - it integrates the video drivers, but they're not the latest, and this breaks the ROCm install, so we're still using 23.04 for the time being.

2023-12-13 - Added supplement for those who want to use Mixtral models ( uses llama.cpp ) - https://github.com/nktice/AMD-AI/blob/main/Mixtral.md

2023-12-18 - ROCm 6.0 is out, so there's an updated guide for that here - https://github.com/nktice/AMD-AI/blob/main/ROCm6.0.md

2023-12-18 - This document has been updated for ROCm 5.7.3

2023-12-23 - Update to default to using miniconda ( with minor revisions so that the instructions are there for full anaconda too for those who want it ). Updated date for nightlies. Exllamav2 commands corrected. I'll note that this was tested on Ubuntu 23.10.1 and there are parts that work ( Stable Diffusion, ComfyUI ) alas Oobabooga has issues ( exllamav2 errors out, as does flash-attention 2... ) so 23.10.1 is not recommended at this time. Exllamav2 does work with 23.10.1 with ROCm 6.0 ( URL above ).

2024-03-06 - Updates because ROCm 5.7 is the current stable version supported by PyTorch; as such, I'm making those instructions the main instructions offered here. They were in a separate file - those contents are still there in case someone links to them, and for posterity once things move on. Minor updates here to refer to the latest versions of some files.

2024-03-07 - This page has been updated to call for the standard stable versions ( rather than development versions... ). Mostly functional with Ubuntu 23.10.1 ( Flash Attention 2 does not compile ). Ubuntu 24.04 does not yet work with amdgpu-dkms. Note that the ROCm 5.7 series still has the issue where loading across multiple GPUs with different architectures will appear to work, but outputs gibberish.


Ubuntu 22.04 / 23.04 - Base system install

Ubuntu 22.04 works great on Radeon 6900 XT video cards, but does not support the 7900XTX cards, as they came out later. Ubuntu 23.04 is newer but has issues with some of the tools. The notes below should work on either system, except where noted.

At this point we assume you've done the system install and you know what that is, have a user, root, etc.

# update system packages 
sudo apt update -y && sudo apt upgrade -y 
# turn on devel and source repositories
sudo apt-add-repository -y -s
sudo apt install -y "linux-headers-$(uname -r)" \
	"linux-modules-extra-$(uname -r)"

[ for Ubuntu 23.04 - lunar ]

Some things may require older versions of Python, so we add the jammy package sources, so that those versions can be installed on lunar systems.

sudo add-apt-repository -y -s "deb http://security.ubuntu.com/ubuntu jammy main universe"

Add AMD GPU package sources

Make the directory if it doesn't exist yet. This location is recommended by the distribution maintainers.

sudo mkdir --parents --mode=0755 /etc/apt/keyrings

Download the key, convert the signing-key to a full Keyring required by apt and store in the keyring directory

wget https://repo.radeon.com/rocm/rocm.gpg.key -O - | \
    gpg --dearmor | sudo tee /etc/apt/keyrings/rocm.gpg > /dev/null

amdgpu repository for jammy

echo 'deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/amdgpu/latest/ubuntu jammy main' \
    | sudo tee /etc/apt/sources.list.d/amdgpu.list
sudo apt update -y 

AMDGPU DKMS

sudo apt install -y amdgpu-dkms

Note : This commonly produces warning messages about 'Possible missing firmware'. These are just warnings - things work anyway, and they can be ignored.

ROCm repositories for jammy

https://rocmdocs.amd.com/en/latest/deploy/linux/os-native/install.html

echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/5.7.3 jammy main" \
    | sudo tee --append /etc/apt/sources.list.d/rocm.list
echo -e 'Package: *\nPin: release o=repo.radeon.com\nPin-Priority: 600' \
    | sudo tee /etc/apt/preferences.d/rocm-pin-600
sudo apt update -y

More AMD ROCm related packages

This is a lot of packages, but they're comparatively small, so worth including - some later steps may pull them in as dependencies without much notice.

# ROCm...
sudo apt install -y rocm-dev rocm-libs rocm-hip-sdk rocm-dkms
# ld.so.conf update 
sudo tee --append /etc/ld.so.conf.d/rocm.conf <<EOF
/opt/rocm/lib
/opt/rocm/lib64
EOF
sudo ldconfig
# update path
echo "PATH=/opt/rocm/bin:/opt/rocm/opencl/bin:$PATH" >> ~/.profile

Find graphics device

sudo /opt/rocm/bin/rocminfo | grep gfx

Found : gfx1030 [ Radeon 6900 ]
Found : gfx1100 [ Radeon 7900 ]
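If you'd rather not copy the gfx string by hand, it can be captured into a shell variable for later build steps ( such as the bitsandbytes ROCM_TARGET below ). A minimal sketch - GFX_TARGET is just a name chosen here :

```shell
# capture the first gfx target that rocminfo reports ( e.g. gfx1100 )
GFX_TARGET=$(/opt/rocm/bin/rocminfo 2>/dev/null | grep -o -m 1 'gfx[0-9a-f]*')
MSG="Detected architecture: ${GFX_TARGET:-none found}"
echo "$MSG"
```

Note that until the group changes below take effect, rocminfo may need sudo to see the device.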

Add user to groups

The `whoami` below picks up your current user name automatically - substitute another name if you're setting this up for a different user.

sudo adduser `whoami` video
sudo adduser `whoami` render
# git and git-lfs ( large file support )
sudo apt install -y git git-lfs
# development tool may be required later...
sudo apt install -y libstdc++-12-dev
# stable diffusion likes TCMalloc...
sudo apt install -y libtcmalloc-minimal4

Performance Tuning

This section is optional, and as such has been moved to performance-tuning

Top for video memory and usage

nvtop - Note : I have had issues with the distro version crashing with 2 GPUs; installing a newer version from source works fine. Instructions for that are included at the bottom, as they depend on things installed between here and there. Project website : https://github.com/Syllo/nvtop

sudo apt install -y nvtop 

Radeon specific tools...

sudo apt install -y radeontop rovclock

and now we reboot...

sudo reboot

End of OS / base setup


Stable Diffusion (Automatic1111)

This system is built to use its own venv ( rather than Conda )...

Download Stable Diffusion ( Automatic1111 webui )

https://github.com/AUTOMATIC1111/stable-diffusion-webui Get the files...

cd
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui

Prerequisites :

sudo apt install -y wget git python3 python3-venv libgl1 libglib2.0-0

Edit environment settings...

tee --append webui-user.sh <<EOF
 ## Torch for ROCm
# generic import...
# export TORCH_COMMAND="pip install torch torchvision --index-url https://download.pytorch.org/whl/nightly/rocm5.7"
# or use the stable ROCm 5.7 build, to avoid downloading all the nightlies...
 export TORCH_COMMAND="pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/rocm5.7"
 ## And if you want to call this from other programs...
 export COMMANDLINE_ARGS="--api"
 ## crashes with 2 cards, so to get it to run on the second card (only), unremark the following 
 # export CUDA_VISIBLE_DEVICES="1"
EOF

If you keep models for SD somewhere, this is where you'd link them in...

If you don't do this, it will install a default model to get you going. Note that the stock folders include files the program needs - you'll want to copy those into the folder where you keep your other models ( to avoid issues ).

#mv models models.1
#ln -s /path/to/models models 

Run SD...

Note that the first time it starts, it may take a while to go and get things - it's not always good about saying what it's up to.

./webui.sh 
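Since `--api` was enabled in webui-user.sh above, the running server also answers JSON requests. A minimal sketch of a txt2img call - the address assumes the default 127.0.0.1:7860, and the payload fields shown are just a small subset :

```python
import json

# a minimal txt2img payload for the webui API ( many more fields are accepted )
payload = {
    "prompt": "a photograph of an astronaut riding a horse",
    "steps": 20,
    "width": 512,
    "height": 512,
}
body = json.dumps(payload)

# uncomment to send the request once the server is running:
# import urllib.request
# req = urllib.request.Request("http://127.0.0.1:7860/sdapi/v1/txt2img",
#                              data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# response = json.load(urllib.request.urlopen(req))
# response["images"][0] holds the first result as a base64 string
print(body)
```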

end Stable Diffusion


ComfyUI install script

Same install of packages here as for Stable Diffusion ( included here in case you haven't installed SD and just want ComfyUI... )

sudo apt install -y wget git python3 python3-venv libgl1 libglib2.0-0
cd 
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI/custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager
cd ..
python3 -m venv venv
source venv/bin/activate
# pre-install torch and torchvision from the ROCm 5.7 index - note you may want to update versions...
python3 -m pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/rocm5.7
python3 -m pip install -r requirements.txt  --extra-index-url https://download.pytorch.org/whl/rocm5.7
python3 -m pip install -r custom_nodes/ComfyUI-Manager/requirements.txt --extra-index-url https://download.pytorch.org/whl/rocm5.7

# exit the venv when done...
deactivate

Scripts for running the program...

# run_gpu.sh
tee --append run_gpu.sh <<EOF
#!/bin/bash
source venv/bin/activate
python3 main.py --preview-method auto
EOF
chmod +x run_gpu.sh

#run_cpu.sh
tee --append run_cpu.sh <<EOF
#!/bin/bash
source venv/bin/activate
python3 main.py --preview-method auto --cpu
EOF
chmod +x run_cpu.sh

Update the config file to point to Stable Diffusion (presuming it's installed...)

# config file - connect to stable-diffusion-webui 
cp extra_model_paths.yaml.example extra_model_paths.yaml
sed -i "s@path/to@`echo ~`@g" extra_model_paths.yaml
# edit config file to point to your checkpoints etc 
#vi extra_model_paths.yaml
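To illustrate what the sed above does - the stock example file uses path/to placeholders, and the substitution rewrites them to your home directory ( `echo ~` and $HOME are equivalent here ) :

```shell
# one sample line put through the same substitution as the config file
RESULT=$(echo "base_path: path/to/stable-diffusion-webui/" | sed "s@path/to@$HOME@g")
echo "$RESULT"
```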

End ComfyUI install


Oobabooga - Text Generation WebUI - ROCm

Project Website : https://github.com/oobabooga/text-generation-webui.git

Conda

First we'll need Conda, which is required for PyTorch. Conda provides virtual environments for Python, so that programs with different dependencies can each have their own environment. More info on managing conda : https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html# Other notes : https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html Download info : https://www.anaconda.com/download/

Anaconda ( if you prefer this to miniconda below )

#cd ~/Downloads/
#wget https://repo.anaconda.com/archive/Anaconda3-2023.09-0-Linux-x86_64.sh
#bash Anaconda3-2023.09-0-Linux-x86_64.sh -b
#cd ~
#ln -s anaconda3 conda

Miniconda ( if you prefer this to Anaconda above... ) [ https://docs.conda.io/projects/miniconda/en/latest/ ]

cd ~/Downloads/
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b
cd ~
ln -s miniconda3 conda
echo "PATH=~/conda/bin:$PATH" >> ~/.profile
source ~/.profile
conda update -y -n base -c defaults conda
conda install -y cmake ninja
conda init
source ~/.profile

conda is now active...

install pip

sudo apt install -y pip
pip3 install --upgrade pip

useful pip stuff to know ...

## show outdated packages...
#pip list --outdated
## check dependencies 
#pip check
## install specified version 
#pip install <packagename>==<version>

End conda and pip setup.

Oobabooga / Textgen webui

conda create -n textgen python=3.11 -y
conda activate textgen

PyTorch install...

# pre-install 
pip install --pre cmake colorama filelock lit numpy Pillow Jinja2 \
	mpmath fsspec MarkupSafe certifi filelock networkx \
	sympy packaging requests \
         --index-url https://download.pytorch.org/whl/rocm5.7
pip install --pre torch torchvision torchtext torchaudio triton pytorch-triton-rocm \
  --index-url https://download.pytorch.org/whl/rocm5.7
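It can save time to confirm the install worked before moving on. On AMD builds, torch.cuda is the HIP/ROCm device interface, so it should report True with a working GPU ( the fallback message is just for machines where torch isn't present ) :

```shell
# print the torch version and whether the ROCm device is visible
TORCH_CHECK=$(python3 -c 'import torch; print(torch.__version__, torch.cuda.is_available())' 2>/dev/null \
    || echo "torch not importable")
echo "$TORCH_CHECK"
```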

bitsandbytes rocm

2023-09-11 - New version of BitsAndBytes ( 0.41 ! ) built for ROCm 5.6. Project website : https://github.com/arlo-phoenix/bitsandbytes-rocm-5.6

cd
git clone https://github.com/arlo-phoenix/bitsandbytes-rocm-5.6.git
cd bitsandbytes-rocm-5.6/
BUILD_CUDA_EXT=0 pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/rocm5.7
# 7900XTX
#make hip ROCM_TARGET=gfx1100 ROCM_HOME=/opt/rocm-5.7.3/
# 6900XT
#make hip ROCM_TARGET=gfx1030 ROCM_HOME=/opt/rocm-5.7.3/
# both...
make hip ROCM_TARGET=gfx1100,gfx1030 ROCM_HOME=/opt/rocm-5.7.3/
pip install . --extra-index-url https://download.pytorch.org/whl/nightly/rocm5.7

Flash-Attention 2 :

Install may take a few minutes ( close to 5 minutes on the author's AMD 5950X CPU at time of writing ). It appears this may work with ROCm 5.7.3 and ExLlamav2 ( at least ExLlamav2 doesn't complain about it being missing when it is installed ). 2024-03-07 - As of this check, it does not compile on Ubuntu 23.10.1 - thankfully it's optional, and things are functional without it.

cd
git clone https://github.com/ROCmSoftwarePlatform/flash-attention.git
cd flash-attention
pip install . --extra-index-url https://download.pytorch.org/whl/rocm5.7

Oobabooga / Text-generation-webui - Install webui...

cd
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui

Oobabooga's 'requirements'

The default bitsandbytes for AMD is out of date and doesn't support the GPU, so we installed a newer one earlier ( which may be unsupported... ). We run sed first to relax that line of the requirements...

sed -i "s@bitsandbytes==@bitsandbytes>=@g" requirements_amd.txt 
pip install -r requirements_amd.txt 

Exllama and Exllamav2 loaders ... It appears ExLlama isn't being maintained, and the emphasis is now on ExLlamav2 - v2 has been updated to support Mixture of Experts models ( MoE, such as Mixtral ).

2023-12-23 - After many tests, it appears that the exllamav2 installed above gives an error, so we compile and reinstall it here - when we do that, it works.

2024-01-18 - Something broke and exllamav2 wouldn't compile, so I added a line to reset the checkout to the last known good / compiling version, 0.0.11.

2024-01-20 - Thanks to TurboDerp for resolving the issue with exllamav2 so it plays nice with HIP. The workaround is remarked out, in case it's useful in future.

# install exllama
#git clone https://github.com/turboderp/exllama repositories/exllama
# install exllamav2
git clone https://github.com/turboderp/exllamav2 repositories/exllamav2
cd repositories/exllamav2
# Force collection back to base 0.0.11 
# git reset --hard a4ecea6
pip install .   --index-url https://download.pytorch.org/whl/rocm5.7
cd ../..
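After the compile, a quick import check confirms the module built cleanly ( the fallback message is just for environments where it isn't installed ) :

```shell
# a clean import means the ROCm build of exllamav2 succeeded
EXL2_CHECK=$(python3 -c 'import exllamav2; print("exllamav2 imports OK")' 2>/dev/null \
    || echo "exllamav2 not importable")
echo "$EXL2_CHECK"
```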

Let's create a script (run.sh) to run the program...

tee --append run.sh <<EOF
#!/bin/bash
## activate conda
conda activate textgen
## command to run server... 
python server.py --listen  --extensions sd_api_pictures send_pictures gallery 
conda deactivate
EOF
chmod u+x run.sh

Models - If you're new to this : new models can be downloaded from the shell via a Python script, or from a form in the interface. There are lots of them - http://huggingface.co - generally the GPTQ models by TheBloke are likely to load. The 30B/33B models will load on 24GB of VRAM, but may error or run out of memory, depending on usage and parameters.

To get new models, note that the ~/text-generation-webui directory has a program, " download-model.py ", made for downloading models from HuggingFace's collection.
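For example, a download from the shell looks like this - the model name here is only an illustration; substitute any model id from huggingface.co :

```shell
# TheBloke/vicuna-13B-v1.5-GPTQ is an example id, not a recommendation
MODEL="TheBloke/vicuna-13B-v1.5-GPTQ"
if [ -d ~/text-generation-webui ]; then
    cd ~/text-generation-webui
    python download-model.py "$MODEL"
else
    echo "text-generation-webui not found - skipping $MODEL"
fi
```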

If you have old models, link your pre-stored models into the models directory :

# cd ~/text-generation-webui
# mv models models.1
# ln -s /path/to/models models

Note that to run the script :

source run.sh

It does download some things the first time it runs. The exllamav2 loader works with most GPTQ models, and is the best choice as it is fast.
Some models that won't load that way will load with AutoGPTQ - but without Triton ( Triton seems to break things ). Also worth noting : I've had models work on one card or the other, but not on both - loading across both cards causes LLMs to spit out gibberish.

End - Oobabooga - Text-Generation-WebUI


nvtop from source

( The packaged version crashes with 2 GPUs, while this newer version from source works fine. ) Optional - a tool for displaying GPU / memory usage info. Project website : https://github.com/Syllo/nvtop

sudo apt install -y libdrm-dev libsystemd-dev libudev-dev
cd 
git clone https://github.com/Syllo/nvtop.git
mkdir -p nvtop/build && cd nvtop/build
cmake .. -DNVIDIA_SUPPORT=OFF -DAMDGPU_SUPPORT=ON -DINTEL_SUPPORT=OFF
make
sudo make install

end nvtop
