Run VisualGLM-6B on RX 7900 XTX

Prerequisites

Install AMDGPU driver with ROCm

Download the following prebuilt wheels into ~/Downloads

EDIT (2023-06-15): Now that there is an official PyTorch index with gfx1100 support, these wheels are no longer needed.

If the above links ever become outdated, you can always find the latest ones here:

and don't forget to use the new filenames in the "Install" section.

Install

mkdir GLM
cd GLM

git clone https://github.com/THUDM/VisualGLM-6B

git clone --depth=1 https://huggingface.co/THUDM/visualglm-6b

# create default venv dir
python3 -m venv venv
source venv/bin/activate

# option 1 (recommended): install torch with gfx1100 support
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.5

# option 2: install custom torch and torchvision
pip install ~/Downloads/torch-2.0.1+gite19229c-cp310-cp310-linux_x86_64.whl
pip install ~/Downloads/torchvision-0.15.2+f5f4cad-cp310-cp310-linux_x86_64.whl
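Whichever option you pick, it is worth verifying the install before moving on. A minimal sanity check (the exact version strings will vary; on a working setup you should see a ROCm-tagged version, a HIP version, and "GPU visible: True"):

```shell
# Quick sanity check: confirm the ROCm build of torch imports and sees the GPU.
# torch.version.hip is None on non-ROCm builds, which also flags a wrong wheel.
python3 - <<'EOF'
try:
    import torch
    print("torch", torch.__version__, "| HIP:", torch.version.hip)
    print("GPU visible:", torch.cuda.is_available())
except ImportError:
    print("torch not installed")
EOF
```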

cd VisualGLM-6B

# remove torch* from requirements.txt
sed -i "/^torch.*$/d" requirements.txt

pip install -r requirements.txt

# use local model path
sed -i "s/THUDM\/visualglm-6b/..\/visualglm-6b/g" cli_demo_hf.py
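To see what the substitution does, here is the same sed run against a throwaway file containing a hypothetical copy of the `from_pretrained` line (the exact line in cli_demo_hf.py may differ slightly):

```shell
# Demo of the substitution above on a scratch file; after sed, the model id
# should point at the local clone instead of the Hugging Face repo.
printf 'tokenizer = AutoTokenizer.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True)\n' > /tmp/demo_path.py
sed -i "s/THUDM\/visualglm-6b/..\/visualglm-6b/g" /tmp/demo_path.py
cat /tmp/demo_path.py
# -> tokenizer = AutoTokenizer.from_pretrained("../visualglm-6b", trust_remote_code=True)
```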

Launch

source venv/bin/activate

cd VisualGLM-6B

python3 cli_demo_hf.py

Caveats

Do NOT use the quantized models, which require cpm_kernels: cpm_kernels is written in CUDA and has not been maintained for quite some time.
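If a script you run does call .quantize(...), the same sed trick used in the "Install" section can strip the call so the model loads in fp16 on the GPU instead of pulling in cpm_kernels. Shown here on a scratch file (the from_pretrained line is a hypothetical example, not the actual contents of any demo script):

```shell
# Hedged example: remove a .quantize(N) call from a loading line.
printf 'model = AutoModel.from_pretrained("../visualglm-6b", trust_remote_code=True).quantize(4).half().cuda()\n' > /tmp/demo_load.py
sed -i 's/\.quantize([0-9]*)//' /tmp/demo_load.py
cat /tmp/demo_load.py
# -> model = AutoModel.from_pretrained("../visualglm-6b", trust_remote_code=True).half().cuda()
```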

ChatGLM?

This question is left as an after-school assignment for students to ponder.
