
fastllm

Introduction

fastllm is a high-performance large language model inference library implemented in pure C++ with no third-party dependencies.

6B-7B class models run smoothly even on Android devices.

Deployment discussion QQ group: 831641348

| Quick Start | Obtaining Models | Roadmap |

Feature Overview

  • 🚀 Pure C++ implementation, easy to port across platforms; can be compiled directly on Android
  • 🚀 NEON acceleration on ARM, AVX acceleration on x86, CUDA acceleration on NVIDIA GPUs; fast on every platform
  • 🚀 Supports accelerated inference with floating-point (FP32), half-precision (FP16), and quantized (INT8, INT4) models
  • 🚀 Supports multi-GPU deployment, and mixed GPU + CPU deployment
  • 🚀 Supports batch inference optimization
  • 🚀 Supports dynamic batching of concurrent requests
  • 🚀 Supports streaming output, which makes a typewriter effect easy to implement
  • 🚀 Callable from Python
  • 🚀 Front-end/back-end separation in the design, making it easy to add new compute devices
  • 🚀 Currently supports the ChatGLM series, various LLAMA models (ALPACA, VICUNA, etc.), BAICHUAN, QWEN, MOSS, MINICPM, and more
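The FP16/INT8/INT4 support above comes down to storing weights at reduced precision and dequantizing them during the matrix multiplies. The sketch below shows the general idea behind per-row INT8 quantization in plain Python; it is an illustration of the technique only, not fastllm's actual kernel code:

```python
# Illustrative per-row asymmetric INT8 quantization: each weight row is mapped
# to integer codes in [0, 255] plus a per-row scale and offset. Sketch of the
# general technique only, not fastllm's implementation.
def quantize_row(row):
    lo, hi = min(row), max(row)
    scale = (hi - lo) / 255 if hi > lo else 1.0
    q = [round((x - lo) / scale) for x in row]  # 8-bit codes
    return q, scale, lo

def dequantize_row(q, scale, lo):
    return [v * scale + lo for v in q]

row = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, lo = quantize_row(row)
restored = dequantize_row(q, scale, lo)
# every restored weight is within half a quantization step of the original
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(row, restored))
```

Storing 8-bit codes instead of 32-bit floats is what cuts weight memory by roughly 4x; INT4 pushes the same idea further with 16 levels per row.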

Two-line acceleration (in testing; currently only the ChatGLM series is supported)

Install the fastllm_pytools package with the following commands:

cd fastllm
mkdir build
cd build
cmake .. -DUSE_CUDA=ON # for a CPU-only build, use cmake .. -DUSE_CUDA=OFF
make -j
cd tools && python setup.py install

Then just add two lines to your original inference program to enable fastllm acceleration:

# This is the original program, which creates the model via the huggingface API
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)

# Add the following two lines to convert the huggingface model into a fastllm model
# Currently the from_hf API only accepts original models or ChatGLM's int4/int8
# quantized models; other quantized models cannot be converted yet
from fastllm_pytools import llm
model = llm.from_hf(model, tokenizer, dtype="float16")  # dtype supports "float16", "int8", "int4"

# Comment out the model.eval() line
#model = model.eval()

The wrapped model supports ChatGLM's chat and stream_chat API functions, so ChatGLM demo programs run without any further code changes.
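For reference, ChatGLM's chat returns the full reply plus the updated history, while stream_chat yields progressively longer partial replies. The stub below only illustrates that call shape; EchoModel is a hypothetical stand-in, not part of fastllm or ChatGLM:

```python
# Hypothetical stand-in that mimics the ChatGLM-style API surface which the
# fastllm-wrapped model preserves. EchoModel is for illustration only.
class EchoModel:
    def chat(self, tokenizer, query, history=None):
        # chat() returns the complete reply and the updated history
        reply = "echo: " + query
        return reply, (history or []) + [(query, reply)]

    def stream_chat(self, tokenizer, query, history=None):
        # stream_chat() yields ever-longer partial replies with the history
        reply = "echo: " + query
        for i in range(1, len(reply) + 1):
            yield reply[:i], (history or []) + [(query, reply[:i])]

model = EchoModel()
response, history = model.chat(None, "hi")   # response == "echo: hi"
for partial, _ in model.stream_chat(None, "hi"):
    print(partial, end="\r")                 # typewriter-style updates
```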

The model also supports the following APIs for generating replies:

# Generate a reply
print(model.response("你好"))

# Generate a reply with streaming
for response in model.stream_response("你好"):
    print(response, flush=True, end="")

A converted model can also be exported to a local file and loaded directly later, or loaded through the fastllm C++ API:

model.save("model.flm")  # export a fastllm model
new_model = llm.model("model.flm")  # load a fastllm model

Note: this feature is in testing; so far only ChatGLM and ChatGLM2 have been verified to work with the two-line acceleration.

PEFT support (in testing; currently only ChatGLM + LoRA)

🤗PEFT makes it easy to run fine-tuned large models. You can accelerate your PEFT model with fastllm as follows:

import sys
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer
sys.path.append('..')
model = AutoModel.from_pretrained("THUDM/chatglm-6b", device_map='cpu', trust_remote_code=True)
model = PeftModel.from_pretrained(model, "path/to/your/own/adapter") # use your own peft adapter here
model = model.eval()
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

# If the model has an active_adapter, it will also be enabled by default in the fastllm model
from fastllm_pytools import llm
model = llm.from_hf(model, tokenizer, dtype="float16")  # dtype supports "float16", "int8", "int4"

You can then use the model as before (for example, call the chat and stream_chat functions).

You can also switch the adapter used by the PEFT model:

model.set_adapter('your adapter name')

Or disable PEFT and use the original pre-trained model:

model.disable_adapter()

Inference Speed

A 6B-class int4 model reaches a minimum latency of about 5.5 ms on a single 4090.

A 6B-class fp16 model exceeds 10000 tokens/s peak throughput on a single 4090.

A 6B-class int4 model runs at roughly 4-5 tokens/s on a Snapdragon 865.

For detailed benchmark data, click here

CMMLU Accuracy Test

| Model | Data precision | CMMLU score |
| --- | --- | --- |
| ChatGLM2-6b-fp16 | float32 | 50.16 |
| ChatGLM2-6b-int8 | float32 | 50.14 |
| ChatGLM2-6b-int4 | float32 | 49.63 |

ChatGLM2 has been tested so far; for the detailed test steps, click here

Quick Start

Compilation

Building with cmake is recommended; install a C++ compiler, make, and cmake beforehand.

gcc 9.4 or later and cmake 3.23 or later are recommended.

GPU builds require a CUDA toolchain installed in advance; use as recent a CUDA version as possible.

Build with the following commands:

cd fastllm
mkdir build
cd build
cmake .. -DUSE_CUDA=ON # for a CPU-only build, use cmake .. -DUSE_CUDA=OFF
make -j

After the build finishes, the simple Python tool package can be installed with:

cd tools # now in the fastllm/build/tools directory
python setup.py install

For builds on other platforms, refer to the docs: TFACC platform

Running the demo programs

We assume you have already obtained a model named model.flm (see Obtaining Models; for first use, you can download a pre-converted model).

After compiling, the following demos can be used from the build directory:

# now in the fastllm/build directory

# command-line chat program with a typewriter effect (Linux only)
./main -p model.flm

# simple webui using streaming output + dynamic batching; supports concurrent access
./webui -p model.flm --port 1234

# Python command-line chat program, demonstrating model creation and streaming dialogue
python tools/cli_demo.py -p model.flm

# simple Python webui; requires installing streamlit-chat first
streamlit run tools/web_demo.py model.flm

On Windows, building with the CMake GUI + Visual Studio is recommended, completing everything in the graphical interface.

If you hit build problems, especially on Windows, see the FAQ.

Simple Python bindings

After compiling, if the simple Python tool package is installed, some basic APIs can be called from Python. (If it is not installed, the generated tools/fastllm_pytools can also be imported directly.)

# Create the model
from fastllm_pytools import llm
model = llm.model("model.flm")

# Generate a reply
print(model.response("你好"))

# Generate a reply with streaming
for response in model.stream_response("你好"):
    print(response, flush=True, end="")

Things like the number of CPU threads can also be configured; see fastllm_pytools for the detailed API reference.

This package does not include the low-level API; for deeper functionality, see the Python bindings API.

Python bindings API

cd pyfastllm
export USE_CUDA=OFF    # CPU only; remove this line to use the GPU
python3 setup.py build
python3 setup.py install
cd examples/
python cli_simple.py -m chatglm -p chatglm-6b-int8.flm
# or
python web_api.py -m chatglm -p chatglm-6b-int8.flm

The web API above can be tested with web_api_client.py. For more usage, see the API documentation.

Multi-GPU Deployment

Multi-GPU deployment with fastllm_pytools

from fastllm_pytools import llm
# The following three forms are supported; call one of them before creating the model
llm.set_device_map("cuda:0")  # deploy the model on a single device
llm.set_device_map(["cuda:0", "cuda:1"])  # split the model evenly across multiple devices
llm.set_device_map({"cuda:0": 10, "cuda:1": 5, "cpu": 1})  # split the model across devices with the given ratios
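The ratio form can be read as "assign the model's layers to devices in proportion to these weights". The sketch below illustrates such a proportional split; split_layers is a hypothetical helper for illustration, not fastllm's internal logic:

```python
# Hypothetical sketch: divide n_layers across devices in proportion to the
# ratios in device_map, mirroring {"cuda:0": 10, "cuda:1": 5, "cpu": 1}.
def split_layers(n_layers, device_map):
    total = sum(device_map.values())
    plan, assigned = {}, 0
    items = list(device_map.items())
    for i, (dev, weight) in enumerate(items):
        if i == len(items) - 1:
            count = n_layers - assigned  # last device takes the remainder
        else:
            count = round(n_layers * weight / total)
        plan[dev] = count
        assigned += count
    return plan

print(split_layers(32, {"cuda:0": 10, "cuda:1": 5, "cpu": 1}))
# {'cuda:0': 20, 'cuda:1': 10, 'cpu': 2}
```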

Multi-GPU deployment with the Python bindings API

import pyfastllm as llm
# The following form is supported; call it before creating the model
llm.set_device_map({"cuda:0": 10, "cuda:1": 5, "cpu": 1})  # split the model across devices with the given ratios

Multi-GPU deployment in C++

// The following form is supported; call it before creating the model
fastllm::SetDeviceMap({{"cuda:0", 10}, {"cuda:1", 5}, {"cpu", 1}});  // split the model across devices with the given ratios

Building and Running with Docker

Running with docker requires the NVIDIA Runtime installed locally, and docker's default runtime must be changed to nvidia.

  1. Install nvidia-container-runtime
sudo apt-get install nvidia-container-runtime
  2. Change docker's default runtime to nvidia

/etc/docker/daemon.json (the "default-runtime" entry is the key line):

{
  "registry-mirrors": [
    "https://hub-mirror.c.163.com",
    "https://mirror.baidubce.com"
  ],
  "runtimes": {
      "nvidia": {
          "path": "/usr/bin/nvidia-container-runtime",
          "runtimeArgs": []
      }
   },
   "default-runtime": "nvidia"
}

  3. Download pre-converted models into the models directory
models
  chatglm2-6b-fp16.flm
  chatglm2-6b-int8.flm
  4. Build and start the webui
DOCKER_BUILDKIT=0 docker compose up -d --build

Using on Android

Compilation

# Building on a PC requires downloading the NDK
# You can also try building on the phone itself: in termux, cmake and gcc can be used (no NDK needed)
mkdir build-android
cd build-android
export NDK=<your_ndk_directory>
# If your phone does not support it, remove "-DCMAKE_CXX_FLAGS=-march=armv8.2a+dotprod" (most recent phones support it)
cmake -DCMAKE_TOOLCHAIN_FILE=$NDK/build/cmake/android.toolchain.cmake -DANDROID_ABI=arm64-v8a -DANDROID_PLATFORM=android-23 -DCMAKE_CXX_FLAGS=-march=armv8.2a+dotprod ..
make -j

Running

  1. Install the termux app on your Android device
  2. Run termux-setup-storage in termux to get permission to read the phone's files
  3. Copy the main binary built with the NDK, along with the model file, to the phone and then into termux's home directory
  4. Make it executable with chmod 777 main
  5. Run the main binary; see ./main --help for the argument format

Obtaining Models

Model zoo

Pre-converted models can be downloaded from the following link:

huggingface

Model export

ChatGLM model export (the default script exports the ChatGLM2-6b model)

# Install the ChatGLM-6B environment first
# If you use your own fine-tuned model, modify the tokenizer/model creation code in chatglm_export.py
cd build
python3 tools/chatglm_export.py chatglm2-6b-fp16.flm float16 # export float16 model
python3 tools/chatglm_export.py chatglm2-6b-int8.flm int8 # export int8 model
python3 tools/chatglm_export.py chatglm2-6b-int4.flm int4 # export int4 model

baichuan model export (the default script exports the baichuan-13b-chat model)

# Install the baichuan environment first
# If you use your own fine-tuned model, modify the tokenizer/model creation code in baichuan2flm.py
# Export the model at the precision you need
cd build
python3 tools/baichuan2flm.py baichuan-13b-fp16.flm float16 # export float16 model
python3 tools/baichuan2flm.py baichuan-13b-int8.flm int8 # export int8 model
python3 tools/baichuan2flm.py baichuan-13b-int4.flm int4 # export int4 model

baichuan2 model export (the default script exports the baichuan2-7b-chat model)

# Install the baichuan2 environment first
# If you use your own fine-tuned model, modify the tokenizer/model creation code in baichuan2_2flm.py
# Export the model at the precision you need
cd build
python3 tools/baichuan2_2flm.py baichuan2-7b-fp16.flm float16 # export float16 model
python3 tools/baichuan2_2flm.py baichuan2-7b-int8.flm int8 # export int8 model
python3 tools/baichuan2_2flm.py baichuan2-7b-int4.flm int4 # export int4 model

MOSS model export

# Install the MOSS environment first
# If you use your own fine-tuned model, modify the tokenizer/model creation code in moss_export.py
# Export the model at the precision you need
cd build
python3 tools/moss_export.py moss-fp16.flm float16 # export float16 model
python3 tools/moss_export.py moss-int8.flm int8 # export int8 model
python3 tools/moss_export.py moss-int4.flm int4 # export int4 model

LLAMA-family model export

# Modify build/tools/alpaca2flm.py to perform the export
# Different llama models use very different prompts; configure them per the parameters in torch2flm.py

Conversion examples for some models can be found here

QWEN model export

  • Qwen
# Install the QWen environment first
# If you use your own fine-tuned model, modify the tokenizer/model creation code in qwen2flm.py
# Export the model at the precision you need
cd build
python3 tools/qwen2flm.py qwen-7b-fp16.flm float16 # export float16 model
python3 tools/qwen2flm.py qwen-7b-int8.flm int8 # export int8 model
python3 tools/qwen2flm.py qwen-7b-int4.flm int4 # export int4 model
  • Qwen1.5
# Install the QWen2 environment first (transformers >= 4.37.0)
# Export the model at the precision you need
cd build
python3 tools/llamalike2flm.py qwen1.5-7b-fp16.flm float16 "qwen/Qwen1.5-4B-Chat" # export Qwen1.5-4B-Chat float16 model
python3 tools/llamalike2flm.py qwen1.5-7b-int8.flm int8 "qwen/Qwen1.5-7B-Chat" # export Qwen1.5-7B-Chat int8 model
python3 tools/llamalike2flm.py qwen1.5-7b-int4.flm int4 "qwen/Qwen1.5-14B-Chat" # export Qwen1.5-14B-Chat int4 model
# The last argument can be replaced with a local model path

MINICPM model export

# Install the MiniCPM environment first (transformers >= 4.36.0)
# The default script exports the MiniCPM-2B-dpo-fp16 model
cd build
python tools/minicpm2flm.py minicpm-2b-float16.flm # export dpo-float16 model
./main -p minicpm-2b-float16.flm # run the model

Roadmap

Also known as the wishlist. If there is a feature you need, please raise it in the discussions.

Short-term plans

  • Add MMLU, CMMLU and other evaluation programs
  • Support direct conversion of already-quantized huggingface models
  • Implement context extrapolation to 8K length

Medium-term plans

  • Support more backends, such as OpenCL, Vulkan, and NPU accelerators
  • Support and validate more models; grow the model zoo
  • Optimize the tokenizer (since the original model's tokenizer can currently be used directly from Python, this is not urgent)

Long-term plans

  • Support ONNX model import and inference
  • Support model fine-tuning

fastllm's People

Contributors

acupofespresso, aofengdaxia, bjmsong, caseylai, colorfuldick, denghongcai, dignfei, dongkid, felix-fei-fei, fluxlinkage, helloimcx, hubin858130, jacques-chen, kiranosora, leomax-xiong, levinxo, lockmatrix, lxrite, mistsun-chen, purpleroc, siemonchan, tiansztiansz, tylunasli, vinlic, wangyumu, wangzhaode, wildkid1024, xinaiwunai, yuanphoenix, ztxz16


fastllm's Issues

Problem when converting a model with quant

Running ./quant -p chatglm-6b-fp32.flm -o chatglm-6b-fp16.flm -b 16 produces the following error:
FastLLM Error: Unkown model type: unknown
terminate called after throwing an instance of 'std::string'
Aborted (core dumped)

Compilation warnings

-- The CXX compiler identification is GNU 7.3.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- USE_CUDA: ON
-- PYTHON_API: OFF
-- CMAKE_CXX_FLAGS -pthread --std=c++17 -O2 -march=native
-- The CUDA compiler identification is NVIDIA 11.8.89
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Configuring done (1.8s)
CMake Warning (dev) in CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "fastllm".
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) in CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "main".
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) in CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "quant".
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) in CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "webui".
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) in CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "benchmark".
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) in CMakeLists.txt:
  Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
  empty CUDA_ARCHITECTURES not allowed.  Run "cmake --help-policy CMP0104"
  for policy details.  Use the cmake_policy command to set the policy and
  suppress this warning.

  CUDA_ARCHITECTURES is empty for target "fastllm_tools".
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Generating done (0.0s)
-- Build files have been written to: /users_3/fastllm/build

Is this normal?

quant methods

Is there a plan to implement more flexible quantization methods like ggml's, e.g. Q4_1 and q3_k_m?

Segmentation fault (core dumped)

Steps:
(1) Set up the build environment per readme.md; status: build succeeded (the main and quant binaries exist);
(2) Exported my own fine-tuned (ptv2) model as a floating-point model (/root/autodl-tmp/chatglm-6b.bin, 25 GB); status: export succeeded;
(3) Ran ./quant -m chatglm -p /root/autodl-tmp/chatglm-6b.bin -o /root/autodl-tmp/chatglm-6b-int8.bin -b 8 to convert the floating-point model to int8; this step fails.

Error message:
Segmentation fault (core dumped)

Environment:
Server platform: AutoDL
Ubuntu 20.04
CMake 3.16.3
CUDA 11.3
GPU: RTX2080Ti (11GB)
RAM: 40GB

Can the Windows platform be supported?

It compiles on Linux and runs fast, but the build fails on Windows.
I hope to run it on Windows; combined with whisper and VITS, that would enable real-time conversational AI!

centos install error

$ cd /opt/jtmodel/chatgpt-mi10/fastllm/build
$ cmake ..

Error:
-- USE_CUDA: OFF
-- CMAKE_CXX_FLAGS -pthread --std=c++17 -O2 -march=native
CMake Error at CMakeLists.txt:35 (target_link_libraries):
Object library target "fastllm" may not link to anything.

-- Configuring incomplete, errors occurred!
See also "/opt/jtmodel/chatgpt-mi10/fastllm/build/CMakeFiles/CMakeOutput.log".

make build error

System: Ubuntu-18.04 x86_64
GPU: P100
Build fails when running: make -j4
[ 10%] Building CXX object CMakeFiles/fastllm.dir/src/fastllm.cpp.o
/data/code/fastllm/src/fastllm.cpp: In function ‘int fastllm::DotU4U8(uint8_t*, uint8_t*, int)’:
/data/code/fastllm/src/fastllm.cpp:192:20: error: ‘_mm256_set_m128i’ was not declared in this scope
__m256i bytex = _mm256_set_m128i(_mm_srli_epi16(orix, 4), orix);
^~~~~~~~~~~~~~~~
/data/code/fastllm/src/fastllm.cpp:192:20: note: suggested alternative: ‘_mm256_set_epi8’
__m256i bytex = _mm256_set_m128i(_mm_srli_epi16(orix, 4), orix);
^~~~~~~~~~~~~~~~
_mm256_set_epi8
/data/code/fastllm/src/fastllm.cpp: In member function ‘void fastllm::Data::CalcWeightSum()’:
/data/code/fastllm/src/fastllm.cpp:706:31: error: ‘_mm256_set_m128i’ was not declared in this scope
__m256i bytex = _mm256_set_m128i(_mm_srli_epi16(orix, 4), orix);
^~~~~~~~~~~~~~~~
/data/code/fastllm/src/fastllm.cpp:706:31: note: suggested alternative: ‘_mm256_set_epi8’
__m256i bytex = _mm256_set_m128i(_mm_srli_epi16(orix, 4), orix);
^~~~~~~~~~~~~~~~
_mm256_set_epi8
CMakeFiles/fastllm.dir/build.make:79: recipe for target 'CMakeFiles/fastllm.dir/src/fastllm.cpp.o' failed
make[2]: *** [CMakeFiles/fastllm.dir/src/fastllm.cpp.o] Error 1
CMakeFiles/Makefile2:179: recipe for target 'CMakeFiles/fastllm.dir/all' failed
make[1]: *** [CMakeFiles/fastllm.dir/all] Error 2
Makefile:100: recipe for target 'all' failed
make: *** [all] Error 2

Could you share your build environment? How can this build problem be solved?
Are there plans to support chatglm-16 later? Can the code run on other chips (P100, V100, T4, ...)?
Thanks

Error when converting an hf model to an flm model

Installed the fastllm_pytools package with the following commands:

cd fastllm
mkdir build
cd build
cmake .. -DUSE_CUDA=ON
make -j
cd tools && python setup.py install

from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code = True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code = True)
from fastllm_pytools import llm
model = llm.from_hf(model, tokenizer, dtype = "float16") 

root@7c296f76e678:/home/user/code/build/tools# python3
Python 3.10.6 (main, May 16 2023, 09:56:28) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoTokenizer, AutoModel
>>> model='/home/user/code/chatglm2-6b'
>>> tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code = True)
>>> model = AutoModel.from_pretrained(model, trust_remote_code = True)
Loading checkpoint shards:  71%|████████████████████████████████████████               | 4/7 [00:07<00:05,  1.77s/it 
Loading checkpoint shards:  86%|████████████████████████████████████████                                                                                     
Loading checkpoint shards: 100%|████████████████████████████████████████                                                                                     
Loading checkpoint shards: 100%|████████████████████████████████████████              | 7/7 [00:11<00:00,  1.68s/it]
>>> from fastllm_pytools import llm
>>> model = llm.from_hf(model, tokenizer, dtype = "float16")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/user/code/build/tools/fastllm_pytools/llm.py", line 35, in from_hf
    return hf_model.create(model, tokenizer, dtype = dtype);
  File "/home/user/code/build/tools/fastllm_pytools/hf_model.py", line 49, in create
    model_type = model.config.__dict__["model_type"];
KeyError: 'model_type'

With the ChatGLM2 model, the step model = llm.from_hf(model, tokenizer, dtype = "float16") raises the error above.

TP multi-GPU deployment

Will tensor-parallel (TP) multi-GPU deployment be supported later? FasterTransformer's TP-sharded Bloom-7b approach shows a clear speedup.

pyfastllm build issue: pulling the pybind11 submodule

Building pyfastllm requires the pybind11 module; initialize and update the submodules before building:
cd build-py
git submodule init && git submodule update
cmake .. -DUSE_CUDA=ON -DPY_API=ON
make -j4
...

First-token latency issue

Tested generation speed on V100, CUDA 11.8, gcc 11.3 with bs=1 and input query (prompt+query) length 900+; first-token latency is long:

  1. fastllm:
    • (1) input length 1200, first token 3.32s, other tokens 20ms
    • (2) input length 1200, first token 3.32s, other tokens 20ms
    • (3) ...
  2. torch:
    • (1) input length 1200, first token 1.56s, other tokens 51ms
    • (2) input length 1200, first token 0.12s, other tokens 51ms
    • (3) input length 1200, first token 0.11s, other tokens 51ms
    • (4) ...
      So fastllm's first-token latency varies strongly with query length and recurs identically for every query, whereas torch's decreases over successive queries. With short outputs (<50 tokens), there is no advantage over torch.

Error when running the cmake -j step on the machine

  1. Building in a Docker container on Ubuntu 20.4
  2. Tried both GCC 9 and GCC 11; the error is the same
  3. The machine is an old Dell R720, so the CPU may be fairly old

Please help take a look.

In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:47,
                 from /root/app/fastllm/include/utils/utils.h:21,
                 from /root/app/fastllm/src/fastllm.cpp:5:
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h: In member function 'void fastllm::Data::CalcWeightSum()':
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h:119:1: error: inlining failed in call to 'always_inline' '__m256i _mm256_add_epi32(__m256i, __m256i)': target specific option mismatch
  119 | _mm256_add_epi32 (__m256i __A, __m256i __B)
      | ^~~~~~~~~~~~~~~~
/root/app/fastllm/src/fastllm.cpp:464:43: note: called from here
  464 |                     acc = _mm256_add_epi32(acc, _mm256_madd_epi16(mx1, ones));
      |                           ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:47,
                 from /root/app/fastllm/include/utils/utils.h:21,
                 from /root/app/fastllm/src/fastllm.cpp:5:
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h:341:1: error: inlining failed in call to 'always_inline' '__m256i _mm256_madd_epi16(__m256i, __m256i)': target specific option mismatch
  341 | _mm256_madd_epi16 (__m256i __A, __m256i __B)
      | ^~~~~~~~~~~~~~~~~
/root/app/fastllm/src/fastllm.cpp:464:43: note: called from here
  464 |                     acc = _mm256_add_epi32(acc, _mm256_madd_epi16(mx1, ones));
      |                           ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:47,
                 from /root/app/fastllm/include/utils/utils.h:21,
                 from /root/app/fastllm/src/fastllm.cpp:5:
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h:119:1: error: inlining failed in call to 'always_inline' '__m256i _mm256_add_epi32(__m256i, __m256i)': target specific option mismatch
  119 | _mm256_add_epi32 (__m256i __A, __m256i __B)
      | ^~~~~~~~~~~~~~~~
/root/app/fastllm/src/fastllm.cpp:463:43: note: called from here
  463 |                     acc = _mm256_add_epi32(acc, _mm256_madd_epi16(mx0, ones));
      |                           ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:47,
                 from /root/app/fastllm/include/utils/utils.h:21,
                 from /root/app/fastllm/src/fastllm.cpp:5:
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h:341:1: error: inlining failed in call to 'always_inline' '__m256i _mm256_madd_epi16(__m256i, __m256i)': target specific option mismatch
  341 | _mm256_madd_epi16 (__m256i __A, __m256i __B)
      | ^~~~~~~~~~~~~~~~~
/root/app/fastllm/src/fastllm.cpp:463:43: note: called from here
  463 |                     acc = _mm256_add_epi32(acc, _mm256_madd_epi16(mx0, ones));
      |                           ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:47,
                 from /root/app/fastllm/include/utils/utils.h:21,
                 from /root/app/fastllm/src/fastllm.cpp:5:
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h:482:1: error: inlining failed in call to 'always_inline' '__m256i _mm256_cvtepu8_epi16(__m128i)': target specific option mismatch
  482 | _mm256_cvtepu8_epi16 (__m128i __X)
      | ^~~~~~~~~~~~~~~~~~~~
/root/app/fastllm/src/fastllm.cpp:462:55: note: called from here
  462 |                     __m256i mx1 = _mm256_cvtepu8_epi16(_mm256_extractf128_si256(ax, 1));
      |                                   ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... <snip> ...
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:47,
                 from /root/app/fastllm/include/utils/utils.h:21,
                 from /root/app/fastllm/src/fastllm.cpp:5:
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h:341:1: error: inlining failed in call to 'always_inline' '__m256i _mm256_madd_epi16(__m256i, __m256i)': target specific option mismatch
  341 | _mm256_madd_epi16 (__m256i __A, __m256i __B)
      | ^~~~~~~~~~~~~~~~~
/root/app/fastllm/src/fastllm.cpp:514:51: note: called from here
  514 |                             acc = _mm256_add_epi32(acc, _mm256_madd_epi16(mx0, ones));
      |                                   ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:47,
                 from /root/app/fastllm/include/utils/utils.h:21,
                 from /root/app/fastllm/src/fastllm.cpp:5:
/usr/lib/gcc/x86_64-linux-gnu/11/include/avx2intrin.h:482:1: error: inlining failed in call to 'always_inline' '__m256i _mm256_cvtepu8_epi16(__m128i)': target specific option mismatch
  482 | _mm256_cvtepu8_epi16 (__m128i __X)
      | ^~~~~~~~~~~~~~~~~~~~
/root/app/fastllm/src/fastllm.cpp:512:63: note: called from here
  512 |                             __m256i mx1 = _mm256_cvtepu8_epi16(_mm256_extractf128_si256(bx, 1));
      |                                           ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
make[2]: *** [CMakeFiles/fastllm.dir/build.make:76: CMakeFiles/fastllm.dir/src/fastllm.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:223: CMakeFiles/fastllm_tools.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:93: CMakeFiles/fastllm.dir/all] Error 2
make: *** [Makefile:91: all] Error 2

Details of quantization

What quantization strategy do the int8 and int4 linear layers use? And at inference time, are the weights dequantized back to fp16 before computing?

cuda runtime error

cublasSgemmStridedBatched, called from FastllmCudaBatchMatMul, returns status code 15, so inference cannot proceed. Does the CUDA version run correctly on your side?

Running on Android: file transfer problem

The ReadMe says to push main and the model into the termux directory.
The current directory is /data/data/com.termux/files/home/,
but adb push reports Permission denied.

Is rooting the phone required? How can the files be transferred into the termux directory?
Thanks!

Quantization question

Is only the weight quantized at the moment? Is activation quantization being considered?

Reporting a bug in the Tokenizer

ChatGLM's official implementation uses the sentencepiece library (import sentencepiece as spm). In self.sp.EncodeAsPieces(text), an English word such as "hello" is turned into "▁hello" (note the leading symbol is not an underscore). That should be the standard behavior, but this project does not appear to do the same.
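For context on the report above: SentencePiece (with its default add_dummy_prefix normalization) prepends a space and rewrites every space as the U+2581 metasymbol before segmenting, which is why EncodeAsPieces yields "▁hello". A tiny sketch of just that normalization step (illustration only, not the project's tokenizer):

```python
# Sketch of SentencePiece-style normalization: a dummy leading space is added
# and all spaces become the U+2581 metasymbol ("▁"), so word-initial pieces
# carry the marker. Real SentencePiece does much more than this.
def sp_style_normalize(text):
    return (" " + text).replace(" ", "\u2581")

print(sp_style_normalize("hello"))      # ▁hello
print(sp_style_normalize("say hello"))  # ▁say▁hello
```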

Streaming output

Great work!
May I ask: after quantization, does the model's output support streaming?

Inference result ChatGLM: <eop><eop><eop><eop><eop>

After converting the model, the inference output contains only <eop>; what is the cause?

  1. tools/chatglm_export.py ../chatglm-6b.bin
  2. (CPU) build/quant -m chatglm -p chatglm-6b.bin -o chatglm-6b-int8.bin -b 8
  3. Inference (CPU/GPU): build/main -m chatglm -p chatglm-6b-int8.bin
    Result:

Load (368 / 368)
Warmup...
finish.
User: 1
ChatGLM: '<eop...'

What could be causing this?

Would you consider supporting bloom as well?

Hello, would you consider supporting bloom too? Its architecture should be similar to llama's, but bloom comes in many model sizes, which suits mobile scenarios better and could enrich this project.

Bug when loading a model

FastLLM Error: FileBuffer.ReadInt error.

terminate called after throwing an instance of 'std::__cxx11::basic_string<char, std::char_traits, std::allocator >'
Aborted
