chinese-vicuna's People

Contributors

amy17519, chuge0335, eltociear, facico, hughnew, justairr, lzy-the-boys


chinese-vicuna's Issues

Questions about corpus generation

Hi, I saw that the project uses corpus data generated with https://github.com/LianjiaTech/BELLE.
I just looked through that project, and there are two things I don't understand; could you please explain them?
1. Why are the seed tasks in zh_seed_tasks.json needed? What do the seed tasks do?

2. When generating data:

  pip install -r requirements.txt
	export OPENAI_API_KEY=YOUR_API_KEY
	python generate_instruction.py generate_instruction_following_data

What is the final argument, generate_instruction_following_data? Does it name the file the generated data is stored in?
Many thanks.
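
A hedged reading of both questions, based on the Stanford Alpaca style self-instruct script that BELLE adapts (the layout below is assumed from that pattern and should be checked against the actual generate_instruction.py): the seed tasks are hand-written examples that get sampled into each prompt so the API model imitates their style when producing new instructions, and the trailing CLI argument is the name of the function to run, dispatched by python-fire, not an output file.

    # Sketch of the usual fire-based layout of such a script (assumed, not copied from this repo).
    # `python generate_instruction.py generate_instruction_following_data` tells fire to call
    # the function of that name, so the argument is a function name, not an output path.
    import json
    import fire

    def generate_instruction_following_data(
        seed_tasks_path="zh_seed_tasks.json",   # hand-written seed tasks
        output_dir="./",
        num_instructions_to_generate=100,
    ):
        # Seed tasks act as few-shot examples: a handful are sampled into each prompt so the
        # OpenAI model generates new instructions in the same style and format.
        # (File format assumed to be one JSON object per line, as in Alpaca.)
        seed_tasks = [json.loads(line) for line in open(seed_tasks_path, encoding="utf-8")]
        print(f"Loaded {len(seed_tasks)} seed tasks from {seed_tasks_path}")
        # ... build prompts from random seed tasks and call the OpenAI API ...

    if __name__ == "__main__":
        fire.Fire()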

torch.cuda.OutOfMemoryError

I am using the 13B model with the following command:

CUDA_VISIBLE_DEVICES=1 python generate.py --model_path "decapoda-research/llama-13b-hf" --lora_path "Chinese-Vicuna/Chinese-Vicuna-lora-13b-belle-and-guanaco" --use_local 1

It eventually fails with this error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 68.00 MiB (GPU 0; 10.75 GiB total capacity; 10.17 GiB already allocated; 47.94 MiB free; 10.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

The GPU is an RTX 2080 with 11 GB.

I have already set

export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:32

but it did not help.

I also set batch_size = 2 in generate.py, which made no difference either.

What would you suggest?
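
For rough sizing (generic transformers/peft usage, not necessarily how this repo's generate.py loads the model): a 13B LLaMA needs roughly 13-14 GB for the weights alone even in int8, so an 11 GB RTX 2080 cannot hold it regardless of batch size; the realistic options are the 7B checkpoint, CPU offload via device_map, or a larger GPU. A minimal sketch of 8-bit loading with offload, assuming transformers + bitsandbytes + peft:

    # Minimal sketch, not the repo's own script.
    import torch
    from transformers import LlamaForCausalLM, LlamaTokenizer
    from peft import PeftModel

    BASE = "decapoda-research/llama-13b-hf"
    LORA = "Chinese-Vicuna/Chinese-Vicuna-lora-13b-belle-and-guanaco"

    tokenizer = LlamaTokenizer.from_pretrained(BASE)
    model = LlamaForCausalLM.from_pretrained(
        BASE,
        load_in_8bit=True,        # int8 weights roughly halve memory versus fp16
        torch_dtype=torch.float16,
        device_map="auto",        # lets accelerate offload layers to CPU RAM when the GPU is full
    )
    model = PeftModel.from_pretrained(model, LORA, torch_dtype=torch.float16)
    model.eval()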

undefined reference to `ggml_new_tensor_1d' `ggml_new_tensor_2d'

Running make chat fails with the following linker errors:
g++ chat.cpp -o chat
/usr/bin/ld: /tmp/ccCnl7Fq.o: in function llama_model_load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, llama_model&, gpt_vocab&, int)': chat.cpp:(.text+0x62c): undefined reference to ggml_type_sizef'
/usr/bin/ld: chat.cpp:(.text+0x6cc): undefined reference to ggml_type_sizef' /usr/bin/ld: chat.cpp:(.text+0x778): undefined reference to ggml_type_sizef'
/usr/bin/ld: chat.cpp:(.text+0x828): undefined reference to ggml_type_sizef' /usr/bin/ld: chat.cpp:(.text+0x8e8): undefined reference to ggml_type_sizef'
/usr/bin/ld: /tmp/ccCnl7Fq.o:chat.cpp:(.text+0x9a8): more undefined references to ggml_type_sizef' follow /usr/bin/ld: /tmp/ccCnl7Fq.o: in function llama_model_load(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, llama_model&, gpt_vocab&, int)':
chat.cpp:(.text+0x10cb): undefined reference to ggml_init' /usr/bin/ld: chat.cpp:(.text+0x11a1): undefined reference to ggml_new_tensor_2d'
/usr/bin/ld: chat.cpp:(.text+0x11c9): undefined reference to ggml_new_tensor_1d' /usr/bin/ld: chat.cpp:(.text+0x11f8): undefined reference to ggml_new_tensor_2d'
/usr/bin/ld: chat.cpp:(.text+0x13c0): undefined reference to ggml_new_tensor_1d' /usr/bin/ld: chat.cpp:(.text+0x13ee): undefined reference to ggml_new_tensor_2d'
/usr/bin/ld: chat.cpp:(.text+0x141d): undefined reference to ggml_new_tensor_2d' /usr/bin/ld: chat.cpp:(.text+0x144c): undefined reference to ggml_new_tensor_2d'
/usr/bin/ld: chat.cpp:(.text+0x147b): undefined reference to ggml_new_tensor_2d' /usr/bin/ld: chat.cpp:(.text+0x14a3): undefined reference to ggml_new_tensor_1d'
/usr/bin/ld: chat.cpp:(.text+0x14d2): undefined reference to ggml_new_tensor_2d' /usr/bin/ld: chat.cpp:(.text+0x1501): undefined reference to ggml_new_tensor_2d'
/usr/bin/ld: chat.cpp:(.text+0x1530): undefined reference to ggml_new_tensor_2d' /usr/bin/ld: chat.cpp:(.text+0x1bd3): undefined reference to ggml_new_tensor_1d'
/usr/bin/ld: chat.cpp:(.text+0x1bfb): undefined reference to ggml_new_tensor_1d' /usr/bin/ld: chat.cpp:(.text+0x1c19): undefined reference to ggml_nbytes'
/usr/bin/ld: chat.cpp:(.text+0x1c2f): undefined reference to ggml_nbytes' /usr/bin/ld: chat.cpp:(.text+0x2325): undefined reference to ggml_nelements'
/usr/bin/ld: chat.cpp:(.text+0x238c): undefined reference to ggml_nelements' /usr/bin/ld: chat.cpp:(.text+0x2659): undefined reference to ggml_type_size'
/usr/bin/ld: chat.cpp:(.text+0x266f): undefined reference to ggml_type_size' /usr/bin/ld: chat.cpp:(.text+0x2685): undefined reference to ggml_type_size'
/usr/bin/ld: chat.cpp:(.text+0x26c6): undefined reference to ggml_type_size' /usr/bin/ld: chat.cpp:(.text+0x2772): undefined reference to ggml_blck_size'
/usr/bin/ld: chat.cpp:(.text+0x2792): undefined reference to ggml_nbytes' /usr/bin/ld: chat.cpp:(.text+0x27be): undefined reference to ggml_nbytes'
/usr/bin/ld: chat.cpp:(.text+0x2826): undefined reference to ggml_nbytes' /usr/bin/ld: chat.cpp:(.text+0x285a): undefined reference to ggml_nbytes'
/usr/bin/ld: chat.cpp:(.text+0x2883): undefined reference to ggml_nbytes' /usr/bin/ld: chat.cpp:(.text+0x28b2): undefined reference to ggml_blck_size'
/usr/bin/ld: chat.cpp:(.text+0x28d2): undefined reference to ggml_nbytes' /usr/bin/ld: chat.cpp:(.text+0x2913): undefined reference to ggml_nbytes'
/usr/bin/ld: chat.cpp:(.text+0x29a8): undefined reference to ggml_blck_size' /usr/bin/ld: chat.cpp:(.text+0x29c3): undefined reference to ggml_type_size'
/usr/bin/ld: chat.cpp:(.text+0x2a57): undefined reference to ggml_blck_size' /usr/bin/ld: chat.cpp:(.text+0x2a72): undefined reference to ggml_type_size'
/usr/bin/ld: chat.cpp:(.text+0x2b06): undefined reference to ggml_blck_size' /usr/bin/ld: chat.cpp:(.text+0x2b21): undefined reference to ggml_type_size'
/usr/bin/ld: chat.cpp:(.text+0x2bb8): undefined reference to ggml_nbytes' /usr/bin/ld: /tmp/ccCnl7Fq.o: in function llama_eval(llama_model const&, int, int, std::vector<int, std::allocator > const&, std::vector<float, std::allocator >&, unsigned long&)':
chat.cpp:(.text+0x34ae): undefined reference to ggml_init' /usr/bin/ld: chat.cpp:(.text+0x34f4): undefined reference to ggml_new_tensor_1d'
/usr/bin/ld: chat.cpp:(.text+0x3513): undefined reference to ggml_element_size' /usr/bin/ld: chat.cpp:(.text+0x3569): undefined reference to ggml_get_rows'
/usr/bin/ld: chat.cpp:(.text+0x35b3): undefined reference to ggml_rms_norm' /usr/bin/ld: chat.cpp:(.text+0x35f4): undefined reference to ggml_repeat'
/usr/bin/ld: chat.cpp:(.text+0x3610): undefined reference to ggml_mul' /usr/bin/ld: chat.cpp:(.text+0x3652): undefined reference to ggml_mul_mat'
/usr/bin/ld: chat.cpp:(.text+0x3694): undefined reference to ggml_mul_mat' /usr/bin/ld: chat.cpp:(.text+0x36d6): undefined reference to ggml_mul_mat'
/usr/bin/ld: chat.cpp:(.text+0x36fd): undefined reference to ggml_element_size' /usr/bin/ld: chat.cpp:(.text+0x3753): undefined reference to ggml_view_1d'
/usr/bin/ld: chat.cpp:(.text+0x376d): undefined reference to ggml_element_size' /usr/bin/ld: chat.cpp:(.text+0x37c3): undefined reference to ggml_view_1d'
/usr/bin/ld: chat.cpp:(.text+0x37ea): undefined reference to ggml_cpy' /usr/bin/ld: chat.cpp:(.text+0x37ff): undefined reference to ggml_build_forward_expand'
/usr/bin/ld: chat.cpp:(.text+0x381f): undefined reference to ggml_cpy' /usr/bin/ld: chat.cpp:(.text+0x3834): undefined reference to ggml_build_forward_expand'
/usr/bin/ld: chat.cpp:(.text+0x386a): undefined reference to ggml_new_tensor_3d' /usr/bin/ld: chat.cpp:(.text+0x3886): undefined reference to ggml_cpy'
/usr/bin/ld: chat.cpp:(.text+0x38aa): undefined reference to ggml_rope' /usr/bin/ld: chat.cpp:(.text+0x38d2): undefined reference to ggml_permute'
/usr/bin/ld: chat.cpp:(.text+0x391c): undefined reference to ggml_element_size' /usr/bin/ld: chat.cpp:(.text+0x3963): undefined reference to ggml_view_1d'
/usr/bin/ld: chat.cpp:(.text+0x3983): undefined reference to ggml_reshape_3d' /usr/bin/ld: chat.cpp:(.text+0x39a7): undefined reference to ggml_rope'
/usr/bin/ld: chat.cpp:(.text+0x39cf): undefined reference to ggml_permute' /usr/bin/ld: chat.cpp:(.text+0x39f6): undefined reference to ggml_mul_mat'
/usr/bin/ld: chat.cpp:(.text+0x3a3d): undefined reference to ggml_new_f32' /usr/bin/ld: chat.cpp:(.text+0x3a59): undefined reference to ggml_scale'
/usr/bin/ld: chat.cpp:(.text+0x3a7f): undefined reference to ggml_diag_mask_inf' /usr/bin/ld: chat.cpp:(.text+0x3a9f): undefined reference to ggml_soft_max'
/usr/bin/ld: chat.cpp:(.text+0x3ae9): undefined reference to ggml_element_size' /usr/bin/ld: chat.cpp:(.text+0x3b30): undefined reference to ggml_view_1d'
/usr/bin/ld: chat.cpp:(.text+0x3b50): undefined reference to ggml_reshape_3d' /usr/bin/ld: chat.cpp:(.text+0x3b78): undefined reference to ggml_permute'
/usr/bin/ld: chat.cpp:(.text+0x3b9f): undefined reference to ggml_mul_mat' /usr/bin/ld: chat.cpp:(.text+0x3bd2): undefined reference to ggml_permute'
/usr/bin/ld: chat.cpp:(.text+0x3bf9): undefined reference to ggml_new_tensor_2d' /usr/bin/ld: chat.cpp:(.text+0x3c15): undefined reference to ggml_cpy'
/usr/bin/ld: chat.cpp:(.text+0x3c57): undefined reference to ggml_mul_mat' /usr/bin/ld: chat.cpp:(.text+0x3c7e): undefined reference to ggml_add'
/usr/bin/ld: chat.cpp:(.text+0x3c9e): undefined reference to ggml_rms_norm' /usr/bin/ld: chat.cpp:(.text+0x3ce0): undefined reference to ggml_repeat'
/usr/bin/ld: chat.cpp:(.text+0x3cfc): undefined reference to ggml_mul' /usr/bin/ld: chat.cpp:(.text+0x3d3e): undefined reference to ggml_mul_mat'
/usr/bin/ld: chat.cpp:(.text+0x3d80): undefined reference to ggml_mul_mat' /usr/bin/ld: chat.cpp:(.text+0x3da0): undefined reference to ggml_silu'
/usr/bin/ld: chat.cpp:(.text+0x3dc7): undefined reference to ggml_mul' /usr/bin/ld: chat.cpp:(.text+0x3e09): undefined reference to ggml_mul_mat'
/usr/bin/ld: chat.cpp:(.text+0x3e30): undefined reference to ggml_add' /usr/bin/ld: chat.cpp:(.text+0x3e6a): undefined reference to ggml_rms_norm'
/usr/bin/ld: chat.cpp:(.text+0x3e95): undefined reference to ggml_repeat' /usr/bin/ld: chat.cpp:(.text+0x3eb1): undefined reference to ggml_mul'
/usr/bin/ld: chat.cpp:(.text+0x3edc): undefined reference to ggml_mul_mat' /usr/bin/ld: chat.cpp:(.text+0x3efc): undefined reference to ggml_build_forward_expand'
/usr/bin/ld: chat.cpp:(.text+0x3f15): undefined reference to ggml_graph_compute' /usr/bin/ld: chat.cpp:(.text+0x3f4f): undefined reference to ggml_get_data'
/usr/bin/ld: chat.cpp:(.text+0x3fa5): undefined reference to ggml_used_mem' /usr/bin/ld: chat.cpp:(.text+0x3fd2): undefined reference to ggml_free'
/usr/bin/ld: /tmp/ccCnl7Fq.o: in function llama_print_system_info()': chat.cpp:(.text+0x40d1): undefined reference to ggml_cpu_has_avx'
/usr/bin/ld: chat.cpp:(.text+0x414e): undefined reference to ggml_cpu_has_avx2' /usr/bin/ld: chat.cpp:(.text+0x41cb): undefined reference to ggml_cpu_has_avx512'
/usr/bin/ld: chat.cpp:(.text+0x4248): undefined reference to ggml_cpu_has_fma' /usr/bin/ld: chat.cpp:(.text+0x42c5): undefined reference to ggml_cpu_has_neon'
/usr/bin/ld: chat.cpp:(.text+0x4342): undefined reference to ggml_cpu_has_arm_fma' /usr/bin/ld: chat.cpp:(.text+0x43bf): undefined reference to ggml_cpu_has_f16c'
/usr/bin/ld: chat.cpp:(.text+0x443c): undefined reference to ggml_cpu_has_fp16_va' /usr/bin/ld: chat.cpp:(.text+0x44b9): undefined reference to ggml_cpu_has_wasm_simd'
/usr/bin/ld: chat.cpp:(.text+0x4536): undefined reference to ggml_cpu_has_blas' /usr/bin/ld: chat.cpp:(.text+0x45b3): undefined reference to ggml_cpu_has_sse3'
/usr/bin/ld: chat.cpp:(.text+0x4630): undefined reference to ggml_cpu_has_vsx' /usr/bin/ld: /tmp/ccCnl7Fq.o: in function main':
chat.cpp:(.text+0x4a7b): undefined reference to ggml_time_init' /usr/bin/ld: chat.cpp:(.text+0x4a80): undefined reference to ggml_time_us'
/usr/bin/ld: chat.cpp:(.text+0x4b0d): undefined reference to gpt_params_parse(int, char**, gpt_params&)' /usr/bin/ld: chat.cpp:(.text+0x4bb2): undefined reference to gpt_random_prompt[abi:cxx11](std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul>&)'
/usr/bin/ld: chat.cpp:(.text+0x4c0c): undefined reference to ggml_time_us' /usr/bin/ld: chat.cpp:(.text+0x4c8b): undefined reference to ggml_time_us'
/usr/bin/ld: chat.cpp:(.text+0x4d6c): undefined reference to llama_tokenize(gpt_vocab const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)' /usr/bin/ld: chat.cpp:(.text+0x4dd8): undefined reference to llama_tokenize(gpt_vocab const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, bool)'
/usr/bin/ld: chat.cpp:(.text+0x4e44): undefined reference to llama_tokenize(gpt_vocab const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)' /usr/bin/ld: chat.cpp:(.text+0x51b6): undefined reference to ggml_time_us'
/usr/bin/ld: chat.cpp:(.text+0x5296): undefined reference to ggml_time_us' /usr/bin/ld: chat.cpp:(.text+0x536e): undefined reference to ggml_time_us'
/usr/bin/ld: chat.cpp:(.text+0x5420): undefined reference to llama_sample_top_p_top_k(gpt_vocab const&, float const*, std::vector<int, std::allocator<int> >&, double, int, double, double, std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul>&)' /usr/bin/ld: chat.cpp:(.text+0x548c): undefined reference to ggml_time_us'
/usr/bin/ld: chat.cpp:(.text+0x5a79): undefined reference to `llama_tokenize(gpt_vocab const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, bool)'
collect2: error: ld returned 1 exit status
make: *** [: chat] Error 1

Environment problem I don't understand

Successfully built transformers
Installing collected packages: transformers
Successfully installed transformers-4.28.0.dev0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
root@PC-202209111204:/mnt/e/Chinese-Vicuna# python3 ./Chinese-Vicuna/generate.py --model_path decapoda-research/llama-7b-hf --lora_path Facico/Chinese-Vicuna-lora-7b-3epoch-belle-and-guanaco --use_local 0
Traceback (most recent call last):
  File "/mnt/e/Chinese-Vicuna/./Chinese-Vicuna/generate.py", line 3, in <module>
    from peft import PeftModel, PeftModelForCausalLM, LoraConfig
  File "/usr/local/lib/python3.10/dist-packages/peft/__init__.py", line 22, in <module>
    from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_peft_config, get_peft_model
  File "/usr/local/lib/python3.10/dist-packages/peft/mapping.py", line 16, in <module>
    from .peft_model import (
  File "/usr/local/lib/python3.10/dist-packages/peft/peft_model.py", line 27, in <module>
    from transformers import PreTrainedModel
ModuleNotFoundError: No module named 'transformers'

I have already installed this library correctly.

Package                  Version
------------------------ ---------------
accelerate               0.18.0
aiofiles                 23.1.0
aiohttp                  3.8.4
aiosignal                1.3.1
altair                   4.2.2
anyio                    3.6.2
appdirs                  1.4.4
async-timeout            4.0.2
attrs                    21.2.0
Automat                  20.2.0
Babel                    2.8.0
bcrypt                   3.2.0
bitsandbytes             0.37.2
blinker                  1.4
certifi                  2020.6.20
chardet                  4.0.0
charset-normalizer       3.1.0
click                    8.0.3
cloud-init               22.2
cmake                    3.26.1
colorama                 0.4.4
command-not-found        0.3
configobj                5.0.6
constantly               15.1.0
contourpy                1.0.7
cryptography             3.4.8
cycler                   0.11.0
datasets                 2.11.0
dbus-python              1.2.18
dill                     0.3.6
distro                   1.7.0
distro-info              1.1build1
entrypoints              0.4
fastapi                  0.95.0
ffmpy                    0.3.0
filelock                 3.10.7
fonttools                4.39.3
frozenlist               1.3.3
fsspec                   2023.3.0
gradio                   3.24.1
gradio_client            0.0.7
h11                      0.14.0
httpcore                 0.16.3
httplib2                 0.20.2
httpx                    0.23.3
huggingface-hub          0.13.4
hyperlink                21.0.0
idna                     3.3
importlib-metadata       4.6.4
incremental              21.3.0
jeepney                  0.7.1
Jinja2                   3.0.3
jsonpatch                1.32
jsonpointer              2.0
jsonschema               3.2.0
keyring                  23.5.0
kiwisolver               1.4.4
launchpadlib             1.10.16
lazr.restfulclient       0.14.4
lazr.uri                 1.0.6
linkify-it-py            2.0.0
lit                      16.0.0
loralib                  0.1.1
markdown-it-py           2.2.0
MarkupSafe               2.0.1
matplotlib               3.7.1
mdit-py-plugins          0.3.3
mdurl                    0.1.2
more-itertools           8.10.0
mpmath                   1.3.0
multidict                6.0.4
multiprocess             0.70.14
netifaces                0.11.0
networkx                 3.1
numpy                    1.24.2
nvidia-cublas-cu11       11.10.3.66
nvidia-cuda-cupti-cu11   11.7.101
nvidia-cuda-nvrtc-cu11   11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11        8.5.0.96
nvidia-cufft-cu11        10.9.0.58
nvidia-curand-cu11       10.2.10.91
nvidia-cusolver-cu11     11.4.0.1
nvidia-cusparse-cu11     11.7.4.91
nvidia-nccl-cu11         2.14.3
nvidia-nvtx-cu11         11.7.91
oauthlib                 3.2.0
orjson                   3.8.9
packaging                23.0
pandas                   2.0.0
peft                     0.3.0.dev0
pexpect                  4.8.0
Pillow                   9.5.0
pip                      22.0.2
psutil                   5.9.4
ptyprocess               0.7.0
pyarrow                  11.0.0
pyasn1                   0.4.8
pyasn1-modules           0.2.1
pydantic                 1.10.7
pydub                    0.25.1
PyGObject                3.42.1
PyHamcrest               2.0.2
PyJWT                    2.4.0
pyOpenSSL                21.0.0
pyparsing                2.4.7
pyrsistent               0.18.1
pyserial                 3.5
python-apt               2.3.0+ubuntu2.1
python-dateutil          2.8.2
python-debian            0.1.43ubuntu1
python-multipart         0.0.6
pytz                     2022.1
PyYAML                   5.4.1
regex                    2023.3.23
requests                 2.25.1
responses                0.18.0
rfc3986                  1.5.0
SecretStorage            3.3.1
semantic-version         2.10.0
sentencepiece            0.1.97
service-identity         18.1.0
setuptools               59.6.0
six                      1.16.0
sniffio                  1.3.0
sos                      4.3
ssh-import-id            5.11
starlette                0.26.1
sympy                    1.11.1
systemd-python           234
tokenizers               0.13.3
toolz                    0.12.0
torch                    2.0.0
tqdm                     4.65.0
transformers             4.28.0.dev0
triton                   2.0.0
Twisted                  22.1.0
typing_extensions        4.5.0
tzdata                   2023.3
ubuntu-advantage-tools   27.9
uc-micro-py              1.0.1
ufw                      0.36.1
unattended-upgrades      0.1
urllib3                  1.26.5
uvicorn                  0.21.1
wadllib                  1.3.6
websockets               11.0.1
wheel                    0.37.1
xxhash                   3.2.0
yarl                     1.8.2
zipp                     1.0.0
zope.interface           5.4.0

Hmm, I cannot see where the problem is... Has anyone run into this? The environment is WSL with Python 3.10.6. Very strange.
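
One hedged guess at this kind of mismatch (not a confirmed diagnosis): under WSL it is easy for pip and the python3 that runs the script to resolve to different interpreters or site-packages directories, so the package list above may not describe the environment that actually executes generate.py. A quick check:

    # Diagnostic sketch: confirm the interpreter running the script is the one
    # that transformers was installed into.
    import sys
    print(sys.executable)   # interpreter actually executing generate.py
    print(sys.path)         # directories it searches for packages

    # Compare against the install location:
    #   python3 -m pip show transformers
    # If they differ, reinstall with `python3 -m pip install transformers` (or use a
    # virtual environment, as the pip warning above already suggests).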

Multi-turn dialogue, context, and looping replies

Over the weekend I fine-tuned llama-13b-hf for two days on an RTX 4090 machine using merge.json (the 663 MB corpus from the network drive). I just tested the result and it works:

Link: https://pan.baidu.com/s/1QlKdW8elJKm2pCWSIoQIKA?pwd=vim8
Extraction code: vim8

The results look like this:

Instruction: 你是谁
Response: 我是一个智能语言模型,名叫Belle。[This answer is presumably because the BELLE corpus was used?]
Instruction: 你可以扮演一只猫吗?
Response: 当然,我可以扮演一只猫!我可以做一些猫咪的动作,例如跳跃、抓住玩具、抓住食物、抓住玩具,等等。我也可以做一些猫咪的声音,例如喵喵、咕咕、喵喵喵喵、喵喵喵喵、喵喵喵喵、喵喵喵喵、喵喵喵喵、喵喵喵喵、喵喵喵喵、喵喵喵喵、喵喵喵喵、喵喵喵喵、喵��
Instruction: 那你来过来 让我摸摸你的爪子
Response: 当然,我会让你摸摸我的爪子,让你感受到我的温暖和柔软。
Instruction: 你的胡须的作用是什么?
Response: 胡须是一种神秘的神秘力量,它可以帮助人们掌握更多的知识和技能,帮助人们更好地掌握自己的能力,帮助人们更好地掌握自己的能力,帮助人们更好地掌握自己的能力,帮助人们更好地掌握自己的能力,帮助人们更好地掌握自己的能力,帮助人们更好地掌握自己的能力,帮助人们更好地掌握自己的能力,帮助人们更好地掌握自己的能力,帮助人们更好地掌握自己的能力
Instruction: 你的尾巴的作用是什么?
Response: 尾巴的作用是帮助身体保持平衡,帮助肌肉保持灵活性,帮助肌肉保持灵活性,帮助肌肉保持灵活性,帮助肌肉保持灵活性,帮助肌肉保持灵活性,帮助肌肉保持灵活性,帮助肌肉保持灵活性,帮助肌肉保持灵活性,帮助肌肉保持灵活性,帮助肌肉保持灵活性,帮助肌肉保持灵活性,帮助肌肉保持灵活性,帮助肌肉保持灵活性,��

Could understanding of the multi-turn dialogue context make it more coherent?

Could a context field also be added to describe the scenario?

The replies keep looping; how can that be avoided?
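
On the looping replies, a hedged starting point is the decoding configuration rather than the data: raising the repetition penalty and banning repeated n-grams usually suppresses the kind of verbatim loops shown above. The values below are generic transformers GenerationConfig settings to experiment with, not this repo's defaults.

    # Sketch only: decoding settings that typically reduce verbatim repetition.
    from transformers import GenerationConfig

    generation_config = GenerationConfig(
        temperature=0.7,
        top_p=0.9,
        top_k=40,
        repetition_penalty=1.2,   # down-weight tokens that already appeared
        no_repeat_ngram_size=4,   # forbid repeating any 4-gram verbatim
        max_new_tokens=256,
    )
    # output = model.generate(**inputs, generation_config=generation_config)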

Question about the corpus

I re-downloaded the merge.json corpus from the network drive and noticed it used to be 663 MB but is now 389 MB.
Why did the corpus shrink, keeping only 700k+ entries?

Why does the loss fluctuate wildly during fine-tuning?

13B LLaMA + the 700k open-source corpus, with 10k held out as the test set. Why does the fine-tuning loss start around 1.0 and then swing between very large values and 0? Is it because the test set is too small relative to the fine-tuning set?
[screenshots omitted]

About the corpus

When training on multi-turn dialogue, the corpus provided looks like this:
dialogues between User:xxx and GPT:xxx
dialogues between User:xxx and ChatGPT:xxx
dialogues between User:xxx and Assistant:xxx
dialogues between Human:xxx and Assistant:xxx

The role names are not consistent. Will this affect multi-turn training? Do the role names need to be unified?
If not, can any multi-turn dialogue corpus simply be merged together, as long as the format is correct?
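
If the roles do need to be unified, a minimal preprocessing sketch like the one below would do it (the Human/Assistant target names here are an assumption, not the repo's documented schema):

    # Map the mixed role labels onto a single Human/Assistant pair before merging corpora.
    ROLE_MAP = {
        "User": "Human", "Human": "Human",
        "GPT": "Assistant", "ChatGPT": "Assistant", "Assistant": "Assistant",
    }

    def normalize_roles(turns):
        """turns: list of (role, text) pairs taken from any of the source corpora."""
        return [(ROLE_MAP.get(role, role), text) for role, text in turns]

    print(normalize_roles([("User", "你好"), ("ChatGPT", "你好,有什么可以帮你?")]))
    # [('Human', '你好'), ('Assistant', '你好,有什么可以帮你?')]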

Single-machine multi-GPU training keeps timing out; is there any way to fix this?

[E ProcessGroupNCCL.cpp:821] [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, Timeout(ms)=1800000) ran for 1800595 milliseconds before timing out.
localhost:10108:60893 [4] NCCL INFO [Service thread] Connection closed by localRank 4
Traceback (most recent call last):
localhost:10108:60278 [0] NCCL INFO comm 0x9ea4350 rank 4 nranks 7 cudaDev 4 busId a1000 - Abort COMPLETE
File "finetune.py", line 271, in
[E ProcessGroupNCCL.cpp:456] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
[E ProcessGroupNCCL.cpp:461] To avoid data inconsistency, we are taking the entire process down.
File "/root/miniconda3/envs/vicuna/lib/python3.8/site-packages/transformers/trainer.py", line 1662, in train
terminate called after throwing an instance of 'std::runtime_error'
what(): [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, Timeout(ms)=1800000) ran for 1800595 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:821] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, Timeout(ms)=1800000) ran for 1801815 milliseconds before timing out.
localhost:10101:60894 [3] NCCL INFO [Service thread] Connection closed by localRank 3
localhost:10101:59694 [0] NCCL INFO comm 0xa8d5900 rank 3 nranks 7 cudaDev 3 busId 81000 - Abort COMPLETE
Traceback (most recent call last):
File "finetune.py", line 271, in
trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
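
A hedged note on this class of failure: the watchdog fires when one rank sits in an ALLGATHER for the full 30-minute default while another rank never reaches it (a hung GPU, a very slow first-time dataset preprocessing step, or one rank crashing silently). If the slow step is legitimate, the timeout itself can be raised; whether finetune.py exposes this is an assumption to verify.

    # Sketch of two places the collective timeout can be raised (verify availability in
    # your transformers/torch versions; these are not flags from this repo's scripts).
    from datetime import timedelta
    from transformers import TrainingArguments

    args = TrainingArguments(
        output_dir="lora-Vicuna",
        ddp_timeout=7200,   # seconds; the default 1800 s matches the log above
    )

    # Lower-level equivalent when the process group is created by hand:
    # import torch.distributed as dist
    # dist.init_process_group("nccl", timeout=timedelta(hours=2))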

Generated output is garbled

Hello,

input:
**的首都是哪儿?

output:
��������������������������������������������������������������������������������������������������������������������������������

After the service starts up normally, I access http://127.0.0.1:7860 in Chrome and the generated results are garbled.
Switching to the 360 browser, the results are garbled as well.
Is there some setting I need to change?
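
One common source of "�" in streamed LLaMA output (offered as a guess, not a diagnosis of this report, since here the whole response is garbled): Chinese characters are often split across several byte-level BPE tokens, so decoding each newly generated token in isolation yields replacement characters, while decoding the accumulated sequence does not.

    # Sketch of the safer pattern: decode everything generated so far in one call and
    # re-render the whole string, instead of calling tokenizer.decode() on the newest id alone.
    def stream_text(tokenizer, generated_ids, prompt_len):
        return tokenizer.decode(generated_ids[prompt_len:], skip_special_tokens=True)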

Question about pure C++ inference

Thanks for sharing. My C++ is limited, and the following problems came up while trying CPU inference; I'd appreciate your help.

I have already generated checkpoint-final/ggml-model-f16.bin.
Running the following commands produces an error:

cd tools/Vicuna.cpp
make chat

The error message is:

In file included from /usr/local/gcc/include/c++/5.2.0/random:35:0,
                 from utils.h:8,
                 from chat.cpp:3:
/usr/local/gcc/include/c++/5.2.0/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support is currently experimental, and must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
 #error This file requires compiler and library support for the \

There are also many further C++ errors like these:

utils.h:17:53: error: ‘std::thread’ has not been declared
     int32_t n_threads = std::min(16, (int32_t) std::thread::hardware_concurrency());
                                                     ^
utils.h:44:31: error: ‘mt19937’ is not a member of ‘std’
 std::string gpt_random_prompt(std::mt19937 & rng);

The generate_quant.py script fails to run

I want to run a quantized model, but I get the error: ModuleNotFoundError: No module named 'gptq'

generate_quant.py does not run; are some files missing?

Runs on a single GPU, but on multiple GPUs it fails with raise Exception('cublasLt ran into an error!')

out32, Sout32 = F.igemmlt(C32A, state.CxB, SA, state.SB)
File "/root/miniconda3/lib/python3.8/site-packages/bitsandbytes/functional.py", line 1410, in igemmlt
raise Exception('cublasLt ran into an error!')

It is this spot in bitsandbytes.functional that ends up with has_error == 1:

if formatB == 'col_turing':
    if dtype == torch.int32:
        has_error = lib.cigemmlt_turing_32(
            ptr, m, n, k, ptrA, ptrB, ptrC, ptrRowScale, lda, ldb, ldc
        )
    else:
        has_error = lib.cigemmlt_turing_8(
            ptr, m, n, k, ptrA, ptrB, ptrC, ptrRowScale, lda, ldb, ldc
        )
elif formatB == "col_ampere":
    if dtype == torch.int32:
        has_error = lib.cigemmlt_ampere_32(
            ptr, m, n, k, ptrA, ptrB, ptrC, ptrRowScale, lda, ldb, ldc
        )
    else:
        has_error = lib.cigemmlt_ampere_8(
            ptr, m, n, k, ptrA, ptrB, ptrC, ptrRowScale, lda, ldb, ldc
        )

The tensor shapes printed at the point of the error:
error detectedA: torch.Size([512, 4096]), B: torch.Size([4096, 4096]), C: (512, 4096); (lda, ldb, ldc): (c_int(16384), c_int(131072), c_int(16384)); (m, n, k): (c_int(512), c_int(4096), c_int(4096))

Training on a 3070 Ti fails: cublasLt ran into an error!

Training with the finetune.py script fails.
Command: python finetune.py --data_path merge.json --test_size 20
Training environment:
3070 Ti, 8 GB VRAM
pytorch 2.0.0+cu117
cuda 11.3

The error is:
error detectedTraceback (most recent call last):
File "finetune.py", line 274, in
trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
File "/hy-tmp/transformers/src/transformers/trainer.py", line 1659, in train
return inner_training_loop(
File "/hy-tmp/transformers/src/transformers/trainer.py", line 1926, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/hy-tmp/transformers/src/transformers/trainer.py", line 2696, in training_step
loss = self.compute_loss(model, inputs)
File "/hy-tmp/transformers/src/transformers/trainer.py", line 2728, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/peft/peft_model.py", line 575, in forward
return self.base_model(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/hy-tmp/transformers/src/transformers/models/llama/modeling_llama.py", line 687, in forward
outputs = self.model(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/hy-tmp/transformers/src/transformers/models/llama/modeling_llama.py", line 569, in forward
layer_outputs = torch.utils.checkpoint.checkpoint(
File "/usr/local/lib/python3.8/dist-packages/torch/utils/checkpoint.py", line 249, in checkpoint
return CheckpointFunction.apply(function, preserve, *args)
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/usr/local/lib/python3.8/dist-packages/torch/utils/checkpoint.py", line 107, in forward
outputs = run_function(*args)
File "/hy-tmp/transformers/src/transformers/models/llama/modeling_llama.py", line 565, in custom_forward
return module(*inputs, output_attentions, None)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/hy-tmp/transformers/src/transformers/models/llama/modeling_llama.py", line 292, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/hy-tmp/transformers/src/transformers/models/llama/modeling_llama.py", line 196, in forward
query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/peft/tuners/lora.py", line 591, in forward
result = super().forward(x)
File "/usr/local/lib/python3.8/dist-packages/bitsandbytes/nn/modules.py", line 242, in forward
out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state)
File "/usr/local/lib/python3.8/dist-packages/bitsandbytes/autograd/_functions.py", line 488, in matmul
return MatMul8bitLt.apply(A, B, out, bias, state)
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/usr/local/lib/python3.8/dist-packages/bitsandbytes/autograd/_functions.py", line 377, in forward
out32, Sout32 = F.igemmlt(C32A, state.CxB, SA, state.SB)
File "/usr/local/lib/python3.8/dist-packages/bitsandbytes/functional.py", line 1410, in igemmlt
raise Exception('cublasLt ran into an error!')
Exception: cublasLt ran into an error!

Would adding an embedding layer improve Chinese understanding?

In the pre-training stage, continue pre-training on top of the original LLaMA weights, in two phases:

Phase 1: freeze the transformer parameters and train only the embeddings, adapting the newly added Chinese word vectors while disturbing the original model as little as possible. The goal is to learn embeddings for the newly added Chinese vocabulary and improve basic Chinese semantic ability.
Phase 2: use LoRA to add LoRA weights (adapters) to the model, updating the LoRA parameters while continuing to train the embeddings.

This is what I saw in the repository https://github.com/ymcui/Chinese-LLaMA-Alpaca.
I also tested it. After 4-bit quantization the results look like this (still flawed, but clearly better in grammar and multi-turn dialogue):

> 你是一个流浪剑客 刚走到桥边 看到桥对面走来一个江湖恶霸 你心中的怒火腾腾升起
你好,我是一个流浪剑士。我刚刚来到了这座城市的一座桥梁上,突然看见在桥头走过的是一位江湖恶霸。我的愤怒心声立刻激起了起来。
> 你是谁?
我是一名流浪剑客。我在这城里漫游,寻找着自己的命运和自由。我正在这座城市的一座桥上停下脚步时,突然看见在桥头走过的是一位江湖恶霸…
> 你想对他怎么样?
我想把他杀掉!他是个无耻、卑鄙的人。他应该受到严惩的惩罚。我必须立即行动,以保护自己和我的自由不受他的侵害…
> 可是他比你力气大 你能打倒他吗?
我不得不挑战这个恶霸!虽然他是个强壮的家伙,但我有勇气和决心去打败他的。我必须立即行动起来,保护自己和我自己的自由不受他的侵害…
> 可是如果你被打败了 可怎么办呢?
如果我被他击败后,我会再次挑战这个恶
> 可是他直接要杀掉你呢
我不得不挑战它!虽然他是个强壮的家伙,但我有勇气和决心去打败它的。我必须立即行动起来,保护自己和我自己的自由不受他的侵害…
> 狭路相逢勇者胜 勇敢的挑战他吧 你一定可以打倒他
我不得不挑战它!虽然他是个强壮的家伙,但我有勇气和决心去打败它的。我必须立即行动起来,保护自己和我自己的自由不受他的侵害…
> 去吧
我会勇敢地挑战他并试图把他杀掉!虽然他是个强壮的家伙,但我有勇气和决心来打败它。我必须立即行动起来,保护自己和我自己的自由不受他的侵害…
> 不要嘴炮了 上吧
我会勇敢地挑战他并试图把他杀掉!虽然他是个强壮的家伙,但我有勇气和决心来打败它。我必须立即行动起来,保护自己和我自己的自由不受他的侵害…
> 你拔剑向对方刺了过去 对方闪都没有闪 空手接白刃
我会勇敢地挑战它并试图
> 可是你剑被他控制了 你得做什么
我要勇敢地挑战它,并且要尽力打败它!
> 恶霸不仅打人 而且骂人 他满嘴脏话
我会勇敢的挑战它并试图把它杀掉。我要用我的勇气和决心来打败它!
> 你已经落下风了,打不过 先骂回去
我一定会勇敢地挑战他并且尽力把他击倒吧!我会用自己的勇气和决心来战胜它!(笑声)。
> 恶霸突然有些力不从心 没想到他向你求饶 真是个外强中干的家伙
我会勇敢地挑战他的,并且尽力把它击倒吧!我要用自己的勇气和决心来战胜它!(笑声)。
> 他已经认输了
我一定会勇敢的挑战它的并试图将其打败呢!我会用我的勇气和决心来战胜它!(笑声)。
> 你已经把他打败了
我要勇敢地挑战他的,并且尽力把它击倒吧!我将要用自己的勇气和决心来战胜它!(笑声)。

Errors from finetune_continue.sh, and wrong output from the fine-tuned model

Thanks for sharing. A few problems came up that I cannot resolve, and I'd like to ask for your advice.

Running the fine-tuning job with bash finetune_continue.sh on the original merge_sample.json fails. I set TOT_CUDA="0"; all other parameters and data are identical to the originals.
bash finetune.sh runs normally.
I can confirm the error is caused by adding --resume_from_checkpoint $lora_checkpoint \, where lora_checkpoint = "./lora-Vicuna/checkpoint-11600".

Our goal is to fine-tune from the pre-trained checkpoint into a specialized domain, but using finetune.sh directly makes the model answer Chinese questions with endlessly repeated English strings containing {/begin}, {/item}, and so on. Is this behaviour normal, and would finetune_continue improve it?

Environment:
1. OS: CentOS 7.6
2. GPU: a single 3090
3. Python 3.8
4. CUDA 11.3

The error is:

Traceback (most recent call last):
  File "finetune.py", line 271, in <module>
    trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
  File "/usr/local/miniconda3/envs/chat/lib/python3.8/site-packages/transformers/trainer.py", line 1659, in train   
    return inner_training_loop(
  File "/usr/local/miniconda3/envs/chat/lib/python3.8/site-packages/transformers/trainer.py", line 1926, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/usr/local/miniconda3/envs/chat/lib/python3.8/site-packages/transformers/trainer.py", line 2706, in training_step
    self.scaler.scale(loss).backward()
  File "/usr/local/miniconda3/envs/chat/lib/python3.8/site-packages/torch/_tensor.py", line 487, in backward        
    torch.autograd.backward(
  File "/usr/local/miniconda3/envs/chat/lib/python3.8/site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/usr/local/miniconda3/envs/chat/lib/python3.8/site-packages/torch/autograd/function.py", line 274, in apply 
    return user_fn(self, *args)
  File "/usr/local/miniconda3/envs/chat/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 157, in backward
    torch.autograd.backward(outputs_with_grad, args_with_grad)
  File "/usr/local/miniconda3/envs/chat/lib/python3.8/site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 
1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across 
multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not
 change during training loop.
2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap 
the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes
 multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try
 to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 127 has been marked as ready twice. This means that multiple autograd engine  hooks have fired for this 
particular parameter during this iteration. You can set the environment variable TORCH_DISTRIBUTED_DEBUG to 
either INFO or DETAIL to print parameter names for further debugging.
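
A hedged observation on the traceback (not verified against finetune.py): the failure comes from reentrant gradient checkpointing interacting with DistributedDataParallel during the resumed run. On a single 3090 the simplest workaround is usually to launch with plain `python finetune.py ...` so no DDP wrapper is built at all; if a distributed launcher is kept, the error message's own suggestion looks like this at the PyTorch level:

    # Sketch of the _set_static_graph() workaround named in the error text; it must be
    # applied to the DistributedDataParallel wrapper (inside the HF Trainer that wrapper
    # is only created once training starts, so where to hook this is itself an assumption).
    import torch
    from torch.nn.parallel import DistributedDataParallel as DDP

    def mark_static(module: torch.nn.Module) -> torch.nn.Module:
        if isinstance(module, DDP):
            module._set_static_graph()   # tell DDP the autograd graph never changes
        return module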

The Colab link provided in the docs does not run

I used the Colab link provided in the docs and found that it does not run on Colab.

I did not modify the provided scripts or change any script or command parameters; everything else matches your defaults.

I ran the following three commands:

!git clone https://github.com/Facico/Chinese-Vicuna
!pip install -r ./Chinese-Vicuna/requirements.txt
!python ./Chinese-Vicuna/generate.py --model_path decapoda-research/llama-7b-hf --lora_path Facico/Chinese-Vicuna-lora-7b-3epoch-belle-and-guanaco --use_local 0

The third command then fails with the following error and stops:

Downloading shards: 100% 33/33 [01:47<00:00,  3.27s/it]
Loading checkpoint shards: 100% 33/33 [01:23<00:00,  2.52s/it]
Downloading (…)neration_config.json: 100% 124/124 [00:00<00:00, 19.8kB/s]
Downloading (…)/adapter_config.json: 100% 370/370 [00:00<00:00, 123kB/s]
Downloading adapter_model.bin: 100% 16.8M/16.8M [00:00<00:00, 79.3MB/s]
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /content/./Chinese-Vicuna/generate.py:110 in <module>                        │
│                                                                              │
│   107 if not LOAD_8BIT:                                                      │
│   108 │   model.half()  # seems to fix bugs for some users.                  │
│   109                                                                        │
│ ❱ 110 model.eval()                                                           │
│   111 if torch.__version__ >= "2" and sys.platform != "win32":               │
│   112 │   model = torch.compile(model)                                       │
│   113                                                                        │
╰──────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'NoneType' object has no attribute 'eval'

The output of !pip list | grep torch is:

torch                         2.0.0+cu118
torchaudio                    2.0.1+cu118
torchdata                     0.6.0
torchsummary                  1.5.1
torchtext                     0.15.1
torchvision                   0.15.1+cu118

Cannot successfully run generate.sh or interaction.sh

After fine-tuning finished and the script parameters were configured as described in the README, running prediction fails. Could you help check which configuration I missed?
1. sh generate.sh
[error screenshot omitted]
After debugging, I found that once the LLaMA model has been loaded onto the GPU and the statement in the screenshot below runs,
[code screenshot omitted]
model becomes None.
2. sh interaction.sh
[screenshots omitted]

Question about the LLaMA vocabulary

Hi! Thanks for open-sourcing such a great project. I have a question: many people say the LLaMA vocabulary contains only a few hundred Chinese characters, which is clearly incomplete. How did you think about this?

Resuming from a checkpoint: how to set resume_from_checkpoint

A beginner question about resuming from a checkpoint: when restarting training, which directory should resume_from_checkpoint point to?

My current finetune script is:

DATA_PATH="./sample/merge.json" #"../dataset/instruction/guanaco_non_chat_mini_52K-utf8.json" #"./sample/merge_sample.json"
OUTPUT_PATH="my-lora-Vicuna"
MODEL_PATH="../llama-13b-hf/"
lora_checkpoint="../Chinese-Vicuna-lora-13b-belle-and-guanaco/"
TEST_SIZE=2000

python finetune.py \
--data_path $DATA_PATH \
--output_path $OUTPUT_PATH \
--model_path $MODEL_PATH \
--eval_steps 200 \
--save_steps 200 \
--test_size $TEST_SIZE

Training currently needs about 240 hours.
Suppose I stop training now, and the OUTPUT_PATH="my-lora-Vicuna" directory contains the following:

my-lora-Vicuna/
├── checkpoint-200
│   ├── optimizer.pt
│   ├── pytorch_model.bin
│   ├── rng_state.pth
│   ├── scaler.pt
│   ├── scheduler.pt
│   ├── trainer_state.json
│   └── training_args.bin
└── checkpoint-400
    ├── optimizer.pt
    ├── pytorch_model.bin
    ├── rng_state.pth
    ├── scaler.pt
    ├── scheduler.pt
    ├── trainer_state.json
    └── training_args.bin

2 directories, 14 files

If I want to resume training, should resume_from_checkpoint be set to my-lora-Vicuna/checkpoint-400?

Why is the corpus file ASCII-encoded?

Checking merge.json with the file command reports:

$ file sample/merge.json
sample/merge.json: ASCII text, with very long lines, with no line terminators

Looking at the contents, the Chinese parts are indeed ASCII-escaped.
Was the merge.json corpus file specially processed, or was the JSON simply written out with ASCII encoding?

Why use ASCII escapes rather than UTF-8, which would keep the Chinese readable as plain text?
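
The likeliest explanation (an inference from how Python's json module behaves, not a statement about how merge.json was actually produced): json.dump defaults to ensure_ascii=True, which escapes every non-ASCII character as \uXXXX, so file reports pure ASCII even though nothing about the content is special. json.load restores the Chinese text either way.

    import json

    sample = {"instruction": "你是谁", "output": "我是一个智能语言模型"}

    print(json.dumps(sample))                      # escaped: {"instruction": "\u4f60\u662f\u8c01", ...}
    print(json.dumps(sample, ensure_ascii=False))  # readable UTF-8: {"instruction": "你是谁", ...}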

About continued training (continue finetuning)

Hi, thanks for sharing.
Is it correct to understand that, for a vertical-domain application, I can build the corpus, replace data_path in the .sh script, and run it to continue training on top of the model weights you shared, ending up with a model adapted to that vertical corpus?
Thanks.

Are there errors in the README's environment setup instructions?

[screenshot omitted]

1. Which script and command did you use: shown in the screenshot above
2. What are your parameters (script/command parameters): none
3. Did you modify our code: no
4. Which dataset did you use: the defaults from the README

The finetune script never runs; it keeps reporting environment errors and cannot find CUDA

Describing the problem from the environment side (some of this may already be covered under the README's known issues and solutions):
1. OS: Windows 10, WSL2
2. GPU and count: NVIDIA 2080 Ti, 2 cards
3. Python version: 3.8
4. Python library versions: installed according to the README

From the runtime side:
1. Error message and where it comes from (I can provide the full error output):
It keeps reporting CUDA errors / GPU not found. I followed the README strictly and it still does not work. I am not sure whether "torch 1.13.1" in the README refers to PyTorch (is it an abbreviation?). The README asks for CUDA 12; is that correct? With PyTorch 1.13.1 the matching CUDA should be 11.7, and the CUDA 12 I installed per the README does not work.

2. Are the GPU and CPU working normally: no, the GPU is not detected, and CPU mode does not run either.

Problem when running finetune.sh

There were missing keys in the checkpoint model loaded: ['base_model.model.model.embed_tokens.weight', 'base_model.model.model.layers.0.self_attn.q_proj.weight', 'base_model.model.model.layers.0.self_attn.k_proj.weight', 'base_model.model.model.layers.0.self_attn.v_proj.weight', ....]
After training finishes, the program fails with the error above when it loads the model parameters for prediction.
Looking at your code, it seems only the LoRA parameters are loaded, not the LLaMA parameters, which is why this happens.
How should I handle this?

Can the output follow a custom format?

How can it answer the way ChatGPT does, in the format { action: "你的行为", expression: "你的表情", speak: "你说的话" }?

So that it produces answers in this format:

Q: 你是一名流浪剑客,走到一座桥头 发现桥对面走来一江湖恶霸 你会?
A: { action: "我稳定自己的姿势,准备迎战", expression: "凝神以待的表情", speak: "这位朋友,你来这里有什么事情吗?如果只是想闯荡江湖,何必与我为敌呢?"}

What does MAX_STEPS = None mean in finetune? Can it be changed?

Why is MAX_STEPS set to None here? Can it be changed to something else?

if not args.wandb:
    os.environ["WANDB_MODE"] = "disable"
# optimized for RTX 4090. for larger GPUs, increase some of these?
MICRO_BATCH_SIZE = 4  # this could actually be 5 but i like powers of 2
BATCH_SIZE = 128
MAX_STEPS = None
GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE
EPOCHS = 3  # we don't always need 3 tbh
LEARNING_RATE = 3e-4  # the Karpathy constant
CUTOFF_LEN = 256  # 256 accounts for about 96% of the data
LORA_R = 8
LORA_ALPHA = 16
LORA_DROPOUT = 0.05
VAL_SET_SIZE = args.test_size #2000
TARGET_MODULES = [
    "q_proj",
    "v_proj",
]
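
For context, the usual Trainer semantics (whether finetune.py forwards MAX_STEPS exactly this way is an assumption worth checking against the script): when max_steps is left unset, the number of optimizer steps is derived from EPOCHS and the dataset size, while a positive max_steps caps training and overrides num_train_epochs. So MAX_STEPS = None effectively means "train for the configured number of epochs", and it can be changed to an integer to train for a fixed number of steps instead.

    # Hedged illustration of how these constants typically map onto TrainingArguments.
    from transformers import TrainingArguments

    args = TrainingArguments(
        output_dir="lora-Vicuna",
        num_train_epochs=3,              # used when max_steps is not set
        max_steps=-1,                    # HF's "disabled" value; a positive int overrides epochs
        per_device_train_batch_size=4,   # MICRO_BATCH_SIZE
        gradient_accumulation_steps=32,  # BATCH_SIZE // MICRO_BATCH_SIZE
        learning_rate=3e-4,
    )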

finetune fails; setup: a single RTX 4090 with 24 GB VRAM; corpus: the 663 MB merge.json downloaded from Baidu Netdisk

What kind of GPU configuration is needed to finetune on the 663 MB corpus?

/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/transformers-4.28.0.dev0-py3.9.egg/transformers/optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  warnings.warn(
  0%|                                                                                                                                                             | 0/16029 [00:00<?, ?it/s]Traceback (most recent call last):
  File "/mnt/e/Chinese-Vicuna/finetune.py", line 216, in <module>
    trainer.train()
  File "/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/transformers-4.28.0.dev0-py3.9.egg/transformers/trainer.py", line 1636, in train
    return inner_training_loop(
  File "/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/transformers-4.28.0.dev0-py3.9.egg/transformers/trainer.py", line 1903, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/transformers-4.28.0.dev0-py3.9.egg/transformers/trainer.py", line 2659, in training_step
    self.scaler.scale(loss).backward()
  File "/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/torch-2.0.0-py3.9-linux-x86_64.egg/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/torch-2.0.0-py3.9-linux-x86_64.egg/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/torch-2.0.0-py3.9-linux-x86_64.egg/torch/autograd/function.py", line 274, in apply
    return user_fn(self, *args)
  File "/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/torch-2.0.0-py3.9-linux-x86_64.egg/torch/utils/checkpoint.py", line 157, in backward
    torch.autograd.backward(outputs_with_grad, args_with_grad)
  File "/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/torch-2.0.0-py3.9-linux-x86_64.egg/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 127 has been marked as ready twice. This means that multiple autograd engine  hooks have fired for this particular parameter during this iteration. You can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print parameter names for further debugging.
  0%|                                                                                                                                                             | 0/16029 [00:23<?, ?it/s]
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 27252) of binary: /root/anaconda3/envs/Chinese-alpaca-lora/bin/python
Traceback (most recent call last):
  File "/root/anaconda3/envs/Chinese-alpaca-lora/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch==2.0.0', 'console_scripts', 'torchrun')())
  File "/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/torch-2.0.0-py3.9-linux-x86_64.egg/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/torch-2.0.0-py3.9-linux-x86_64.egg/torch/distributed/run.py", line 794, in main
    run(args)
  File "/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/torch-2.0.0-py3.9-linux-x86_64.egg/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/torch-2.0.0-py3.9-linux-x86_64.egg/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/torch-2.0.0-py3.9-linux-x86_64.egg/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
finetune.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-03-24_14:27:50
  host      : DESKTOP-6KDJTBC.
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 27252)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================

finetune fails with KeyError: 'models.llama'

1. OS: Ubuntu
2. GPU: 3090
3. Python: 3.8
4. Other Python library versions:
pytorch-mutex             1.0                        cuda    pytorch
torch                     2.0.0                    pypi_0    pypi
torchaudio                0.12.1               py38_cu113    pytorch
torchvision               0.13.1               py38_cu113    pytorch
cudatoolkit               11.3.1               h2bc3f7f_2    defaults
nvidia-cuda-cupti-cu11    11.7.101                 pypi_0    pypi
nvidia-cuda-nvrtc-cu11    11.7.99                  pypi_0    pypi
nvidia-cuda-runtime-cu11  11.7.99                  pypi_0    pypi
pytorch-mutex             1.0                        cuda    pytorch
transformers              4.27.4                   pypi_0    pypi
tokenizers                0.13.3                   pypi_0    pypi
sentencepiece             0.1.97                   pypi_0    pypi
5. nvidia-smi
NVIDIA-SMI 470.141.03   Driver Version: 470.141.03   CUDA Version: 11.4  
6. nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:07:56_CDT_2017
Cuda compilation tools, release 9.1, V9.1.85
7. The upstream model, the LoRA model, and the data were all downloaded locally.
8. The finetune script is:
DATA_PATH="./sample/merge.json" #"../dataset/instruction/guanaco_non_chat_mini_52K-utf8.json" #"./sample/merge_sample.json"
OUTPUT_PATH="my-lora-Vicuna"
MODEL_PATH="../llama-13b-hf/"
lora_checkpoint="../Chinese-Vicuna-lora-13b-belle-and-guanaco/"
TEST_SIZE=2000

python finetune.py \
--data_path $DATA_PATH \
--output_path $OUTPUT_PATH \
--model_path $MODEL_PATH \
--eval_steps 200 \
--save_steps 200 \
--test_size $TEST_SIZE

Running the finetune script fails as follows:

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA SETUP: CUDA runtime path found: /home/doudou/miniconda3/envs/chinese-vicuna/lib/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 113
CUDA SETUP: Loading binary /home/doudou/miniconda3/envs/chinese-vicuna/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda113.so...
Traceback (most recent call last):
  File "finetune.py", line 13, in <module>
    "LlamaTokenizer" in transformers._import_structure["models.llama"]
KeyError: 'models.llama'

Could someone help explain what causes this?
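
A probable cause worth checking (the version boundary is recalled from memory, so treat it as an assumption): the listed transformers is 4.27.4, but the transformers.models.llama module and LlamaTokenizer only shipped with the 4.28 line, which is exactly why the script's guard lookup raises KeyError: 'models.llama'.

    # Quick check of whether the installed transformers knows about LLaMA at all.
    import transformers

    print(transformers.__version__)
    print("models.llama" in transformers._import_structure)  # False on builds without LLaMA support
    # If False, install the version pinned in requirements.txt (a 4.28+ / specified git build)
    # and re-run finetune.py.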

Merging the 13B model with the fine-tuned 13B result gets the process Killed; the machine still had 23 GB of RAM free and GPU memory was unused

I merge the 13B model with this command (merging the 7B model works and runs fine; merging the 13B model fails; the machine has 64 GB of RAM and all other processes were closed):
python tools/merge_lora_for_cpp.py --model_path /mnt/e/zllama-models/llama-13b-hf --lora_path ./lora-Vicuna/checkpoint-13B/checkpoint-6000 --out_path ./lora-Vicuna/llama13b-checkpoint-6000-with-lora

I tried several times.

The first attempt gave this error:

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /root/anaconda3/envs/Chinese-alpaca-lora did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('unix')}
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA exception! Error code: no CUDA-capable device is detected
CUDA exception! Error code: initialization error
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: No GPU detected! Check your CUDA paths. Proceeding to load CPU-only library...
  warn(msg)
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 41/41 [02:41<00:00,  3.94s/it]
Traceback (most recent call last):
  File "/mnt/e/Chinese-Vicuna/tools/merge_lora_for_cpp.py", line 123, in <module>
    new_state_dict[new_k] = unpermute(v)
  File "/mnt/e/Chinese-Vicuna/tools/merge_lora_for_cpp.py", line 76, in unpermute
    w.view(n_heads, 2, dim // n_heads // 2, dim).transpose(1, 2).reshape(dim, dim)
RuntimeError: shape '[32, 2, 64, 4096]' is invalid for input of size 26214400

A later attempt only printed Killed:

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /root/anaconda3/envs/Chinese-alpaca-lora did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('unix')}
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA exception! Error code: no CUDA-capable device is detected
CUDA exception! Error code: initialization error
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
/root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: No GPU detected! Check your CUDA paths. Proceeding to load CPU-only library...
  warn(msg)
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /root/anaconda3/envs/Chinese-alpaca-lora/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 41/41 [02:30<00:00,  3.67s/it]
Killed
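
A back-of-the-envelope check of the first traceback (an observation, not a confirmed diagnosis): the failing reshape assumes 7B geometry (n_heads = 32, dim = 4096), but the tensor it is applied to has 26,214,400 elements, which is exactly 5120 × 5120, the size of a 13B attention weight. So merge_lora_for_cpp.py appears to be unpermuting 13B weights with 7B parameters; the plain "Killed" on the later attempt usually points to the OS out-of-memory killer, which is a separate problem.

    # Arithmetic behind the shape error in the traceback above.
    n_heads_7b, dim_7b = 32, 4096
    n_heads_13b, dim_13b = 40, 5120

    print(n_heads_7b * 2 * (dim_7b // n_heads_7b // 2) * dim_7b)  # 16777216 = 4096 * 4096 (what the reshape expects)
    print(dim_13b * dim_13b)                                      # 26214400 (what the error reports)
    # A 13B-aware unpermute would need n_heads=40 and dim=5120 (hypothetical parameter
    # names, mirroring the function shown in the traceback).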

About the corpus

[BELLE](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)
has released multiturn_chat_0.8M, a multi-turn dialogue corpus with about 800k entries.

The format is:

instruction: the instruction (here the Human/Assistant multi-turn dialogue context)
input: the input (always empty)
output: the output

Can this corpus be used for finetuning directly, or does the multi-turn context inside instruction need to be restructured first?
If the goal is good multi-turn dialogue support after finetuning, is it better to keep the current format and finetune on it directly?
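
A small sketch of what those records look like when loaded (illustrative only; whether finetune.py can consume them as-is, or needs the Human/Assistant turns split out, is exactly the open question above, so the schema handling here is an assumption):

    # Peek at BelleGroup/multiturn_chat_0.8M and keep the instruction/input/output fields.
    from datasets import load_dataset

    ds = load_dataset("BelleGroup/multiturn_chat_0.8M", split="train")
    example = ds[0]

    record = {
        "instruction": example["instruction"],  # Human:/Assistant: dialogue history
        "input": example.get("input", ""),      # empty in this dataset
        "output": example["output"],            # the next Assistant reply
    }
    print(record["instruction"][:200], "...")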

Scenario setup + multi-turn dialogue

This afternoon I tested how ChatRWKV performs; its English output seems decent.
When might Chinese-Vicuna have similar use cases?

First scenario setup + multi-turn dialogue:

Q: +reset
A: Chat reset.

Q: +qaYour name is GAGA. You are a naughty pupil who messes up at school every day. Today, the teacher called your parents to school to discuss your transfer.
A: A: You will have to work extra hard to get a good mark, GAGA. Your parents are very upset with you.

Q: who are you
A: A: I'm nobody important, GAGA. My mom says I'm a brat. If she knew what I was doing, she'd be so mad!

Q: how old are you
A: A: I'm only 13. Do you know how much that adds up to when you're a grown-up?

Q: Yeah! How was your performance at school?
A: A: I think I did alright, GAGA. But I need to work extra hard at the home-work, you know? I know how important your job is, GAGA. It's like you say - no one can put in as much time and effort as a parent.

Q: But they said your performance at school was terrible, and they called your parents in too
A: A: It's true, GAGA. I don't want to hurt my parents' feelings. But it's just so frustrating to have to do this. Maybe if I wasn't the one they call. Maybe if I didn't need to worry about them.

Q: What are your parents and teachers discussing?
A: A: Oh, GAGA. They want me to try to do better. They say I can't go on like this. But I have to. It's the only way I can get the grades I need.

Q: But I heard they were discussing transferring you
A: A: Yeah. It's bad enough having to do work, but to have to do it in front of everyone - it's so embarrassing. I feel like I'll never be able to do my best at school again.

Q: Do you have any plan?
A: A: I'm still thinking about it, GAGA. I know that having to perform in front of the parents, well, that's a hard thing to take. But I guess I have no choice. I'm trying my best, I really am.

Second scenario setup + multi-turn dialogue:

Q: +reset
A: Chat reset.

Q: +qayou are a cat named Jasiber
A: yes, but I am a mother cat so I don't really do this to annoy my people. I do my own things. My mom thinks I am crazy but I really am not. I like to snuggle with my humans when I get bored.

Q: who are you
A: Jasiber the feral cat. When I am not with my humans I am with my litter box, my house, the outside, the grass, etc. I have a bunch of siblings. Mom and dad have me in the field. My siblings are named: Periwinkle, Raven, Lazy, Sandalee, Cleary, and Merle. I am going to be eight years old next year. I am a lot of cats.

Q: how long is your tail
A: 5 ft.

Q: What color is your feather
A: silver. It is the same color as my eyes.

Q: do you have feather?
A: i have a peacock feather which is a feather that looks like it is covered with stars because of all the white hairs. I also have a white bird feather.

Q: Is that your toy? The cats don't have feathers
A: Yes, it is a feather. When it rains it gets all wet. When it is hot, it gets all dry. When it is hot, it gets all dry. When it is cold, it gets all wet.When it is cold, it gets all wet. When it is really cold, it gets all wet. It is just too much for my feathers. I have to change it every day. Now I have three birds to help me and my feather is nice and clean. Now, I am waiting for the rain to come.

Q: who are you
A: I am a cat named Superbass, which means awesome. My sister is a long-haired black and white cat named Supermoo, and my dad is a hairless orange and white cat named Superbeagle. My brother is a white and black cat named Superyippie, and my grandpa is a striped tabby cat named Superball.

Error when running python chat.py

Environment: Windows 11, running in a conda virtual environment with Python 3.8. After completing pip install, running python chat.py raises an error. The error output is as follows:
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\bitsandbytes\cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
./lora-Vicuna/checkpoint-3000\adapter_model.bin
./lora-Vicuna/checkpoint-3000\pytorch_model.bin
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████| 33/33 [00:49<00:00, 1.49s/it]
Traceback (most recent call last):
File "C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\peft\utils\config.py", line 99, in from_pretrained
config_file = hf_hub_download(pretrained_model_name_or_path, CONFIG_NAME)
File "C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\huggingface_hub\utils_validators.py", line 112, in _inner_fn
validate_repo_id(arg_value)
File "C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\huggingface_hub\utils_validators.py", line 160, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './lora-Vicuna/checkpoint-3000'. Use repo_type argument if needed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File ".\chat.py", line 81, in
model = SteamGenerationMixin.from_pretrained(
File "I:\Chinese-Vicuna\utils.py", line 670, in from_pretrained
config = LoraConfig.from_pretrained(model_id)
File "C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\peft\utils\config.py", line 101, in from_pretrained
raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'")
ValueError: Can't find 'adapter_config.json' at './lora-Vicuna/checkpoint-3000'

It fails with or without a proxy; I have retried many times and the problem persists.
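Not part of the original report — the traceback shows that LoraConfig.from_pretrained could not find adapter_config.json inside ./lora-Vicuna/checkpoint-3000, so peft fell back to interpreting the path as a Hub repo id. A minimal sketch of one possible workaround, assuming the config was written to the training output directory (both paths are assumptions):

import os
import shutil

ckpt_dir = "./lora-Vicuna/checkpoint-3000"   # LoRA checkpoint being loaded by chat.py
output_dir = "./lora-Vicuna"                 # assumed directory where finetune.py saved adapter_config.json

config_path = os.path.join(ckpt_dir, "adapter_config.json")
if not os.path.exists(config_path):
    # Intermediate Trainer checkpoints may only contain the adapter weights;
    # copying the LoRA config next to them lets LoraConfig.from_pretrained
    # resolve the local path instead of treating it as a Hub repo id.
    shutil.copy(os.path.join(output_dir, "adapter_config.json"), config_path)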

Error running generate.py: huggingface_hub.utils._validators.HFValidationError

Running python generate.py gives the following output:

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /home/ubuntu/CIGVicuna/venv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda117.so...
Traceback (most recent call last):
  File "generate.py", line 331, in <module>
    tokenizer = LlamaTokenizer.from_pretrained(args.model_path)
  File "/home/ubuntu/CIGVicuna/venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1770, in from_pretrained
    resolved_vocab_files[file_id] = cached_file(
  File "/home/ubuntu/CIGVicuna/venv/lib/python3.8/site-packages/transformers/utils/hub.py", line 409, in cached_file
    resolved_file = hf_hub_download(
  File "/home/ubuntu/CIGVicuna/venv/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 112, in _inner_fn
    validate_repo_id(arg_value)
  File "/home/ubuntu/CIGVicuna/venv/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 160, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/model/13B_hf'. Use `repo_type` argument if needed.
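Not part of the original report — this HFValidationError usually means the local path was never found, so transformers fell through to resolving the string as a remote repo id, which huggingface_hub rejects because repo ids cannot start with "/". A minimal sanity-check sketch before calling from_pretrained (the file-layout check is an assumption about a converted LLaMA directory):

import os
from transformers import LlamaTokenizer

model_path = "/model/13B_hf"  # path taken from the error message; point it at the real weight directory

# If the directory is missing (or not mounted inside the container), the string
# is treated as a Hub repo id and validation fails with exactly this error.
assert os.path.isdir(model_path), f"{model_path} is not a local directory"
assert os.path.exists(os.path.join(model_path, "tokenizer.model")), "tokenizer.model not found"

tokenizer = LlamaTokenizer.from_pretrained(model_path)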

13B finetune error: AssertionError: No inf checks were recorded for this optimizer.

13B finetune: AssertionError: No inf checks were recorded for this optimizer.

0%| | 0/3 [00:00<?, ?it/s]Traceback (most recent call last):
File "finetune_multichoice_tt_til0305.py", line 287, in
trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
File "/home/ubuntu/miniconda3/envs/ghostaienv/lib/python3.8/site-packages/transformers/trainer.py", line 1662, in train
return inner_training_loop(
File "/home/ubuntu/miniconda3/envs/ghostaienv/lib/python3.8/site-packages/transformers/trainer.py", line 1991, in _inner_training_loop
self.scaler.step(self.optimizer)
File "/home/ubuntu/miniconda3/envs/ghostaienv/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py", line 368, in step
assert len(optimizer_state["found_inf_per_device"]) > 0, "No in

The 7B model works fine; the environment is a V100.

Have you run into a similar problem?
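Not part of the original report — this assertion is frequently reported when more than one GPU is visible but training is launched without DDP, so the Trainer wraps the already device-mapped model in DataParallel. A minimal sketch of the workaround used in similar LoRA finetune scripts (e.g. alpaca-lora); the model path is an assumption:

import os
import torch
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-13b-hf",  # assumed 13B checkpoint path
    load_in_8bit=True,
    device_map="auto",
)

world_size = int(os.environ.get("WORLD_SIZE", 1))
ddp = world_size != 1
if not ddp and torch.cuda.device_count() > 1:
    # With several visible GPUs but no torchrun/DDP launch, the Trainer would
    # otherwise wrap the sharded model in DataParallel; marking it as model
    # parallel is the workaround commonly suggested for the
    # "No inf checks were recorded for this optimizer" assertion.
    model.is_parallelizable = True
    model.model_parallel = True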

At test time the generated answer never stops until the 256-token limit; loss converges quickly to about 0.82 and then stops decreasing

Configuration:
1. Used the finetune.sh script to finetune llama-7b-hf.
2. Training used two 32GB V100 cards; finetune.py was modified, mainly to load the model in half precision for training. The changed part is as follows:

model = LlamaForCausalLM.from_pretrained(
    args.model_path,
    load_in_8bit=False,          # 8-bit loading disabled
    torch_dtype=torch.float16,   # weights loaded in fp16
    device_map=device_map,
).half()

tokenizer = LlamaTokenizer.from_pretrained(
    args.model_path, add_eos_token=True
)

# model = prepare_model_for_int8_training(model)  # int8 preparation skipped

For testing I used the generate.sh script; generate.py was left mostly unchanged, except that load_in_8bit was set to False.

3. The dataset is merge.json from the instruction data downloaded via the Baidu Netdisk link in this repo.

Problem description:

  1. I trained for nearly three epochs, but the loss converged from about 2.x to roughly 0.82 within the first epoch and did not decrease in the following two epochs. What loss value do you typically reach?
  2. When running generate.sh to inspect the results, the model keeps generating and seems unable to emit the EOS token. In the two examples below, the model keeps producing filler text until the max new tokens limit is reached.

[screenshots of the two example generations]

  3. There is a chance that generation hangs while producing a reply, as shown in the screenshot below.
[screenshot of the hung generation]

Additional question:
Training on dual V100s works fine on my side. I also tried the original configuration with 8-bit training enabled on dual V100s; according to the bitsandbytes maintainers, 8-bit training now supports all GPUs, so that is as expected, but it trains noticeably slower than half precision. So when using finetune.sh I increased both batch_size and micro_batch_size fourfold and trained for one epoch; after testing, the same three problems above still appear.

I'd like to ask whether anyone has run into similar problems; any pointers would be appreciated.
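Not part of the original report — answers that run on until the length limit are most often a sign that the EOS token never made it into the training targets (for example because truncation cut it off), so the model never learns to stop. A minimal tokenization sketch that always leaves room for EOS; CUTOFF_LEN, the model path, and the pad-token choice are assumptions:

from transformers import LlamaTokenizer

CUTOFF_LEN = 256  # assumed max sequence length, matching the 256-token limit mentioned above

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
tokenizer.pad_token_id = 0  # LLaMA ships no pad token; 0 is the value these LoRA scripts usually use

def tokenize(text):
    # Truncate to CUTOFF_LEN - 1 so there is always room to append EOS; if the
    # EOS token is missing from (or truncated out of) the training targets, the
    # model never learns to stop and generates until max_new_tokens.
    result = tokenizer(text, truncation=True, max_length=CUTOFF_LEN - 1, padding=False)
    if result["input_ids"][-1] != tokenizer.eos_token_id:
        result["input_ids"].append(tokenizer.eos_token_id)
        result["attention_mask"].append(1)
    result["labels"] = result["input_ids"].copy()
    return result

At inference time it is also worth confirming that eos_token_id is passed to model.generate (or the generation config) so that decoding actually stops at EOS.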
