
Firefly: a training tool for large language models, supporting Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models


firefly's Introduction

Firefly: A One-Stop Training Tool for Large Language Models


Welcome to join the Firefly LLM technical discussion group: follow our official WeChat account and click the join-group button.

You are also welcome to follow our Zhihu account for discussion: 红雨瓢泼

Project Overview

Firefly is an open-source training project for large language models, supporting pre-training, instruction fine-tuning, and DPO for mainstream LLMs, including but not limited to Qwen2, Yi-1.5, Llama3, Gemma, Qwen1.5, MiniCPM, Llama, InternLM, Baichuan, ChatGLM, Yi, Deepseek, Qwen, Orion, Ziya, Xverse, Mistral, Mixtral-8x7B, Zephyr, Vicuna, and Bloom. The project supports full-parameter training as well as efficient LoRA and QLoRA training, covering pre-training, SFT, and DPO. If your training resources are limited, we strongly recommend instruction fine-tuning with QLoRA: we have validated the effectiveness of this method on the Open LLM Leaderboard and achieved very good results.

🔔 Highlights of this project:

  • 📗 Pre-training, instruction fine-tuning, and DPO, with full-parameter training as well as efficient LoRA and QLoRA training. Different models are trained via configuration files, so even beginners can get started quickly.
  • 📗 Optional Unsloth acceleration, which speeds up training and saves GPU memory.
  • 📗 Support for most mainstream open-source LLMs, such as Llama3, Gemma, MiniCPM, Llama, InternLM, Baichuan, ChatGLM, Yi, Deepseek, Qwen, Orion, Ziya, Xverse, Mistral, Mixtral-8x7B, Zephyr, Vicuna, and Bloom; training uses the same chat template as each model's official chat variant.
  • 📗 Curated and open-sourced instruction fine-tuning datasets: firefly-train-1.1M, moss-003-sft-data, ultrachat, WizardLM_evol_instruct_V2_143k, school_math_0.25M.
  • 📗 Open-sourced weights for the Firefly series of instruction-tuned models.
  • 📗 Validated the effectiveness of the QLoRA training pipeline on the Open LLM Leaderboard.

The current version adapts the templates of the various chat models, and the code has changed substantially. If you prefer the earlier version, you can download the code at v0.0.1-alpha.

News

  • 🔥 The Unsloth x Qwen2 PR adding the Qwen2 model structure has been merged into Unsloth's master branch.
  • 🔥 Extended Unsloth to support the Qwen2 model structure, including the dense models of the Qwen1.5 series. Repository: Unsloth; technical write-up.
  • 🔥 Added Unsloth support: training Llama3-8B needs only 7.75 GB of GPU memory, cutting memory usage by 42.58% and training time by 30.72%. See the training-gain evaluation.
  • 🔥 Overhauled the training pipeline to support full-parameter training, LoRA, and QLoRA, across pre-training, instruction fine-tuning, and DPO. The SFT and DPO templates are aligned with the original chat models, and most open-source models are supported, including Gemma, MiniCPM, Llama, InternLM, Baichuan, ChatGLM, Yi, Deepseek, Qwen, Orion, Ziya, Xverse, Mistral, Mixtral-8x7B, Zephyr, Vicuna, Bloom, etc.
  • 🔥 Released the weights of firefly-mixtral-8x7b, which scores 70.34 on the 🤗 Open LLM Leaderboard, surpassing Yi-34B, Llama2-65B-Chat, Qwen-14B, Vicuna-33B-v1.3, and other models.
  • 🔥 Released LongQLoRA (technical report), which efficiently extends LLaMA's context length: on a single 32 GB V100 it extends Llama2 to 8k (or even 12k) with only 1,000 fine-tuning steps, achieving better perplexity than LongLoRA on the PG19 and Proof-pile datasets and slightly beating MPT-7B-8K on PG19.
  • 🔥 Released the Firefly-LLaMA2-Chinese project, trained efficiently on 4×V100 through Chinese vocabulary expansion, continued pre-training, and multi-turn instruction fine-tuning. It surpasses Linly, Yayi, and FlagAlpha on CMMLU and is roughly on par with Ziya and Chinese-Alpaca.
  • 🔥 Released firefly-baichuan2-13b, which ranks 8th on OpenCompass's CMMLU leaderboard with a score of 56.83, about 1.57 points below Baichuan's official chat model.
  • 🔥 Released firefly-llama-30b, ranked 10th among models of its size on the 🤗 Open LLM Leaderboard with a score of 64.83.
  • 🔥 Released firefly-llama2-13b, ranked 3rd among models of its size on the 🤗 Open LLM Leaderboard with a score of 62, about 0.5 points below the top entry.
  • 🔥 Released firefly-llama-13b, which reproduces Vicuna-13B on Hugging Face's Open LLM Leaderboard, scoring about 0.2 points above Vicuna-13b-1.1 and about 0.5 points below llama-2-13b-chat.
  • LLMPruner: a pruning tool for large language models, with open-sourced weights for the pruned Bloom models.

Related Projects

  • Firefly-LLaMA2-Chinese: a Chinese Llama2 model, built by expanding Llama2's vocabulary with Chinese tokens, continued pre-training, and instruction fine-tuning.
  • LongQLoRA: a context-length extension project that can extend LLaMA-13B to 8192 tokens on a single V100, with performance approaching MPT-8K.
  • LLMPruner: prunes Bloom's vocabulary to reduce the model's parameter count.

Technical Blog

Model Evaluation

Open LLM Leaderboard Evaluation

The evaluation results below come from Hugging Face's Open LLM Leaderboard. All of our models were trained with the QLoRA scripts, using only 1-2 V100 GPUs.

Model Average ARC HellaSwag MMLU TruthfulQA
firefly-mixtral-8x7b 70.16 68.09 85.76 71.49 55.31
Yi-34B-Chat 69.97 65.44 84.16 74.9 55.37
firefly-llama-30b 64.83 64.25 83.64 58.23 53.2
falcon-40b-instruct 63.47 61.6 84.31 55.45 52.52
guanaco-33b 62.98 62.46 84.48 53.78 51.22
firefly-llama2-13b-v1.2 62.17 60.67 80.46 56.51 51.03
firefly-llama2-13b 62.04 59.13 81.99 55.49 51.57
vicuna-13b-v1.5 61.63 56.57 81.24 56.67 51.51
mpt-30b-chat 61.21 58.7 82.54 51.16 52.42
wizardlm-13b-v1.2 60.79 59.04 82.21 54.64 47.27
vicuna-13b-v1.3 60.01 54.61 80.41 52.88 52.14
llama-2-13b-chat 59.93 59.04 81.94 54.64 44.12
vicuna-13b-v1.1 59.21 52.73 80.14 51.9 52.08
guanaco-13b 59.18 57.85 83.84 48.28 46.73

Model List

🔔 Using this project's training code and the training data described above, we trained and open-sourced the following model weights.

Chinese models:

Model Base Model Training Length
firefly-baichuan2-13b baichuan-inc/Baichuan2-13B-Base 1024
firefly-baichuan-13b baichuan-inc/Baichuan-13B-Base 1024
firefly-qwen-7b Qwen/Qwen-7B 1024
firefly-chatglm2-6b THUDM/chatglm2-6b 1024
firefly-internlm-7b internlm/internlm-7b 1024
firefly-baichuan-7b baichuan-inc/baichuan-7B 1024
firefly-ziya-13b YeungNLP/Ziya-LLaMA-13B-Pretrain-v1 1024
firefly-bloom-7b1 bigscience/bloom-7b1 1024
firefly-bloom-2b6-v2 YeungNLP/bloom-2b6-zh 512
firefly-bloom-2b6 YeungNLP/bloom-2b6-zh 512
firefly-bloom-1b4 YeungNLP/bloom-1b4-zh 512

English models:

Model Base Model Training Length
firefly-mixtral-8x7b mistralai/Mixtral-8x7B-v0.1 1024
firefly-llama-30b huggyllama/llama-30b 1024
firefly-llama2-13b-v1.2 NousResearch/Llama-2-13b-hf 1024
firefly-llama2-13b NousResearch/Llama-2-13b-hf 1024
firefly-llama-13b-v1.2 huggyllama/llama-13b 1024
firefly-llama-13b huggyllama/llama-13b 1024

Training Data

Instruction Fine-Tuning Data

🔔 The project currently curates the following instruction datasets, converted into a unified data format:

Dataset Description
firefly-train-1.1M Data we collected for 23 common Chinese NLP tasks, plus many samples related to Chinese culture, such as couplets, poetry composition, classical-Chinese translation, prose, and Jin Yong novels. For each task, several instruction templates were written by hand to ensure quality and diversity; 1.15M samples in total.
moss-003-sft-data Chinese-English multi-turn dialogue data open-sourced by Fudan University's MOSS team, with 1M+ samples.
ultrachat English multi-turn dialogue data open-sourced by Tsinghua University, with 1.4M+ samples.
WizardLM_evol_instruct_V2_143k An English instruction fine-tuning dataset open-sourced by the WizardLM project. Instructions are evolved with the Evol-Instruct method to increase their complexity and improve the model's ability to follow complex instructions; 143k samples.
school_math_0.25M Math instruction data open-sourced by the BELLE project, with 250k samples.
shareAI/CodeChat Mainly samples on logical reasoning, code Q&A, and code generation.
shareAI/ShareGPT-Chinese-English-90k A high-quality parallel Chinese-English human-machine Q&A dataset covering real, complex user questions.
ultrachat_200k English instruction data open-sourced by the Zephyr project, cleaned from the ultrachat data.
ultrafeedback_binarized An English preference dataset, usable for DPO training.

You can select, sample, or combine the data above as needed. For training a Chinese model, we suggest trying the moss data; for English, try the ultrachat and WizardLM data.

Training data is in jsonl format; each line follows the format below. The conversation field is required; other fields can be added or removed as needed. See the data/dummy_data.jsonl file in the project for reference.

{
    "conversation_id":44,
    "category":"Brainstorming",
    "conversation":[
        {
            "human":"怎么能够有效地提升个人能力?",
            "assistant":"提升个人能力可以通过以下几个方面实现:\n\n1. 学习新知识和技能:阅读相关书籍、参加课程和工作坊、观看教育视频等学习方式,可以帮助你掌握新的知识和技能,从而提升个人能力。\n\n2. 持续练习和实践:不断地练习和实践所学的知识和技能,可以帮助你更好地掌握和应用这些技能,提高你的表现和效率。\n\n3. 寻求反馈和改进:寻求他人的反馈和建议可以帮助你发现自己的不足之处,进一步完善和改进自己的能力。\n\n4. 建立良好的工作和生活习惯:建立良好的工作和生活习惯,如合理安排时间、注意健康、保持积极态度等,可以提高你的工作效率和生活品质,从而提升个人能力。\n\n5. 社交和交流:与他人交流和合作,可以帮助你更好地了解他人的观点和经验,从而提升自己的能力和见识。"
        },
        {
            "human":"对于初学者来说,如何快速掌握编程技能?",
            "assistant":"以下是几个建议,可以帮助初学者快速掌握编程技能:\n\n1. 学习基础知识:建议先学习编程语言的基础知识和语法,例如变量、数据类型、控制结构、函数等。可以通过在线课程、编程书籍、视频教程等方式学习。\n\n2. 实践编程:进行实际的编程练习是非常重要的,可以通过参加编程挑战、编写小项目等方式来进行实践。\n\n3. 寻求反馈:可以寻求他人的反馈和建议,例如在线编程社区、编程导师等,他们可以帮助你发现问题并及时纠正。\n\n4. 阅读优秀代码:阅读和分析优秀的代码可以帮助你学习其他程序员的编程思路和技巧,从而提高自己的编程能力。\n\n5. 不断学习和更新:编程技术和工具在不断更新和发展,因此需要不断学习新的知识和技能,例如学习新的编程语言、框架、工具等,以跟上技术的发展和变化。\n\n总之,学习编程需要耐心和坚持,并且需要不断地实践和学习。通过以上建议,可以帮助初学者快速掌握编程技能。"
        },
        {
            "human":"除了参加编程挑战和阅读代码,还有哪些方式可以提升自己的编程能力呢?",
            "assistant":"除了参加编程挑战和阅读代码,还有以下几种方式可以提升自己的编程能力:\n\n1. 参与开源项目:加入一个开源项目,可以让你了解到实际的软件开发流程和团队合作方式,并且可以与其他开发人员合作,学习和分享编程技巧和经验。\n\n2. 进行代码重构:对已经编写的代码进行重构,可以帮助你了解更好的编程实践和设计模式,从而提高代码的可读性和可维护性。\n\n3. 练习算法和数据结构:算法和数据结构是编程中的重要基础,练习算法和数据结构可以帮助你了解不同的数据结构和算法,从而更好地解决实际问题。\n\n4. 学习软件工程和架构:学习软件工程和架构的知识,可以帮助你了解如何设计和构建可靠、可扩展和可维护的软件系统。\n\n5. 参加编程社区和论坛:参加编程社区和论坛可以让你与其他开发人员分享和讨论编程经验和技巧,了解最新的技术趋势和解决方案。\n\n总之,提升编程能力需要不断地学习和实践,并与其他开发人员和编程专家进行交流和合作,以获取更多的经验和知识。"
        },
        {
            "human":"你有没有推荐的编程挑战网站?",
            "assistant":"当然,以下是一些值得推荐的编程挑战网站:\n\n1. HackerRank:HackerRank 是一个全球领先的技术招聘平台,提供数百种编程挑战和比赛,包括算法、数据结构、数学、SQL、AI 等多个技术领域。\n\n2. LeetCode:LeetCode 是一个在线编程学习和实践平台,提供数千道编程题目,主要涵盖算法和数据结构。\n\n3. Codeforces:Codeforces 是一个全球知名的编程竞赛网站,提供数百个比赛和练习题目,主要涵盖算法和数据结构。\n\n4. Codewars:Codewars 是一个在线编程学习和实践平台,提供数千道编程题目,主要涵盖算法、数据结构、面向对象编程等多个技术领域。\n\n5. Project Euler:Project Euler 是一个非营利性的在线编程挑战平台,提供一系列数学和计算机科学问题,需要用编程语言进行解答。\n\n以上这些编程挑战网站可以帮助你提高编程能力,并且可以与其他编程爱好者进行交流和竞争。"
        },
        {
            "human":"在学习编程的过程中,如何避免陷入困境和放弃呢?",
            "assistant":"学习编程是一个需要长期坚持和不断学习的过程,以下是一些避免陷入困境和放弃的方法:\n\n1. 制定合理的学习计划:制定合理的学习计划,包括学习时间、学习内容、目标等,可以帮助你更好地管理时间和精力,避免学习过程中的松懈和分心。\n\n2. 寻找合适的学习资源:选择适合自己的学习资源,例如在线课程、编程书籍、视频教程等,可以让你更好地了解和掌握编程知识和技能。\n\n3. 寻求帮助和支持:在学习过程中,遇到问题和困难是很正常的,可以寻求他人的帮助和支持,例如参加编程社区、找到编程导师等。\n\n4. 进行实践和项目:实践和项目是学习编程的重要组成部分,可以帮助你更好地了解和掌握编程技能,同时也可以提高学习的兴趣和动力。\n\n5. 坚持并保持兴趣:坚持学习和保持兴趣是学习编程的关键。可以通过参加编程社区、参加编程竞赛、与其他编程爱好者交流等方式来保持兴趣和动力。\n\n总之,学习编程需要耐心和坚持,并需要不断学习和实践。通过以上方法可以帮助你避免陷入困境和放弃。"
        }
    ]
}
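
Below is a minimal sketch, in Python, of how a jsonl file in this format could be read and unpacked into (human, assistant) turns. It is illustrative only and is not the project's actual data loader; the function name load_conversations is made up for this example.

import json

def load_conversations(path):
    # Each line of the jsonl file is one record like the example above;
    # only the "conversation" field is required.
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            for turn in record["conversation"]:
                yield turn["human"], turn["assistant"]

# Example: iterate over the sample file shipped with the repository.
for human, assistant in load_conversations("data/dummy_data.jsonl"):
    print(human[:20], "->", assistant[:20])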

[Figure: data distribution of firefly-train-1.1M]

Pre-training Data

For the data format, see the data/pretrain/dummy_pretrain.jsonl file in the project.

DPO Data

For the data format, see the data/dummy_dpo.jsonl file in the project.
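
The authoritative schema is defined by that file. As an illustration only, preference datasets such as ultrafeedback_binarized typically pair one prompt with a preferred and a rejected response, along these lines (the field names below are an assumption, not the project's confirmed format):

{
    "prompt": "What is the capital of France?",
    "chosen": "The capital of France is Paris.",
    "rejected": "France."
}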

Model Training

If you hit an error during training, check the FAQ first.

The components used during training are factored out for later extension and optimization; see the implementations in the component directory. Training configurations are stored in the train_args directory for unified management. You can look up the training configs for different models there and modify or add to them as needed.

Environment Setup

requirements.txt pins the versions of the main Python packages; install them with the command below. Notes:

  • For most models, we debugged and trained with torch==1.13 and transformers==4.36, but some newer models require a newer transformers version:
    • Qwen1.5 requires transformers upgraded to 4.37.
    • Gemma requires transformers upgraded to 4.38.1 and torch==2.0.0.
  • To train Baichuan2 with QLoRA, install torch==2.0 and uninstall xformers and apex.
  • To train Qwen with QLoRA, uninstall flash-attn, or training will fail.

pip install -r requirements.txt

To enable Unsloth, it is recommended to install or update the following Python packages:

pip install git+https://github.com/unslothai/unsloth.git
pip install bitsandbytes==0.43.1
pip install peft==0.10.0
pip install torch==2.2.2
pip install xformers==0.0.25.post1

To train Qwen1.5 with Unsloth, install the following package:

pip install git+https://github.com/yangjianxin1/unsloth.git

Loss Function

For pre-training, we use the classic autoregressive loss: the token at every position participates in the loss computation.

For instruction fine-tuning, we compute the loss only on the assistant's reply portion, as sketched below.
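
A common way to implement this (a minimal sketch, not the project's actual SFTDataCollator) is to copy the input ids into labels and set every non-assistant position to -100, the index that PyTorch's cross-entropy loss ignores by default:

import torch

IGNORE_INDEX = -100  # ignored by torch.nn.CrossEntropyLoss by default

def build_labels(input_ids, assistant_mask):
    # assistant_mask[i] is True where token i belongs to an assistant reply.
    labels = input_ids.clone()
    labels[~assistant_mask] = IGNORE_INDEX
    return labels

input_ids = torch.tensor([101, 7, 42, 13, 9, 102])                   # toy token ids
assistant_mask = torch.tensor([0, 0, 0, 1, 1, 1], dtype=torch.bool)  # last 3 tokens are the reply
print(build_labels(input_ids, assistant_mask))
# tensor([-100, -100, -100,   13,    9,  102])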

Parameter Reference

📝 The train_args directory stores the configuration files for training different models in different modes. The main parameters are:

  • output_dir: training output directory, storing checkpoints, the tokenizer, tensorboard logs, etc.
  • model_name_or_path: local directory of the pre-trained model, or its model name on Hugging Face.
  • train_file: path to the training data. For SFT it must be a file; data/dummy_data.jsonl can be used for debugging. For pre-training it must be a directory; the script automatically scans all jsonl files under it.
  • template_name: name of the template used during instruction fine-tuning. See the component/template.py file for the available template names.
  • num_train_epochs: number of training epochs. If the dataset is large enough, a single epoch is generally recommended.
  • tokenize_num_workers: number of tokenization workers during pre-training; defaults to 10.
  • deepspeed: deepspeed training configuration file. Full-parameter training uses deepspeed; see the deepspeed documentation for its parameter configuration.
  • train_mode: training mode, one of full, lora, or qlora; defaults to qlora.
  • task_type: task type, one of pretrain, sft, or dpo; defaults to sft.
  • per_device_train_batch_size: batch size per GPU.
  • gradient_accumulation_steps: number of gradient accumulation steps. global batch = num_gpus * per_device_train_batch_size * gradient_accumulation_steps.
  • gradient_checkpointing: enable this if GPU memory is tight. It trades time for space: activations are not cached, so a second forward pass is performed, saving memory.
  • learning_rate: learning rate. For full-parameter fine-tuning, a smaller value such as 1e-5 or 5e-6 is recommended.
  • max_seq_length: maximum sequence length during training. Set it according to your hardware; longer sequences use more GPU memory.
  • max_prompt_length: maximum prompt length for DPO.
  • logging_steps: how often, in steps, the training loss is logged.
  • save_steps: how often, in steps, a checkpoint is saved.
  • save_total_limit: maximum number of checkpoints kept in output_dir; when exceeded, the oldest is deleted.
  • lr_scheduler_type: learning-rate schedule.
  • warmup_steps: number of warm-up steps over which the learning rate grows to its target value.
  • optim: optimizer. For full-parameter fine-tuning, adamw_hf is recommended.
  • seed: random seed, for reproducing results.
  • fp16: whether to use fp16 mixed precision; recommended on V100.
  • bf16: whether to use bf16 mixed precision; recommended on A100.
  • use_unsloth: whether to use Unsloth. Unsloth currently supports only some models, e.g. Llama3, Mistral, Gemma, and TinyLlama; see Unsloth for details.

When training with QLoRA, the following parameters also need to be set:

  • lora_rank: the rank of the LoRA matrices, typically 8, 16, 32, or 64; the QLoRA paper uses 64. A larger rank means more trainable parameters and usually better results, but requires more GPU memory.
  • lora_alpha: the scaling parameter in QLoRA; 16 or 32 is usually sufficient.
  • lora_dropout: dropout rate for the LoRA weights.

The deepspeed parameter configuration can be modified as needed. A sample configuration follows.
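
Putting the parameters above together, a QLoRA SFT configuration file could look like the following. The values are illustrative only; consult the files actually shipped under train_args/ for each model's recommended settings:

{
    "output_dir": "output/firefly-sft-qlora",
    "model_name_or_path": "baichuan-inc/baichuan-7B",
    "train_file": "./data/dummy_data.jsonl",
    "template_name": "default",
    "train_mode": "qlora",
    "task_type": "sft",
    "num_train_epochs": 1,
    "per_device_train_batch_size": 4,
    "gradient_accumulation_steps": 4,
    "learning_rate": 2e-4,
    "max_seq_length": 1024,
    "logging_steps": 100,
    "save_steps": 500,
    "save_total_limit": 1,
    "lr_scheduler_type": "constant_with_warmup",
    "warmup_steps": 100,
    "lora_rank": 64,
    "lora_alpha": 16,
    "lora_dropout": 0.05,
    "gradient_checkpointing": true,
    "optim": "paged_adamw_32bit",
    "seed": 42,
    "fp16": true
}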

Start Training

💻 Full-parameter pre-training; replace {num_gpus} with the number of GPUs:

deepspeed --num_gpus={num_gpus} train.py --train_args_file train_args/pretrain/full/bloom-1b1-pretrain-full.json

💻 Full-parameter instruction fine-tuning; replace {num_gpus} with the number of GPUs:

deepspeed --num_gpus={num_gpus} train.py --train_args_file train_args/sft/full/bloom-1b1-sft-full.json

💻 Single-GPU QLoRA pre-training:

python train.py --train_args_file train_args/pretrain/qlora/yi-6b-pretrain-qlora.json

💻 Single-GPU QLoRA instruction fine-tuning:

python train.py --train_args_file train_args/sft/qlora/yi-6b-sft-qlora.json

💻 Multi-GPU QLoRA pre-training:

torchrun --nproc_per_node={num_gpus} train.py --train_args_file train_args/pretrain/qlora/yi-6b-pretrain-qlora.json

💻 Multi-GPU QLoRA instruction fine-tuning:

torchrun --nproc_per_node={num_gpus} train.py --train_args_file train_args/sft/qlora/yi-6b-sft-qlora.json

💻 Single-GPU QLoRA DPO training:

python train.py --train_args_file train_args/sft/qlora/minicpm-2b-dpo-qlora.json

Model Usage

Weight Merging

If you trained with LoRA or QLoRA, the project saves only the adapter's weights and configuration files, so the adapter weights must be merged with the base model. See script/merge_lora.py; a sketch of the idea follows.
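
For reference, merging can be done with peft's merge_and_unload. The sketch below is equivalent in spirit to the merge script but is not its exact code, and the paths are placeholders:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "baichuan-inc/baichuan-7B"       # base model the adapter was trained on
adapter = "output/firefly-sft-qlora"    # directory containing the saved adapter
save_dir = "output/firefly-sft-merged"

tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)
model = PeftModel.from_pretrained(model, adapter)
model = model.merge_and_unload()        # fold the LoRA weights into the base model
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)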

Model Inference

We provide an interactive multi-turn chat script; see the script/chat directory. It works with every model trained by this project. The template_name set in the script must match the template_name used when the model was trained.

cd script/chat
python chat.py

Generation parameters in the script such as top_p, temperature, repetition_penalty, and do_sample have a large effect on the output; tune them for your own use case.

The inference script also supports running with a base model plus adapter directly; the drawback is that the weights must be merged on every launch, which takes quite a while.

4-bit inference is supported as well: it needs far less GPU memory, with slightly lower quality. A loading sketch follows.
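
As an illustration of 4-bit loading (a sketch using the standard transformers BitsAndBytesConfig API, not the chat script's exact code; the checkpoint path is a placeholder):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "output/firefly-sft-merged"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))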

FAQ

Q1: How do I resolve OOM?

If you run out of GPU memory, reduce parameters such as per_device_train_batch_size and max_seq_length. You can also set gradient_checkpointing=true, which greatly reduces memory usage but slows training somewhat.

Q2: Package installation errors

requirements.txt pins the version of each Python package:

pip install -r requirements.txt

Q3: How do I train on specific GPUs?

The following selects GPUs 0 and 1 for training:

CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node={num_gpus} train_qlora.py --train_args_file train_args/qlora/baichuan-7b-sft-qlora.json

Q4: Training Baichuan2 fails

Training Baichuan2 requires torch==2.0 with xformers and apex uninstalled; otherwise you will see:

RuntimeError: No such operator xformers::efficient_attention_forward_generic - did you forget to build xformers with `python setup.py develop`?

Q5: Training Qwen fails

QLoRA training of Qwen requires uninstalling flash-attn; otherwise you will see:

assert all((i.dtype in [torch.float16, torch.bfloat16] for i in (q, k, v))) 

Q6: After SFT, Qwen-Base and Yi-Base cannot generate <|im_end|> and never stop

This problem is widely reported in the issues of Qwen's official repository. When training Qwen-Base or Yi-Base, setting template_name="default" avoids it. SFT on Qwen-Chat and Yi-Chat does not exhibit the problem; set template_name to "qwen" and "yi" respectively.

Note: this problem does not exist in Qwen1.5.

Limitations and Usage Restrictions

Due to factors such as limited model size and the degree to which the training data was cleaned, the models open-sourced by this project may have the following limitations:

  • They are prone to incorrect responses on factual knowledge.
  • Since they have not been through harmlessness fine-tuning, they may produce discriminatory, harmful, or unethical statements.
  • Their abilities in coding and reasoning are still lacking.

Given these limitations, we require that the code, data, and models of this project not be used for purposes harmful to society, and that they comply with the commercial licenses of the base models.

Citation

If you use the data, code, or models of this project, please cite it.

@misc{Firefly,
  author = {Jianxin Yang},
  title = {Firefly(流萤): 中文对话式大语言模型},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/yangjianxin1/Firefly}},
}



firefly's Issues

An error during QLoRA training

ValueError: FP16 Mixed precision training with AMP or APEX (--fp16) and FP16 half precision evaluation (--fp16_full_eval) can only be used on CUDA devices.
How can this error be solved?

information about the data

Can you provide more information about the data source?
Is the dataset for each subtask being published for the first time, or is it drawn from existing datasets?

self.deepspeed is not recognized inside the trainer

File ~/autodl-fs/Firefly/component/trainer.py:88, in Trainer._save_checkpoint(self, model, trial, metrics)
     86         if self.do_grad_scaling:
     87             torch.save(self.scaler.state_dict(), os.path.join(output_dir, SCALER_NAME))
---> 88 elif self.args.should_save and not self.deepspeed:
     89     # deepspeed.save_checkpoint above saves model/optim/sched
     90     torch.save(self.optimizer.state_dict(), os.path.join(output_dir, OPTIMIZER_NAME))
     91     with warnings.catch_warnings(record=True) as caught_warnings:

AttributeError: 'LoRATrainer' object has no attribute 'deepspeed'

For example, when saving an intermediate checkpoint every 50 steps, this error is raised. I suspect it doesn't occur on the author's side (the code references self.deepspeed in several places). Does making the trainer carry a deepspeed attribute require a particular version of transformers?

Multi-GPU training error

Hi, I ran into the following error with multi-GPU training and don't know how to solve it. Looking forward to your reply, thanks!:
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 53738 closing signal SIGHUP
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 53740 closing signal SIGHUP
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 53741 closing signal SIGHUP
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 53742 closing signal SIGHUP
Traceback (most recent call last):
File "/home/huchangyou/anaconda3/envs/firefly/bin/torchrun", line 8, in
sys.exit(main())
File "/home/huchangyou/anaconda3/envs/firefly/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/init.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/huchangyou/anaconda3/envs/firefly/lib/python3.9/site-packages/torch/distributed/run.py", line 762, in main
run(args)
File "/home/huchangyou/anaconda3/envs/firefly/lib/python3.9/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/home/huchangyou/anaconda3/envs/firefly/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 132, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/huchangyou/anaconda3/envs/firefly/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 237, in launch_agent
result = agent.run()
File "/home/huchangyou/anaconda3/envs/firefly/lib/python3.9/site-packages/torch/distributed/elastic/metrics/api.py", line 129, in wrapper
result = f(*args, **kwargs)
File "/home/huchangyou/anaconda3/envs/firefly/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py", line 709, in run
result = self._invoke_run(role)
File "/home/huchangyou/anaconda3/envs/firefly/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py", line 850, in _invoke_run
time.sleep(monitor_interval)
File "/home/huchangyou/anaconda3/envs/firefly/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/api.py", line 62, in _terminate_process_handler
raise SignalException(f"Process {os.getpid()} got signal: {sigval}", sigval=sigval)
torch.distributed.elastic.multiprocessing.api.SignalException: Process 53672 got signal: 1

The results are a bit wild

text = input('User:')
User:床前明月光的下一句是什么
text = '<s>{}</s>'.format(text)
input_ids = tokenizer(text, return_tensors="pt").input_ids
input_ids = input_ids.to(device)
outputs = model.generate(input_ids, max_new_tokens=200, do_sample=True, top_p=0.85, temperature=0.35,
... repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
rets = tokenizer.batch_decode(outputs)
output = rets[0].strip().replace(text, "").replace('</s>', "")
print("Firefly:{}".format(output))
Firefly:“你睡吧,我走了”

[BUG] QLoRA fine-tuning of baichuan-7B fails: matrix dimension mismatch

SFT on my own dataset fails with a matrix dimension mismatch, and the same error occurs with the sample data (error screenshot omitted).

Configuration file:

{
    "output_dir": "output/firefly-baichuan-7b-qlora-sft",
    "model_name_or_path": "baichuan-inc/baichuan-7B",
    "train_file": "./data/dummy_data.jsonl",
    "num_train_epochs": 20,
    "per_device_train_batch_size": 16,
    "gradient_accumulation_steps": 1,
    "learning_rate": 2e-4,
    "max_seq_length": 300,
    "logging_steps": 100,
    "save_steps": 100,
    "save_total_limit": 1,
    "lr_scheduler_type": "constant_with_warmup",
    "warmup_steps": 0,
    "lora_rank": 64,
    "lora_alpha": 16,
    "lora_dropout": 0.05,

    "gradient_checkpointing": true,
    "disable_tqdm": false,
    "optim": "paged_adamw_32bit",
    "seed": 42,
    "fp16": true,
    "report_to": "tensorboard",
    "dataloader_num_workers": 0,
    "save_strategy": "steps",
    "weight_decay": 0,
    "max_grad_norm": 0.3,
    "remove_unused_columns": false
}

Request for collaboration on building a Chinese corpus

Hello, I am one of the authors of the open-source project 闻达 (Wenda). We are currently working with ChatRWKV to improve keyword-extraction capabilities in future models. We have nearly 3,000 "one-sentence question" samples oriented toward concrete business scenarios, and we hope to collaborate with you on expanding the corpus.
Peng Bo and I also welcome you to join the RWKV and Wenda technical discussion groups: 253297302, 325154699

arrow::fs::FinalizeS3 was not called even though S3 was initialized. This could lead to a segmentation fault at exit

I've been struggling with this for a long time; please help!!!

Everything else looks fine. My machine has two GPUs; I disabled one and rebooted, but it still fails and I don't know what to do...
I also don't know how to set the memory per card: one is a 3060 6G, the other a 3090 24G. But this shouldn't be a memory problem: I set "max_seq_length": 300, which is very small, and with the 3060 disabled the 3090 uses only about 6G, yet the error persists...


PS F:\__4__code\gc_ziji\bloom_LLM> & "F:/__4__code/Program Files/Python/Python310/python.exe" f:/__4__code/gc_ziji/bloom_LLM/train_qlora.py

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

 and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin F:\__4__code\Program Files\Python\Python310\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll
CUDA SETUP: CUDA runtime path found: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\cudart64_110.dll
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary F:\__4__code\Program Files\Python\Python310\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll...
2023-06-23 21:07:34.832 | INFO     | __main__:init_components:100 - Initializing components...
第一张显卡名称:NVIDIA GeForce RTX 3090
memory footprint of model: 1.5424633026123047 GB
trainable params: 78,643,200 || all params: 1,377,431,040 || trainable%: 5.709411049717596
verify all params of the model
torch.float32 197783040 0.1435883425423606
torch.uint8 1179648000 0.8564116574576394
torch.float32 ['base_model.model.transformer.h.0.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.0.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.0.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.0.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.0.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.0.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.0.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.0.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.1.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.1.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.1.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.1.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.1.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.1.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.1.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.1.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.2.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.2.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.2.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.2.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.2.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.2.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.2.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.2.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.3.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.3.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.3.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.3.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.3.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.3.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.3.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.3.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.4.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.4.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.4.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.4.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.4.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.4.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.4.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.4.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.5.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.5.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.5.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.5.self_attention.dense.lora_B.default.weight', 
'base_model.model.transformer.h.5.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.5.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.5.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.5.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.6.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.6.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.6.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.6.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.6.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.6.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.6.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.6.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.7.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.7.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.7.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.7.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.7.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.7.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.7.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.7.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.8.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.8.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.8.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.8.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.8.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.8.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.8.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.8.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.9.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.9.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.9.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.9.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.9.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.9.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.9.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.9.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.10.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.10.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.10.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.10.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.10.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.10.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.10.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.10.mlp.dense_4h_to_h.lora_B.default.weight', 
'base_model.model.transformer.h.11.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.11.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.11.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.11.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.11.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.11.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.11.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.11.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.12.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.12.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.12.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.12.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.12.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.12.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.12.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.12.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.13.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.13.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.13.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.13.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.13.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.13.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.13.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.13.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.14.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.14.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.14.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.14.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.14.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.14.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.14.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.14.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.15.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.15.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.15.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.15.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.15.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.15.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.15.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.15.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.16.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.16.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.16.self_attention.dense.lora_A.default.weight', 
'base_model.model.transformer.h.16.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.16.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.16.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.16.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.16.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.17.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.17.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.17.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.17.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.17.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.17.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.17.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.17.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.18.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.18.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.18.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.18.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.18.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.18.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.18.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.18.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.19.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.19.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.19.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.19.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.19.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.19.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.19.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.19.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.20.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.20.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.20.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.20.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.20.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.20.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.20.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.20.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.21.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.21.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.21.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.21.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.21.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.21.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.21.mlp.dense_4h_to_h.lora_A.default.weight', 
'base_model.model.transformer.h.21.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.22.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.22.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.22.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.22.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.22.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.22.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.22.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.22.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.23.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.23.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.23.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.23.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.23.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.23.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.23.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.23.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.24.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.24.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.24.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.24.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.24.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.24.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.24.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.24.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.25.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.25.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.25.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.25.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.25.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.25.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.25.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.25.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.26.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.26.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.26.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.26.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.26.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.26.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.26.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.26.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.27.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.27.self_attention.query_key_value.lora_B.default.weight', 
'base_model.model.transformer.h.27.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.27.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.27.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.27.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.27.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.27.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.28.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.28.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.28.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.28.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.28.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.28.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.28.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.28.mlp.dense_4h_to_h.lora_B.default.weight', 'base_model.model.transformer.h.29.self_attention.query_key_value.lora_A.default.weight', 'base_model.model.transformer.h.29.self_attention.query_key_value.lora_B.default.weight', 'base_model.model.transformer.h.29.self_attention.dense.lora_A.default.weight', 'base_model.model.transformer.h.29.self_attention.dense.lora_B.default.weight', 'base_model.model.transformer.h.29.mlp.dense_h_to_4h.lora_A.default.weight', 'base_model.model.transformer.h.29.mlp.dense_h_to_4h.lora_B.default.weight', 'base_model.model.transformer.h.29.mlp.dense_4h_to_h.lora_A.default.weight', 'base_model.model.transformer.h.29.mlp.dense_4h_to_h.lora_B.default.weight'] 

verify trainable params the model
torch.float32 78643200 1.0
torch.float32 78643200
2023-06-23 21:07:50.232 | INFO     | component.dataset:__init__:14 - Loading data: ./data/data.jsonl
2023-06-23 21:07:50.234 | INFO     | component.dataset:__init__:17 - there are 18 data in dataset
2023-06-23 21:07:50.244 | INFO     | __main__:main:196 - *** starting training ***
  0%|                                                                                                 | 0/1 [00:00<?, ?it/s]Error invalid device ordinal at line 359 in file D:\a\bitsandbytes-windows-webui\bitsandbytes-windows-webui\csrc\pythonInterface.c
C:\arrow\cpp\src\arrow\filesystem\s3fs.cc:2598:  arrow::fs::FinalizeS3 was not called even though S3 was initialized.  This could lead to a segmentation fault at exit

QLoRA fine-tuning does not seem to support chatglm2-6b yet

ValueError: Unrecognized configuration class <class 'transformers_modules.THUDM.chatglm2-6b.b9c9fe65e2b705d06b900d73b9dc6c9d1c862382.configuration_chatglm.ChatGLMConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 566) of binary: /root/miniconda3/envs/glm/bin/python

Hoping support can be added.

RuntimeError: CUDA Setup failed despite GPU being available. Error with bitsandbytes 0.39.0.

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

bin /workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/libbitsandbytes_cuda110.so
False
/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/cuda_setup/main.py:149: UserWarning: /workdir/common/miniconda3/envs/tmp_py38 did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/cuda_setup/main.py:149: UserWarning: /workdir/common/miniconda3/envs/tmp_py38/lib/ did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('443'), PosixPath('tcp'), PosixPath('//10.131.13.1')}
warn(msg)
/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('default'), PosixPath('/mnt/dolphinfs/hdd_pool/docker/share,default'), PosixPath('/mnt/dolphinfs/hdd_pool/docker/user/hadoop-health-alg,default'), PosixPath('/mnt/dolphinfs/ssd_pool/docker/share,default')}
warn(msg)
/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('-Djava.security.krb5.conf=/opt/meituan/hadoop/etc/hadoop/krb5.conf')}
warn(msg)
/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('http'), PosixPath('//config.hulk.vip.sankuai.com')}
warn(msg)
/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/contrib/capacity-scheduler/*.jar')}
warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so'), PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0')}.. We'll flip a coin and try one of these, in order to fail forward.
Either way, this might cause trouble in the future:
If you get CUDA error: invalid device function errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 110
CUDA SETUP: Required library version not found: libbitsandbytes_cuda110.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...

================================================ERROR=====================================
CUDA SETUP: CUDA detection failed! Possible reasons:

  1. CUDA driver not installed
  2. CUDA not installed
  3. You have multiple conflicting CUDA libraries
  4. Required library not pre-compiled for this bitsandbytes release!
    CUDA SETUP: If you compiled from source, try again with make CUDA_VERSION=DETECTED_CUDA_VERSION for example, make CUDA_VERSION=113.
    CUDA SETUP: The CUDA version for the compile might depend on your conda install. Inspect CUDA version via conda list | grep cuda.
    ================================================================================

CUDA SETUP: Something unexpected happened. Please compile from source:
git clone git@github.com:TimDettmers/bitsandbytes.git
cd bitsandbytes
CUDA_VERSION=110 make cuda110
python setup.py install
CUDA SETUP: Setup Failed!
Traceback (most recent call last):
File "train_qlora.py", line 2, in
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
File "/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/peft/init.py", line 22, in
from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_peft_config, get_peft_model
File "/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/peft/mapping.py", line 16, in
from .peft_model import (
File "/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/peft/peft_model.py", line 34, in
from .tuners import (
File "/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/peft/tuners/init.py", line 21, in
from .lora import LoraConfig, LoraModel
File "/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/peft/tuners/lora.py", line 41, in
import bitsandbytes as bnb
File "/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/init.py", line 6, in
from . import cuda_setup, utils, research
File "/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/research/init.py", line 1, in
from . import nn
File "/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/research/nn/init.py", line 1, in
from .modules import LinearFP8Mixed, LinearFP8Global
File "/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/research/nn/modules.py", line 8, in
from bitsandbytes.optim import GlobalOptimManager
File "/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/optim/init.py", line 6, in
from bitsandbytes.cextension import COMPILED_WITH_CUDA
File "/workdir/common/miniconda3/envs/tmp_py38/lib/python3.8/site-packages/bitsandbytes-0.39.0-py3.8.egg/bitsandbytes/cextension.py", line 20, in
raise RuntimeError('''
RuntimeError:
CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

Thanks for sharing; thoughts after trying it

I tried it out. It relies heavily on the prompt, performs just okay on some subtasks, and there is a clear gap to glm6b. The upside is that it's very small and uses only 7 GB of GPU memory. If a model at this parameter scale could match glm6b, it would be extremely practical.

2b6 model file corrupted

Encountered 1 file(s) that may not have been copied correctly on Windows:
pytorch_model.bin

Reciting classical poetry

The trained model cannot complete instructions like "recite 静夜思".

Request for a Docker environment

Hello, and thanks for open-sourcing this excellent project. Could you share a Docker environment? Installing directly from requirements.txt ran into too many problems (:

How can QLoRA fine-tuning run on a V100?

The README mentions fine-tuning a 33B model on a single V100, but doesn't the V100 lack support for accelerated int4 training?

TypeError: GradientAccumulationPlugin.__init__() got an unexpected keyword argument 'sync_with_dataloader'

/data/firefly/0625/Firefly/train_qlora.py:196 in <module> │
│ │
│ 193 │
│ 194 │
│ 195 if __name__ == "__main__": │
│ ❱ 196 │ main() │
│ 197 │
│ 198 │
│ 199 │
│ │
│ /data/firefly/0625/Firefly/train_qlora.py:181 in main │
│ │
│ 178 │ # 进行一些配置和检查 │
│ 179 │ args, training_args = setup_everything() │
│ 180 │ # 加载各种组件 │
│ ❱ 181 │ trainer = init_components(args, training_args) │
│ 182 │ # 开始训练 │
│ 183 │ logger.info("*** starting training ***") │
│ 184 │ train_result = trainer.train() │
│ │
│ /data/firefly/0625/Firefly/train_qlora.py:166 in init_components │
│ │
│ 163 │ data_collator = SFTDataCollator(tokenizer, args.max_seq_length) │
│ 164 │ │
│ 165 │ # 初始化Trainer │
│ ❱ 166 │ trainer = LoRATrainer( │
│ 167 │ │ model=model, │
│ 168 │ │ args=training_args, │
│ 169 │ │ train_dataset=train_dataset, │
│ │
│ /data/firefly/0625/Firefly/component/trainer.py:50 in __init__
│ │
│ 47 │ │ │ preprocess_logits_for_metrics: Callable[[torch.Tensor, torch.Tensor], torch. │
│ 48 │ │ │ compute_loss=None, │
│ 49 │ ): │
│ ❱ 50 │ │ super(Trainer, self).__init__( │
│ 51 │ │ │ model=model, │
│ 52 │ │ │ args=args, │
│ 53 │ │ │ data_collator=data_collator, │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:335 in __init__
│ │
│ 332 │ │ self.deepspeed = None │
│ 333 │ │ self.is_in_train = False │
│ 334 │ │ │
│ ❱ 335 │ │ self.create_accelerator_and_postprocess() │
│ 336 │ │ │
│ 337 │ │ # memory metrics - must set up as early as possible │
│ 338 │ │ self._memory_tracker = TrainerMemoryTracker(self.args.skip_memory_metrics) │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:3830 in │
│ create_accelerator_and_postprocess │
│ │
│ 3827 │ │ grad_acc_kwargs = {"num_steps": self.args.gradient_accumulation_steps} │
│ 3828 │ │ if version.parse(accelerate_version) > version.parse("0.20.3"): │
│ 3829 │ │ │ grad_acc_kwargs["sync_with_dataloader"] = False │
│ ❱ 3830 │ │ gradient_accumulation_plugin = GradientAccumulationPlugin(**grad_acc_kwargs) │
│ 3831 │ │ │
│ 3832 │ │ # create accelerator object │
│ 3833 │ │ self.accelerator = Accelerator( │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: GradientAccumulationPlugin.__init__() got an unexpected keyword argument 'sync_with_dataloader'

命令:torchrun --nproc_per_node=2 train_qlora.py --train_args_file train_args/bloom-sft-qlora.json

Could someone explain why this error occurs?

Will there be int4/int8 quantized or pruned versions?

As the title says.
After using it for a while, I found it far more impressive than peers in Chinese settings, and the footprint is tiny! It runs on my own machine, but on the server it blows past the memory limit and gets killed. An int4 or int8 quantized or pruned version would be amazing, even if quality dropped a bit.

Suggestion

The project is quite good. I'd suggest making an introduction video on Bilibili covering usage, applying it to your own scenarios, and guidance on industrial deployment. Depending on the situation, I could also help you promote it.

config.json missing from the Hugging Face repository

OSError: YeungNLP/firefly-bloom-7b1-qlora-sft does not appear to have a file named config.json. Checkout 
'https://huggingface.co/YeungNLP/firefly-bloom-7b1-qlora-sft/main' for available files.

The 7b1 repository seems to lack a config.json; in principle, using the original bloom-7b1 config should also work, right?

Is there a way to fine-tune with LoRA?

As the ReadMe says, the results are sometimes off. Wouldn't integrating LoRA directly, so end users can fine-tune conveniently, make quality easier to control?
For example, if I want Firefly to write articles, could I conveniently fine-tune it with LoRA on the content I usually read?

Multi-GPU runs fail while single-GPU works; is multi-GPU unsupported?

Training with multiple GPUs fails while a single GPU works fine. Is multi-GPU not supported?

The multi-turn chat example code from the page fails when run directly with the Baichuan-7B model

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
device = 'cuda'
model_name = 'baichuan-inc/baichuan-7B'
adapter_name = 'YeungNLP/firefly-baichuan-7b1-qlora-sft'
max_new_tokens = 500
top_p = 0.9
temperature = 0.35
repetition_penalty = 1.0
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
model = PeftModel.from_pretrained(model, adapter_name)
model.eval()
model = model.to(device)
# 记录所有历史记录
history_token_ids = tokenizer('<s>', return_tensors="pt").input_ids
# 输入模型的最大长度
history_max_len = 1000
user_input = input('User:')
while True:
    user_input = '{}</s>'.format(user_input)
    user_input_ids = tokenizer(user_input, return_tensors="pt").input_ids
    history_token_ids = torch.concat((history_token_ids, user_input_ids), dim=1)
    model_input_ids = history_token_ids[:, -history_max_len:].to(device)
    outputs = model.generate(
        input_ids=model_input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p=top_p,
        temperature=temperature, repetition_penalty=repetition_penalty, eos_token_id=tokenizer.eos_token_id
    )
    model_input_ids_len = model_input_ids.size(1)
    response_ids = outputs[:, model_input_ids_len:]
    history_token_ids = torch.concat((history_token_ids, response_ids.cpu()), dim=1)
    response = tokenizer.batch_decode(response_ids)
    print("Firefly:" + response[0].strip().replace('</s>', ""))
    user_input = input('User:')


RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!

torch.distributed.elastic.multiprocessing.errors.ChildFailedError: how can this error be solved?

CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 116
CUDA SETUP: Loading binary /opt/conda/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
2023-06-21 07:39:30.573 | INFO | __main__:init_components:100 - Initializing components...
2023-06-21 07:39:30.573 | INFO | __main__:init_components:100 - Initializing components...
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /data/firefly/Firefly/train_qlora.py:196 in <module> │
│ │
│ 193 │
│ 194 │
│ 195 if __name__ == "__main__": │
│ ❱ 196 │ main() │
│ 197 │
│ 198 │
│ 199 │
│ │
│ /data/firefly/Firefly/train_qlora.py:181 in main │
│ │
│ 178 │ # 进行一些配置和检查 │
│ 179 │ args, training_args = setup_everything() │
│ 180 │ # 加载各种组件 │
│ ❱ 181 │ trainer = init_components(args, training_args) │
│ 182 │ # 开始训练 │
│ 183 │ logger.info("*** starting training ***") │
│ 184 │ train_result = trainer.train() │
│ │
│ /data/firefly/Firefly/train_qlora.py:111 in init_components │
│ │
│ 108 │ │ local_rank = int(os.environ.get('LOCAL_RANK', '0')) │
│ 109 │ │ device_map = {'': local_rank} │
│ 110 │ # 加载tokenzier │
│ ❱ 111 │ tokenizer = AutoTokenizer.from_pretrained( │
│ 112 │ │ args.model_name_or_path, │
│ 113 │ │ trust_remote_code=True, │
│ 114 │ ) │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:702 in │
│ from_pretrained │
│ │
│ 699 │ │ │ │ raise ValueError( │
│ 700 │ │ │ │ │ f"Tokenizer class {tokenizer_class_candidate} does not exist or is n │
│ 701 │ │ │ │ ) │
│ ❱ 702 │ │ │ return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *input │
│ 703 │ │ │
│ 704 │ │ # Otherwise we have to be creative. │
│ 705 │ │ # if model is an encoder decoder, the encoder tokenizer class is used by default │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1811 in │
│ from_pretrained │
│ │
│ 1808 │ │ │ else: │
│ 1809 │ │ │ │ logger.info(f"loading file {file_path} from cache at {resolved_vocab_fil │
│ 1810 │ │ │
│ ❱ 1811 │ │ return cls._from_pretrained( │
│ 1812 │ │ │ resolved_vocab_files, │
│ 1813 │ │ │ pretrained_model_name_or_path, │
│ 1814 │ │ │ init_configuration, │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1965 in │
│ _from_pretrained │
│ │
│ 1962 │ │ │
│ 1963 │ │ # Instantiate tokenizer. │
│ 1964 │ │ try: │
│ ❱ 1965 │ │ │ tokenizer = cls(*init_inputs, *init_kwargs) │
│ 1966 │ │ except OSError: │
│ 1967 │ │ │ raise OSError( │
│ 1968 │ │ │ │ "Unable to load vocabulary from file. " │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/models/bloom/tokenization_bloom_fast.py:121 │
│ in init
│ │
│ 118 │ │ clean_up_tokenization_spaces=False, │
│ 119 │ │ kwargs, │
│ 120 │ ): │
│ ❱ 121 │ │ super().init( │
│ 122 │ │ │ vocab_file, │
│ 123 │ │ │ merges_file, │
│ 124 │ │ │ tokenizer_file=tokenizer_file, │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py:111 in init │
│ │
│ 108 │ │ │ fast_tokenizer = copy.deepcopy(tokenizer_object) │
│ 109 │ │ elif fast_tokenizer_file is not None and not from_slow: │
│ 110 │ │ │ # We have a serialization from tokenizers which let us directly build the ba │
│ ❱ 111 │ │ │ fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file) │
│ 112 │ │ elif slow_tokenizer is not None: │
│ 113 │ │ │ # We need to convert a slow tokenizer to build the backend │
│ 114 │ │ │ fast_tokenizer = convert_slow_tokenizer(slow_tokenizer) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
Exception: expected value at line 1 column 1
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /data/firefly/Firefly/train_qlora.py:196 in │
│ │
│ 193 │
│ 194 │
│ 195 if name == "main": │
│ ❱ 196 │ main() │
│ 197 │
│ 198 │
│ 199 │
│ │
│ /data/firefly/Firefly/train_qlora.py:181 in main │
│ │
│ 178 │ # 进行一些配置和检查 │
│ 179 │ args, training_args = setup_everything() │
│ 180 │ # 加载各种组件 │
│ ❱ 181 │ trainer = init_components(args, training_args) │
│ 182 │ # 开始训练 │
│ 183 │ logger.info("
starting training ***") │
│ 184 │ train_result = trainer.train() │
│ │
│ /data/firefly/Firefly/train_qlora.py:111 in init_components │
│ │
│ 108 │ │ local_rank = int(os.environ.get('LOCAL_RANK', '0')) │
│ 109 │ │ device_map = {'': local_rank} │
│ 110 │ # 加载tokenzier │
│ ❱ 111 │ tokenizer = AutoTokenizer.from_pretrained( │
│ 112 │ │ args.model_name_or_path, │
│ 113 │ │ trust_remote_code=True, │
│ 114 │ ) │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:702 in │
│ from_pretrained │
│ │
│ 699 │ │ │ │ raise ValueError( │
│ 700 │ │ │ │ │ f"Tokenizer class {tokenizer_class_candidate} does not exist or is n │
│ 701 │ │ │ │ ) │
│ ❱ 702 │ │ │ return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *input │
│ 703 │ │ │
│ 704 │ │ # Otherwise we have to be creative. │
│ 705 │ │ # if model is an encoder decoder, the encoder tokenizer class is used by default │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1811 in │
│ from_pretrained │
│ │
│ 1808 │ │ │ else: │
│ 1809 │ │ │ │ logger.info(f"loading file {file_path} from cache at {resolved_vocab_fil │
│ 1810 │ │ │
│ ❱ 1811 │ │ return cls._from_pretrained( │
│ 1812 │ │ │ resolved_vocab_files, │
│ 1813 │ │ │ pretrained_model_name_or_path, │
│ 1814 │ │ │ init_configuration, │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1965 in │
│ _from_pretrained │
│ │
│ 1962 │ │ │
│ 1963 │ │ # Instantiate tokenizer. │
│ 1964 │ │ try: │
│ ❱ 1965 │ │ │ tokenizer = cls(*init_inputs, **init_kwargs) │
│ 1966 │ │ except OSError: │
│ 1967 │ │ │ raise OSError( │
│ 1968 │ │ │ │ "Unable to load vocabulary from file. " │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/models/bloom/tokenization_bloom_fast.py:121 │
│ in init
│ │
│ 118 │ │ clean_up_tokenization_spaces=False, │
│ 119 │ │ **kwargs, │
│ 120 │ ): │
│ ❱ 121 │ │ super().init( │
│ 122 │ │ │ vocab_file, │
│ 123 │ │ │ merges_file, │
│ 124 │ │ │ tokenizer_file=tokenizer_file, │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py:111 in init
│ │
│ 108 │ │ │ fast_tokenizer = copy.deepcopy(tokenizer_object) │
│ 109 │ │ elif fast_tokenizer_file is not None and not from_slow: │
│ 110 │ │ │ # We have a serialization from tokenizers which let us directly build the ba │
│ ❱ 111 │ │ │ fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file) │
│ 112 │ │ elif slow_tokenizer is not None: │
│ 113 │ │ │ # We need to convert a slow tokenizer to build the backend │
│ 114 │ │ │ fast_tokenizer = convert_slow_tokenizer(slow_tokenizer) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
Exception: expected value at line 1 column 1
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 24024) of binary: /opt/conda/bin/python
Traceback (most recent call last):
File "/opt/conda/bin/torchrun", line 33, in
sys.exit(load_entry_point('torch==1.13.1', 'console_scripts', 'torchrun')())
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/init.py", line 346, in wrapper
return f(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/run.py", line 762, in main
run(args)
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/opt/conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

train_qlora.py FAILED

Failures:
[1]:
time : 2023-06-21_07:39:33
host : dbcloud
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 24025)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
time : 2023-06-21_07:39:33
host : dbcloud
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 24024)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Generated answers contain repetition

What can I do to avoid repetition in generated answers?
top_p = 0.7
temperature = 0.95
repetition_penalty = 1.0
After fine-tuning on some document data, parts of the generated answers come out repeated.

question : Suppose you are a job candidate; please answer the interviewer's question: describe a case in which you achieved a goal through teamwork.
answer : During my undergraduate years I served as class monitor. Teamwork is essential in class management; as monitor, I organized the class committee and, through it, guided my classmates and divided the work among them; in the division of work, the committee managed the classmates effectively; in that management, the committee guided the classmates and divided the work among them... (the same clauses repeat, increasingly garbled, until the output is cut off)
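With repetition_penalty left at 1.0 the penalty is effectively off. Below is a sketch of generation settings that often curb this kind of looping, reusing the names from the inference snippet earlier in this thread; these are standard Hugging Face generate() parameters, and the exact values are starting points to tune, not project recommendations:

outputs = model.generate(
    input_ids=model_input_ids,
    max_new_tokens=max_new_tokens,
    do_sample=True,
    top_p=top_p,
    temperature=temperature,
    repetition_penalty=1.15,   # >1.0 penalizes tokens that already appeared
    no_repeat_ngram_size=4,    # hard-blocks any repeated 4-gram
    eos_token_id=tokenizer.eos_token_id,
)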

QLoRA on a V100 reports OOM

I'm running the baichuan-7b model with the default parameters in train_args/sft-qlora.json, only the model path and data path changed. Both 1-GPU and 8-GPU V100 runs report OOM:

return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 250.00 MiB (GPU 1; 31.75 GiB total capacity; 30.35 GiB already allocated; 209.75 MiB free; 30.55 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
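The error text itself points at the allocator knob: with 30.35 GiB already allocated on a 31.75 GiB card, fragmentation can make even a 250 MiB request fail. A minimal thing to try (a generic PyTorch setting, not a confirmed fix for this repo):

import os
# Must be set before the first CUDA allocation; caps allocator block size
# to reduce fragmentation, as the OOM message itself recommends.
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:128'

Beyond that, lowering the sequence length or the per-device batch size in train_args/sft-qlora.json is the usual way to bring peak memory down.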

train_qlora.py errors out with: arrow::fs::FinalizeS3 was not called even though S3 was initialized.

Full-parameter training currently works fine; the error only appears with QLoRA. The complete error output:
verify trainable params the model
torch.float32 143130624 1.0
torch.float32 143130624
2023-07-05 23:33:31.639 | INFO | component.dataset:__init__:14 - Loading data: ./data/dummy_data.jsonl
2023-07-05 23:33:31.645 | INFO | component.dataset:__init__:17 - there are 33 data in dataset
2023-07-05 23:33:31.652 | INFO | __main__:main:183 - *** starting training ***
0%| | 0/2 [00:00<?, ?it/s]use_cache=True is incompatible with gradient checkpointing. Setting use_cache=False...
Error invalid device ordinal at line 359 in file /home/tim/git/bitsandbytes/csrc/pythonInterface.c
/arrow/cpp/src/arrow/filesystem/s3fs.cc:2598: arrow::fs::FinalizeS3 was not called even though S3 was initialized. This could lead to a segmentation fault at exit

None of the solutions I can find for this error are clear, and I really can't work it out, so I'm opening an issue to ask for advice.
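For what it's worth, "Error invalid device ordinal" from bitsandbytes generally means a process asked for a GPU index that does not exist, which can happen when the launcher starts more workers than there are visible GPUs (the traceback above shows the script maps each worker to device_map={'': local_rank}). A first sanity check on a single-GPU WSL box (my assumption, not a confirmed diagnosis):

import torch
print(torch.cuda.device_count())   # should print 1 on a single-4090 machine
# If so, launch exactly one worker so LOCAL_RANK stays 0, e.g.:
#   CUDA_VISIBLE_DEVICES=0 torchrun --nproc_per_node=1 train_qlora.py ...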

Model: YeungNLP/firefly-baichuan-7b-qlora-sft-merge
GPU: RTX 4090
OS: Ubuntu 22.04.2 LTS under WSL on Windows
Training data unchanged: still ./data/dummy_data.jsonl

The environment packages have been updated per requirements.txt; the full list is below:
Package Version


absl-py 1.4.0
accelerate 0.21.0.dev0
aiofiles 23.1.0
aiohttp 3.8.4
aiosignal 1.3.1
altair 5.0.0
anyio 3.6.2
async-timeout 4.0.2
attrs 23.1.0
bitsandbytes 0.39.0
blinker 1.4
cachetools 5.3.1
certifi 2023.5.7
charset-normalizer 3.1.0
click 8.1.3
cmake 3.26.3
command-not-found 0.3
contourpy 1.0.7
cpm-kernels 1.0.11
cryptography 3.4.8
cycler 0.11.0
datasets 2.12.0
dbus-python 1.2.18
deepspeed 0.9.5
dill 0.3.6
distro 1.7.0
distro-info 1.1build1
evaluate 0.4.0
fastapi 0.95.1
ffmpy 0.3.0
filelock 3.12.0
fonttools 4.39.4
frozenlist 1.3.3
fsspec 2023.5.0
google-auth 2.21.0
google-auth-oauthlib 1.0.0
gradio 3.30.0
gradio_client 0.2.4
grpcio 1.56.0
h11 0.14.0
hjson 3.1.0
httpcore 0.17.0
httplib2 0.20.2
httpx 0.24.0
huggingface-hub 0.14.1
icetk 0.0.7
idna 3.4
importlib-metadata 4.6.4
jeepney 0.7.1
Jinja2 3.1.2
joblib 1.3.1
jsonschema 4.17.3
keyring 23.5.0
kiwisolver 1.4.4
latex2mathml 3.75.5
launchpadlib 1.10.16
lazr.restfulclient 0.14.4
lazr.uri 1.0.6
linkify-it-py 2.0.2
lit 16.0.3
loguru 0.7.0
Markdown 3.4.3
markdown-it-py 2.2.0
MarkupSafe 2.1.2
matplotlib 3.7.1
mdit-py-plugins 0.3.3
mdtex2html 1.2.0
mdurl 0.1.2
more-itertools 8.10.0
mpi4py 3.1.4
mpmath 1.3.0
multidict 6.0.4
multiprocess 0.70.14
netifaces 0.11.0
networkx 3.1
ninja 1.11.1
numpy 1.21.4
nvidia-cublas-cu11 11.10.3.66
nvidia-cuda-cupti-cu11 11.7.101
nvidia-cuda-nvrtc-cu11 11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11 8.5.0.96
nvidia-cufft-cu11 10.9.0.58
nvidia-curand-cu11 10.2.10.91
nvidia-cusolver-cu11 11.4.0.1
nvidia-cusparse-cu11 11.7.4.91
nvidia-nccl-cu11 2.14.3
nvidia-nvtx-cu11 11.7.91
oauthlib 3.2.0
orjson 3.8.12
packaging 23.1
pandas 1.2.5
peft 0.4.0.dev0
Pillow 9.5.0
pip 23.1.2
protobuf 4.23.3
psutil 5.9.5
py-cpuinfo 9.0.0
pyarrow 12.0.0
pyasn1 0.5.0
pyasn1-modules 0.3.0
pydantic 1.10.7
pydub 0.25.1
Pygments 2.15.1
PyGObject 3.42.1
PyJWT 2.3.0
pyparsing 2.4.7
pyrsistent 0.19.3
python-apt 2.4.0+ubuntu1
python-dateutil 2.8.2
python-multipart 0.0.6
pytz 2023.3
PyYAML 5.4.1
regex 2023.5.5
requests 2.30.0
requests-oauthlib 1.3.1
responses 0.18.0
rsa 4.9
safetensors 0.3.1
scikit-learn 1.3.0
scipy 1.10.1
screen-resolution-extra 0.0.0
SecretStorage 3.3.1
semantic-version 2.10.0
sentencepiece 0.1.99
setuptools 68.0.0
six 1.16.0
sniffio 1.3.0
starlette 0.26.1
sympy 1.11.1
systemd-python 234
tensorboard 2.13.0
tensorboard-data-server 0.7.1
tensorboardX 2.6.1
threadpoolctl 3.1.0
tokenizers 0.13.3
toolz 0.12.0
torch 1.13.0
torchsummary 1.5.1
torchvision 0.15.2
tqdm 4.62.3
transformers 4.31.0.dev0
triton 2.0.0
typing_extensions 4.5.0
tzdata 2023.3
ubuntu-advantage-tools 8001
uc-micro-py 1.0.2
ufw 0.36.1
unattended-upgrades 0.1
urllib3 1.26.16
uvicorn 0.22.0
wadllib 1.3.6
websockets 11.0.3
Werkzeug 2.3.6
wheel 0.37.1
xkit 0.0.0
xxhash 3.2.0
yarl 1.9.2
zipp 1.0.0
