Comments (3)
For newer versions of Flash Attention v2, the additional rotary position embedding ops depend on the Triton library. However, it appears there is an issue with Triton compiling the CUDA kernel. Unfortunately, the error messages from that compilation step are not included in the logs you provided; they should appear above the Python traceback.
As a temporary workaround, I recommend uninstalling Triton. This will cause the code to fall back to the non-Flash-Attention-v2 implementation.
To troubleshoot further, we will need your Triton version and the related logs.
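For reference, a minimal sketch of how to check which path you will get after the workaround (the uninstall itself is just `pip uninstall triton`); the printed messages are illustrative, and the exact fallback behavior depends on the Qwen modeling code shipped with the checkpoint:

```python
# Check whether Triton is importable. When it is absent, the rotary ops
# cannot use the Triton kernels, so the model should take the plain
# (non-Flash-Attention-v2) PyTorch path instead.
import importlib.util

if importlib.util.find_spec("triton") is None:
    print("triton not installed -> expecting the non-Flash-Attention-v2 fallback")
else:
    import triton
    # Include this version string when reporting the issue.
    print(f"triton {triton.__version__} installed; the Triton rotary kernels may be used")
```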
Hi, thanks for your prompt reply. I think I have figured out the problem: I am fine-tuning on a V100, and flash-attention does not currently support the V100. After uninstalling flash-attention, I can run finetune.py as normal.
I have also noticed that the V100 does not support training with BF16. Do you have a benchmark for FP16? (I only see comparisons between BF16, INT8, and INT4.) I am curious how much performance would degrade if I fine-tuned all parameters of the baseline Qwen-7B model in FP16, or whether in that case it would be preferable to fine-tune with LoRA only. (My dataset is on the order of 100K single-turn conversations.)
Thanks in advance for your help!!
bf16 and fp16 should have similar performance (in terms of speed) on devices where both are supported. Where accuracy is concerned, bf16 can make training more stable for larger models, but if both precisions can train the model successfully, the resulting models may not differ significantly.
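To make that concrete, here is a minimal sketch of a LoRA fine-tuning setup in FP16 for a V100, using Hugging Face transformers and peft. The hyperparameters and output path are illustrative assumptions rather than the official Qwen recipe, and the target module names assume Qwen's c_attn/c_proj/w1/w2 projection layers:

```python
# A minimal sketch (not the official Qwen recipe): LoRA fine-tuning in FP16,
# since the V100 lacks BF16 support. All hyperparameters are illustrative.
from transformers import TrainingArguments
from peft import LoraConfig, TaskType

training_args = TrainingArguments(
    output_dir="output_qwen_lora",   # hypothetical output directory
    fp16=True,                       # V100: use fp16; on Ampere+ GPUs, bf16=True is preferred
    bf16=False,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=1e-4,
    num_train_epochs=3,
)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=64,               # assumed rank; tune against your ~100K-conversation dataset
    lora_alpha=16,
    lora_dropout=0.05,
    # Assumed projection-layer names in Qwen's modeling code.
    target_modules=["c_attn", "c_proj", "w1", "w2"],
)
```

With a dataset of roughly 100K single-turn conversations, LoRA is generally the safer starting point on a single V100; full-parameter FP16 fine-tuning requires more care with loss scaling (FP16 overflows more readily than BF16) and substantially more memory.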
Related Issues (20)
- [BUG] Source-code error found in model.generate HOT 2
- [BUG] Qwen-14B-Chat produces no output for long input text HOT 5
- [BUG] The Function Calling example is broken; the latest openai SDK reports the API as deprecated at runtime HOT 2
- Where can I find the Jinja template for using Qwen with vLLM? HOT 1
- [BUG] Running eval_plugin from eval for evaluation: one agent fails to pull a package from huggingface_hub HOT 1
- Can Qwen be deployed and run for inference on a Qualcomm NPU? HOT 1
- After fine-tuning, deployments via llama_factory's vLLM and Qwen's official vLLM return different results HOT 2
- 💡 [REQUEST] How to set the output text length when calling qwen:14B via ollama HOT 1
- [BUG] Calling a Qwen model via fastchat + vLLM + OpenAI API: does the data not need preprocessing first? HOT 1
- Very slow inference after local deployment HOT 4
- When will 2.5 be open-sourced? HOT 1
- [BUG] finetune.py: AttributeError in get_peft_model ('NoneType' object has no attribute '__dict__') HOT 2
- Why does Qwen-14B give inconsistent outputs for the same question without fine-tuning, even with temperature set to 0? HOT 2
- [BUG] torch.cuda.OutOfMemoryError: CUDA out of memory HOT 1
- Qwen pre-training prints some output and then stops; unsure whether training completed HOT 2
- [BUG] Error when converting Qwen1.5-14B HOT 1
- How to organize training data for multi-turn conversations HOT 1
- [BUG] Questionable embedding feature shape extracted from Qwen-7B-Chat HOT 2
- [BUG] Command-line argument parsing error
- During tool calling, the model hallucinates parameters the user never provided HOT 1