Comments (4)
Downgrading vllm to version 0.4.2 fixed the problem in my environment. Thank you.
from llama-factory.
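If you take the downgrade route, a minimal sketch of a startup guard that fails fast when a newer vLLM is installed; the 0.4.2 bound comes only from this thread and is an assumption, not an official requirement:

# Sketch: refuse to start if the installed vLLM is newer than the last
# version reported to work in this thread (0.4.2).
import vllm
from packaging.version import Version

if Version(vllm.__version__) > Version("0.4.2"):
    raise RuntimeError(
        f"vLLM {vllm.__version__} changed AsyncLLMEngine.generate(); "
        "run `pip install vllm==0.4.2` or patch vllm_engine.py as described in this thread."
    )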
Same problem here. Is there any workaround?
from llama-factory.
I'm a beginner, but I've managed to get it working for now. My guess is that vLLM 0.4.3 changed the parameters of the generate function in async_llm_engine.py under the engine directory.
Tested with llama3-8B-Instruct.
# The signature in the new vLLM version:
async def generate(
    self,
    inputs: PromptInputs,
    sampling_params: SamplingParams,
    request_id: str,
    lora_request: Optional[LoRARequest] = None,
) -> AsyncIterator[RequestOutput]:
You can manually change the self.model.generate(...) call in vllm_engine.py under the chat directory to the following form:
result_generator = self.model.generate(
    # prompt=None,
    sampling_params=sampling_params,
    request_id=request_id,
    inputs=messages[-1]['content'],
    # prompt_token_ids=prompt_ids,
    lora_request=self.lora_request,
    # multi_modal_data=multi_modal_data,
)
That said, I'm a complete beginner and I'm not sure this change is really correct; it just lets me load the model and chat successfully for now, so it's probably best to wait for the author's fix.
Alternatively, you can try downgrading vLLM, but I didn't try that because I wanted to keep my environment stable.
from llama-factory.
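A note on the workaround above: passing messages[-1]['content'] sends only the raw text of the last message, so the templated prompt_ids are dropped. Assuming the new inputs parameter in vLLM 0.4.3 also accepts the TokensPrompt dict form ({'prompt_token_ids': ...}), a sketch that keeps the tokenized prompt could look like this; the variable names are taken from the snippet above, and the rest is an assumption, not the maintainers' fix:

# Sketch only: pass the already-templated token ids through the new `inputs`
# argument instead of the raw message text.
result_generator = self.model.generate(
    inputs={"prompt_token_ids": prompt_ids},
    sampling_params=sampling_params,
    request_id=request_id,
    lora_request=self.lora_request,
)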
Related Issues (20)
- During model eval, only loss and similar info is printed, but no acc? Help (is it an environment issue?) HOT 2
- Latest LLaMA-Factory repo forces the use of Torch 2.4, hence clashing with Unsloth/XFormers HOT 3
- webui Chat with Hugging Face always produces garbled output
- How can I use my own reward function instead of a reward model? HOT 1
- Does llamafactory currently support model evaluation on Ascend 910? HOT 1
- Gemma 2 + unsloth + fa2 full SFT RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
- What to do when the vocabulary size is inconsistent after fine-tuning
- Where are the quantization config parameters reflected in the model produced by QLoRA fine-tuning? HOT 1
- Llama-factory usage error
- How to use beam search with the OpenAI-style deployment
- help on understanding the implementation of FSDP.
- Inference with a bitsandbytes QLoRA fine-tuned model
- Running tokenizer on dataset gradually slows down HOT 1
- model.generate parameters set in the YAML have no effect: I set do_sample: false, but the profiler shows it is still true; this only happens during mid-training eval, the final eval after training is normal
- Even with a random seed set, training loss still differs between runs with the same dataset and config HOT 2
- Error when running examples/train_lora/llama3_lora_predict.yaml on a fine-tuned GLM-4-9B-Chat
- OOM during full fine-tuning of qwen2-vl on two GPUs HOT 1
- Error when running multi-node multi-GPU
- ValueError: Template qwen2 does not exist. HOT 1
- Setting HF_DATASETS_CACHE with multiple GPUs raises an error HOT 2