Comments (5)
Please post your training parameters (learning rate, optimizer, and so on).
Also check that your training data is processed correctly: an output of [(,,)]
looks like the triple information was lost.
Calling bot.chat should not output [EOS]; did you make any other modifications?
Here is the complete fine-tuning code: finetune_IE_task.ipynb
from chatlm-mini-chinese.
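As a quick sanity check along these lines (a sketch with made-up sample strings, not the project's preprocessing code), you can parse the model's triple-style output and flag degenerate results like [(,,)]:

```python
import re

def parse_triples(text: str):
    """Parse '[(s,p,o),(s,p,o)]'-style model output into a list of 3-tuples."""
    return [tuple(field.strip() for field in m.group(1).split(','))
            for m in re.finditer(r'\(([^)]*)\)', text)]

def has_empty_triples(text: str) -> bool:
    """True when no triple was extracted or any field of a triple is blank."""
    triples = parse_triples(text)
    return not triples or any(not field for triple in triples for field in triple)

print(has_empty_triples('[(,,)]'))                 # True: the triple is blank
print(has_empty_triples('[(周杰伦,出生地,台湾)]'))  # False: a well-formed triple
```

Running this over the processed training targets (before any fine-tuning) would show immediately whether the triples were dropped during data preparation.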
Hello, the code I am running is identical to the main branch; I only changed the base path of the t5 model (google-t5/t5-base).
The parameters are as follows:
SFTconfig(
    max_seq_len=512,
    tokenizer_dir='/home/xxx/mycode/demo2/model_save/',
    sft_train_file='./data/my_train.json',
    batch_size=16,
    num_train_epochs=6,
    save_steps=3000,
    gradient_accumulation_steps=4,
    learning_rate=5e-05,
    logging_first_step=True,
    logging_steps=20,
    output_dir='./model_save/ie_task',
    warmup_steps=1000,
    fp16=True,
    seed=23333,
)
training_args = Seq2SeqTrainingArguments(
    output_dir=config.output_dir,
    per_device_train_batch_size=config.batch_size,
    auto_find_batch_size=True,  # guard against OOM
    gradient_accumulation_steps=config.gradient_accumulation_steps,
    learning_rate=config.learning_rate,
    logging_steps=config.logging_steps,
    num_train_epochs=config.num_train_epochs,
    optim="adafactor",
    report_to='tensorboard',
    log_level='info',
    save_steps=config.save_steps,
    save_total_limit=3,
    fp16=config.fp16,
    logging_first_step=config.logging_first_step,
    warmup_steps=config.warmup_steps,
    seed=config.seed,
    generation_config=generation_config,
)
I found it; I had changed one line of code myself:
def sft_train(config: SFTconfig) -> None:
    # step 1. load the tokenizer
    tokenizer = PreTrainedTokenizerFast.from_pretrained(config.tokenizer_dir)
    tokenizer.add_special_tokens({'pad_token': '[PAD]'})  # the line I added
    ...
Because running it earlier raised this error:
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`
It looks like it was caused by a package version mismatch.
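To rule out a dependency mismatch, a quick stdlib sketch can compare what is actually installed against requirements.txt (the package names below are examples; substitute the ones pinned by the project):

```python
from importlib import metadata

def installed_version(package: str) -> str:
    """Return the installed version of a distribution, or 'not installed'."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return 'not installed'

# example package names; compare the printed versions against requirements.txt
for pkg in ('transformers', 'tokenizers', 'torch'):
    print(pkg, installed_version(pkg))
```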
I recreated a virtual environment from requirements.txt and commented out the line tokenizer.add_special_tokens({'pad_token': '[PAD]'}); running it still raised the [PAD] error above, which is very strange...
Running trainer = sft_train(config) produced this error log:
log_error_pad.txt
If you use the project's tokenizer, the pad_token already exists and you do not need to add it yourself. If the dependencies really are the same, then the model files are most likely incomplete; try downloading them again. In mainland China you can download via modelscope. Remember to change tokenizer_dir to ./model_save, i.e. the tokenizer and the model live in the same folder.
from modelscope import snapshot_download
model_id = 'charent/ChatLM-mini-Chinese'
model_id = snapshot_download(model_id, cache_dir='./model_save')
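Since the suspicion here is an incomplete download, a small check can confirm the expected files exist and are non-empty (a sketch; the file names below follow a typical Hugging Face layout and may differ for this checkpoint):

```python
from pathlib import Path

def missing_or_empty(model_dir: str, expected: tuple) -> list:
    """List the expected files that are absent or zero bytes in model_dir."""
    root = Path(model_dir)
    return [name for name in expected
            if not (root / name).is_file() or (root / name).stat().st_size == 0]

# typical file names; adjust to what the checkpoint actually ships
problems = missing_or_empty('./model_save', ('config.json', 'tokenizer.json'))
print('missing or empty:', problems)
```

An empty list means those files at least exist with content; anything reported should be re-downloaded before retrying sft_train.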
Okay, got it. Thanks!