Comments (6)
I tried comparing the lengths of tokenized_datasets:
print(len(tokenized_datasets["train"]))
print(len(tokenized_datasets["valid"]))
For my own dataset, train has only 12 examples and valid only 1, even though I generated 6K question-answer pairs...
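The shrinkage is very likely the `length == context_length` filter inside `tokenize()`: with `return_overflowing_tokens`, each document is split into chunks, and only chunks of exactly `context_length` tokens are kept, so short QA pairs contribute nothing at all. A minimal dependency-free sketch of that arithmetic (toy token counts, not the real tokenizer):

```python
context_length = 512

def chunk_and_filter(token_counts):
    """Mimic return_overflowing_tokens plus the length == context_length filter:
    each document yields len // context_length full chunks; the shorter tail
    chunk (and any document under context_length tokens) is discarded."""
    kept = 0
    for n in token_counts:
        kept += n // context_length  # only exact-length chunks survive
    return kept

# 6000 short QA pairs of ~100 tokens each -> nothing survives
print(chunk_and_filter([100] * 6000))   # 0
# only long documents contribute: 1200 -> 2 chunks, 5000 -> 9 chunks
print(chunk_and_filter([1200, 5000]))   # 11
```

This matches the symptom: thousands of short raw examples collapse to a handful of tokenized ones.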
from zero_nlp.
- I have to say, this stuff is black magic. OK, just kidding.
- Back to the technical side: I load the data with the `datasets` package; you can look at that package's documentation.
from zero_nlp.
```python
from transformers import DataCollatorForLanguageModeling

def tokenize(element):
    # Tokenize each document and split it into chunks of at most
    # context_length tokens (return_overflowing_tokens keeps the overflow
    # as extra chunks; return_length reports each chunk's token count).
    outputs = tokenizer(
        element["content"],
        truncation=True,
        max_length=context_length,
        return_overflowing_tokens=True,
        return_length=True,
    )
    input_batch = []
    for length, input_ids in zip(outputs["length"], outputs["input_ids"]):
        # Keep only chunks that are exactly context_length tokens long;
        # every document's shorter tail chunk is dropped here.
        if length == context_length:
            input_batch.append(input_ids)
    return {"input_ids": input_batch}

tokenized_datasets = raw_datasets.map(
    tokenize, batched=True, remove_columns=raw_datasets["train"].column_names
)
tokenized_datasets

data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
```
The check in there,
if length == context_length:
    input_batch.append(input_ids)
discards every chunk shorter than 512 tokens. Could you check whether it can be changed to <=?
from zero_nlp.
I tried <= 512, and also
tokenizer(
    element["content"],
    padding=True,
    truncation=True,
    max_length=context_length,
)
and even deleted the length check entirely... still no effect.
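For what it's worth, one likely reason `padding=True` changed nothing: it pads to the longest sequence in the batch, not to `max_length`, so chunk lengths still differ from `context_length` and the `==` filter still drops them; `padding="max_length"` is the strategy that pads everything to exactly `max_length`. A dependency-free simulation of the two strategies (hypothetical helper, not the transformers API):

```python
context_length = 512

def padded_lengths(lengths, strategy, max_length):
    """Simulate final sequence lengths after tokenizer padding.

    "longest" mimics padding=True (pad to the longest in the batch);
    "max_length" mimics padding="max_length". Assumes truncation has
    already capped every length at max_length.
    """
    target = max(lengths) if strategy == "longest" else max_length
    return [target for _ in lengths]

print(padded_lengths([100, 300], "longest", context_length))     # [300, 300]
print(padded_lengths([100, 300], "max_length", context_length))  # [512, 512]
```

With `padding=True`, the batch above pads to 300 tokens, so `length == 512` still fails for every example.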
I printed the stats for the dataset the author provides:
print(len(raw_datasets["train"]))        # 10000
print(len(tokenized_datasets["train"]))  # 4753
print(len(tokenized_datasets["valid"]))  # 489
so examples get dropped there too, but it is far more pronounced with my own data: raw_datasets train has 4000+ examples, while tokenized_datasets train ends up with only 12 (facepalm).
from zero_nlp.
We changed if length == context_length: to if length <= context_length:; after checking, the dataset looked fine, but for some reason the trained model doesn't produce useful output. Were you able to train a model that actually works?
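One thing worth checking if `<=` was used: the batch then contains ragged sequences that must be padded, GPT-style tokenizers often have no pad token (a common workaround is `tokenizer.pad_token = tokenizer.eos_token`), and padded label positions need to be masked with -100 so the loss ignores them; otherwise the model partly trains on padding. This is a hypothesis, not a confirmed diagnosis. A dependency-free sketch of right-padding with label masking, using a hypothetical pad id:

```python
PAD_ID = 0       # hypothetical pad token id
IGNORE = -100    # label value ignored by the cross-entropy loss

def pad_batch(sequences):
    """Right-pad input_ids with PAD_ID; mask padded label positions with IGNORE."""
    max_len = max(len(s) for s in sequences)
    input_ids = [s + [PAD_ID] * (max_len - len(s)) for s in sequences]
    labels = [s + [IGNORE] * (max_len - len(s)) for s in sequences]
    return input_ids, labels

ids, labels = pad_batch([[5, 6, 7], [8]])
print(ids)     # [[5, 6, 7], [8, 0, 0]]
print(labels)  # [[5, 6, 7], [8, -100, -100]]
```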
from zero_nlp.
@yuanzhoulvpi2017 Could I ask why this needs if length == context_length?
input_batch = []
for length, input_ids in zip(outputs["length"], outputs["input_ids"]):
    if length == context_length:
        input_batch.append(input_ids)
return {"input_ids": input_batch}
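One plausible reason for the exact-length check: with `DataCollatorForLanguageModeling(tokenizer, mlm=False)` and no padding configured, every example must already have the same length to be stacked into a batch. The approach used in the Hugging Face causal-LM examples that keeps all the data is packing: concatenate all tokenized texts, then re-split into fixed-size blocks. A dependency-free sketch (tiny block size for illustration):

```python
context_length = 4  # tiny for illustration; 512 in the real setup

def group_texts(batches_of_ids):
    """Concatenate all token sequences, then split into equal fixed-size
    blocks; only the final partial block (not whole documents) is dropped."""
    concatenated = [tok for ids in batches_of_ids for tok in ids]
    total = (len(concatenated) // context_length) * context_length
    return [concatenated[i:i + context_length]
            for i in range(0, total, context_length)]

blocks = group_texts([[1, 2], [3, 4, 5], [6, 7, 8, 9, 10]])
print(blocks)  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```

With packing, short documents still contribute tokens instead of being filtered out wholesale.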
from zero_nlp.
Related Issues (20)
- ChatGLM2 LoRA fine-tuning, loading the LoRA weights: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [3072, 32, 1, 1], but got 3-dimensional input of size [1, 64, 4096] instead
- OOM when running chatglm2-6b-lora on four 3080 Ti cards
- Help: chatglm2 LoRA training error: RuntimeError: Expected is_sm80 to be true, but got false.
- Training error: ValueError: The current `device_map` had weights offloaded to the disk.
- Training fails
- Two 4090s, single machine multi-GPU: it seems to get slower and slower, slower than a single card
- Is there any deployment or usage documentation? Where can I find it?
- Can real-time fine-tuning be implemented by adding traditional RL?
- How to implement pure zeroth-order forward optimization over a few batches (just enough to show some improvement)?
- Can LoRA inference only take a single input? Is there a way to do batched inference?
- Help!! How do I set the epoch count for ChatGlm-v2-6b_Lora??
- Can multiple LoRAs be stacked and used together?
- How to configure multi-GPU for chatglm_v2_6b_lora? Couldn't find it
- Could you make a ChatGLM
- Could you make a ChatGLM tutorial?
- Segment Fault: what is the cause?
- Is chinese_llama still usable?
- Please do a chatglm3 version; inference keeps failing after fine-tuning
- internlm-sft single-machine multi-GPU fine-tuning: low GPU utilization
- Could you put out a tutorial