zjunlp / KnowPrompt
[WWW 2022] KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction
License: MIT License
Following your instructions, I ran the experiment on the SemEval dataset with bash scripts/semeval.sh, but the loss does not drop during training. What is wrong? I did not change anything. @njcx-ai
While reproducing results with this project (specifically, using roberta-large to reproduce the SemEval results in the 8-shot setting), I get Eval/best_f1 = 0.149, far below the paper's number. Also, at main.py line 214 (if not args.two_steps: trainer.test()) I get the error "No 'test_dataloader()' method defined to run 'Trainer.test'". Is this because the public repository is incomplete?
I tried several datasets, and all of them crash in the last two epochs with: FileNotFoundError: [Errno 2] No such file or directory: '/home/code/KnowPrompt/output/epoch=2-Eval/f1=0.906.ckpt'. Any pointers would be appreciated.
I see from the paper's implementation details that the authors' experiments used 8 3090 GPUs.
The code calculates the F1 score without considering "no_relation." Is there any background information regarding this calculation method?
Furthermore, running the experiments with the parameters from the paper on TACRED/V results in a score of 0. This could be because there are too many "no_relation" instances in these datasets. How can the results from the paper be replicated? (Perhaps the authors altered the class distribution in the training set or considered "no_relation" when calculating F1 scores.)
Thanks!
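For context, the usual TACRED convention is micro-F1 that ignores no_relation: a prediction only counts as a true positive when it matches a gold label that is itself a real relation. A minimal sketch of that convention (my reading of the standard scorer, not a copy of this repo's code):

```python
# Sketch of TACRED-style micro-F1 that excludes "no_relation".
# This mirrors the common evaluation convention; it is not the repo's scorer.
def f1_excluding_no_relation(preds, golds, na_label="no_relation"):
    correct = pred_pos = gold_pos = 0
    for p, g in zip(preds, golds):
        if p != na_label:
            pred_pos += 1          # predicted a real relation
        if g != na_label:
            gold_pos += 1          # gold has a real relation
        if p == g and p != na_label:
            correct += 1           # true positive on a real relation
    precision = correct / pred_pos if pred_pos else 0.0
    recall = correct / gold_pos if gold_pos else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

Under this convention a model that predicts no_relation everywhere scores 0, which may be related to the zero scores reported above.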
When I run bash scripts/retacred.sh, I get the following error:
Traceback (most recent call last):
  File "main.py", line 244, in <module>
    main()
  File "main.py", line 128, in main
    parser = _setup_parser()
  File "main.py", line 55, in _setup_parser
    litmodel_class = _import_class(f"lit_models.{temp_args.litmodel_class}")
  File "main.py", line 27, in _import_class
    class_ = getattr(module, class_name)
AttributeError: module 'lit_models' has no attribute 'TransformerLitModel'
scripts/retacred.sh: line 2: --model_name_or_path: command not found
scripts/retacred.sh: line 3: --accumulate_grad_batches: command not found
scripts/retacred.sh: line 4: --batch_size: command not found
scripts/retacred.sh: line 5: --data_dir: command not found
scripts/retacred.sh: line 6: --check_val_every_n_epoch: command not found
scripts/retacred.sh: line 7: --data_class: command not found
scripts/retacred.sh: line 8: --max_seq_length: command not found
scripts/retacred.sh: line 9: --model_class: command not found
scripts/retacred.sh: line 10: --t_lambda: command not found
scripts/retacred.sh: line 11: --wandb: command not found
scripts/retacred.sh: line 12: --litmodel_class: command not found
scripts/retacred.sh: line 13: --task_name: command not found
scripts/retacred.sh: line 14: --lr: command not found
What is causing this?
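A likely cause (an assumption, since the script contents are not shown here): the flag lines in scripts/retacred.sh are not joined to the python command with trailing backslashes, so the shell treats each --flag line as its own command, producing exactly these "command not found" messages. Every line except the last must end with a backslash; the sketch below uses echo instead of python main.py so it is self-contained:

```shell
# Backslash-newline continuations make all the flags part of ONE command.
# "echo" stands in for "python main.py" purely for illustration.
echo main.py \
    --model_name_or_path roberta-large \
    --batch_size 16 \
    --lr 3e-5
```

Also check that nothing follows a trailing backslash, and that the file has Unix line endings: a stray carriage return after the backslash breaks the continuation too.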
Hi, many thanks for providing the code. I have a question: why are there two configure_optimizers() methods in the base Lightning module?
Another question: I know the configure_optimizers function in BertLitModel is for two-stage training, but don't we need to check the optimizer_grouped_parameters in training_step() and then optimize them?
I had a question regarding the initialization of type words. According to the code:
if self.args.init_type_words:
    so_word = [a[0] for a in self.tokenizer(["[obj]","[sub]"], add_special_tokens=False)['input_ids']]
    meaning_word = [a[0] for a in self.tokenizer(["person","organization", "location", "date", "country"], add_special_tokens=False)['input_ids']]
The meaning words are initialized with certain entity types. While these are the probable entity types for the TACRED dataset, the same is not true for the SemEval dataset.
I wanted to know how this initialization affects the working of the algorithm on other datasets like SemEval. Should we change this initialization based on the dataset?
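To make the question concrete, here is a toy numpy stand-in (not the repo's code; the table and vectors are invented) for what initializing a virtual type word from meaning words amounts to. For SemEval one would presumably substitute meaning words matching its entity types (e.g. "product", "cause", "container"):

```python
import numpy as np

# Toy embedding table with random vectors; in the real model these would
# be rows of the pretrained word-embedding matrix.
rng = np.random.default_rng(0)
embed = {w: rng.normal(size=4) for w in ["person", "organization", "location"]}

def init_virtual_token(meaning_words, table):
    # Unweighted mean of the meaning-word embeddings, which is what the
    # quoted code appears to do for so_word / meaning_word.
    return np.mean([table[w] for w in meaning_words], axis=0)

sub_vec = init_virtual_token(["person", "organization"], embed)
```

The list of meaning words is just data here, so swapping in a dataset-appropriate list changes only the starting point of the virtual tokens, not the algorithm itself.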
Hello!
I am using my own annotated dataset, formatted like this:
{'token': ['地', '面', '状', '况', '不', '良', '导', '致', '位', '置', '偏', '移', '。'], 'h': {'name': '位置偏移', 'pos': [8, 12]}, 't': {'name': '地面状况不良', 'pos': [0, 6]}, 'relation': '因果关系'}
(I am not sure whether this format is correct.)
After running, I get this error:
  File "D:\re\knowprompt2\KnowPrompt\lit_models\transformer.py", line 210, in validation_step
    input_ids, attention_mask, labels, _ = batch
ValueError: too many values to unpack (expected 4)
I have not modified the code. I don't know whether the problem is my data format or something else.
Looking forward to your reply. Thank you!
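For what it's worth, that error just means the dataloader yields more fields per batch than validation_step unpacks; a minimal reproduction with invented field names:

```python
# validation_step does: input_ids, attention_mask, labels, _ = batch
# which requires EXACTLY 4 elements. If the data class yields 5 (e.g. an
# extra entity-position field -- the names below are made up), it fails:
batch = ("input_ids", "attention_mask", "token_type_ids", "labels", "so")

try:
    input_ids, attention_mask, labels, _ = batch
except ValueError as err:
    message = str(err)          # "too many values to unpack (expected 4)"

# One way to see what the extra field is: absorb the tail explicitly.
first, second, third, *rest = batch
```

So the fix is likely to match the unpack in validation_step to whatever your data class actually yields, rather than to change the annotation format.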
Sorry to bother you. I have read your paper and code, but I could not find the prompt method; I only see that the relation between two entities is handled differently than in traditional relation extraction. Is there any unpublished code?
It seems that the virtual type words and soft prompt are the same thing, except that virtual type words are initialized according to the prior probability distribution.
May I know whether my understanding is correct?
I ran get_label_word.py to get the .pt file for the SemEval dataset, then ran scripts/semeval.sh, but I find that the loss does not drop during training and the F1 score is quite low (0.14). What is wrong? Waiting for your reply, thanks.
FileNotFoundError: [Errno 2] No such file or directory: './dataset/roberta-large_semeval.pt'
How can I get this .pt file?
hi, congratulations on your great work!
I have a question after reading the paper (though I have not read the code yet). How do you calculate the value of \phi(r)? It seems that you don't explain that in the paper. Thank you.
According to your paper, you estimate the prior distributions over the candidate sets C_sub and C_obj of potential entity types according to a given relation class, where the prior distributions are estimated by frequency statistics. But how do you estimate the prior distributions when the relation in an instance is unknown? It seems like a chicken-and-egg problem.
For example, the relation “per:country_of_birth” indicates the subject entity belongs to “person” and the object entity belongs to “country”. The prior distributions for C_sub can be counted as {"person":1} , but we should know this instance contains the relation "per:country_of_birth" in advance, then we can estimate the prior distributions of the candidate set.
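As I read the paper, the frequency statistics are computed offline over the training annotations, where each instance's relation is known; the resulting priors are used once to initialize the virtual type words, not at prediction time, so the chicken-and-egg situation would not arise during inference. A toy sketch of that counting (data invented for illustration):

```python
from collections import Counter, defaultdict

# Each training instance pairs a known relation label with the NER type
# of its subject entity; the prior is just normalized counts per relation.
train = [
    ("per:country_of_birth", "person"),
    ("per:country_of_birth", "person"),
    ("org:founded_by", "organization"),
]

counts = defaultdict(Counter)
for rel, subj_type in train:
    counts[rel][subj_type] += 1

def subj_prior(rel):
    c = counts[rel]
    total = sum(c.values())
    return {t: n / total for t, n in c.items()}
```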
Hello! I have the following question:
Section 4.3 of the paper says the first training stage optimizes the embeddings of the virtual type words and answer words, but in the code below from transformer.py, the else branch uses the entire embedding matrix as the parameters to optimize. Why is that?
def configure_optimizers(self):
    no_decay_param = ["bias", "LayerNorm.weight"]
    if not self.args.two_steps:
        parameters = self.model.named_parameters()
    else:
        # model.bert.embeddings.weight
        parameters = [next(self.model.named_parameters())]
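One observation, not a claim about what the repo actually does: handing the optimizer the whole embedding matrix can still amount to tuning only the new virtual tokens if gradients for all other rows are masked before the update. A numpy illustration of that trick, with invented sizes and indices:

```python
import numpy as np

# Masked-gradient update: only rows for the virtual tokens move, even
# though the whole matrix is the "parameter". Indices are made up.
vocab_size, dim = 10, 4
embedding = np.zeros((vocab_size, dim))
virtual_ids = [7, 8, 9]                    # e.g. [sub], [obj], answer words

grad = np.ones((vocab_size, dim))          # stand-in for a backprop gradient
mask = np.zeros((vocab_size, 1))
mask[virtual_ids] = 1.0

embedding -= 0.1 * grad * mask             # SGD step touches only those rows
```

Whether KnowPrompt applies such a mask elsewhere, or genuinely updates all embedding rows in stage one, is exactly what the question above asks the authors to clarify.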
Hi. Thanks for your great work.
The paper mentions a weighted average function on page 4, indicating that the embeddings of virtual words should be initialized with respect to a probability distribution. However, your code performs only a mean operation. Is that a bug, or is the difference negligible enough to ignore?
Moreover, I am a little confused about the probability distribution. Is it still based on the prior distributions discussed in Entity Knowledge Injection in Section 4.1?
Thanks in advance for your patience.
KnowPrompt/lit_models/transformer.py
Line 167 in 8734c20
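To state the difference being asked about concretely (toy numbers, not from the paper): a plain mean is the special case of the probability-weighted average with a uniform prior, so the two only diverge when the prior is skewed:

```python
import numpy as np

# Two candidate word embeddings and a skewed prior over them.
vecs = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
prior = np.array([0.75, 0.25])

plain_mean = vecs.mean(axis=0)   # ignores the prior
weighted = prior @ vecs          # respects the prior
```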
Hello, I would like to ask you a few questions:
1. Do the virtual template words refer to the constructed template, i.e., "Hamilton [MASK] British [SEP]" in Figure 1?
2. Do the answer words refer to the relation words predicted at [MASK], i.e., country, city, residence in Figure 1?
3. What do h, t, and r refer to in Equation 5?
4. The paper repeatedly mentions "estimate the probability distributions"; how is this value computed?
Many thanks.
I want to know how the relation embedding is constructed. From this repo I gathered that the relation embedding is built from BERT's [MASK]? But that is the same as the type word embedding. Am I right?
Hello, I have a few questions I would like to ask you:
Thank you!
Hello! First of all, my respects for your excellent work and open-source spirit!
Background: while trying to re-run the code in this project, I had some doubts about the effectiveness of the KE loss. Although the paper says, "In addition, there exists rich semantic knowledge among relation labels and structural knowledge implications among relational triples, which cannot be ignored.", I understand ke_loss here as the structural knowledge the paper refers to. But directly using
KnowPrompt/lit_models/transformer.py
Lines 292 to 293 in 9159e4b
(Incidentally, the logging in the public code is wrong here: both statements log loss.)
KnowPrompt/lit_models/transformer.py
Lines 196 to 197 in 9159e4b
After correcting the output, you can see that ke_loss stays around 20 throughout, which confirms my earlier suspicion.
To sum up, my questions are as follows. Many thanks if you can find time to answer them! @njcx-ai (you're one of the authors, I assume~ 😘)
KnowPrompt/lit_models/transformer.py
Lines 262 to 297 in 9159e4b
Some questions I had while reading this excellent paper:
Looking forward to the authors' reply! Thank you very much!
Looking at the code, it seems [sub] and [obj] are just single tokens, i.e., the type word embedding is the same for every relation. Why, then, do the words around [sub] and [obj] differ across sentences in Table 6 of the paper? Does the type word embedding change at inference time?
(My understanding of the pipeline: after training the embedding for each relation plus the embeddings of [sub] and [obj], at inference we insert [sub], [obj], and [MASK] according to the template, predict [MASK], and compute its similarity with the relation embeddings.)
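A toy numpy sketch of the pipeline described in the parenthetical (my reading of the flow, with hand-picked vectors; the real model uses the contextualized [MASK] hidden state and learned relation embeddings):

```python
import numpy as np

# Rows are (toy) relation embeddings; the [MASK] hidden state is scored
# against each, and the highest-scoring relation is predicted.
relation_embs = np.eye(3, 4)                  # 3 relations, dim 4
mask_hidden = np.array([0.1, 0.2, 0.9, 0.0])  # pretend [MASK] hidden state

scores = relation_embs @ mask_hidden          # similarity to each relation
pred = int(np.argmax(scores))                 # index of predicted relation
```

On this reading the [sub]/[obj] embeddings themselves are fixed after training; what varies per sentence is the contextual representation around them, which is presumably what Table 6 visualizes.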
I want to change the English RoBERTa model to a Chinese RoBERTa. The data-processing module sets use_bert to true, so the dataset returns batches expressed as 5 variables, but I cannot find any module that accepts such a batch.
Could the authors tell me which parameters I should modify if I want to use BERT as the pre-trained model?