Comments (7)
My current result:
Epoch: 150 | Loss: 2366.799846 | Mention recall: 0.729597 | Coref recall: 0.673229 | Coref precision: 0.403507

Hi, did you have to make any modifications to the training code to get these results?
Yes.
First change: please check my last reply on Jun 22 in #18.
Thank you! So I changed the loss according to your suggestion, using:
loss = torch.sum(torch.log(torch.sum(torch.mul(probs, gold_indexes), dim=1).clamp_(eps, 1-eps)), dim=0) * -1
But the model still does not converge. Did you make any other changes besides that?
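For readers following along, here is a minimal, self-contained sketch of that loss term (hypothetical tensor shapes and a random `probs`; this is not the repo's actual training loop). Note it uses the out-of-place `clamp` rather than the in-place `clamp_`, since in-place ops on intermediate tensors can interfere with autograd:

```python
import torch

eps = 1e-7

# Hypothetical setup: 3 mentions, 4 candidate antecedents each.
probs = torch.softmax(torch.randn(3, 4), dim=1)   # predicted antecedent distribution
gold_indexes = torch.tensor([[1., 0., 0., 0.],
                             [0., 1., 1., 0.],
                             [0., 0., 0., 1.]])   # 1.0 marks gold antecedents

# Total probability mass assigned to gold antecedents, per mention.
gold_prob = torch.sum(torch.mul(probs, gold_indexes), dim=1)

# Negative log-likelihood; clamp (out of place) keeps log() finite.
loss = torch.sum(torch.log(gold_prob.clamp(eps, 1 - eps)), dim=0) * -1
```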
from coreference-resolution.
I have the same issue. Has anyone found a solution for that?
I also changed one line in the coref.py file, as suggested in issue #10, to handle the index-out-of-range issue, adding this inside def train_epoch(self, epoch):
self.train_corpus = [doc for doc in self.train_corpus if doc.sents]
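A minimal sketch of what that filter prevents (hypothetical `Document` stand-in, not the repo's actual class): a document with no sentences would otherwise reach the batching code with nothing to pad, so such documents are dropped before each epoch:

```python
# Hypothetical stand-in for the corpus documents (not the repo's class).
class Document:
    def __init__(self, sents):
        self.sents = sents  # list of tokenized sentences

train_corpus = [
    Document([["A", "short", "sentence", "."]]),
    Document([]),  # empty document: would break batching downstream
]

# Same filter as the coref.py one-liner: keep only documents with sentences.
train_corpus = [doc for doc in train_corpus if doc.sents]
```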
Well, I made these modifications and finished training and evaluation, but my result is poor:
Epoch: 150 | Loss: 2832.548317 | Mention recall: 0.067340 | Coref recall: 0.024316 | Coref precision: 0.020000
So did you solve it?