
docred's People

Contributors

hongwang600


docred's Issues

Question about SentModel

Thank you for your work!
I have questions about the SentModel mentioned in your paper.

  1. Is the SentModel based on BiLSTM or BERT?
  2. I can't find the code for the SentModel. Could you point it out?

How exactly should I be running the code in order to fully implement the two-step process?

Hi. I'm trying to run your code but am a bit confused. Looking at the BERT model (I believe in the BiLSTM module), I can't tell which part is the first phase and which is the second. It seems to me that there's just one large phase rather than two.

I've checked Issue #3, and it seems that the code for the two-step process is contained in the rel_exist_bert_cls_sep branch? However, I'm still not sure where the first and second steps take place.

Could you provide some tips on where I should be looking and how I should be running the code properly? Thanks!


Edit

Reading the paper again, in "3.2 Implementation Details" you state in the second paragraph:

In the first step, we set the relation label for all relational instances to be 1, while the label for all N/A relations to be 0. We randomly sample N/A relations at a ratio 3:1 within a batch. In the second step, we train a new model using only relational instances, and the specific relation label is kept in this step.

I initially thought that you "pretrain" a model in the first step using binary classification and further fine-tune the model in the second step. However, if you train a new model in the second step, how is the information from the first step used?
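The quoted paragraph can be sketched as a small data-preparation step. This is a minimal illustration, not the repository's actual code: the function names and the `(instance, label)` tuple format are my own, label `0` stands for N/A, and I am reading "ratio 3:1" as up to three sampled N/A instances per relational instance.

```python
import random

def step1_batch(instances, na_ratio=3, seed=0):
    """Step 1 (sketch): relabel relational instances as 1, N/A as 0,
    and sample N/A instances at na_ratio:1 against relational ones."""
    rng = random.Random(seed)
    positives = [(x, 1) for x, y in instances if y != 0]
    negatives = [(x, 0) for x, y in instances if y == 0]
    k = min(len(negatives), na_ratio * len(positives))
    return positives + rng.sample(negatives, k)

def step2_data(instances):
    """Step 2 (sketch): keep only relational instances,
    preserving their specific relation labels."""
    return [(x, y) for x, y in instances if y != 0]

# Hypothetical instances: 2 relational (labels 2 and 5), 4 N/A (label 0).
instances = [("a", 2), ("b", 0), ("c", 0), ("d", 0), ("e", 0), ("f", 5)]
batch = step1_batch(instances)   # binary labels only
step2 = step2_data(instances)    # [("a", 2), ("f", 5)]
```

Note that under this reading the two steps consume different label sets, which is exactly the questioner's point: nothing here forces the step-2 model to reuse step-1 weights.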

Question about the value of 'not NA acc'.

Dear authors,
Thanks for your BERT implementation on DocRED. I have a question: the value of 'not NA acc' is quite large during training, and when the model converges it even approaches 1. But the test F1 is more normal, at about 0.54. Beyond that, I find that the corresponding value in the original implementation (ACL-19) with LSTM seems in line with the final test F1. So I would like to know why 'not NA acc' and test F1 differ so much during training.
Looking forward to your reply!
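The gap described above is plausible once the two numbers are written out. The following is a hedged sketch (helper names are my own, and I am assuming 'not NA acc' means accuracy restricted to instances whose gold label is not N/A, as in the original DocRED training loop): false positives on N/A pairs hurt F1 precision but never touch 'not NA acc'.

```python
def not_na_acc(preds, golds):
    """Accuracy over instances whose gold label is not NA (label 0)."""
    pairs = [(p, g) for p, g in zip(preds, golds) if g != 0]
    return sum(p == g for p, g in pairs) / len(pairs)

def micro_f1(preds, golds):
    """Micro-averaged F1 treating label 0 (NA) as the negative class."""
    tp = sum(1 for p, g in zip(preds, golds) if p == g and p != 0)
    pred_pos = sum(1 for p in preds if p != 0)
    gold_pos = sum(1 for g in golds if g != 0)
    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / gold_pos if gold_pos else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Toy example: both non-NA pairs are classified correctly,
# but the model also hallucinates relations on two NA pairs.
golds = [1, 2, 0, 0, 0]
preds = [1, 2, 1, 2, 0]
# not_na_acc(preds, golds) == 1.0, yet micro_f1(preds, golds) ≈ 0.67
```

So a 'not NA acc' near 1 with a test F1 around 0.54 is not contradictory; the two metrics are computed over different instance sets.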

Question about Bert.

Dear authors, thanks for your efforts. I am planning to use your BERT implementation as a baseline for my MSc project on document-level RE. In the final part of the paper, you compare the performance of the sentence-encoding model and the BiLSTM. Could you tell me whether the BiLSTM refers to the baseline model in the DocRED paper?
