
tree-transformer's Introduction

Tree Transformer

This is the official implementation of the paper Tree Transformer: Integrating Tree Structures into Self-Attention. If you use this code or our results in your research, we would appreciate it if you cite our paper as follows:

@article{Wang2019TreeTransformer,
  title={Tree Transformer: Integrating Tree Structures into Self-Attention},
  author={Yau-Shian Wang and Hung-Yi Lee and Yun-Nung Chen},
  journal={arXiv preprint arXiv:1909.06639},
  year={2019}
}

Dependencies

  • python3
  • PyTorch 1.0

We use the BERT tokenizer from PyTorch-Transformers to tokenize words. Please install PyTorch-Transformers following the instructions in that repository.
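For example, a quick sanity check that the tokenizer is installed and working (the bert-base-uncased vocabulary here is only an illustrative choice; check the code for the vocabulary the repository actually uses):

import torch  # noqa: F401  (pulled in by pytorch_transformers)
from pytorch_transformers import BertTokenizer

# Download/load the BERT word-piece vocabulary and tokenize a sentence.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokens = tokenizer.tokenize('Tree Transformer integrates tree structures into self-attention.')
print(tokens)                                   # word pieces, e.g. ['tree', 'transform', '##er', ...]
print(tokenizer.convert_tokens_to_ids(tokens))  # the corresponding vocabulary ids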

Training

For grammar induction training:
python3 main.py -train -model_dir [model_dir] -num_step 60000
The default setting achieves an F1 of approximately 49.5 on the WSJ test set. The training file 'data/train.txt' includes all WSJ data except WSJ_22 and WSJ_23.

Evaluation

For grammar induction testing:
python3 main.py -test -model_dir [model_dir]
The code creates a result directory named model_dir. The result directory includes 'bracket.json' and 'tree.txt'. 'bracket.json' contains the brackets of the trees output by the model, which can be used to evaluate F1. The ground-truth brackets of the test data can be obtained using the code of ON-LSTM. 'tree.txt' contains the parse trees. The default test file 'data/test.txt' contains the sentences of WSJ_23.
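As a reference for how such brackets can be scored, below is a minimal sketch of unlabeled bracketing F1, assuming each tree is represented as a collection of (start, end) spans as in 'bracket.json'; it illustrates the metric only and is not the scorer used for the reported numbers:

def bracket_f1(pred_brackets, gold_brackets):
    # Unlabeled bracketing F1 between two collections of (start, end) spans.
    pred, gold = set(map(tuple, pred_brackets)), set(map(tuple, gold_brackets))
    if not pred or not gold:
        return 0.0
    overlap = len(pred & gold)
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Two of three spans agree, so precision = recall = F1 = 2/3.
print(bracket_f1([(0, 5), (0, 2), (3, 5)], [(0, 5), (1, 2), (3, 5)]))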

Acknowledgements

Contact

[email protected]

tree-transformer's People

Contributors

glicerico, yaushian


tree-transformer's Issues

The group_prob & break_prob

In model.py, line 50:
group_prob, break_prob = self.group_attn(x, mask, group_prob)
Should this perhaps be:
group_prob, break_prob = self.group_attn(x, mask, break_prob)?
I ask because break_prob is neibor_attn in GroupAttention, which corresponds to a in the paper.

parse tree structure

Hi, thanks for your interesting work!

After reading the paper, I wonder why the parse tree may have three branches at non-leaf nodes, as shown in Figure 3 of your paper. For example, in Figure 3, the root node has three subtrees.

question 1e-9?

Hi! I noticed that the constant 1e-9 is used many times in your code (GroupAttention). Why do you do this?
Thanks!
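(For context: a small constant such as 1e-9 is a common numerical-stability epsilon, keeping logarithms and divisions away from exactly zero. Whether that is the authors' motivation here is an assumption, not confirmed; the snippet below only illustrates the general pattern.)

import torch

p = torch.tensor([0.0, 0.5, 1.0])
print(torch.log(p))          # tensor([-inf, -0.6931, 0.]) -- the -inf breaks gradients
print(torch.log(p + 1e-9))   # finite everywhere; other values barely perturbed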

Inquiry

Hello, I would like to ask whether there is a pre-trained Tree Transformer model available, for example on the Hugging Face hub, that can be downloaded and used directly.

Can't run without cuda

The code cannot be run without a working CUDA installation.
It would be beneficial to include a CPU option, e.g., to make it possible to modify the architecture or debug on a machine without GPUs.

Share pre-trained model?

Do you have plans to share your pre-trained model?
I trained a model myself using the WSJ data you provide, but my results, as explained in Issue #9, are not good. If you share your pre-trained model, I could test it.
Thanks!

Questions in paper

I noticed that in the attention figure, the attention of each token to itself is 0; that is, the diagonal is zero, which means a token cannot attend to itself. Will this affect performance? Is there any experimental verification? Thanks.
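(For reference, zeroing the diagonal of an attention map can be implemented with a mask like the one below; this is a generic illustration of the behavior the figure shows, not the repository's actual masking code.)

import torch

n = 4
scores = torch.randn(n, n)                        # raw attention scores
diag = torch.eye(n, dtype=torch.bool)             # True on the diagonal
scores = scores.masked_fill(diag, float('-inf'))  # forbid attending to oneself
attn = torch.softmax(scores, dim=-1)              # diagonal entries become exactly 0
print(attn)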

Code to reproduce Masked LM results

Could you please share the code to reproduce the results in Section 7.5 of your paper?

I am interested in using your model as a masked LM, and then generating sentences from it in a fashion similar to bert-gen. I implemented a similar procedure using Tree-Transformer, with poor results. So my first step now is to check whether the predictions made by my trained model are reliable.
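(For context, bert-gen style generation repeatedly masks one position and resamples it from the masked LM. A rough sketch of that loop is below; model and mask_id are hypothetical placeholders for any masked LM returning per-position logits, not the actual Tree-Transformer interface.)

import torch

def gibbs_generate(model, ids, mask_id, steps=100):
    # ids: LongTensor of shape (1, seq_len); model(ids) -> logits of shape (1, seq_len, vocab).
    ids = ids.clone()
    seq_len = ids.size(1)
    for _ in range(steps):
        pos = torch.randint(1, seq_len - 1, (1,)).item()   # keep [CLS]/[SEP] fixed
        ids[0, pos] = mask_id                              # mask the chosen position
        with torch.no_grad():
            logits = model(ids)
        probs = torch.softmax(logits[0, pos], dim=-1)
        ids[0, pos] = torch.multinomial(probs, 1).item()   # resample from the LM
    return ids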

Add LICENSE to repo

Hi!

I am interested in using your code, but I noticed that the repo does not have a LICENSE.

Would it be possible to add a LICENSE to the repo? Thank you for the help!

Reproduce parsing performance

Thank you for the awesome work!

We were trying to reproduce the performance of the parsing module with four different random seeds (1, 11, 111, 1111), but the average F1 score given by the default script is 47.6, with a maximum F1 of around 50.4.

For your reference, we are controlling random seeds using the following function:

import random
import numpy
import torch

def set_random_seed(seed=0):
    # Seed every RNG that can affect training.
    random.seed(seed)
    numpy.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # if you are using multi-GPU
    # Disable cuDNN autotuning and enforce deterministic kernels.
    torch.backends.cudnn.enabled = False
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True

Could you please share the random seeds or any specific settings, aside from the default parameters, used for the Table 1 experiments, so that we can reproduce them? Thank you!
