
Triplet Matching Network

The source code for the self-supervised taxonomy completion method TMN, published at AAAI 2021.

Please cite the following work if you find the code useful.

@inproceedings{zhang2021tmn,
	Author = {Zhang, Jieyu and Song, Xiangchen and Zeng, Ying and Chen, Jiaze and Shen, Jiaming and Mao, Yuning and Li, Lei},
	Booktitle = {AAAI},
	Title = {Taxonomy Completion via Triplet Matching Network},
	Year = {2021}
}

Contact: Jieyu Zhang ([email protected])

Install Guide

Install DGL 0.4.0 with GPU support using Conda

From the following page: https://www.dgl.ai/pages/start.html

conda install -c dglteam dgl-cuda10.0

Other packages

ipdb tensorboard gensim networkx tqdm more_itertools
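
These can typically be installed via pip, for example:

pip install ipdb tensorboard gensim networkx tqdm more_itertools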

Data Preparation

For the datasets used in our paper, you can directly download all input files below and skip this section.

To run on new input taxonomies, you need to read this section and format your datasets accordingly.

MAG-CS

MAG-Psy

WordNet-Noun

WordNet-Verb

Step 0.a (Required): Organize your input taxonomy along with node features into the following 3 files

1. <TAXONOMY_NAME>.terms, each line represents one concept in the taxonomy, including its ID and surface name

taxon1_id \t taxon1_surface_name
taxon2_id \t taxon2_surface_name
taxon3_id \t taxon3_surface_name
...

2. <TAXONOMY_NAME>.taxo, each line represents one relation in the taxonomy, including the parent taxon ID and child taxon ID

parent_taxon1_id \t child_taxon1_id
parent_taxon2_id \t child_taxon2_id
parent_taxon3_id \t child_taxon3_id
...

3. <TAXONOMY_NAME>.terms.<EMBED_SUFFIX>.embed, the first line indicates the vocabulary size and embedding dimension; each of the following lines represents one taxon with its pretrained embedding

<VOCAB_SIZE> <EMBED_DIM>
taxon1_id taxon1_embedding
taxon2_id taxon2_embedding
taxon3_id taxon3_embedding
...

The embedding file follows the gensim word2vec format.
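
As a sanity check, here is a minimal Python sketch (toy taxon IDs and 2-dimensional vectors, purely illustrative) that writes the three files for a tiny taxonomy and verifies that the embedding file loads with gensim:

from pathlib import Path
from gensim.models import KeyedVectors

name = "toy"  # stands in for <TAXONOMY_NAME>
terms = {"0": "machine learning", "1": "deep learning", "2": "computer vision"}
edges = [("0", "1"), ("0", "2")]  # parent -> child
vecs = {"0": [0.1, 0.2], "1": [0.3, 0.4], "2": [0.5, 0.6]}

# <TAXONOMY_NAME>.terms: taxon_id \t surface_name
Path(f"{name}.terms").write_text("".join(f"{i}\t{s}\n" for i, s in terms.items()))

# <TAXONOMY_NAME>.taxo: parent_taxon_id \t child_taxon_id
Path(f"{name}.taxo").write_text("".join(f"{p}\t{c}\n" for p, c in edges))

# <TAXONOMY_NAME>.terms.embed: gensim word2vec text format, keyed by taxon ID
rows = [f"{len(vecs)} 2"] + [f"{i} " + " ".join(map(str, v)) for i, v in vecs.items()]
Path(f"{name}.terms.embed").write_text("\n".join(rows) + "\n")

# Sanity check: the file should load as a word2vec-format KeyedVectors object.
kv = KeyedVectors.load_word2vec_format(f"{name}.terms.embed", binary=False)
print(kv["0"])  # array([0.1, 0.2], dtype=float32)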

Notes:

  1. Make sure the <TAXONOMY_NAME> is the same across all the 3 files.
  2. The <EMBED_SUFFIX> is used to choose which initial embedding you will use. You can leave it empty to load the file "<TAXONOMY_NAME>.terms.embed". Make sure you can generate the embedding for a new given term.

Step 0.b (Optional): Generate train/validation/test partition files

You can generate your desired train/validation/test partition files by creating three separate files (named <TAXONOMY_NAME>.terms.train, <TAXONOMY_NAME>.terms.validation, and <TAXONOMY_NAME>.terms.test) and putting them in the same directory as the above three required files.

These three partition files are of the same format -- each line includes one taxon_id that appears in the above <TAXONOMY_NAME>.terms file.
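
A minimal sketch of generating such a random partition (the 80/10/10 split and the fixed seed are illustrative choices, not the repository's defaults):

import random

name = "toy"  # stands in for <TAXONOMY_NAME>, matching the toy files above
with open(f"{name}.terms") as f:
    ids = [line.split("\t")[0] for line in f if line.strip()]

random.seed(42)
random.shuffle(ids)
n_held_out = max(1, len(ids) // 10)  # 10% validation, 10% test
splits = {
    "validation": ids[:n_held_out],
    "test": ids[n_held_out:2 * n_held_out],
    "train": ids[2 * n_held_out:],
}
# Each partition file holds one taxon_id per line.
for split, taxon_ids in splits.items():
    with open(f"{name}.terms.{split}", "w") as f:
        f.write("\n".join(taxon_ids) + "\n")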

Step 1: Generate the binary dataset file

  1. create a folder "./data/{DATASET_NAME}"
  2. put the above three required files (as well as the three optional partition files, if any) in "./data/{DATASET_NAME}"
  3. under this root directory, run
python generate_dataset_binary.py \
    --taxon_name <TAXONOMY_NAME> \
    --data_dir <DATASET_NAME> \
    --embed_suffix <EMBED_SUFFIX> \
    --existing_partition 0 \
    --partition_pattern internal

This script first loads the existing taxonomy (along with the initial node features indicated by embed_suffix) from the three required files above. Then, if existing_partition is 0, it generates a random train/validation/test partition; otherwise, it loads the existing train/validation/test partition files. Notice that if partition_pattern is internal, it randomly samples both internal and leaf nodes for validation/test, which makes it a taxonomy completion task; if it is set to leaf, the task becomes taxonomy expansion. Finally, it saves the generated dataset (along with all initial node features) in one pickle file for fast loading next time.
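
To make the internal-vs-leaf distinction concrete, here is a hedged sketch using a toy graph and a simplified sampler (not the script's actual code):

import random
import networkx as nx

# Toy taxonomy as a directed graph (parent -> child), mirroring the .taxo format.
G = nx.DiGraph([("root", "A"), ("root", "B"), ("B", "C"), ("B", "D"), ("D", "E")])
root = "root"

leaves = [n for n in G.nodes if G.out_degree(n) == 0]
internals = [n for n in G.nodes if G.out_degree(n) > 0 and n != root]

def sample_eval_nodes(partition_pattern, k):
    # "leaf": only leaf nodes are held out -> taxonomy expansion.
    # "internal": internal nodes may be held out too -> taxonomy completion.
    pool = leaves if partition_pattern == "leaf" else leaves + internals
    return random.sample(pool, min(k, len(pool)))

print(sample_eval_nodes("leaf", 2))      # e.g. ['A', 'E']
print(sample_eval_nodes("internal", 2))  # may also include 'B' or 'D'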

Model Training

Simplest training

Write all the parameters in a config file, say ./config_files/config.universal.json, and then start training.

Please check ./config_files/config.explain.json for an explanation of all parameters in the config file.

There are four config files under each sub-directory of ./config_files:

  1. baselineex: baselines for taxonomy expansion;
  2. tmnex: TMN for taxonomy expansion;
  3. baseline: baselines for taxonomy completion;
  4. tmn: TMN for taxonomy completion.

python train.py --config config_files/config.universal.json

Specifying parameters in the training command

For example, you can specify the matching method as follows:

python train.py --config config_files/config.universal.json --mm BIM --device 0

Please check ./train.py for all configurable parameters.

Running one-to-one matching baselines

For example, BIM method on MAG-PSY:

python train.py --config config_files/MAG-PSY/config.test.baseline.json --mm BIM

Running Triplet Matching Network

For example, on MAG-PSY:

python train.py --config config_files/MAG-PSY/config.test.tmn.json --mm TMN

Supporting multiple feature encoders

Although we only use the initial embedding as input in our paper, our code supports more complex encoders, such as a GNN or an LSTM, and combinations of them.

Check out the mode parameter: there are three symbols for mode, r, p and g, representing the initial embedding, LSTM and GNN respectively.

If you want to replace the initial embedding with a GNN encoder, set mode to g.

If you want to use a combination of the initial embedding and a GNN encoder, set mode to rg; the initial embedding and the embedding output by the GNN encoder will then be concatenated for calculating the matching score.

For the GNN encoder, we refer users to Jiaming's WWW'20 paper, TaxoExpan.

For the LSTM encoder, we collect the path from the root to the anchor node and then use an LSTM to encode it, generating the representation of the anchor node (a sketch follows below).
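
A minimal PyTorch sketch of this path-encoding idea (dimensions, taxon IDs, and the path itself are illustrative; this is not the repository's actual module):

import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 100, 16, 32
embedding = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

# Taxon IDs on the path from the root down to the anchor node.
path = torch.tensor([[0, 5, 17, 42]])

# The final hidden state serves as the anchor node's representation.
_, (h_n, _) = lstm(embedding(path))
anchor_repr = h_n[-1]
print(anchor_repr.shape)  # torch.Size([1, 32])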

Model Inference

Predict on completely new taxons.

Data Format

We assume the input taxon list is of the following format:

term1 \t embeddings of term1
term2 \t embeddings of term2
term3 \t embeddings of term3
...

The term can be either a unigram or a phrase, as long as it does not contain the "\t" character. The embedding is space-separated and of the same dimension as the trained model.
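
A minimal sketch that writes such an input file (the terms, vectors, and file name are purely illustrative; the vector dimension must match your trained model):

new_taxons = {
    "reinforcement learning": [0.12, 0.56],
    "graph neural network": [0.33, 0.91],
}
# One term per line: term \t space-separated embedding.
with open("new_taxons.txt", "w") as f:
    for term, vec in new_taxons.items():
        f.write(term + "\t" + " ".join(map(str, vec)) + "\n")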

Infer

python infer.py --resume <MODEL_CHECKPOINT.pth> --taxon <INPUT_TAXON_LIST.txt> --save <OUTPUT_RESULT.tsv> --device 0

Model Organization

For all implementations, we follow the project organization in pytorch-template.


Issues

Question regarding the train/dev/test split

Hi Jieyu,

Thanks for the great work and for making it publicly available.
Just a quick question I could not figure out: in your paper, you mention that for partition_pattern='internal', a number of nodes are randomly removed.

What if the original taxonomy is like:

      root
     /    \
   A       B
           /  \
         C     D
                \
                 E

Would you consider removing both "B" and "D" for constructing the test set?
If yes, what would be the ground-truth position for B: is it <root, D> or <root, E>? (Worth noting that E is now in the test set, and my speculation is that we cannot access E.)
This seems to go against the assumption of expanding $|C|$ independent concepts.

Best,
Jiaying

How to use after "Model Training" step

I have completed the Model Training step:

The best model saved in: data/saved/MAG-PSY/MatchModel/0408_153339/models/trial1/model_best.pth ...
Finish training in 14853.144903421402 seconds
& 242.570 +- 0.000  & 554.271 +- 0.000  & 0.136 +- 0.000  & 0.304 +- 0.000  & 0.379 +- 0.000  & 0.278 +- 0.000  & 0.124 +- 0.000  & 0.077 +- 0.000  & 0.476 +- 0.000

The README.md is not clear on how to use the model after training. Can you provide an example of how to add a new concept and print out the new taxonomy where it is added? Also, how can I reproduce the results shown in the paper? Again, there is no mention of this in the README.md...

Training tips?

Hi, thank you so much for releasing this code; it is really clean and works well!

I am trying to train TMN on my own taxonomy (I am the author of Arborist, which you compare with in your paper).

I was wondering if you had any tips for training to get the best performance? I am currently trying different values of:

  • k
  • Learning rate
  • Batch size
  • Number of negative samples per batch

Are there any other parameters you recommend I tune?

Thanks in advance!

How to use TMN for taxonomy expansion

Hi Jieyu! Thanks a lot for the well-organized code and data! I'm trying to apply TMN to taxonomy expansion as shown in the paper's Table 3. The paper says TMN is trained on the taxonomy completion task, but the config files config.test.tmnex.json are not directly runnable. I imagine the implementation might first train a TMN on the taxonomy completion task and then, during inference on taxonomy expansion, use only the s1 scorer.

Could you share some details about reproducing the taxonomy expansion result using TMN? Thanks a lot!

Paper abstract corpus

Hello!
Very impressive work, thank you for sharing your code.
In the paper, you mention that the nodes in MAG are defined using a corpus of related paper abstracts. Could you please share the corpus or the textual data related to the nodes in MAG?
Thank you!
Inès

How to test the model?

I would like to reproduce the results in the paper, but I cannot find any test scripts in the code, and the README also lacks this part. Any help would be appreciated.

A possible bug?

Hi Jieyu, in line 50 of infer.py, should "kv.add" be replaced by "kv.add_vectors"?
