PGExplainer

This is a TensorFlow implementation of PGExplainer:

Parameterized Explainer for Graph Neural Network

NeurIPS 2020

Towards Inductive and Efficient Explanations for Graph Neural Networks

TPAMI 2024

Requirements

  • Python 3.6.8
  • tensorflow 2.0
  • networkx

Pytorch Implementations

PGExplainer is now available in pytorch_geometric:

https://github.com/pyg-team/pytorch_geometric/blob/master/torch_geometric/explain/algorithm/pg_explainer.py

Here are several re-implementations and reproduction reports from other groups. Many thanks to these researchers for re-implementing PGExplainer and making it easier to use!

  1. [Re] Parameterized Explainer for Graph Neural Network

https://zenodo.org/record/4834242/files/article.pdf

Code:

https://github.com/LarsHoldijk/RE-ParameterizedExplainerForGraphNeuralNetworks

Note that this report adopts GCN models different from those in our implementation.

  2. DIG

https://github.com/divelab/DIG/tree/main/dig/xgraph/PGExplainer

  3. Reproducing: Parameterized Explainer for Graph Neural Network

https://openreview.net/forum?id=tt04glo-VrT

Code:

https://openreview.net/attachment?id=tt04glo-VrT&name=supplementary_material

  4. GitLab https://git.gtapp.xyz/zhangying/pgexplainer

Awesome Graph Explainability Papers

https://github.com/flyingdoog/awesome-graph-explainability-papers

References

@article{luo2020parameterized,
  title={Parameterized Explainer for Graph Neural Network},
  author={Luo, Dongsheng and Cheng, Wei and Xu, Dongkuan and Yu, Wenchao and Zong, Bo and Chen, Haifeng and Zhang, Xiang},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}
@article{luo2024towards,
  title={Towards Inductive and Efficient Explanations for Graph Neural Networks},
  author={Luo, Dongsheng and Zhao, Tianxiang and Cheng, Wei and Xu, Dongkuan and Han, Feng and Yu, Wenchao and Liu, Xiao and Chen, Haifeng and Zhang, Xiang},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2024},
  publisher={IEEE}
}


pgexplainer's Issues

MUTAG graph edge file

Hi,
I have processed the Mutagenicity.zip file, and two nodes of the 2472nd graph (node IDs 74823 and 74824) have no recorded edges.

Got NaN output when running BA-shapes.ipynb

Hi

I attempted to run the code in "BA-shapes.ipynb" and got the error "ValueError: Input contains NaN, infinity or a value too large for dtype('float32')". Only "BA-shapes.ipynb" has this issue; the other three experiments run fine. I think this may be caused by my Python environment. Could you please provide the environment you used to run the experiments?

Here is my output of running "BA-shapes.ipynb" and "pip freeze":
https://gist.github.com/haolanut/814d3908fd4a91fd37da22b0349269cc

Also, I'm wondering whether you plan to provide the script for explaining the graph classification model.

Thank you!

BA-2Motifs

Hi,

Really cool project. I was wondering if you had plans to release the BA-2Motifs dataset as well; at the moment it is missing from the datasets folder. Would you also be able to provide more details on the hyper-parameter choices you made in the graph classification models?

While I looked at the appendix, I see that there are additional options in the model definition, including batch norm, concatenation, an add-pool option, etc.

I'm trying to replicate your results in Pytorch/ Pytorch Geometric for the graph classification set-ups.

Thanks!

The loss in the code is not the same as in the paper

The prediction loss in the paper is a cross-entropy summed over classes, but in the code you just take the probability of the entry corresponding to the label and apply -tf.math.log() to it to get pred_loss; the $\sum_{c=1}^{C}$ from the paper does not appear in the code. (The original issue showed the paper's equation and the code as image attachments.)
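For what it's worth, with a one-hot label the class sum collapses to exactly that shortcut, so the two forms agree numerically. A minimal sketch with hypothetical probability values:

```python
import math

# The paper's prediction loss is  -sum_{c=1}^{C} y_c * log(p_c).
# With a one-hot label y, every term except c = label is zero, so the
# sum collapses to -log(p_label) -- the quantity the code computes by
# indexing the label's probability before applying -tf.math.log().
probs = [0.1, 0.7, 0.2]   # hypothetical softmax output over C = 3 classes
label = 1
one_hot = [1.0 if c == label else 0.0 for c in range(len(probs))]

full_sum = -sum(y * math.log(p) for y, p in zip(one_hot, probs))
shortcut = -math.log(probs[label])

print(abs(full_sum - shortcut) < 1e-12)  # the two forms are equal
```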

Explaination for DGL model

Dear authors,
Is there an implementation that can explain GNN models built with DGL?

Reproduce the results for Graph Classification

Dear author, may I know when you will release the complete code for explaining GNN for graph classification tasks?

We have some difficulties in reproducing the experimental results in your paper. In particular, we have the following questions

Questions:
1: For the dataset MUTAG, do you use any node features when training the GNN model for graph classification tasks? And did you use the batch norm layer in the GNN model? If yes, what is the batch size?

2: When you calculate the AUC, how do you determine the ground truth for explaining the GNN model on graph classification tasks? And when training the explainer, do you use the entire dataset, including both classes, or only the mutagen graphs?

MUTAG labels

Hi,
The 2210th graph has label zero (mutagen) but it contains neither NO2 nor NH2. Is there something I am missing?
thanks


AUC calculation

Why is the mean AUC score calculated in BA-shapes.ipynb about 99%? I also tried all nodes from the three classes in the house motif, and the AUC score is still around 99%. That is quite a bit better than the 0.963±0.011 score presented in the paper. Is the AUC calculated differently?

Disconnected motifs in BAMotifs

Hello,

I was playing around with train_BA-2motif.ipynb when I realized that some input graphs are not as expected. Specifically, by running this code after the cell where you read the dataset:

import networkx as nx
import matplotlib.pyplot as plt

# Draw every graph in the dataset whose adjacency matrix is disconnected.
for adj in adjs:
    g = nx.from_numpy_array(adj)
    if not nx.is_connected(g):
        nx.draw(g)
        plt.show()

I saw that there are about 40 graphs that are disconnected and in which the target motif has a self-loop.
This behavior is unexpected after reading the dataset description of the original paper.


About the readout operation in graph classification

Dear author,

I noticed that the global pooling operation in graph classification seems to be applied on the wrong axis. For example, reading out "out" with shape (graph_num, node_size, fea_num) along axis=-1 yields shape (graph_num, node_size), not the desired (graph_num, fea_num).
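The shape mismatch is easy to see with dummy data (dimensions below are hypothetical, chosen only to illustrate the axis choice):

```python
import numpy as np

# "out" has shape (graph_num, node_size, fea_num). Mean-pooling over the
# nodes of each graph must reduce axis=1; axis=-1 instead reduces the
# feature axis and produces the wrong shape.
out = np.random.rand(4, 10, 16)   # 4 graphs, 10 nodes each, 16 features

pooled_wrong = out.mean(axis=-1)  # shape (4, 10): averaged over features
pooled_right = out.mean(axis=1)   # shape (4, 16): averaged over nodes

print(pooled_wrong.shape, pooled_right.shape)
```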

PGexplainer is weak against graphs with mean degree greater than 1000

Hello. How are you? I'm investigating PGExplainer and I noticed that it is remarkably weak at explaining graphs whose nodes have more than 1000 neighbors. Examples are the bitcoin-alpha and bitcoin-otc datasets: with 3-hop subgraphs, PGExplainer does not reach 1% explanation accuracy, while with 1-hop subgraphs (average degree 17) it achieves ~83% accuracy. Why does this occur?

Another point is the explanation time, which is said to outperform GNNExplainer; however, for nodes with degree greater than 1000, PGExplainer is much slower than GNNExplainer. Why does this occur?

About the number of samples K in the paper.

Hi,

Really cool project. When I compared this implementation with the one described in the paper, I found that this code does not sample K times for each node's subgraph. Is there a reason for this, and will it lead to high variance for a given node?

Looking forward to your response!

Thanks!
