
gdc's People

Contributors

gasteigerjo, swbg


gdc's Issues

Is the idea similar to this paper?

Great paper!
Your idea seems similar to the paper Graph Wavelet Neural Network, which also modifies the eigenvalues of the Laplacian. Am I right?
Looking forward to your reply~

Could you please give some advice about more scalable algorithms for PPR and the heat kernel?

We want to use GDC on a large graph with more than 200,000 nodes, but directly calculating T^{-1} and expm(T) is very slow.
We also noticed your advice in the issue "How to use GDC in large graph":

For large graphs I'd recommend using more scalable algorithms for PPR and the heat kernel instead of directly calculating T^{-1} and expm(T), as we do in this repository.

However, we cannot find these more scalable algorithms for PPR and the heat kernel in this repository.
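For what it's worth, here is a minimal sketch of such alternatives (my own illustration, not code from this repository): one column of the PPR matrix S = alpha * (I - (1 - alpha) * T)^{-1} can be approximated by power iteration, and one column of the heat kernel S = exp(-t) * expm(t * T) by a truncated Taylor series, using only sparse matrix-vector products:

```python
import numpy as np
import scipy.sparse as sp

def ppr_column(T, seed, alpha=0.15, iters=100):
    """One column of S = alpha * (I - (1 - alpha) * T)^{-1},
    approximated by power iteration (only sparse matvecs)."""
    e = np.zeros(T.shape[0])
    e[seed] = 1.0
    x = e.copy()
    for _ in range(iters):
        x = alpha * e + (1.0 - alpha) * (T @ x)
    return x

def heat_column(T, seed, t=5.0, terms=30):
    """One column of S = exp(-t) * expm(t * T),
    approximated by a truncated Taylor series."""
    term = np.zeros(T.shape[0])
    term[seed] = 1.0
    out = term.copy()
    for k in range(1, terms + 1):
        term = (t / k) * (T @ term)
        out += term
    return np.exp(-t) * out

# Toy column-stochastic transition matrix T = A D^{-1}.
A = sp.csr_matrix(np.array([[0, 1, 1, 0],
                            [1, 0, 1, 0],
                            [1, 1, 0, 1],
                            [0, 0, 1, 0]], dtype=float))
deg = np.asarray(A.sum(axis=0)).ravel()
T = A @ sp.diags(1.0 / deg)

p = ppr_column(T, seed=0)
h = heat_column(T, seed=0)
```

Both routines touch only the nonzeros of T, so they scale to graphs with hundreds of thousands of nodes; push-based local PPR approximations (e.g. Andersen et al.) are cheaper still when only the large entries of each column are needed.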

How to use GDC in large graph

For large graphs that are difficult to handle in the AHX format, such as knowledge graphs, may I ask for your advice or suggestions from a practical point of view?

About the formulation

Thanks for writing such a great paper! I noticed it because of DCRNN (Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting). The author of DCRNN first proposed graph diffusion convolution (personalized PageRank). However, I found that his code is wrong while implementing DCRNN.

I'm curious about your code. Here you only show the T_rw of GDC(PPR). Is the T_rw of GDC(HK) equal to the T_rw of GDC(PPR) on an undirected graph? And why is the T_rw of GDC(PPR) alpha * (I_n - (1 - alpha) * \hat{A}) instead of D^{-1} * A? More importantly, what is the formulation of GDC(PPR)? Is it \sum^{K}_{k=0}{\theta_k * T_rw^{k} * x}?
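For context, the closed-form diffusion matrices under discussion can be written out directly (a dense NumPy sketch with toy values; note that PPR involves a matrix inverse, S = alpha * (I_n - (1 - alpha) * T)^{-1}, and that its series form \sum_k \theta_k T^k uses theta_k = alpha * (1 - alpha)^k):

```python
import numpy as np
from scipy.linalg import expm

# Toy undirected graph (triangle); transition matrix T = A D^{-1}.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
T = A @ np.diag(1.0 / A.sum(axis=0))

alpha, t = 0.15, 3.0

# PPR diffusion: S = alpha * (I - (1 - alpha) * T)^{-1}  (note the inverse).
S_ppr = alpha * np.linalg.inv(np.eye(3) - (1 - alpha) * T)

# Equivalent series form with theta_k = alpha * (1 - alpha)^k.
S_series = sum(alpha * (1 - alpha) ** k * np.linalg.matrix_power(T, k)
               for k in range(200))

# Heat kernel diffusion: S = exp(-t) * expm(t * T),
# i.e. the series with theta_k = exp(-t) * t^k / k!.
S_heat = expm(t * (T - np.eye(3)))
```

So GDC(PPR) and GDC(HK) share the series form \sum_k \theta_k T^k X but use different coefficients theta_k, and hence give different matrices even on undirected graphs.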

Could you please provide the code for applying diffusion with GAT?

Dear author,

Could you please provide the code for applying diffusion with GAT? I have tried applying diffusion to GAT, but I cannot get a performance improvement with this technique. Could you give me some guidance or tips for using diffusion with GAT?

Best

Some questions about the datasets

I have to say, this is an excellent model!
But I have some questions about the datasets:

  1. I tried to get the co-purchase graphs AMAZON COMPUTERS and AMAZON PHOTO, but I only found https://nijianmo.github.io/amazon/index.html and don't know how to obtain these two datasets.
  2. For the co-author graph COAUTHOR CS, which comes from the Microsoft Academic Graph dataset, I would like to know how to process it.

Thanks!

Memory-Intensive process

Hello,

When I try to use GDC on my data, it is very memory-intensive. Is there a way to make it use less memory?
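One memory-saving direction (a sketch of the general idea, not this repository's exact code): keep only the top-k entries of each column of the diffusion matrix and store the result sparsely, so the dense matrix never has to survive past the sparsification step:

```python
import numpy as np
import scipy.sparse as sp

def sparsify_topk(S, k):
    """Keep only the k largest entries of each column of a dense
    diffusion matrix S; the result is stored as a sparse matrix."""
    n = S.shape[0]
    rows, cols, vals = [], [], []
    for j in range(n):
        top = np.argpartition(S[:, j], -k)[-k:]  # indices of k largest
        rows.extend(top)
        cols.extend([j] * k)
        vals.extend(S[:, j][top])
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

rng = np.random.default_rng(0)
S = rng.random((6, 6))          # stand-in for a dense diffusion matrix
S_sparse = sparsify_topk(S, k=2)  # only 2 entries per column survive
```

In practice you would sparsify each column as soon as it is computed (e.g. from a per-node PPR approximation), so the full dense matrix never needs to exist in memory at once.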

A question about how the diffusion matrix is used in the graph convolution

Dear authors,

I have a question about how the sparsified diffusion matrix is used in the subsequent graph convolution.

To my understanding, the sparsified diffusion matrix is used as a weighted adjacency matrix when calling the function GCNConv, as indicated by the code below. Am I right?

gdc/data.py

Line 226 in 996bc47

edge_attr.append(ppr_matrix[i, j])

If so, then the graph convolution can be expressed as \mathbf{X}^{\prime} = \hat{\mathbf{D}}^{-1/2} \hat{\tilde{\mathbf{S}}} \hat{\mathbf{D}}^{-1/2} \mathbf{X} \mathbf{\Theta}, where \tilde{\mathbf{S}} is the sparsified diffusion matrix.

However, from Eq. (3) of your paper I understand that the diffusion matrix is equivalent to the graph filter, which should be applied directly by multiplying the node attributes, so the convolution should be \tilde{\mathbf{S}} \mathbf{X} \mathbf{\Theta}.

If my understanding is correct, why is the latter form of convolution not used in your code?

Best regards,
Simon
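For reference, the difference Simon points out can be seen numerically (a toy NumPy sketch; the matrix values are made up, \Theta is dropped for brevity, and GCN's self-loop handling is omitted). A GCN layer re-normalizes whatever weighted adjacency it is given, whereas applying the diffusion matrix directly uses it as the filter:

```python
import numpy as np

# Toy sparsified diffusion matrix (used as weighted adjacency) and features.
S = np.array([[0.5, 0.3, 0.0],
              [0.3, 0.4, 0.2],
              [0.0, 0.2, 0.6]])
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Variant 1: what a GCN layer computes with S as its edge weights,
# i.e. D^{-1/2} S D^{-1/2} X with degrees taken from S itself.
deg = S.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
out_gcn = D_inv_sqrt @ S @ D_inv_sqrt @ X

# Variant 2: the diffusion matrix applied directly as the graph filter.
out_direct = S @ X
```

The two outputs coincide only when the re-normalization is a no-op, i.e. when every row of the sparsified diffusion matrix already sums to one, which is generally not the case.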

The improvement on GAT

Hi, I see that the reported number for GAT on Pubmed is about 79.0, but the performance of GAT in the figure, corresponding to the "None" method, is about 76%. I am also curious why GAT performs worse than GCN on CORA/CITESEER/PUBMED.
