
graph-u-nets's Issues

Node classification on large graphs

Hi, I plan to use Graph U-Net for node classification. I have 58 graphs in total; each graph has a fixed adjacency matrix (10242 × 10242), each node has 10 features (10242, 10), and there are 30 classes. The pooling ratios I use are [0.9, 0.8, 0.7].

Processing such large graphs is very expensive: every epoch runs many GCN operations, and the spectral-domain computation is slow. Is there any way to handle this? It feels like this code is not well suited to large graphs.
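For anyone hitting the same wall: one common workaround is to store the fixed adjacency as a sparse tensor once, so each propagation costs on the order of the number of edges rather than N^2. A minimal sketch; all names and sizes are illustrative, not this repo's code:

import torch

n, feat_dim, hidden = 10242, 10, 64
adj_dense = torch.eye(n)                  # stand-in for the real adjacency
adj_sparse = adj_dense.to_sparse()        # build once, reuse every epoch

x = torch.randn(n, feat_dim)              # node features (10242, 10)
w = torch.nn.Linear(feat_dim, hidden, bias=False)
h = torch.sparse.mm(adj_sparse, w(x))     # cost scales with nnz, not n^2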

Batch training question

Hello, I have a batch of node feature matrices of size [B, N, C], where B is the batch size, N the number of nodes, and C the feature dimension, plus a batch of adjacency matrices [B, N, N]. Because my adjacency matrices are very dense, I run out of GPU memory with the PyG reimplementation of GraphUNet, since it stores node connectivity as an edge list.
Now I would like to work with the adjacency-matrix form directly. How can I do batched training in that case?
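One way to batch over dense adjacency matrices directly is a dense GCN layer built on torch.bmm. A hedged sketch; the class name, shapes, and the normalization step are illustrative, not code from this repo:

import torch
import torch.nn as nn

class DenseBatchGCN(nn.Module):
    """One GCN layer over a batch of dense graphs: X is (B, N, C), A is (B, N, N)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # symmetric normalization of A + I, done per graph in the batch
        eye = torch.eye(adj.size(1), device=adj.device).unsqueeze(0)
        a_hat = adj + eye
        deg_inv_sqrt = a_hat.sum(-1).clamp(min=1e-12).pow(-0.5)
        a_norm = deg_inv_sqrt.unsqueeze(-1) * a_hat * deg_inv_sqrt.unsqueeze(-2)
        return torch.bmm(a_norm, self.proj(x))   # (B, N, out_dim)

x = torch.randn(4, 32, 16)                        # B=4 graphs, N=32 nodes, C=16
adj = torch.rand(4, 32, 32).round()
adj = ((adj + adj.transpose(1, 2)) > 0).float()   # symmetrize the toy adjacency
out = DenseBatchGCN(16, 64)(x, adj)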

Model selection?

Hi,

I have a question about selecting models.
In DGCNN (muhanzhang/pytorch_DGCNN#19), the author uses the test-set accuracy at the last iteration.
In this paper, the best test-set accuracy across epochs is used.

Which one should be used in your opinion? In addition, how did you handle DGCNN in Table 4 of this paper? Thanks.

Have you provided a TensorFlow version?

Hi, thanks for providing your code.
But where can I find a TensorFlow version of it?

P.S.: In the down-sampling step, scores = self.sigmoid(scores/100).
Where does the 100 come from? Does that mean the projection vector has ||p|| = 100 by default? Why?
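For context, the paper's scoring step is y = Xp/||p||, so the fixed 100 looks like a stand-in for the projection vector's norm. A toy sketch contrasting the two forms; all names here are illustrative:

import torch

x = torch.randn(50, 64)                        # N nodes, 64-dim features
p = torch.randn(64, requires_grad=True)        # learnable projection vector

scores_paper = torch.sigmoid(x @ p / p.norm())   # the paper's formula
scores_quoted = torch.sigmoid(x @ p / 100.0)     # the quoted code's variant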

Dropout, Normalization

Hi,
in the paper, you mention dropout values on both the node features and the adjacency matrix. Could you please point me to that part of the code? I have trouble finding it.
I also wanted to use the PyTorch Geometric version of GraphUNet instead of the one in ops. Do I then still have to normalize the adjacency matrix beforehand (A = ops.normalize_adj(n2n_sp))? I think this might be covered within GCNConv in GraphUNet, but I am not sure.
Thank you already!
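For reference, a rough sketch of the usual symmetric normalization D^-1/2 (A + I) D^-1/2 that normalize_adj-style helpers compute, written densely in numpy for clarity (the repo works on sparse matrices). Note that PyG's GCNConv applies this normalization internally by default (normalize=True, with self-loops added), so it should not be applied twice:

import numpy as np

def normalize_adj_dense(a):
    a_hat = a + np.eye(a.shape[0])          # add self-loops
    rowsum = a_hat.sum(axis=1)
    d_inv_sqrt = np.zeros_like(rowsum)
    nz = rowsum > 0                         # defensive guard against the
    d_inv_sqrt[nz] = rowsum[nz] ** -0.5     # divide-by-zero warning seen elsewhere
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]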

Hyper-parameters for node classification tasks?

Hi, Hongyang, thanks for sharing the code.
Since the code for transductive tasks is not available, could you please share the hyper-parameters, such as the number of hidden layers in the Graph U-Net and the optimizer settings, used in node classification tasks like Cora? It would help a lot with re-implementation. Thanks!

The performance of graph classification on the D&D dataset.

I have run "run_GUNet.sh DD 0" many times, and the graph classification accuracy only reaches 81.2% (the result reported in the paper is 82.43%). Is there anything wrong with the hyper-parameter settings, or something else? The results on the other two datasets look normal. Thank you.

Nice job

But I think you could run more node classification experiments. A fixed node split alone is not enough; random node splits might give more convincing results.
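A minimal sketch of such a random-split protocol: draw a fresh permutation of node indices per run instead of using the fixed Planetoid split. The node count and ratios below are placeholders:

import torch

num_nodes = 2708                          # e.g. Cora
perm = torch.randperm(num_nodes)
train_idx = perm[: int(0.6 * num_nodes)]
val_idx   = perm[int(0.6 * num_nodes): int(0.8 * num_nodes)]
test_idx  = perm[int(0.8 * num_nodes):]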

cat: results/DD.txt: No such file or directory

degrees = torch.sum(g, 1)
RuntimeError: CUDA error: no kernel image is available for execution on the device
End of cross-validation using 14 seconds
The accuracy results for DD are as follows:
cat: results/DD.txt: No such file or directory
Mean and sstdev are:
./run_GNN.sh: line 43: datamash: command not found
cat: results/DD.txt: No such file or directory
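Two separate problems show up in this log. The "datamash: command not found" line means GNU datamash, which the script uses to compute the mean and sstdev, is simply not installed on the machine. The CUDA error usually means the installed PyTorch build was not compiled for this GPU's compute capability; a quick hedged diagnostic:

import torch

print(torch.__version__, torch.version.cuda)
print(torch.cuda.get_device_capability(0))  # the GPU's compute capability
print(torch.cuda.get_arch_list())           # architectures this build supports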

How to run the code?

Hello, you said "./run_GUNet.sh DATA FOLD", but there is no file named "run_GUNet.sh". Did you mean "run_DGCNN.sh"? Thank you for your reply.

Questions about the graph classification results

When I train the network, I find that its test accuracy is very high right from the beginning.

Namespace(batch_size=20, data='PROTEINS', dropout=True, extract_features=False, feat_dim=0, fold=1, hidden=128, latent_dim=[32, 32, 32, 1], learning_rate=0.001, max_lv=4, mode='gpu', num_class=0, num_epochs=100, out_dim=0, printAUC=False, seed=1, sortpooling_k=0.6, test_number=0)
loading data
# classes: 2
# maximum node tag: 3
# train: 1001, # test: 112
k used in SortPooling is: 32
Initializing GUNet
loss: 0.50198 acc: 0.75000:  24%| 12/50 [00:00<00:01, 33.74batch/s]
/workspace/zwq/GraphPool/gunet/ops.py:13: RuntimeWarning: divide by zero encountered in power
  d_inv_sqrt = np.power(rowsum, -0.5).flatten()
loss: 0.36440 acc: 0.85000: 100%| 50/50 [00:01<00:00, 35.79batch/s]
average training of epoch 0: loss 0.57412 acc 0.69500 auc 0.00000
loss: 0.42157 acc: 0.83333: 100%| 6/6 [00:00<00:00, 52.19batch/s]
average test of epoch 0: loss 0.53319 acc 0.73214 auc 0.00000
loss: 0.61608 acc: 0.75000: 100%| 50/50 [00:01<00:00, 36.62batch/s]
average training of epoch 1: loss 0.54999 acc 0.73100 auc 0.00000
loss: 0.35901 acc: 0.91667: 100%| 6/6 [00:00<00:00, 52.55batch/s]
average test of epoch 1: loss 0.53982 acc 0.74107 auc 0.00000
loss: 0.46130 acc: 0.75000: 100%| 50/50 [00:01<00:00, 37.91batch/s]

Can anyone explain this to me?

Adjacency Matrix Augmentation

Hi, I cannot seem to find where you augment the adjacency matrix. Could you please point me to the line where you do it? I wanted to check whether you do A^2 or (A+I)^2.
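For comparison, a toy sketch of the two variants the question distinguishes; for what it's worth, the PyG-style augment_adj quoted in a later issue on this page adds self-loops, squares the matrix, then removes self-loops, which amounts to the (A+I)^2 variant. Which one this repo's ops.py uses is best checked in the source:

import torch

n = 8
a = (torch.rand(n, n) > 0.7).float()
a = ((a + a.t()) > 0).float()                  # toy symmetric adjacency
eye = torch.eye(n)

a_pow2 = (a @ a > 0).float()                   # A^2 connectivity
a_aug = (((a + eye) @ (a + eye)) > 0).float() - eye   # (A+I)^2, diagonal removed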

How is the projection vector p trainable?

Hi, in your "ops.py", lines 89–107 define the graph pooling. But it seems that you have missed the gate step, i.e. the scores should be multiplied with X, yet I do not see this step in your code. Could you please explain how the projection vector is trainable? Thank you!
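A minimal sketch of the gate in question: multiplying the selected features by their sigmoid scores is what puts the projection vector p on the gradient path (in this repo that multiplication appears to live in the top_k_graph helper in ops.py rather than in the Pool module itself). All names below are illustrative:

import torch

x = torch.randn(50, 64)
p = torch.randn(64, requires_grad=True)

scores = torch.sigmoid(x @ p / p.norm())
values, idx = scores.topk(25)                # keep the top-k nodes
x_pooled = x[idx] * values.unsqueeze(-1)     # gate: gradients flow back to p

x_pooled.sum().backward()
print(p.grad is not None)                    # True: p receives gradients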

How to do node classification?

Hello, I only see your code for graph classification, not for node classification. Also, your data folder does not contain datasets like Citeseer or Cora. What is going on? Thank you for your reply.

The order of unpooling and skip-connection seems wrong

Hi Hongyang,
Based on your paper, I think the following code in ops.py has an ordering problem:

    def forward(self, g, h):
        adj_ms = []
        indices_list = []
        down_outs = []
        hs = []
        org_h = h
        for i in range(self.l_n):
            h = self.down_gcns[i](g, h)
            adj_ms.append(g)
            down_outs.append(h)
            g, h, idx = self.pools[i](g, h)
            indices_list.append(idx)

        h = self.bottom_gcn(g, h)

        for i in range(self.l_n):
            up_idx = self.l_n - i - 1
            g, idx = adj_ms[up_idx], indices_list[up_idx]
            g, h = self.unpools[i](g, h, down_outs[up_idx], idx)
            h = self.up_gcns[i](g, h)
            h = h.add(down_outs[up_idx])
            hs.append(h)
        h = h.add(org_h)
        hs.append(h)
        return hs

In my understanding, the correct order should be:


        for i in range(self.l_n):
            up_idx = self.l_n - i - 1
            g, idx = adj_ms[up_idx], indices_list[up_idx]

            g, h = self.unpools[up_idx](g, h, down_outs[up_idx], idx)

            h = h.add(down_outs[up_idx])
            hs.append(h)

            h = self.up_gcns[up_idx](g, h)

        h = h.add(org_h)
        hs.append(h)
        return hs

Please let me know whether I'm correct. Thank you again for sharing the well-structured code!

Variational Auto-Encoder?

I am working on a Graph Variational Auto-Encoder to reproduce node features. I was wondering whether a modification of Graph U-Nets could do so. Could I get your opinion on the following code? It's quite similar; the only differences are that I split the model into encoder/decoder functions, added two outputs to the encoder (mu and sigma), and used the KL divergence in the loss function.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, TopKPooling
from torch_geometric.utils import add_self_loops, remove_self_loops, sort_edge_index
from torch_geometric.utils.repeat import repeat
from torch_sparse import spspmm


class VarGraphUNet(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels, out_channels, depth,
                 pool_ratios, sum_res=True, act=torch.tanh):  # torch.tanh: F.tanh is deprecated
        super(VarGraphUNet, self).__init__()
        assert depth >= 1
        self.in_channels = in_channels
        self.hidden_channels = hidden_channels
        self.out_channels = out_channels
        self.depth = depth
        self.pool_ratios = repeat(pool_ratios, depth)
        self.act = act
        self.sum_res = sum_res

        channels = hidden_channels

        self.down_convs = torch.nn.ModuleList()
        self.pools = torch.nn.ModuleList()
        self.down_convs.append(GCNConv(in_channels, channels, improved=True))
        for i in range(depth):
            self.pools.append(TopKPooling(channels, self.pool_ratios[i]))
            if i < depth - 1:
                # double the width on every level except the deepest
                self.down_convs.append(GCNConv(channels, channels * 2, improved=True))
                channels = channels * 2
            else:
                self.down_convs.append(GCNConv(channels, channels, improved=True))
              
        in_channels = channels if sum_res else 2 * channels  # note: unused below; the up_convs assume sum_res=True

        self.up_convs = torch.nn.ModuleList()
        for i in range(depth - 1):
            self.up_convs.append(GCNConv(channels, int(channels/2), improved=True))
            channels=int(channels/2)
        self.up_convs.append(GCNConv(channels, out_channels, improved=True))

        #self.reset_parameters()

        self.muLay = torch.nn.Linear(channels, channels * 2)
        self.sigLay = torch.nn.Linear(channels, channels * 2)  # fixed: closing parenthesis was missing

        self.dec = torch.nn.Linear(channels * 2, channels)
        self.drop = torch.nn.Dropout(p=0.3)
        
    def encode(self,x,edge_index,batch): 

        if batch is None:
            batch = edge_index.new_zeros(x.size(0))
        edge_weight = x.new_ones(edge_index.size(1))

        x = self.down_convs[0](x, edge_index, edge_weight)
        x = self.act(x)

        xs = [x]
        edge_indices = [edge_index]
        edge_weights = [edge_weight]
        perms = []

        for i in range(1, self.depth + 1): 
            edge_index, edge_weight = self.augment_adj(edge_index, edge_weight,x.size(0))
            
            x, edge_index, edge_weight, batch, perm, _ = self.pools[i - 1](
                x, edge_index, edge_weight, batch)

            x=self.drop(x)
            x = self.down_convs[i](x, edge_index, edge_weight)
            x = self.act(x)
            

            if i < self.depth:
                xs += [x]
                edge_indices += [edge_index]
                edge_weights += [edge_weight]
            perms += [perm]
            
        mu2=self.muLay(x)
        sig2=self.sigLay(x)
        
        return mu2,sig2,xs,edge_indices,edge_weights,perms
        
        
    def reparametrize(self, mu, logvar):
        # treating sigLay's output as log-variance (the usual VAE convention),
        # sigma = exp(0.5 * logvar); the original multiplied by exp(logvar)
        if self.training:
            return mu + torch.randn_like(logvar) * torch.exp(0.5 * logvar)
        else:
            return mu

        
    def decode(self,z,xs,edge_indices,edge_weights,perms):
        
        z=self.dec(z)
        for i in range(self.depth):
            j = self.depth - 1 - i

            res = xs[j]
          
            edge_index = edge_indices[j]
            edge_weight = edge_weights[j]
            perm = perms[j]


            up = torch.zeros_like(res)
            up[perm] = z
            z = res + up if self.sum_res else torch.cat((res, up), dim=-1)
            #print(z.shape)
            z = self.up_convs[i](z, edge_index, edge_weight)
            z = self.act(z) if i < self.depth - 1 else z
        return z


    def augment_adj(self, edge_index, edge_weight, num_nodes):
        
        edge_index, edge_weight = add_self_loops(edge_index, edge_weight,
                                                 num_nodes=num_nodes)
        
        edge_index, edge_weight = sort_edge_index(edge_index, edge_weight,num_nodes)
        
        edge_index, edge_weight = spspmm(edge_index, edge_weight, edge_index,
                                         edge_weight, num_nodes, num_nodes,num_nodes)
        
        edge_index, edge_weight = remove_self_loops(edge_index, edge_weight)
        return edge_index, edge_weight      
    
    

    def forward(self, x, edge_index, batch):
        mu2, sig2, xs, edge_indices, edge_weights, perms = self.encode(x, edge_index, batch)
        z = self.reparametrize(mu2, sig2)   # z = mu + eps * sigma
        z2 = self.decode(z, xs, edge_indices, edge_weights, perms)
        return z2, mu2, sig2
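A hedged usage sketch for the class above, with toy shapes and the standard closed-form KL term for a Gaussian posterior with log-variance output; it assumes torch_sparse is installed (augment_adj needs spspmm), and none of it comes from the original post:

import torch
import torch.nn.functional as F

model = VarGraphUNet(in_channels=16, hidden_channels=32, out_channels=16,
                     depth=3, pool_ratios=[0.5, 0.5, 0.5])
x = torch.randn(100, 16)                      # 100 nodes, 16 features
edge_index = torch.randint(0, 100, (2, 400))  # random toy edges

recon, mu, logvar = model(x, edge_index, None)
recon_loss = F.mse_loss(recon, x)             # feature reconstruction term
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = recon_loss + kl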
