gnn-meta-attack's People

Contributors

danielzuegner

gnn-meta-attack's Issues

The dataset problem

Hi, Daniel:
The Polblogs dataset cannot be loaded successfully.
Could you please provide a valid dataset? Thanks so much! :)

About attack types

Hi, Daniel:

I read in some other papers that they defined Meta-attack as a grey-box attack. What do you think of this?

When using a surrogate model, we don't have any parameters of the target model. In that case, shouldn't it count as a black-box attack? Perhaps it is considered grey-box because of the use of the complete dataset, such as the training labels.

I'm confused about the definitions of grey-box and black-box in the field of graph adversarial learning.

I would appreciate it if you could share your thoughts with me.

Questions about the code

Hi, Daniel:
Once the graph has been perturbed, why do we need a new model, gcn_after_attack, to evaluate the performance of the attack, instead of using gcn_before_attack directly? If both models are used, which one is the targeted model mentioned in your paper?
Thanks for making your code available!
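Not an official answer, but the retraining step can be seen in miniature as a poisoning-evaluation protocol: the victim trains on whatever graph it is given, so the attack is scored by fitting a fresh model on the perturbed graph. The sketch below uses a toy one-hop label-propagation classifier as a stand-in for a GCN; all names are illustrative, not the repo's actual API.

```python
import numpy as np

def train_and_predict(adj, labels, idx_train):
    """Toy one-step label propagation: push the known training labels one hop
    over the row-normalized adjacency matrix (a stand-in for a real GCN)."""
    n, k = adj.shape[0], labels.max() + 1
    onehot = np.zeros((n, k))
    onehot[idx_train, labels[idx_train]] = 1.0
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    return ((adj / deg) @ onehot).argmax(axis=1)

labels = np.array([0, 0, 1, 1])
idx_train, idx_test = np.array([0, 3]), np.array([1, 2])
adj_clean = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,1],[0,0,1,0]], float)
adj_pert  = np.array([[0,0,0,1],[0,0,1,0],[0,1,0,0],[1,0,0,0]], float)  # flipped edges

# Poisoning setting: the model is (re)trained from scratch on the graph it
# receives, so the attack is judged by a fresh model on the perturbed graph
# (the gcn_after_attack role), not by reusing the clean-graph model.
preds_before = train_and_predict(adj_clean, labels, idx_train)  # gcn_before_attack
preds_after  = train_and_predict(adj_pert,  labels, idx_train)  # gcn_after_attack
print((preds_before[idx_test] == labels[idx_test]).mean())  # accuracy on clean graph
print((preds_after[idx_test]  == labels[idx_test]).mean())  # accuracy on poisoned graph
```

In this toy example the flipped edges disconnect the test nodes from the labeled ones, so accuracy drops from the clean graph to the poisoned one.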

Attack variants

Reading the paper and going through the demo, I'm not sure I understand the differences between the attack variants ("Meta-Train", "Meta-Self", "A-Meta-Train", "A-Meta-Self", "A-Meta-Both").

In particular, when using "Meta-Train", should labels_self_training be calculated? Can I run the demo just by changing the variant definition to variant = "Meta-Train" and leaving everything else as is, or should I change anything else?
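My understanding (worth verifying against the demo) is that the self-training labels only matter for the "Meta-Self" variants, where the attacker has no test labels and substitutes the surrogate's own predictions; "Meta-Train" takes its meta-loss on the labeled training nodes. A minimal sketch of how such self-training labels are typically built, with all names (surrogate_logits, idx_train) assumed rather than taken from the repo:

```python
import numpy as np

def self_training_labels(surrogate_logits, labels, idx_train):
    """Pseudo-label every node with the surrogate's prediction, then restore
    the known ground-truth labels on the training nodes."""
    labels_self = surrogate_logits.argmax(axis=1)  # predicted class per node
    labels_self[idx_train] = labels[idx_train]     # keep true train labels
    return labels_self

logits = np.array([[2.0, 0.1], [0.2, 1.5], [0.9, 0.3]])
labels = np.array([1, 0, 0])        # only the idx_train entries are "known"
idx_train = np.array([0])
print(self_training_labels(logits, labels, idx_train))  # -> [1 1 0]
```

Node 0 keeps its true label, while nodes 1 and 2 get the surrogate's argmax predictions.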

Confusion about the Loss function L_{atk}

Hello,

I am confused about the loss function in the paper (Zügner et al., ICLR 2019).

In the paper, you mention that $\mathcal{L}_{atk} = -\mathcal{L}_{train}$ or $\mathcal{L}_{atk} = -\mathcal{L}_{self}$, and the meta-gradient is then $\nabla_G^{meta} := \nabla_G\, \mathcal{L}_{atk}\bigl(f_{\theta^*}(G)\bigr)$.

But in the algorithm, the gradient equation is missing the negative sign.

[screenshot of the algorithm pseudocode]

I am also confused about the same thing in the code. For the "Meta-Train" variant, I think the gradient calculation is trying to minimize the classification error on the training examples, instead of trying to maximize it.

What am I missing?
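For what it's worth, one reading that reconciles the two (my interpretation, not a confirmed answer): minimizing $\mathcal{L}_{atk}$ and maximizing $\mathcal{L}_{train}$ are the same problem, so pseudocode written directly in terms of $\nabla_G \mathcal{L}_{train}$ can drop the explicit negative sign without changing what is optimized:

```latex
\mathcal{L}_{atk} = -\mathcal{L}_{train}
\;\Longrightarrow\;
\min_{\hat{G}} \mathcal{L}_{atk}\bigl(f_{\theta^*}(\hat{G})\bigr)
= \max_{\hat{G}} \mathcal{L}_{train}\bigl(f_{\theta^*}(\hat{G})\bigr),
\qquad
\nabla_G \mathcal{L}_{atk} = -\nabla_G \mathcal{L}_{train}.
```

Under this reading, picking the perturbation with the largest component of $\nabla_G \mathcal{L}_{train}$ is equivalent to picking the most negative component of $\nabla_G \mathcal{L}_{atk}$; whether the sign appears explicitly depends only on which of the two losses the algorithm is written against.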

Cannot reproduce the performance on citeseer and polblogs

Hi Daniel,

I ran your code on different datasets, but I just cannot reproduce the results in the paper.

The reported GCN misclassification rate on the citeseer dataset (clean) in your paper is 28.5 ± 0.9; however, in my implementation (I used pygcn), the misclassification rate is about 25%.

I ran your code to generate a 5% perturbed graph and used it as input, then got 73.7% classification accuracy. So on the citeseer dataset, with 5% perturbations, the performance of the GCN model only drops 1-2%, not 6% as in the paper.

I don't know what is wrong, and I am wondering if there are more attacker parameters I need to tune.

I would really appreciate it if you could help me with this.

Pick perturbation with lowest score

Hi Daniel,

I have a scenario in which I want to perturb the adjacency matrix, but in such a way that each perturbation chooses the edge with the least impact on L_atk while still increasing it (and thus still bringing the GCN's accuracy down). In other words, I want to greedily pick, one at a time, the perturbation e = (u, v) with the lowest score that still has a negative impact on overall accuracy.

In the code, is it enough to use the index of the smallest positive entry of adjacency_meta_grad instead of adj_meta_grad_argmax = tf.argmax(self.adjacency_meta_grad)?

Thanks in advance!
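One subtlety with swapping argmax for "smallest entry": a plain argmin could select a negative score, i.e., a perturbation that *decreases* L_atk. The non-positive entries need to be masked out first. A NumPy sketch of that selection (adj_meta_grad here is a made-up dense score matrix, not the repo's TensorFlow tensor):

```python
import numpy as np

def smallest_positive_index(scores):
    """Return the (row, col) of the smallest strictly positive score,
    or None if no score is positive."""
    flat = scores.ravel()
    masked = np.where(flat > 0, flat, np.inf)  # hide non-positive entries
    idx = int(masked.argmin())
    if not np.isfinite(masked[idx]):
        return None  # no remaining perturbation still increases L_atk
    return tuple(int(i) for i in np.unravel_index(idx, scores.shape))

adj_meta_grad = np.array([[-0.5, 0.2],
                          [ 0.05, 0.0]])
print(smallest_positive_index(adj_meta_grad))  # -> (1, 0): 0.05 is the smallest positive
```

The None case is also the natural stopping criterion for the greedy loop: once no entry is positive, every further flip would lower L_atk.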
