
Comments (12)

Jianbo-Lab commented on September 26, 2024

I suspect there might be a flaw in your code. I don't think HSJA is that time-consuming: from memory, it takes several seconds to a few tens of seconds for a single MNIST image.


GuanlinLee commented on September 26, 2024

I think so. I am using a tool named adversarial-robustness-toolbox, which you can find at https://github.com/IBM/adversarial-robustness-toolbox. With that repo, I use art.attacks.HopSkipJump to attack a ResNet based on PyTorch 1.0. I have compared your code with theirs, but I really can't point to the incorrect part. So maybe the weak performance relates to the difference between TensorFlow and PyTorch?
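For context, my attack setup looks roughly like the sketch below. I am paraphrasing from memory, so treat the PyTorchClassifier wrapper and the HopSkipJump arguments as approximate rather than my exact script:

    import torch.nn as nn
    from art.classifiers import PyTorchClassifier
    from art.attacks import HopSkipJump

    # net: the trained PyTorch ResNet; x_test: a numpy batch scaled to [0, 1].
    # Shapes here are for MNIST; adjust for your data.
    classifier = PyTorchClassifier(model=net,
                                   loss=nn.CrossEntropyLoss(),
                                   input_shape=(1, 28, 28),
                                   nb_classes=10,
                                   clip_values=(0.0, 1.0))

    attack = HopSkipJump(classifier=classifier, targeted=False, norm=2)
    x_adv = attack.generate(x=x_test)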


Jianbo-Lab commented on September 26, 2024

The implementation there was provided by a third party. At first glance, there is an inappropriate choice of batch size in that source code. I am not sure whether there are additional bugs, and will take a detailed look later. I implemented the versions in Foolbox and CleverHans, together with the one here. I think all three are easy to use with PyTorch 1.0 (as long as your model has a predict method that outputs a batch of labels given a batch of inputs).


GuanlinLee commented on September 26, 2024

I tried Foolbox before. To be honest, it was as slow as ART. Thanks for your attention; I will keep trying to find the reason. Maybe there are some errors in my attack script. BTW, do you mean my batch-size setting is too big?


Jianbo-Lab commented on September 26, 2024

Oh, I was saying that the batch size in https://github.com/IBM/adversarial-robustness-toolbox, which was set to 1, is too small.

Then there might be some bugs in your code? Please try running the code provided in this repo first. It is self-contained and has very few dependencies. (Actually, the core file hsja.py depends only on numpy, although the example model was implemented in TF.)
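Calling it is roughly this simple (check hsja.py for the exact argument names, since I am writing the defaults from memory):

    from hsja import hsja  # the core attack; depends only on numpy

    # model must be an object with a predict method that maps a batch of
    # inputs to a batch of output scores (see build_model.py).
    perturbed = hsja(model, sample,  # sample: a single image as a numpy array
                     clip_max=1.0, clip_min=0.0,
                     constraint='l2',
                     num_iterations=40,
                     gamma=1.0,
                     init_num_evals=100)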


GuanlinLee commented on September 26, 2024

I ran your code and it is very fast on every image from CIFAR-10. I also tested my model as well as the original ResNet-101 using Foolbox again. Here are the results for the two models.
ResNet-101:
    Step 0: 4.60625e-02
    Step 1: 3.27978e-02 (took 7.23572 seconds)
    Step 2: 2.81322e-02 (took 7.68240 seconds)
    Step 3: 2.37869e-02 (took 8.22067 seconds)
    Step 4: 1.97171e-02 (took 8.58134 seconds)
    Step 5: 1.77921e-02 (took 9.11594 seconds)
    Step 6: 1.62260e-02 (took 9.51927 seconds)
    Step 7: 1.44082e-02 (took 9.94972 seconds)
    Step 8: 1.23227e-02 (took 10.23287 seconds)
    Step 9: 1.10259e-02 (took 10.47782 seconds)
    Step 10: 1.01493e-02 (took 10.69057 seconds)
    Step 11: 9.16724e-03 (took 11.07651 seconds)
    Step 12: 8.36470e-03 (took 11.00972 seconds)
    Step 13: 7.64027e-03 (took 11.64099 seconds)
    Step 14: 6.99664e-03 (took 11.85349 seconds)
    Step 15: 6.45731e-03 (took 12.10667 seconds)
    Step 16: 6.00492e-03 (took 12.30260 seconds)
    Step 17: 5.69557e-03 (took 12.49289 seconds)
    Step 18: 5.49350e-03 (took 12.58823 seconds)
    Step 19: 5.27589e-03 (took 12.94148 seconds)
    Step 20: 5.00508e-03 (took 13.37048 seconds)
    Step 21: 4.70508e-03 (took 13.45919 seconds)
    Step 22: 4.35110e-03 (took 13.47490 seconds)
    Step 23: 4.11123e-03 (took 13.78721 seconds)
    Step 24: 3.98573e-03 (took 13.61541 seconds)
    Step 25: 3.80687e-03 (took 14.34634 seconds)
    Step 26: 3.68341e-03 (took 14.14387 seconds)
    Step 27: 3.51115e-03 (took 14.36170 seconds)
    Step 28: 3.38134e-03 (took 14.49981 seconds)
    Step 29: 3.27868e-03 (took 14.61176 seconds)
    Step 30: 3.16688e-03 (took 16.00583 seconds)
    Step 31: 3.05201e-03 (took 15.04396 seconds)
    Step 32: 2.98084e-03 (took 15.30211 seconds)
    Step 33: 2.89775e-03 (took 15.33851 seconds)
    Step 34: 2.82606e-03 (took 15.18392 seconds)
    Step 35: 2.76933e-03 (took 15.76020 seconds)
    Step 36: 2.71421e-03 (took 15.90119 seconds)
    Step 37: 2.65511e-03 (took 15.87403 seconds)
    Step 38: 2.58973e-03 (took 16.12542 seconds)
    Step 39: 2.53971e-03 (took 15.96519 seconds)
    Step 40: 2.49743e-03 (took 16.33340 seconds)
    Step 41: 2.45775e-03 (took 16.74176 seconds)
    Step 42: 2.41903e-03 (took 16.80585 seconds)
    Step 43: 2.35479e-03 (took 16.50304 seconds)
    Step 44: 2.29724e-03 (took 16.91512 seconds)
    Step 45: 2.25967e-03 (took 17.36262 seconds)
    Step 46: 2.20794e-03 (took 17.35765 seconds)
    Step 47: 2.16841e-03 (took 17.34491 seconds)
    Step 48: 2.12551e-03 (took 17.39347 seconds)
    Step 49: 2.09483e-03 (took 17.53311 seconds)
    Step 50: 2.05026e-03 (took 17.68684 seconds)
    Step 51: 2.02154e-03 (took 18.02103 seconds)
    Step 52: 1.98687e-03 (took 18.01978 seconds)
    Step 53: 1.96260e-03 (took 18.09659 seconds)
    Step 54: 1.93356e-03 (took 18.48632 seconds)
    Step 55: 1.89160e-03 (took 18.22741 seconds)
    Step 56: 1.86470e-03 (took 18.07180 seconds)
    Step 57: 1.84219e-03 (took 18.54211 seconds)
    Step 58: 1.82473e-03 (took 18.98079 seconds)
    Step 59: 1.79836e-03 (took 18.79716 seconds)
    Step 60: 1.77454e-03 (took 19.03198 seconds)
    Step 61: 1.75561e-03 (took 19.04325 seconds)
    Step 62: 1.74272e-03 (took 19.05453 seconds)
    Step 63: 1.72044e-03 (took 19.40375 seconds)
    Step 64: 1.69175e-03 (took 19.07461 seconds)

My Model (based on ResNet-101):
    Step 0: 5.22884e-04
    Step 1: 4.89930e-04 (took 12.84154 seconds)
    Step 2: 4.58997e-04 (took 14.27321 seconds)
    Step 3: 4.34719e-04 (took 15.54770 seconds)
    Step 4: 4.07186e-04 (took 16.61848 seconds)
    Step 5: 3.87346e-04 (took 17.36631 seconds)
    Step 6: 3.66333e-04 (took 18.13523 seconds)
    Step 7: 3.50971e-04 (took 18.84724 seconds)
    Step 8: 3.35720e-04 (took 19.39295 seconds)
    Step 9: 3.20264e-04 (took 19.97610 seconds)
    Step 10: 3.05387e-04 (took 20.80850 seconds)
    Step 11: 2.90005e-04 (took 21.12963 seconds)
    Step 12: 2.75036e-04 (took 21.59904 seconds)
    Step 13: 2.61739e-04 (took 22.47242 seconds)
    Step 14: 2.45819e-04 (took 22.53846 seconds)
    Step 15: 2.37440e-04 (took 23.13363 seconds)
    Step 16: 2.28764e-04 (took 23.72318 seconds)
    Step 17: 2.20461e-04 (took 24.15011 seconds)
    Step 18: 2.14140e-04 (took 24.74466 seconds)
    Step 19: 2.07107e-04 (took 25.45676 seconds)
    Step 20: 2.00764e-04 (took 26.88779 seconds)
    Step 21: 1.93757e-04 (took 25.52310 seconds)
    Step 22: 1.86527e-04 (took 25.79839 seconds)
    Step 23: 1.79363e-04 (took 26.51799 seconds)
    Step 24: 1.75201e-04 (took 26.90847 seconds)
    Step 25: 1.69640e-04 (took 27.03254 seconds)
    Step 26: 1.64856e-04 (took 28.04224 seconds)
    Step 27: 1.60520e-04 (took 28.56275 seconds)
    Step 28: 1.55339e-04 (took 27.96912 seconds)
    Step 29: 1.51347e-04 (took 28.04156 seconds)
    Step 30: 1.47319e-04 (took 28.52833 seconds)
    Step 31: 1.44069e-04 (took 28.69638 seconds)
    Step 32: 1.40574e-04 (took 28.97581 seconds)
    Step 33: 1.37351e-04 (took 30.18915 seconds)
    Step 34: 1.34335e-04 (took 29.80015 seconds)
    Step 35: 1.31575e-04 (took 29.91462 seconds)
    Step 36: 1.29398e-04 (took 29.87797 seconds)
    Step 37: 1.27134e-04 (took 30.05621 seconds)
    Step 38: 1.24397e-04 (took 30.49374 seconds)
    Step 39: 1.22667e-04 (took 30.95763 seconds)
    Step 40: 1.20794e-04 (took 31.05900 seconds)
    Step 41: 1.18909e-04 (took 30.66405 seconds)
    Step 42: 1.17300e-04 (took 31.70481 seconds)
    Step 43: 1.15721e-04 (took 31.77129 seconds)
    Step 44: 1.14184e-04 (took 31.58160 seconds)
    Step 45: 1.12421e-04 (took 31.99961 seconds)
    Step 46: 1.10901e-04 (took 32.19371 seconds)
    Step 47: 1.09789e-04 (took 32.08413 seconds)
    Step 48: 1.08379e-04 (took 32.94917 seconds)
    Step 49: 1.06990e-04 (took 33.81155 seconds)
    Step 50: 1.05990e-04 (took 33.09325 seconds)
    Step 51: 1.04828e-04 (took 32.57028 seconds)
    Step 52: 1.03606e-04 (took 33.31491 seconds)
    Step 53: 1.02711e-04 (took 33.75568 seconds)
    Step 54: 1.01614e-04 (took 33.97048 seconds)
    Step 55: 1.00642e-04 (took 34.38798 seconds)
    Step 56: 9.94918e-05 (took 34.75719 seconds)
    Step 57: 9.86137e-05 (took 34.44992 seconds)
    Step 58: 9.79166e-05 (took 34.69828 seconds)
    Step 59: 9.68469e-05 (took 35.15510 seconds)
    Step 60: 9.62884e-05 (took 34.69310 seconds)
    Step 61: 9.53712e-05 (took 35.57666 seconds)
    Step 62: 9.46131e-05 (took 35.99384 seconds)
    Step 63: 9.37303e-05 (took 36.35662 seconds)
    Step 64: 9.31471e-05 (took 38.11791 seconds)

So, from these experiments, the main factor affecting the speed is the depth of the ResNet. And I am not sure whether such a big gap between the speed of a shallow net and a deep one is reasonable.


Jianbo-Lab commented on September 26, 2024

I think 36 seconds per image on ResNet-101 is reasonable. So did you fix the problem that "the algorithm spends more than 12 hours on 200 MNIST images"?


GuanlinLee commented on September 26, 2024

I don't think so. Those times are per step, so for ResNet-101 the 65 steps add up to about 960 seconds per image. And I think there is a bug in Foolbox 2.0 such that I cannot feed a batch of images into the network. So the algorithm would still need to spend about 53 hours on 200 MNIST images (200 × 960 s ≈ 192,000 s ≈ 53 hours).


Jianbo-Lab commented on September 26, 2024

Okay, so you are running ResNet-101 on MNIST and hitting this problem. Can you check the part of your code that wraps the model into the Foolbox format? In particular, what is the default batch size in your wrapper?

I don't think Foolbox has a problem with batch computation. Check line 53 of the code at https://github.com/bethgelab/foolbox/blob/master/foolbox/attacks/bapp.py, and see how it is used at line 171. It assumes the input model takes an arbitrary batch size, and splits the input data into batches according to HSJA's batch_size parameter. One possibility is that 256 is too large for your GPU to process, and thus it takes longer than usual.

Another way to debug is to run the Boundary Attack in Foolbox. If it is also slow, then there is no bug in the Foolbox implementation; the inefficiency might be inherent (due to the size of your model).
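For example, something along these lines (the Foolbox 2.0 attack names and call signatures here are from memory, so double-check them against the docs):

    import foolbox

    fmodel = foolbox.models.PyTorchModel(net, bounds=(0, 1), num_classes=10)

    # image: a single numpy array (channels, height, width) in [0, 1];
    # label: its true class as an int.

    # Baseline: time the plain Boundary Attack on the same image.
    attack = foolbox.attacks.BoundaryAttack(fmodel)
    adv = attack(image, label)

    # HSJA (named BoundaryAttackPlusPlus in Foolbox 2.0), with a smaller
    # batch size in case 256 is too large for your GPU.
    attack = foolbox.attacks.BoundaryAttackPlusPlus(fmodel)
    adv = attack(image, label, batch_size=64)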


GuanlinLee commented on September 26, 2024

I think there is a problem in https://github.com/bethgelab/foolbox/blob/master/foolbox/models/pytorch.py, lines 66 to 83:

    def forward(self, inputs):
        import torch
        inputs, _ = self._process_input(inputs)
        n = len(inputs)
        inputs = torch.from_numpy(inputs).to(self.device)
        predictions = self._model(inputs)
        # TODO: add no_grad once we have a solution
        # for models that require grads internally
        # for inference
        # with torch.no_grad():
        #     predictions = self._model(inputs)
        predictions = predictions.detach().cpu().numpy()
        assert predictions.ndim == 2
        assert predictions.shape == (n, self.num_classes())
        return predictions

When I input a numpy array of shape (batchsize, channels, w, h), it takes the batch size as len(inputs). But I find that the base model class already reshapes the input to (1, batchsize, channels, w, h), since it uses

    def forward_one(self, x):
        return np.squeeze(self.forward(x[np.newaxis]), axis=0)

and

    @abstractmethod
    def forward(self, inputs):
        raise NotImplementedError

So I cannot use a batch size of more than 1. The wrapper I use is fmodel = foolbox.models.PyTorchModel(net, bounds=(0, 1), num_classes=10), and I checked the most recent docs and did not find that I need to set the batch size in the wrapper.
BTW, BoundaryAttack is much slower than BoundaryAttackPlusPlus, which seems normal.
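To illustrate the shapes I mean (the array sizes are hypothetical; forward and forward_one are the methods quoted above):

    import numpy as np
    import foolbox

    fmodel = foolbox.models.PyTorchModel(net, bounds=(0, 1), num_classes=10)

    # forward_one adds the batch axis itself, so it only ever pushes
    # batches of size 1 through the network:
    x = np.random.rand(3, 32, 32).astype(np.float32)  # one CIFAR-10-sized image
    p = fmodel.forward_one(x)                         # internally (1, 3, 32, 32) -> (10,)

    # forward takes a whole batch; larger batches would have to go through here:
    batch = np.random.rand(64, 3, 32, 32).astype(np.float32)
    probs = fmodel.forward(batch)                     # expected shape (64, 10)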


Jianbo-Lab commented on September 26, 2024

Got it. I would suggest the following.

  1. Open an issue on Foolbox about this, as I cannot help with problems in Foolbox itself.
  2. For now, try my code here. In particular, it is VERY easy to use hsja.py with your model: simply define a new class with two methods, __init__ and predict. __init__ defines your model, and predict takes a batch of inputs and outputs a batch of probabilities. (See build_model.py for details, and the sketch after this list.)
  3. If you are still not satisfied with the efficiency, try adjusting two hyperparameters:
    a. Decrease init_num_evals to 10.
    b. Increase gamma to 10 or 100.
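As a sketch of point 2 (the class name and the internals here are an illustration, not the exact code in build_model.py):

    import numpy as np
    import torch

    class PyTorchWrapper:
        def __init__(self, net, device='cuda'):
            # Store your trained PyTorch model in eval mode.
            self.net = net.to(device).eval()
            self.device = device

        def predict(self, x):
            # x: numpy array of shape (batch, channels, h, w) in [0, 1].
            # Returns a numpy array of per-class probabilities, one row per input.
            with torch.no_grad():
                t = torch.from_numpy(x.astype(np.float32)).to(self.device)
                probs = torch.softmax(self.net(t), dim=1)
            return probs.cpu().numpy()

An instance of this class can then be passed as the model argument of hsja().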


GuanlinLee commented on September 26, 2024

Thanks a lot for your patience. I am going to try hsja.py with a PyTorch model and open an issue on Foolbox. I have no more questions.

