
secml's People

Contributors

a-demontis, asotgiu, battistabiggio, giorgiopiras, giovanni896, giudasg, m-melis, maurapintor, zangobot


secml's Issues

Support for Python 3.10

SecML currently supports only Python versions below 3.10.

Support for Python 3.10 should be added, as it is now also required to run experiments on Google Colab.
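For context, a very common source of breakage when a library first meets Python 3.10 is that the ABC aliases were removed from the top-level `collections` module. This is only a hedged guess at the kind of fix involved; secml's actual incompatibility may lie elsewhere. A compatibility import that works on both old and new interpreters:

```python
# Python 3.10 removed the ABC aliases (Iterable, Mapping, ...) from the
# top-level `collections` module; they now live only in `collections.abc`.
# NOTE: this is a common 3.10 breakage pattern, shown for illustration;
# secml's actual incompatibility may be elsewhere.
try:
    from collections.abc import Iterable, Mapping
except ImportError:  # fallback for very old interpreters
    from collections import Iterable, Mapping

print(isinstance([1, 2, 3], Iterable))  # → True
```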

Adam optimizer missing one required argument


Instantiation of AdamOptimizer in fb_cw_attack.py is missing one required argument ('stepsize').

Steps to reproduce

Sequential run of the '15-Foolbox.ipynb' notebook

Relevant logs, screenshots, CI/CD tests

(Screenshot attached: 2022-06-29, 11:56:48, showing the error.)
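For context on why the argument is required: the step size is a core parameter of the Adam update itself, with no universally sensible default, which is presumably why the wrapped optimizer demands it at construction. A minimal, self-contained sketch of a plain Adam update (for intuition only; not secml's or Foolbox's implementation):

```python
import numpy as np

def adam_step(grad, m, v, t, stepsize, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; returns (delta, m, v). Note that `stepsize` has no
    default here, mirroring why a wrapper must pass it explicitly."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    delta = -stepsize * m_hat / (np.sqrt(v_hat) + eps)
    return delta, m, v

# toy usage: minimize f(x) = x^2 starting from x = 3
x, m, v = 3.0, 0.0, 0.0
for t in range(1, 2001):
    delta, m, v = adam_step(2 * x, m, v, t, stepsize=0.01)
    x += delta
# x is now close to the minimum at 0
```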

Poisoning of deep neural networks

If I want to poison datasets(not evasion attacks) such as MNIST and CIFAR-10 how should I use the CAttackPoisoningSVM() for poisoning MNIST dataset then store the poisoned dataset and evaluate my neural network on the same?

Deepfool (Foolbox) obj function gradient not working

The objective function gradient of the DeepFool attack does not work. There may be an error in the implementation of the loss function: the distance computation might not be required, or could be simplified.
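For reference on what the objective involves: DeepFool moves the input toward the closest class boundary, where for a linear model the distance to the boundary between classes k and l is |f_l(x) - f_k(x)| / ||w_l - w_k||. A numpy sketch of one DeepFool step under a linear-model assumption (intuition only; not the Foolbox implementation):

```python
import numpy as np

def deepfool_step(x, W, b):
    """One DeepFool step for a linear classifier f(x) = W @ x + b:
    find the closest class boundary (minimal |df| / ||dw||) and project
    x onto it. Sketch for intuition; not the Foolbox implementation."""
    scores = W @ x + b
    k = int(np.argmax(scores))                 # currently predicted class
    best_r, best_dist = None, np.inf
    for l in range(len(scores)):
        if l == k:
            continue
        dw = W[l] - W[k]                       # gradient of (f_l - f_k)
        df = scores[l] - scores[k]
        dist = abs(df) / np.linalg.norm(dw)    # distance to the k/l boundary
        if dist < best_dist:
            best_dist = dist
            best_r = abs(df) / (dw @ dw) * dw  # minimal perturbation onto it
    return x + best_r

# toy 2-class, 2-D example: class 0 wins at x, one step lands on the boundary
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([2.0, 0.5])
x_adv = deepfool_step(x, W, b)
```

In practice DeepFool multiplies the final perturbation by a small overshoot factor to cross the boundary rather than land exactly on it.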

Steps to reproduce

The following script reproduces the error.

from secml.adv.attacks import CFoolboxDeepfool
from secml.data.loader import CDataLoaderMNIST
from secml.model_zoo import load_model

loader = CDataLoaderMNIST()
digits = (1, 5, 9)
ts = loader.load('testing', digits=digits, num_samples=10)

clf = load_model('mnist159-cnn')

pt1 = ts[0, :]
pt1.X /= 255
attack = CFoolboxDeepfool(steps=100, classifier=clf, candidates=2)

# compute gradient of the objective function
grad_obj = attack.objective_function_gradient(pt1.X)

Error output:

File ".../secml/adv/attacks/evasion/foolbox/c_attack_evasion_foolbox.py", line 124, in objective_function_gradient
x_t.requires_grad_()
RuntimeError: only Tensors of floating point dtype can require gradients

What is the expected correct behavior?

The call should not raise an error; it should return the gradient of the objective function.

Important: we should update the tests of the foolbox wrapper and add a check for the gradients of the objective function to make sure this does not happen again.
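Such a test could compare each wrapper's analytic gradient against a finite-difference estimate. A generic numpy sketch of that kind of check (the helper name is hypothetical; this is not secml's test code):

```python
import numpy as np

def check_gradient(f, grad_f, x, eps=1e-6, tol=1e-4):
    """Compare grad_f(x) against a central finite-difference estimate of f.
    Returns True when the maximum absolute discrepancy is below tol."""
    g = np.asarray(grad_f(x), dtype=float)
    g_num = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = eps
        g_num[i] = (f(x + e) - f(x - e)) / (2 * eps)  # central difference
    return bool(np.max(np.abs(g - g_num)) < tol)

# example: f(x) = ||x||^2 has gradient 2x
f = lambda x: float(x @ x)
grad_f = lambda x: 2 * x
print(check_gradient(f, grad_f, np.array([0.3, -1.2, 2.0])))  # → True
```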

Poisoning Multiclass dataset

I am trying to poison a multiclass dataset; however, whichever poisoning strategy I use, I get an error like this:

ValueError: The data (x,y) has more than two classes.

or, in the case of CAttackPoisoningSVM:

ValueError: input labels should be binary in {0, +1} interval

I have tried to wrap my secml classifier with CClassifierMulticlassOVA but then I get:

NotImplementedError: We cannot poisoning that classifier

So, does secml support the poisoning of multiclass datasets? If yes, how?

Thank you in advance.

Steps to reproduce

# x_train is a np.ndarray
# y_train is a np.ndarray consisting of multiple classes
from secml.data import CDataset
from secml.ml.classifiers import CClassifierSVM
from secml.ml.kernels import CKernelRBF
from secml.adv.attacks import CAttackPoisoningSVM

dataset_train = CDataset(x_train, y_train)
clf = CClassifierSVM(kernel=CKernelRBF(gamma=0.001), C=1)
clf.fit(dataset_train.X, dataset_train.Y)
lb, ub = dataset_train.X.min(), dataset_train.X.max()
solver_params = {
    'eta': eps,           # eps and max_iter are defined elsewhere
    'max_iter': max_iter,
    'eps': eps
}
pois_attack = CAttackPoisoningSVM(classifier=clf,
                                  training_data=dataset_train,
                                  val=dataset_train,
                                  lb=lb, ub=ub,
                                  dmax=1,
                                  solver_type='pgd',
                                  solver_params=solver_params)

pois_attack.n_points = adversarial_points_indices.shape[0]
pois_y_pred, pois_scores, pois_ds, f_opt = pois_attack.run(x=dataset_train.X, y=dataset_train.Y)

Output:

ValueError: input labels should be binary in {0, +1} interval
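The error messages indicate that CAttackPoisoningSVM targets binary SVMs, so one workaround (a sketch, assuming restricting to two classes is acceptable for the experiment) is to extract the samples of two classes and relabel them into {0, 1} before building the CDataset:

```python
import numpy as np

def restrict_to_two_classes(x, y, class_a, class_b):
    """Select samples of two classes and relabel them as {0, 1}, the label
    set a binary poisoning attack expects. x and y are numpy arrays."""
    mask = np.isin(y, (class_a, class_b))
    x_bin = x[mask]
    y_bin = np.where(y[mask] == class_a, 0, 1)
    return x_bin, y_bin

# toy usage: keep classes 0 and 2 out of {0, 1, 2}
x = np.arange(10).reshape(5, 2).astype(float)
y = np.array([0, 2, 1, 2, 0])
x_bin, y_bin = restrict_to_two_classes(x, y, class_a=0, class_b=2)
```

Repeating this over pairs of classes (or one class versus the rest) is one way to approximate a multiclass poisoning experiment with a binary-only attack.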

Boolean feature value return from CAttackPoisoningSVM attack issue

I am using the CAttackPoisoningSVM attack on a Drebin dataset and looking at the poisoned dataset that results from the adv_xc returned by the run method. I want a set of poison points with binary feature values, so I can add them to my initial set and send the new poisoned dataset to another framework. I set my solver params to {'eta': 1, 'eta_min': 1, 'eta_max': None, 'eps': 1e-4}.

The value returned for adv_xc in the debugger:

CDataset{'X': CArray(10, 10000)(dense: [[6.068561e-04 5.504459e-05 0.000000e+00 ... 7.285119e-04 5.873739e-04 ...

The ideal returned value for adv_xc in the debugger:

CDataset{'X': CArray(10, 10000)(dense: [[0.000000e+00 0.000000e+00 1.000000e+00 ... 0.000000e+00 1.000000e+00...
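The attack optimizes in a continuous feature space, so for a binary-feature dataset like Drebin the returned points generally need to be projected back onto {0, 1} as a post-processing step. A simple thresholding sketch (not part of secml's API; the threshold choice is a modeling decision):

```python
import numpy as np

def binarize(x, threshold=0.5):
    """Project continuous poisoning points back onto the binary feature
    domain {0, 1} by thresholding. The threshold is a modeling choice."""
    return (x >= threshold).astype(int)

# toy usage on values resembling the continuous output above
x_pois = np.array([[6.07e-4, 0.9991, 0.4],
                   [0.51, 2.3e-5, 1.0]])
x_bin = binarize(x_pois)
```

Note that rounding after optimization may weaken the attack relative to an attack that respects the discrete domain during optimization, so the effectiveness of the binarized points should be re-evaluated.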

Android Malware detection

Hi, I'm wondering: are there any attacks I can run for Android malware detection with Drebin, as in the tutorial? It would be helpful if you could show me how to initialize those attacks. I will cite your work once my paper is done.

Failing tests with Foolbox

Tests are failing with the new version of Foolbox.

Relevant logs, screenshots, CI/CD tests

https://github.com/pralab/secml/runs/5618069131?check_suite_focus=true

Possible fixes

This is likely related to recent changes in the Foolbox library (which are themselves likely to be reverted): https://github.com/bethgelab/foolbox/blob/b7f7ee6fa3d5888b0bef411712cff480f6370e2f/foolbox/attacks/__init__.py#L97

Possible fix: import the correct version of PGD directly, in order to be insulated from future refactoring in Foolbox.
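One generic way to make a wrapper robust to upstream module reshuffling is to try several known import paths in order and take the first that resolves. A stdlib-only sketch of that pattern, demonstrated with standard-library names (for Foolbox one would list the old and new module paths of the PGD attack class, which must be checked against the installed version):

```python
import importlib

def resolve(candidates):
    """Return the first attribute found among (module_path, attr_name)
    candidates; lets a wrapper survive upstream module refactorings."""
    for module_path, attr in candidates:
        try:
            return getattr(importlib.import_module(module_path), attr)
        except (ImportError, AttributeError):
            continue
    raise ImportError(f"none of the candidates resolved: {candidates}")

# demo with stdlib names: the first path fails, the second resolves
dumps = resolve([("no_such_module", "dumps"), ("json", "dumps")])
print(dumps({"ok": 1}))  # → {"ok": 1}
```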

Python 3.10 import errors

Steps to reproduce

import secml
from secml.adv.attacks import CFoolboxPGDLinf

What is the expected correct behavior?

It should correctly import the attack, but the class cannot be found.

Possible fixes

Better specify the imports in the package's __init__.py file. As a workaround, the class can be imported as:

import secml
from secml.adv.attacks.evasion.foolbox.fb_attacks.fb_pgd_attack import CFoolboxPGDLinf
