
Evolutionary-driven Adversarial Attacks

This repository contains a framework for applying evolution strategies (ES) to arbitrary optimization problems. Within the scope of our project, we applied ES to adversarial attacks on deep neural networks. The experiments conducted are image-oriented only. Specifically, we search for optimal noises which, combined with an original image, are able to fool a network (e.g. cause a misclassification). Since we deal with very high-dimensional search spaces, we implemented different methodologies to tackle the problem efficiently.
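The core operation described above can be sketched as follows. This is a minimal illustration, not code from the repository: the function name `apply_noise`, the epsilon value, and the assumed pixel range [0, 1] are all illustrative.

```python
import numpy as np

def apply_noise(image, noise, epsilon=0.05):
    """Add a candidate perturbation to an image.

    The noise is clipped to [-epsilon, epsilon] and the perturbed
    image is clipped back to the valid pixel range [0, 1].
    """
    clipped = np.clip(noise, -epsilon, epsilon)
    return np.clip(image + clipped, 0.0, 1.0)

# Toy usage: a flat gray image and one random candidate noise,
# as an ES individual might produce during the search.
rng = np.random.default_rng(0)
image = np.full((3, 8, 8), 0.5)            # C x H x W, pixels in [0, 1]
noise = rng.normal(scale=0.1, size=image.shape)
adversarial = apply_noise(image, noise, epsilon=0.05)
```

In the actual attack, `adversarial` would be fed to the target network and the resulting loss (e.g. cross-entropy) used as the fitness of the candidate noise.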

Authors

Dimitrios Ieronymakis, Riccardo Majellaro, Ivan Horokhovskyi, Pavlos Zakkas, Andreas Paraskeva

Install

A Python 3 environment is required to run the repository, together with the packages specified in requirements.txt. It is recommended to use Anaconda for the Python environment, with the following commands:

conda create --name my_env
conda activate my_env
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
pip install -r requirements.txt

Run ES for Adversarial Attacks

In this section we describe how to launch an ES for adversarial attacks using the attack_main.py script. Using our script as a reference, it is possible to create personalized scripts that apply ES to arbitrary problems.

In order to launch the ES, execute the following commands from the main directory:

cd testing
python attack_main.py [-args]

The following arguments can be used:

  • dataloader : (store_true) use a dataloader to load the images. This should be used when the image dataset is too large to fit into memory; if everything fits into memory, leave this unset for better performance.
  • -eval : (str) defines the fitness evaluation function. e.g. "crossentropy"
  • -min : (store_true) use this flag in case of minimization problem. Maximization problem when not used.
  • -t : (store_true) use this flag in case of a targeted attack. Untargeted attack when not used.
  • -d : (float) downsample factor in (0,1] where 1 means no downscaling. Lower values mean greater downscaling.
  • -b : (int) number of maximum fitness function evaluations.
  • -model : (str) defines the model to be used in the evaluation. e.g. "xception_classifier"
  • -in : (str) defines the path of the input image used to attack the chosen model.
  • -tl : (int) ground truth label of the input image.
  • -ps : (int) defines the size of the parent population. e.g. 2
  • -os : (int) defines the size of the offspring population. e.g. 4
  • -r : (str) defines the recombination to use in the ES strategy. e.g. "intermediate"
  • -m : (str) defines the mutation to use in the ES strategy. e.g. "one_fifth"
  • -s : (str) defines the selection to use in the ES strategy. e.g. "plus_selection"
  • -e : (float) defines the epsilon used when clipping the noise. The noise is then constrained to [-e, e].
  • -fb : (int) defines the fallback patience interval, i.e. the number of iterations without improvement after which the sigmas or the population are reset.
  • -sn : (str) use this to initialize the parent population with a single predefined noise. Set this argument to the path of the noise stored as a numpy array.
  • -v : (int) verbose intensity parameter. Set to a value between 0 and 2, with 2 as the most intense.
  • -device : (str) defines the device to use for model computations. Default is "cpu", but "cuda" ("cuda:0", etc.) can be used for GPU usage.
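Putting the arguments together, an invocation might look like the following. The image path, label, and hyperparameter values are illustrative only, not taken from the repository:

```shell
cd testing
python attack_main.py \
    -model "xception_classifier" -eval "crossentropy" -min \
    -in path/to/image.png -tl 281 \
    -ps 2 -os 4 \
    -r "intermediate" -m "one_fifth" -s "plus_selection" \
    -e 0.05 -d 0.5 -b 5000 \
    -device "cuda"
```

This runs an untargeted attack (no -t flag) that minimizes cross-entropy on the chosen model, with a (2+4)-ES, noise clipped to [-0.05, 0.05], a 0.5 downsample factor, and a budget of 5000 fitness evaluations.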

Run evaluation

In this section we describe how to launch an evaluation on a chosen model and image (with or without noise), using the evaluate.py script.

In order to launch the evaluation, execute the following commands from the main directory:

cd testing
python evaluate.py

In order to customize the evaluation, you need to modify the script. A CLI will probably be provided in the future.

Examples

Some example scripts to set up configurations are available under the testing directory, in both .sh and .bat formats.

The image below shows an example of the results expected with different configurations of epsilon and downsampling, on the Xception classification model.
