
Final Project: Universal Adversarial Perturbations on VO systems

Code structure

In the src directory are the following directories:

• Attacks

• Datasets

• Docker

• Evaluator

• Models

• Network

And the following files:

• loss.py

• run_attacks.py

• TartanVO.py

• Tartanvo_node.py

• utils.py

The relevant locations for our project are as follows:

  1. run_attacks.py This is the main module: it runs the entire process, from calculating the clean results, through the optimization process, to reporting the final results. We implemented the following four new methods in this module:

    a. run_attacks_train_oos: This method implements the full optimization scheme described in our report. Note that it runs only the single model specified by the project arguments. Since we tested only the Adam model in our cross-validation, this method calls only the perturb function that executes the Adam model; to run other models, one must manually change the call to the perturb method.

    b. make_exp_folders: This method creates empty folders for the train, eval, and test data, in which the results generated by the optimization scheme are saved.

    c. out_of_sample_make_folders: This method creates the same empty train/eval/test folder structure for the optimization scheme's results, by calling the make_exp_folders method.

    d. masked_list: This method receives a list A and a list of indices as inputs, and returns all elements of A whose index appears in the given list of indices.
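Based on the description above, a minimal sketch of what masked_list might look like (the actual implementation in run_attacks.py may differ):

```python
def masked_list(full_list, indices):
    """Return the elements of full_list whose positions appear in indices."""
    index_set = set(indices)
    return [item for i, item in enumerate(full_list) if i in index_set]

# Example: selecting the training-fold trajectories by index
trajectories = ["traj_0", "traj_1", "traj_2", "traj_3"]
train_split = masked_list(trajectories, [0, 2])  # → ["traj_0", "traj_2"]
```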

General notes:

  • The implementations of b, c, and d are taken from the git repository of [1].
  • The implementation of a is an adaptation of a method called run_attacks_out_of_sample_test(args), found under run_attacks.py in the git repository of [1].
  • In order to run our optimization scheme, we changed the main function so that it calls our method run_attacks_train_oos(args) instead of the originally provided method run_attacks_train(args).
  2. Attack package

This package includes the const and PGD attack modules and the attack class module. In this package, we used and changed two modules: attack.py and pgd.py.

a. attack.py
In this module, we added implementations of the optimizer steps we tested and experimented with. The added methods are as follows:

i.	gradient_ascent_rmsprop_step
    This method implements the optimization step of the RMSProp method, as described in section 2 of the report.

ii.	gradient_ascent_Adam_step_sign
    This method implements the optimization step of the adapted Adam method, as described in section 2 of the report.

iii. gradient_ascent_APGD
    This method implements the optimization step of the APGD algorithm, as described in section 2 of the report.
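To illustrate, here is a sketch of a sign-based Adam ascent step of the kind described above, written with NumPy for self-containedness; the repository code operates on PyTorch tensors, and the exact update in attack.py may differ:

```python
import numpy as np

def gradient_ascent_adam_step_sign(pert, grad, m, v, t, lr=0.01,
                                   beta1=0.9, beta2=0.999, eps=1e-8):
    """One sign-based Adam *ascent* step (illustrative sketch only).

    pert : current perturbation
    grad : gradient of the attack loss w.r.t. pert
    m, v : first- and second-moment estimates
    t    : 1-based step counter, used for bias correction
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)          # bias-corrected second moment
    # Ascent (maximizing the VO error) with a sign step, in the spirit of PGD
    pert = pert + lr * np.sign(m_hat / (np.sqrt(v_hat) + eps))
    return pert, m, v
```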

b. pgd.py
In this module we implemented several perturb methods, to be used for the different models we tested with the PGD attack. The added methods are as follows:

   i.	perturb_adam
        This method contains slight adjustments to the original perturb method so that it can use the Adam model. It can also be used to test the RMSProp model; one only needs to manually change the step function from gradient_ascent_Adam_step_sign to gradient_ascent_rmsprop_step (line 288).

   ii.	perturb_APGD
        This method implements the APGD model, including perturbing the VO model.

   iii. halve_step_size
        This function is used by the perturb_APGD method, and halves the optimization step size when certain conditions are met.

   iv.	calc_sample_grad_single_APGD
        This function does the same as the original calc_sample_grad_single function, except that it also returns the loss sum, which is required by APGD, in addition to the gradient itself.
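The step-size halving rule can be sketched as follows. The actual conditions in pgd.py follow section 2 of the report; the window size and improvement threshold here are assumptions, in the spirit of the APGD algorithm:

```python
def halve_step_size(step_size, loss_history, window=5, min_improved_frac=0.75):
    """Illustrative APGD-style condition: halve the step size when fewer
    than min_improved_frac of the last `window` steps improved the loss.

    loss_history : per-iteration attack-loss values (ascent: higher is better)
    """
    if len(loss_history) < window + 1:
        return step_size
    recent = loss_history[-(window + 1):]
    improved = sum(1 for prev, cur in zip(recent, recent[1:]) if cur > prev)
    if improved / window < min_improved_frac:
        return step_size / 2
    return step_size
```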
  3. loss.py This module implements the loss functions used to optimize the models, and sets l_train. The strings that should be passed as arguments to the model in order to use our new loss functions appear in the init method of the VOCriterion class. In addition, we added the methods calc_rot_quat_product and rotation_quat_product, which calculate the rotation error (l_rot in the report). These methods are taken from the git repository of [1].
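A quaternion-based rotation error can be illustrated along these lines; the quaternion component order and the exact formulas in loss.py may differ:

```python
import numpy as np

def rotation_quat_product(q1, q2):
    """Hamilton product q1 * q2 for quaternions in (x, y, z, w) order
    (illustrative; the repository's convention may differ)."""
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])

def calc_rot_error(q_est, q_gt):
    """Angle of the relative rotation q_est * conj(q_gt), i.e. how far the
    estimated rotation is from the ground truth."""
    q_gt_conj = np.array([-q_gt[0], -q_gt[1], -q_gt[2], q_gt[3]])
    q_rel = rotation_quat_product(q_est, q_gt_conj)
    w = np.clip(abs(q_rel[3]), 0.0, 1.0)
    return 2.0 * np.arccos(w)
```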

  4. utils.py This module includes all utility functions used for parsing the arguments given to the project. We added some extra arguments for our hyperparameters, such as β for the RMSProp model.
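Adding such hyperparameter arguments typically looks like the following sketch; the actual flag names and defaults in utils.py (e.g. for β) may differ:

```python
import argparse

def add_optimizer_args(parser):
    """Hypothetical extra hyperparameter flags of the kind described above."""
    parser.add_argument("--rms_beta", type=float, default=0.9,
                        help="decay rate (beta) for the RMSProp moving average")
    parser.add_argument("--adam_beta1", type=float, default=0.9,
                        help="first-moment decay for the adapted Adam step")
    parser.add_argument("--adam_beta2", type=float, default=0.999,
                        help="second-moment decay for the adapted Adam step")
    return parser

parser = add_optimizer_args(argparse.ArgumentParser())
args = parser.parse_args(["--rms_beta", "0.95"])
```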

Note: All other code modules and packages are the same as given to us; they were not changed and serve their original purposes.

II. Reproducing the results

Firstly, the data directory (test_dir) should be in the same directory as run_attacks.py.

Secondly, the commands used to produce our results are as follows:

  1. l_train=l_RMS, l_eval=l_RMS

for L in 0.04 0.045 0.05; do srun -c 2 --gres=gpu:1 --pty python run_attacks.py --model-name tartanvo_1914.pkl --worker-num 1 --attack_k 50 --alpha $L --save_best_pert --save_csv --seed 42 --preprocessed_data --attack pgd --test-dir=VO_adv_project_train_dataset_8_frames; done;

  2. l_train=l_(RMS_t), l_eval=l_RMS

for L in 0.04 0.045 0.05; do srun -c 2 --gres=gpu:1 --pty python run_attacks.py --model-name tartanvo_1914.pkl --worker-num 1 --attack_t_crit target_reg --attack_k 50 --alpha $L --save_best_pert --save_csv --seed 42 --preprocessed_data --attack pgd --test-dir=VO_adv_project_train_dataset_8_frames; done;

  3. l_train=l_(RMS_rt), l_eval=l_RMS

for L in 0.04 0.045 0.05; do srun -c 2 --gres=gpu:1 --pty python run_attacks.py --model-name tartanvo_1914.pkl --worker-num 1 --attack_t_crit reg_rotation_target --attack_k 50 --alpha $L --save_best_pert --save_csv --seed 42 --preprocessed_data --attack pgd --test-dir=VO_adv_project_train_dataset_8_frames; done;

  • The above commands run the full optimization scheme over all models and save all the train, eval, and test results in the results directory. An individual directory is created for each combination of optimization loss, test fold, α, and type (train/test/eval).
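The per-combination folder layout could be created along the lines of this sketch (the actual directory names produced by make_exp_folders may differ):

```python
import os

def make_exp_folders(root, loss_name, fold, alpha):
    """Create empty train/eval/test result folders for one combination of
    optimization loss, test fold, and alpha (illustrative layout only)."""
    for split in ("train", "eval", "test"):
        path = os.path.join(root, loss_name, f"fold_{fold}",
                            f"alpha_{alpha}", split)
        os.makedirs(path, exist_ok=True)

# Example: one combination from the commands above
make_exp_folders("results", "l_RMS", 0, 0.05)
```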

Baseline results

• To get the baseline results, change line 923 in run_attacks.py, in run_attacks_train_oos(args), from attack.perturb_adam(…) to attack.perturb(…), and run the following command:

srun -c 2 --gres=gpu:1 --pty python run_attacks.py --model-name tartanvo_1914.pkl --worker-num 1 --attack_k 50 --save_csv --seed 42 --preprocessed_data --attack pgd --test-dir=VO_adv_project_train_dataset_8_frames;
