This is the TensorFlow code for our paper "Staircase Sign Method for Boosting Adversarial Attacks" (currently under review). A PyTorch version can be found here.
In our paper, we rethink the limitation of the sign method (SM), e.g., I-FGSM, and propose a novel Staircase Sign Method (SSM) to alleviate this issue, thereby boosting both targeted and untargeted transfer-based attacks. Compared with state-of-the-art targeted attacks, our method significantly improves transferability (on average, by 5.1% against normally trained models and by 11.2% against adversarially trained defenses).
We hope that our proposed method can fully replace the existing SM, not only in transfer-based attacks but also in other settings (e.g., query-based attacks). We encourage you to test our SSM and verify the effectiveness of the algorithm. If you have any questions, please feel free to contact me.
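As a rough illustration of the idea, the NumPy sketch below contrasts the plain sign with a staircase sign: gradient magnitudes are split into K percentile intervals, and each interval gets an increasing staircase weight instead of a uniform 1. The linearly increasing weights here are a simplified placeholder for illustration, not necessarily the exact values used in the paper.

```python
import numpy as np

def staircase_sign(grad, K=64):
    """Simplified staircase sign: split |grad| into K percentile
    intervals and weight each interval by an increasing staircase
    level (illustrative weights, not the paper's exact scheme)."""
    mag = np.abs(grad)
    # Percentile boundaries of the magnitude distribution.
    edges = np.percentile(mag, np.linspace(0, 100, K + 1))
    # Interval index (0 .. K-1) of each element.
    idx = np.clip(np.searchsorted(edges, mag, side="right") - 1, 0, K - 1)
    # Linearly increasing staircase levels, scaled so their mean is 1,
    # keeping the perturbation budget comparable to plain sign().
    levels = (2 * np.arange(K) + 1) / K
    return np.sign(grad) * levels[idx]

g = np.random.randn(8, 8)   # stand-in for a loss gradient
s = staircase_sign(g, K=4)  # same signs as g, staircase magnitudes
```

Note that the output keeps the sign of every gradient component; only the per-component magnitude changes, so elements with larger gradients receive larger steps.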
Requirements
-
- Python 3.7
- TensorFlow 1.14.0
- pandas 1.1.3
- gast 0.2.2
- matplotlib 3.3.4
- tqdm 4.43.0
Download the models
-
Then put these models into the "models/" directory.
Run the code
-
- The vanilla I-FGSSM attack:
python attack_iter_SSM_NT.py # if the victim model is a normally trained model
- The more powerful P-T-DI2++-FGSSM attack:
python attack_iter_SSM_EAT.py # if the victim model is an ensemble adversarially trained model
The output images are saved in "output/".
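The scripts above follow the usual iterative FGSM recipe with the sign replaced by a staircase sign. As a framework-free sketch of that loop (toy gradient function and hypothetical names, with simplified staircase weights rather than the paper's exact scheme):

```python
import numpy as np

def staircase_sign(grad, K=64):
    # Simplified staircase sign (illustrative linearly increasing weights).
    mag = np.abs(grad)
    edges = np.percentile(mag, np.linspace(0, 100, K + 1))
    idx = np.clip(np.searchsorted(edges, mag, side="right") - 1, 0, K - 1)
    levels = (2 * np.arange(K) + 1) / K
    return np.sign(grad) * levels[idx]

def i_fgssm(x, grad_fn, eps=16 / 255, steps=10, K=64):
    """Toy I-FGSSM-style loop: step size eps/steps, projected back
    into the L-inf eps-ball and the valid image range each step."""
    alpha = eps / steps
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                        # loss gradient w.r.t. input
        x_adv = x_adv + alpha * staircase_sign(g, K)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep pixels valid
    return x_adv

# Toy "gradient": constant pressure toward larger pixel values,
# standing in for a real network's loss gradient.
x0 = np.random.rand(4, 4)
adv = i_fgssm(x0, lambda x: np.ones_like(x), eps=0.1, steps=5)
```

In the actual scripts the toy `grad_fn` corresponds to the TensorFlow gradient of the classification loss, and the loop runs over batches of input images.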
If you find this work useful in your research, please consider citing:
@article{GaoZhang2021SSM,
  title   = {Staircase Sign Method for Boosting Adversarial Attacks},
  author  = {Gao, Lianli and Zhang, Qilong and Zhu, Xiaosu and Song, Jingkuan and Shen, Hengtao},
  journal = {CoRR},
  volume  = {abs/2104.09722},
  year    = {2021}
}