Code for the ICML 2020 paper "Understanding and Mitigating the Tradeoff Between Robustness and Accuracy" by Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, and Percy Liang. Paper available at https://arxiv.org/pdf/2002.10716.pdf.

Home Page: https://worksheets.codalab.org/worksheets/0x16e1477c039b40b38534353108755541

License: MIT License


Understanding and Mitigating the Tradeoff Between Robustness and Accuracy

The repository contains the code for reproducing experiments in the following paper:

@inproceedings{raghunathan2020understanding,
  author = {A. Raghunathan and S. M. Xie and F. Yang and J. C. Duchi and P. Liang},
  booktitle = {International Conference on Machine Learning (ICML)},
  title = {Understanding and Mitigating the Tradeoff Between Robustness and Accuracy},
  year = {2020},
}

The experiments in this repository are reproduced in this CodaLab worksheet: https://worksheets.codalab.org/worksheets/0x16e1477c039b40b38534353108755541.

Setup

To get started, please activate a new virtualenv with Python 3.6 or above and install the dependencies using pip install -r requirements.txt. The CIFAR experiments are in the cifar/ directory, and the code to reproduce the spline simulations and figures is in the splines/ directory. The Dockerfile can also be used to build a suitable environment for running the code in a personal setup or on CodaLab.

Description

In this paper, we study the empirically documented tradeoff between adversarial robustness and standard accuracy: adding adversarial examples during training tends to significantly decrease standard accuracy. The tradeoff is particularly surprising given that the adversarial perturbations are typically very small, so small that the true target of the perturbed example does not change. We call such perturbations consistent. Furthermore, since we use powerful neural networks, the model family should be expressive enough to contain the true predictor (well-specification).
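As a toy illustration of a consistent perturbation (not code from this repository; the function name, epsilon value, and image shape are assumptions made for the example), the sketch below perturbs an image-like input within a small l-infinity ball, small enough that the true label is unchanged:

import numpy as np

def linf_perturb(x, epsilon=8 / 255, rng=None):
    # Return a copy of x randomly perturbed within an l-infinity ball of
    # radius epsilon. For small epsilon the true label of x is unchanged,
    # i.e. the perturbation is "consistent" in the sense used above.
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.uniform(-epsilon, epsilon, size=x.shape)
    return np.clip(x + delta, 0.0, 1.0)

# Hypothetical 32x32 RGB image with pixel values in [0, 1].
x = np.random.default_rng(0).random((32, 32, 3))
x_adv = linf_perturb(x)
assert np.abs(x_adv - x).max() <= 8 / 255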

In this paper, we ask: if perturbations are consistent and the model family is well-specified, so that there is no inherent tradeoff, why do we observe a tradeoff in practice? We make the following observations and conclusions:

- We characterize how training with consistent extra data can increase standard error even in well-specified, noiseless linear regression.
- Our analysis suggests that using unlabeled data with the recent robust self-training (RST) algorithm can mitigate the tradeoff.
- We prove that RST improves the robust error without hurting standard error, thereby eliminating the tradeoff in the linear setting using unlabeled data.
- Empirically, RST improves both robust and standard error across different adversarial training algorithms and perturbations.
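The following is a minimal sketch of the robust self-training idea in the noiseless linear-regression setting above. It is an illustrative reconstruction, not the repository's implementation: the function names, the use of ordinary least squares as the pseudo-labeler, and the way the consistent perturbations are constructed are all assumptions made for the example.

import numpy as np

def robust_self_training(X_lab, y_lab, X_unlab, perturb):
    # Illustrative robust self-training (RST) for linear regression:
    # 1. Fit a standard least-squares predictor on the labeled data.
    w_std, *_ = np.linalg.lstsq(X_lab, y_lab, rcond=None)
    # 2. Pseudo-label the unlabeled inputs with that predictor.
    y_pseudo = X_unlab @ w_std
    # 3. Refit on labeled + pseudo-labeled data, augmented with consistent
    #    perturbations (a simple stand-in for a robust training objective).
    #    Perturbed inputs keep their labels because the perturbations are
    #    consistent.
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, y_pseudo])
    X_aug = np.vstack([X_all, perturb(X_all)])
    y_aug = np.concatenate([y_all, y_all])
    w_rst, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
    return w_rst

# Toy noiseless data from a true linear model; perturbations are taken
# orthogonal to w_true so they never change the true target (consistent).
rng = np.random.default_rng(0)
w_true = rng.standard_normal(10)
X_lab, X_unlab = rng.standard_normal((20, 10)), rng.standard_normal((200, 10))
y_lab = X_lab @ w_true

def perturb(X):
    noise = rng.standard_normal(X.shape)
    noise -= np.outer(noise @ w_true, w_true) / (w_true @ w_true)
    return X + noise

w_rst = robust_self_training(X_lab, y_lab, X_unlab, perturb)

This sketch only shows the labeled-fit, pseudo-label, robust-refit structure; the paper's actual RST objective and guarantees, and the experiments, are in the paper and in the splines/ and cifar/ code.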

[Table: RST results across adversarial training algorithms and perturbations; see the CodaLab worksheet for the full experiments.]

