
TextFooler

A Model for Natural Language Attack on Text Classification and Inference

This is the source code for the paper: Jin, Di, et al. "Is BERT Really Robust? Natural Language Attack on Text Classification and Entailment." arXiv preprint arXiv:1907.11932 (2019). If you use the code, please cite the paper:

@article{jin2019bert,
  title={Is BERT Really Robust? Natural Language Attack on Text Classification and Entailment},
  author={Jin, Di and Jin, Zhijing and Zhou, Joey Tianyi and Szolovits, Peter},
  journal={arXiv preprint arXiv:1907.11932},
  year={2019}
}

Data

Our 7 datasets are here.

Prerequisites:

Required packages are listed in the requirements.txt file:

pip install -r requirements.txt

How to use

  • Run the following code to install the esim package:
cd ESIM
python setup.py install
cd ..
  • (Optional) Run the following code to pre-compute the cosine similarity scores between word pairs based on the counter-fitting word embeddings (a sketch of this computation follows these steps):
python comp_cos_sim_mat.py [PATH_TO_COUNTER_FITTING_WORD_EMBEDDINGS]
  • Run the following code to generate the adversaries for text classification:
python attack_classification.py

For natural language inference:

python attack_nli.py
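
The optional pre-computation step above (comp_cos_sim_mat.py) amounts to loading the counter-fitted vectors, L2-normalizing them, and taking all pairwise dot products. The sketch below is illustrative only, not the actual comp_cos_sim_mat.py; the plain-text embedding format, function names, and file names are assumptions.

import numpy as np

def load_embeddings(path):
    # Assumes one word per line: "word v1 v2 ... vd", space-separated (format is an assumption).
    words, vecs = [], []
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            words.append(parts[0])
            vecs.append(np.asarray(parts[1:], dtype=np.float32))
    return words, np.stack(vecs)

def cosine_similarity_matrix(vecs):
    # Normalize rows to unit length; the dot product of unit vectors is their cosine similarity.
    unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    return unit @ unit.T

if __name__ == '__main__':
    words, vecs = load_embeddings('counter-fitted-vectors.txt')          # path is a placeholder
    np.save('cos_sim_counter_fitting.npy', cosine_similarity_matrix(vecs))  # output name is a placeholder

Note that for a vocabulary of tens of thousands of words the full similarity matrix is large, so treat this as intuition rather than a drop-in replacement for the provided script.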

Example run scripts for these two files are given in run_attack_classification.py and run_attack_nli.py; a sketch of such an invocation also follows the list below. Here we explain each required argument in detail:

  • --dataset_path: The path to the dataset. We put the 1000 examples for each dataset used in the paper in the folder data.
  • --target_model: Name of the target model, e.g., bert.
  • --target_model_path: The path to the trained parameters of the target model. For ease of replication, we share the trained BERT, LSTM, and CNN model parameters for each dataset we used.
  • --counter_fitting_embeddings_path: The path to the counter-fitting word embeddings.
  • --counter_fitting_cos_sim_path: Optional. If given, the pre-computed cosine similarity scores based on the counter-fitting word embeddings are loaded to save time; otherwise they are computed on the fly.
  • --USE_cache_path: The path to save the Universal Sentence Encoder (USE) model file (it is downloaded automatically if this path is empty).
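
For reference, the sketch below shows one way a wrapper might assemble the attack command, in the spirit of run_attack_classification.py; all paths, the dataset folder, and the model directory are placeholders rather than the repository's actual defaults.

import os

# All paths below are placeholders; point them at your own data, model, and embedding files.
command = ('python attack_classification.py'
           ' --dataset_path data/yelp'                                    # one of the 1000-example datasets
           ' --target_model bert'
           ' --target_model_path models/bert/yelp'                        # trained target-model parameters
           ' --counter_fitting_embeddings_path counter-fitted-vectors.txt'
           ' --counter_fitting_cos_sim_path cos_sim_counter_fitting.npy'  # optional pre-computed scores
           ' --USE_cache_path cache/use')

os.system(command)

attack_nli.py takes the analogous arguments; see run_attack_nli.py for its example invocation.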

Two more things to share with you:

  1. In case you want to replicate our experiments for training the target models, we have shared the seven processed datasets we used.

  2. In case you want to apply our generated adversarial examples to the benchmark data directly, they are available here.

