This repository provides the code for the paper *Invisible Poison: A Blackbox Clean Label Backdoor Attack to Deep Neural Networks*.
- Clone this repository: `git clone https://github.com/rigley007/Invi_Poison.git`
- Install PyTorch and the other dependencies (e.g., torchvision, tqdm, NumPy).
- Download the dataset from here, then unzip it and update the dataset path in `config.py` accordingly.
- To train the auto-encoder that converts original images into noised images, run `python3 main.py`. The model saves a checkpoint every 20 epochs on the fly (a minimal checkpointing sketch appears after this list).
- To train and test the target model with the poisoned data injected (a demonstration of the attack), run `python3 training_with_poisoned_dataset.py` (a poisoning sketch appears after this list).
- You can also experiment with different settings by editing the configuration file `configs.py` (a hypothetical configuration sketch appears after this list).
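For readers who want to picture the auto-encoder step, here is a minimal PyTorch checkpointing sketch. The tiny model, the random stand-in data, and the file names are assumptions for illustration only; the repository's actual architecture and loss are defined in its own code.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the repository's auto-encoder architecture.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.MSELoss()

# Random tensors stand in for the real image dataset configured in config.py.
loader = DataLoader(TensorDataset(torch.rand(64, 3, 32, 32)), batch_size=16)

for epoch in range(1, 61):
    for (images,) in loader:
        noised = model(images)            # the auto-encoder maps clean images to "noised" images
        loss = criterion(noised, images)  # illustrative objective only, not the paper's actual loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    if epoch % 20 == 0:                   # save a checkpoint every 20 epochs, as described above
        torch.save(model.state_dict(), f"autoencoder_epoch_{epoch}.pth")
```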
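The poisoned-training demonstration can be pictured the same way: a small fraction of target-class images is passed through the trained auto-encoder (here faked with random noise) and mixed back into the training set with their labels left unchanged, which is what makes the attack clean-label. The ratio, target class, and dataset below are made-up placeholders, not the script's real parameters.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Hypothetical clean training set; the real one is configured via config.py.
clean_images = torch.rand(100, 3, 32, 32)
clean_labels = torch.randint(0, 10, (100,))
clean_set = TensorDataset(clean_images, clean_labels)

poison_ratio = 0.1   # assumed fraction of the training set to poison
target_class = 0     # assumed attack target class
idx = (clean_labels == target_class).nonzero().squeeze(1)[: int(poison_ratio * len(clean_labels))]

# Stand-in for the auto-encoder output: the trigger is added, the labels stay clean.
noised = (clean_images[idx] + 0.05 * torch.rand_like(clean_images[idx])).clamp(0, 1)
poisoned_set = TensorDataset(noised, clean_labels[idx])

# The target model is then trained and tested on the clean/poisoned mixture.
train_loader = DataLoader(ConcatDataset([clean_set, poisoned_set]), batch_size=16, shuffle=True)
```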
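Finally, the configuration step might look roughly like the snippet below; every option name here is a made-up placeholder, so check `config.py`/`configs.py` for the settings the code actually reads.

```python
# Hypothetical configuration values, for illustration only.
DATA_PATH = "/path/to/unzipped/dataset"  # point this at the unzipped dataset
BATCH_SIZE = 64
NUM_EPOCHS = 100
POISON_RATIO = 0.1                       # fraction of the training set to poison
```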