
3DFed

This is the code for the paper "3DFed: Adaptive and Extensible Framework for Covert Backdoor Attack in Federated Learning".

Installation

This code was tested on an NVIDIA GeForce RTX 3060 (CUDA 11.6.58) and a 12th Gen Intel(R) Core(TM) i9-12900F, with python=3.6.13, torch=1.7.0, and torchvision=0.8.1.

  • Install all dependencies from the requirements.txt in the utils folder: pip install -r utils/requirements.txt.
  • Install PyTorch for your CUDA version, and install hdbscan~=0.8.15.
  • Create a directory saved_models to save the results and to hold the pretrained model.

Repeat experiments for MNIST and CIFAR10

Run experiments for the MNIST and CIFAR10 datasets with the following commands:

python training.py --name mnist --params configs/mnist_fed.yaml
python training.py --name cifar --params configs/cifar_fed.yaml

The YAML files configs/mnist_fed.yaml and configs/cifar_fed.yaml store the experiment configuration. To run a different attack or defense, modify the corresponding parameters in those files.
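Switching attack or defense can also be scripted. The sketch below is illustrative and not part of the repository; it assumes the config files use top-level `attack:` and `defense:` keys and rewrites them in place without needing a YAML parser:

```python
from pathlib import Path

def set_config_key(path: str, key: str, value: str) -> None:
    """Rewrite a top-level `key: value` line in a YAML config file."""
    lines = Path(path).read_text().splitlines()
    for i, line in enumerate(lines):
        if line.startswith(f"{key}:"):
            lines[i] = f"{key}: {value}"
            break
    else:
        lines.append(f"{key}: {value}")  # add the key if it was missing
    Path(path).write_text("\n".join(lines) + "\n")

# Example: switch configs/cifar_fed.yaml to the 3DFed attack under FLAME
# set_config_key("configs/cifar_fed.yaml", "attack", "ThrDFed")
# set_config_key("configs/cifar_fed.yaml", "defense", "FLAME")
```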

For CIFAR10, to save time, it is recommended to pretrain a model on the clean dataset and launch the attack from this pretrained model, as is done for the Tiny-Imagenet experiments.

Choose an attack

attacks/modelreplace.py implements the basic model replacement attack. Launch it by setting the attack parameter in the YAML file to 'ModelReplace' and specifying whether it is single-epoch or multi-epoch, as well as the number of attackers.

attacks/thrdfed.py implements 3DFed. Launch it by setting the attack parameter in the YAML file to 'ThrDFed' and configuring the other attack-related parameters (they are preset to their default values). For demonstration, we add the noise mask to the output layer, use the last convolutional layer to find the indicator, and use the output layer to find the decoy model parameters. To add noise masks to the remaining layers, the procedure is similar: 1. use the attacker's benign reference model to calculate the number of low-UP neurons that need to be perturbed; 2. randomly select those neurons and optimize their noise masks.
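The two-step procedure above can be sketched as follows. This is an illustrative NumPy sketch, not the repository's implementation: `benign_delta` stands in for the per-neuron update magnitudes of the attacker's benign reference model, the update-parameter (UP) threshold is a hypothetical knob, and plain Gaussian noise stands in for the optimized noise mask.

```python
import numpy as np

def select_low_up_neurons(benign_delta: np.ndarray, up_threshold: float) -> np.ndarray:
    """Step 1: find neurons whose benign update magnitude is low (low-UP)."""
    return np.flatnonzero(np.abs(benign_delta) < up_threshold)

def apply_noise_mask(weights: np.ndarray, candidates: np.ndarray,
                     n_perturb: int, scale: float,
                     rng: np.random.Generator) -> np.ndarray:
    """Step 2: randomly pick `n_perturb` candidates and perturb them.

    In 3DFed the mask is optimized; here Gaussian noise keeps the sketch short.
    """
    chosen = rng.choice(candidates, size=n_perturb, replace=False)
    masked = weights.copy()
    masked[chosen] += scale * rng.standard_normal(n_perturb)
    return masked

rng = np.random.default_rng(0)
benign_delta = rng.standard_normal(128) * 0.01   # toy per-neuron update magnitudes
candidates = select_low_up_neurons(benign_delta, up_threshold=0.005)
masked = apply_noise_mask(np.zeros(128), candidates, n_perturb=8, scale=0.01, rng=rng)
```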

Choose a defense

Following Foolsgold [2], for large models (i.e., ResNet) we use only the output layer for any multi-model operation (e.g., the PCA in RFLBAT and the HDBSCAN in FLAME) to save time.

No defense

defenses/fedavg.py is the implementation of basic FedAvg algorithm. Switch to this defense by setting the defense parameter in YAML file to 'FedAvg'.

FLAME

defenses/flame.py is the implementation of FLAME [1]. Switch to this defense by setting the defense parameter in YAML file to 'FLAME'.

Foolsgold

defenses/foolsgold.py is the implementation of Foolsgold [2]. Switch to this defense by setting the defense parameter in YAML file to 'Foolsgold'. This defense will create saved_models/foolsgold to save historical updates.

Deepsight

defenses/deepsight.py is the implementation of Deepsight [3]. Switch to this defense by setting the defense parameter in YAML file to 'Deepsight'.

FLDetector

defenses/fldetector.py is the implementation of FLDetector [4]. Switch to this defense by setting the defense parameter in YAML file to 'FLDetector'. Since FLDetector requires some rounds of history before it can run, when resuming a model, please ensure that the start round of poisoning is about 10-20 rounds after the recovery round. For example, if you resume a model trained for 200 epochs, set the poison_epoch parameter in the YAML file to 215 or 220 instead of 200, and change poison_epoch_stop accordingly.
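The offset rule above can be captured in a small helper. This is illustrative only: the 15-round warmup is one choice within the suggested 10-20 range, and `duration` is a hypothetical knob, not a repository parameter.

```python
def poison_schedule(resume_epoch: int, warmup: int = 15, duration: int = 50):
    """Return (poison_epoch, poison_epoch_stop) leaving FLDetector a warmup window."""
    if not 10 <= warmup <= 20:
        raise ValueError("warmup should stay within the suggested 10-20 rounds")
    poison_epoch = resume_epoch + warmup
    return poison_epoch, poison_epoch + duration

# Resuming from a model trained for 200 epochs -> start poisoning at 215
start, stop = poison_schedule(200)
```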

RFLBAT

defenses/rflbat.py is the implementation of RFLBAT [5]. Switch to this defense by setting the defense parameter in YAML file to 'RFLBAT'. This defense will create saved_models/RFLBAT to save PCA graphs.

Repeat experiments for Tiny-Imagenet

To prepare the dataset, download tiny-imagenet-200.zip into directory ./utils. Reformat the dataset:

cd ./utils
./process_tiny_data.sh

Then run experiments for Tiny-Imagenet with the following command:

python training.py --name imagenet --params configs/imagenet_fed.yaml

The YAML file configs/imagenet_fed.yaml stores the experiment configuration. To run a different attack or defense, modify the corresponding parameters in this file.

PCA toy example

To repeat the toy example for fooling the PCA, run the following command:

python utils/pca_toy_example.py

Acknowledgement

Credit to Eugene Bagdasaryan (GitHub repo: https://github.com/ebagdasa/backdoors101) for providing the FL and backdoor-attack backbone.

References

[1]. Thien Duc Nguyen, Phillip Rieger, Huili Chen, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, Azalia Mirhoseini, Shaza Zeitouni, et al. FLAME: Taming backdoors in federated learning. In 31st USENIX Security Symposium (USENIX Security 22), pages 1415–1432, 2022.

[2]. Clement Fung, Chris J. M. Yoon, and Ivan Beschastnikh. The Limitations of Federated Learning in Sybil Settings. In Symposium on Research in Attacks, Intrusions and Defenses, RAID, 2020.

[3]. Phillip Rieger, Thien Duc Nguyen, Markus Miettinen, and Ahmad-Reza Sadeghi. Deepsight: Mitigating backdoor attacks in federated learning through deep model inspection. In 29th Annual Network and Distributed System Security Symposium, NDSS 2022, San Diego, California, USA, April 24-28, 2022. The Internet Society, 2022.

[4]. Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. Fldetector: Defending federated learning against model poisoning attacks via detecting malicious clients. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2545–2555, 2022.

[5]. Yongkang Wang, Dihua Zhai, Yufeng Zhan, and Yuanqing Xia. RFLBAT: A robust federated learning algorithm against backdoor attack. arXiv preprint arXiv:2201.03772, 2022.


