Adaptive Simulation Backdoor Attack Based on Federated Learning

Xiujin SHI , Kaixiong XIA , Guoying YAN , Xuan TAN , Yanxu SUN , Xiaolong ZHU

Journal of Donghua University (English Edition), 2026, Vol. 43, Issue (1): 50-58. DOI: 10.19884/j.1672-5220.202412010

Information Technology and Artificial Intelligence · Research Article

Abstract

In federated learning, backdoor attacks have become an important research topic because federated learning is widely applied to sensitive datasets. Since federated learning detects or modifies local models through defense mechanisms during aggregation, mounting an effective backdoor attack is difficult. In addition, existing backdoor attack methods face challenges such as low backdoor accuracy, a poor ability to evade anomaly detection, and unstable model training. To address these challenges, a method called adaptive simulation backdoor attack (ASBA) is proposed. Specifically, ASBA improves the stability of model training by manipulating the local training process with an adaptive mechanism, improves the malicious model's ability to evade anomaly detection by combining large-scale simulation training with clipping, and raises backdoor accuracy by introducing a stimulus model that amplifies the impact of the backdoor in the global model. Extensive comparative experiments under five advanced defense scenarios show that ASBA effectively evades anomaly detection and achieves high backdoor accuracy in the global model. Furthermore, it remains stable and effective after multiple rounds of attacks, outperforming state-of-the-art backdoor attack methods.
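The clipping step mentioned above can be sketched as follows. This is a minimal illustration of why clipping a malicious update helps it pass norm-based anomaly detection; the function and variable names are assumptions for exposition, not the authors' implementation of ASBA.

```python
import numpy as np

def clip_update(global_weights, local_weights, norm_bound):
    """Scale a local model update so its L2 norm stays within norm_bound.

    A malicious client can apply the same clipping that a norm-based
    server-side anomaly detector expects, so the submitted update is
    statistically indistinguishable (by norm) from benign updates.
    """
    update = local_weights - global_weights
    norm = np.linalg.norm(update)
    if norm > norm_bound:
        update = update * (norm_bound / norm)
    return global_weights + update

# Example: an oversized malicious update (norm 5) is scaled to the bound.
g = np.zeros(4)
w = np.array([3.0, 0.0, 4.0, 0.0])
clipped = clip_update(g, w, norm_bound=1.0)
# np.linalg.norm(clipped - g) is now 1.0
```

In this sketch the bound would be estimated from benign clients' updates (e.g., via simulated local training), so the attacker's submission stays inside the range the defense considers normal.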

Keywords

federated learning / backdoor attack / privacy / adaptive attack / simulation

Cite this article

Xiujin SHI, Kaixiong XIA, Guoying YAN, Xuan TAN, Yanxu SUN, Xiaolong ZHU. Adaptive Simulation Backdoor Attack Based on Federated Learning. Journal of Donghua University (English Edition), 2026, 43(1): 50-58. DOI: 10.19884/j.1672-5220.202412010


