FedDAA: a robust federated learning framework to protect privacy and defend against adversarial attack

Shiwei LU, Ruihu LI, Wenbin LIU

Front. Comput. Sci., 2024, 18(2): 182307. DOI: 10.1007/s11704-023-2283-x
Artificial Intelligence
RESEARCH ARTICLE


Abstract

Federated learning (FL) has emerged to break data silos and protect clients’ privacy in the field of artificial intelligence. However, the deep leakage from gradients (DLG) attack can fully reconstruct clients’ data from submitted gradients, which threatens the fundamental privacy of FL. Although cryptography and differential privacy can prevent privacy leakage from gradients, they incur extra communication overhead or degrade model performance. Moreover, these schemes change the original distribution of local gradients, which makes it difficult to defend against adversarial attacks. In this paper, we propose a novel federated learning framework with model decomposition, aggregation and assembling (FedDAA), along with a training algorithm, to train the federated model, in which each local gradient is decomposed into multiple blocks that are sent to different proxy servers for aggregation. To strengthen the privacy protection of FedDAA, an indicator based on image structural similarity is designed to measure privacy leakage under the DLG attack, and an optimization method is given to protect privacy with the fewest proxy servers. In addition, we present defense schemes against adversarial attacks in FedDAA and design an algorithm to verify the correctness of the aggregated results. Experimental results demonstrate that FedDAA reduces the structural similarity between the reconstructed image and the original image to 0.014 while maintaining a model convergence accuracy of 0.952, thus achieving the best privacy protection and model training performance. More importantly, the defense schemes against adversarial attacks are compatible with privacy protection in FedDAA, and their defense effects are no weaker than those in traditional FL. Moreover, the verification algorithm for aggregated results adds only negligible overhead to FedDAA.
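To make the decompose/aggregate/assemble idea in the abstract concrete, the sketch below splits each client's flattened gradient into blocks handled by separate proxy servers, lets each proxy average only its own block across clients, and reassembles the blocks at the central server. This is a minimal illustration under assumed settings (the number of proxies, the block partitioning, and the helper names are the writer's assumptions), not the paper's actual protocol.

    # Minimal sketch of gradient decomposition across proxy servers (assumed setup).
    import numpy as np

    NUM_PROXIES = 4  # assumed number of proxy servers

    def decompose(gradient: np.ndarray, num_blocks: int) -> list:
        # Client side: split a flattened local gradient into blocks, one per proxy,
        # so no single proxy observes the full gradient.
        return np.array_split(gradient, num_blocks)

    def proxy_aggregate(blocks_from_clients: list) -> np.ndarray:
        # Proxy side: average the same block index across all clients.
        return np.mean(blocks_from_clients, axis=0)

    def assemble(aggregated_blocks: list) -> np.ndarray:
        # Central server side: concatenate aggregated blocks back into the full
        # averaged gradient used to update the global model.
        return np.concatenate(aggregated_blocks)

    # Toy round with 3 clients and a 10-dimensional "gradient".
    rng = np.random.default_rng(0)
    client_grads = [rng.normal(size=10) for _ in range(3)]

    # Each client decomposes its gradient; block i goes to proxy i.
    per_client_blocks = [decompose(g, NUM_PROXIES) for g in client_grads]

    # Each proxy aggregates only its own block across clients.
    aggregated = [
        proxy_aggregate([blocks[i] for blocks in per_client_blocks])
        for i in range(NUM_PROXIES)
    ]

    global_update = assemble(aggregated)
    # Reassembled result equals plain averaging of the full gradients.
    assert np.allclose(global_update, np.mean(client_grads, axis=0))

Because each proxy only ever sees a fixed slice of every client's gradient, a gradient-inversion adversary at a single proxy works from an incomplete gradient, which is the intuition behind the SSIM-based leakage indicator described in the abstract.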


Keywords

federated learning / privacy protection / adversarial attacks / aggregated rule / correctness verification

Cite this article

Shiwei LU, Ruihu LI, Wenbin LIU. FedDAA: a robust federated learning framework to protect privacy and defend against adversarial attack. Front. Comput. Sci., 2024, 18(2): 182307. https://doi.org/10.1007/s11704-023-2283-x

Shiwei Lu is a doctoral candidate in the Department of Basic Sciences, Air Force Engineering University, China. He received his MS degree from Air Force Engineering University, China, and his BS degree in computer science and technology from Zhejiang University, China in 2017. His major interests include cyberspace security and machine learning.

Ruihu Li received his PhD degree from Northwestern Polytechnical University, China in 2004. He is currently a professor in the Department of Basic Sciences, Air Force Engineering University, China. His research interests include group theory, coding theory and cryptography.

Wenbin Liu (Member, IEEE) received the PhD degree in systems engineering from Huazhong University of Science and Technology, China in 2004. In 2004, he joined the College of Mathematics, Physics and Electronic Information Engineering, Wenzhou University, China. He was a Visiting Scholar with the Institute for Systems Biology, USA in 2006, and Texas A&M University, USA in 2013. He is currently a professor with Guangzhou University, China. His work was supported by four NSFC grants and other funds from Zhejiang Province. His research interests include pattern recognition, data mining, image encryption storage, and DNA storage.

Acknowledgements

The work was supported by the National Natural Science Foundation of China (Grant Nos. 62072128, 11901579, 11801564) and the Natural Science Foundation of Shaanxi (2022JQ-046, 2021JQ-335, 2021JM-216).

RIGHTS & PERMISSIONS

2024 Higher Education Press
