Analysis of deep learning under adversarial attacks in hierarchical federated learning

Duaa S. Alqattan , Vaclav Snasel , Rajiv Ranjan , Varun Ojha

High-Confidence Computing ›› 2025, Vol. 5 ›› Issue (4): 100321. DOI: 10.1016/j.hcc.2025.100321

Research Article

Abstract

Hierarchical Federated Learning (HFL) extends traditional Federated Learning (FL) by introducing multi-level aggregation in which model updates pass through clients, edge servers, and a global server. While this hierarchical structure enhances scalability, it also increases vulnerability to adversarial attacks, such as data poisoning and model poisoning, that disrupt learning by introducing discrepancies at the edge server level. These discrepancies propagate through aggregation, affecting model consistency and overall integrity. Existing studies on adversarial behaviour in FL rely primarily on single-metric approaches, such as cosine similarity or Euclidean distance, to assess model discrepancies and filter out anomalous updates. However, these methods fail to capture the diverse ways adversarial attacks influence model updates, particularly in highly heterogeneous data environments and hierarchical structures. Attackers can exploit the limitations of single-metric defences by crafting updates that appear benign under one metric while remaining anomalous under another. Moreover, prior studies have not systematically analysed how model discrepancies evolve over time, vary across regions, or affect clustering structures in HFL architectures. To address these limitations, we propose the Model Discrepancy Score (MDS), a multi-metric framework that integrates Dissimilarity, Distance, Uncorrelation, and Divergence to provide a comprehensive analysis of how adversarial activity affects model discrepancies. Through temporal, spatial, and clustering analyses, we examine how attacks affect model discrepancies at the edge server level in three-level (3LHFL) and four-level (4LHFL) HFL architectures and evaluate MDS's ability to distinguish between benign and malicious servers. Our results show that while 4LHFL effectively mitigates discrepancies in regional attack scenarios, it struggles with distributed attacks because its additional aggregation layers obscure distinguishable discrepancy patterns over time, across regions, and within clustering structures. Factors influencing detection include data heterogeneity, attack sophistication, and hierarchical aggregation depth. These findings highlight the limitations of single-metric approaches and emphasize the need for multi-metric strategies such as MDS to enhance HFL security.
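
The abstract does not specify how the four MDS components are computed or combined. As a rough illustration only, the Python sketch below instantiates Dissimilarity as cosine dissimilarity, Distance as Euclidean distance, Uncorrelation as one minus the Pearson correlation, and Divergence as a histogram-based KL divergence over flattened model parameters, combined by a weighted sum. The metric instantiations, the `weights` parameter, and the choice of reference vector (e.g., the mean of peer edge-server models) are assumptions for illustration, not the authors' exact formulation; in practice each metric would also need normalisation to a common scale before combination.

```python
import numpy as np

def cosine_dissimilarity(u, v):
    """Dissimilarity: 1 - cosine similarity of two flattened parameter vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def euclidean_distance(u, v):
    """Distance: L2 norm of the difference between the two vectors."""
    return float(np.linalg.norm(u - v))

def uncorrelation(u, v):
    """Uncorrelation: 1 - Pearson correlation coefficient."""
    return 1.0 - np.corrcoef(u, v)[0, 1]

def kl_divergence(u, v, bins=50, eps=1e-12):
    """Divergence: KL divergence between histograms of parameter values."""
    lo, hi = min(u.min(), v.min()), max(u.max(), v.max())
    p, _ = np.histogram(u, bins=bins, range=(lo, hi))
    q, _ = np.histogram(v, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps  # smooth to avoid log(0) and division by zero
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

def model_discrepancy_score(update, reference, weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine the four metrics into a single score (illustrative weighting).

    `update` is an edge server's flattened aggregated model; `reference`
    could be, e.g., the element-wise mean of its peers' models. Note the
    raw metrics live on different scales, so a deployed version would
    normalise each across servers before this weighted sum.
    """
    metrics = np.array([
        cosine_dissimilarity(update, reference),
        euclidean_distance(update, reference),
        uncorrelation(update, reference),
        kl_divergence(update, reference),
    ])
    return float(np.dot(weights, metrics))
```

Under this reading, an edge server whose score stays high relative to its peers across rounds would be flagged as potentially malicious, whereas a single-metric filter could be evaded by an update crafted to look benign on that one axis.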

Keywords

Hierarchical federated learning / Model discrepancy / Targeted label flipping / Untargeted label flipping / Client-side sign flipping / Server-side sign flipping
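
The four attack keywords cover two data-poisoning variants (targeted and untargeted label flipping) and two model-poisoning variants (sign flipping applied by a client or by a compromised edge server). The minimal sketches below, assuming integer NumPy label arrays and flattened update vectors, show the conventional form of each attack; the function names and the optional `scale` factor are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def targeted_label_flip(labels, source_class, target_class):
    """Targeted label flipping: relabel one specific class as another."""
    flipped = labels.copy()
    flipped[labels == source_class] = target_class
    return flipped

def untargeted_label_flip(labels, num_classes, rng=None):
    """Untargeted label flipping: replace every label with a random wrong one."""
    if rng is None:
        rng = np.random.default_rng()
    offsets = rng.integers(1, num_classes, size=labels.shape)
    return (labels + offsets) % num_classes  # offset in [1, C-1] guarantees a change

def sign_flip(update, scale=1.0):
    """Sign flipping: negate (and optionally rescale) an update before sending.

    Applied by a client this is client-side sign flipping; applied by a
    compromised edge server to its aggregated model before forwarding it
    upward, it is server-side sign flipping.
    """
    return -scale * np.asarray(update)
```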

Cite this article

Duaa S. Alqattan, Vaclav Snasel, Rajiv Ranjan, Varun Ojha. Analysis of deep learning under adversarial attacks in hierarchical federated learning. High-Confidence Computing, 2025, 5(4): 100321. DOI: 10.1016/j.hcc.2025.100321


CRediT authorship contribution statement

Duaa S. Alqattan: Writing - review & editing, Writing - original draft, Visualization, Validation, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization. Vaclav Snasel: Writing - review & editing, Conceptualization. Rajiv Ranjan: Supervision. Varun Ojha: Writing - review & editing, Supervision, Project administration, Conceptualization.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

This research was supported by the Technical and Vocational Training Corporation (TVTC) through the Saudi Arabian Culture Bureau (SACB) in the United Kingdom and the EPSRC-funded project National Edge AI Hub for Real Data: Edge Intelligence for Cyber-disturbances and Data Quality (EP/Y028813/1).

