Gradient purification: defense against data poisoning attack in decentralized federated learning
Bin LI, Xiaoye MIAO, Yan ZHANG, Jianwei YIN
Front. Comput. Sci., 2026, Vol. 20, Issue (9): 2009352
Decentralized federated learning (DFL) is inherently vulnerable to data poisoning attacks, as malicious clients can transmit manipulated gradients to neighboring clients. Existing defense methods either reject suspicious gradients in each iteration or restart DFL aggregation after excluding all malicious clients; both neglect the potentially beneficial components within the contributions of malicious clients. In this paper, we propose a novel gradient purification defense against data poisoning attacks in DFL. It separately mitigates the harm in gradients and retains the benefits embedded in model weights, thereby enhancing overall model accuracy. Each benign client maintains a recording variable per neighbor to track the gradients historically aggregated from that neighbor. This allows benign clients to precisely detect malicious neighbors and to mitigate all of a malicious neighbor's aggregated gradients at once. Upon mitigation, benign clients optimize their model weights using the purified gradients. This optimization not only retains the previously beneficial components contributed by malicious clients but also exploits the canonical contributions from benign clients. We analyze the convergence of the proposed defense, as well as its ability to attain high accuracy. Extensive experiments demonstrate that the defense mitigates data poisoning attacks under both iid and non-iid data distributions, and that it significantly outperforms state-of-the-art defense methods in terms of model accuracy.
Keywords: decentralized federated learning / data poisoning attack / security protocol
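The mechanism described in the abstract (a per-neighbor recording variable, one-shot mitigation of a detected neighbor's accumulated gradients, then continued optimization on purified weights) can be illustrated with a minimal NumPy sketch. The class name, the plain gradient-averaging rule, and the rollback arithmetic below are our own illustrative assumptions, not the paper's actual algorithm; in particular, how malicious neighbors are detected is left out and assumed to be given.

```python
import numpy as np

class BenignClient:
    """Sketch of per-neighbor gradient recording with one-shot purification."""

    def __init__(self, dim, neighbors, lr=0.01):
        self.w = np.zeros(dim)   # local model weights
        self.lr = lr             # learning rate
        # Recording variable: one accumulator per neighbor, tracking the
        # gradients historically aggregated from that neighbor.
        self.record = {n: np.zeros(dim) for n in neighbors}

    def aggregate_step(self, local_grad, neighbor_grads):
        """One DFL round: average own and neighbors' gradients, descend,
        and record each neighbor's share of the aggregated update."""
        grads = [local_grad] + list(neighbor_grads.values())
        avg_grad = np.mean(grads, axis=0)
        self.w -= self.lr * avg_grad
        for n, g in neighbor_grads.items():
            self.record[n] += g / len(grads)  # that neighbor's share

    def purify(self, malicious_neighbor):
        """Detection is assumed to have happened elsewhere; here we roll
        back ALL historically aggregated gradients from the flagged
        neighbor at once, keeping every other contribution (including
        previously beneficial components) intact."""
        self.w += self.lr * self.record.pop(malicious_neighbor)


# Hypothetical usage: a client with neighbors 1 and 2; neighbor 2 is
# later detected as malicious and its whole influence is rolled back.
client = BenignClient(dim=10, neighbors=[1, 2])
for _ in range(5):
    client.aggregate_step(
        local_grad=np.random.randn(10),
        neighbor_grads={1: np.random.randn(10), 2: np.random.randn(10)},
    )
client.purify(2)  # one-shot mitigation; training continues on purified weights
```

Because the recording variable already holds a neighbor's cumulative influence on the model, purification costs a single vector update regardless of how many rounds have passed, which is what makes mitigating all aggregated malicious gradients at once possible without restarting aggregation.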