Defense against data poisoning attacks in robot vision systems based on adversarial example detection
Ruiqing CHU, Xiao FU, Bin LUO, Jin SHI, Xiaoyang ZHOU
Front. Comput. Sci., 2026, Vol. 20, Issue (7): 2007335
Robot vision systems are integral to the autonomous operation of robots, enabling tasks such as object recognition, navigation, and interaction with the environment. However, these systems are highly vulnerable to data poisoning and adversarial attacks, which can undermine their effectiveness and reliability. This paper investigates the relationship between these two types of attacks, focusing on their similarities in feature-space distribution and in sensitivity to mutations of robot vision models. By enhancing existing adversarial example detection methods, we make them more effective at defending against data poisoning attacks in robot vision systems. Experimental results show that the improved defense methods not only protect against various types of data poisoning attacks but also often outperform techniques specifically designed for such attacks, significantly enhancing the robustness and security of robot vision systems in real-world scenarios.
poisoning attacks / adversarial example detection / adversarial attacks / robotic vision systems / artificial intelligence
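The abstract's reference to sensitivity to model mutations points to a mutation-testing style of adversarial example detection. Below is a minimal, hypothetical sketch (not the authors' implementation) of how such a check might be repurposed to flag suspicious inputs for a PyTorch vision model; the function names, noise scale, number of mutants, and decision threshold are illustrative assumptions, not values taken from the paper.

import copy
import torch
import torch.nn as nn

def mutate_model(model: nn.Module, noise_scale: float = 0.01) -> nn.Module:
    """Return a deep copy of `model` with small Gaussian noise added to every parameter."""
    mutant = copy.deepcopy(model)
    with torch.no_grad():
        for param in mutant.parameters():
            param.add_(noise_scale * torch.randn_like(param))
    return mutant

def label_change_rate(model: nn.Module, x: torch.Tensor,
                      n_mutants: int = 20, noise_scale: float = 0.01) -> float:
    """Fraction of mutated models whose prediction for the input batch `x`
    differs from the unmutated model's prediction."""
    model.eval()
    with torch.no_grad():
        base_pred = model(x).argmax(dim=1)
        changed = 0
        for _ in range(n_mutants):
            mutant = mutate_model(model, noise_scale)
            mutant.eval()
            if not torch.equal(mutant(x).argmax(dim=1), base_pred):
                changed += 1
    return changed / n_mutants

# Usage idea (hypothetical threshold): inputs whose prediction flips under many
# mutants are treated as suspicious -- adversarial or drawn from poisoned data --
# and set aside for inspection rather than trusted or used for training, e.g.:
# if label_change_rate(net, x.unsqueeze(0)) > 0.3: quarantine(x)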
|