Adversarial robustness analysis of LiDAR-included models in autonomous driving

Bo Yang, Zizhi Jin, Yushi Cheng, Xiaoyu Ji, Wenyuan Xu

High-Confidence Computing ›› 2024, Vol. 4 ›› Issue (1) : 100203 DOI: 10.1016/j.hcc.2024.100203

Research Article

Abstract

In autonomous driving systems, perception is pivotal, relying chiefly on sensors such as LiDAR and cameras for environmental awareness. LiDAR, valued for its detailed depth perception, is increasingly integrated into autonomous vehicles. In this article, we analyze the robustness of four LiDAR-included models against adversarial points under physical constraints. We first introduce an attack technique that, by adding a limited number of physically constrained adversarial points above a vehicle, can make the vehicle undetectable by the LiDAR-included models. Experiments reveal that adversarial points degrade the detection capabilities of both LiDAR-only and LiDAR-camera fusion models, and that more adversarial points tend to yield higher attack success rates. Notably, voxel-based models are more susceptible to deception by these adversarial points. We also investigate how the distance and angle of the added adversarial points affect the attack success rate: typically, the farther the victim object to be hidden is from the LiDAR, and the closer it is to the LiDAR's front, the higher the attack success rate. Additionally, we experimentally demonstrate that our generated adversarial points possess good cross-model adversarial transferability, and we validate the effectiveness of our proposed optimization method through ablation studies. Furthermore, we propose a new plug-and-play, model-agnostic defense method based on the concept of point smoothness. The ROC curve of this defense method shows an AUC value of approximately 0.909, demonstrating its effectiveness.
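The full text is not included here, so the paper's exact point-smoothness statistic is not reproduced; the following is only a minimal sketch of the general idea, assuming a k-nearest-neighbor distance score and an illustrative median-ratio threshold (both hypothetical choices, not the authors' method). Injected floating points above a vehicle are sparse relative to genuine surface returns, so a local-distance statistic can separate them, and the filter can run on the raw point cloud before any detector, which is what makes such a defense plug-and-play and model-agnostic.

```python
import numpy as np

def smoothness_scores(points, k=8):
    """Per-point 'smoothness' score: mean Euclidean distance to the k
    nearest neighbors.  Points on genuine surfaces have small, consistent
    scores; isolated injected points score much higher.  Brute-force
    O(n^2) distances for clarity -- use a KD-tree for real LiDAR frames."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)              # column 0 is each point's distance to itself (0)
    return d[:, 1:k + 1].mean(axis=1)

def flag_suspicious(points, k=8, ratio=3.0):
    """Flag points whose score exceeds `ratio` times the cloud's median
    score; flagged points can be dropped before the point cloud is
    handed to the 3D detector."""
    s = smoothness_scores(points, k)
    return s > ratio * np.median(s)

# A planar patch of genuine returns plus three floating injected points:
xy = np.stack(np.meshgrid(np.arange(20), np.arange(20)), axis=-1).reshape(-1, 2) * 0.1
real = np.column_stack([xy, np.zeros(len(xy))])
spoof = np.array([[0.5, 0.5, 2.0], [1.5, 0.5, 2.5], [0.5, 1.5, 3.0]])
flags = flag_suspicious(np.vstack([real, spoof]))
```

Sweeping the `ratio` threshold over a labeled set of clean and attacked frames would trace out a ROC curve such as the one the paper evaluates (AUC ≈ 0.909 for the authors' method).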

Keywords

Autonomous driving / LiDAR / Adversarial example / Defense

Cite this article

Bo Yang, Zizhi Jin, Yushi Cheng, Xiaoyu Ji, Wenyuan Xu. Adversarial robustness analysis of LiDAR-included models in autonomous driving. High-Confidence Computing, 2024, 4(1): 100203. DOI: 10.1016/j.hcc.2024.100203


Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC) under Grants 62271280, 62222114, 61925109, and 62071428.

