An end-to-end object detection method for autonomous driving in multiple adverse weather conditions

Shi Yin , Hui Liu

Journal of Central South University: 1-14. DOI: 10.1007/s11771-026-6301-7

Research Article

Abstract

Autonomous driving systems impose stringent requirements on the real-time performance and computational efficiency of visual perception tasks, particularly under complex and diverse adverse weather conditions. To address these challenges, a highly robust object detection method called Adverse-Det is proposed for multiple harsh weather scenarios. The model introduces a visual state-space modeling module and a frequency-aware feature fusion module to jointly enhance global spatial structure modeling and local detail recovery, effectively mitigating the performance degradation caused by image quality deterioration in challenging environments such as rain, fog, sandstorms, and snow. Experimental results on the public DAWN dataset demonstrate that Adverse-Det achieves high detection accuracy across various scenes and weather conditions. Compared with baseline models, Adverse-Det improves the mean Average Precision over intersection-over-union thresholds from 0.5 to 0.95 (mAP50:95) by an average of 18.7%, reaching an mAP50:95 of 0.455 under snowy conditions. In addition, on the self-constructed real-world rainy-weather driving dataset Rain-Drive, Adverse-Det achieves a 4.56% improvement in mAP50:95. These results verify the effectiveness and strong generalization capability of the proposed method in complex real-world weather environments, providing solid technical support for the safe and reliable operation of autonomous driving systems under adverse weather conditions.
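For reference, the mAP50:95 metric reported above is the COCO-style mean of Average Precision values computed at ten intersection-over-union (IoU) thresholds, 0.50 to 0.95 in steps of 0.05. The sketch below is illustrative only (the `ap_at` callback and box values are hypothetical, not the paper's evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Overlap area is zero when the boxes do not intersect.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)


def map_50_95(ap_at):
    """Mean of AP over IoU thresholds 0.50, 0.55, ..., 0.95.

    `ap_at(t)` is a caller-supplied function returning the Average
    Precision of the detector when a detection counts as correct
    only if its IoU with a ground-truth box is at least `t`.
    """
    thresholds = [0.50 + 0.05 * i for i in range(10)]
    return sum(ap_at(t) for t in thresholds) / len(thresholds)
```

A stricter IoU threshold admits fewer matches, so AP is typically non-increasing in `t`; averaging across thresholds rewards detectors whose boxes are tightly localized, not merely roughly placed.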

Keywords

object detection / autonomous driving / adverse weather

Cite this article

Shi Yin, Hui Liu. An end-to-end object detection method for autonomous driving in multiple adverse weather conditions. Journal of Central South University: 1-14. DOI: 10.1007/s11771-026-6301-7



RIGHTS & PERMISSIONS

Central South University
