An end-to-end object detection method for autonomous driving in multiple adverse weather conditions
Shi Yin, Hui Liu
Journal of Central South University: 1-14.
Autonomous driving systems impose stringent requirements on the real-time performance and computational efficiency of visual perception tasks, particularly under complex and diverse adverse weather conditions. To address these challenges, a highly robust object detection method, Adverse-Det, is proposed for multiple harsh weather scenarios. The model introduces a visual state-space modeling module and a frequency-aware feature fusion module to jointly enhance global spatial structure modeling and local detail recovery, effectively mitigating the performance degradation caused by image quality deterioration in rain, fog, sandstorm, and snow conditions. Experimental results on the public DAWN dataset demonstrate that Adverse-Det achieves high detection accuracy across various scenes and weather conditions. Compared with baseline models, Adverse-Det improves the mean average precision averaged over intersection-over-union thresholds from 0.5 to 0.95 (mAP50:95) by 18.7% on average, reaching an mAP50:95 of 0.455 under snowy conditions. In addition, on the self-constructed real-world rainy-weather driving dataset Rain-Drive, Adverse-Det improves mAP50:95 by 4.56%. These results verify the effectiveness and strong generalization capability of the proposed method in complex real-world weather environments, providing solid technical support for the safe and reliable operation of autonomous driving systems under adverse weather conditions.
Keywords: object detection / autonomous driving / adverse weather
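For readers unfamiliar with the reported metric, the following minimal sketch illustrates how mAP50:95 is defined: average precision (AP) is computed at each intersection-over-union (IoU) threshold from 0.50 to 0.95 in steps of 0.05, and the ten values are averaged. This is only an illustration of the metric's definition; the `iou` helper and the placeholder AP function are not the paper's evaluation code, and real evaluation computes AP from ranked detections per class.

```python
# Illustrative sketch of the mAP50:95 metric definition (not the paper's code).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def map50_95(ap_at_threshold):
    """Average AP over the ten IoU thresholds 0.50, 0.55, ..., 0.95."""
    thresholds = [0.50 + 0.05 * i for i in range(10)]
    return sum(ap_at_threshold(t) for t in thresholds) / len(thresholds)

# Hypothetical detector whose AP decays linearly as the IoU threshold tightens.
print(round(map50_95(lambda t: max(0.0, 0.9 - t)), 3))  # → 0.18
```

Because stricter IoU thresholds demand tighter localization, mAP50:95 is a harder target than mAP50 alone, which is why gains on it (such as the 18.7% average improvement reported above) indicate better box quality, not just better classification.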