MLGIA: Recognition of Traffic Panel Information Based on PaddlePaddle

Yukai JI, Huayong GE, Yaqun MENG, Sisi LI

Journal of Donghua University (English Edition) ›› 2025, Vol. 42 ›› Issue (5): 494-502. DOI: 10.19884/j.1672-5220.202409001

Information Technology and Artificial Intelligence
Research Article

Abstract

To address the challenge of recognizing small-target information on traffic panels, a model named MLGIA is proposed based on PaddlePaddle. MLGIA is composed of MobileNetV3 with a lightweight Ghost Block (LGB) and an improved augmented feature pyramid network (IAFPN). In this model, LGB improves MobileNetV3 by optimizing the convolutional structure and employing cheap linear transformations to extract sufficient feature maps, while IAFPN enhances feature representation through pruning techniques and channel-reduction convolutions. Additionally, knowledge distillation compresses the model and improves its accuracy, and the match category information (MCI) method further optimizes the processing of the detected category information. Experimental results demonstrate that MLGIA outperforms MobileNetV3 and achieves detection accuracy comparable to that of YOLOv8n at significantly lower resource consumption, making it a strong complement to existing approaches for traffic panel information recognition.
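
The "linear transformations to extract sufficient feature maps" that the abstract attributes to LGB follow the GhostNet idea: a small primary convolution produces a few intrinsic feature maps, and cheap depthwise operations derive the rest. Below is a minimal sketch of such a Ghost block in PaddlePaddle; the class name GhostBlock, the ratio parameter, and the exact layer wiring are illustrative assumptions, not the authors' LGB implementation.

import paddle
import paddle.nn as nn

class GhostBlock(nn.Layer):
    """Ghost-style block: a primary conv plus cheap depthwise 'linear' ops (assumed structure)."""
    def __init__(self, in_channels, out_channels, ratio=2, dw_kernel=3):
        super().__init__()
        intrinsic = out_channels // ratio   # maps from the real convolution
        ghost = out_channels - intrinsic    # maps from cheap transformations
        # Primary pointwise convolution extracts the intrinsic feature maps.
        self.primary = nn.Sequential(
            nn.Conv2D(in_channels, intrinsic, 1, bias_attr=False),
            nn.BatchNorm2D(intrinsic),
            nn.ReLU())
        # Cheap operation: a depthwise convolution serves as the linear
        # transformation that derives the remaining ("ghost") feature maps.
        self.cheap = nn.Sequential(
            nn.Conv2D(intrinsic, ghost, dw_kernel,
                      padding=dw_kernel // 2, groups=intrinsic,
                      bias_attr=False),
            nn.BatchNorm2D(ghost),
            nn.ReLU())

    def forward(self, x):
        y = self.primary(x)
        return paddle.concat([y, self.cheap(y)], axis=1)

# Example: a 16 -> 32 channel block on a dummy feature map.
block = GhostBlock(16, 32)
print(block(paddle.randn([1, 16, 64, 64])).shape)  # [1, 32, 64, 64]

With ratio=2, half of the output channels come from the cheap depthwise branch, which is how a Ghost-style block reduces FLOPs and parameters relative to a full convolution producing all channels directly.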

Keywords

convolutional neural network / object detection / feature fusion / knowledge distillation / lightweight

Cite this article

Yukai JI, Huayong GE, Yaqun MENG, Sisi LI. MLGIA: Recognition of Traffic Panel Information Based on PaddlePaddle. Journal of Donghua University (English Edition), 2025, 42(5): 494-502. DOI: 10.19884/j.1672-5220.202409001

Funding

National Natural Science Foundation of China (62372100)
