Parking Space Detection Using a Machine Learning-Enhanced Unmanned Aerial Vehicle in a Virtual Environment

Akhil Giddaluri , Alex Jiang , Nikhil Giddaluri , Audrey Liang , Thomas Li , Yu Liang , Dalei Wu

Drones Auton. Veh. 2025, 2(4): 10020. DOI: 10.70322/dav.2025.10020

Research Article

Abstract

Unmanned aerial vehicles (UAVs) have grown in popularity across diverse applications over the past few years. Parking, especially in crowded lots, can be very time-consuming, as a driver must manually search for vacant spaces among many occupied ones. In this work, reinforcement learning, a category of machine learning in which an agent observes its environment and outputs actions to maximize cumulative reward, was used in tandem with AirSim, a drone simulator developed by Microsoft, to automate a virtual UAV's movement. A convolutional neural network (CNN) was then used to classify parking spots as vacant or occupied, achieving 98% recall and 93% accuracy. Unreal Engine was used to create a custom environment resembling a parking lot, and the virtual drone was trained with a Deep Q-Network (DQN), which achieved a mean reward of 394.5 in training and 460.4 in evaluation. Integrating the pre-trained CNN with the DQN enables real-time classification of vacant and occupied parking spaces from drone imagery. The results validate the effectiveness of combining reinforcement-learning navigation with CNN image classification, demonstrating deployment-ready performance for real-world congested parking applications.
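The navigate-then-land pipeline described above can be sketched in miniature. The snippet below trains a tabular Q-learning agent, a deliberately simplified stand-in for the paper's Deep Q-Network, on a toy one-dimensional row of parking spots; the environment, its reward values, and all names are illustrative assumptions, not the authors' actual AirSim setup:

```python
import random

# Hypothetical stand-in for the AirSim parking-lot environment:
# a 1-D row of spots, where True = vacant and False = occupied.
SPOTS = [False, False, True, False, True]
ACTIONS = ["left", "right", "land"]  # "land" attempts to park at the current spot

def step(pos, action):
    """Return (next_pos, reward, done). Landing on a vacant spot is rewarded,
    landing on an occupied one is penalized, and each move has a small cost."""
    if action == "land":
        return pos, (10.0 if SPOTS[pos] else -5.0), True
    pos = max(0, pos - 1) if action == "left" else min(len(SPOTS) - 1, pos + 1)
    return pos, -0.1, False

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning; a DQN would replace the table
    with a neural network over image states."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(len(SPOTS)) for a in ACTIONS}
    for _ in range(episodes):
        pos, done = 0, False
        while not done:
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda x: q[(pos, x)]))
            nxt, r, done = step(pos, a)
            best_next = 0.0 if done else max(q[(nxt, b)] for b in ACTIONS)
            q[(pos, a)] += alpha * (r + gamma * best_next - q[(pos, a)])
            pos = nxt
    return q

def greedy_rollout(q, max_steps=20):
    """Follow the learned greedy policy from the start position."""
    pos, total = 0, 0.0
    for _ in range(max_steps):
        a = max(ACTIONS, key=lambda x: q[(pos, x)])
        pos, r, done = step(pos, a)
        total += r
        if done:
            break
    return pos, total
```

In the paper's setting, the state would instead be drone imagery, the Q-function a deep network, and `step` would be backed by AirSim's flight control and the pre-trained CNN's vacant/occupied classification; the update rule q ← q + α(r + γ·max q′ − q) is the same.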

Keywords

Unmanned aerial vehicle / Parking space detection / Deep-Q network / Convolutional neural network / AirSim / Unreal Engine

Cite this article

Akhil Giddaluri, Alex Jiang, Nikhil Giddaluri, Audrey Liang, Thomas Li, Yu Liang, Dalei Wu. Parking Space Detection Using a Machine Learning-Enhanced Unmanned Aerial Vehicle in a Virtual Environment. Drones Auton. Veh. 2025, 2(4): 10020. DOI: 10.70322/dav.2025.10020


Author Contributions

Conceptualization, A.G., N.G., A.J., D.W. and Y.L.; Methodology, A.G., N.G., A.J., D.W., Y.L., T.L. and A.L.; Software, A.G. and A.J.; Validation, D.W. and Y.L.; Formal Analysis, A.G.; Investigation, A.G. and A.J.; Resources, A.G., N.G., A.J., D.W., Y.L., T.L. and A.L.; Data Curation, A.G. and A.J.; Writing—Original Draft Preparation, A.G.; Writing—Review & Editing, A.G., N.G., A.J., D.W., Y.L., T.L. and A.L.; Visualization, A.G., N.G., A.J., D.W., Y.L., T.L. and A.L.; Supervision, A.G., D.W. and Y.L.; Project Administration, A.G., D.W. and Y.L.

Ethics Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used by the convolutional neural network in this investigation were sourced from a GitHub dataset created by Dimeji Oladepo.

Funding

This research received no external funding.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

