A motion planning method for a car based on environmental feature-adaptive polynomial fitting and intelligent obstacle avoidance strategies

Yunqian Xu

Smart Construction and Sustainable Cities ›› 2025, Vol. 3 ›› Issue (1) : 1 DOI: 10.1007/s44268-024-00047-1

Research


Abstract

This paper proposes an improved motion planning method for a car, addressing the limitations of traditional path planning and obstacle avoidance algorithms in complex environments. The study combines Bi-RRT* with polynomial fitting for global path planning, introducing an environment-adaptive polynomial fitting technique that adjusts to local obstacle density to improve path precision in cluttered areas. In the local planning phase, the car switches intelligently among obstacle avoidance strategies, using reverse motion or lateral avoidance in high-density regions to avoid stalling. Furthermore, problem decomposition and approximation methods are applied to the large-scale quadratic programming (QP) problems arising in path tracking, improving the efficiency of the model predictive control (MPC) algorithm. Experimental results demonstrate that the proposed method significantly improves the car's motion performance and stability in complex environments.
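The abstract's core idea of environment-adaptive polynomial fitting can be illustrated with a minimal sketch. The paper does not disclose its exact criterion, so the density measure (obstacle count within a fixed radius of each waypoint), the threshold, and the low/high polynomial degrees below are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def obstacle_density(point, obstacles, radius=2.0):
    """Count obstacles within `radius` of `point` (a simple local density proxy)."""
    obstacles = np.asarray(obstacles, dtype=float)
    d = np.linalg.norm(obstacles - np.asarray(point, dtype=float), axis=1)
    return int((d < radius).sum())

def adaptive_fit(waypoints, obstacles, low_deg=3, high_deg=5, density_thresh=3):
    """Fit x(t), y(t) polynomials over the path, raising the degree in dense regions.

    A higher-degree fit bends more closely to the waypoints where obstacles
    crowd the path; a lower degree yields a smoother curve in open areas.
    """
    waypoints = np.asarray(waypoints, dtype=float)
    t = np.linspace(0.0, 1.0, len(waypoints))
    # mean local obstacle density along the path decides the fitting order
    dens = np.mean([obstacle_density(p, obstacles) for p in waypoints])
    deg = high_deg if dens >= density_thresh else low_deg
    cx = np.polyfit(t, waypoints[:, 0], deg)
    cy = np.polyfit(t, waypoints[:, 1], deg)
    return cx, cy, deg
```

With distant obstacles the fit stays at the low degree; placing obstacles on top of the path raises the mean density and triggers the higher-degree fit. A practical variant would fit piecewise, raising the degree only on the dense segments.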

Cite this article

Yunqian Xu. A motion planning method for a car based on environmental feature-adaptive polynomial fitting and intelligent obstacle avoidance strategies. Smart Construction and Sustainable Cities, 2025, 3(1): 1 DOI:10.1007/s44268-024-00047-1



RIGHTS & PERMISSIONS

The Author(s)
