A survey of autonomous robots and multi-robot navigation: Perception, planning and collaboration

Weinan Chen , Wenzheng Chi , Sehua Ji , Hanjing Ye , Jie Liu , Yunjie Jia , Jiajie Yu , Jiyu Cheng

Biomimetic Intelligence and Robotics ›› 2025, Vol. 5 ›› Issue (2) : 100203

Review


Abstract

The development of autonomous robots and the broad availability of communication resources hold significant potential for enhancing multi-robot collaboration and its applications. Over the past decades, interest in autonomous navigation and multi-robot collaboration has grown steadily, making a comprehensive review of current trends in this field valuable for both novice and experienced researchers. This paper focuses on autonomous systems and the multi-robot navigation that supports their operation. The review is structured around three core capabilities: perception, planning, and collaboration. Drawing on over 170 references, it systematically surveys a broad spectrum of autonomous-robot and multi-robot navigation strategies, highlights the challenges of existing work, and outlines directions for future development. We believe this review can serve as a bridge between autonomous robots and their applications.

Keywords

Autonomous robots / Planning / Perception / Collaboration

Cite this article

Weinan Chen, Wenzheng Chi, Sehua Ji, Hanjing Ye, Jie Liu, Yunjie Jia, Jiajie Yu, Jiyu Cheng. A survey of autonomous robots and multi-robot navigation: Perception, planning and collaboration. Biomimetic Intelligence and Robotics, 2025, 5(2): 100203. DOI: 10.1016/j.birob.2024.100203


CRediT authorship contribution statement

Weinan Chen: Writing - original draft, Investigation. Wenzheng Chi: Writing - original draft, Investigation. Sehua Ji: Writing - original draft, Investigation. Hanjing Ye: Writing - original draft, Investigation. Jie Liu: Writing - original draft, Investigation. Yunjie Jia: Writing - original draft, Investigation. Jiajie Yu: Writing - original draft, Investigation. Jiyu Cheng: Writing - original draft, Investigation.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

The work in this paper was supported by the National Natural Science Foundation of China (62103179, 62273246, and U23A20339).

References

[1]

J. Ni, Y. Chen, G. Tang, J. Shi, W. Cao, P. Shi, Deep learning-based scene understanding for autonomous robots: a survey, Intell. Robot. (2023).

[2]

M. Kim, M. Zhou, S. Lee, H. Lee, Development of an autonomous mobile robot in the outdoor environments with a comparative survey of LiDAR SLAM, in: 2022 22nd International Conference on Control, Automation and Systems, ICCAS, 2022, pp. 1990-1995.

[3]

R.K. Raj, A. Kos, A comprehensive study of mobile robot: History, developments, applications, and future research perspectives, Appl. Sci.(2022).

[4]

L. Antonyshyn, J. Silveira, S.N. Givigi, J.A. Marshall, Multiple mobile robot task and motion planning: A survey, ACM Comput. Surv. 55 (2022) 1-35.

[5]

S. Wang, Y. Wang, D. Li, Q. Zhao, Distributed relative localization algo-rithms for multi-robot networks: A survey, Sensors (Basel, Switzerland) 23 (2023).

[6]

B. Wu, C.S. Suh, State-of-the-art in robot learning for multi-robot collaboration: A comprehensive survey, 2024, ArXiv, arXiv:2408.11822.

[7]

J. Orr, A. Dutta, Multi-agent deep reinforcement learning for multi-robot applications: A survey, Sensors (Basel, Switzerland) 23 (2023).

[8]

E. Rosten, T. Drummond, Machine learning for high-speed corner de-tection, in: Computer Vision-ECCV 2006: 9 th European Conference on Computer Vision, Graz, Austria, May 7-13, 2006. Proceedings, Part I 9, Springer, 2006, pp. 430-443.

[9]

M. Calonder, V. Lepetit, M. Ozuysal, T. Trzcinski, C. Strecha, P. Fua, BRIEF: Computing a local binary descriptor very fast, IEEE Trans. Pattern Anal. Mach. Intell. 34 (7) (2011) 1281-1298.

[10]

C. Mei, G. Sibley, M. Cummins, P. Newman, I. Reid, A constant-time efficient stereo slam system, in:Proceedings of the British Machine Vision Conference, vol. 1, (no. 2009) BMVA Press, 2009.

[11]

D.G. Lowe, Object recognition from local scale-invariant features, in:Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, Ieee, 1999, pp. 1150-1157.

[12]

D. DeTone, T. Malisiewicz, A. Rabinovich, Superpoint: Self-supervised interest point detection and description,in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 224-236.

[13]

M. Tyszkiewicz, P. Fua, E. Trulls, DISK: Learning Local Features with Policy Gradient, vol. 33, 2020, pp. 14254-14265.

[14]

P.-E. Sarlin, D. DeTone, T. Malisiewicz, A. Rabinovich, Superglue: Learning feature matching with graph neural networks, 2020, pp. 4938-4947.

[15]

J. Sun, Z. Shen, Y. Wang, H. Bao, X. Zhou, LoFTR: Detector-free local feature matching with transformers, 2021, pp. 8922-8931.

[16]

J. Edstedt, I. Athanasiadis, M.a. Wadenbäck, M. Felsberg, DKM: Dense kernelized feature matching for geometry estimation, in: IEEE Conference on Computer Vision and Pattern Recognition, 2023.

[17]

Y. Zhang, X. Zhao, MESA: Matching everything by segmenting anything,in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 20217-20226.

[18]

H. Jégou, M. Douze, C. Schmid, P. Pérez, Aggregating local descriptors into a compact image representation, in: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, 2010, pp. 3304-3311.

[19]

R. Arandjelovic, A. Zisserman, All about VLAD, in:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 1578-1585.

[20]

H. Jégou, F. Perronnin, M. Douze, J. Sánchez, P. Pérez, C. Schmid, Aggre-gating local image descriptors into compact codes, IEEE Trans. Pattern Anal. Mach. Intell. 34 (9) (2011) 1704-1716.

[21]

R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, J. Sivic, NetVLAD: CNN architecture for weakly supervised place recognition,in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 5297-5307.

[22]

W. Chen, H. Ye, L. Zhu, C. Tang, C. Fu, Y. Chen, H. Zhang, Keyframe selection with information occupancy grid model for long-term data association, in: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, IEEE, 2022, pp. 2786-2793.

[23]

H. Ye, W. Chen, J. Yu, L. He, Y. Guan, H. Zhang, Condition-invariant and compact visual place description by convolutional autoencoder, Robotica 41 (6) (2023) 1718-1732.

[24]

Y. Ge, H. Wang, F. Zhu, R. Zhao, H. Li, Self-supervising fine-grained region similarities for large-scale image localization, in: Computer Vision-ECCV 2020: 16 th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part IV 16, Springer, 2020, pp. 369-386.

[25]

F. Lu, X. Lan, L. Zhang, D. Jiang, Y. Wang, C. Yuan, CricaVPR: Cross-image correlation-aware representation learning for visual place recognition,in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 16772-16782.

[26]

C. Godard, O. Mac Aodha, G.J. Brostow, Unsupervised monocular depth estimation with left-right consistency, in:Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 270-279.

[27]

C. Godard, O. Mac Aodha, M. Firman, G.J. Brostow, Digging into self-supervised monocular depth estimation, in:Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 3828-3838.

[28]

R. Ranftl, K. Lasinger, D. Hafner, K. Schindler, V. Koltun, Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer, IEEE Trans. Pattern Anal. Mach. Intell. 44 (3) (2020) 1623-1637.

[29]

L. Yang, B. Kang, Z. Huang, X. Xu, J. Feng, H. Zhao, Depth anything: Unleashing the power of large-scale unlabeled data,in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 10371-10381.

[30]

L. Piccinelli, Y.-H. Yang, C. Sakaridis, M. Segu, S. Li, L. Van Gool, F. Yu, UniDepth: Universal monocular metric depth estimation,in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 10106-10116.

[31]

J.R. Uijlings, K.E. Van De Sande, T. Gevers, A.W. Smeulders, Selective search for object recognition, Int. J. Comput. Vis. 104 (2013) 154-171.

[32]

R. Girshick, Fast r-cnn, in:Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1440-1448.

[33]

S. Ren, K. He, R. Girshick, J. Sun, Faster r-cnn: Towards real-time object detection with region proposal networks, in: Advances in Neural Information Processing Systems, vol. 28, 2015.

[34]

J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: Unified, real-time object detection,in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779-788.

[35]

J. Redmon, A. Farhadi, YOLO9000: better, faster, stronger,in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7263-7271.

[36]

J. Redmon, A. Farhadi,Yolov3: An incremental improvement, 2018, arXiv preprint arXiv:1804.02767.

[37]

N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, S. Zagoruyko, End-to-end object detection with transformers, in: European Conference on Computer Vision, Springer, 2020, pp. 213-229.

[38]

X. Zhu, W. Su, L. Lu, B. Li, X. Wang, J. Dai, Deformable DETR: Deformable transformers for end-to-end object detection,in: International Conference on Learning Representations, 2021.

[39]

L.H. Li, P. Zhang, H. Zhang, J. Yang, C. Li, Y. Zhong, L. Wang, L. Yuan, L. Zhang, J.-N. Hwang, et al., Grounded language-image pre-training,in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 10965-10975.

[40]

S. Liu, Z. Zeng, T. Ren, F. Li, H. Zhang, J. Yang, C. Li, J. Yang, H. Su, J. Zhu, et al., Grounding dino: Marrying dino with grounded pre-training for open-set object detection, 2023, arXiv preprint arXiv:2303.05499.

[41]

T. Cheng, L. Song, Y. Ge, W. Liu, X. Wang, Y. Shan, Yolo-world: Real-time open-vocabulary object detection,in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 16901-16911.

[42]

A. Bewley, Z. Ge, L. Ott, F. Ramos, B. Upcroft, Simple online and realtime tracking, in: 2016 IEEE International Conference on Image Processing, ICIP, IEEE, 2016, pp. 3464-3468.

[43]

N. Wojke, A. Bewley, D. Paulus, Simple online and realtime tracking with a deep association metric, in: 2017 IEEE International Conference on Image Processing, ICIP, IEEE, 2017, pp. 3645-3649.

[44]

Y. Zhang, C. Wang, X. Wang, W. Zeng, W. Liu, Fairmot: On the fairness of detection and re-identification in multiple object tracking, Int. J. Comput. Vis. 129 (2021) 3069-3087.

[45]

J. Pang, L. Qiu, X. Li, H. Chen, Q. Li, T. Darrell, F. Yu, Quasi-dense similarity learning for multiple object tracking, in:Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 164-173.

[46]

Y. Zhang, P. Sun, Y. Jiang, D. Yu, F. Weng, Z. Yuan, P. Luo, W. Liu, X. Wang, Bytetrack: Multi-object tracking by associating every detection box,in: European Conference on Computer Vision, Springer, 2022, pp. 1-21.

[47]

O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation,in:Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18 th International Con-ference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, Springer, 2015, pp. 234-241.

[48]

L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, H. Adam, Encoder-decoder with atrous separable convolution for semantic image segmentation, in: Proceedings of the European Conference on Computer Vision, ECCV, 2018, pp. 801-818.

[49]

S. Zheng, J. Lu, H. Zhao, X. Zhu, Z. Luo, Y. Wang, Y. Fu, J. Feng, T. Xiang, P.H. Torr, L. Zhang, Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers, in: CVPR, 2021.

[50]

R. Strudel, R. Garcia, I. Laptev, C. Schmid, Segmenter: Transformer for semantic segmentation,in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 7262-7272.

[51]

A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A.C. Berg, W.-Y. Lo, et al., Segment anything,in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 4015-4026.

[52]

N. Ravi, V. Gabeur, Y.-T. Hu, R. Hu, C. Ryali, T. Ma, H. Khedr, R. Rädle, C. Rolland, L. Gustafson, E. Mintun, J. Pan, K.V. Alwala, N. Carion, C.-Y. Wu, R. Girshick, P. Dollár, C. Feichtenhofer, SAM 2: Segment anything in images and videos, 2024, arXiv preprint.

[53]

C. Campos, R. Elvira, J.J.G. Rodríguez, J.M. M. Montiel, J. D. Tardós, ORB-SLAM3: An accurate open-source library for visual, visual-Inertial, and multimap SLAM, IEEE Trans. Robot. 37 (6) (2021) 1874-1890.

[54]

S. Xu, S. Chen, R. Xu, C. Wang, P. Lu, L. Guo, Local feature matching using deep learning: A survey, Inf. Fusion 107 (2024) 102344.

[55]

T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, C. L. Zitnick, Microsoft COCO: Common objects in context, in: D. Fleet, T. Pajdla, B. Schiele, T. Tuytelaars (Eds.), Computer Vision - ECCV 2014, Springer International Publishing, Cham, 2014, pp. 740-755.

[56]

Z. Li, N. Snavely, Megadepth: Learning single-view depth prediction from internet photos,in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2041-2050.

[57]

A. Dai, A.X. Chang, M. Savva, M. Halber, T. Funkhouser, M. Nießner, Scan-net: Richly-annotated 3d reconstructions of indoor scenes,in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5828-5839.

[58]

S. Lowry, N. Sünderhauf, P. Newman, J.J. Leonard, D. Cox, P. Corke, M.J. Milford, Visual place recognition: A survey, ieee Trans. Robot. 32 (1)(2015) 1-19.

[59]

P. Yin, J. Jiao, S. Zhao, L. Xu, G. Huang, H. Choset, S. Scherer, J. Han, General place recognition survey: Towards real-world autonomy, 2024, arXiv preprint arXiv:2405.04812.

[60]

J. Luiten, A. Osep, P. Dendorfer, P. Torr, A. Geiger, L. Leal-Taixe, B. Leibe, HOTA: A higher order metric for evaluating multi-object tracking, Int. J. Comput. Vis. (IJCV) (2020).

[61]

M. Oquab, T. Darcet, T. Moutakanni, H.V. Vo, M. Szafraniec, V. Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby, R. Howes, P.-Y. Huang, H. Xu, V. Sharma, S.-W. Li, W. Galuba, M. Rabbat, M. Assran, N. Ballas, G. Synnaeve, I. Misra, H. Jegou, J. Mairal, P. Labatut, A. Joulin, P. Bojanowski, DINOv2: Learning robust visual features without supervision, 2023, arXiv: 2304.07193.

[62]

X. Lin, J. Ruan, Y. Yang, L. He, Y. Guan, H. Zhang, Robust data association against detection deficiency for semantic SLAM, IEEE Trans. Autom. Sci. Eng. 21 (1) (2023) 868-880.

[63]

A. Radford, J.W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al., Learning transferable visual models from natural language supervision,in:International Conference on Machine Learning, PMLR, 2021, pp. 8748-8763.

[64]

M. Ibrahim, O. Moselhi, IMU-based indoor localization for construction applications, in: ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction, vol. 32, 2015, p. 1.

[65]

Y. Wang, H. Cheng, M.Q.-H. Meng, Spatiotemporal co-attention hybrid neural network for pedestrian localization based on 6D IMU, IEEE Trans. Autom. Sci. Eng. 20 (2023) 636-648.

[66]

M. Brossard, A. Barrau, S. Bonnabel, AI-IMU dead-reckoning, IEEE Trans. Intell. Veh 5 (2019) 585-595.

[67]

L.V. Nguyen, H.M. La, A human foot motion localization algorithm using IMU, in: 2016 American Control Conference, ACC, 2016, pp. 4379-4384.

[68]

T.-N. Do, R. Liu, C. Yuen, U.-X. Tan, Design of an infrastructureless in-door localization device using an IMU sensor, in: 2015 IEEE International Conference on Robotics and Biomimetics, ROBIO, 2015, pp. 2115-2120.

[69]

A. Mandow, J.L. Martínez, J. Morales, J.-L. Blanco, A.J. García-Cerezo, J. González, Experimental kinematics for wheeled skid-steer mobile robots, in: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2007, pp. 1222-1227.

[70]

Y. Wu, T. Wang, J. Liang, J. Chen, Q. Zhao, X. Yang, C. Han, Experimental kinematics modeling estimation for wheeled skid-steering mobile robots, in: 2013 IEEE International Conference on Robotics and Biomimetics, ROBIO, 2013, pp. 268-273.

[71]

J.L. Martínez, A. Mandow, J. Morales, S. Pedraza, A.J. García-Cerezo, Approximating kinematics for tracked mobile robots, Int. J. Robot. Res. 24 (2005) 867-878.

[72]

M. Kotaru, K. Joshi, D. Bharadia, S. Katti, Spotfi: Decimeter level localiza-tion using wifi,in: Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication, 2015, pp. 269-282.

[73]

J. Biswas, M.M. Veloso, WiFi localization and navigation for autonomous indoor mobile robots, in: 2010 IEEE International Conference on Robotics and Automation, ICRA, 2010, pp. 4379-4384.

[74]

P. Kriz, F. Maly, T. Kozel, Improving indoor localization using bluetooth low energy beacons, Mob. Inform. Syst. 2016 (1) (2016) 2083094.

[75]

M. Ridolfi, A. Kaya, R. Berkvens, M. Weyn, W. Joseph, E.D. Poorter, Self-calibration and collaborative localization for UWB positioning systems: A survey and future research directions, ACM Comput. Surv. 54 (4) (2021) 1-27.

[76]

K. Yu, K. Wen, Y. Li, S. Zhang, K. Zhang, A novel NLOS mitigation algorithm for UWB localization in harsh indoor environments, IEEE Trans. Veh. Technol. 68 (1) (2018) 686-699.

[77]

N.M. Drawil, H.M. Amar, O.A. Basir, GPS localization accuracy classifica-tion: A context-based approach, IEEE Trans. Intell. Transp. Syst. 14 (2013) 262-273.

[78]

E. Zhang, N. Masoud, Increasing GPS localization accuracy with re-inforcement learning, IEEE Trans. Intell. Transp. Syst. 22 (2020) 2615-2626.

[79]

L. Douadi, Y. Dupuis, P. Vasseur, Stable keypoints selection for 2D LiDAR based place recognition with map data reduction, Robotica 40 (11) (2022) 3786-3810.

[80]

J. Zhang, S. Singh, Low-drift and real-time lidar odometry and mapping, Auton. Robots 41 (2017) 401-416.

[81]

A. Pfrunder, P.V. Borges, A.R. Romero, G. Catt, A. Elfes, Real-time au-tonomous ground vehicle navigation in heterogeneous environments using a 3D lidar, in: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2017, pp. 2601-2608.

[82]

L. He, W. Li, Y. Guan, H. Zhang, IGICP: Intensity and geometry enhanced LiDAR odometry, IEEE Trans. Intell. Veh. 9 (2024) 541-554.

[83]

Y. Wu, F. Tang, H. Li, Image-based camera localization: an overview, Vis. Comput. Ind. Biomed. Art 1 (2018) 1-13.

[84]

E. Brachmann, C. Rother, Learning less is more - 6D camera localization via 3D surface regression, in: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, 2017, pp. 4654-4662.

[85]

K. Guan, L. Ma, X. Tan, S. Guo, Vision-based indoor localization ap-proach based on SURF and landmark, in: 2016 International Wireless Communications and Mobile Computing Conference, IWCMC, 2016, pp. 655-659.

[86]

Y. Wu, J. Kuang, X. Niu, Wheel-INS2: Multiple MEMS IMU-based dead reckoning system with different configurations for wheeled robots, IEEE Trans. Intell. Transp. Syst. 24 (2020) 3064-3077.

[87]

Y. Wu, X. Niu, J. Kuang, A comparison of three measurement models for the wheel-mounted MEMS IMU-based dead reckoning system, IEEE Trans. Veh. Technol. 70 (2020) 11193-11203.

[88]

S. Zhao, Y. Chen, J.A. Farrell, High-precision vehicle navigation in urban environments using an MEM’s IMU and single-frequency GPS receiver, IEEE Trans. Intell. Transp. Syst. 17 (2016) 2854-2867.

[89]

J.J. Patoliya, H.K. Mewada, M. Hassaballah, M.A. Khan, S. Kadry, A ro-bust autonomous navigation and mapping system based on GPS and LiDAR data for unconstraint environment, Earth Sci. Inform. 15 (2022) 2703-2715.

[90]

H. Li, F. Nashashibi, G. Toulminet, Localization for intelligent vehicle by fusing mono-camera, low-cost GPS and map data, in: 13th Interna-tional IEEE Conference on Intelligent Transportation Systems, 2010, pp. 1657-1662.

[91]

J.A. Hesch, D.G. Kottas, S.L. Bowman, S.I. Roumeliotis, Camera-IMU-based localization: Observability analysis and consistency improvement, Int. J. Robot. Res. 33 (2014) 182-201.

[92]

K. Li, Z. Ouyang, L. Hu, D. Hao, L. Kneip, Robust SRIF-based LiDAR-IMU localization for autonomous vehicles, in: 2021 IEEE International Conference on Robotics and Automation, ICRA, 2021, pp. 5381-5387.

[93]

X. Liu, S. Wen, Z. Jiang, W. Tian, T.Z. Qiu, K.M. Othman, A multisensor fusion with automatic vision-LiDAR calibration based on factor graph joint optimization for SLAM, IEEE Trans. Instrum. Meas. 72 (2023) 1-9.

[94]

X. Liu, S. Wen, J. Zhao, T.Z. Qiu, H. Zhang, Edge-assisted multi-robot visual-inertial SLAM with efficient communication, IEEE Trans. Autom. Sci. Eng. (2024).

[95]

Z. Zhu, X. Lin, Z. Su, S. Mao, H. Zhu, X. Zhou, Disturbance-resistant camera-LiDAR fusion for robust three-dimentional object detection, in: IEEE International Conference on Robotics and Biomimetics, ROBIO, 2022, pp. 705-710.

[96]

H. Yin, Z. Lin, J.K. Yeoh, Semantic localization on BIM-generated maps using a 3D LiDAR sensor, Autom. Constr. 146 (2023) 104641.

[97]

Y. Shen, Z. Jiao, A novel self-positioning based on feature map cre-ation and laser location method for RBPF-slam, J. Robot. 2021 (2021) 9988916:1-9988916:11.

[98]

A. Das, S.L. Waslander, Scan registration using segmented region growing NDT, Int. J. Robot. Res. 33 (2014) 1645-1663.

[99]

J. Gaffuri, Toward web mapping with vector data, in:International Conference Geographic Information Science, 2012, pp. 87-101.

[100]

L. Liu, H. Li, Y. Dai, Efficient global 2D-3D matching for camera localization in a large-scale 3D map, in: 2017 IEEE International Conference on Computer Vision, ICCV, 2017, pp. 2391-2400.

[101]

M. Wang, M. Cong, Y. Du, D. Liu, X. Tian, Multi-robot raster map fusion without initial relative position, Robot. Intell. Autom. 43 (5) (2023) 498-508.

[102]

J. Chen, Y. Wu, J. Tan, H. Ma, Y. Furukawa, MapTracker: Tracking with strided memory fusion for consistent vector HD mapping, 2024, ArXiv, arXiv:2403.15951.

[103]

Z. Wang, W. Zhan, M. Tomizuka, Fusing bird’s eye view LIDAR point cloud and front view camera image for 3D object detection, in: 2018 IEEE Intelligent Vehicles Symposium, IV, 2018, pp. 1-6.

[104]

K. Kim, S. Cho, W. Chung, HD map update for autonomous driving with crowdsourced data, IEEE Robot. Autom. Lett. 6 (2021) 1895-1901.

[105]

Q. Zou, M. Sester, Incremental map refinement of building information using LiDAR point clouds, Int. Arch. Photogramm. Rem. Sens. Spatial Inform. Sci. 43 (2021) 277-282.

[106]

S. Song, H. Lim, A.J. Lee, H. Myung, DynaVINS: A visual-inertial SLAM for dynamic environments, IEEE Robot. Autom. Lett. 7 (2022) 11523-11530.

[107]

S. Ji, W. Chen, Z. Su, Y. Guan, J. Li, H. Zhang, H. Zhu, A point-to-distribution degeneracy detection factor for LiDAR SLAM using local geometric models, in: IEEE International Conference on Robotics and Automation, ICRA, 2024, pp. 12283-12289.

[108]

H. Cho, S. Yeon, H. Choi, N.L. Doh, Detection and compensation of degeneracy cases for IMU-kinect integrated continuous SLAM with plane features † Sensors (Basel, Switzerland) 18 (2018).

[109]

Z. Zhang, J. Zhao, C. Huang, L. Li, Learning visual semantic map-matching for loosely multi-sensor fusion localization of autonomous vehicles, IEEE Trans. Intell. Veh. 8 (2023) 358-367.

[110]

Z. Zhang, J. Yu, J. Tang, Y. Xu, Y. Wang, MR-TopoMap: Multi-robot exploration based on topological map in communication restricted environment, IEEE Robot. Autom. Lett. 7 (2022) 10794-10801.

[111]

G. He, Q. Zhang, Y. Zhuang, Online semantic-assisted topological map building with LiDAR in large-scale outdoor environments: Toward robust place recognition, IEEE Trans. Instrum. Meas. 71 (2022) 1-12.

[112]

H.C. Thomas, E.L. Charles, L.R. Ronald, S. Clifford, Section 24.3: Dijkstra’s algorithm, Introd. Algorithms (2001) 595-601.

[113]

X. Bai, W. Yan, M. Cao, D. Xue, Distributed multi-vehicle task assignment in a time-invariant drift field with obstacles, IET Control Theory Appl. 13 (17) (2019) 2886-2893.

[114]

S.M. LaValle, Planning Algorithms, Cambridge University Press, 2006.

[115]

A. Erokhin, V. Erokhin, S. Sotnikov, A. Gogolevsky, Optimal multi-robot path finding algorithm based on a, in: Intelligent Systems in Cybernetics and Automation Control Theory 2, Springer, 2019, pp. 172-182.

[116]

G. Sun, R. Zhou, B. Di, Z. Dong, Y. Wang, A novel cooperative path planning for multi-robot persistent coverage with obstacles and coverage period constraints, Sensors 19 (9) (2019) 1994.

[117]

S.M. LaValle, Rapidly-exploring random trees : a new tool for path plan-ning, Ann. Res. Rep. (1998) URL https://api.semanticscholar.org/CorpusID:14744621.

[118]

S. Lavalle, J. Kuffner, Rapidly-exploring random trees: Progress and prospects, in: Algorithmic and computational robotics: New directions, 2000.

[119]

W. Wang, X. Xu, Y. Li, J. Song, H. He, Triple RRTs: An effective method for path planning in narrow passages, Adv. Robot. 24 (7) (2010) 943-962.

[120]

G. Kang, Y.B. Kim, W.S. You, Y.H. Lee, H.S. Oh, H. Moon, H.R. Choi, Sampling-based path planning with goal oriented sampling, in: 2016 IEEE International Conference on Advanced Intelligent Mechatronics, AIM, 2016, pp. 1285-1290.

[121]

F. Peng, Y. Zhao, Random triangle sampling path planning of assem-bly/disassembly in environment with dangerzones, in:2010 International Conference on Measuring Technology and Mechatronics Automation, 2010, pp. 972-976.

[122]

R. Motwani, M. Motwani, F.C. Harris, Uniform and efficient explo-ration of state space using kinodynamic sampling-based planners, in: Computational Kinematics, 2014, pp. 67-74.

[123]

W. Chi, Z. Ding, J. Wang, G. Chen, L. Sun, A generalized voronoi diagram-based efficient heuristic path planning method for RRTs in mobile robots, IEEE Trans. Ind. Electron. 69 (5) (2022) 4926-4937.

[124]

P. Wan, J. Wen, C. Wu, A discriminating method of driving anger based on sample entropy of eeg and BVP, in: 2015 International Conference on Transportation Information and Safety, ICTIS, 2015, pp. 156-161.

[125]

Z. Littlefield, Y. Li, K.E. Bekris, Efficient sampling-based motion planning with asymptotic near-optimality guarantees for systems with dynamics, in: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2013, pp. 1779-1785.

[126]

J.D. Gammell, S.S. Srinivasa, T.D. Barfoot, Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admis-sible ellipsoidal heuristic,in: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2014, pp. 2997-3004.

[127]

A.H. Qureshi, M.C. Yip, Deeply informed neural sampling for robot motion planning, in: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2018, pp. 6582-6588.

[128]

S. Rifai, P. Vincent, X. Muller, X. Glorot, Y. Bengio, Contractive auto-encoders: explicit invariance during feature extraction,in:International Conference on Machine Learning, ICML, 2011, pp. 833-840.

[129]

N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting, J. Mach. Learn. Res. 15 (2014) 1929-1958.

[130]

L. Chen, Y. Shan, W. Tian, B. Li, D. Cao, A fast and efficient double-tree RRT∗-like sampling-based planner applying on mobile robotic systems, IEEE/ASME Trans. Mechatronics 23 (6) (2018) 2568-2578.

[131]

S. Karaman, E. Frazzoli, Optimal kinodynamic motion planning using incremental sampling-based methods, in: IEEE Conference on Decision and Control, 2010, pp. 7681-7687.

[132]

J. Bruce, M. Veloso, Real-time randomized path planning for robot navi-gation, in:Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, 2002, pp. 2383-2388.

[133]

D. Ferguson, N. Kalra, A. Stentz, Replanning with RRTs, in: IEEE International Conference on Robotics and Automation, 2006, pp. 1243-1248.

[134]

M. Zucker, J. Kuffner, M. Branicky, Multipartite RRTs for rapid replanning in dynamic environments, in: IEEE International Conference on Robotics and Automation, 2007, pp. 1603-1609.

[135]

A. Stentz, Optimal and efficient path planning for partially-known envi-ronments, in: Proceedings of the 1994 IEEE International Conference on Robotics and Automation, IEEE, 1994, pp. 3310-3317.

[136]

Z. Ren, S. Rathinam, M. Likhachev, H. Choset, Multi-objective path-based d* lite, IEEE Robot. Autom. Lett. 7 (2) (2022) 3318-3325.

[137]

O. Adiyatov, H.A. Varol, A novel RRT*-based algorithm for motion plan-ning in dynamic environments, in: IEEE International Conference on Mechatronics and Automation, 2017, pp. 1416-1421.

[138]

Z. Du, S. Liu, Asymptotical RRT-based path planning for mobile robots in dynamic environments, in: 2018 37th Chinese Control Conference, CCC, 2018, pp. 5281-5286.

[139]

M. Otte, E. Frazzoli, RRTX: Asymptotically optimal single-query sampling-based motion planning with quick replanning, Int. J. Robot. Res. 35 (7)(2015) 797-822.

[140]

A.H. Qureshi, S. Mumtaz, W. Khan, A.A.A. Sheikh, K.F. Iqbal, Y. Ayaz, O. Hasan, Augmenting RRT*-planner with local trees for motion planning in complex dynamic environments, in: 2014 19th International Conference on Methods and Models in Automation and Robotics, MMAR, 2014, pp. 657-662.

[141]

Y. Chen, J. Yu, X. Su, G. Luo, Path planning for multi-UAV formation, J. Intell. Robot. Syst. 77 (2015) 229-246.

[142]

X. Wang, A. Sahin, S. Bhattacharya, Coordination-free multi-robot path planning for congestion reduction using topological reasoning, J. Intell. Robot. Syst. 108 (3) (2023) 50.

[143]

C. He, Y. Wan, Y. Gu, F.L. Lewis, Integral reinforcement learning-based multi-robot minimum time-energy path planning subject to collision avoidance and unknown environmental disturbances, IEEE Control Syst. Lett. 5 (3) (2020) 983-988.

[144]

D. Connell, H.M. La, Dynamic path planning and replanning for mobile robots using RRT, in: 2017 IEEE International Conference on Systems, Man, and Cybernetics, SMC, 2017, pp. 1429-1434.

[145]

B. Chandler, M.A. Goodrich, Online RRT* and online FMT*: Rapid replan-ning with dynamic cost,in:2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2017, pp. 6313-6318.

[146]

K. Naderi, RT-RRT*:a real-time path planning algorithm based on RRT*,in: Proceedings of the 8th ACM SIGGRAPH Conference on Motion in Games, 2015, pp. 113-118.

[147]

C. Fulgenzi, C. Tay, A. Spalanzani, C. Laugier, Probabilistic navigation in dynamic environment using rapidly-exploring random trees and Gaussian processes, in:IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008, pp. 1056-1062.

[148]

W. Chi, M.Q.-H. Meng, Risk-RRT*: A robot motion planning algorithm for the human robot coexisting environment,in: 2017 18th International Conference on Advanced Robotics, 2017, pp. 583-588.

[149]

W. Mechlih, Trajectory planning for mobile robots in a dynamic environ-ment, in: Proceedings of VNIS ’93 - Vehicle Navigation and Information Systems Conference, 1993, pp. 551-554.

[150]

R. Firoozi, A. Mir, G.S. Camps, M. Schwager, Occlusion-aware mpc for guaranteed safe robot navigation with unseen dynamic obstacles, ArXiv abs/2211. 09156 (2022) http://dx.doi.org/10.48550/arXiv.2211.09156.

[151]

M. Garzón, E.P. Fotiadis, A. Barrientos, A. Spalanzani, RiskRRT-based planning for interception of moving objects in complex environments, in: ROBOT2013: First Iberian Robotics Conference, 2014, pp. 489-503.

[152]

C. Fulgenzi, A. Spalanzani, C. Laugier, C. Tay, Risk based motion planning and navigation in uncertain dynamic environment, Res. Rep. (2010) 14.

[153]

W. Chi, H. Kono, Y. Tamura, A. Yamashita, H. Asama, M.Q.-H. Meng, A human-friendly robot navigation algorithm using the risk-RRT approach, in: 2016 IEEE International Conference on Real-Time Computing and Robotics, RCAR, 2016, pp. 227-232.

[154]

C. Tay, C. Laugier, Modelling smooth paths using Gaussian processes, Springer Tracts Adv. Robot. 42 (2007) 381-390.

[155]

J. Rios-Martinez, A. Spalanzani, C. Laugier, Understanding human interaction for probabilistic autonomous navigation using risk-RRT approach, in: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011, pp. 2014-2019.

[156]

W. Chi, H. Kono, Y. Tamura, A. Yamashita, H. Asama, M.Q.-H. Meng, A human-friendly robot navigation algorithm using the risk-RRT approach, in: 2016 IEEE International Conference on Real-Time Computing and Robotics, RCAR, 2016, pp. 227-232.

[157]

W. Chi, C. Wang, J. Wang, M.Q.-H. Meng, Risk-DTRRT-based optimal motion planning algorithm for mobile robots, IEEE Trans. Autom. Sci. Eng. 16 (3) (2019) 1271-1288.

[158]

B.D. Argall, S. Chernova, M. Veloso, B. Browning, A survey of robot learning from demonstration, Robot. Auton. Syst. 57 (5) (2009) 469-483.

[159]

S.S. Samsani, H. Mutahira, M.S. Muhammad, Memory-based crowd-aware robot navigation using deep reinforcement learning, Complex Intell. Syst. 9 (2) (2023) 2147-2158.

[160]

J. Qin, J. Qin, J. Qiu, Q. Liu, M. Li, Q. Ma, SRL-ORCA: A socially aware multi-agent mapless navigation algorithm in complex dynamic scenes, IEEE Robot. Autom. Lett. 9 (1) (2023) 143-150.

[161]

E. Sisbot, L.F. Marin-Urias, R. Alami, T. Siméon, A human aware mobile robot motion planner, IEEE Trans. Robot. 23 (5) (2007) 874-883.

[162]

N. Pérez-Higueras, F. Caballero, L. Merino, Learning human-aware path planning with fully convolutional networks, in: 2018 IEEE International Conference on Robotics and Automation, ICRA, 2018, pp. 5897-5902.

[163]

A.Y. Ng, S. Russell, Algorithms for inverse reinforcement learning, in: International Conference on Machine Learning, 2000, pp. 663-670.

[164]

N. Pérez-Higueras, F. Caballero, L. Merino, Teaching robot navigation behaviors to optimal RRT planners, Int. J. Soc. Robot. 10 (2018) 235-249.

[165]

N. Pérez-Higueras, F. Caballero, L. Merino, Learning robot navigation behaviors by demonstration using a RRT* planner, in: International Conference on Social Robotics, 2016, pp. 1-10.

[166]

B.D. Ziebart, A.L. Maas, J.A. Bagnell, A.K. Dey, Maximum entropy inverse reinforcement learning, in: Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, 2008, pp. 1433-1438.

[167]

K. Shiarlis, J. Messias, S. Whiteson, Rapidly exploring learning trees, in: 2017 IEEE International Conference on Robotics and Automation, ICRA, 2017.

[168]

Z. Chen, H. Wu, Y. Chen, L. Cheng, B. Zhang, Patrol robot path planning in nuclear power plant using an interval multi-objective particle swarm optimization algorithm, Appl. Soft Comput. 116 (2022) 108192.

[169]

Y. Chen, S. Ren, Z. Chen, M. Chen, H. Wu, Path planning for vehicle-borne system consisting of multi air-ground robots, Robotica 38 (3) (2020) 493-511.

[170]

B. Tang, K. Xiang, M. Pang, Z. Zhu, Multi-robot path planning using an improved self-adaptive particle swarm optimization, Int. J. Adv. Robot. Syst. 17 (5) (2020) 1729881420936154.

[171]

B. Sahu, P.K. Das, M.R. Kabat, Multi-robot cooperation and path planning for stick transporting using improved Q-learning and democratic robotics PSO, J. Comput. Sci. 60 (2022) 101637.

[172]

R.A. Saeed, D. Reforgiato Recupero, P. Remagnino, The boundary node method for multi-robot multi-goal path planning problems, Expert Syst. 38 (6) (2021) e12691.

[173]

W. Xu, Q. Wang, M. Yu, D. Zhao, Path planning for multi-AGV systems based on two-stage scheduling, Int. J. Performabil. Eng. 13 (8) (2017) 1347.

[174]

H. Huang, T. Zhuo, Multi-model cooperative task assignment and path planning of multiple UCAV formation, Multimedia Tools Appl. 78 (2019) 415-436.

[175]

Y.F. Chen, M. Liu, M. Everett, J.P. How, Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning, in: 2017 IEEE International Conference on Robotics and Automation, ICRA, IEEE, 2017, pp. 285-292.

[176]

D. Wang, T. Fan, T. Han, J. Pan, A two-stage reinforcement learning approach for multi-UAV collision avoidance under imperfect sensing, IEEE Robot. Autom. Lett. 5 (2) (2020) 3098-3105.

[177]

H. Qie, D. Shi, T. Shen, X. Xu, Y. Li, L. Wang, Joint optimization of multi-UAV target assignment and path planning based on multi-agent reinforcement learning, IEEE Access 7 (2019) 146264-146272.

[178]

Y. Jia, Y. Song, B. Xiong, J. Cheng, W. Zhang, S.X. Yang, S. Kwong, Hierarchical perception-improving for decentralized multi-robot motion planning in complex scenarios, IEEE Trans. Intell. Transp. Syst. 25 (7) (2024) 6486-6500.

[179]

W. Ou, B. Luo, X. Xu, Y. Feng, Y. Zhao, Reinforcement learned multi-agent cooperative navigation in hybrid environment with relational graph learning, IEEE Trans. Artif. Intell. (2024).

[180]

J. Zhao, H. Ye, Y. Zhan, H. Zhang, Human orientation estimation under partial occlusion, 2024, arXiv preprint arXiv:2404.14139.

[181]

H. Ye, J. Zhao, Y. Pan, W. Chen, L. He, H. Zhang, Robot person following under partial occlusion, in: 2023 IEEE International Conference on Robotics and Automation, ICRA, 2023, pp. 7591-7597.

[182]

Z. Mai, R. Li, J. Jeong, D. Quispe, H. Kim, S. Sanner, Online continual learning in image classification: An empirical survey, Neurocomputing 469 (2022) 28-51.

[183]

H. Ye, J. Zhao, Y. Zhan, W. Chen, L. He, H. Zhang, Person re-identification for robot person following with online continual learning, IEEE Robot. Autom. Lett. (2024).
