Improved vision-only localization method for mobile robots in indoor environments

Gang Huang, Liangzhu Lu, Yifan Zhang, Gangfu Cao, Zhe Zhou

Autonomous Intelligent Systems, 2024, Vol. 4, Issue 1: 18. DOI: 10.1007/s43684-024-00075-9
Original Article


Abstract

To enable a mobile robot to adjust its pose for accurate operation after reaching a target point in an indoor environment, a localization method based on scene modeling and recognition is designed. First, an offline scene model is built from both handcrafted features and semantic features. Then, scene recognition and location calculation are performed online against this offline model. To improve the accuracy of recognition and location calculation, this paper proposes a method that integrates semantic feature matching with handcrafted feature matching. Based on the scene recognition result, an accurate location is obtained through a metric calculation using 3D information. Experimental results show that the scene recognition accuracy exceeds 90% and the average localization error is less than 1 meter, demonstrating that the proposed improved method achieves better localization performance.
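
Only the abstract is available on this page, so the exact fusion scheme is not specified here. The sketch below is a minimal Python illustration of the general idea of combining handcrafted feature matching with semantic feature matching for scene recognition; ORB descriptors, the Jaccard overlap of detected object classes, and the weight alpha are assumptions for illustration, not the authors' exact design.

# Minimal sketch: fuse handcrafted and semantic matching scores to pick the
# best-matching scene from an offline scene model. ORB, the Jaccard overlap
# of object classes, and the weight `alpha` are illustrative assumptions.
import cv2

def handcrafted_score(img_query, img_scene, max_matches=100):
    """Similarity in [0, 1] from ORB descriptor matches."""
    orb = cv2.ORB_create()
    _, des_q = orb.detectAndCompute(img_query, None)
    _, des_s = orb.detectAndCompute(img_scene, None)
    if des_q is None or des_s is None:
        return 0.0
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = bf.match(des_q, des_s)
    # Normalize the number of matches into [0, 1].
    return min(len(matches), max_matches) / max_matches

def semantic_score(classes_query, classes_scene):
    """Similarity in [0, 1] as Jaccard overlap of detected object classes."""
    q, s = set(classes_query), set(classes_scene)
    return len(q & s) / len(q | s) if (q | s) else 0.0

def fused_scene_score(img_query, classes_query, scene, alpha=0.5):
    """Weighted fusion of handcrafted and semantic matching scores."""
    h = handcrafted_score(img_query, scene["image"])
    s = semantic_score(classes_query, scene["classes"])
    return alpha * h + (1.0 - alpha) * s

# Usage: scenes is the offline scene model, e.g. a list of
# {"image": ..., "classes": [...]} entries; pick the highest fused score.
# best = max(scenes, key=lambda sc: fused_scene_score(query_img, query_classes, sc))

For the subsequent metric location step, the pose at the recognized scene could, for instance, be recovered from 2D-3D correspondences (e.g., with cv2.solvePnP); the abstract only states that the location is obtained through a metric calculation using 3D information.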

Cite this article

Gang Huang, Liangzhu Lu, Yifan Zhang, Gangfu Cao, Zhe Zhou. Improved vision-only localization method for mobile robots in indoor environments. Autonomous Intelligent Systems, 2024, 4(1): 18. DOI: 10.1007/s43684-024-00075-9


Funding

Natural Science Foundation of Hubei Province (2024AFB273)

Hubei Key Laboratory of Power System Design and Test for Electrical Vehicle (ZDSYS202425)
