A marker-based method for visual-inertial initialization

Kang An, Hao Fan, Junyu Dong

Intelligent Marine Technology and Systems ›› 2024, Vol. 2 ›› Issue (1)

DOI: 10.1007/s44295-024-00041-4
Research Paper


Abstract

Accurate and robust initialization is significant for visual-inertial simultaneous localization and mapping (VI-SLAM). Existing methods solve VI-SLAM initialization based on visual information. However, inertial measurement unit (IMU) parameter estimation performed underwater is subject to two major limitations. First, IMU preintegration error accumulates over time, reducing accuracy. Second, it is difficult for robots to achieve sufficient movement underwater, which affects the reliability of initialization results. To better balance the efficiency and accuracy of VI-SLAM initialization, this study proposes an initialization method using a designed marker calibration device. First, we utilize both marker points and ORB feature points for fast and robust visual trajectory estimation with real motion scale, and we estimate the gravity direction using the marker calibration device. Second, the IMU trajectory is aligned with the visual trajectory, and the IMU parameters are solved using the initial gravity direction. Experiments verify that the proposed method improves both the accuracy and the efficiency of VI-SLAM initialization. The code is available at https://gitee.com/litseaak/mmorb.
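The abstract's second step aligns the IMU-derived trajectory with the marker-scaled visual trajectory. Because the marker calibration device already fixes the metric scale, this alignment reduces to a rigid (rotation plus translation) fit between two point sequences, which is classically solved in closed form by the Kabsch/Umeyama procedure. The sketch below is not the authors' implementation (their code is at the Gitee link above); it is a minimal illustration of such a rigid alignment, with all names (`align_trajectories`, `imu_xyz`, `vis_xyz`) hypothetical.

```python
import numpy as np

def align_trajectories(imu_xyz, vis_xyz):
    """Rigid alignment vis ~= R @ imu + t via the Kabsch procedure.

    imu_xyz, vis_xyz: (N, 3) arrays of corresponding positions.
    Scale is omitted because the marker device already gives the
    visual trajectory a metric scale.
    """
    # Center both point sets on their centroids.
    mu_i = imu_xyz.mean(axis=0)
    mu_v = vis_xyz.mean(axis=0)
    # Cross-covariance of the centered sets.
    H = (imu_xyz - mu_i).T @ (vis_xyz - mu_v)
    # SVD gives the optimal rotation; the diag(1, 1, d) factor
    # guards against a reflection (det = -1) solution.
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Translation maps the IMU centroid onto the visual centroid.
    t = mu_v - R @ mu_i
    return R, t
```

In a full initialization pipeline, the residual between the aligned trajectories would then feed the estimation of the IMU biases and the gravity direction, with the marker device supplying the initial gravity prior described in the abstract.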

Keywords

SLAM / Initialization / VI-SLAM / VIO / Marker

Cite this article

Download citation ▾
Kang An, Hao Fan, Junyu Dong. A marker-based method for visual-inertial initialization. Intelligent Marine Technology and Systems, 2024, 2(1). DOI: 10.1007/s44295-024-00041-4



Funding

Innovative Research Group Project of the National Natural Science Foundation of China (42106193, 41927805)
