Fast and accurate visual odometry from a monocular camera

Xin YANG, Tangli XUE, Hongcheng LUO, Jiabin GUO

Front. Comput. Sci., 2019, 13(6): 1326–1336. DOI: 10.1007/s11704-018-6600-8
RESEARCH ARTICLE


Abstract

This paper presents a semi-dense visual odometry system that is accurate, robust, and able to run in real time on mobile devices such as smartphones, AR glasses, and small drones. The key contributions of our system are: 1) a modified pyramidal Lucas-Kanade algorithm that incorporates spatial and depth constraints for fast and accurate camera pose estimation; 2) adaptive image resizing based on inertial sensors, which greatly accelerates tracking with little accuracy degradation; and 3) an ultrafast binary feature descriptor computed directly from the intensities of a resized and smoothed image patch around each pixel, which is sufficiently effective for relocalization. A quantitative evaluation on public datasets demonstrates that our system achieves better tracking accuracy and up to about 2X faster tracking speed compared with the state-of-the-art monocular SLAM system LSD-SLAM. For the relocalization task, our system is 2.0X∼4.6X faster than DBoW2 while achieving similar accuracy.
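The abstract's third contribution, a binary descriptor built directly from the intensities of a smoothed patch, can be illustrated with a minimal sketch. This is not the paper's exact method: the patch size, the 3x3 box smoothing (standing in for a proper Gaussian blur), and the mean-threshold bit rule are all assumptions chosen for simplicity; relocalization candidates would then be compared by Hamming distance.

```python
import numpy as np

def patch_descriptor(image, x, y, patch=8):
    """Illustrative sketch (not the paper's exact method): build a binary
    descriptor from a smoothed patch around pixel (x, y). Each bit is 1
    if the smoothed pixel is brighter than the patch mean."""
    half = patch // 2
    p = image[y - half:y + half, x - half:x + half].astype(np.float32)
    # 3x3 box smoothing via edge padding and averaging (Gaussian stand-in)
    padded = np.pad(p, 1, mode="edge")
    smoothed = sum(padded[dy:dy + patch, dx:dx + patch]
                   for dy in range(3) for dx in range(3)) / 9.0
    # Threshold against the patch mean: one bit per pixel (64 bits for 8x8)
    return (smoothed.ravel() > smoothed.mean()).astype(np.uint8)

def hamming(a, b):
    """Bit-vector Hamming distance used to match descriptors."""
    return int(np.count_nonzero(a != b))
```

In a relocalization setting, descriptors from the current frame would be matched against stored keyframe descriptors by minimum Hamming distance; because each descriptor is just a thresholded patch, no gradient or orientation computation is needed, which is what makes this style of descriptor cheap to extract.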

Keywords

visual odometry / mobile devices / direct tracking / relocalization / inertial sensing / binary feature

Cite this article

Xin YANG, Tangli XUE, Hongcheng LUO, Jiabin GUO. Fast and accurate visual odometry from a monocular camera. Front. Comput. Sci., 2019, 13(6): 1326‒1336 https://doi.org/10.1007/s11704-018-6600-8

References

[1]
Gálvez-López D, Tardos J D. Bags of binary words for fast place recognition in image sequences. IEEE Transactions on Robotics, 2012, 28(5): 1188–1197
[2]
Mur-Artal R, Montiel J M M, Tardós J D. ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Transactions on Robotics, 2015, 31(5): 1147–1163
[3]
Klein G, Murray D. Parallel tracking and mapping for small AR workspaces. In: Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. 2007, 225–234
[4]
Engel J, Schöps T, Cremers D. LSD-SLAM: large-scale direct monocular SLAM. In: Proceedings of the European Conference on Computer Vision. 2014, 834–849
[5]
Engel J, Sturm J, Cremers D. Semi-dense visual odometry for a monocular camera. In: Proceedings of the IEEE International Conference on Computer Vision. 2013, 1449–1456
[6]
Newcombe R A, Lovegrove S J, Davison A J. DTAM: dense tracking and mapping in real-time. In: Proceedings of the 2011 IEEE International Conference on Computer Vision. 2011, 2320–2327
[7]
Gauglitz S, Sweeney C, Ventura J, Turk M, Höllerer T. Live tracking and mapping from both general and rotation-only camera motion. In: Proceedings of the 2012 IEEE International Symposium on Mixed and Augmented Reality. 2012, 13–22
[8]
Gauglitz S, Sweeney C, Ventura J, Turk M, Höllerer T. Model estimation and selection towards unconstrained real-time tracking and mapping. IEEE Transactions on Visualization and Computer Graphics, 2014, 20(6): 825–838
[9]
Mur-Artal R, Tardós J D. Probabilistic semi-dense mapping from highly accurate feature-based monocular SLAM. Robotics: Science and Systems, 2015
[10]
Schöps T, Engel J, Cremers D. Semi-dense visual odometry for AR on a smartphone. In: Proceedings of the IEEE International Symposium on Mixed and Augmented Reality. 2014, 145–150
[11]
Forster C, Pizzoli M, Scaramuzza D. SVO: fast semi-direct monocular visual odometry. In: Proceedings of the IEEE International Conference on Robotics and Automation. 2014, 15–22
[12]
Bouguet J Y. Pyramidal implementation of the affine Lucas-Kanade feature tracker: description of the algorithm. Intel Corporation, 2001, 5(1–10): 4
[13]
Rublee E, Rabaud V, Konolige K, Bradski G. ORB: an efficient alternative to SIFT or SURF. In: Proceedings of the International Conference on Computer Vision. 2011, 2564–2571
[14]
Sturm J, Engelhard N, Endres F, Burgard W, Cremers D. A benchmark for the evaluation of RGB-D SLAM systems. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. 2012, 573–580
[15]
Smith M, Baldwin I, Churchill W, Paul R, Newman P. The new college vision and laser data set. The International Journal of Robotics Research, 2009, 28(5): 595–599
[16]
Blanco J L, Moreno F A, Gonzalez J. A collection of outdoor robotic datasets with centimeter-accuracy ground truth. Autonomous Robots, 2009, 27(4): 327–351

RIGHTS & PERMISSIONS

© 2018 Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature