Design of an enhanced visual odometry by building and matching compressive panoramic landmarks online

Wei LU, Zhi-yu XIANG, Ji-lin LIU

PDF(2220 KB)
Front. Inform. Technol. Electron. Eng., 2015, 16(2): 152-165. DOI: 10.1631/FITEE.1400139
Original Article


Abstract

Efficient and precise localization is a prerequisite for the intelligent navigation of mobile robots. Traditional visual localization systems, such as visual odometry (VO) and simultaneous localization and mapping (SLAM), suffer from two shortcomings: a drift problem caused by accumulated localization error, and erroneous motion estimation due to illumination variation and moving objects. In this paper, we propose an enhanced VO by introducing a panoramic camera into the traditional stereo-only VO system. Benefiting from the 360° field of view, the panoramic camera is responsible for three tasks: (1) detecting road junctions and building a landmark library online; (2) correcting the robot’s position when the landmarks are revisited with any orientation; (3) working as a panoramic compass when the stereo VO cannot provide reliable positioning results. To use the large-sized panoramic images efficiently, the concept of compressed sensing is introduced into the solution and an adaptive compressive feature is presented. Combined with our previous two-stage local binocular bundle adjustment (TLBBA) stereo VO, the new system can obtain reliable positioning results in quasi-real time. Experimental results of challenging long-range tests show that our enhanced VO is much more accurate and robust than the traditional VO, thanks to the compressive panoramic landmarks built online.
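The "compressive feature" idea the abstract refers to rests on compressed sensing: a high-dimensional image descriptor can be projected through a sparse random measurement matrix into a much shorter vector that still approximately preserves distances, so landmark matching can run on the short vectors. The sketch below illustrates this principle only (in the style of sparse random projections, as used in real-time compressive tracking); the matrix construction and dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sparse_measurement_matrix(m, n, s=3, seed=0):
    # Very sparse random projection (Achlioptas-style): each entry is
    # +sqrt(s), 0, or -sqrt(s) with probabilities 1/(2s), 1-1/s, 1/(2s).
    # Most entries are zero, so the projection is cheap to apply.
    rng = np.random.default_rng(seed)
    u = rng.random((m, n))
    R = np.zeros((m, n))
    R[u < 1.0 / (2 * s)] = np.sqrt(s)
    R[u > 1.0 - 1.0 / (2 * s)] = -np.sqrt(s)
    # Scale so that E[||R x||^2] = ||x||^2 (norms preserved on average).
    return R / np.sqrt(m)

# Compress a hypothetical 10000-dim panoramic descriptor to 100 measurements.
n, m = 10000, 100
R = sparse_measurement_matrix(m, n)
x = np.random.default_rng(1).standard_normal(n)   # stand-in descriptor
y = R @ x                                         # compressive feature y = R x
print(y.shape)                                    # (100,)
```

Matching two landmarks then reduces to comparing their 100-dimensional compressive features, e.g. by Euclidean distance, at a fraction of the cost of comparing full panoramic descriptors.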

Keywords

Visual odometry / Panoramic landmark / Landmark matching / Compressed sensing / Adaptive compressive feature

Cite this article

Wei LU, Zhi-yu XIANG, Ji-lin LIU. Design of an enhanced visual odometry by building and matching compressive panoramic landmarks online. Front. Inform. Technol. Electron. Eng., 2015, 16(2): 152-165. https://doi.org/10.1631/FITEE.1400139


RIGHTS & PERMISSIONS

© 2015 Higher Education Press
