Detection of loop closure in visual SLAM: a stacked assorted auto-encoder based approach

Yuan Luo, Yuting Xiao, Yi Zhang, Nianwen Zeng

Optoelectronics Letters ›› 2021, Vol. 17 ›› Issue (6): 354-360. DOI: 10.1007/s11801-021-0156-9

Abstract

The current mainstream methods for loop closure detection in visual simultaneous localization and mapping (SLAM) are based on the bag-of-words (BoW) model. However, traditional BoW-based approaches are strongly affected by changes in scene appearance, which leads to poor robustness and low precision. To improve the precision and robustness of loop closure detection, a novel approach based on a stacked assorted auto-encoder (SAAE) is proposed. A traditional stacked auto-encoder is built from multiple layers of the same type of auto-encoder; although it extracts scene-image features better than the visual BoW model, its output features are high-dimensional. The proposed SAAE is instead composed of multiple layers of denoising, convolutional and sparse auto-encoders: the denoising auto-encoder improves the robustness of the image features, the convolutional auto-encoder preserves the spatial information of the image, and the sparse auto-encoder reduces the dimensionality of the image features. The SAAE can therefore extract low- to high-level features of the scene image while preserving its local spatial characteristics, which makes the output features more robust. The performance of the SAAE is evaluated in a comparison study on the New College and City Centre datasets. The proposed methodology effectively improves the precision and robustness of loop closure detection in visual SLAM.
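The abstract describes how the three auto-encoder types divide the work. Below is a minimal sketch of that idea in PyTorch; it is an illustrative reconstruction rather than the authors' implementation, and the input resolution (64x64 grey-scale), layer sizes, noise level, code dimension and sparsity weights are all assumed values.

# Sketch of a stacked assorted auto-encoder (SAAE): a denoising corruption step,
# a convolutional encoder/decoder for spatial structure, and a sparse bottleneck
# that yields a compact image descriptor. All hyper-parameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAAE(nn.Module):
    def __init__(self, noise_std=0.1, code_dim=128):
        super().__init__()
        self.noise_std = noise_std
        # Convolutional auto-encoder stage: preserves spatial information.
        self.conv_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        )
        self.conv_dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )
        # Sparse auto-encoder stage: compresses conv features to a low-dimensional code.
        self.sparse_enc = nn.Linear(32 * 16 * 16, code_dim)
        self.sparse_dec = nn.Linear(code_dim, 32 * 16 * 16)

    def forward(self, x):
        # Denoising stage: corrupt the input, then reconstruct the clean image.
        noisy = x + self.noise_std * torch.randn_like(x)
        h = self.conv_enc(noisy)
        code = torch.sigmoid(self.sparse_enc(h.flatten(1)))  # compact image descriptor
        x_rec = self.conv_dec(self.sparse_dec(code).view_as(h))
        return code, x_rec

    def loss(self, x, rho=0.05, beta=1e-3):
        code, x_rec = self(x)
        recon = F.mse_loss(x_rec, x)  # reconstruct the clean input (denoising criterion)
        # KL sparsity penalty pulls the mean activation of each code unit toward rho.
        rho_hat = code.mean(dim=0).clamp(1e-6, 1 - 1e-6)
        kl = (rho * torch.log(rho / rho_hat)
              + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()
        return recon + beta * kl

# Usage: score a loop closure candidate pair by cosine similarity of their codes.
# model = SAAE(); sim = F.cosine_similarity(model(img_a)[0], model(img_b)[0])

Loop closure candidates would then be scored by comparing the low-dimensional codes of two frames, for example with cosine similarity, and image pairs whose similarity exceeds a threshold are treated as loop closure hypotheses.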

Cite this article

Yuan Luo, Yuting Xiao, Yi Zhang, Nianwen Zeng. Detection of loop closure in visual SLAM: a stacked assorted auto-encoder based approach. Optoelectronics Letters, 2021, 17(6): 354-360. DOI: 10.1007/s11801-021-0156-9
