A vision-centered multi-sensor fusing approach to self-localization and obstacle perception for robotic cars

Jian-ru XUE, Di WANG, Shao-yi DU, Di-xiao CUI, Yong HUANG, Nan-ning ZHENG

Front. Inform. Technol. Electron. Eng., 2017, 18(1): 122-138. DOI: 10.1631/FITEE.1601873

Abstract

Most state-of-the-art robotic cars' perception systems are quite different from the way a human driver understands traffic environments. First, humans assimilate information from the traffic scene mainly through visual perception, while machine perception of traffic environments needs to fuse information from several different kinds of sensors to meet safety-critical requirements. Second, a robotic car requires nearly 100% correct perception results for autonomous driving, whereas an experienced human driver copes well with dynamic traffic environments, in which machine perception could easily produce noisy results. In this paper, we propose a vision-centered multi-sensor fusing framework for traffic environment perception in autonomous driving, which fuses camera, LIDAR, and GIS information consistently via both geometrical and semantic constraints for efficient self-localization and obstacle perception. We also discuss robust machine vision algorithms that have been successfully integrated with the framework, addressing multiple levels of machine vision techniques, from collecting training data, efficiently processing sensor data, and extracting low-level features, to higher-level object and environment mapping. The proposed framework has been tested extensively in actual urban scenes with our self-developed robotic cars for eight years. The empirical results validate its robustness and efficiency.
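The paper's full fusion pipeline is not reproduced here, but one simple form of the geometrical constraint the abstract mentions can be sketched: gating camera detections with LIDAR geometry. Below is a minimal Python sketch, assuming a pinhole camera model; the intrinsic matrix K, the LIDAR-to-camera extrinsic (R, t), and all function names are hypothetical illustration values, not the authors' implementation. It projects LIDAR points into the image and attaches a depth estimate to a 2D camera detection.

```python
import numpy as np

# Hypothetical calibration for illustration only. A real system would use a
# calibrated intrinsic matrix K (e.g., via Zhang's method) and a measured
# LIDAR-to-camera extrinsic (R, t).
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                    # LIDAR-to-camera rotation (identity here, so
t = np.array([0.0, -0.5, 0.0])   # both frames share the camera's z-forward axis)

def project_lidar_to_image(points_lidar):
    """Project Nx3 LIDAR points to pixel coordinates; drop points behind the camera."""
    pts_cam = points_lidar @ R.T + t        # transform into the camera frame
    in_front = pts_cam[:, 2] > 0.1          # keep points in front of the camera
    pts_cam = pts_cam[in_front]
    uv = pts_cam @ K.T                      # apply intrinsics
    uv = uv[:, :2] / uv[:, 2:3]             # perspective division
    return uv, pts_cam[:, 2]                # pixel coordinates and depths

def depth_for_detection(bbox, uv, depths):
    """Median depth of LIDAR points falling inside a camera box (u1, v1, u2, v2)."""
    u1, v1, u2, v2 = bbox
    mask = (uv[:, 0] >= u1) & (uv[:, 0] <= u2) & \
           (uv[:, 1] >= v1) & (uv[:, 1] <= v2)
    return float(np.median(depths[mask])) if mask.any() else None

# Toy usage: a small point cluster about 12 m ahead and one camera detection.
cloud = np.array([[0.2, 0.0, 12.0], [0.0, 0.1, 12.2], [-0.1, 0.0, 11.9]])
uv, depths = project_lidar_to_image(cloud)
print(depth_for_detection((600, 330, 680, 400), uv, depths))  # -> 12.0
```

In a vision-centered design of this kind, the camera supplies the semantic label (what the object is) while the projected LIDAR points supply the metric depth, so a detection without supporting range returns None and can be treated as less trustworthy.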

Keywords

Visual perception / Self-localization / Mapping / Motion planning / Robotic car

Cite this article

Jian-ru XUE, Di WANG, Shao-yi DU, Di-xiao CUI, Yong HUANG, Nan-ning ZHENG. A vision-centered multi-sensor fusing approach to self-localization and obstacle perception for robotic cars. Front. Inform. Technol. Electron. Eng., 2017, 18(1): 122-138. https://doi.org/10.1631/FITEE.1601873


RIGHTS & PERMISSIONS

© 2017 Zhejiang University and Springer-Verlag Berlin Heidelberg