A survey on the visual perception of humanoid robot

Teng Bin, Hanming Yan, Ning Wang, Milutin N. Nikolić, Jianming Yao, Tianwei Zhang

Biomimetic Intelligence and Robotics, 2025, 5(1): 100197. DOI: 10.1016/j.birob.2024.100197
Review

Abstract

In recent years, humanoid robots have gained significant attention due to their potential to revolutionize industries ranging from healthcare to manufacturing. A key factor driving this transformation is the advancement of visual perception systems, which are crucial for making humanoid robots more intelligent and autonomous. Despite this progress, the potential of vision-based technologies in humanoid robots has yet to be fully realized. This review provides a comprehensive overview of recent advances in visual perception for humanoid robots, focusing on applications in state estimation and environmental interaction. By summarizing key developments and analyzing the challenges and opportunities in these areas, the paper seeks to inspire future research that can unlock new capabilities for humanoid robots, enabling them to better navigate complex environments, perform intricate tasks, and interact seamlessly with humans.

Cite this article

Teng Bin, Hanming Yan, Ning Wang, Milutin N. Nikolić, Jianming Yao, Tianwei Zhang. A survey on the visual perception of humanoid robot. Biomimetic Intelligence and Robotics, 2025, 5(1): 100197. DOI: 10.1016/j.birob.2024.100197


1 CRediT authorship contribution statement

Teng Bin: Writing - original draft, Visualization, Validation, Supervision, Software, Conceptualization. Hanming Yan: Data curation. Ning Wang: Formal analysis. Milutin N. Nikolić: Writing - review & editing. Jianming Yao: Data curation. Tianwei Zhang: Writing - review & editing, Writing - original draft, Project administration, Funding acquisition, Formal analysis, Data curation.

2 Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

3 Acknowledgments

This work was supported by the National Natural Science Foundation of China (62306185), the Guangdong Basic and Applied Basic Research Foundation, China (2024A1515012065), and the Shenzhen Science and Technology Program, China (JSGGKQTD20221101115656029 and KJZD20230923113801004).

