Human-like dexterous manipulation for anthropomorphic five-fingered hands

Yayu Huang, Dongxuan Fan, Haonan Duan, Dashun Yan, Wen Qi, Jia Sun, Qian Liu, Peng Wang

Biomimetic Intelligence and Robotics ›› 2025, Vol. 5 ›› Issue (1): 100212. DOI: 10.1016/j.birob.2025.100212

Review

Abstract

Humans excel at dexterous manipulation, yet achieving human-level dexterity remains a significant challenge for robots. Breakthroughs in the design of anthropomorphic robotic hands, together with advances in visual and tactile perception, have shown clear promise in addressing this challenge. However, coping with the inevitable uncertainty of unstructured, dynamic environments in human-like dexterous manipulation tasks, especially for anthropomorphic five-fingered hands, remains an open problem. In this paper, we present a focused review of human-like dexterous manipulation for anthropomorphic five-fingered hands. We begin by defining human-like dexterity and outlining the tasks associated with human-like robot dexterous manipulation. We then examine anthropomorphism and anthropomorphic five-fingered hands, covering definitions, robotic design, and evaluation criteria. Next, we review the learning methods for achieving human-like dexterity with anthropomorphic five-fingered hands, including imitation learning, reinforcement learning, and their integration. Finally, we discuss existing challenges and propose future research directions. This review aims to stimulate interest in scientific research and future applications.
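The learning paradigms surveyed here, imitation learning (supervised regression onto demonstrations) and reinforcement learning (reward-driven updates), and their integration, can be sketched in miniature as follows. This is an illustrative toy, not any method from the literature reviewed: the linear policy, the synthetic "hand state to joint command" demonstrations, the shaped reward, and the trade-off weight `beta` are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert rule mapping a 5-D hand/object state to a joint command;
# in practice this would come from human demonstrations (e.g. teleoperation).
w_expert = np.array([0.5, -0.2, 0.1, 0.3, 0.0])

states = rng.normal(size=(64, 5))        # toy batch of hand/object states
expert_actions = states @ w_expert       # demonstrated joint commands

W = np.zeros(5)                          # linear policy parameters
beta = 0.5                               # imitation vs. reinforcement weight

def reward(s, a):
    # Hypothetical shaped reward: penalize deviation from the expert's action.
    return -np.abs(a - s @ w_expert)

for _ in range(200):
    a = states @ W
    # Imitation term: mean-squared-error gradient against demonstrations.
    g_il = states.T @ (a - expert_actions) / len(states)
    # Crude reinforcement term: perturb actions, estimate an advantage, and
    # move toward perturbations that increased the reward.
    perturbed = a + rng.normal(scale=0.1, size=a.shape)
    adv = reward(states, perturbed) - reward(states, a)
    g_rl = -states.T @ (adv * (perturbed - a)) / len(states)
    # Combined update, as in methods that mix demonstrations with rewards.
    W -= 0.1 * (beta * g_il + (1 - beta) * g_rl)
```

After training, `W` approaches the expert weights: the imitation term anchors the policy to the demonstrations while the reward term refines it, which is the division of labor the integrated methods reviewed below exploit.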

Keywords

Anthropomorphic five-fingered hands / Human-like dexterous manipulation / Robot learning / Review

Cite this article

Yayu Huang, Dongxuan Fan, Haonan Duan, Dashun Yan, Wen Qi, Jia Sun, Qian Liu, Peng Wang. Human-like dexterous manipulation for anthropomorphic five-fingered hands. Biomimetic Intelligence and Robotics, 2025, 5(1): 100212. DOI: 10.1016/j.birob.2025.100212


1 CRediT authorship contribution statement

Yayu Huang: Writing - original draft, Visualization, Methodology, Conceptualization. Dongxuan Fan: Resources, Investigation, Data curation. Haonan Duan: Writing - review & editing. Dashun Yan: Writing - review & editing. Wen Qi: Writing - review & editing. Jia Sun: Visualization, Validation. Qian Liu: Visualization, Validation. Peng Wang: Writing - review & editing, Supervision, Project administration, Funding acquisition.

2 Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

3 Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (91748131, 62006229, and 61771471), in part by the Young Scientists Fund of the National Natural Science Foundation of China (62303454), in part by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB32050106), and in part by the InnoHK Project.


[152]

M.S. Kopicki, D. Belter, J.L. Wyatt, Learning better generative models for dexterous, single-view grasping of novel objects, Int. J. Robot. Res. 38 (10-11) (2019) 1246-1267.[153]Y. Liu, Y. Yang, Y. Wang, X. Wu, J. Wang, Y. Yao, S. Schwertfeger, S. Yang,

[153]

W. Wang, J. Yu, et al., Realdex: Towards human-like grasping for robotic dexterous hand, 2024, arXiv preprint arXiv:2402.13853.

[154]

A. Gupta, C. Eppner, S. Levine, P. Abbeel, Learning dexterous manipulation for a soft robotic hand from human demonstrations, in: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, IEEE, 2016, pp. 3786-3793.

[155]

V. Kumar, E. Todorov, S. Levine, Optimal control with learned local mod-els: Application to dexterous manipulation, in: 2016 IEEE International Conference on Robotics and Automation, ICRA, IEEE, 2016, pp. 378-383.

[156]

M. Omer, R. Ahmed, B. Rosman, S.F. Babikir, Model predictive-actor critic reinforcement learning for dexterous manipulation, in: 2020 International Conference on Computer, Control, Electrical, and Electronics Engineering, ICCCEEE, IEEE, 2021, pp. 1-6.

[157]

J. Schulman, S. Levine, P. Abbeel, M. Jordan, P. Moritz, Trust region policy optimization, in: International Conference on Machine Learning, PMLR, 2015, pp. 1889-1897.

[158]

M. Mudigonda, P. Agrawal, M. Deweese, J. Malik, Investigating deep reinforcement learning for grasping objects with an anthropomorphic hand, 2018.

[159]

J. Schulman, F. Wolski, P. Dhariwal, A. Radford, O. Klimov, Proximal policy optimization algorithms, 2017, arXiv preprint arXiv:1707.06347.

[160]

B. Wu, I. Akinola, A. Gupta, F. Xu, J. Varley, D. Watkins-Valls, P.K. Allen, Generative attention learning: a ‘‘GenerAL’’ framework for high-performance multi-fingered grasping in clutter, Auton. Robots 44 (6)(2020) 971-990.

[161]

S. Dasari, A. Gupta, V. Kumar, Learning dexterous manipulation from exemplar object trajectories and pre-grasps, in: 2023 IEEE International Conference on Robotics and Automation, ICRA, IEEE, 2023, pp. 3889-3896.

[162]

T. Wu, M. Wu, J. Zhang, Y. Gan, H. Dong,Learning score-based grasp-ing primitive for human-assisting dexterous grasping, Adv. Neural Inf. Process. Syst. 36 (2024).

[163]

S. Zhaole, J. Zhu, R.B. Fisher, Dexdlo: Learning goal-conditioned dexterous policy for dynamic manipulation of deformable linear objects, in: 2024 IEEE International Conference on Robotics and Automation, ICRA, IEEE, 2024, pp. 16009-16015.

[164]

Z. He, M. Ciocarlie, Discovering synergies for robot manipulation with multi-task reinforcement learning, 2021, arXiv preprint arXiv:2110.01530.

[165]

F. Zhang, Y. Chen, H. Qiao, Z. Liu, SURRL: Structural unsupervised representations for robot learning, IEEE Trans. Cogn. Dev. Syst. (2022).

[166]

B. Li, S. Qiu, J. Bai, B. Wang, Z. Zhang, L. Li, H. Wang, X. Wang, Interac-tive learning for multi-finger dexterous hand: A model-free hierarchical deep reinforcement learning approach, Knowl.-Based Syst. 295 (2024) 111847.

[167]

S.H. Huang, M. Zambelli, J. Kay, M.F. Martins, Y. Tassa, P.M. Pilarski, R. Hadsell, Learning gentle object manipulation with curiosity-driven deep reinforcement learning, 2019, ArXiv.

[168]

T. Li, W. Xi, M. Fang, J. Xu, M.Q.-H. Meng, Learning to solve a rubik’s cube with a dexterous hand, in: 2019 IEEE International Conference on Robotics and Biomimetics, ROBIO, IEEE, 2019, pp. 1387-1393.

[169]

R. Fakoor, P. Chaudhari, A.J. Smola, DDPG++: striving for simplicity in continuous-control off-policy reinforcement learning, 2020, arXiv preprint arXiv:2006.15199.

[170]

W. Huang, I. Mordatch, P. Abbeel, D. Pathak, Generalization in dexter-ous manipulation via geometry-aware multi-task learning, 2021, arXiv preprint arXiv:2111.03062.

[171]

M. Alakuijala, G. Dulac-Arnold, J. Mairal, J. Ponce, C. Schmid, Residual reinforcement learning from demonstrations, 2021, arXiv preprint arXiv: 2106.08050.

[172]

L. Huang, W. Cai, Z. Zhu, Z. Zou, Dexterous manipulation of construction tools using anthropomorphic robotic hand, Autom. Constr. 156 (2023) 105133.

[173]

Y.-H. Wu, J. Wang, X. Wang, Learning generalizable dexterous manipu-lation from human grasp affordance, in: Conference on Robot Learning, PMLR, 2023, pp. 618-629.

[174]

E. Valarezo Anazco, P. Rivera Lopez, N. Park, J. Oh, G. Ryu, M.A. Al-antari, T.-S. Kim, Natural object manipulation using anthropomorphic robotic hand through deep reinforcement learning and deep grasping probability network, Appl. Intell. 51 (2021) 1041-1055.

[175]

P. Mandikal, K. Grauman, Dexvip: Learning dexterous grasping with human hand pose priors from video,in:Conference on Robot Learning, PMLR, 2022, pp. 651-661.

[176]

Z. Ding, Y. Chen, A.Z. Ren, S.S. Gu, Q. Wang, H. Dong, C. Jin, Learning a uni-versal human prior for dexterous manipulation from human preference, 2023, arXiv preprint arXiv:2304.04602.

[177]

Z. Chen, S. Chen, C. Schmid, I. Laptev, ViViDex: Learning vision-based dexterous manipulation from human videos, 2024, arXiv preprint arXiv: 2404.15709.

[178]

Y. Xu, W. Wan, J. Zhang, H. Liu, Z. Shan, H. Shen, R. Wang, H. Geng, Y. Weng, J. Chen, et al., Unidexgrasp: Universal robotic dexterous grasping via learning diverse proposal generation and goal-conditioned policy,in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 4737-4746.

[179]

Y. Li, B. Liu, Y. Geng, P. Li, Y. Yang, Y. Zhu, T. Liu, S. Huang, Grasp multiple objects with one hand, IEEE Robot. Autom. Lett. (2024).

[180]

M. Mosbach, S. Behnke, Grasp anything: Combining teacher-augmented policy gradient learning with instance segmentation to grasp arbitrary objects, 2024, arXiv preprint arXiv:2403.10187.

[181]

Q. Liu, Y. Cui, Q. Ye, Z. Sun, H. Li, G. Li, L. Shao, J. Chen, DexRepNet: Learning dexterous robotic grasping network with geometric and spatial hand-object representations, in: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, IEEE, 2023, pp. 3153-3160.

[182]

Y. Qin, H. Su, X. Wang, From one hand to multiple hands: Imitation learning for dexterous manipulation from single-camera teleoperation, 2022, arXiv:2204.12490.
