A novel task-oriented framework for dual-arm robotic assembly task

Zhengwei WANG, Yahui GAN, Xianzhong DAI

Front. Mech. Eng., 2021, Vol. 16, Issue 3: 528–545. DOI: 10.1007/s11465-021-0638-2
RESEARCH ARTICLE


Abstract

In industrial manufacturing, deploying dual-arm robots for assembly tasks has become a trend. However, making dual-arm robots more intelligent in such applications remains an open and challenging issue. This paper proposes a novel framework that combines task-oriented motion planning with visual perception to facilitate robot deployment from perception to execution and to solve assembly problems with dual-arm robots. In this framework, visual perception is first employed to track the effects of robot behaviors and to observe the states of the workpieces, so that task progress can be abstracted as a high-level state for intelligent reasoning. The assembly task and its manipulation sequence are then obtained by analyzing and reasoning over the state transition trajectory of the environment and the workpieces. Next, the corresponding assembly manipulations are generated and parameterized according to the differences between adjacent states, combined with prebuilt knowledge of the scenario. Experiments are conducted on a dual-arm robotic system (an ABB YuMi with an RGB-D camera) to validate the proposed framework. The results demonstrate the effectiveness of the framework and its promising practical value.
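
To make the reasoning step concrete, the sketch below illustrates, under loose assumptions, how differences between adjacent high-level states might be mapped to parameterized manipulation primitives. It is not the authors' implementation; the `State` and `Manipulation` types, the relation triples, and the predicate-to-primitive lookup are all hypothetical placeholders.

```python
# A minimal sketch (not the authors' implementation) of the reasoning step
# described in the abstract: differences between adjacent high-level states
# observed by visual perception are mapped to parameterized manipulations.
# All names (State, Manipulation, the relation triples, the predicate-to-
# primitive table) are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, FrozenSet, List, Tuple

Relation = Tuple[str, str, str]  # (object, predicate, target), e.g. ("peg1", "in", "hole2")


@dataclass(frozen=True)
class State:
    """Abstract, high-level state of the workpieces as a set of symbolic relations."""
    relations: FrozenSet[Relation]


@dataclass
class Manipulation:
    """One parameterized manipulation derived from a single state transition."""
    primitive: str                              # e.g. "pick", "place", "insert"
    parameters: Dict[str, str] = field(default_factory=dict)


def diff_states(prev: State, curr: State) -> Tuple[FrozenSet[Relation], FrozenSet[Relation]]:
    """Return the relations removed from and added to the state between adjacent steps."""
    return prev.relations - curr.relations, curr.relations - prev.relations


def generate_manipulations(trajectory: List[State]) -> List[Manipulation]:
    """Map each adjacent state difference to a manipulation using (placeholder) scenario knowledge."""
    predicate_to_primitive = {"held_by": "pick", "on": "place", "in": "insert"}
    plan: List[Manipulation] = []
    for prev, curr in zip(trajectory, trajectory[1:]):
        _removed, added = diff_states(prev, curr)
        for obj, predicate, target in added:
            primitive = predicate_to_primitive.get(predicate, "move")
            plan.append(Manipulation(primitive, {"object": obj, "target": target}))
    return plan


if __name__ == "__main__":
    # Toy state trajectory: a peg on the table is picked up and inserted into a hole.
    s0 = State(frozenset({("peg1", "on", "table")}))
    s1 = State(frozenset({("peg1", "held_by", "left_gripper")}))
    s2 = State(frozenset({("peg1", "in", "hole2")}))
    for m in generate_manipulations([s0, s1, s2]):
        print(m.primitive, m.parameters)
```

Running the example prints a pick followed by an insert for the toy peg-in-hole trajectory; in the actual framework, the mapping from state differences to manipulations would be driven by the prebuilt scenario knowledge rather than a hard-coded table.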

Keywords

dual-arm assembly / AI reasoning / intelligent system / task-oriented motion planning / visual perception

Cite this article

Zhengwei WANG, Yahui GAN, Xianzhong DAI. A novel task-oriented framework for dual-arm robotic assembly task. Front. Mech. Eng., 2021, 16(3): 528‒545 https://doi.org/10.1007/s11465-021-0638-2

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61873308, 61503076, and 61175113), the Natural Science Foundation of Jiangsu Province (Grant No. BK20150624), and the Fundamental Research Funds for the Central Universities (Grant No. 202008003).

RIGHTS & PERMISSIONS

© 2021 Higher Education Press.