Task cognition and planning for service robots

Yongcheng Cui, Ying Zhang, Cui-Hua Zhang, Simon X. Yang

Intelligence & Robotics ›› 2025, Vol. 5 ›› Issue (1): 119-42. DOI: 10.20517/ir.2025.08

Review

Abstract

With the rapid development of artificial intelligence and robotics, service robots are increasingly entering daily life to provide domestic services. For robots to complete such services intelligently and with high quality, they must first be able to recognize and plan tasks, that is, to discover task requirements and generate executable action sequences. In this context, this paper systematically reviews the latest research progress in task cognition and planning for domestic service robots, covering key technologies such as command text parsing, active task cognition (ATC), multimodal perception, and action sequence generation. First, the challenges faced by traditional rule-based command parsing methods are analyzed, and the use of deep learning methods to enhance robots' understanding of complex instructions is explored. Subsequently, research trends in ATC are introduced, discussing how robots can autonomously discover tasks by perceiving the surrounding environment through visual and semantic features. The discussion then turns to typical methods in task planning, comparing four common approaches and analyzing their respective advantages and disadvantages. Finally, the paper summarizes the challenges of existing research and future development directions, providing a reference for further enhancing the task execution capabilities of domestic service robots in complex home environments.

Keywords

Service robot / task cognition / task planning / robot action sequence generation

Cite this article

Yongcheng Cui, Ying Zhang, Cui-Hua Zhang, Simon X. Yang. Task cognition and planning for service robots. Intelligence & Robotics, 2025, 5(1): 119-42. DOI: 10.20517/ir.2025.08

