TopoLLM: LLM-driven adaptive tool learning for real-time emergency network topology planning

Yizhuo Ma, Rongzheng Wang, Shuang Liang, Guangchun Luo, Ke Qin

2026, Vol. 12, Issue (2): 273-282. DOI: 10.1016/j.dcan.2025.10.002
Regular Papers

Abstract

Communication infrastructure is often among the first casualties in natural or human-induced disasters, severely impairing the coordination and efficiency of rescue operations. Rapid deployment of Unmanned Aerial Vehicles (UAVs) and satellite systems has therefore become essential for establishing robust communication links that support rescue-critical tasks. However, existing emergency communication networks rely heavily on domain expertise for topology design and consequently suffer from issues such as inefficient resource allocation and network congestion. To address these challenges, we present TopoLLM, a framework that leverages Large Language Models (LLMs) for tool-driven optimization of emergency network topologies. The framework combines the reasoning capabilities of the LLM with TopoTool, a domain-specific optimization toolkit engineered for high-precision, load-balanced network planning in disaster scenarios. Guided by an adaptive tool-selection mechanism, TopoLLM autonomously generates resilient topologies and allocates resources intelligently, reducing the need for extensive human intervention. Experimental evaluations on simulated disaster scenarios verify that TopoLLM rapidly generates accurate and robust topologies, achieving notable performance improvements over existing approaches.
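To make the tool-driven planning idea above concrete, the following minimal Python sketch mocks an LLM-guided tool-selection loop over a tiny two-tool toolkit (a minimum-spanning-tree backbone builder and a greedy load-balancing assignment). All names, data formats, and the keyword-based select_tool stub are illustrative assumptions and do not reflect TopoLLM's or TopoTool's actual interfaces.

```python
# Hypothetical sketch of an LLM-guided tool-selection loop for topology planning.
# Tool names, signatures, and the scenario format are illustrative assumptions,
# not TopoLLM's or TopoTool's actual API.
import math
from typing import Callable

# A scenario holds node positions (e.g. UAV relays) and per-node traffic demands:
# {"nodes": [(x, y), ...], "demands": {node_index: traffic}}

def mst_backbone(scenario: dict) -> list[tuple[int, int]]:
    """Prim's algorithm: a minimum-length backbone connecting all nodes."""
    nodes = scenario["nodes"]
    n = len(nodes)
    dist = lambda i, j: math.dist(nodes[i], nodes[j])
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: dist(*e))
        in_tree.add(j)
        edges.append((i, j))
    return edges

def balanced_assignment(scenario: dict) -> dict[int, int]:
    """Greedy load balancing: attach each demand to the currently least-loaded relay."""
    load = {r: 0.0 for r in range(len(scenario["nodes"]))}
    plan = {}
    for node, traffic in sorted(scenario["demands"].items(), key=lambda kv: -kv[1]):
        relay = min(load, key=load.get)
        plan[node] = relay
        load[relay] += traffic
    return plan

TOOLS: dict[str, Callable[[dict], object]] = {
    "mst_backbone": mst_backbone,
    "balanced_assignment": balanced_assignment,
}

def select_tool(task: str) -> str:
    """Stand-in for the LLM call that maps a task description to a tool name."""
    return "mst_backbone" if "backbone" in task else "balanced_assignment"

if __name__ == "__main__":
    scenario = {"nodes": [(0, 0), (3, 1), (1, 4), (5, 5)],
                "demands": {1: 2.0, 2: 1.5, 3: 4.0}}
    for task in ("build a backbone topology", "allocate traffic to relays"):
        tool = TOOLS[select_tool(task)]
        print(f"{task} -> {tool.__name__}: {tool(scenario)}")
```

In the full framework the select_tool step would be an LLM prompt grounded in the toolkit's documentation rather than a keyword match, and the toolkit would contain far richer optimization routines; the sketch is only meant to illustrate the select-then-invoke loop.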

Keywords

Large language models / Tool learning / Network planning / Scheme generation

Cite this article

Yizhuo Ma, Rongzheng Wang, Shuang Liang, Guangchun Luo, Ke Qin. TopoLLM: LLM-driven adaptive tool learning for real-time emergency network topology planning. Digital Communications and Networks, 2026, 12(2): 273-282. DOI: 10.1016/j.dcan.2025.10.002


CRediT authorship contribution statement

Yizhuo Ma: Writing - original draft, Investigation, Formal analysis. Rongzheng Wang: Writing - original draft, Methodology, Formal analysis. Shuang Liang: Writing - original draft, Validation. Guangchun Luo: Writing - original draft, Project administration. Ke Qin: Writing - original draft, Project administration, Funding acquisition.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This research is partially supported by the National Natural Science Foundation of China (Grant No. 62176046) and the Noncommunicable Chronic Diseases - National Science and Technology Major Project (Grant No. 2023ZD0501806).

