Large Language Model for Secure Operation of Power Systems

Yue Xiang, Ling Tan, Gao Qiu, Zhiyuan Tang, Junyong Liu

Smart Energy Syst. Res. 2025, Vol. 1, Issue (1): 10005. DOI: 10.70322/sesr.2025.10005

Research Article

Abstract

The integration of large-scale renewable energy, multi-criteria operational constraints, and complex grid topologies has intensified the challenges faced by the security monitoring process within power system dispatch. Dispatch guidelines, typically expressed in natural language, are difficult for conventional algorithms to interpret and apply in real time, while general-purpose Large Language Models (LLMs) lack domain-specific knowledge, risking inaccurate or unsafe recommendations. This study proposes an LLM-based monitoring framework that integrates domain-specific prompt engineering with fuzzy evaluation to address these limitations. The framework interprets dispatch guidelines, analyzes real-time power flow data, and converts semantic assessments into quantitative safety scores, enabling closed-loop decision-making. Validation on the IEEE 14-bus system demonstrates that the optimized LLM outperforms a general LLM in accuracy, logical consistency, and stability under complex multi-criteria scenarios, while reducing reliance on manual intervention. The results highlight the framework’s potential to enhance monitoring efficiency and ensure intelligent, secure power system operation.
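To illustrate the general idea of converting an operating condition into a quantitative safety score via fuzzy evaluation, the sketch below maps a line-loading ratio (flow divided by thermal limit) to a score in [0, 100] using triangular membership functions and weighted defuzzification. The membership breakpoints and score anchors here are hypothetical illustrative choices, not the parameters used in the paper's framework.

```python
def tri(x, a, b, c):
    """Triangular membership function over [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)


def safety_score(loading_ratio):
    """Map a line-loading ratio (flow / limit) to a safety score in [0, 100].

    Membership sets and score anchors are illustrative assumptions only.
    """
    safe = tri(loading_ratio, -0.5, 0.0, 0.9)      # lightly loaded
    marginal = tri(loading_ratio, 0.7, 0.9, 1.0)   # approaching the limit
    unsafe = tri(loading_ratio, 0.9, 1.2, 2.0)     # at or over the limit

    # Weighted-average (centroid-style) defuzzification over the three sets.
    weights = {"safe": 100.0, "marginal": 60.0, "unsafe": 10.0}
    num = (safe * weights["safe"]
           + marginal * weights["marginal"]
           + unsafe * weights["unsafe"])
    den = safe + marginal + unsafe
    return num / den if den > 0 else 0.0
```

In a full pipeline of this kind, the LLM's semantic judgment of each monitored quantity would be translated into such membership degrees, and the defuzzified score would feed the closed-loop decision logic.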

Keywords

Large language model / Power system dispatch / Prompt engineering / Fuzzy evaluation / Safety assessment

Cite this article

Yue Xiang, Ling Tan, Gao Qiu, Zhiyuan Tang, Junyong Liu. Large Language Model for Secure Operation of Power Systems. Smart Energy Syst. Res., 2025, 1(1): 10005. DOI: 10.70322/sesr.2025.10005


Acknowledgments

The authors would like to thank all those who provided support and constructive feedback during the preparation of this work.

Author Contributions

Conceptualization, Y.X.; Methodology, L.T. and Y.X.; Writing—Original Draft Preparation, L.T.; Writing—Review & Editing, L.T., Y.X., G.Q., Z.T. and J.L.; Visualization, L.T.; Project Administration, Y.X.; Funding Acquisition, Y.X.

Ethics Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data and materials supporting the findings—prompts, rule templates, topology descriptions, scenario configurations, raw numerical tables for Figures 3-5, fuzzy-evaluation parameters, and the replication protocol—are included in the manuscript’s Materials and Methods section and Appendix.

Funding

This research was funded by the Sichuan University Graduate Course Construction Project “Power Dispatch Intelligent Brain—Large Language Model Based Power Security Early Warning and Economic Dispatch Practice Teaching Platform” (2024CJAL004), and the Sichuan University “Artificial Intelligence Empowerment for Innovative Practice Education Reform” project entitled “Application and Education of Artificial Intelligence Large Language Models in Power System Operation and Control”.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

