The potential and challenges of large language model agent systems in chemical process simulation: from automated modeling to intelligent design

Wenli Du, Shaoyi Yang

Front. Chem. Sci. Eng., 2025, 19(10): 99. DOI: 10.1007/s11705-025-2587-5
VIEWS & COMMENTS


Abstract

Large language model-based agent systems are emerging as transformative technologies in chemical process simulation, enhancing efficiency, accuracy, and decision-making. By automating data analysis across structured and unstructured sources, including process parameters, experimental results, simulation data, and textual specifications, these systems address longstanding challenges such as manual parameter tuning, reliance on subjective expert judgment, and the gap between theoretical models and industrial application. This paper reviews the key barriers to broader adoption of large language model-based agent systems, including unstable software interfaces, limited dynamic modeling accuracy, and difficulties in multimodal data integration, all of which hinder scalable deployment. We then survey recent progress in domain-specific foundation models, model interpretability techniques, and industrial-grade validation platforms. Building on these insights, we propose a technical framework centered on three pillars: multimodal task perception, autonomous planning, and knowledge-driven iterative optimization. This framework supports adaptive reasoning and robust execution in complex simulation environments. Finally, we outline a next-generation intelligent paradigm in which natural language-driven agent workflows unify high-level strategic intent with automated task execution. The paper concludes by identifying future research directions to enhance robustness, adaptability, and safety, paving the way for practical integration of large language model-based agent systems into industrial-scale chemical process simulation.
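To make the three-pillar framework concrete, the sketch below is an illustrative Python outline (not taken from the article) of how multimodal task perception, autonomous planning, and knowledge-driven iterative optimization might be wired into a single agent loop. The helpers llm_complete and run_flowsheet_simulation are hypothetical placeholders for an LLM API and a process-simulator interface, not real library calls.

import json

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for a call to a large language model API.
    raise NotImplementedError("connect an LLM provider here")

def run_flowsheet_simulation(spec: dict) -> dict:
    # Hypothetical stand-in for a process-simulator interface
    # (e.g., a flowsheet solver driven through its automation API).
    raise NotImplementedError("connect a simulator here")

def agent_design_loop(task_description: str, max_iterations: int = 5) -> dict:
    # Pillar 1: task perception -- turn the natural-language task into a structured spec.
    spec = json.loads(llm_complete(
        "Extract a JSON flowsheet specification (units, streams, design targets) "
        "from this task:\n" + task_description
    ))
    history = []
    for _ in range(max_iterations):
        # Pillar 2: autonomous planning -- the model proposes or revises parameters.
        spec = json.loads(llm_complete(
            "Given the specification and previous results, return revised "
            "parameters as JSON.\nSpec: " + json.dumps(spec) +
            "\nHistory: " + json.dumps(history)
        ))
        # Pillar 3: knowledge-driven iteration -- simulate, record outcomes,
        # and feed them back into the next planning step.
        result = run_flowsheet_simulation(spec)
        history.append({"spec": spec, "result": result})
        if result.get("converged") and result.get("meets_targets"):
            break
    return {"final_spec": spec, "history": history}

In this reading, the loop terminates either when the simulator reports that design targets are met or after a fixed iteration budget, with the accumulated history serving as the knowledge base that grounds each successive planning step.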


Keywords

LLM agent systems / chemical process simulation / automated modeling

Cite this article

Wenli Du, Shaoyi Yang. The potential and challenges of large language model agent systems in chemical process simulation: from automated modeling to intelligent design. Front. Chem. Sci. Eng., 2025, 19(10): 99. DOI: 10.1007/s11705-025-2587-5



RIGHTS & PERMISSIONS

Higher Education Press
