Automated synthesis of steady-state continuous processes using reinforcement learning

Quirin Göttl, Dominik G. Grimm, Jakob Burger

Front. Chem. Sci. Eng., 2022, Vol. 16, Issue 2: 288–302. DOI: 10.1007/s11705-021-2055-9
RESEARCH ARTICLE

Abstract

Automated flowsheet synthesis is an important field in computer-aided process engineering. The present work demonstrates how reinforcement learning can be used for automated flowsheet synthesis without heuristics or prior knowledge of conceptual design. The environment consists of a steady-state flowsheet simulator that contains all physical knowledge. An agent is trained to take discrete actions and sequentially build up flowsheets that solve a given process problem. A novel method, named SynGameZero, is developed to ensure a good exploration scheme for this complex problem. Therein, flowsheet synthesis is modelled as a game of two competing players. During training, the agent plays this game against itself; it consists of an artificial neural network and a tree search for forward planning. The method is applied successfully to a reaction-distillation process in a quaternary system.
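The abstract describes an AlphaZero-style agent: a neural network combined with a tree search for forward planning, trained by self-play on a two-player formulation of flowsheet synthesis. As a rough orientation for readers, below is a minimal Python sketch of the kind of guided tree search (PUCT) such agents use for planning. Everything in it is an assumption for illustration: a toy environment stands in for the steady-state flowsheet simulator, a uniform prior replaces the network's policy and value heads, the two-player game and network training are omitted, and none of the names are taken from the authors' code.

    import math

    # --- Toy stand-in for the steady-state flowsheet simulator ---
    # A state is the tuple of discrete unit-operation choices made so far;
    # an episode ends after MAX_STEPS actions or when "terminate" is chosen.
    ACTIONS = (0, 1, 2, 3)   # e.g., add reactor, add column, add recycle, terminate
    MAX_STEPS = 5

    def step(state, action):
        return state + (action,)

    def is_terminal(state):
        return len(state) == MAX_STEPS or (len(state) > 0 and state[-1] == 3)

    def evaluate(state):
        # Placeholder objective; the real environment would converge a
        # flowsheet simulation and return an economic figure of merit.
        return sum(state) / (max(ACTIONS) * MAX_STEPS)

    class Node:
        """One tree-search node; statistics follow the usual PUCT bookkeeping."""
        def __init__(self, state):
            self.state = state
            self.children = {}                                  # action -> Node
            self.N = {a: 0 for a in ACTIONS}                    # visit counts
            self.W = {a: 0.0 for a in ACTIONS}                  # summed values
            self.P = {a: 1.0 / len(ACTIONS) for a in ACTIONS}   # uniform prior (stub for the policy network)

    def select_action(node, c_puct=1.5):
        # PUCT rule: exploit high mean value Q, explore high prior / low visits.
        total = sum(node.N.values()) + 1
        def score(a):
            q = node.W[a] / node.N[a] if node.N[a] else 0.0
            return q + c_puct * node.P[a] * math.sqrt(total) / (1 + node.N[a])
        return max(ACTIONS, key=score)

    def simulate(node):
        """One simulation: descend by PUCT, expand a leaf, back up its value."""
        if is_terminal(node.state):
            return evaluate(node.state)
        a = select_action(node)
        if a not in node.children:
            node.children[a] = Node(step(node.state, a))
            value = evaluate(node.children[a].state)  # stub for the value network
        else:
            value = simulate(node.children[a])
        node.N[a] += 1
        node.W[a] += value
        return value

    def plan(state, n_simulations=200):
        root = Node(state)
        for _ in range(n_simulations):
            simulate(root)
        return max(ACTIONS, key=lambda a: root.N[a])  # play the most-visited action

    if __name__ == "__main__":
        state = ()
        while not is_terminal(state):
            state = step(state, plan(state))
        print("action sequence:", state, "objective:", evaluate(state))

In AlphaZero-style training, the uniform prior P and the leaf evaluation above would instead come from the trained neural network, and the visit counts produced by the search would in turn serve as policy targets for training that network.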

Keywords

automated process synthesis / flowsheet synthesis / artificial intelligence / machine learning / reinforcement learning

Cite this article

Quirin Göttl, Dominik G. Grimm, Jakob Burger. Automated synthesis of steady-state continuous processes using reinforcement learning. Front. Chem. Sci. Eng., 2022, 16(2): 288–302. https://doi.org/10.1007/s11705-021-2055-9

Funding note

Open Access funding enabled and organized by Projekt DEAL.

Open Access

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

RIGHTS & PERMISSIONS

© 2021 The Author(s). This article is published with open access at link.springer.com and journal.hep.com.cn.