Meta-Auto-Decoder: a Meta-Learning-Based Reduced Order Model for Solving Parametric Partial Differential Equations
Zhanhong Ye, Xiang Huang, Hongsheng Liu, Bin Dong
Many important problems in science and engineering require solving the so-called parametric partial differential equations (PDEs), i.e., PDEs with varying physical parameters, boundary conditions, shapes of the computational domain, etc. Typical reduced order modeling techniques accelerate the solution of parametric PDEs by projecting them onto a linear trial manifold constructed in an offline stage. These methods often need a predefined mesh as well as a series of precomputed solution snapshots, and may struggle to balance efficiency and accuracy due to the limitations of the linear ansatz. Utilizing the nonlinear representation power of neural networks (NNs), we propose the Meta-Auto-Decoder (MAD) to construct a nonlinear trial manifold, whose best possible performance is measured theoretically by the decoder width. Based on the meta-learning concept, the trial manifold can be learned in a mesh-free and unsupervised way during the pre-training stage. Fast adaptation to new (possibly heterogeneous) PDE parameters is enabled by searching on this trial manifold, optionally fine-tuning the manifold at the same time. Extensive numerical experiments show that the MAD method converges faster than other deep learning-based methods without losing accuracy.
Parametric partial differential equations (PDEs) / Meta-learning / Reduced order modeling / Neural networks (NNs) / Auto-decoder
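To illustrate the auto-decoder idea behind MAD, here is a minimal numpy sketch (not the authors' implementation): a decoder network conditioned on a latent code is frozen after pre-training, and a new PDE instance is handled by optimizing only the latent code. All weights, dimensions, and the toy target below are hypothetical stand-ins; the actual MAD method pre-trains the decoder on many PDE parameters and adapts with a physics-informed (unsupervised) loss rather than by fitting solution samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pre-trained" decoder weights, frozen after the pre-training
# stage (in MAD these would be meta-learned across many PDE parameters).
W_x = rng.normal(size=(16, 1))
W_z = rng.normal(size=(16, 4))
b = rng.normal(size=16)
w_out = rng.normal(size=16) / 4.0


def decoder(x, z):
    """Auto-decoder: maps coordinates x and a latent code z to u(x)."""
    h = np.tanh(x[:, None] * W_x.T + z @ W_z.T + b)  # shape (n, 16)
    return h @ w_out


def adapt(x, u_target, steps=300, lr=0.05, eps=1e-4):
    """Fast adaptation: decoder weights stay frozen; only the latent code z
    is optimized, here by gradient descent with a finite-difference gradient
    on a simple data-misfit loss (a stand-in for the PDE residual loss)."""
    def loss(z):
        return float(np.mean((decoder(x, z) - u_target) ** 2))

    z = np.zeros(4)
    best_z, best_loss = z.copy(), loss(z)
    for _ in range(steps):
        grad = np.array([(loss(z + eps * e) - loss(z - eps * e)) / (2 * eps)
                         for e in np.eye(4)])
        z = z - lr * grad
        current = loss(z)
        if current < best_loss:
            best_z, best_loss = z.copy(), current
    return best_z, best_loss


# A new "PDE instance": a target solution generated by a hidden latent code,
# so it lies exactly on the decoder's trial manifold.
z_true = rng.normal(size=4)
x = np.linspace(-1.0, 1.0, 64)
u_target = decoder(x, z_true)

z_fit, final_loss = adapt(x, u_target)
```

Because the decoder is frozen, adaptation is a low-dimensional search over z on the learned trial manifold, which is what makes it fast; optionally the decoder weights could also be fine-tuned jointly with z, as the abstract mentions.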