Optimization of Random Feature Method in the High-Precision Regime

Jingrun Chen, Weinan E, Yifei Sun

Communications on Applied Mathematics and Computation, 2024, Vol. 6, Issue 2: 1490–1517. DOI: 10.1007/s42967-024-00389-8
Original Paper

Abstract

Machine learning has been widely used for solving partial differential equations (PDEs) in recent years, among which the random feature method (RFM) exhibits spectral accuracy and can compete with traditional solvers in terms of both accuracy and efficiency. However, the optimization problem in the RFM is potentially more difficult to solve than those arising in traditional methods. Unlike most machine-learning research, which targets tasks in the low-precision regime, our study focuses on the high-precision regime crucial for solving PDEs. In this work, we study this problem from the following aspects: (i) we analyze the coefficient matrix that arises in the RFM by studying the distribution of its singular values; (ii) we investigate whether continued training causes overfitting; (iii) we test direct, iterative, and randomized methods for solving the optimization problem. Based on these results, we find that direct methods are superior to the other methods if memory is not an issue, while iterative methods typically yield low accuracy that can be improved to some extent by preconditioning.
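
To make the least-squares setting concrete, the following minimal Python sketch (NumPy/SciPy; the tanh feature map, problem sizes, and target function are illustrative assumptions, not the paper's actual setup) forms a random feature matrix, inspects the decay of its singular values, and solves the resulting least-squares problem with a direct LAPACK routine and with the iterative LSMR method:

import numpy as np
from scipy.linalg import lstsq
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(0)

# Hypothetical random feature matrix A[i, j] = tanh(w_j * x_i + b_j):
# collocation points x_i, random weights w_j and biases b_j.
n_points, n_features = 2000, 400
x = np.linspace(0.0, 1.0, n_points)
w = rng.uniform(-10.0, 10.0, n_features)
b = rng.uniform(-10.0, 10.0, n_features)
A = np.tanh(np.outer(x, w) + b)  # shape (n_points, n_features)

# Right-hand side: a smooth target sampled at the collocation points.
f = np.sin(2.0 * np.pi * x) + 0.5 * np.cos(4.0 * np.pi * x)

# (i) Singular-value distribution: rapid decay signals ill-conditioning
# of the least-squares problem.
s = np.linalg.svd(A, compute_uv=False)
print(f"cond(A) = {s[0] / s[-1]:.3e}")

# (iii) Direct solve (SVD-based LAPACK lstsq) vs. iterative solve (LSMR).
c_direct = lstsq(A, f)[0]
c_lsmr = lsmr(A, f, atol=1e-14, btol=1e-14, maxiter=10_000)[0]

for name, c in (("direct", c_direct), ("LSMR", c_lsmr)):
    print(f"{name:>6} residual: {np.linalg.norm(A @ c - f):.3e}")

On such ill-conditioned matrices, the direct solver typically reaches a smaller residual than the unpreconditioned iterative one, in line with the paper's conclusion that direct methods are preferable when memory permits.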

Keywords

Random feature method (RFM) / Partial differential equation (PDE) / Least-squares problem / Direct method / Iterative method

Cite this article

Jingrun Chen, Weinan E, Yifei Sun. Optimization of Random Feature Method in the High-Precision Regime. Communications on Applied Mathematics and Computation, 2024, 6(2): 1490–1517. https://doi.org/10.1007/s42967-024-00389-8

Funding
National Natural Science Foundation of China (92370205)
