A Dynamical System-Based Framework for Dimension Reduction

Ryeongkyung Yoon, Braxton Osting

Communications on Applied Mathematics and Computation, 2023, Vol. 6, Issue 2: 757-789. DOI: 10.1007/s42967-022-00234-w
Original Paper



Abstract

We propose a novel framework for learning a low-dimensional representation of data based on nonlinear dynamical systems, which we call dynamical dimension reduction (DDR). In the DDR model, each point is evolved via a nonlinear flow towards a lower-dimensional subspace; the projection onto the subspace gives the low-dimensional embedding. Training the model involves identifying the nonlinear flow and the subspace. Following the equation discovery method, we represent the vector field that defines the flow using a linear combination of dictionary elements, where each element is a pre-specified linear/nonlinear candidate function. A regularization term for the average total kinetic energy is also introduced and motivated by optimal transport theory. We prove that the resulting optimization problem is well-posed and establish several properties of the DDR method. We also show how the DDR method can be trained using a gradient-based optimization method, where the gradients are computed using the adjoint method from optimal control theory. The DDR method is implemented and compared on synthetic and example data sets to other dimension reduction methods, including PCA, t-SNE, and UMAP.
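The forward map described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dictionary (constant, linear, and quadratic monomials), the forward-Euler integrator, and all function names (`dictionary`, `ddr_embed`) are assumptions chosen for concreteness; the paper trains the coefficients and the subspace via the adjoint method, which is omitted here.

```python
import numpy as np

# Hypothetical sketch of the DDR forward map: each data point x in R^d is
# evolved by dx/dt = Theta(x) @ xi, where Theta(x) stacks pre-specified
# dictionary functions and xi holds learned coefficients; the final state
# is projected onto a k-dimensional subspace to give the embedding.

def dictionary(x):
    """Candidate functions: constant, linear, and quadratic monomials."""
    quad = np.outer(x, x)[np.triu_indices(len(x))]
    return np.concatenate([np.ones(1), x, quad])

def ddr_embed(X, xi, P, t_final=1.0, n_steps=50):
    """Flow each row of X forward in time, then project.

    X  : (n, d) data matrix
    xi : (m, d) dictionary coefficients defining the vector field
    P  : (k, d) matrix with orthonormal rows spanning the target subspace
    """
    dt = t_final / n_steps
    Z = X.copy()
    for _ in range(n_steps):  # forward-Euler integration of the flow
        Z = Z + dt * np.array([dictionary(z) @ xi for z in Z])
    return Z @ P.T            # low-dimensional embedding

rng = np.random.default_rng(0)
d = 3
X = rng.standard_normal((5, d))
m = len(dictionary(np.zeros(d)))     # dictionary size (10 for d = 3)
xi = 0.1 * rng.standard_normal((m, d))
P = np.eye(d)[:2]                    # project onto the first two coordinates
emb = ddr_embed(X, xi, P)
print(emb.shape)                     # (5, 2)
```

In the paper, `xi` and `P` are the unknowns of the optimization, with the kinetic-energy regularizer penalizing large velocities of the flow.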

Keywords

Dimension reduction / Equation discovery / Dynamical systems / Adjoint method / Optimal transportation

Cite this article

Ryeongkyung Yoon, Braxton Osting. A Dynamical System-Based Framework for Dimension Reduction. Communications on Applied Mathematics and Computation, 2023, 6(2): 757-789. https://doi.org/10.1007/s42967-022-00234-w

References

[1.]
Baldi P. Autoencoders, unsupervised learning, and deep architectures. In: Guyon G, Dror G, Lemaire V, Taylor G, Silver D (eds). Proceedings of ICML Workshop on Unsupervised and Transfer Learning, Proceedings of Machine Learning Research. Bellevue: PMLR, 2012: 37-49
[2.]
Benamou J-D, Brenier Y. A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numer. Math., 2000, 84(3): 375-393
[3.]
Borwein JM, Lewis AS. Convex Analysis and Nonlinear Optimization. New York: Springer, 2000
[4.]
Brunton SL, Proctor JL, Kutz JN. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl. Acad. Sci., 2016, 113(15): 3932-3937
[5.]
Calvetti D, Morigi S, Reichel L, Sgallari F. Tikhonov regularization and the L-curve for large discrete ill-posed problems. J. Comput. Appl. Math., 2000, 123(1/2): 423-446
[6.]
Chalvidal M, Ricci M, VanRullen R, Serre T. Go with the flow: adaptive control for neural ODEs. arXiv preprint, 2020
[7.]
Chang B, Chen M, Haber E, Chi EH. AntisymmetricRNN: a dynamical system view on recurrent neural networks. International Conference on Learning Representations, 2019
[8.]
Chen RT, Rubanova Y, Bettencourt J, Duvenaud DK. Neural ordinary differential equations. Adv. Neural Inf. Process. Syst., 2018, 31. arXiv: 1806.07366
[9.]
Finlay C, Jacobsen J-H, Nurbekyan L, Oberman A. How to train your neural ODE: the world of Jacobian and kinetic regularization. International Conference on Machine Learning. Cham: PMLR, 2020: 3154-3164
[10.]
Garsdal M, Søgaard V, Sørensen S. Generative time series models using neural ODE in variational autoencoders. arXiv preprint, 2022
[11.]
Goodfellow I, Bengio Y, Courville A. Deep Learning. London: MIT Press, 2016
[12.]
Grathwohl W, Chen RT, Bettencourt J, Sutskever I, Duvenaud D. FFJORD: free-form continuous dynamics for scalable reversible generative models. arXiv preprint, 2018
[13.]
Greydanus S, Dzamba M, Yosinski J. Hamiltonian neural networks. In: NIPS'19: Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019: 15379-15389. https://doi.org/10.48550/arXiv.1906.01563
[14.]
Haber E, Ruthotto L. Stable architectures for deep neural networks. Inverse Prob., 2017, 34
[15.]
Heinonen M, Yildiz C, Mannerström H, Intosalmi J, Lähdesmäki H. Learning unknown ODE models with Gaussian processes. International Conference on Machine Learning. Cham: PMLR, 2018: 1959-1968
[16.]
Hotelling H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol., 1933, 24(6): 417
[17.]
Kingma DP, Welling M. Auto-encoding variational Bayes. arXiv preprint, 2013
[18.]
Long Z, Lu Y, Dong B. PDE-Net 2.0: learning PDEs from data with a numeric-symbolic hybrid deep network. J. Comput. Phys., 2019, 399
[19.]
McInnes L, Healy J, Melville J. UMAP: uniform manifold approximation and projection for dimension reduction. arXiv preprint, 2018
[20.]
Santambrogio F. Optimal Transport for Applied Mathematicians: Calculus of Variations, PDEs, and Modeling. Progress in Nonlinear Differential Equations and Their Applications. New York: Birkhäuser, 2015. https://doi.org/10.1007/978-3-319-20828-2
[21.]
Sideris TC. Ordinary Differential Equations and Dynamical Systems. Cham: Springer, 2013
[22.]
Van der Maaten L, Hinton G. Visualizing data using t-SNE. J. Mach. Learn. Res., 2008, 9(86): 2579-2605
[23.]
Xia H, Suliafu V, Ji H, Nguyen T, Bertozzi A, Osher S, Wang B. Heavy ball neural ordinary differential equations. In: Thirty-Fifth Conference on Neural Information Processing Systems (NeurIPS 2021), 2021. arXiv: 2110.04840
[24.]
Yoon R, Bhat HS, Osting B. A nonautonomous equation discovery method for time signal classification. SIAM J. Appl. Dyn. Syst., 2022, 21(1): 33-59
[25.]
Zhang L, Schaeffer H. On the convergence of the SINDy algorithm. Multiscale Model. Simul., 2019, 17(3): 948-972
[26.]
Zhong YD, Dey B, Chakraborty A. Symplectic ODE-Net: learning Hamiltonian dynamics with control. arXiv preprint, 2019
Funding

Directorate for Mathematical and Physical Sciences (17-52202)
