We propose a novel framework for learning low-dimensional representations of data based on nonlinear dynamical systems, which we call dynamical dimension reduction (DDR). In the DDR model, each point is evolved via a nonlinear flow towards a lower-dimensional subspace; the projection onto the subspace gives the low-dimensional embedding. Training the model involves identifying the nonlinear flow and the subspace. Following the equation discovery method, we represent the vector field that defines the flow as a linear combination of dictionary elements, where each element is a pre-specified linear or nonlinear candidate function. A regularization term penalizing the average total kinetic energy is also introduced, motivated by optimal transport theory. We prove that the resulting optimization problem is well-posed and establish several properties of the DDR method. We also show how the DDR method can be trained using gradient-based optimization, where the gradients are computed with the adjoint method from optimal control theory. The DDR method is implemented and compared on synthetic and example data sets to other dimension reduction methods, including PCA, t-SNE, and UMAP.
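As a concrete illustration of the forward model described above, the sketch below evolves points under a dictionary-defined vector field and then projects onto a subspace. The dictionary, coefficients, and forward-Euler time discretization are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dictionary(x):
    """Hypothetical dictionary of candidate fields: constant, linear, quadratic.

    x: (n, d) batch of points; returns an (n, d, K) stack with K = 3.
    """
    return np.stack([np.ones_like(x), x, x ** 2], axis=-1)

def ddr_embed(x, coeffs, P, dt=0.01, steps=100):
    """Evolve points along the flow x' = sum_k c_k phi_k(x) (forward Euler),
    then project onto the subspace spanned by the columns of P.

    coeffs: (K,) dictionary weights; P: (d, k) orthonormal basis with k < d.
    """
    for _ in range(steps):
        x = x + dt * (dictionary(x) @ coeffs)
    return x @ P  # the low-dimensional embedding
```

With all dictionary coefficients set to zero the flow is trivial and the embedding reduces to a plain linear projection, which gives a quick sanity check of the construction.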
Many computing tasks can be posed as optimization problems, and the objective functions arising in real-world scenarios are often nonconvex and/or nondifferentiable. State-of-the-art methods for solving such problems typically only guarantee convergence to local minima. This work presents Hamilton-Jacobi-based Moreau adaptive descent (HJ-MAD), a zero-order algorithm with guaranteed convergence to global minima, assuming only continuity of the objective function. The core idea is to compute gradients of the Moreau envelope of the objective (which is “piece-wise convex”) with adaptive smoothing parameters. Gradients of the Moreau envelope (i.e., proximal operators) are approximated via the Hopf-Lax formula for the viscous Hamilton-Jacobi equation. Our numerical examples illustrate global convergence.
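The Hopf-Lax smoothing admits a simple Monte Carlo sketch: the gradient of the viscous Moreau envelope at x is approximately (x - ŷ)/t, where ŷ is a softmin-weighted average of Gaussian samples around x. The code below illustrates this idea under assumed parameter choices; it is not the paper's adaptive algorithm (in particular, the adaptive update of the smoothing parameter t is omitted).

```python
import numpy as np

def moreau_grad(f, x, t, delta=0.1, n_samples=2000, rng=None):
    """Monte Carlo sketch of the Moreau-envelope gradient via the softmin
    formula: grad u(x, t) ~ (x - y_bar) / t, where y_bar is a
    softmin-weighted average of Gaussian samples around x.
    f must accept a batch of shape (n_samples, d) and return (n_samples,).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    y = x + np.sqrt(delta * t) * rng.standard_normal((n_samples, x.size))
    fy = f(y)
    w = np.exp(-(fy - fy.min()) / delta)   # stabilized softmin weights
    w /= w.sum()
    return (x - w @ y) / t

def hj_mad(f, x0, t=1.0, lr=1.0, iters=100):
    """Zero-order descent on the smoothed objective (illustrative sketch)."""
    x = np.atleast_1d(np.asarray(x0, float))
    rng = np.random.default_rng(1)
    for _ in range(iters):
        x = x - lr * t * moreau_grad(f, x, t, rng=rng)
    return x
```

With lr = 1 each step replaces x by the softmin average ŷ, which is the proximal-point update this smoothing approximates.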
We present a class of preconditioners for the linear systems resulting from finite element or discontinuous Galerkin discretizations of advection-dominated problems. These preconditioners are designed to treat the case of geometrically localized stiffness, where the convergence rates of iterative methods are degraded in a localized subregion of the mesh. Slower convergence may be caused by a number of factors, including the mesh size, anisotropy, highly variable coefficients, and more challenging physics. The approach taken in this work is to correct well-known preconditioners such as block Jacobi and block incomplete LU (ILU) with an adaptive inner subregion iteration. The goal of these preconditioners is to reduce the number of costly global iterations by accelerating convergence in the stiff region through iterations on the less expensive reduced problem. The tolerance for the inner iteration is chosen adaptively to minimize subregion-local work while guaranteeing global convergence rates. We present analysis showing that the convergence of these preconditioners, even when combined with an adaptively selected tolerance, is independent of discretization parameters (e.g., the mesh size and diffusion coefficient) in the subregion. We demonstrate significant performance improvements over black-box preconditioners when applied to several model convection-diffusion problems. Finally, we present performance results of several variations of iterative subregion correction preconditioners applied to the Reynolds number
While much progress has been made in capturing high-quality facial performances using motion capture markers and shape-from-shading, high-end systems typically also rely on rotoscope curves hand-drawn on the image. These curves are subjective and difficult to draw consistently; moreover, ad hoc procedural methods are required to generate matching rotoscope curves on the synthetic renders embedded in the optimization used to determine three-dimensional (3D) facial pose and expression. We propose an alternative approach whereby these curves and other keypoints are detected automatically on both the image and the synthetic renders using trained neural networks, eliminating artist subjectivity and the ad hoc procedures meant to mimic it. More generally, we propose using machine learning networks to implicitly define deep energies which, when minimized using classical optimization techniques, lead to 3D facial pose and expression estimation.
We provide a concise review of the exponentially convergent multiscale finite element method (ExpMsFEM) for efficient model reduction of PDEs in heterogeneous media without scale separation and in high-frequency wave propagation. The ExpMsFEM is built on the non-overlapping domain decomposition of the classical MsFEM while enriching the approximation space systematically to achieve a nearly exponential convergence rate with respect to the number of basis functions. Unlike most generalizations of the MsFEM in the literature, the ExpMsFEM does not rely on any partition of unity functions. In general, it is necessary to use function representations dependent on the right-hand side to break the algebraic Kolmogorov n-width barrier and achieve exponential convergence. Indeed, there are online and offline parts in the function representation provided by the ExpMsFEM. The online part depends on the right-hand side locally and can be computed in parallel efficiently. The offline part contains basis functions that are used in the Galerkin method to assemble the stiffness matrix; they are all independent of the right-hand side, so the stiffness matrix can be used repeatedly in multi-query scenarios.
Signal decomposition and multiscale signal analysis provide many useful tools for time-frequency analysis. We propose a random feature method for analyzing time-series data by constructing a sparse approximation to the spectrogram. The randomization is in both the time window locations and the frequency sampling, which lowers the overall sampling and computational cost. The sparsification of the spectrogram leads to a sharp separation between time-frequency clusters, which makes it easier to identify intrinsic modes and thus leads to a new data-driven mode decomposition. The applications include signal representation, outlier removal, and mode decomposition. On benchmark tests, we show that our approach outperforms other state-of-the-art decomposition methods.
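The randomized sampling idea can be sketched as follows: windowed Fourier features are evaluated only at randomly drawn (time, frequency) pairs rather than on a full spectrogram grid. Everything below (window shape and width, sampling ranges, function names) is an illustrative assumption, not the paper's method.

```python
import numpy as np

def random_spectrogram_features(signal, fs, n_feats=200, win=0.2, rng=None):
    """Evaluate windowed Fourier features at random (time, frequency) pairs.

    signal: samples at rate fs; win: Gaussian window width in seconds.
    Returns the random window centers, random frequencies, and the
    corresponding complex feature values.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    t = np.arange(signal.size) / fs
    centers = rng.uniform(t[0], t[-1], n_feats)   # random window locations
    freqs = rng.uniform(0.0, fs / 2, n_feats)     # random frequencies
    feats = np.empty(n_feats, dtype=complex)
    for j, (c, f) in enumerate(zip(centers, freqs)):
        g = np.exp(-0.5 * ((t - c) / win) ** 2)   # Gaussian window at center c
        feats[j] = np.sum(signal * g * np.exp(-2j * np.pi * f * t)) / fs
    return centers, freqs, feats
```

For a pure tone, the largest-magnitude features concentrate at random frequencies near the tone, which is the cluster separation the sparse approximation exploits.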
Higher order finite difference weighted essentially non-oscillatory (WENO) schemes have been constructed for conservation laws. For multidimensional problems, they offer high order accuracy at a fraction of the cost of a finite volume WENO or DG scheme of comparable accuracy. This makes them quite attractive for several science and engineering applications. But, to the best of our knowledge, such schemes have not been extended to nonlinear hyperbolic systems with non-conservative products. In this paper, we perform such an extension, which improves the domain of applicability of such schemes. The extension is carried out by writing the scheme in fluctuation form. We use the HLLI Riemann solver of Dumbser and Balsara (J. Comput. Phys. 304: 275–319, 2016) as a building block for carrying out this extension. Because of the use of an HLL building block, the resulting scheme has a proper supersonic limit. The use of anti-diffusive fluxes ensures that stationary discontinuities can be preserved by the scheme, further expanding its domain of applicability. Our new finite difference WENO formulation uses the same WENO reconstruction that was used in classical versions, making it very easy for users to transition to the present formulation. For conservation laws, the new finite difference WENO is shown to perform as well as the classical version, with two major advantages: (i) it can capture jumps in stationary linearly degenerate wave families exactly, and (ii) it only requires the reconstruction to be applied once. Several examples from hyperbolic PDE systems with non-conservative products are shown, indicating that the scheme works and achieves its design order of accuracy for smooth multidimensional flows.
Stringent Riemann problems and several novel multidimensional problems drawn from compressible Baer-Nunziato multiphase flow, multiphase debris flow, and two-layer shallow water equations are also shown to document the robustness of the method. For some test problems that require well-balancing, we have even been able to apply the scheme without any modification and obtain good results. Many useful PDEs may have stiff relaxation source terms, for which the finite difference formulation of WENO is shown to provide some genuine advantages.
We present a class of arbitrarily high order, fully explicit kinetic numerical methods in compressible fluid dynamics, in both time and space, which include the relaxation schemes of Jin and Xin. These methods can use CFL numbers larger than or equal to unity on regular Cartesian meshes in the multi-dimensional case. The kinetic models depend on a small parameter that can be seen as a “Knudsen” number, and the method is asymptotic preserving in this Knudsen number. Moreover, the computational cost of the method is of the same order as that of a fully explicit scheme. This work is an extension of Abgrall et al. (2022).
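To convey the underlying relaxation idea, here is a minimal first-order sketch of the Jin-Xin system for linear advection: the stiff source drives v toward the flux f(u), while the transport part is solved exactly along characteristics. This low-order, unit-CFL-free sketch is only an illustration; the paper's methods are high order and kinetic, and all parameters below are assumptions.

```python
import numpy as np

def jin_xin_advection(u0, a_char=1.0, eps=1e-8, cfl=0.5, T=0.5, L=1.0):
    """First-order scheme for the Jin-Xin relaxation system
        u_t + v_x = 0,   v_t + a u_x = (f(u) - v) / eps,
    with linear flux f(u) = a_char * u on a periodic grid.
    """
    n = u0.size
    dx = L / n
    a = max(a_char ** 2, 1.0)        # subcharacteristic condition a >= f'(u)^2
    s = np.sqrt(a)
    dt = cfl * dx / s
    u, v = u0.copy(), a_char * u0    # start on the equilibrium manifold v = f(u)
    for _ in range(int(T / dt)):
        # diagonalize: w± = v ± s u travel with speeds ±s; upwind each
        wp, wm = v + s * u, v - s * u
        wp = wp - (s * dt / dx) * (wp - np.roll(wp, 1))
        wm = wm + (s * dt / dx) * (np.roll(wm, -1) - wm)
        u, v = (wp - wm) / (2 * s), (wp + wm) / 2
        v = (v + (dt / eps) * a_char * u) / (1 + dt / eps)  # implicit stiff source
    return u
```

The implicit treatment of the stiff source keeps the scheme stable for arbitrarily small eps, which is the asymptotic-preserving property in its simplest form; as eps → 0 the update collapses to upwind advection of u at speed a_char.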
In this paper, a new finite element and finite difference (FE-FD) method is developed for anisotropic parabolic interface problems with a known moving interface, using Cartesian meshes. In the spatial discretization, the standard
Graph learning, when used as a semi-supervised learning (SSL) method, performs well for classification tasks with a low label rate. We provide a graph-based batch active learning pipeline for pixel/patch neighborhood multi- or hyperspectral image segmentation. Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local maximum constraint for the active learning acquisition function, which determines the relative importance of each pixel to the classification. This work builds on recent advances in the design of novel active learning acquisition functions (e.g., the Model Change approach in arXiv:2110.07739), while adding important further developments, including patch-neighborhood image analysis and batch active learning methods, to further increase the accuracy and greatly increase the computational efficiency of these methods. In addition to improving accuracy, our approach can greatly reduce the number of labeled pixels needed to achieve the same level of accuracy as randomly selected labeled pixels.
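The graph local maximum constraint itself is simple to state: a node joins the batch only if its acquisition value exceeds that of every graph neighbor. The sketch below shows just this selection rule; the acquisition function (e.g., Model Change) is assumed to be computed elsewhere, and the data structures are illustrative.

```python
import numpy as np

def batch_local_max(acq, neighbors):
    """Select the batch of nodes whose acquisition value is a strict local
    maximum over their graph neighborhood.

    acq: (n,) acquisition values; neighbors: list of neighbor index lists.
    """
    batch = []
    for i, nbrs in enumerate(neighbors):
        if len(nbrs) and acq[i] > max(acq[j] for j in nbrs):
            batch.append(i)
    return batch
```

Because two selected nodes can never be adjacent, the batch is automatically spread out over the graph, which is what makes a single acquisition evaluation yield a diverse batch.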
We present an improved algorithm for computing multiphase flows: the multimaterial Moment-of-Fluid (MOF) algorithm for multiphase flows, initially described by Li et al. (2015), is enhanced to address existing MOF difficulties in computing solutions to problems in which surface tension forces are crucial for understanding salient flow mechanisms. The Continuous MOF (CMOF) method is motivated in this article. The CMOF reconstruction method inherently removes the “checkerboard instability” that persists when using the MOF method on surface-tension-driven multiphase (multimaterial) flows. The CMOF reconstruction algorithm is accelerated by coupling the CMOF method to the level set method and to a decision tree machine learning (ML) algorithm. Multiphase flow examples are shown in two-dimensional (2D), three-dimensional (3D) axisymmetric “RZ”, and 3D coordinate systems. Examples include two-material and three-material multiphase flows: bubble formation, the impingement of a liquid jet on a gas bubble in a cryogenic fuel tank, freezing, and liquid lens dynamics.
We investigate the following inverse problem: starting from the acoustic wave equation, reconstruct a piecewise constant passive acoustic source from a single boundary temporal measurement without knowing the speed of sound. When the amplitudes of the source are known a priori, we prove unique determination of the shape and propose a level set algorithm to reconstruct the singularities. When the singularities of the source are known a priori, we show unique determination of the source amplitudes and propose a least-squares fitting algorithm to recover them. The analysis bridges the low-frequency source inversion problem and the inverse problem of gravimetry. The proposed algorithms are validated and quantitatively evaluated with numerical experiments in 2D and 3D.
Many important problems in science and engineering require solving the so-called parametric partial differential equations (PDEs), i.e., PDEs with varying physical parameters, boundary conditions, computational domain shapes, etc. Typical reduced order modeling techniques accelerate the solution of parametric PDEs by projecting them onto a linear trial manifold constructed in an offline stage. These methods often need a predefined mesh as well as a series of precomputed solution snapshots, and may struggle to balance efficiency and accuracy due to the limitations of the linear ansatz. Utilizing the nonlinear representation power of neural networks (NNs), we propose the Meta-Auto-Decoder (MAD) to construct a nonlinear trial manifold, whose best possible performance is measured theoretically by the decoder width. Based on the meta-learning concept, the trial manifold can be learned in a mesh-free and unsupervised way during the pre-training stage. Fast adaptation to new (possibly heterogeneous) PDE parameters is enabled by searching on this trial manifold, and optionally fine-tuning the trial manifold at the same time. Extensive numerical experiments show that the MAD method converges faster than other deep-learning-based methods without losing accuracy.
We propose a new framework for the sampling, compression, and analysis of distributions of point sets and other geometric objects embedded in Euclidean spaces. Our approach involves constructing a tensor called the RaySense sketch, which captures nearest neighbors from the underlying geometry of points along a set of rays. We explore various operations that can be performed on the RaySense sketch, leading to different properties and potential applications. Statistical information about the data set can be extracted from the sketch, independent of the ray set. Line integrals on point sets can be efficiently computed using the sketch. We also present several examples illustrating applications of the proposed strategy in practical scenarios.
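The core construction can be sketched directly: sample each ray at uniform parameter values and record the nearest data point at every sample, producing a rays-by-samples-by-dimension tensor. The ray distribution, sampling density, and brute-force nearest-neighbor search below are illustrative assumptions, not the paper's specific choices.

```python
import numpy as np

def raysense_sketch(points, n_rays=8, n_samples=16, rng=None):
    """Record the nearest data point at uniform samples along random rays,
    giving an (n_rays, n_samples, d) tensor of sensed points.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    d = points.shape[1]
    origins = rng.uniform(-1.0, 1.0, (n_rays, d))       # random ray origins
    dirs = rng.standard_normal((n_rays, d))             # random ray directions
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    s = np.linspace(-1.0, 1.0, n_samples)               # sample parameters
    ray_pts = origins[:, None, :] + s[None, :, None] * dirs[:, None, :]
    # brute-force nearest neighbor of every ray sample in the point set
    d2 = ((ray_pts[:, :, None, :] - points[None, None, :, :]) ** 2).sum(-1)
    return points[d2.argmin(-1)]
```

Every entry of the sketch is an actual member of the point set, so the tensor carries geometric information about the data along each ray regardless of how the rays are drawn.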
We prove, under mild conditions, the convergence of a Riemannian gradient descent method for a hyperbolic neural network regression model, both in batch gradient descent and stochastic gradient descent. We also discuss a Riemannian version of the Adam algorithm. We show numerical simulations of these algorithms on various benchmarks.
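A minimal sketch of Riemannian gradient descent on the Poincaré ball illustrates the iteration being analyzed: the Euclidean gradient is rescaled by the inverse hyperbolic metric, and a simple projection keeps iterates in the open ball. The generic loss gradient and the Euclidean-step retraction below are assumptions standing in for the paper's hyperbolic network model and its exact update.

```python
import numpy as np

def poincare_rgd(grad_f, x0, lr=0.05, iters=500):
    """Riemannian gradient descent on the Poincare ball.

    The Riemannian gradient rescales the Euclidean one by the inverse metric
    factor ((1 - |x|^2) / 2)^2; the retraction here is a plain Euclidean step
    followed by projection back into the open unit ball.
    """
    x = np.asarray(x0, float)
    for _ in range(iters):
        lam = (1.0 - x @ x) / 2.0
        x = x - lr * (lam ** 2) * grad_f(x)
        nrm = np.linalg.norm(x)
        if nrm >= 1.0:                 # project back into the open ball
            x = x / nrm * (1.0 - 1e-5)
    return x
```

The metric factor shrinks steps near the boundary of the ball, which is why step-size analysis on hyperbolic models differs from the Euclidean case.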
We propose a simple embedding method for computing the eigenvalues and eigenfunctions of the Laplace-Beltrami operator on implicit surfaces. The method follows an embedding approach previously developed for solving the surface eikonal equation: we replace the differential operator on the interface with a standard Cartesian differential operator in a neighborhood of the surface. Our proposed algorithm is easy to implement and efficient. We give two- and three-dimensional numerical examples to demonstrate the effectiveness of the proposed approach.
Anderson acceleration (AA) is an extrapolation technique designed to speed up fixed-point iterations. For optimization problems, we propose a novel algorithm that combines AA with the energy-adaptive gradient method (AEGD) [arXiv:2010.05109]. The feasibility of our algorithm is ensured in light of the convergence theory for AEGD, even though AEGD is not a fixed-point iteration. We provide rigorous convergence rates of AA for gradient descent (GD), showing acceleration by a factor of the gain at each implementation of AA-GD. Our experimental results show that the proposed AA-AEGD algorithm requires little tuning of hyperparameters and exhibits fast convergence.
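For reference, here is a generic sketch of (type-II) Anderson acceleration applied to a plain fixed-point map; the paper couples AA with AEGD rather than this textbook form, and the window size and least-squares details below are standard choices, not the paper's.

```python
import numpy as np

def anderson(g, x0, m=5, iters=50):
    """Type-II Anderson acceleration of the fixed-point iteration x <- g(x).

    Keeps a window of the last m+1 iterates, solves a small least-squares
    problem on the residual differences, and extrapolates.
    """
    x = np.asarray(x0, float)
    X, G = [], []                          # histories of iterates and g-values
    for _ in range(iters):
        gx = g(x)
        X.append(x); G.append(gx)
        if len(X) > m + 1:
            X.pop(0); G.pop(0)
        if len(X) == 1:
            x = gx
            continue
        F = np.array(G) - np.array(X)      # residuals f_i = g(x_i) - x_i
        dF = np.diff(F, axis=0)            # residual differences
        dG = np.diff(np.array(G), axis=0)
        # least-squares weights minimizing the extrapolated residual
        gamma, *_ = np.linalg.lstsq(dF.T, F[-1], rcond=None)
        x = gx - gamma @ dG                # accelerated update
    return x
```

On a linear contraction this recovers the exact fixed point in a handful of iterations, which is the kind of gain-factor acceleration the convergence analysis quantifies.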