To accurately model flows with shock waves using staggered-grid Lagrangian hydrodynamics, artificial viscosity must be introduced to convert kinetic energy into internal energy, thereby increasing the entropy across shocks. Determining the appropriate strength of the artificial viscosity is an art that depends strongly on the particular problem and on the experience of the researcher. The objective of this study is to pose the search for the appropriate strength of the artificial viscosity as an optimization problem and to solve it using machine learning (ML) tools, specifically surrogate models based on Gaussian process regression (GPR) and Bayesian analysis. We describe the optimization method and discuss various practical details of its implementation. The shock-containing problems to which we apply this method have all been implemented in the LANL code FLAG (Burton in Connectivity structures and differencing techniques for staggered-grid free-Lagrange hydrodynamics, Tech. Rep. UCRL-JC-110555, Lawrence Livermore National Laboratory, Livermore, CA, 1992; Consistent finite-volume discretization of hydrodynamic conservation laws for unstructured grids, Tech. Rep. UCRL-JC-118788, Lawrence Livermore National Laboratory, Livermore, CA, 1994; Multidimensional discretization of conservation laws for unstructured polyhedral grids, Tech. Rep. UCRL-JC-118306, Lawrence Livermore National Laboratory, Livermore, CA, 1994; FLAG, a multi-dimensional, multiple mesh, adaptive free-Lagrange, hydrodynamics code, in: NECDC, 1992). First, we apply ML to find optimal viscosity values for isolated shock problems of different strengths. Second, we apply ML to optimize the viscosity for a one-dimensional (1D) propagating detonation problem based on Zel'dovich-von Neumann-Döring (ZND) detonation theory (Fickett and Davis in Detonation: theory and experiment, Dover Books on Physics, Dover Publications, Mineola, 2000) using a reactive burn model.
We compare results for the default (currently used in FLAG) and optimized values of the artificial viscosity for these problems, demonstrating the potential for significant improvement in the accuracy of computations.
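The surrogate-based optimization loop described above can be sketched in a minimal form. Everything here is an illustrative assumption rather than the authors' actual setup: the squared-exponential kernel, the lower-confidence-bound acquisition rule, and the toy function `simulation_error`, which stands in for running a FLAG shock computation and measuring its error as a function of the viscosity coefficient.

```python
import numpy as np

def rbf_kernel(a, b, length=0.3, var=1.0):
    # squared-exponential kernel between two 1-D point sets
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gpr_posterior(x_train, y_train, x_query, noise=1e-6):
    # standard GP regression posterior mean and variance at x_query
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    Kss = rbf_kernel(x_query, x_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v * v, axis=0)
    return mu, np.maximum(var, 0.0)

def simulation_error(c_visc):
    # hypothetical stand-in for the true objective (e.g., the error of
    # a shock computation as a function of the viscosity coefficient);
    # the real objective requires running the hydro code
    return (c_visc - 1.2) ** 2 + 0.05 * np.sin(8.0 * c_visc)

x = np.array([0.2, 1.0, 2.0])         # initial design points
y = simulation_error(x)
grid = np.linspace(0.1, 2.5, 200)     # candidate viscosity coefficients
for _ in range(10):
    mu, var = gpr_posterior(x, y, grid)
    # lower-confidence-bound acquisition: trade off mean vs. uncertainty
    cand = grid[np.argmin(mu - 2.0 * np.sqrt(var))]
    x = np.append(x, cand)
    y = np.append(y, simulation_error(cand))
best = x[np.argmin(y)]                # recommended viscosity coefficient
```

Each iteration of the loop plays the role of one additional hydrocode run, so the GPR surrogate keeps the number of expensive simulations small.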
We explore an intersection-based remap method between meshes consisting of isoparametric elements. We present algorithms for the case of serendipity isoparametric elements (QUAD8 elements) and piecewise-constant (cell-centered) discrete fields. We demonstrate the convergence properties of this remap method with a few numerical experiments.
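The idea of an intersection-based remap of a piecewise-constant field can be illustrated in one dimension, where cell intersections are just overlapping intervals; this is a simplified 1-D analogue and an assumption for illustration only, since the abstract concerns 2-D meshes of curved QUAD8 elements.

```python
import numpy as np

def remap_pw_constant(src_edges, src_vals, tgt_edges):
    """Conservative intersection-based remap of a piecewise-constant
    (cell-centered) field from one 1-D mesh to another: each target
    cell value is the overlap-weighted average of the source values."""
    tgt_vals = np.zeros(len(tgt_edges) - 1)
    for t in range(len(tgt_edges) - 1):
        a, b = tgt_edges[t], tgt_edges[t + 1]
        acc = 0.0
        for s in range(len(src_edges) - 1):
            lo = max(a, src_edges[s])
            hi = min(b, src_edges[s + 1])
            if hi > lo:                       # nonempty intersection
                acc += src_vals[s] * (hi - lo)
        tgt_vals[t] = acc / (b - a)
    return tgt_vals
```

By construction this transfer conserves the total integral of the field whenever the two meshes cover the same domain, which is the defining property an intersection-based remap carries over to the isoparametric setting.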
In this paper, we design high-order Runge-Kutta discontinuous Galerkin (RKDG) methods with multi-resolution weighted essentially non-oscillatory (multi-resolution WENO) limiters to compute compressible steady-state problems on triangular meshes. A troubled-cell indicator, extended from structured to unstructured meshes, is constructed to identify the triangular cells in which the limiting procedure is required. In such troubled cells, the multi-resolution WENO limiting methods are applied to the hierarchical
We analyze mimetic properties of a conservative finite-volume (FV) scheme on polygonal meshes used for modeling solute transport on a surface with variable elevation. Polygonal meshes not only provide enormous mesh generation flexibility, but also tend to improve stability properties of numerical schemes and reduce bias towards any particular mesh direction. The mathematical model is given by a system of weakly coupled shallow water and linear transport equations. The equations are discretized using different explicit cell-centered FV schemes for flow and transport subsystems with different time steps. The discrete shallow water scheme is well balanced and preserves the positivity of the water depth. We provide a rigorous estimate of a stable time step for the shallow water and transport scheme and prove a bounds-preserving property of the solute concentration. The scheme is second-order accurate over fully wet regions and first-order accurate over partially wet or dry regions. Theoretical results are verified with numerical experiments on rectangular, triangular, and polygonal meshes.
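The rigorous stable-time-step estimate mentioned above is derived for the multi-dimensional polygonal-mesh scheme; the familiar 1-D shallow water analogue can be sketched as follows, with the CFL number and the dry-cell handling as illustrative assumptions.

```python
import numpy as np

def shallow_water_dt(h, u, dx, g=9.81, cfl=0.4):
    """Stable explicit time step from the usual 1-D CFL condition for
    the shallow water equations: dt <= cfl * dx / max(|u| + sqrt(g h)),
    where |u| + sqrt(g h) bounds the fastest characteristic speed.
    Dry cells (h = 0) are excluded from the wave-speed maximum."""
    wet = h > 0.0
    speed = np.abs(u[wet]) + np.sqrt(g * h[wet])
    return cfl * dx / np.max(speed)
```

In the paper's setting the flow and transport subsystems advance with different time steps, each bounded by an estimate of this kind; the sketch shows only the generic wave-speed restriction.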
The deferred correction (DeC) is an iterative procedure, characterized by increasing accuracy at each iteration, that can be used to design numerical methods for systems of ODEs. The main advantage of this framework is that it yields arbitrarily high-order methods automatically, and these can be put in Runge-Kutta (RK) form. The drawback is a larger computational cost compared with the most commonly used RK methods. To reduce this cost, in an explicit setting, we propose an efficient modification: we introduce interpolation processes between the DeC iterations, decreasing the computational cost associated with the low-order ones. We provide the Butcher tableaux of the new modified methods and study their stability, showing that in some cases the computational advantage does not affect the stability. The flexibility of the novel modification allows nontrivial applications to PDEs and the construction of adaptive methods. The good performance of the introduced methods is broadly tested on several benchmarks in both ODE and PDE contexts.
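A minimal explicit DeC-type iteration of the kind this framework builds on can be sketched as follows. The equispaced subnodes, the Picard-style update, and the parameter choices are illustrative assumptions; the paper's contribution, interpolating between iterations to cheapen the low-order ones, is not reproduced here.

```python
import numpy as np

def lagrange_weights(tau):
    """theta[m, j] = integral from tau[0] to tau[m] of the j-th
    Lagrange basis polynomial on the subnodes tau."""
    n = len(tau)
    theta = np.zeros((n, n))
    for j in range(n):
        p = np.poly1d([1.0])
        for i in range(n):
            if i != j:
                p = p * np.poly1d([1.0, -tau[i]]) * (1.0 / (tau[j] - tau[i]))
        P = p.integ()                    # antiderivative, constant = 0
        theta[:, j] = P(tau) - P(tau[0])
    return theta

def dec_step(f, y0, dt, M=3, K=5):
    """One step of a minimal explicit DeC-type iteration on M+1
    equispaced subnodes for y' = f(y): each of the K iterations raises
    the order of accuracy by one, up to the order of the underlying
    quadrature (here 4 for M = 3)."""
    tau = np.linspace(0.0, 1.0, M + 1)
    theta = lagrange_weights(tau)
    y = np.repeat(float(y0), M + 1)      # iteration 0: constant state
    for _ in range(K):
        y = y0 + dt * (theta @ f(y))     # correction sweep
    return y[-1]
```

Each sweep costs M+1 evaluations of f, which is exactly the extra cost the interpolation-based modification in the paper is designed to reduce for the early, low-order iterations.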
We construct an unconventional divergence-preserving discretization of updated Lagrangian ideal magnetohydrodynamics (MHD) over simplicial grids. The cell-centered finite-volume (FV) method employed to discretize the conservation laws of volume, momentum, and total energy is rigorously the same as the one developed to simulate the hyperelasticity equations. By construction, this moving-mesh method ensures compatibility between the mesh displacement and the approximation of the volume flux by means of the nodal velocity and the attached unit corner normal vector, which is nothing but the partial derivative of the cell volume with respect to the node coordinate under consideration. This is precisely the definition of compatibility with the Geometrical Conservation Law, which is the cornerstone of any proper multi-dimensional moving-mesh FV discretization. The momentum and total energy fluxes are approximated using the partition of cell faces into sub-faces and the concept of the sub-face force, that is, the traction force attached to each sub-face impinging at a node. We observe that the time evolution of the magnetic field can be simply expressed in terms of the deformation gradient, which characterizes the Lagrange-to-Euler mapping. In this framework, the divergence of the magnetic field is conserved in time thanks to the Piola formula. Therefore, we solve the fully compatible updated Lagrangian discretization of the deformation gradient tensor to update the cell-centered value of the magnetic field in a simple manner. Finally, the sub-face traction force is expressed in terms of the nodal velocity to ensure a semi-discrete entropy inequality within each cell. The conservation of momentum and total energy is recovered by prescribing the balance of all the sub-face forces attached to the sub-faces impinging at a given node. This balance corresponds to a vectorial system satisfied by the nodal velocity.
This system always admits a unique solution, which provides the nodal velocity. The robustness and accuracy of this unconventional FV scheme have been demonstrated on various representative test cases. Finally, it is worth emphasizing that once an updated Lagrangian code for solving hyperelasticity is available, one also obtains an almost free updated Lagrangian code for solving ideal MHD that exactly ensures compatibility with the involution constraint on the magnetic field at the discrete level.
This is the second part of our series of works on failure-informed adaptive sampling for physics-informed neural networks (PINNs). In our previous work (SIAM J. Sci. Comput. 45: A1971–A1994), we presented an adaptive sampling framework using the failure probability as the posterior error indicator, where a truncated Gaussian model was adopted for estimating the indicator. Here, we present two extensions of that work. The first extension combines the framework with a re-sampling technique, so that the new algorithm can maintain a constant training size. This is achieved through a cosine-annealing schedule, which gradually shifts the sampling of collocation points from uniform to adaptive as training progresses. The second extension uses the subset simulation (SS) algorithm as the posterior model (instead of the truncated Gaussian model) for estimating the error indicator; SS can more effectively estimate the failure probability and generate new effective training points in the failure region. We investigate the performance of the new approach on several challenging problems, and numerical experiments demonstrate a significant improvement over the original algorithm.
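The constant-size re-sampling with a cosine-annealing schedule can be sketched as follows; the exact schedule and the Gaussian proposal standing in for the failure-region sampler are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def adaptive_fraction(epoch, total_epochs):
    """Cosine-annealing schedule: the fraction of collocation points
    drawn from the adaptive (failure-region) proposal grows smoothly
    from 0 at the start of training to 1 at the end."""
    return 0.5 * (1.0 - np.cos(np.pi * epoch / total_epochs))

def resample(n_points, epoch, total_epochs, adaptive_sampler, rng):
    """Rebuild the training set at a fixed size n_points, mixing
    uniform points with points from the adaptive sampler according
    to the current schedule value."""
    frac = adaptive_fraction(epoch, total_epochs)
    n_adapt = int(round(frac * n_points))
    uniform = rng.uniform(0.0, 1.0, size=n_points - n_adapt)
    adaptive = adaptive_sampler(n_adapt, rng)
    return np.concatenate([uniform, adaptive])
```

In the full algorithm the adaptive sampler would be driven by the failure-probability indicator (e.g., via subset simulation); here any callable returning candidate points in the failure region plays that role.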
In this paper, a new, efficient, and at the same time very simple and general class of thermodynamically compatible finite volume schemes is introduced for the discretization of nonlinear, overdetermined, and thermodynamically compatible first-order hyperbolic systems. By construction, the proposed semi-discrete method satisfies an entropy inequality and is nonlinearly stable in the energy norm. A very peculiar feature of our approach is that the entropy is discretized directly, while total energy conservation is achieved as a mere consequence of the thermodynamically compatible discretization. The new schemes can be applied to a very general class of nonlinear systems of hyperbolic PDEs, including both conservative and non-conservative products, as well as potentially stiff algebraic relaxation source terms, provided that the underlying system is overdetermined and therefore satisfies an additional extra conservation law, such as the conservation of the total energy density. The proposed family of finite volume schemes is based on the seminal work of Abgrall [
Adaptive mesh refinement (AMR) is widely practiced in the context of high-dimensional, mesh-based computational models. However, it is in its infancy in the context of low-dimensional, generalized-coordinate-based computational models such as projection-based reduced-order models. This paper presents a complete framework for projection-based model order reduction (PMOR) of nonlinear problems in the presence of AMR that builds on elements of existing methods and augments them with critical new contributions. In particular, it proposes an analytical algorithm for computing a pseudo-meshless inner product between adapted solution snapshots for the purposes of clustering and PMOR. It exploits hyperreduction, specifically the energy-conserving sampling and weighting hyperreduction method, to deliver the desired computational gains for nonlinear and/or parametric problems. Most importantly, the proposed framework for PMOR in the presence of AMR capitalizes on the concept of state-local reduced-order bases to make the most of the notion of a supermesh, while achieving computational tractability. Its features are illustrated with CFD applications grounded in AMR, and its significance is demonstrated by the reported wall-clock speedup factors.
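For readers unfamiliar with PMOR, the basic ingredient, a reduced-order basis compressed from solution snapshots, can be sketched with standard proper orthogonal decomposition (POD) via the SVD. This is a generic illustration only; the paper's state-local bases, supermesh inner products, and hyperreduction go well beyond it, and the energy threshold below is an assumed parameter.

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Truncated POD basis from a snapshot matrix (one solution state
    per column): keep the fewest left singular vectors capturing the
    requested fraction of the snapshot energy (sum of squared
    singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]
```

Projecting the full-order state onto the span of this basis is what turns a high-dimensional mesh-based model into a low-dimensional generalized-coordinate model, which is why AMR, by changing the mesh underlying the snapshots, complicates the construction.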
This contribution is dedicated to the celebration of Rémi Abgrall’s accomplishments in Applied Mathematics and Scientific Computing during the conference “Essentially Hyperbolic Problems: Unconventional Numerics, and Applications”. Compared with classical Finite Element Methods, Trefftz methods are unconventional because of the way the basis functions are generated. Trefftz discontinuous Galerkin (TDG) methods have recently shown potential for the numerical approximation of transport equations [