In the partition of unity finite element method, the nodal basis of the standard linear Lagrange finite element is multiplied by a Pk polynomial basis to form a local basis of an extended finite element space. Such a space contains the P1 Lagrange element space but is a proper subspace of the Pk+1 Lagrange element space on triangular or tetrahedral grids. The approximation order of this extended finite element has been believed to be k in the H1-norm, as proved in the first paper on the partition of unity by Babuska and Melenk. In this work we show that, surprisingly, the approximation order is k+1 in the H1-norm. In addition, we extend the method to rectangular/cuboid grids and prove this sharp convergence order. Numerical verification is carried out with various partition of unity finite elements on triangular, tetrahedral, and quadrilateral grids.
This paper presents a Fourier matching method to rigorously study resonances in a sound-hard slab with a finite number of narrow cylindrical holes. The cross sections of the holes, of diameters
This paper proposes a novel method to establish the well-posedness of the uniaxial perfectly matched layer (UPML) method for two-dimensional acoustic scattering from a compactly supported source in a two-layered medium. We solve a long-standing problem by showing that the truncated layered-medium scattering problem is always resonance free, regardless of the thickness and absorbing strength of the UPML. The main idea is to analyze an auxiliary waveguide problem obtained by truncating the layered-medium scattering problem with the PML in the vertical direction only. The Green's function for this waveguide problem can be constructed explicitly by separation of variables and the Fourier transform. We prove that this construction is always well defined, regardless of the absorbing strength. The well-posedness of the fully UPML-truncated scattering problem then follows by assembling the waveguide Green's function through periodic extension.
The fluctuation of mRNA molecule numbers within an isogenic cell population is primarily attributed to random switching between active (ON) and inactive (OFF) periods of gene transcription. In most studies the waiting times in the ON and OFF states are modeled as exponential distributions. However, increasing data suggest that the residence durations in the ON and OFF states are non-exponentially distributed, in which case the traditional master equations cannot be formulated. By combining Kolmogorov forward equations with alternating renewal processes, we present a novel method to compute the average transcription level and its noise, circumventing the bottleneck of master equations for general ON-OFF switching. As an application, we consider OFF and ON lifetimes with Erlang distributions. We show that: (i) multiple steps from OFF to ON induce oscillatory transcription, while multiple steps from ON to OFF accelerate transcription; (ii) increasing the number of steps between ON and OFF rapidly reduces the transcription noise toward its minimum value. This suggests that a large number of steps between ON and OFF is not needed in the model to capture stochastic transcription data. Our computational approach can further treat a series of transcription cycles whose durations are non-lattice distributed.
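The alternating renewal picture above can be illustrated with a minimal simulation sketch (not the authors' computational method): ON and OFF dwell times are drawn from Erlang distributions whose number of phases controls the variance while the mean is held fixed, and the renewal-reward theorem gives the long-run ON fraction E[ON]/(E[ON]+E[OFF]) independently of the number of phases. The function name and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_on_fraction(k_on, k_off, mean_on, mean_off, n_cycles=20000):
    """Simulate an alternating renewal ON/OFF process with Erlang dwell times.

    An Erlang distribution with k phases and rate k/mean keeps the mean dwell
    time fixed while the variance shrinks as mean^2 / k.  Returns the long-run
    fraction of time the gene spends in the ON state.
    """
    on = rng.gamma(shape=k_on, scale=mean_on / k_on, size=n_cycles)
    off = rng.gamma(shape=k_off, scale=mean_off / k_off, size=n_cycles)
    return on.sum() / (on.sum() + off.sum())

# Renewal-reward theorem: long-run ON fraction = E[ON] / (E[ON] + E[OFF]),
# independent of the number of Erlang phases of either dwell distribution.
frac = simulate_on_fraction(k_on=5, k_off=5, mean_on=1.0, mean_off=3.0)
```

With mean ON time 1 and mean OFF time 3 the ON fraction converges to 1/4; the multi-phase structure changes the fluctuations around that fraction, which is where the noise effects described in the abstract enter.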
This paper is concerned with a C1-conforming Gauss collocation approximation to the solution of a model two-dimensional elliptic boundary value problem. Superconvergence phenomena for the numerical solution at mesh nodes, at roots of a special Jacobi polynomial, and along the Lobatto and Gauss lines are identified with rigorous mathematical proof, when tensor products of C1 piecewise polynomials of degree not more than
Summary. In this work, we delve into the relationship between deep and shallow neural networks (NNs), focusing on the critical points of their loss landscapes. We discover an embedding principle in depth: the loss landscape of a deep NN "contains" all critical points of the loss landscapes of shallower NNs. The key tool for our discovery is a critical lifting that maps any critical point of a shallower network to critical manifolds of any deeper network while preserving the network outputs. To investigate the practical implications of this principle, we conduct a series of numerical experiments. The results confirm that deep networks do encounter these lifted critical points during training, leading to similar training dynamics across varying network depths. We provide theoretical and empirical evidence that, through the lifting operation, the lifted critical points exhibit increased degeneracy. This principle also provides insight into the optimization benefits of batch normalization and larger datasets, and enables practical applications such as network layer pruning. Overall, our discovery of the embedding principle in depth uncovers the depth-wise hierarchical structure of the deep learning loss landscape, which serves as a solid foundation for further study of the role of depth in DNNs.
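The core idea of an output-preserving lift can be sketched with a toy example (this is an illustration of the general idea, not the paper's exact lifting construction): a shallow ReLU network is embedded into a deeper one by inserting an identity layer after a ReLU layer. Since the ReLU output is nonnegative, the inserted layer relu(I·a + 0) returns a unchanged, so the deeper network computes the same function with its extra parameters sitting at a lifted point.

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda x: np.maximum(x, 0.0)

# A shallow network: x -> W2 @ relu(W1 @ x + b1) + b2  (sizes are arbitrary).
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def shallow(x):
    return W2 @ relu(W1 @ x + b1) + b2

# One simple output-preserving lift: insert an identity layer after the ReLU
# layer.  Its pre-activation equals the previous ReLU output, which is
# nonnegative, so relu(I @ a + 0) = a and the deeper network is unchanged.
I4 = np.eye(4)

def lifted(x):
    a = relu(W1 @ x + b1)   # original hidden layer
    a = relu(I4 @ a)        # inserted layer: acts as the identity here
    return W2 @ a + b2

x = rng.normal(size=3)
```

Any perturbation of the inserted weights that keeps the identity action on the reachable (nonnegative) activations leaves the output unchanged, which hints at the increased degeneracy of lifted critical points mentioned above.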
In this paper, we propose a class of stochastic Runge-Kutta (SRK) methods for solving semilinear parabolic equations. Using the nonlinear Feynman-Kac formula, we first write the solution of the parabolic equation in the form of a backward stochastic differential equation (BSDE) and then deduce an ordinary differential equation (ODE) containing conditional expectations with respect to a diffusion process. The time-semidiscrete SRK methods are then developed from this ODE. Under some reasonable constraints on the time step, we theoretically prove the maximum bound principle (MBP) for the proposed methods and obtain their error estimates. By combining them with a Gaussian quadrature rule for approximating the conditional expectations, we further propose first- and second-order fully discrete SRK schemes, which can be written in matrix form. We also rigorously analyze the MBP preservation and error estimates of the fully discrete schemes. Numerical experiments are carried out to verify our theoretical results and to show the efficiency and stability of the proposed schemes.
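The Gaussian-quadrature step mentioned in this abstract can be illustrated in its simplest one-dimensional form (a generic sketch, not the authors' scheme): a conditional expectation over one Brownian increment, E[f(x + sigma*sqrt(h)*Z)] with Z standard normal, becomes a Gauss-Hermite sum after the substitution Z = sqrt(2)*z. The function name and parameters below are illustrative assumptions.

```python
import numpy as np

def cond_expectation(f, x, sigma, h, n_nodes=8):
    """Approximate E[f(x + sigma * sqrt(h) * Z)], Z ~ N(0, 1), by
    Gauss-Hermite quadrature (whose weight function is exp(-z^2))."""
    z, w = np.polynomial.hermite.hermgauss(n_nodes)
    # Substituting Z = sqrt(2) * z turns the Gaussian expectation into the
    # Hermite integral: E[f] = (1/sqrt(pi)) * sum_i w_i f(x + sigma*sqrt(2h)*z_i).
    return (w * f(x + sigma * np.sqrt(2.0 * h) * z)).sum() / np.sqrt(np.pi)

# Sanity check: for f(y) = y^2 the exact value is x^2 + sigma^2 * h,
# and 8-node Gauss-Hermite is exact for polynomials up to degree 15.
val = cond_expectation(lambda y: y**2, x=1.0, sigma=0.5, h=0.1)
```

With n nodes the rule is exact for polynomial integrands of degree up to 2n-1, which is why a small number of nodes suffices in fully discrete schemes of this kind.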