2021-12-20, Volume 2, Issue 4

  • research-article
    Xin Liu , Zaiwen Wen , Ya-Xiang Yuan

    Subspace techniques such as Krylov subspace methods are well known and extensively used in numerical linear algebra. They are also ubiquitous and increasingly indispensable in nonlinear optimization due to their ability to handle large-scale problems. There are generally two guiding principles: i) the decision variable is updated in a lower-dimensional subspace; ii) the objective function or constraints are approximated in a suitable smaller subspace of their domain. The key ingredient is the construction of suitable subspaces and subproblems, according to the specific structures of the variables and functions, such that either exact or inexact solutions of the subproblems are readily available and the corresponding computational cost is significantly reduced. Relevant techniques include, but are not limited to, direct combinations, block coordinate descent, active sets, limited memory, Anderson acceleration, subspace correction, sampling, and sketching. This paper gives a comprehensive survey of subspace methods and their recipes in unconstrained and constrained optimization, nonlinear least squares problems, sparse and low-rank optimization, linear and nonlinear eigenvalue computation, semidefinite programming, stochastic optimization, etc. In order to provide helpful guidelines, we emphasize high-level concepts for the development and implementation of practical algorithms within the subspace framework.
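    The first principle above — updating the decision variable only within a low-dimensional subspace — can be illustrated with a minimal sketch (not from the survey): one Galerkin projection step for a symmetric positive definite linear system, where an m-dimensional Krylov basis replaces the full n-dimensional solve.

    ```python
    import numpy as np

    def krylov_subspace_step(A, b, x0, m=5):
        """One subspace iteration for SPD A: build an orthonormal basis V of
        the Krylov subspace span{r, Ar, ..., A^{m-1} r} of the residual r,
        then update x by solving only the m-by-m projected problem
        (V^T A V) y = V^T r instead of the full n-by-n system."""
        r = b - A @ x0
        n = len(b)
        V = np.zeros((n, m))
        V[:, 0] = r / np.linalg.norm(r)
        for j in range(1, m):
            w = A @ V[:, j - 1]
            # Gram-Schmidt: orthogonalize against the basis built so far
            w -= V[:, :j] @ (V[:, :j].T @ w)
            nw = np.linalg.norm(w)
            if nw < 1e-12:          # subspace is invariant; stop enlarging
                V = V[:, :j]
                break
            V[:, j] = w / nw
        # Small projected subproblem -- this is where the cost reduction lives
        y = np.linalg.solve(V.T @ A @ V, V.T @ r)
        return x0 + V @ y
    ```

    The n-dimensional problem is touched only through matrix-vector products; all solves happen in dimension m, which is the computational pattern the survey's two principles share.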

  • research-article
    Anis Theljani , Zakaria Belhachmi , Moez Kallel , Maher Moakher

    We consider a partial differential equation model combining second- and fourth-order operators for solving the geometry inpainting and denoising problems. The model allows the accurate recovery of curvatures and of the singular set of the reconstructed image (edges, corners). The proposed approach permits a dynamical modelling by constructing a family of simple discrete energies that admit a Mumford-Shah-Euler-like functional as a $\mathrm{\Gamma }$-limit. The approximation functionals are built within an adaptive strategy based on two ingredients: first, a fine localization of the singular set using mesh refinement, and second, a local choice of the diffusion coefficients, which modifies the reconstruction operator. Unlike the usual methods, mostly based on a prior guess of the continuous solution and leading to complex nonlinear systems of PDEs, our method consists in solving linear problems and updating the diffusion coefficients. The high order of the operator allows us to perform simultaneously an efficient filtering of the data and the interpolation in the damaged regions. The method turns out to be superior to any second-order model in restoring large gap connections and curvy features. To validate this approach, we compare the results of our method with those of some existing methods in the field of geometry-oriented inpainting, and we present several numerical examples.
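    The "solve linear problems, keep known pixels fixed" core of PDE inpainting can be sketched in a few lines; this toy (not the paper's model, which adds fourth-order terms and adaptive diffusion coefficients) just iterates the discrete second-order Laplace equation on the damaged pixels:

    ```python
    import numpy as np

    def laplace_inpaint(img, mask, n_iter=500):
        """Minimal harmonic (second-order) inpainting sketch: Jacobi
        iterations of the discrete Laplace equation on the damaged pixels
        (mask == True), with the known pixels acting as Dirichlet data.
        Assumes the mask does not touch the image border (np.roll wraps)."""
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # 4-neighbour average = one Jacobi sweep for the 5-point Laplacian
            avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                          + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u[mask] = avg[mask]   # update only the damaged region
        return u
    ```

    Second-order models like this cannot continue curvature across large gaps, which is exactly the limitation the paper's fourth-order terms address.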

  • research-article
    Huanfei Ma , Siyang Leng , Luonan Chen

    Faithfully reconstructing causal networks from observed time series data is fundamental to revealing the intrinsic nature of complex systems. As the network scale increases, indirect causal relations arise due to the transitivity of causation, and existing methods suffer from the curse of dimensionality when eliminating such indirect influences. In this paper, we propose a novel technique to overcome these difficulties by integrating the idea of randomly distributed embedding into conditional Granger causality. Validated on both benchmark and synthetic data sets, our method demonstrates potential applicability to reconstructing high-dimensional causal networks from only short-term time series.
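    For readers unfamiliar with the Granger framework underlying the paper, here is a minimal bivariate sketch (the paper's method is the conditional, high-dimensional version): x Granger-causes y if adding lags of x to an autoregressive model of y reduces the prediction error.

    ```python
    import numpy as np

    def _lags(s, lag):
        """Lag matrix: row t holds s[t-1], ..., s[t-lag] for t = lag..n-1."""
        n = len(s)
        return np.column_stack([s[lag - k:n - k] for k in range(1, lag + 1)])

    def granger_ratio(x, y, lag=2):
        """Toy bivariate Granger statistic: RSS of the restricted model
        (lags of y only) over RSS of the full model (lags of y and x).
        Values well above 1 suggest x -> y."""
        Y = y[lag:]
        Xr = _lags(y, lag)                       # restricted regressors
        Xf = np.column_stack([Xr, _lags(x, lag)])  # + lags of the candidate driver
        def rss(X):
            beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
            e = Y - X @ beta
            return e @ e
        return rss(Xr) / rss(Xf)
    ```

    Run on all ordered pairs of a network, this pairwise test cannot separate direct from indirect links; conditioning on the remaining variables fixes that but blows up the regression dimension, which is the bottleneck the randomly distributed embedding idea targets.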

  • research-article
    Li Zhou , Lihao Yan , Mark A. Caprio , Weiguo Gao , Chao Yang

    We examine the possibility of using a reinforcement learning (RL) algorithm to solve large-scale eigenvalue problems in which the desired eigenvector can be approximated by a sparse vector with at most k nonzero elements, where k is relatively small compared to the dimension of the matrix to be partially diagonalized. This type of problem arises in applications in which the desired eigenvector exhibits localization properties, and in large-scale eigenvalue computations in which the amount of computational resources is limited. When the positions of these nonzero elements can be determined, we can obtain the k-sparse approximation to the original problem by computing the eigenvalue of a k×k submatrix extracted from k rows and columns of the original matrix. We review a previously developed greedy algorithm for incrementally probing the positions of the nonzero elements in a k-sparse approximate eigenvector and show that the greedy algorithm can be improved by using an RL method to refine the selection of k rows and columns of the original matrix. We describe how to represent states, actions, rewards, and policies in an RL algorithm designed to solve the k-sparse eigenvalue problem, and we demonstrate the effectiveness of the RL algorithm on two examples originating from quantum many-body physics.
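    The greedy baseline that the RL method refines can be sketched as follows (an illustrative reconstruction of the idea described in the abstract, not the authors' code): grow the support set one index at a time, each time keeping the index that most lowers the smallest eigenvalue of the resulting submatrix.

    ```python
    import numpy as np

    def greedy_sparse_eig(A, k):
        """Greedy sketch of the k-sparse eigenvalue idea: grow an index set S
        one entry at a time, always adding the index that minimizes the
        smallest eigenvalue of the |S|x|S| principal submatrix A[S, S]."""
        n = A.shape[0]
        S = [int(np.argmin(np.diag(A)))]          # seed: smallest diagonal entry
        while len(S) < k:
            best_j, best_val = -1, np.inf
            for j in range(n):
                if j in S:
                    continue
                T = S + [j]
                val = np.linalg.eigvalsh(A[np.ix_(T, T)])[0]
                if val < best_val:
                    best_j, best_val = j, val
            S.append(best_j)
        S = sorted(S)
        # k-sparse approximation: eigenpair of the k x k submatrix,
        # embedded back into the n-dimensional space on the support S
        w, V = np.linalg.eigh(A[np.ix_(S, S)])
        v = np.zeros(n)
        v[S] = V[:, 0]
        return w[0], v, S
    ```

    By Cauchy interlacing, the submatrix eigenvalue is always an upper bound on the true smallest eigenvalue; greedy choices can get stuck in a poor support set, which is the weakness the RL selection policy is designed to repair.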

  • research-article
    Wei Wang , Caifei Li , Michael K. Ng

    In this paper, we propose and develop a novel saturation-component-based fuzzy Mumford-Shah model for color image segmentation. The main feature of this model is that we determine different segments by using the saturation component in the hue, saturation, and value (HSV) color space instead of the original red, green, and blue (RGB) color space. The proposed model is formulated for multiphase segmentation of color images under the assumption that a piecewise smooth function is approximated by the product of a piecewise constant function and a smooth function. The piecewise constant function and the smooth function are used to represent the different segments and to estimate the bias field, respectively, in the color image. The approximation is calculated based on the saturation component, which is particularly useful for distinguishing edges and capturing the inherent correlation among the red, green, and blue channels in color images. Experimental results are presented to demonstrate that the segmentation performance of the proposed model is much better than that of existing color image segmentation methods.
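    The saturation component that the model segments on has a simple closed form; a vectorized sketch of the standard RGB-to-HSV saturation formula (background, not part of the paper's contribution):

    ```python
    import numpy as np

    def hsv_saturation(rgb):
        """Per-pixel HSV saturation S = (max - min) / max over the RGB
        channels (defined as 0 where max == 0, i.e. for black pixels).
        Works on a single pixel (shape (3,)) or an image (..., 3)."""
        rgb = np.asarray(rgb, dtype=float)
        mx = rgb.max(axis=-1)
        mn = rgb.min(axis=-1)
        return np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-12), 0.0)
    ```

    Saturation collapses the three correlated RGB channels to one chromatic quantity, insensitive to uniform brightness changes, which is what makes it a natural coordinate for separating the bias field from the segments.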

  • research-article
    Liyao Lyu , Keke Wu , Rui Du , Jingrun Chen

    Boundary and initial conditions are important for the well-posedness of partial differential equations (PDEs). Numerically, these conditions can be enforced exactly in classical numerical methods, such as the finite difference and finite element methods. In recent years, we have witnessed growing interest in solving PDEs by deep neural networks (DNNs), especially in the high-dimensional case. However, in the generic situation, a careful literature review shows that boundary conditions cannot be enforced exactly for DNNs, which inevitably leads to a modeling error. In this work, based on the recently developed deep mixed residual method (MIM), we demonstrate how to make DNNs satisfy boundary and initial conditions automatically by using distance functions and explicit constructions. As a consequence, the loss function in MIM is free of the penalty term and does not have any modeling error. Using numerous examples, including Dirichlet, Neumann, mixed, Robin, and periodic boundary conditions for elliptic equations, and initial conditions for parabolic and hyperbolic equations, we show that enforcing exact boundary and initial conditions not only provides a better approximate solution but also facilitates the training process.
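    The distance-function construction can be shown in one dimension with a hedged sketch (my simplified illustration of the general idea, not the paper's MIM formulation): for Dirichlet data u(0) = a, u(1) = b, write the trial solution so the conditions hold identically, whatever the network outputs.

    ```python
    def hard_bc_ansatz(net, a, b):
        """Trial solution on [0, 1] with Dirichlet conditions built in:
            u(x) = (1 - x) * a + x * b + x * (1 - x) * net(x).
        The first two terms interpolate the boundary data; the factor
        x * (1 - x) is a distance-like function vanishing at x = 0 and
        x = 1, so u(0) = a and u(1) = b hold exactly for ANY net."""
        return lambda x: (1.0 - x) * a + x * b + x * (1.0 - x) * net(x)
    ```

    Because the conditions are satisfied by construction, the training loss needs no boundary penalty term, which is precisely why the modeling error vanishes; higher dimensions replace x(1 - x) with a distance function to the boundary.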

  • research-article
    Yingxia Xi , Xia Ji

    The paper presents a holomorphic operator function approach for the Laplace eigenvalue problem using the discontinuous Galerkin method. We rewrite the problem as the eigenvalue problem of a holomorphic Fredholm operator function of index zero. The convergence for the discontinuous Galerkin method is proved by using the abstract approximation theory for holomorphic operator functions. We employ the spectral indicator method to compute the eigenvalues. Extensive numerical examples are presented to validate the theory.
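    The spectral indicator idea used here can be illustrated with a simple matrix analogue (an illustrative sketch, not the paper's operator-function setting): a contour integral of the resolvent trace counts the eigenvalues enclosed by the contour, so evaluating it on candidate regions indicates where eigenvalues lie.

    ```python
    import numpy as np

    def eig_count_inside(A, center, radius, n_quad=64):
        """Contour-integral indicator for a matrix A: the trapezoidal rule on
        a circle approximates (1 / (2*pi*i)) * contour-integral of
        tr((zI - A)^{-1}) dz, which equals the number of eigenvalues of A
        strictly inside the contour (counting multiplicity)."""
        n = A.shape[0]
        total = 0.0 + 0.0j
        for k in range(n_quad):
            theta = 2.0 * np.pi * k / n_quad
            z = center + radius * np.exp(1j * theta)
            dz = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / n_quad)
            # Resolvent trace; a real solver would reuse factorizations here
            total += np.trace(np.linalg.inv(z * np.eye(n) - A)) * dz
        return total / (2j * np.pi)
    ```

    For contours that stay away from the spectrum, the trapezoidal rule converges exponentially in the number of quadrature points, so even coarse quadrature gives a sharp integer count that can then be refined by subdividing the region.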