Jincheng College, Nanjing University of Aeronautics and Astronautics, Nanjing 210012, China
sunleiguanyong@sohu.com
Abstract
This paper gives the truncated version of the generalized minimum backward error algorithm (GMBACK), namely the incomplete generalized minimum backward perturbation algorithm (IGMBACK), for large nonsymmetric linear systems. It is based on an incomplete orthogonalization of the Krylov vectors in question and gives an approximate or quasi-minimum backward perturbation solution over the Krylov subspace. Theoretical properties of IGMBACK, including finite termination and the existence and uniqueness of the solution, are discussed in detail, and practical implementation issues associated with the IGMBACK algorithm are considered. Numerical experiments show that the IGMBACK method is usually more efficient than GMBACK and GMRES, and that IGMBACK and GMBACK often have better convergence behavior than GMRES. In particular, for sensitive matrices and right-hand sides parallel to the left singular vectors corresponding to the smallest singular values of the coefficient matrices, GMRES does not necessarily converge, whereas IGMBACK and GMBACK usually converge and outperform GMRES.
Lei SUN. An incomplete generalized minimum backward perturbation algorithm for large nonsymmetric linear systems. Front. Math. China, 2023, 18(3): 203-222. DOI: 10.3868/s140-DDD-023-0014-x
1 Introduction
In many applied sciences and engineering computations, it is often necessary to solve large nonsymmetric sparse linear systems
$$Ax = b, \qquad (1)$$
where $A \in \mathbb{R}^{n\times n}$ is nonsingular and $b \in \mathbb{R}^n$. Krylov subspace methods [20] are very effective for solving (1). Krylov subspace methods such as the generalized minimum residual method (GMRES algorithm [4, 22]) often use the residual norm as the termination criterion. If the approximate solution is accurate, then the residual norm is small. Conversely, a small residual norm does not guarantee that the approximate solution is accurate, especially when $A$ is ill-conditioned. To overcome this shortcoming of the residual norm as a termination criterion, the idea of using the backward perturbation norm [10] as the termination condition was proposed in [1]. When solving nonsymmetric linear systems (1), Kasenally minimized the backward perturbation norm over the Krylov subspace by seeking an approximate solution $x_m$ that satisfies the perturbed equation $(A+\Delta A)x_m = b$, that is, $x_m$ solves
$$\min\{\|\Delta A\|_F : (A+\Delta A)x_m = b,\ x_m \in x_0 + \mathcal{K}_m\}. \qquad (2)$$
This yields the generalized minimum backward perturbation method (GMBACK algorithm [13]) for solving large nonsymmetric linear systems. In this notation, $x_m$ denotes the approximate solution of (1), of the form $x_m = x_0 + z_m$ with $z_m$ in the Krylov subspace $\mathcal{K}_m$, where $x_0$ is the initial estimate of the solution. $\Delta A$ and $\Delta b$ denote perturbations of the matrix $A$ and the vector $b$, respectively, and together they form a joint backward perturbation matrix. In the late 1990s, Kasenally, Simoncini and Zhihao Cao generalized the GMBACK algorithm by perturbing $b$ in addition to $A$: $x_m$ is chosen to minimize the Frobenius norm of the joint backward perturbation matrix subject to $(A+\Delta A)x_m = b + \Delta b$.
In this way the minimum joint backward perturbation method for solving large nonsymmetric linear systems is obtained (Minpert algorithm [8, 14]). The GMRES algorithm, in turn, can be regarded as minimizing the perturbation norm of $b$: it finds an approximate solution $x_m$ satisfying $Ax_m = b + \Delta b$ and minimizing $\|\Delta b\|_2$.
Therefore, the GMBACK and GMRES algorithms can be regarded as minimizing the perturbation norm of the coefficient matrix $A$ and of the vector $b$, respectively, while the Minpert algorithm minimizes the norm of the joint backward perturbation matrix involving both $A$ and $b$.
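For a fixed approximate solution $x$, the three minimization problems above admit simple closed-form minima. The following small Python sketch (not part of the original paper; it uses the classical Rigal-Gaches-type formulas) computes the three backward perturbation norms so that they can be compared with the residual norm.

```python
import numpy as np

def backward_errors(A, b, x):
    """Minimum-norm backward perturbations attainable for a fixed approximate
    solution x (classical Rigal-Gaches-type formulas, unweighted):
      - perturb A only:   min ||dA||_F  s.t. (A+dA)x = b        -> ||r|| / ||x||
      - perturb b only:   min ||db||_2  s.t. A x = b + db       -> ||r||
      - perturb jointly:  min ||[dA, db]||_F                    -> ||r|| / sqrt(1+||x||^2)
    where r = b - A x."""
    r = b - A @ x
    nr, nx = np.linalg.norm(r), np.linalg.norm(x)
    return nr / nx, nr, nr / np.sqrt(1.0 + nx**2)

# Tiny illustration with random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
x_true = rng.standard_normal(50)
b = A @ x_true
x_approx = x_true + 1e-6 * rng.standard_normal(50)
print(backward_errors(A, b, x_approx))
```

When $\|x\|$ is large the three measures can differ by orders of magnitude even though the residual is the same, which is precisely why a backward perturbation norm is preferable as a stopping criterion.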
The GMBACK, GMRES and Minpert algorithms all use the Arnoldi process to generate a set of basis vectors of the Krylov subspace $\mathcal{K}_m$. This means that these algorithms use long recurrences, so the computational work and storage grow dramatically with the number of steps, and the algorithms become impractical once the number of steps is large. Therefore, these algorithms usually need to be restarted, but for difficult problems, even when combined with preconditioning techniques, the number of steps per restart required to ensure convergence can still be quite large. To overcome the drawback of long recurrences, a popular technique is to resort to truncation strategies. The idea is to use only a few, rather than all, of the previously computed vectors in a new short recurrence for computing the subsequent vectors, which greatly reduces the computational work and storage. The truncated form of the GMRES algorithm is given in [5, 6]: the quasi-GMRES method based on incomplete orthogonalization (QGMRES), or incomplete generalized minimum residual method (IGMRES). Sun Lei gave the truncated form of the Minpert algorithm in [25]: the incomplete minimum joint backward perturbation method (IMinpert algorithm). However, the truncated form of the GMBACK algorithm has not yet been given. In this paper, by means of the incomplete orthogonalization process [12, 20, 21], a set of basis vectors of $\mathcal{K}_m$ is generated and the truncated form of the GMBACK algorithm is obtained: the incomplete generalized minimum backward perturbation method (IGMBACK). As in GMBACK, the approximate solution of IGMBACK is generated by solving the minimization problem (2).
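The incomplete orthogonalization process referred to here is the $k$-truncated Arnoldi recurrence of [12, 20, 21]. The following Python sketch (an illustration under standard IOM assumptions, not the paper's code) shows the short recurrence: each new vector is orthogonalized only against the $k$ most recent basis vectors.

```python
import numpy as np

def incomplete_arnoldi(A, r0, m, k):
    """k-truncated (incomplete) orthogonalization process: each new vector is
    orthogonalized against only the k most recent basis vectors, producing a
    banded upper Hessenberg H_bar with A V_m = V_{m+1} H_bar."""
    n = len(r0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(max(0, j - k + 1), j + 1):   # only the last k vectors
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0.0:                      # lucky breakdown
            return V[:, :j + 1], H[:j + 2, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```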
The structure of this paper is as follows. Section 2 gives the theoretical derivation of the IGMBACK algorithm, states the algorithm, and studies some of its theoretical properties, including finite termination and the existence and uniqueness of the solution. Section 3 describes the practical implementation of the IGMBACK algorithm. In Section 4, numerical examples show that IGMBACK is generally more efficient than GMBACK and GMRES, and that IGMBACK and GMBACK generally converge better than GMRES. In particular, the restarted GMRES algorithm does not necessarily converge when the coefficient matrix is a sensitive matrix and the right-hand side vector is parallel to the left singular vector corresponding to the smallest singular value of the coefficient matrix, whereas the IGMBACK and GMBACK algorithms generally converge and do so better than GMRES. Section 5 summarizes the paper and outlines future work.
2 IGMBACK algorithm
2.1 Analysis of the backward perturbation matrix
The following lemma gives the general formula for the backward perturbation matrix $\Delta A$.
Lemma 2.1 Suppose that an incomplete orthogonalization process of $m$ steps has been performed, yielding sets of basis vectors of $\mathcal{K}_m$ and $\mathcal{K}_{m+1}$, collected in the matrices $V_m$ and $V_{m+1}$, together with a banded upper Hessenberg matrix $\bar{H}_m$ satisfying
$$AV_m = V_{m+1}\bar{H}_m,$$
where removing the last row of $\bar{H}_m$ gives $H_m$. The approximate solution of equation (1) can be written as
$$x_m = x_0 + V_m y, \qquad y \in \mathbb{R}^m. \qquad (3)$$
Then the set of backward perturbation matrices $\Delta A$ satisfying $(A + \Delta A)x_m = b$ can be expressed in the form (4), in which one factor may be chosen arbitrarily.
Proof Substituting (3) into $(A + \Delta A)x_m = b$, we obtain the corresponding equation for $\Delta A$. Thus, for any choice of the arbitrary factor, we immediately obtain (4) (see [3]).
By Lemma 2.1, we can further determine the backward perturbation matrix of minimum Frobenius norm in (4) and its norm.
Theorem 2.1 The backward perturbation matrix with the minimum Frobenius norm among those of the form (4) can be expressed as
and the corresponding backward perturbation norm is given accordingly.
Proof We seek the backward perturbation matrix of minimum Frobenius norm among those of the form (4). According to Lemma 2.1, let
We can write this quantity as
where vec(·) denotes stacking the columns of a matrix into a single vector and $\otimes$ denotes the Kronecker (tensor) product. From the above equation, the minimum is to be found; by standard results of optimization theory, the minimizer and the minimum value follow, and the conclusion of the theorem holds. □
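To illustrate the device used in the proof, the sketch below (hypothetical, restricted to the simplest constraint $\Delta A\,x = r$ of perturbing $A$ only, and not the paper's general formula (4)) solves for the minimum-Frobenius-norm perturbation through the vec/Kronecker identity and checks it against the familiar rank-one formula.

```python
import numpy as np

# The proof uses vec(.) and the Kronecker product to turn a matrix constraint
# into a linear system in vec(dA):  dA @ x = r  <=>  (x^T kron I) vec(dA) = r.
# Its minimum-norm solution (via the pseudoinverse) recovers the rank-one
# matrix dA = r x^T / ||x||^2 with ||dA||_F = ||r|| / ||x||.
rng = np.random.default_rng(1)
n = 6
x = rng.standard_normal(n)
r = rng.standard_normal(n)

M = np.kron(x.reshape(1, -1), np.eye(n))        # (x^T kron I), shape (n, n*n)
vec_dA = np.linalg.pinv(M) @ r                  # minimum-norm solution
dA = vec_dA.reshape(n, n, order="F")            # undo the column-stacking vec(.)

assert np.allclose(dA @ x, r)
assert np.allclose(dA, np.outer(r, x) / (x @ x))
print(np.linalg.norm(dA, "fro"), np.linalg.norm(r) / np.linalg.norm(x))
```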
2.2 Derivation of the IGMBACK algorithm
This section derives the IGMBACK algorithm. Lemma 2.1 gives the general formula (4) for the backward perturbation matrix $\Delta A$. In the proof of Theorem 2.1, we constructed the objective function from the general formula (4); its only free variables are $y$ and the arbitrary factor, so the minimization problem (2) can be transformed accordingly, i.e.,
According to Theorem 2.1, problem (5) can be further transformed into
Since $V_{m+1}$ does not have orthonormal columns, solving (6) directly can be computationally expensive, and since
we can choose the relevant quantity so that the bound above is as small as possible. That is, problem (6) is approximated by
For convenience, we define two matrices and , where
The following theorem will give the solution to the minimum value problem (8).
Theorem 2.2 Assume that the $m$-step incomplete orthogonalization process has been performed and that $x_m$ in problem (8) has the form (3). Let all generalized eigenvalues of the pencil, together with their corresponding eigenvectors, be given. If the last element of the eigenvector corresponding to the minimum generalized eigenvalue is nonzero, then the solution of the minimization problem (8) satisfies
and
Proof The preceding reasoning eventually transforms problem (2) into the minimization problem (8). By
where the quantity on the right is the minimum generalized eigenvalue of the pencil. Since the last component of the corresponding eigenvector is nonzero, the solution of (8) can be computed from equation (10). From (7), we immediately obtain (11).
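Theorem 2.2 reduces the minimization to the smallest generalized eigenpair of a symmetric pencil. A minimal SciPy sketch of that step is given below; the matrix names F and G, and the sign and scaling used when forming the approximate solution, are placeholders rather than the paper's notation.

```python
import numpy as np
from scipy.linalg import eigh

def smallest_generalized_eigenpair(F, G):
    """Smallest eigenpair of the symmetric pencil F z = mu G z (G positive
    definite).  F and G stand for the small matrices built from the
    incomplete-orthogonalization quantities; the names are placeholders."""
    mu, Z = eigh(F, G)               # eigenvalues returned in ascending order
    return mu[0], Z[:, 0]

def form_solution(x0, Vm, z):
    """Recover the approximate solution from the minimizing eigenvector
    z = [y; zeta] by scaling with its last component, as in (10); the sign and
    scaling convention here is illustrative only."""
    y, zeta = z[:-1], z[-1]
    if zeta == 0.0:
        raise RuntimeError("last eigenvector component vanishes: no solution of form (10)")
    return x0 + Vm @ (y / (-zeta))
```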
By Theorem 2.2, we obtain the IGMBACK algorithm. In order to reduce the storage and computational cost, the algorithm in this paper is used in restarted form, with $m$ steps per cycle. The main steps are given as follows.
Algorithm 1 Restarted IGMBACK($m, k$):
① Choose the initial estimate $x_0$ of (1) and the truncation parameter $k$ ($1 \le k \le m$); compute $r_0 = b - Ax_0$ and $v_1 = r_0/\beta$, where $\beta = \|r_0\|_2$.
② Incomplete orthogonalization process: perform $m$ steps of the incomplete orthogonalization process to compute the basis vectors $v_{j+1}$ and the entries $h_{ij}$, for $j = 1, 2, \dots, m$:
i. Compute $w = Av_j$.
ii. For $i = \max(1, j-k+1), \dots, j$, compute $h_{ij} = (w, v_i)$ and $w = w - h_{ij}v_i$.
iii. Compute $h_{j+1,j} = \|w\|_2$. If $h_{j+1,j} \ne 0$, compute $v_{j+1} = w/h_{j+1,j}$; otherwise stop.
③ Solve the generalized eigenvalue problem (13), where the matrices involved are defined in (9).
④ Compute the approximate solution $x_m$ from equation (10) in Theorem 2.2.
⑤ Restart: form $x_m$. If the termination condition is met, stop; otherwise set $x_0 := x_m$, compute $r_0 = b - Ax_0$ and $v_1 = r_0/\beta$, where $\beta = \|r_0\|_2$, and return to step ②.
Denote
Therefore, the generalized eigenvalue problem (13) can then be written as .
The minimum value problem (12) can be rewritten as
Using equation (15), we can give an estimate of the range of the quantity appearing in the termination condition of the algorithm. This is shown in the following theorem. Here $x_m$ denotes the approximate solution of the IGMBACK algorithm.
Theorem 2.3 Assume that $m$ steps of incomplete orthogonalization have been performed. Let denote the maximum singular value of , and let denote the minimum singular values of , respectively. Let . Then
Proof By employing equation (15), we have
and
Thus, the conclusion of the theorem holds.□
2.3 Theoretical study of the IGMBACK algorithm
This section presents a theoretical study of the IGMBACK algorithm in terms of finite termination, existence and uniqueness of the solution.
2.3.1 Finite termination of the algorithm
Suppose that after $j$ steps of incomplete orthogonalization we get $h_{j+1,j} = 0$, i.e., the corresponding matrix is singular, so that the vector $v_{j+1}$ cannot be formed. When this happens, we call it a lucky breakdown [22].
Let the column vectors of $V_m$ be linearly independent, and let the backward perturbation matrix of minimum Frobenius norm generated by the IGMBACK algorithm satisfy when . According to Theorem 2.1, a sufficient condition for is , and then we have
In fact, assume and . Then , and thus is an exact solution of the system of equations (1). Conversely, if , then . Let , where . From the last row of the equation , we get . If , then , and thus , which contradicts the previous assumption; hence .
We know that the algorithm can execute at most $n$ steps before a lucky breakdown occurs, so the following corollary holds.
Corollary 2.1 For any initial guess and right-hand side, the IGMBACK algorithm converges in at most $n$ steps.
According to the above discussion, is assumed in the rest of this paper. That is, is assumed to be non-singular.
2.3.2 Existence and uniqueness of solutions
There are two cases in which the solution of the algorithm fails to exist: one is when the generalized eigenvalue problem has degenerate generalized eigenvalues, and the other is when the last component of the eigenvector in (10) vanishes.
Lemma 2.2 Suppose the incomplete orthogonalization process of $m$ steps has been performed. Then the necessary and sufficient condition for the matrix in question to be singular is or .
Proof By basic linear algebra, the matrix is singular if and only if the rank of the matrix , where is as in (9). On the one hand, if the matrix is singular, then , i.e., its set of column vectors is linearly dependent. The column vectors of are linearly independent, so can be uniquely linearly expressed in terms of the column vectors of , i.e., . On the other hand, if or , i.e., can be linearly expressed in terms of the column vectors of , then the set of column vectors of is linearly dependent. Thus , i.e., the matrix is singular.
By Lemma 2.2, the matrix is potentially singular. In the actual implementation of the IGMBACK algorithm, we choose so that , which ensures that the matrix is nonsingular. Thus, the generalized eigenvalue problem is a non-degenerate generalized eigenvalue problem [7], without degenerate generalized eigenvalues and eigenvectors. In general, the dimension $m$ chosen in the IGMBACK algorithm is much smaller than the order $n$ of $A$. Therefore, there is plenty of scope for choosing the relevant vector in the corresponding vector space so that . Throughout the rest of the paper, we assume that , i.e., that the matrix is nonsingular.
The following theorem gives a sufficient condition under which the IGMBACK algorithm fails to produce an approximate solution when the multiplicity of the minimum generalized eigenvalue is 1.
Theorem 2.4 Suppose is the pair consisting of the minimum generalized eigenvalue of and a corresponding eigenvector, that the matrix is nonsingular, and that the multiplicity of the minimum generalized eigenvalue is 1. Let . Then
Proof Let . Substituting the expressions for in (14) into , we have
From the first block row of the matrix equation above, we get . From the second block row we get , i.e., , that is,
Assume , and . Then is an eigenvector of , and the corresponding generalized eigenvalue is . Since the matrix is nonsingular, there are no degenerate generalized eigenvalues and eigenvectors, and since the multiplicity of the minimum generalized eigenvalue is 1, this is the eigenvector corresponding to it. Therefore, we must have .
The existence and uniqueness of the solution are discussed further below. For convenience, according to (14) and (8), denote
Let’s start with a lemma.
Lemma 2.3 Suppose that an incomplete orthogonalization process of $m$ steps has been performed, and let and denote all generalized eigenvalues of and , respectively, arranged in non-increasing order. Then
Proof Let be all generalized eigenpairs of . According to Subsection 2.3.1, if , then the matrices involved are nonsingular. Therefore, we can write as , where each is the reciprocal of the corresponding . The ordering of the former determines that of the latter. By
where denotes the elementary permutation matrix obtained by interchanging the -th and -th columns of the identity matrix , one gets that
Substituting the above equation into , we have
where
Thus, according to Theorem IV.4.2 in reference [24], one obtains that
where the quantities on the right are all the eigenvalues of the matrix above. Since these are the reciprocals of and , respectively, the conclusion of the lemma holds.
According to Lemma 2.3, we obtain the following conclusions about the convergence, existence and uniqueness of the solution.
Corollary 2.2 Assume that the incomplete orthogonalization process has been performed for $m$ steps. Then the following conclusions hold:
(a) .
(b) If and , then there is a unique approximate solution to the IGMBACK algorithm.
(c) If , and the approximate solution of the IGMBACK algorithm exists, then the approximate solution of the IGMBACK algorithm is not unique, and , i.e., the IGMBACK algorithm stalls from step to step .
Now let the minimum generalized eigenvalue of have multiplicity greater than 1. Assume , i.e., the minimum generalized eigenvalue has multiplicity , and the corresponding orthonormal eigenvectors are , respectively. Let
Then for any , if , the approximate solution of the IGMBACK algorithm takes the form
In the practical implementation of the algorithm, we often make the resulting approximate solution unique by choosing, among all unit vectors in the span of these eigenvectors, the one that maximizes the relevant component, which makes the quantity in question minimal.
3 Implementation of the IGMBACK algorithm
This section considers how to solve the generalized eigenvalue problem (13), i.e., .
In the implementation of Algorithm 1, the main work lies in solving the generalized eigenvalue problem (13). Based on Theorem 2.2, it suffices to compute the minimum generalized eigenvalue and a corresponding eigenvector. Since the pencil is symmetric, this eigenpair can be computed by inverse iteration. However, when the condition numbers of the matrices involved are relatively large, inverse iteration becomes difficult. Therefore, in the GMBACK algorithm [13], the author solves the corresponding generalized eigenvalue problem via an associated singular value decomposition, which overcomes the difficulties caused by large condition numbers. Can Algorithm 1 also be carried out by solving an associated singular value decomposition problem?
Noting that the matrix contains , we form the Cholesky factorization of : , where is a lower triangular matrix; thus
where
By substituting this into the eigenvalue problem, the original generalized eigenvalue problem is transformed into an ordinary eigenvalue problem for a symmetric matrix:
where . When the relevant quantity is small, solving (16) exactly for the minimum eigenvalue of the matrix and the corresponding eigenvector is not an easy task. Therefore, we transform the original problem (16) into the computation of the minimum singular value of another matrix
and the corresponding right singular vector, which solves the original problem satisfactorily. In equation (17), .
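The following sketch illustrates the two reductions just described: a Cholesky factorization turns the symmetric-definite pencil into a standard symmetric eigenproblem, and when the symmetric matrix has Gram form its smallest eigenpair can be read off from the smallest singular value of a factor, avoiding the squaring of the condition number. The function and variable names are placeholders; this is an illustration of the idea behind (16)-(17), not the paper's implementation.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular, svd

def min_eig_via_svd(M_factor, B):
    """Smallest eigenpair of the pencil (M_factor^T M_factor) z = mu B z with
    B = L L^T symmetric positive definite.  Writing C = M_factor L^{-T}, the
    smallest eigenvalue of C^T C equals the square of the smallest singular
    value of C, obtained without forming C^T C explicitly."""
    L = cholesky(B, lower=True)
    C = solve_triangular(L, M_factor.T, lower=True).T    # C = M_factor @ inv(L).T
    U, s, Vt = svd(C, full_matrices=False)
    z_tilde = Vt[-1]                                      # right singular vector of s_min
    z = solve_triangular(L, z_tilde, lower=True, trans='T')  # undo z_tilde = L^T z
    return s[-1] ** 2, z
```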
This yields the following practical implementation of the IGMBACK algorithm.
Algorithm 2 Restarted IGMBACK algorithm, IGMBACK($m, k$):
① Choose the initial estimate $x_0$ of (1) and the truncation parameter $k$ ($1 \le k \le m$). Compute $r_0 = b - Ax_0$ and $v_1 = r_0/\beta$, where $\beta = \|r_0\|_2$.
② Incomplete orthogonalization process: perform $m$ steps of the incomplete orthogonalization process to compute the basis vectors $v_{j+1}$ and the entries $h_{ij}$, for $j = 1, 2, \dots, m$:
i. Compute $w = Av_j$.
ii. For $i = \max(1, j-k+1), \dots, j$, compute $h_{ij} = (w, v_i)$ and $w = w - h_{ij}v_i$.
iii. Compute $h_{j+1,j} = \|w\|_2$; if $h_{j+1,j} \ne 0$, compute $v_{j+1} = w/h_{j+1,j}$, otherwise stop.
③ Compute the minimum singular value of the matrix in (17) and the corresponding right singular vector. From it, compute the required vector, normalize it, and form the approximate solution $x_m$.
④ Restart: if the termination condition is met, stop. Otherwise set $x_0 := x_m$, compute $r_0 = b - Ax_0$ and $v_1 = r_0/\beta$, where $\beta = \|r_0\|_2$, and return to step ②.
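The restart and termination logic of Algorithm 2 can be summarized by the following skeleton; the inner $m$-step cycle (steps ②-③) is abstracted into a user-supplied routine, and the termination test uses the simple backward perturbation norm $\|r\|/\|x\|$ as a stand-in for the paper's criterion, so this is a structural sketch rather than a faithful implementation.

```python
import numpy as np

def igmback_restarted(A, b, x0, m, k, tol, maxrestarts, inner_solve):
    """Skeleton of the restarted IGMBACK(m, k) loop of Algorithm 2.
    `inner_solve(A, r0, m, k)` is a placeholder for steps (2)-(3): it should run
    the k-truncated orthogonalization and the small SVD problem and return the
    correction z with x_m = x0 + z."""
    x = x0.copy()
    for _ in range(maxrestarts):
        r = b - A @ x
        if np.linalg.norm(r) / max(np.linalg.norm(x), 1e-300) <= tol:
            break
        x = x + inner_solve(A, r, m, k)      # one m-step IGMBACK cycle
    return x
```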
4 Numerical experiments
It is well known that when the coefficient matrix $A$ is ill-conditioned, a small perturbation of $A$ or $b$ causes a large change in the approximate solution of (1). This is further exacerbated [9] when $b$ is parallel to the left singular vector corresponding to the minimum singular value of $A$. In this section, several numerical examples illustrate the effectiveness of the IGMBACK($m, k$) algorithm for such problems and show that the IGMBACK($m, k$) algorithm is often more efficient than the GMBACK($m$) and GMRES($m$) algorithms on many problems. The numerical experiments in this paper were programmed in MATLAB 7.0.
We mainly compare the following three algorithms:
1) IGMBACK($m, k$) ($k$ is the parameter of the incomplete orthogonalization process).
2) GMBACK($m$).
3) GMRES($m$). The different convergence rates of the three algorithms are illustrated by comparing how the backward perturbation norms of their approximate solutions change with the number of restarts. For the GMRES($m$) algorithm, the backward perturbation norm is defined analogously. The stopping criterion is that the backward perturbation norm falls below a prescribed tolerance.
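A possible way to produce such convergence curves is to record the backward perturbation norm of the current iterate after every restart; the sketch below uses $\|b - Ax\|/\|x\|$ as a plausible stand-in for the plotted quantity, since the exact normalization used in the paper is not reproduced here.

```python
import numpy as np

def backward_perturbation_norm(A, b, x):
    """eta(x) = ||b - A x|| / ||x||: the minimum ||dA||_F with (A + dA) x = b.
    A stand-in for the quantity tracked in the figures."""
    return np.linalg.norm(b - A @ x) / np.linalg.norm(x)

def run_and_record(step, A, b, x0, nrestarts):
    """Record the backward perturbation norm after every restart;
    `step(A, b, x)` is a placeholder performing one m-step cycle of any
    of the three methods."""
    x, history = x0.copy(), []
    for _ in range(nrestarts):
        x = step(A, b, x)
        history.append(np.log10(backward_perturbation_norm(A, b, x)))
    return x, history
```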
4.1 Numerical experiments with strongly non-normal matrices
This part of the numerical experiments considers two matrices that are sensitive to small perturbations (called sensitive matrices).
We denote the singular value decomposition of the matrix $A$ by $A = U\Sigma V^{\mathrm T}$, where
the singular values are arranged in non-increasing order; $u_1$ and $v_1$ are then the left and right singular vectors corresponding to the maximum singular value of the matrix $A$, respectively.
Example 4.1 Consider a Toeplitz matrix $A$ of order : . Compute the maximum singular value and the minimum singular value of the matrix $A$. For the first experiment, Test 1, set and . Thus we obtain , that is, $b$ is parallel to (denoted ). For the second experiment, Test 2, set and . Thus we obtain , that is, $b$ is parallel to the left singular vector corresponding to the minimum singular value. In both experiments, let and . The experimental results are shown in Fig. 1(a) and Fig. 1(b), respectively; the vertical coordinate represents the common logarithm of the backward perturbation norm.
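The construction of a right-hand side parallel to a prescribed left singular vector can be done directly from the SVD. The sketch below uses a stand-in Toeplitz matrix (the entries and order of the paper's test matrix are not reproduced) and forms right-hand sides parallel to the left singular vectors of the largest and smallest singular values.

```python
import numpy as np
from scipy.linalg import toeplitz, svd

# Illustrative setup only: a hypothetical bidiagonal-like Toeplitz matrix
# stands in for the paper's test matrix.
n = 200
A = toeplitz(np.r_[2.0, 1.0, np.zeros(n - 2)], np.r_[2.0, np.zeros(n - 1)])

U, s, Vt = svd(A)                    # singular values in decreasing order
u_max, u_min = U[:, 0], U[:, -1]     # left singular vectors for sigma_max, sigma_min

b_test1 = u_max                      # e.g. b parallel to u_max (hypothetical choice)
b_test2 = u_min                      # b parallel to u_min, the setting stressed in the text
x0 = np.zeros(n)
```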
Fig. 1(a) shows that the convergence rates of the three algorithms in the first experiment are essentially the same. From Fig. 1(b), it can be seen that the convergence rates of IGMBACK($m, k$) and GMBACK($m$) in the second experiment are comparable (IGMBACK($m, k$) converges slightly faster), and both converge significantly faster than GMRES($m$). The coefficient matrix is the same in both experiments (a sensitive matrix in both cases), so what makes IGMBACK($m, k$) and GMBACK($m$) significantly better than GMRES($m$) in the second experiment? This is mainly because the second experiment is set up so that $b$ is parallel to the left singular vector corresponding to the minimum singular value.
Example 4.2 Consider another matrix $A$ of order : . The matrix $A$ has sensitive eigenvalues. Compute the maximum singular value and the minimum singular value of the matrix $A$. The first experiment, Test 1, solves the equation , with and . Let and . The result is shown in Fig. 2(a).
From Fig. 2(a), it can be seen that the convergence rates of IGMBACK($m, k$), GMBACK($m$) and GMRES($m$) are completely consistent; that is, the choice of the right-hand side vector and of the initial estimate of the approximate solution cannot by itself cause differences in the convergence rates of the three algorithms, as it did in Example 4.1 (even when $b$ is set parallel to the left singular vector of the smallest singular value). This is mainly because the coefficient matrix is well conditioned. However, by reducing the minimum singular value of $A$, the convergence rates of the three algorithms become different. We use to perturb the matrix $A$: the first singular values of the perturbed coefficient matrix are the same as those of $A$, while the minimum singular value decreases to ; the left and right singular vectors also remain unchanged. The second experiment, Test 2, solves the perturbed system of equations using the three algorithms described above, still setting and . Let and . The result is shown in Fig. 2(b).
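A perturbation with the property described, leaving all singular vectors and all but the smallest singular value unchanged, can be realized as a rank-one update; the following generic sketch (exact values from the experiment are not reproduced) shows one way to construct it.

```python
import numpy as np
from scipy.linalg import svd

def shrink_smallest_singular_value(A, new_sigma_min):
    """Replace the smallest singular value of A by new_sigma_min via the
    rank-one update A' = A - (sigma_min - new_sigma_min) * u_min v_min^T;
    all other singular values and all singular vectors are unchanged."""
    U, s, Vt = svd(A)
    u_min, v_min, sigma_min = U[:, -1], Vt[-1], s[-1]
    return A - (sigma_min - new_sigma_min) * np.outer(u_min, v_min)
```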
As depicted in Fig. 2(b), GMRES($m$) stagnates, while IGMBACK($m, k$) and GMBACK($m$) converge, and IGMBACK($m, k$) converges at a rate comparable to GMBACK($m$). The latter two algorithms are clearly superior to restarted GMRES($m$), which is largely attributable to setting $b$ parallel to the left singular vector of the smallest singular value. In the second experiment, when the minimum singular value of $A$ is reduced, the conditioning of the coefficient matrix deteriorates and the coefficient matrix becomes a sensitive matrix.
From the previous two examples we can conclude that the GMRES($m$) algorithm, which derives approximate solutions from a least-squares problem, is not guaranteed to converge when the coefficient matrix is a sensitive matrix and the right-hand side vector is parallel to the left singular vector corresponding to the smallest singular value of the coefficient matrix. In contrast, the GMBACK($m$) and IGMBACK($m, k$) algorithms can achieve better convergence by minimizing or approximately minimizing the backward perturbation norm. Interestingly, a similar conclusion is reached when comparing the TLS (total least squares) method with the classical least squares method [26].
To further demonstrate that IGMBACK($m, k$) compares favourably with GMBACK($m$), we compare the CPU time taken by the two algorithms in Example 4.1 and Example 4.2. The comparison results are shown in Tab. 1. In the table, the two CPU columns denote the total CPU time used and the average CPU time per restart, respectively, in seconds; denotes the number of restarts; is the ratio of the CPU times used by IGMBACK($m, k$) and GMBACK($m$); denotes the backward perturbation norm. Note that IGMBACK($m, k$) reduces to GMBACK($m$) when $k = m$.
From Tab. 1 we can see that IGMBACK($m, k$) is more efficient than GMBACK($m$).
4.2 Numerical experiments related to partial differential equations
In practical applications, a large number of large-scale nonsymmetric linear systems arise from the discretization of partial differential equations.
Example 4.3 Consider the convection-diffusion equation defined on the region (0,1) × (0,1):
where , . The boundary condition is . The above convection-diffusion equation is discretized by the central difference method with mesh size , giving a nonsymmetric matrix of order . We take $b$ so that is an exact solution of the system of equations . Suppose is randomly generated with entries in the interval , and let , . The experimental result is shown in Fig. 3.
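A generic central-difference discretization of a 2D convection-diffusion operator can be assembled with Kronecker products, as in the sketch below; the convection coefficients, mesh size and problem order are placeholders, since the paper's exact data are not reproduced.

```python
import numpy as np
import scipy.sparse as sp

def convection_diffusion_matrix(N, beta=1.0, gamma=1.0, h=None):
    """Central-difference discretization of -u_xx - u_yy + beta u_x + gamma u_y
    on the unit square with zero Dirichlet boundary data and N interior points
    per direction, giving a nonsymmetric matrix of order N^2."""
    if h is None:
        h = 1.0 / (N + 1)
    # 1D second-difference and centered first-difference operators
    D2 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(N, N)) / h**2
    D1 = sp.diags([-1.0, 1.0], [-1, 1], shape=(N, N)) / (2 * h)
    I = sp.identity(N)
    A = sp.kron(I, D2) + sp.kron(D2, I) + beta * sp.kron(I, D1) + gamma * sp.kron(D1, I)
    return A.tocsr()

# Hypothetical sizes and coefficients for illustration.
A = convection_diffusion_matrix(63)          # order 63^2 = 3969, nonsymmetric
x_exact = np.ones(A.shape[0])
b = A @ x_exact
```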
In this example, the GMRES($m$) algorithm does not converge within 400 steps when $m$ is set to , while Fig. 3 shows that the backward perturbation norms of IGMBACK($m, k$) and GMBACK($m$) drop to around within steps. This indicates that the latter two algorithms are significantly better than the GMRES($m$) algorithm. Tab. 2 shows the CPU time used by each of the three algorithms when $m$ is 15 and 25 in Example 4.3.
Tab. 2 shows that IGMBACK($m, k$) is more efficient than GMBACK($m$) and that both algorithms are significantly better than the GMRES($m$) algorithm.
Example 4.4 Consider the second-order elliptic partial differential equation defined on the region :
with boundary conditions , , , . Discretizing the above second-order elliptic partial differential equation by the central difference method with mesh sizes and , respectively, yields nonsymmetric linear systems whose coefficient matrices have orders and . Let . The experimental results are shown in Tab. 3 and Tab. 4, respectively. The symbols in Tab. 3 and Tab. 4 are used as in Tab. 1, where denotes the ratio of the CPU time used by IGMBACK($m, k$) or GMRES($m$) to that used by GMBACK($m$) for the same value of $m$.
As seen in Tab. 3 and Tab. 4, overall IGMBACK($m, k$) and GMBACK($m$) converge faster than GMRES($m$); IGMBACK($m, k$) is more effective than GMBACK($m$), which in turn is more effective than GMRES($m$). We have additionally carried out a number of further numerical experiments and found that IGMBACK($m, k$) and GMBACK($m$) often converge better than GMRES($m$), and that IGMBACK($m, k$) is generally more efficient than GMBACK($m$) and GMRES($m$).
5 Summary and future work
Krylov subspace methods [15] are increasingly popular today. This paper presents another Krylov subspace method for solving large sparse linear systems: the truncated form of the GMBACK algorithm, namely the IGMBACK algorithm. The IGMBACK algorithm uses an incomplete orthogonalization process to generate a set of basis vectors of the Krylov subspace, which overcomes the disadvantage of the large computational work and storage required by the Arnoldi process used in the GMBACK algorithm. A detailed theoretical derivation and some theoretical analysis of the new algorithm are given. Numerical experiments show that IGMBACK is generally more effective than GMBACK and GMRES.
It should be noted that, thanks to the efforts of many authors, the GMRES algorithm has been greatly improved in recent years and has become a major iterative method for solving large nonsymmetric linear systems. Reference [18] surveys some of the major variants [11, 17, 19, 23] and theoretical developments [2, 16] of the GMRES algorithm in recent years. At present, research on the GMRES algorithm continues to deepen internationally, and the GMRES algorithm itself has become a relatively independent research field; it is also widely used in many areas of science and engineering. Against this background, the advantages of IGMBACK and GMBACK over GMRES on many problems illustrate the value of IGMBACK and GMBACK.
The convergence theory of the GMBACK algorithm has not yet been established. In the truncated version of the GMBACK algorithm, the price paid is the loss of some important properties of the original method, such as orthogonality or -orthogonality and the exact minimization of the backward perturbation norm, which are the main tools for analysing convergence; this makes the convergence analysis of the IGMBACK algorithm even more complicated. Therefore, one of our major future research directions is to establish the convergence theory of these two algorithms and to compare them with the GMRES algorithm theoretically, so as to determine, in theory, for what kinds of systems the GMBACK family of algorithms is better suited than restarted GMRES. In addition, in many cases and applications, direct use of an iterative method does not converge or converges very slowly; the usual remedy is to combine iterative methods with preconditioning techniques. Therefore, another major future research direction is how to combine the IGMBACK and GMBACK algorithms with preconditioning techniques, so as to obtain new algorithms with better convergence and higher accuracy.
References
[1] Arioli M, Duff I, Ruiz D. Stopping criteria for iterative solvers. SIAM J Matrix Anal Appl, 1992, 13(1): 138-144
[2] Baker A H, Jessup E R, Kolev T V. A simple strategy for varying the restart parameter in GMRES(m). J Comput Appl Math, 2009, 230(2): 751-761
[3] Ben-Israel A, Greville T N E. Generalized Inverses: Theory and Applications. New York: Wiley Interscience, 1974
[4] Brown P N. A theoretical comparison of the Arnoldi and GMRES algorithms. SIAM J Sci Stat Comput, 1991, 12(1): 58-78
[5] Brown P N, Hindmarsh A C. Reduced storage matrix methods in stiff ODE systems. Appl Math Comput, 1989, 31: 40-91
[6] Brown P N, Saad Y. Hybrid Krylov methods for nonlinear systems of equations. SIAM J Sci Stat Comput, 1990, 11(3): 450-481
[7] Cao Z H. On a deflation method for the symmetric generalized eigenvalue problem. Linear Algebra Appl, 1987, 92: 187-196
[8] Cao Z H. Total generalized minimum backward error algorithm for solving nonsymmetric linear systems. J Comput Math, 1998, 16(6): 539-550
[9] Golub G H, Van Loan C F. Matrix Computations, 2nd ed. Baltimore, MD: Johns Hopkins Univ Press, 1990
[10] Higham D J, Higham N J. Backward error and condition of structured linear systems. SIAM J Matrix Anal Appl, 1992, 13(1): 162-175
[11] Huhtanen M, Perämäki A. Orthogonal polynomials of the R-linear generalized minimal residual method. J Approx Theory, 2013, 167(3): 220-239
[12] Jia Z X. On IOM(q): the incomplete orthogonalization method for large unsymmetric linear systems. Numer Linear Algebra Appl, 1996, 3(6): 491-512
[13] Kasenally E M. GMBACK: a generalized minimum backward error algorithm for nonsymmetric linear systems. SIAM J Sci Comput, 1995, 16(3): 698-719
[14] Kasenally E M, Simoncini V. Analysis of a minimum perturbation algorithm for nonsymmetric linear systems. SIAM J Numer Anal, 1997, 34(1): 48-66
[15] Li X A, Chen Y H, Zhang Y, Wang X P. Development of the Krylov subspace method for solving large sparse linear systems. Science & Technology Review, 2013, 11: 68-73
[16] Liesen J, Tichý P. The field of values bound on ideal GMRES. 2012, arXiv: 1211.5969v1
[17] Liu Y Q, Yin K X, Wu E H. Fast GMRES-GPU algorithm for solving large scale sparse linear systems. Journal of Computer-Aided Design & Computer Graphics, 2011, 23(4): 553-560
[18] Ma X F. An overview of recent developments and applications of the GMRES method. Pure Math, 2013, 3: 181-187
[19] Najafi H S, Zareamoghaddam H. A new computational GMRES method. Appl Math Comput, 2008, 199(2): 527-534
[20] Saad Y. Krylov subspace methods for solving large unsymmetric linear systems. Math Comp, 1981, 37(155): 105-126
[21] Saad Y. Practical use of some Krylov subspace methods for solving indefinite and nonsymmetric linear systems. SIAM J Sci Stat Comput, 1984, 5(1): 203-228
[22] Saad Y, Schultz M H. GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J Sci Stat Comput, 1986, 7(3): 856-869
[23] De Sterck H. Steepest descent preconditioning for nonlinear GMRES optimization. Numer Linear Algebra Appl, 2013, 20(3): 453-471
[24] Stewart G W, Sun J-G. Matrix Perturbation Theory. New York: Academic Press, 1990
[25] Sun L, Wang X H, Guan Y. IMinpert: an incomplete minimum perturbation algorithm for large unsymmetric linear systems. Numer Math J Chinese Univ, 2007, 16(4): 300-312
[26] Van Huffel S, Vandewalle J. The Total Least Squares Problem: Computational Aspects and Analysis. Frontiers in Applied Mathematics, Vol 9. Philadelphia, PA: SIAM, 1991
© Higher Education Press 2023