1. School of Mathematics Sciences, Huaibei Normal University, Huaibei 235000, China
2. Department of Mathematics, Shanghai University, Shanghai 200444, China
3. Institute of Statistics and Applied Mathematics, Anhui University of Finance & Economics, Bengbu 233030, China
clmyf2@163.com
Published 2023-04-15
Revised 2023-10-19
Abstract
We are concerned in this paper with numerical methods for nonlinear equations and their semilocal convergence. The construction techniques of iterative methods are derived by using linear approximation, integral interpolation, Adomian series decomposition, Taylor expansion, multi-step iteration, etc. The convergence conditions and proof methods, including majorizing sequences and recurrence relations, used in the semilocal convergence analysis of iterative methods for nonlinear equations are discussed in the theoretical analysis. The majorizing functions used in majorizing sequences are also discussed in this paper.
With the progress of society and the development of science and technology, nonlinear problems have become very important in modern scientific research. For most nonlinear differential or integral equations, although we can establish the existence and stability of their solutions by theoretical methods, it is difficult to obtain their analytic solutions; after finite-dimensional discretization, these nonlinear differential or integral equations are transformed into nonlinear (operator) equations. Therefore, the study of numerical iterative methods for nonlinear equations is of great theoretical significance and practical value.
However, except for some very special nonlinear equations, it is almost impossible to solve nonlinear equations by direct methods; one can only use indirect methods to find approximations to the true solution. Therefore, the study of numerical iterative methods for nonlinear equations has become an urgent problem.
As early as the 17th century, Newton, Halley, Cauchy, Euler and others were interested in efficient numerical iterative methods for nonlinear equations, and obtained a series of classical numerical methods named after them, which became the basis of contemporary research on nonlinear numerical problems [13, 19, 46, 81].
Around the 1960s, researchers in different countries continued to do a lot of research work on numerical iterative methods for nonlinear equations [36, 56, 58, 66, 74, 86−89], which has led to the rapid development of numerical iterative methods for nonlinear equations. With the development of large, fast and high-precision electronic computers, the numerical iterative method for nonlinear equations has become a hot problem and many fast and efficient numerical iterative methods have been obtained.
In this paper, the current development of numerical iterative methods for nonlinear equations is briefly described. By summarizing, the construction techniques of current iterative methods are given, and the convergence conditions and proof methods of semi-local convergence of iterative methods are analyzed. The paper is organized as follows: Section 2 introduces the classical Newton iterative method and its semi-local convergence theorem; Section 3 summarizes the current construction techniques of numerical iterative methods and points out the advantages and disadvantages of each construction technique; Section 4 analyzes and summarizes various convergence conditions and proof methods of semi-local convergence of iterative methods; Section 5 summarizes the paper, points out the problems in the iterative method, and gives some opinions on the problems to be solved and the research directions.
It should be noted that: (I) In this paper, we only consider iterative solutions of simple roots of nonlinear equations; we do not study iterative methods for multiple roots. (II) Most iterative methods for systems of nonlinear equations are constructed from iterative methods for single nonlinear equations. In the study of iterative methods, single equations are often used as models, because the conclusions obtained in the single-equation case can often be generalized to systems of equations, or at least provide inspiration for them [36]. Thus, the iterative method for nonlinear operator equations in Banach spaces discussed in Section 4 of this paper is essentially the case of the iterative solution of a nonlinear equation.
2 Newton’s iterative method and its semi-local convergence
Newton's iterative method, also known as Newton-Raphson's iterative method, is one of the most basic and important iterative methods for solving nonlinear equations because of its simple structure, small computational effort and fast convergence rate. Although new iterative methods have emerged in recent decades, almost all iterative methods have been developed based on Newton's method. Therefore, to discuss the numerical solution of nonlinear equations, we must first review Newton's method.
Consider the nonlinear equation
f(x) = 0, (1)
where f: D ⊂ R → R is a real function, D is an open interval, and x* ∈ D is the exact solution of the nonlinear Eq. (1).
There are three different perspectives on the theoretical derivation and geometric interpretation of Newton's iterative method in one-dimensional real space.
(a) Linearization method. It is well known that Newton's method, also known as the tangent method, uses the local linearization idea of "replacing the curve by a straight line". The nonlinear equation (1) is reduced to a linear problem by replacing the curve y = f(x) with its tangent at the point (x_n, f(x_n)):
f(x_n) + f′(x_n)(x − x_n) = 0. (2)
Solving (2) for x, we obtain Newton's method
x_{n+1} = x_n − f(x_n)/f′(x_n), n = 0, 1, 2, …, (3)
whose convergence order is 2.
(b) Taylor expansion. Expanding f(x) in a Taylor series at the point x_n and omitting the terms above first order, we have
f(x) ≈ f(x_n) + f′(x_n)(x − x_n),
which yields the linear Eq. (2) and hence the Newton iterative method (3). The geometric meaning of this method is similar to that of the linearization method in (a).
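As a concrete illustration of iteration (3), the following sketch applies Newton's method to the model equation x² − 2 = 0; the test function, tolerance and iteration cap are illustrative choices, not part of the original derivation.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method (3): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("derivative vanished at an iterate")
        x = x - fx / dfx
    return x

# Solve x^2 - 2 = 0 starting from x0 = 1.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

The quadratic convergence is visible in practice: the number of correct digits roughly doubles at every step.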
(c) Integral interpolation [29]. By the Newton−Leibniz formula, integrating f′ over the closed interval with endpoints x_n and x gives the identity
f(x) = f(x_n) + ∫_{x_n}^{x} f′(t) dt.
By the mean value theorem for integrals, ∫_{x_n}^{x} f′(t) dt = f′(ξ)(x − x_n) for some ξ between x_n and x; taking ξ ≈ x_n again yields the linear Eq. (2), which leads to Newton's iterative method (3). In a geometrical sense, this can be understood as approximating the curve through an integral representation rather than directly by a tangent line.
Now consider the nonlinear operator equation in a Banach space:
F(x) = 0, (4)
where F: Ω ⊂ X → Y is a nonlinear operator, X, Y are Banach spaces, and Ω is an open convex subset of X. Extending Newton's iterative method (3) from the one-dimensional real space to Banach spaces, we obtain the following Newton's iterative method:
x_{n+1} = x_n − [F′(x_n)]^{−1} F(x_n), n = 0, 1, 2, ….
The convergence of Newton's method has been studied for a long time. In 1948, Kantorovich gave the most famous semi-local convergence theorem for Newton's method in Banach spaces—the Newton−Kantorovich theorem [58, 74]. In a standard formulation, it assumes that the twice continuously Fréchet differentiable operator F satisfies the K-condition: there exist constants β, η, K such that
‖[F′(x_0)]^{−1}‖ ≤ β, ‖[F′(x_0)]^{−1} F(x_0)‖ ≤ η, ‖F″(x)‖ ≤ K on Ω, with h = Kβη ≤ 1/2.
Under this condition we have three conclusions:
(I) The Newton iterative sequence {x_n} converges to the solution x* of the equation in a neighborhood of x_0;
(II) The solution x* of the nonlinear operator equation (4) exists and is unique in some neighborhood of x_0;
(III) The error estimate for the Newton iterative sequence is given by the recurrence relation
‖x* − x_n‖ ≤ (2h)^{2^n − 1} η / 2^{n−1}, n = 0, 1, 2, …,
where h is the constant associated with the K-condition and the initial value x_0. These conclusions cannot be given by other types of convergence theorems. This theorem has become a theoretical starting point for the study of convergence of iterative methods for nonlinear equations.
3 Numerical iterative methods for nonlinear equations in one-dimensional real space
Although Newton's iterative method is very effective, its drawbacks are obvious: for example, the derivative in the denominator must not be zero or tend to zero, and the convergence order is only 2. Therefore, in order to meet the current needs of high-speed and high-precision computation, many accelerated and improved Newton methods and higher-order numerical iterative methods have been developed, which improve the convergence order, accuracy and efficiency index. Through analysis, these iterative techniques can be broadly classified into the following categories.
3.1 Linear approximation
In derivation (a) of Newton's method, the tangent line at the point (x_n, f(x_n)) is used instead of the curve to obtain Newton's iterative formula, so the derivative of f at each iteration point must be calculated. This causes two difficulties: (I) when the derivative of f at x_n is zero or tends to zero, Newton's method fails and the iterative process cannot continue; (II) when the function is complicated, the calculation of the derivative is costly, which increases the computational effort of the iterative method and reduces the efficiency index. In particular, for nonlinear operator equations, inverting the Jacobi matrix is even more computationally intensive. Therefore, in order to avoid calculating derivatives, derivative-free numerical iterative methods have been studied. According to the geometric meaning of Newton's method, the iteration is obtained by approximating the curve by the tangent line at x_n. If the curve is instead approximated by the secant line through the two points (x_{n−1}, f(x_{n−1})) and (x_n, f(x_n)), then the well-known secant iteration method is obtained:
x_{n+1} = x_n − f(x_n)(x_n − x_{n−1}) / (f(x_n) − f(x_{n−1})), n = 1, 2, ….
The convergence order of the secant method is (1 + √5)/2 ≈ 1.618, so it is a superlinearly convergent iterative method.
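A minimal sketch of the secant iteration just described; the test equation and starting pair are illustrative choices.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's iteration with f'(x_n) replaced
    by the divided difference over the last two iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            return x1
        if f1 == f0:
            # Divided difference degenerates; stop with current iterate.
            return x1
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

# Solve x^2 - 2 = 0 from the starting pair (1, 2).
root = secant(lambda x: x * x - 2, 1.0, 2.0)
```

No derivative is evaluated, at the cost of dropping from order 2 to order ≈ 1.618.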
Leaving aside the geometric meaning, from the viewpoint of interpolation we can also replace the derivative f′(x_n) in the Newton iteration by the divided difference
f′(x_n) ≈ (f(x_n) − f(x_{n−1})) / (x_n − x_{n−1}),
which again gives the secant iteration. From this idea, many scholars have obtained a series of improved numerical iterations [78, 82, 85, 106]. Extending this idea to higher order derivatives, for example, the second-order derivative can be approximated by the divided difference of the first-order derivative:
f″(x_n) ≈ (f′(x_n) − f′(x_{n−1})) / (x_n − x_{n−1}).
A series of numerical iterations [34, 62, 72] is also obtained.
Some numerical iterations can also be obtained using other interpolation methods, such as parabolic interpolation or Newton interpolation. In addition, in order to simplify the calculation, the line passing through the point (x_n, f(x_n)) and parallel to the tangent of the curve at (x_0, f(x_0)) can be intersected with the x-axis to obtain the next iterate x_{n+1}, which gives the well-known simplified Newton method
x_{n+1} = x_n − f(x_n)/f′(x_0), n = 0, 1, 2, ….
Using this improved technique, a large number of numerical iterations are also obtained.
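A sketch of the simplified Newton method with the derivative frozen at the initial point (equation and tolerances are illustrative); it trades the quadratic convergence of Newton's method for a single derivative evaluation in total.

```python
def simplified_newton(f, df, x0, tol=1e-10, max_iter=200):
    """Simplified Newton method: x_{n+1} = x_n - f(x_n)/f'(x_0)."""
    d0 = df(x0)  # derivative evaluated once, at the initial point, then frozen
    if d0 == 0:
        raise ZeroDivisionError("derivative vanished at the initial point")
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / d0
    return x

# Solve x^2 - 2 = 0 from x0 = 1.5; convergence is only linear.
root = simplified_newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```

The closer x_0 is to the root, the closer the linear contraction factor |1 − f′(x*)/f′(x_0)| is to zero.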
Although some improved iterative methods obtained by this technique avoid the computation of (higher-order) derivatives, the convergence order is not effectively increased. In general, the iterative methods obtained by this technique have only linear or superlinear convergence orders, making them low-order numerical iterative methods.
3.2 Integral interpolation
In the theoretical derivation (c) of Newton's method, the iteration is obtained from the integral mean value formula; by analogy, a new series of accelerated iterative algorithms can be obtained by using other quadrature formulas from numerical integration. For example, Weerakoon and Fernando [101] obtained the 3rd order arithmetic mean Newton method using the trapezoidal formula:
x_{n+1} = x_n − 2 f(x_n) / (f′(x_n) + f′(y_n)), where y_n = x_n − f(x_n)/f′(x_n).
Frontini [38], Homeier [51−52], Özban [76] and others obtained the 3rd order midpoint mean Newton method from the midpoint rectangle formula:
x_{n+1} = x_n − f(x_n) / f′((x_n + y_n)/2), where y_n = x_n − f(x_n)/f′(x_n).
Hasanov et al. [48] used Simpson's formula to obtain a 3rd order iterative method, and further methods [39, 54, 87] were obtained by improving or modifying these methods. The paper [12] summarizes the iterative methods obtained by such improvement techniques.
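A sketch of the trapezoidal (arithmetic mean) Newton variant from this family; the test equation is an illustrative choice.

```python
def arithmetic_mean_newton(f, df, x0, tol=1e-12, max_iter=50):
    """Third-order variant: the Newton slope is replaced by the
    average of the slopes at x_n and at the Newton predictor y_n."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = df(x)
        y = x - fx / dfx                 # Newton predictor
        x = x - 2 * fx / (dfx + df(y))   # trapezoidal corrector
    return x

root = arithmetic_mean_newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Each step costs one function and two derivative evaluations, so the gain in order must be weighed against the extra work when comparing efficiency indices.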
However, due to the limitations of numerical integration, this improvement technique cannot be extended much further, and the convergence order of the iterative methods obtained by it is generally 3, so it does not effectively improve the convergence order of the iterative method.
3.3 Adomian series decomposition
In the 1980s, Adomian proposed the Adomian series method, whose main idea is to approximate the exact solution of a nonlinear problem by an infinite series. The nonlinear equation (1) is rewritten as
x = c + N(x),
where N(x) is the nonlinear part of f(x) and c is a constant. Decompose the solution x and N(x) into series, respectively:
x = Σ_{i=0}^{∞} x_i,  N(x) = Σ_{i=0}^{∞} A_i,
where the Adomian polynomial
A_i = (1/i!) [dⁱ/dλⁱ N(Σ_{j=0}^{∞} λ^j x_j)]_{λ=0}, i = 0, 1, 2, …,
and λ is the formal argument. Obviously, the value of the Adomian polynomial depends on the iteration points x_0, x_1, …, x_i.
By decomposing the function corresponding to the nonlinear Eq. (1) in different ways, for example, as a coupled system [28], Lagrange interpolation polynomial [65], etc., to compute , a series of efficient algorithms [16, 27, 55] is obtained. Since the Adomian polynomial is very complicated and increases the computational effort of the iterative method, this accelerated iterative technique is not applicable to some computationally complex nonlinear equations.
3.4 Taylor expansion
In the theoretical derivation (b) of Newton's method, the iteration is obtained by taking the Taylor expansion of f at the point x_n and omitting the terms above first order. Similarly, if we omit the terms above second order in the Taylor expansion, i.e.,
f(x) ≈ f(x_n) + f′(x_n)(x − x_n) + (1/2) f″(x_n)(x − x_n)² = 0,
the solution gives Halley's method [46] and Cauchy's method [40] (known in some literature as Euler's method [70]). By analogy, further accelerated and efficient iterative methods can be obtained by using higher-order Taylor expansions of f at x_n.
Let the Taylor expansion of the function f at the point x_n be
f(x) = Σ_{k=0}^{∞} f^{(k)}(x_n)(x − x_n)^k / k!,
and let T_m(x) denote the m-th Taylor polynomial, i.e., the partial sum up to order m. Define the next iterate x_{n+1} to be a root of the polynomial T_m(x) closest to x_n. Then when m = 1, the second-order Newton iterative method is obtained; when m = 2, the third-order Halley and Cauchy iterative methods and the Cauchy family of iterative methods [42] are obtained; when m = 3, Kou obtained some 4th-order convergent iterative methods of the Cauchy family [61]. Many methods [26, 63, 69] are derived from this idea.
This acceleration technique has a clear geometric meaning: it can be understood as approximating the curve successively by a straight line, by a parabola or hyperbola, and, for higher orders, by a curve of higher degree.
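Halley's method obtained from the second-order Taylor truncation can be written in the standard closed form x_{n+1} = x_n − 2 f f′ / (2 f′² − f f″); the following sketch applies it to an illustrative test equation.

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's method (third order): uses f, f', f'' at each step."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx, d2fx = df(x), d2f(x)
        x = x - 2 * fx * dfx / (2 * dfx * dfx - fx * d2fx)
    return x

# Solve x^2 - 2 = 0: f' = 2x, f'' = 2.
root = halley(lambda x: x * x - 2, lambda x: 2 * x, lambda x: 2.0, 1.0)
```

The price of the cubic convergence is the second derivative, which is exactly the cost this section's later techniques try to avoid.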
Because solving higher-degree polynomial equations is difficult, and because general polynomial equations of degree 5 and higher have no algebraic solution in radicals (i.e., they cannot be solved by finitely many arithmetic operations and root extractions applied to the coefficients), it is difficult to extend this Taylor-based improvement technique further.
3.5 Multi-step iterations
Multi-step iteration is the most used acceleration technique, and most of the improved iterative methods studied today can be considered as multi-step iterative methods. Multi-step iterative methods are also known as predictor-corrector iterative methods, which are essentially combinations of two or more iterative methods with certain techniques to obtain a new iterative method. The principle of the combination is to make full use of the information available in each iterative method (e.g., the value of a function or derivative) to increase the convergence order of the new iterative method as much as possible without increasing the computational cost, thus obtaining an efficient iterative method. This technique has been used since the 1960s, as in the classical two-step Newton iterative method [6, 75, 88] with 3rd order convergence:
y_n = x_n − f(x_n)/f′(x_n), x_{n+1} = y_n − f(y_n)/f′(x_n), n = 0, 1, 2, …. (9)
Compared with Newton's iterative method (3), the two-step Newton method (9) improves the convergence order with only one additional function value computed per step; its efficiency index is 3^{1/3} ≈ 1.442, which is higher than the efficiency index 2^{1/2} ≈ 1.414 of Newton's method.
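A sketch of the two-step (frozen-derivative) Newton method (9); the test equation is an illustrative choice.

```python
def two_step_newton(f, df, x0, tol=1e-12, max_iter=50):
    """Two-step Newton method (9): the corrector step reuses the
    derivative computed for the predictor, so each full step costs
    two function values and one derivative value."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = df(x)          # one derivative evaluation per step
        y = x - fx / dfx     # Newton predictor
        x = y - f(y) / dfx   # corrector with the frozen derivative
    return x

root = two_step_newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Reusing dfx in the corrector is what keeps the cost at three evaluations while raising the order to 3.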
The famous 4th order Jarratt iterative method [88] is obtained by a similar combination; a standard form is
y_n = x_n − (2/3) f(x_n)/f′(x_n),
x_{n+1} = x_n − (1/2) [(3f′(y_n) + f′(x_n)) / (3f′(y_n) − f′(x_n))] f(x_n)/f′(x_n).
By combining acceleration techniques in this way, one obtains many multi-step iterative methods with high order convergence [9, 25, 59, 64, 73], and the convergence order can even be raised above 10 [41, 60, 67, 80]. Interestingly, it has been found that a new iterative method obtained by combining Newton's method as a predictor with some other iterative method usually has a convergence order higher than that of the original iterative method [104].
In the multi-step iterative methods obtained, it is sometimes not necessary to add a predictor step; merely correcting the predicted value at some points of the predictor can also improve the convergence order and efficiency index of the iterative method [97].
Although the convergence order can be increased by adding predictor steps or correcting the predictor, this technique often requires additional function or derivative evaluations, which increases the computational effort and can decrease the efficiency index of the iterative method.
For example, the inverse function interpolation technique [53]: let g = f^{−1} be the inverse function of f. Then by the Newton−Leibniz formula we have
x* = g(0) = g(f(x_n)) + ∫_{f(x_n)}^{0} g′(y) dy = x_n + ∫_{f(x_n)}^{0} g′(y) dy,
where x* is the exact solution and g′(y) = 1/f′(g(y)). Following the second technique mentioned above, a series of numerical iterations can be obtained by applying various quadrature formulas to the integral of the inverse function g. Usually, iterative methods are obtained not by a single technique but by a combination of several, such as linearization together with multi-step iteration [94]; the various techniques should therefore be used judiciously to obtain fast and efficient iterative methods. Considering the application context and computational complexity, the use of higher order derivatives in iterative methods is generally not recommended.
4 Semi-local convergence of the iterative method for nonlinear operator equations in Banach spaces
For the nonlinear operator Eq. (4), the iterative method is still the main numerical method for solving this equation, and theoretically, the iterative method for nonlinear equations in one-dimensional real space can be extended to Banach space.
Therefore, the iterative method we discuss here is actually the extension of the iterative method for equations in one-dimensional real space to the system of equations.
The theoretical analysis of numerical iterative methods is a very important research topic: it not only settles the existence of solutions, but also determines the region in which solutions exist and the computational efficiency. Here we focus only on the semi-local convergence theory of numerical iterative methods in Banach spaces; other types of convergence, such as local and global convergence, will not be discussed. These topics have been discussed in the literature [14, 68, 71, 103–104]. In this paper, we discuss semi-local convergence in detail, based on the latest research results.
4.1 Semi-local convergence in iterative analysis
The theoretical analysis of an iterative method consists of three basic problems [74]: the analysis of feasibility, the analysis of convergence and the analysis of efficiency. The most fundamental is the convergence analysis, which is of three types: global convergence, local convergence and semi-local convergence. Global convergence is the most complete form of convergence theorem, but it is not easy to obtain in general. Although local convergence can verify the convergence of an iterative method theoretically, it usually cannot settle the following three problems: (I) the existence of the solution from the iterative process; (II) a criterion for the convergence of the iterative process from the initial value; (III) how to estimate the error of the iteration; moreover, local convergence depends on the zeros of the equation. Therefore, it is necessary to find semi-local convergence theorems that do not depend on the zeros and that give the existence of the solution, a convergence criterion for the iterative method and an error estimate.
In 1948, Kantorovich gave a semilocal convergence theorem for Newton's iterative method in Banach spaces—the Newton−Kantorovich theorem [58, 74]—which not only proves the existence and uniqueness of the solution to the nonlinear equation in a neighborhood of the initial vector x_0, but also provides the convergence of the Newton iteration to the solution and its error estimate. However, the condition of Kantorovich's theorem (called the K-condition) requires that the second-order Fréchet derivative of F be bounded on the whole domain, which is a strong condition and generally difficult to verify. Therefore, many scholars have used various conditions (e.g., Lipschitz condition, Hölder condition, etc.) to weaken the K-condition, change the radius of convergence, and improve the results for Newton's iterative method [74, 88]. On the basis of this theorem, the theoretical analysis of other iterative methods has been carried out: according to the convergence condition satisfied by the function, the convergence of the corresponding iterative sequence and an error estimate are given, and the existence and uniqueness of the solution in a certain region are obtained.
4.2 Convergence condition
In the process of proving convergence, it is necessary to give the conditions satisfied by the nonlinear operators and their Fréchet derivatives. The Newton−Kantorovich theorem proves the convergence of Newton's method based on the satisfaction of K-conditions. However, since the K-condition is strong and not easy to verify in general, many scholars use various conditions (called weak conditions) to weaken the condition. By extending these weak conditions to the convergence analysis of other iterative methods, the semi-local convergence of these iterative methods can also be obtained. The main weak conditions used so far are as follows.
Fenyö [37] reduces the overall boundedness of the second derivative in the K-condition to the requirement that F′ satisfy a Lipschitz condition in Ω, i.e., there exists a positive real number L such that
‖F′(x) − F′(y)‖ ≤ L ‖x − y‖, x, y ∈ Ω. (13)
Xing-Hua Wang generalized this condition and proposed the Lipschitz condition in the inscribed ball, the radius Lipschitz condition and the center Lipschitz condition. Deuflhard [30] and Ypma [105] further weakened the Lipschitz condition (13) to the affine covariant Lipschitz condition:
‖[F′(x_0)]^{−1} (F′(x) − F′(y))‖ ≤ ω ‖x − y‖, x, y ∈ Ω.
Rokne [79] further weakened the affine covariant Lipschitz condition to a Hölder continuity condition:
‖F′(x) − F′(y)‖ ≤ L ‖x − y‖^p, x, y ∈ Ω, p ∈ (0, 1].
More generally, Appell [1] and Argyros [3] use a given nondecreasing real function ω such that F′ satisfies the condition
‖F′(x) − F′(y)‖ ≤ ω(‖x − y‖), x, y ∈ Ω.
Under these weak conditions, the Kantorovich type theorem of the corresponding iterative method can be obtained by imitating Kantorovich's theorem, which increases the radius of convergence of these iterative methods and extends the convergence range (or convergence sphere).
At the same time, Huang [57] points out that if we make full use of the higher order Fréchet derivatives of the nonlinear operator and adopt more assumptions than the Kantorovich theorem, we can also decide whether certain iterative methods that do not satisfy the K-condition, started from some initial value, converge; this is done by introducing conditions on the second-order Fréchet derivative of F at the initial point.
Argyros [4] also presents convergence conditions involving information on the m-th order Fréchet derivatives of the operator, with constants associated with the derivatives up to order m.
In addition, Smale introduced the α-criterion and γ-criterion from the analyticity of the nonlinear operator at x_0, which characterize the local and semi-local behavior of Newton's method [83−84]. Wang et al. proposed the concept of universal constants and used it to analyze the convergence of some higher-order iterative methods [91, 93, 100]; on this basis they also gave a weak condition [90]. Some scholars have used this class of conditions to establish local and semi-local convergence theorems for some iterative methods [22, 95, 99, 102].
Huang [57] also points out that if an iterative process itself does not involve higher-order derivatives, but the operator is higher-order differentiable in some neighborhood of the initial point x_0, then it is reasonable to determine the convergence of the iterative process by using information about the higher-order derivatives of F at the initial point. He therefore gave a series of conditions under which the second-order derivative of F satisfies Lipschitz continuity, and Gutiérrez [43] weakened this to the condition that the second-order derivative of F satisfies Hölder continuity. On this basis, many scholars have given convergence theorems for iterative methods under the condition that higher order derivatives satisfy Lipschitz or Hölder continuity.
4.3 Method of proving convergence
The most used methods for proving semi-local convergence of iterative methods are the majorizing sequence method and the recursive method, both proposed by Kantorovich. In essence, both are methods of stepwise induction. The essence of stepwise induction is to make the approximation error of each iteration smaller than that of the previous iteration, and to prove the convergence of the iterative process by showing that each step forward preserves the theorem's assumptions.
4.3.1 Majorizing sequence method
The majorizing sequence method was proposed by Kantorovich to prove the convergence of Newton's method, and the theory was refined by Ortega and Rheinboldt [74]. Its main idea is to compare a high-dimensional iterative process with an auxiliary one-dimensional iterative process and derive the convergence of the original process from the convergence of the one-dimensional process. That is, the iterates admit the upper bound
‖x_{n+1} − x_n‖ ≤ t_{n+1} − t_n, n = 0, 1, 2, …,
where the scalar sequence {t_n} converges, so that {x_n} is a Cauchy sequence.
In proving convergence by a majorizing sequence, one must use a majorant function, which Ortega and Rheinboldt call a majorizing function. There are usually two types of majorant functions. One type is an algebraic polynomial: Kantorovich used the quadratic polynomial
q(t) = (K/2) t² − t + η
(written here in affine-normalized form) as the majorant function to prove the convergence of Newton's iterative method.
Of course, algebraic polynomials of higher degree can be used depending on the situation; for example, some third-order iterative methods use a cubic polynomial [47, 57]
as the majorant function. The biggest advantage of using algebraic polynomials as majorant functions is that exact explicit error estimates can be obtained [10, 31].
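To make the majorizing idea concrete, the following sketch runs Newton's method on an affine-normalized quadratic majorant q(t) = (K/2)t² − t + η, with illustrative constants satisfying h = Kη ≤ 1/2. The scalar sequence {t_n} increases to the smallest root t* of q, and the differences t_{n+1} − t_n bound the step sizes of the underlying Banach-space iteration.

```python
# Illustrative constants: K = 1, eta = 0.4, so h = K * eta = 0.4 <= 1/2.
K, eta = 1.0, 0.4
q  = lambda t: 0.5 * K * t * t - t + eta   # quadratic majorant
dq = lambda t: K * t - 1.0                 # its derivative

t, seq = 0.0, [0.0]
for _ in range(30):
    t = t - q(t) / dq(t)                   # Newton step on the scalar majorant
    seq.append(t)

# Smallest root of q: the limit of the majorizing sequence.
t_star = (1 - (1 - 2 * K * eta) ** 0.5) / K
```

Because q is convex and positive on [0, t*), the Newton iterates from t_0 = 0 increase monotonically toward t*, which is exactly the behavior the convergence proof exploits.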
In particular, in proving the convergence of iterative methods under the Hölder condition, the majorant function becomes a real function containing a fractional power term [2]; for methods of higher convergence order, majorant functions of correspondingly higher degree are used [43].
Another type of majorant function is the rational function. It was first proposed by Smale [98] and takes the form
h(t) = β − t + γ t² / (1 − γ t), 0 ≤ t < 1/γ.
This majorant function can be used not only to prove the convergence of various iterative methods under the point estimation criterion, but, more importantly, to compare the efficiency of different algorithms under the same conditions. Xing-Hua Wang [92−93] also proposed a more general expression for the majorant function, which contains the polynomial and rational majorant functions above as special cases.
Using the majorizing sequence method, Chen and Argyros proved the semi-local convergence of the midpoint Newton method [7, 20] and the Halley method [21] with the majorant function (15), and Argyros proved the semi-local convergence of the two-step Newton method [5] with the majorant function (18). Many scholars have likewise proved the convergence of a series of iterative methods [11, 15, 107].
4.3.2 Recursive method
Another way to prove the convergence of an iterative method is the recursive method, originally given by Kantorovich in 1948 when he proved the convergence of Newton's method. This method uses recurrence relations between real sequences to show that, under a given convergence condition, the sequence produced by the iterative method is a Cauchy sequence converging to an exact solution of the nonlinear Eq. (4) in a neighborhood of the given initial value x_0, and that the exact solution is unique in some neighborhood of x_0. On this basis, the convergence order of the iterative method, the convergence domain (called the convergence ball in some literature) and, most importantly, the a priori error estimate are also verified by recurrence relations, whose constants are associated with the given conditions and with the order of convergence of the iterative method.
Following this approach, Candela and Hernández et al. proved the semi-local convergence of iterative methods such as Halley [17], Chebyshev [18, 49−50], super-Halley [44−45] and Jarratt [32−33]. Ren and Argyros also proved the convergence of an iterative method for a class of operator equations [77] under the weak condition (14). On this basis, several authors have used this style of proof to give semi-local convergence theorems for some higher order iterative methods [8, 23−24, 35, 96, 108].
In the proof by either the recursive method or the majorizing sequence method, the Taylor expansion of the nonlinear operator at x_n must be investigated, which generally requires the mean value theorem and the Taylor series expansion.
5 Summary and outlook
This paper focuses on the numerical solution of nonlinear equations and semi-local theoretical analysis. The improvement techniques of the iterative method for nonlinear equations are summarized, and the drawbacks of each improvement technique are pointed out; the convergence conditions and proof methods of the iterative method for nonlinear equations used in the theoretical analysis are discussed. It is important to note that both numerical iterative methods and theoretical analysis of iterative methods must be constructed from practical problems, based on the characteristics of the equations and the convergence conditions satisfied, to obtain practical theoretical results and error estimates. In recent years, the numerical solution of nonlinear equations has been greatly developed and many research results have been obtained, but there are still many meaningful problems to be studied.
(I) Many application areas have high demand for fine numerical simulations, and the numerical solution of large-scale nonlinear equations is the key to the efficiency of practical numerical simulations. Therefore, we should focus on the research of numerical solution of large-scale nonlinear equations, and construct fast and efficient numerical methods based on the characteristics of functions and their derivatives (Jacobi matrices) obtained from real problems.
(II) Usually, in the process of numerical iteration, the iterative process converges only when the initial value chosen is close to the real solution. Therefore, we should pay attention to the influence of the initial value selection on the convergence, efficiency index, and error estimation, and study the iterative methods with large convergence domains and their semi-local convergence theoretical analysis.
(III) It is also worth studying which types of nonlinear equations, with which characteristics, the constructed numerical iterative methods are suited to solve, and which nonlinear problems deserve further attention.
© Higher Education Press 2023