School of Mathematics and Statistics, Chongqing Three Gorges University, Chongqing 404000, China
liujinkui2006@126.com
Abstract
In this paper, a three-term derivative-free projection method is proposed for solving nonlinear monotone equations. Under appropriate conditions, the global convergence and the R-linear convergence rate of the proposed method are established. Since the method requires no derivative information, it is able to solve large-scale nonlinear monotone equations. Numerical comparisons show that the proposed method is effective.
1 Introduction

In this paper, we consider solving the nonlinear monotone system of equations

$$F(x) = 0, \qquad (1.1)$$

where $F : \mathbb{R}^n \to \mathbb{R}^n$ is a continuous nonlinear mapping and $\|\cdot\|$ denotes the Euclidean norm throughout. The monotonicity of $F$ means

$$\big(F(x) - F(y)\big)^{\mathsf T}(x - y) \ge 0, \qquad \forall\, x, y \in \mathbb{R}^n.$$
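As a quick numerical illustration (supplied by the editor, not part of the original development), the monotonicity inequality can be spot-checked in MATLAB for a sample mapping; the mapping $F(x) = 2x + \sin x$ (applied componentwise) below is a hypothetical choice that is monotone because $\sin$ is 1-Lipschitz:

    % Hypothetical monotone mapping, for illustration only:
    % F(x) = 2x + sin(x) applied componentwise.
    F = @(x) 2*x + sin(x);
    n = 50;                          % arbitrary dimension
    x = randn(n, 1);                 % two random test points
    y = randn(n, 1);
    % Monotonicity requires (F(x) - F(y))' * (x - y) >= 0.
    val = (F(x) - F(y))' * (x - y);
    fprintf('inner product = %.4e (should be >= 0)\n', val);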
Nonlinear monotone equations arise in many scientific and practical problems, such as the first-order necessary conditions of unconstrained convex optimization problems and the subproblems of generalized proximal point algorithms [9], certain monotone variational inequality problems [20], chemical equilibrium problems [14], economic equilibrium problems [6], and power equations [4]. Accordingly, many algorithms have been proposed for solving nonlinear monotone systems of equations, among which Newton's method and quasi-Newton methods [4, 5, 11, 17, 21-23] are well known for their excellent convergence speed. However, these algorithms are not suitable for large-scale non-smooth systems of equations, because at each iteration they must solve a linear system involving the Jacobian matrix of the objective mapping, or an approximation of it, to determine the next search direction.
In recent years, in order to solve large-scale systems of equations efficiently, many researchers have extended nonlinear conjugate gradient methods by means of the hyperplane projection technique [17]. For example, Cheng [3] combined the well-known PRP method [15, 16] with the hyperplane projection technique to obtain a derivative-free PRP-type projection method, and proved its global convergence for nonlinear monotone systems of equations without requiring the mapping to be differentiable. Li et al. [12] and Ahookhosh et al. [1] further studied three derivative-free PRP-type projection algorithms. Wu et al. [18] established a derivative-free three-term HS projection algorithm based on the well-known HS conjugate gradient method [8] and the hyperplane projection technique; the algorithm inherits the small storage requirement of the HS conjugate gradient method and needs no derivatives, so it can solve large-scale non-smooth nonlinear monotone systems of equations. Xiao and Zhu [19] and Liu and Li [13] proposed derivative-free CG_DESCENT-type projection methods based on the CG_DESCENT method [7] and the hyperplane projection technique, achieved good numerical results, and proved global convergence under appropriate conditions.
Recently, Andrei [2] proposed a three-term nonlinear conjugate gradient method (denoted the TTCG method) based on the well-known CG_DESCENT method. The method is a modified memoryless BFGS algorithm whose search direction satisfies the descent and D-L conjugacy conditions. Extensive numerical experiments have shown that the TTCG method is more efficient than the well-known CG_DESCENT method. To solve large-scale systems of equations more effectively, this paper extends the TTCG method by means of the hyperplane projection technique and establishes a three-term derivative-free projection algorithm. Under appropriate assumptions, we prove the global convergence and the R-linear convergence rate of the algorithm.
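For orientation, the general shape of a three-term direction of this kind is sketched below; the notation ($g_k$ for the gradient, $s_k = x_{k+1} - x_k$, $y_k = g_{k+1} - g_k$) is standard, but the display is supplied by the editor as a sketch, and the precise coefficient formulas are those given in [2]:

\[
d_{k+1} = -g_{k+1} + \beta_k s_k + \theta_k y_k,
\qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k,
\]

where the scalars $\beta_k$ and $\theta_k$ are chosen, as in [2], so that $d_{k+1}$ is a descent direction and satisfies a D-L conjugacy condition.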
2 Derivative-free projection method
The iterative method is one of the most widely used approaches for solving nonlinear systems of equations. It generates a sequence of iterates $\{x_k\}$, usually based on the following unconstrained optimization problem:

$$\min_{x \in \mathbb{R}^n} f(x) = \frac{1}{2}\|F(x)\|^2. \qquad (2.1)$$

Various iterative methods based on solving Problem (2.1) have been widely studied and applied in practice. However, a stationary point of Problem (2.1) is not necessarily a solution of Problem (1.1): when $F$ is differentiable, $\nabla f(x) = J(x)^{\mathsf T} F(x)$ can vanish at a point where the Jacobian $J(x)$ is singular even though $F(x) \ne 0$. Moreover, in many practical problems the mapping $F$ is non-smooth.
To solve Problem (1.1) directly, Solodov and Svaiter [17] proposed the hyperplane projection method. Let $x_k$ and $d_k$ be the current iterate and the search direction, respectively. A line search is used to obtain a step $\alpha_k$ such that the point $z_k = x_k + \alpha_k d_k$ satisfies

$$F(z_k)^{\mathsf T}(x_k - z_k) > 0.$$

If $F(x^*) = 0$ for some point $x^*$, then it follows from the monotonicity of $F$ that

$$F(z_k)^{\mathsf T}(x^* - z_k) = \big(F(z_k) - F(x^*)\big)^{\mathsf T}(x^* - z_k) \le 0.$$

Thus, the hyperplane

$$H_k = \{x \in \mathbb{R}^n : F(z_k)^{\mathsf T}(x - z_k) = 0\}$$

strictly separates the current iterate $x_k$ from the solutions of the nonlinear monotone system of equations. Projecting $x_k$ onto this hyperplane yields the next iterate:

$$x_{k+1} = x_k - \frac{F(z_k)^{\mathsf T}(x_k - z_k)}{\|F(z_k)\|^2}\,F(z_k). \qquad (2.2)$$
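A minimal MATLAB sketch of the projection update (2.2) follows; the function name and interface are the editor's, not the paper's:

    function x_next = hyperplane_project(x, z, Fz)
    % One hyperplane projection step of the form (2.2): project the
    % current iterate x onto the hyperplane {v : Fz'*(v - z) = 0}.
        t = Fz' * (x - z);                    % numerator of (2.2)
        x_next = x - (t / (Fz' * Fz)) * Fz;   % projected iterate
    end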
The following is a three-term derivative-free projection algorithm for solving nonlinear monotone systems of equations, based on the TTCG method and the hyperplane projection method.
Algorithm 2.1
Step 0: Given an initial point $x_0 \in \mathbb{R}^n$ and constants $\rho \in (0,1)$, $\sigma > 0$, $\varepsilon > 0$. If $\|F(x_0)\| \le \varepsilon$, stop.
Step 1: Set $k := 0$ and put $d_0 = -F(x_0)$.
Step 2: Calculate the step $\alpha_k = \rho^{m_k}$, where $m_k$ is the smallest non-negative integer $m$ satisfying the line search condition

$$-F(x_k + \rho^m d_k)^{\mathsf T} d_k \ge \sigma \rho^m \|d_k\|^2. \qquad (2.3)$$

Step 3: Let $z_k = x_k + \alpha_k d_k$. If $\|F(z_k)\| \le \varepsilon$, stop. Otherwise, use (2.2) to compute $x_{k+1}$.
Step 4: If $\|F(x_{k+1})\| \le \varepsilon$, stop. Otherwise, calculate the search direction

$$d_{k+1} = -F(x_{k+1}) + \beta_k d_k + \theta_k y_k, \qquad (2.4)$$

where $s_k = z_k - x_k = \alpha_k d_k$, $y_k$ is the modified difference of mapping values defined in (2.5), which guarantees $y_k^{\mathsf T} d_k > 0$ by (2.6), and $\beta_k$, $\theta_k$ are the TTCG parameters of [2] with the gradient replaced by $F$.
Step 5: Set $k := k + 1$ and go to Step 2.
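The following MATLAB sketch assembles the whole iteration. Because the precise coefficient formulas (2.5)-(2.6) are not reproduced here, the coefficients beta and theta below are an editorial assumption modeled on Andrei's TTCG formulas with the gradient replaced by $F$, and the safeguard term t*s in y is likewise assumed; this is a sketch of the scheme, not the paper's exact method:

    function [x, iters] = ttcg_dfree(F, x)
    % Sketch of a three-term derivative-free projection iteration in the
    % spirit of Algorithm 2.1 (coefficient formulas assumed, see above).
    rho = 0.5; sigma = 1e-4; epsilon = 1e-6; t = 1; maxit = 10000;
    Fx = F(x); d = -Fx; iters = 0;
    while norm(Fx) > epsilon && iters < maxit
        alpha = 1; m = 0;                    % Step 2: line search (2.3)
        while -F(x + alpha*d)' * d < sigma * alpha * (d' * d) && m < 60
            alpha = rho * alpha; m = m + 1;
        end
        z = x + alpha * d; Fz = F(z);
        if norm(Fz) <= epsilon, x = z; return; end
        % Step 3: hyperplane projection step (2.2)
        xnew = x - ((Fz' * (x - z)) / (Fz' * Fz)) * Fz;
        Fnew = F(xnew);
        % Step 4: three-term direction (2.4), TTCG-style coefficients
        s = z - x;
        y = Fz - Fx + t * s;                 % assumed safeguard so y'*d > 0
        yd = y' * d;
        beta  = (Fnew' * y) / yd - 2 * (y' * y) * (Fnew' * d) / yd^2;
        theta = (Fnew' * d) / yd;
        d = -Fnew + beta * d + theta * y;
        x = xnew; Fx = Fnew; iters = iters + 1;
    end
    end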
The following theorem shows that Algorithm 2.1 is a descent-type iterative algorithm when $F$ is the gradient of a real-valued function $f$.
Theorem 2.1 Let the sequences $\{x_k\}$ and $\{d_k\}$ be generated by Algorithm 2.1. Then

$$F(x_k)^{\mathsf T} d_k = -\|F(x_k)\|^2, \qquad \forall\, k \ge 0. \qquad (2.7)$$

Proof According to the definitions of the parameters $\beta_k$ and $\theta_k$, multiplying both sides of (2.4) by $F(x_{k+1})^{\mathsf T}$ and using (2.6), the contributions of the last two terms of (2.4) cancel, which yields

$$F(x_{k+1})^{\mathsf T} d_{k+1} = -\|F(x_{k+1})\|^2.$$

Since $d_0 = -F(x_0)$, we also have $F(x_0)^{\mathsf T} d_0 = -\|F(x_0)\|^2$. The conclusion is proved. □
Note 2.1 According to (2.5) and (2.6), we know that $y_k^{\mathsf T} d_k > 0$ holds before Algorithm 2.1 stops. Consequently, the parameters $\beta_k$ and $\theta_k$ are always well defined.
3 Global convergence
To prove the global convergence of Algorithm 2.1, we assume that the mapping $F$ satisfies the following condition:

(H) The mapping $F$ is Lipschitz continuous, i.e., there exists a constant $L > 0$ such that

$$\|F(x) - F(y)\| \le L\|x - y\|, \qquad \forall\, x, y \in \mathbb{R}^n. \qquad (3.1)$$
Lemma 3.1 Let the mapping $F$ satisfy (H), and let the sequences $\{x_k\}$ and $\{d_k\}$ be generated by Algorithm 2.1. Then the step factor $\alpha_k$ satisfies

$$\alpha_k \ge \min\left\{1,\ \frac{\rho\,\|F(x_k)\|^2}{(L+\sigma)\|d_k\|^2}\right\}. \qquad (3.2)$$

Proof If $\alpha_k \ne 1$, it follows from the definition of $\alpha_k$ that $\rho^{-1}\alpha_k$ does not satisfy the line search condition (2.3). Thus,

$$-F(x_k + \rho^{-1}\alpha_k d_k)^{\mathsf T} d_k < \sigma \rho^{-1}\alpha_k \|d_k\|^2.$$

According to (2.7) and (3.1), it follows that

$$\|F(x_k)\|^2 = -F(x_k)^{\mathsf T} d_k = \big(F(x_k + \rho^{-1}\alpha_k d_k) - F(x_k)\big)^{\mathsf T} d_k - F(x_k + \rho^{-1}\alpha_k d_k)^{\mathsf T} d_k < (L+\sigma)\,\rho^{-1}\alpha_k \|d_k\|^2.$$

So it can be concluded that

$$\alpha_k > \frac{\rho\,\|F(x_k)\|^2}{(L+\sigma)\|d_k\|^2}.$$

The lemma is proven. □
The following lemma shows that the sequence $\{\|x_k - \bar x\|\}$ is decreasing and convergent, where $\{x_k\}$ is generated by Algorithm 2.1 and $\bar x$ is an arbitrary solution of Problem (1.1). The proof of this lemma can be found in Reference [17].
Lemma 3.2 Assume that the mapping $F$ satisfies (H) and the sequences $\{x_k\}$ and $\{z_k\}$ are generated by Algorithm 2.1. Then, for any solution $\bar x$ of Problem (1.1),

$$\|x_{k+1} - \bar x\|^2 \le \|x_k - \bar x\|^2 - \|x_{k+1} - x_k\|^2. \qquad (3.3)$$

In particular, the sequence $\{x_k\}$ is bounded and

$$\sum_{k=0}^{\infty} \|x_{k+1} - x_k\|^2 < \infty. \qquad (3.4)$$
Note 3.1 According to (2.2) and (2.3), we know that

$$\|x_{k+1} - x_k\| = \frac{F(z_k)^{\mathsf T}(x_k - z_k)}{\|F(z_k)\|} \ge \frac{\sigma \alpha_k^2 \|d_k\|^2}{\|F(z_k)\|}.$$

According to (3.4), we know that

$$\lim_{k \to \infty} \alpha_k \|d_k\| = 0. \qquad (3.5)$$
Lemma 3.3 Assume that the mapping $F$ satisfies (H) and the sequences $\{x_k\}$ and $\{d_k\}$ are generated by Algorithm 2.1. Then there exists a positive constant $M$ such that

$$\|d_k\| \le M, \qquad \forall\, k \ge 0. \qquad (3.6)$$

Proof For any solution $\bar x$ of Problem (1.1), condition (H) gives

$$\|F(x_k)\| = \|F(x_k) - F(\bar x)\| \le L\|x_k - \bar x\|. \qquad (3.7)$$

Then, applying the boundedness of the sequence $\{x_k\}$ (Lemma 3.2), there exists a positive constant $\gamma$ such that

$$\|F(x_k)\| \le \gamma, \qquad \forall\, k \ge 0. \qquad (3.8)$$

By using (2.2) and (3.1), as well as the Cauchy-Schwarz inequality and the definition of the step $\alpha_k$, an upper bound for $\|y_k\|$ follows. According to (2.6), (3.7), (3.8) and the Cauchy-Schwarz inequality, upper bounds for $|\beta_k|$ and $|\theta_k|$ follow as well. Based on these estimates together with (2.4) and (3.8), it can be concluded that $\|d_{k+1}\|$ is bounded above by a constant independent of $k$. Since $\|d_0\| = \|F(x_0)\| \le \gamma$, taking $M$ to be the larger of these two bounds, the conclusion is proved. □
Theorem 3.1 If the mapping $F$ satisfies (H) and the sequences $\{x_k\}$ and $\{z_k\}$ are generated by Algorithm 2.1, then

$$\liminf_{k \to \infty} \|F(x_k)\| = 0. \qquad (3.9)$$

Proof Suppose that (3.9) does not hold. Then there exists a positive constant $\varepsilon_0$ such that

$$\|F(x_k)\| \ge \varepsilon_0, \qquad \forall\, k \ge 0.$$

By (2.7) and the Cauchy-Schwarz inequality, $\|d_k\| \ge \|F(x_k)\| \ge \varepsilon_0$. Combining this with (3.2), (3.6) and (3.7), it can be found that

$$\alpha_k \|d_k\| \ge \min\left\{\varepsilon_0,\ \frac{\rho\,\varepsilon_0^2}{(L+\sigma)M}\right\} > 0, \qquad \forall\, k \ge 0.$$

Obviously, this contradicts (3.5). Therefore, the hypothesis does not hold and the original proposition is true. □
4 R-linear convergence rate
To further establish the R-linear convergence rate of Algorithm 2.1, it is necessary to assume that the mapping $F$ also satisfies the following local error bound condition:

(G) There exist positive constants $c$ and $\delta$ such that

$$c\,\operatorname{dist}(x, X^*) \le \|F(x)\|, \qquad \forall\, x \in \{x \in \mathbb{R}^n : \operatorname{dist}(x, X^*) \le \delta\}, \qquad (4.1)$$

where $X^*$ denotes the set of solutions of Problem (1.1) and $\operatorname{dist}(x, X^*) = \min_{y \in X^*} \|x - y\|$ denotes the distance from the point $x$ to the solution set $X^*$.
Theorem 4.1 Suppose that the mapping $F$ satisfies (H) and (G) and the sequence $\{x_k\}$ is generated by Algorithm 2.1. Then the sequence $\{\operatorname{dist}(x_k, X^*)\}$ converges Q-linearly to 0, and thus the sequence $\{x_k\}$ converges R-linearly.

Proof Let $\bar x_k \in X^*$ be the solution closest to the point $x_k$ in the solution set $X^*$, i.e.,

$$\|x_k - \bar x_k\| = \operatorname{dist}(x_k, X^*). \qquad (4.2)$$

By using (2.3) and the Cauchy-Schwarz inequality, a lower bound of the form

$$F(z_k)^{\mathsf T}(x_k - z_k) \ge \sigma\,\|x_k - z_k\|^2 \qquad (4.3)$$

is obtained. Knowing that $\|x_{k+1} - x_k\| = F(z_k)^{\mathsf T}(x_k - z_k)/\|F(z_k)\|$, and using (3.3), (4.1)-(4.3) and Note 3.1, we can get

$$\operatorname{dist}(x_{k+1}, X^*)^2 \le (1 - c_0)\,\operatorname{dist}(x_k, X^*)^2$$

for all sufficiently large $k$ and some constant $c_0 \in (0, 1)$. This shows that the sequence $\{\operatorname{dist}(x_k, X^*)\}$ converges Q-linearly to 0. Since $\|x_k - \bar x_k\| = \operatorname{dist}(x_k, X^*)$, the sequence $\{x_k\}$ converges R-linearly. □
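For reference, the two rate notions used in Theorem 4.1 are standard; the following display, supplied by the editor, recalls them under the notation of this section:

\[
\text{Q-linear:}\quad \limsup_{k\to\infty}
\frac{\operatorname{dist}(x_{k+1}, X^*)}{\operatorname{dist}(x_k, X^*)} < 1;
\qquad
\text{R-linear:}\quad \|x_k - x^*\| \le C q^k
\ \text{for some } C > 0,\ q \in (0, 1).
\]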
5 Numerical tests
Algorithm 2.1 is tested numerically below and compared with the MPRP method [12] and the MHS method [18]. The program code is written in MATLAB 7.0 and run on an Intel Core i5-4590 processor (quad-core, 3.30 GHz) with 8.0 GB of memory under the Windows 7 operating system. The test problems are taken from [22], [3] and [10], respectively.
Problem 5.1 is the test problem taken from [22].
Problem 5.2 is the test problem taken from [3].
Problem 5.3 is the test problem taken from [10].
The parameter values in Algorithm 2.1 are fixed for all runs. All programs terminate when $\|F(x_k)\| \le \varepsilon$ or $\|F(z_k)\| \le \varepsilon$.
In the numerical tests, the rand function in MATLAB is used to randomly generate a fixed initial point, from which the three test problems are solved. The numerical results are shown in Tab.1‒Tab.7, where "Dim" denotes the dimension of the test problem, "NI" the number of iterations, "NF" the number of evaluations of $F$, and "CPU" the CPU time in seconds.
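A minimal sketch of such a test driver is shown below. The mapping F here ($F(x) = e^x - 1$ componentwise, which is monotone) is a hypothetical stand-in chosen by the editor rather than one of Problems 5.1-5.3, and ttcg_dfree refers to the editorial sketch of Algorithm 2.1 given earlier:

    % Hypothetical driver; F is NOT one of Problems 5.1-5.3.
    n  = 10000;                        % "Dim": problem dimension
    F  = @(x) exp(x) - 1;              % componentwise monotone mapping
    x0 = rand(n, 1);                   % fixed random initial point (rand)
    tic;
    [x, NI] = ttcg_dfree(F, x0);       % "NI": number of iterations
    cpu = toc;                         % "CPU": time in seconds
    fprintf('Dim=%d NI=%d CPU=%.2f ||F(x)||=%.2e\n', n, NI, cpu, norm(F(x)));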
Tab.1‒Tab.7 indicate that Algorithm 2.1, the MPRP method, and the MHS method all successfully solve the given test problems from randomly generated initial points. For Problems 5.1 and 5.2, the numerical performance of the three methods is comparable; however, for Problem 5.3, Algorithm 2.1 performs significantly better than the MPRP and MHS methods.
References

[1] Ahookhosh M, Amini K, Bahrami S. Two derivative-free projection approaches for systems of large-scale nonlinear monotone equations. Numer Algorithms 2013; 64(1): 21–42
[2] Andrei N. On three-term conjugate gradient algorithms for unconstrained optimization. Appl Math Comput 2013; 219: 6316–6327
[3] Cheng W Y. A PRP type method for systems of monotone equations. Math Comput Modelling 2009; 50: 15–20
[4] Dennis J E, Moré J J. A characterization of superlinear convergence and its application to quasi-Newton methods. Math Comp 1974; 28(126): 549–560
[5] Dennis J E, Moré J J. Quasi-Newton methods, motivation and theory. SIAM Rev 1977; 19(1): 46–89
[6] Dirkse S P, Ferris M C. MCPLIB: a collection of nonlinear mixed complementarity problems. Optim Methods Softw 1995; 5(4): 319–345
[7] Hager W W, Zhang H. A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J Optim 2005; 16(1): 170–192
[8] Hestenes M R, Stiefel E L. Methods of conjugate gradients for solving linear systems. J Research Nat Bur Standards 1952; 49(6): 409–436
[9] Iusem A N, Solodov M V. Newton-type methods with generalized distances for constrained optimization. Optimization 1997; 41(3): 257–278
[10] La Cruz W, Martínez J M, Raydan M. Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math Comp 2006; 75(255): 1429–1448
[11] Li D H, Fukushima M. A globally and superlinearly convergent Gauss-Newton-based BFGS method for symmetric nonlinear equations. SIAM J Numer Anal 1999; 37(1): 152–172
[12] Li Q N, Li D H. A class of derivative-free methods for large-scale nonlinear monotone equations. IMA J Numer Anal 2011; 31(4): 1625–1635
[13] Liu J K, Li S J. A projection method for convex constrained monotone nonlinear equations with applications. Comput Math Appl 2015; 70(10): 2442–2453
[14] Meintjes K, Morgan A P. A methodology for solving chemical equilibrium systems. Appl Math Comput 1987; 22(4): 333–361
[15] Polyak B T. The conjugate gradient method in extremal problems. USSR Comput Math and Math Phys 1969; 9(4): 94–112
[16] Polak E, Ribière G. Note sur la convergence de méthodes de directions conjuguées. Rev Française Informat Recherche Opérationnelle 1969; 3(16): 35–43
[17] Solodov M V, Svaiter B F. A globally convergent inexact Newton method for systems of monotone equations. In: Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods. Dordrecht: Springer, 1999, 355–369
[18] Wu X Y, Sai N E Z, Zhang H L. On three-term HS projection algorithm for solving nonlinear monotone equations. J Southwest China Normal Univ Natur Sci Ed 2016; 41(5): 41–47
[19] Xiao Y H, Zhu H. A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing. J Math Anal Appl 2013; 405(1): 310–319
[20] Zhao Y B, Li D H. Monotonicity of fixed point and normal mapping associated with variational inequality and its application. SIAM J Optim 2000; 11(4): 962–973
[21] Zhou G, Toh K C. Superlinear convergence of a Newton-type algorithm for monotone equations. J Optim Theory Appl 2005; 125(1): 205–221
[22] Zhou W J, Li D H. Limited memory BFGS method for nonlinear monotone equations. J Comput Math 2007; 25(1): 89–96
[23] Zhou W J, Li D H. A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Math Comp 2008; 77(264): 2231–2240