Arc-search in numerical optimization

Yiguang YANG

Front. Math. China, 2023, 18(5): 313-326. DOI: 10.3868/s140-DDD-023-0022-x
SURVEY ARTICLE


Abstract

Determining the search direction and the step length are the two main components of a nonlinear optimization algorithm: the derivatives of the objective and constraint functions are used to determine the search direction, while one-dimensional line search and trust region methods are used to determine the step length along that direction. One-dimensional line search has been widely discussed in various textbooks and references. There is, however, a less-known technique, the arc-search method, which is relatively new and may generate more efficient algorithms in some cases. In this paper, we survey this technique, discuss its applications in different optimization problems, and explain its potential improvements over the traditional line search method.

Keywords

Arc-search / numerical optimization / linear programming and convex quadratic programming / unconstrained optimization / constrained optimization

Cite this article

Yiguang YANG. Arc-search in numerical optimization. Front. Math. China, 2023, 18(5): 313-326. DOI: 10.3868/s140-DDD-023-0022-x


1 Introduction

Optimization techniques form a highly active branch of applied mathematics with many applications in practical problems, and optimization and numerical computation are strongly related. One of the main features of optimization algorithms is the use of derivatives of the objective and constraint functions, together with related information, to find a search direction and determine a step length; the most important tools for determining the step length are one-dimensional line search and trust region methods. Such a method starts from a feasible point, looks for a new feasible point in the constraint region along a descent direction of the objective function, and improves the objective value; from the new feasible point it searches for the next one, and the cycle repeats until an optimal solution is found. The one-dimensional line search is the earliest proposed method; it is simple, easy to implement, and has proven effective over the years. However, it has also been noticed that in some cases line search may not be the best option. Therefore, arc-search methods have been proposed, and analytical, experimental, and comparative research has been performed. Theoretical analysis and numerical computation show that arc-search is promising in many cases. In this paper, different arc-search techniques and their applications in numerical optimization are presented.

The general problem of constrained nonlinear programming can be expressed as

$$\min f(x) \quad \text{s.t.} \quad h(x)=0,\ \ g(x)\ge 0. \tag{1}$$

Here $x\in\mathbb{R}^n$, $f(x)$ is a linear or nonlinear differentiable objective function of $x$, $h(x)=0$ is the equality (vector) constraint, with $h$ linearly or nonlinearly differentiable with respect to $x$, and $g(x)\ge 0$ is the inequality (vector) constraint, with $g$ linearly or nonlinearly differentiable with respect to $x$. Many textbooks, such as [7, 21, 38, 49], discuss this general problem and give many workable algorithms and profound theoretical results. However, none of these popular textbooks discusses the relatively new arc-search algorithms. For this reason, we will discuss the application of arc-search techniques to this problem.

Most optimization problems are special cases of Problem (1). Some special problems are particularly important theoretically or in applications and thus require special attention. Usually, for each special problem, a specialized algorithm is more effective than a general algorithm designed for the general problem. Therefore, this paper will discuss several special optimization problems and the application of arc-search techniques to solving them.

The mathematical expression for the first special problem is as follows:

$$\min f(x) \quad \text{s.t.} \quad h(x)=0, \tag{2}$$

where $x\in\mathbb{R}^n$. Clearly, the main difference between Problem (2) and Problem (1) is the absence of the inequality constraint $g(x)\ge 0$. If $h(x)$ is a differentiable vector function, then the equality constraint $h(x)=0$ defines a Riemannian manifold [28]. Noting that $\mathbb{R}^n$ is a special Riemannian manifold, it is easy to see that Problem (2) is a natural extension of the following unconstrained optimization problem:

$$\min_{x\in\mathbb{R}^n} f(x). \tag{3}$$

Thus, the optimization problem on a Riemannian manifold is at least locally equivalent to the differentiable equality-constrained optimization Problem (2) on Euclidean space, and we can expect natural generalizations of algorithms for the unconstrained optimization Problem (3) to yield solutions of the equality-constrained optimization Problem (2).

In this review paper, we will also discuss a special case of (1), namely the case where both the objective function and the equality constraint are linear and the inequality constraint is $x\ge 0$. This type of problem is called a linear programming problem. Linear programming was first proposed and systematically studied by Kantorovich [30] in 1939, and Dantzig discovered the famous simplex algorithm in 1947 [16]. Since then, linear programming has become an important branch of optimization. Many researchers have studied this problem in depth and obtained a large number of deep results and many effective algorithms. The standard form of linear programming can be expressed as follows:

$$\min\ c^{T}x \quad \text{s.t.} \quad Ax=b,\ x\ge 0, \tag{4}$$

where $c,x\in\mathbb{R}^n$, $b\in\mathbb{R}^m$, $A\in\mathbb{R}^{m\times n}$, $m<n$. There are two major classes of practical linear programming algorithms, namely the simplex method and interior point methods. Since the simplex method was discovered by Dantzig, many researchers have analyzed it and proposed various improvements to make it better and more practical. However, Klee and Minty [32] gave an example showing that the simplex method is not a polynomial-time algorithm. Karmarkar [31] therefore introduced the interior point method for solving linear programming problems. Currently, the most efficient interior point methods are path-following methods, which search for the optimal solution along a central path in the interior of the feasible region. Since the central path is a curve in the $(m+2n)$-dimensional space, arc-search methods are naturally suited to this type of method.

Slightly different from linear programming is the convex quadratic programming problem, whose standard form is as in [6]:

$$\min\ \tfrac{1}{2}x^{T}Qx + c^{T}x \quad \text{s.t.} \quad Ax=b,\ x\ge 0, \tag{5}$$

where $Q\in\mathbb{R}^{n\times n}$ is a positive definite matrix, $c\in\mathbb{R}^n$, $x\in\mathbb{R}^n$, $b\in\mathbb{R}^m$, $A\in\mathbb{R}^{m\times n}$. The convex quadratic programming problem can also be solved by the simplex method and by interior point methods [6]. Therefore, the arc-search method can naturally be used within interior point methods for convex quadratic programming.

Other special cases of (1) will also be discussed in this paper. Before going further, we outline the topics of each section. Section 2 focuses on the application of arc-search in linear and convex quadratic programming, mainly the elliptic approximation of the central path; Section 3 introduces the application of arc-search in unconstrained nonlinear programming; Section 4 focuses on the application of arc-search to constrained nonlinear programming; Section 5 gives the main conclusions of this paper.

2 Arc-search methods for linear programming and convex quadratic programming problems

This section discusses the simplest class of optimization problems: linear programming. There are several equivalent formulations of linear programming; we concentrate on solving the standard form (4), and the methods extend easily to the convex quadratic programming Problem (5). Although both linear programming and convex quadratic programming can be solved by the simplex and interior point methods, we focus on the application of arc-search within interior point methods. The best interior point algorithms for solving (4) also rely on its dual problem, whose mathematical expression is as follows:

$$\max\ b^{T}y \quad \text{s.t.} \quad A^{T}y+s=c,\ s\ge 0. \tag{6}$$

It is well known that Problem (4) has a global optimal solution if and only if the following KKT conditions hold [33]:

$$Ax=b,\quad A^{T}y+s=c,\quad (x,s)\ge 0,\quad x_i s_i=0,\ \ i=1,2,\ldots,n. \tag{7}$$

Assume that the system (7) has a strictly feasible interior, i.e., that the set

$$\mathcal{F}^{0}=\{(x,y,s):\ Ax=b,\ A^{T}y+s=c,\ (x,s)>0\} \tag{8}$$

is not empty, and that $A$ has full row rank. Megiddo [43] suggested searching for the optimal solution of (7) along the central path, which is defined as follows:

$$Ax=b,\quad A^{T}y+s=c,\quad (x,s)>0,\quad x_i s_i=\mu,\ \ i=1,2,\ldots,n. \tag{9}$$

It is clear from Eq. (9) that the central path $(x(\mu),y(\mu),s(\mu))\in\mathbb{R}^{2n+m}$ is a nonlinear curve in the $(2n+m)$-dimensional space; hence the best search should proceed along the nonlinear curve $(x(\mu),y(\mu),s(\mu))$ rather than along a straight line. However, because the cost of computing the central path exactly is too high, the vast majority of interior point methods use line search methods [59, 70]. Weighing the advantage of following the central path against the cost of computing it, Monteiro et al. [46] proposed a method for approximating the central path. Let $\omega=(x,y,s)$, $\dot{\omega}=(\dot{x},\dot{y},\dot{s})$, $\ddot{\omega}=(\ddot{x},\ddot{y},\ddot{s})$; differentiating (9) with respect to $\mu$ and solving for $\dot{\omega}=(\dot{x}(\mu),\dot{y}(\mu),\dot{s}(\mu))$ and $\ddot{\omega}=(\ddot{x}(\mu),\ddot{y}(\mu),\ddot{s}(\mu))$, they suggested searching for the optimal solution along the following curve:

$$\omega-\alpha\dot{\omega}+\alpha^{2}\ddot{\omega}, \tag{10}$$

where $\alpha$ is the search step. This is a typical representation of a search curve, i.e., the curve is a polynomial in the step $\alpha$; similar representations appear in Section 3. For the search curve (10), Monteiro et al. [46] proved that it can be used not only to find the solution of the linear programming Problem (4), but also to find the solution of the convex quadratic programming Problem (5). A similar idea has also been discussed in [18]. However, a conservative polynomial complexity bound on the computational efficiency of this search shows that the arc-search method may not be as efficient as some line search methods [46]. At the same time, extensive computational experience also suggests that this arc-search method is not as efficient as the best-known interior point method, Mehrotra's method [44] (Mehrotra's original method is also an arc-search method, but the modified Mehrotra method is a line search method; neither the original nor the improved Mehrotra method has been proved to be convergent). Therefore, for the past 20 years, arc-search methods received little attention within interior point methods for linear programming.

A meaningful result appeared in 2009, when Yang [63] proposed a different arc-search method. He suggested approximating the central path by arcs of ellipses. Unlike the approach in [46], Yang approximates the central path over a large segment, whereas Monteiro et al. emphasized a local approximation of the central path. Let $S=\operatorname{diag}(s)$ and $X=\operatorname{diag}(x)$. Then $(\dot{x},\dot{y},\dot{s})$ and $(\ddot{x},\ddot{y},\ddot{s})$ can be obtained by solving the following two linear systems of equations [63]:

$$\begin{bmatrix} A & 0 & 0\\ 0 & A^{T} & I\\ S & 0 & X \end{bmatrix}\begin{bmatrix} \dot{x}\\ \dot{y}\\ \dot{s}\end{bmatrix}=\begin{bmatrix} 0\\ 0\\ xs\end{bmatrix}, \tag{11}$$

$$\begin{bmatrix} A & 0 & 0\\ 0 & A^{T} & I\\ S & 0 & X \end{bmatrix}\begin{bmatrix} \ddot{x}\\ \ddot{y}\\ \ddot{s}\end{bmatrix}=\begin{bmatrix} 0\\ 0\\ -2\dot{x}\dot{s}\end{bmatrix}, \tag{12}$$

where $xs$ and $\dot{x}\dot{s}$ denote the componentwise products of the corresponding vectors. The ellipse approximating the central path can then be constructed as follows.

Let $(x,y,s)\in\mathcal{F}^{0}$ and let $(x(\alpha),y(\alpha),s(\alpha))$ be the arc of the ellipse that approximates the central path and passes through $(x,y,s)$. Then the arc of the ellipse can be expressed as

$$\begin{cases} x(\alpha)=x-\dot{x}\sin(\alpha)+\ddot{x}(1-\cos(\alpha)),\\ y(\alpha)=y-\dot{y}\sin(\alpha)+\ddot{y}(1-\cos(\alpha)),\\ s(\alpha)=s-\dot{s}\sin(\alpha)+\ddot{s}(1-\cos(\alpha)). \end{cases} \tag{13}$$
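To make the construction concrete, the following Python sketch (a minimal illustration, not Yang's implementation) assembles the coefficient matrix shared by (11) and (12) with dense linear algebra, solves for the first- and second-order derivatives, and evaluates the elliptic arc (13). The function name arc_search_step and the simple backtracking rule used to keep $(x(\alpha),s(\alpha))>0$ are assumptions made for this example; practical codes factor the matrix once and choose the step far more carefully.

```python
import numpy as np

def arc_search_step(A, x, y, s, alpha=0.9, shrink=0.7):
    """One illustrative arc-search step for the primal-dual pair (4)/(6)."""
    m, n = A.shape
    S, X = np.diag(s), np.diag(x)
    # Coefficient matrix shared by systems (11) and (12).
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [S,                np.zeros((n, m)), X],
    ])
    d1 = np.linalg.solve(K, np.r_[np.zeros(m + n), x * s])            # system (11)
    xd, yd, sd = d1[:n], d1[n:n + m], d1[n + m:]
    d2 = np.linalg.solve(K, np.r_[np.zeros(m + n), -2.0 * xd * sd])   # system (12)
    xdd, ydd, sdd = d2[:n], d2[n:n + m], d2[n + m:]

    def arc(a):  # the elliptic arc (13)
        return (x - xd * np.sin(a) + xdd * (1 - np.cos(a)),
                y - yd * np.sin(a) + ydd * (1 - np.cos(a)),
                s - sd * np.sin(a) + sdd * (1 - np.cos(a)))

    xa, ya, sa = arc(alpha)
    while np.any(xa <= 0) or np.any(sa <= 0):   # keep the iterate strictly positive
        alpha *= shrink
        xa, ya, sa = arc(alpha)
    return xa, ya, sa
```

Note that (12) reuses the coefficient matrix of (11), so the extra cost of the second-order information is modest once the matrix has been factored.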

Applying this arc-search technique and combining it with the predictor-corrector algorithm suggested by Mizuno et al. [45], Yang [65] proved that an interior point algorithm using second-order derivatives for solving linear programming problems can achieve the optimal polynomial bound. This is a significant theoretical improvement, as previously the optimal polynomial bound could be achieved only by interior point algorithms using first-order derivatives. A similar arc-search interior point algorithm for convex quadratic programming was shown to attain the corresponding polynomial bound [64]. More recently, Yang et al. [60] proposed a different arc-search algorithm for solving linear programming problems.

To demonstrate the advantages of the arc-search algorithm from the perspective of numerical computation, Yang [68] proposed an infeasible interior point algorithm similar to Mehrotra's algorithm [44] and tested both algorithms on the standard problems in Netlib [48]. For a fair comparison, the two algorithms used the same preprocessing, initial conditions, parameters, and convergence criteria; the only difference is that the arc-search algorithm searches along a curve, while Mehrotra's algorithm searches along a straight line. After carefully comparing the computational results of the two algorithms, Yang [68] concluded that arc-search is better.

3 Arc-search algorithm in unconstrained nonlinear programming

Arc-search may have originated from the research monograph of Fiacco and McCormick in 1968 [20]. The first research paper may be McCormick's 1977 article, in which he applied arc-search to solve unconstrained optimization problems effectively [42]. Unlike the commonly used optimization algorithms that search along a straight line, McCormick recommended combining two different directions for the arc search. The first direction $s_k$ is a descent direction at the current iterate; as in line search methods, this direction satisfies the following condition:

$$s_k^{T}\nabla f(x_k)\le 0. \tag{14}$$

The second direction $d_k$ is related to the curvature of the objective function; it is a negative curvature direction satisfying the following relation:

$$d_k^{T}\nabla^{2} f(x_k)\,d_k\le 0. \tag{15}$$

Similar to Armijo's search method, McCormick suggested starting from the current iterate $x_k$ and increasing $i$ until the next point $x_{k+1}$, generated by the following formula, reduces the objective function:

$$x_{k+1}=x_k+\frac{s_k}{2^{i}}+\frac{d_k}{2^{i/2}}. \tag{16}$$

This is in fact a generalization of Armijo's search method to arcs. McCormick's motivation is that, near a saddle point of the objective function, both a descent direction and a negative curvature direction are needed. Under some reasonable conditions, McCormick proved that the arc-based search algorithm converges to a stationary point (a point satisfying the necessary conditions for optimality). Shortly afterwards, Moré and Sorensen [47] extended McCormick's work and proposed a more general search formula for the unconstrained optimization Problem (3):

$$x_{k+1}=x_k+\alpha_k^{2}s_k+\alpha_k d_k. \tag{17}$$

Moré and Sorensen tested their search algorithm and proved its convergence. On this basis, ideas similar to (17) have been proposed and studied by different scholars, such as Ferris et al. [19], Lucidi et al. [36], and Zhou and Sun [55, 72]. These authors rigorously proved the convergence of their respective algorithms and provided the corresponding test results.
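As an illustration, the sketch below runs an Armijo-type backtracking search along the arc (17); the sufficient-decrease test, the parameter values, and the small example problem are assumptions made for this sketch rather than the exact rules used by McCormick or by Moré and Sorensen.

```python
import numpy as np

def arc_armijo(f, grad, x, s, d, alpha0=1.0, beta=0.5, sigma=1e-4, max_iter=50):
    """Backtracking search along the arc (17): x + alpha^2 s + alpha d."""
    fx, g = f(x), grad(x)
    alpha = alpha0
    for _ in range(max_iter):
        trial = x + alpha**2 * s + alpha * d          # a point on the arc (17)
        # Sufficient decrease measured against the first-order model along the arc.
        if f(trial) <= fx + sigma * (alpha**2 * g @ s + alpha * g @ d):
            return trial, alpha
        alpha *= beta                                  # shrink the step
    return x, 0.0

# Example: one step on f(x) = x1^2 - x2^2 from x = (1, 0.1), with the descent
# direction s = -grad f(x) and the negative curvature direction d = (0, 1).
f = lambda x: x[0]**2 - x[1]**2
grad = lambda x: np.array([2.0 * x[0], -2.0 * x[1]])
x = np.array([1.0, 0.1])
x_new, step = arc_armijo(f, grad, x, -grad(x), np.array([0.0, 1.0]))
```

Choosing the shrink factor beta = 2^{-1/2} makes the trial steps α = 2^{-i/2}, so the same loop reproduces McCormick's rule (16).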

Shortly after Moré and Sorensen published their results, Goldfarb [23] proposed a polynomial combination formula for arc-search that differs from (17):

$$x_{k+1}=x_k+\alpha_k s_k+\alpha_k^{2}d_k. \tag{18}$$

Goldfarb's main consideration was the following: the arc-search (17) is very effective near saddle points, but it is not effective in locally convex regions, where the search (18) works better. Obviously, a method combining (17) and (18) would be attractive; therefore, Gould et al. [24] proposed a weighted combination of (17) and (18).

Recently, Olivares et al. [50] discussed a more general arc-search for the unconstrained nonlinear nonconvex optimization Problem (3), in which the arc is expressed as

$$x(\alpha)=x(0)+\phi_{1}(\alpha)s_k+\phi_{2}(\alpha)d_k, \tag{19}$$

where $\phi_1$ and $\phi_2$ are non-negative weighting functions. This more general arc-search provides more flexibility in the design of algorithms.

So far, all the algorithms discussed in this section are based on combining two search directions: a descent direction and a negative curvature direction. The computation of the negative curvature direction relies on the Hessian matrix, which is computationally expensive. Therefore, Apostolopoulou et al. [4] proposed applying a BFGS formula to approximate the Hessian matrix and to compute an approximate negative curvature direction, thus avoiding the computation of the Hessian itself. This approach is therefore promising for large-scale problems, and current computational experience supports this judgment [4].
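As a concrete illustration of where the direction pair comes from, the following sketch extracts a descent direction and a negative curvature direction satisfying (14) and (15) from an exact or approximated Hessian B via an eigenvalue decomposition; this is the textbook construction, not the minimal-memory BFGS scheme of [4], and the function name direction_pair is ours.

```python
import numpy as np

def direction_pair(g, B):
    """Return (s, d): a descent direction and a negative curvature direction."""
    s = -g                                  # s satisfies s^T g <= 0, as in (14)
    lam, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    if lam[0] < 0:                          # negative curvature is available
        d = V[:, 0]                         # d^T B d = lam[0] <= 0, as in (15)
        if d @ g > 0:                       # orient d so that it is not an ascent direction
            d = -d
    else:
        d = np.zeros_like(g)                # no negative curvature: fall back to d = 0
    return s, d
```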

Grandinetti [25] applied a completely different idea: he considered a curve from the initial point to the optimal solution in a multidimensional space, defined by an ordinary differential equation. Dennis et al. [17] generalized Grandinetti's idea; using regularization, they obtained a new algorithm and proved its global convergence. Combining Goldfarb's formulation and Grandinetti's arc-search, Conforti et al. [14] proposed the following search curve:

$$x(\alpha)=x(0)+\alpha\dot{x}(0)+\tfrac{1}{2}\alpha^{2}\ddot{x}(0), \tag{20}$$

where $\dot{x}(0)$ is the first derivative of the curve at the initial point $\alpha=0$ and $\ddot{x}(0)$ is the second derivative at $\alpha=0$. Conforti et al. [15] extended this method to more general situations.
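A minimal sketch of the curve (20) follows, under the assumption (made here only for illustration; [14] may define the trajectory differently) that the underlying ordinary differential equation is continuous steepest descent, $\dot{x}(t)=-\nabla f(x(t))$, so that $\ddot{x}(0)=-\nabla^{2}f(x(0))\,\dot{x}(0)$ by the chain rule.

```python
import numpy as np

def taylor_arc(grad, hess, x0):
    """Second-order curve (20) built from the assumed steepest-descent trajectory at x0."""
    xdot = -grad(x0)                  # x'(0) from the assumed ODE x' = -grad f(x)
    xddot = -hess(x0) @ xdot          # x''(0) obtained by differentiating the ODE once more
    return lambda a: x0 + a * xdot + 0.5 * a**2 * xddot
```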

All the above methods were proposed for Problem (3). Martínez and Santos [41] discussed the following more specific unconstrained nonlinear nonconvex problem, a nonlinear least-squares problem:

$$\min_{x\in\mathbb{R}^{n}}\ \|f(x)\|^{2}. \tag{21}$$

Given a pair of search directions $d_1$ and $d_2$, they search for the optimal solution along the following curve and prove the global convergence of the algorithm:

$$x_{k+1}=x_k+\alpha^{2}d_{1}+\alpha t(1-t)d_{2}. \tag{22}$$

The same problem has also been considered by other researchers [4]. Bader and Schnabel [5] proposed an algorithm formulated in terms of tensors.

4 Arc-search algorithms in constrained nonlinear programming

Assume that all the nonlinear equality constraints in (2) are smooth functions; the constraint set then forms a Riemannian manifold. In 1972, Luenberger [37] was the first to point out that, from the manifold point of view, the set defined by nonlinear equality constraints is analogous to a surface in three-dimensional space; hence, theoretically, a nonlinear programming problem with equality constraints may be solved by searching along geodesics on the Riemannian manifold. However, due to the complexity of computing geodesics in general, Luenberger considered the method impractical. The first algorithm to search for the optimal solution along geodesics was proposed by Botsaris [9]; his algorithm does not compute the geodesic directly, but computes an approximating curve of the geodesic and then searches along that curve. Noting that the $n$-dimensional Euclidean space is the simplest manifold, Gabay [22] realized that, at least in theory, algorithms for unconstrained optimization can be naturally generalized to constrained optimization within the framework of Riemannian manifolds. In general, the search for optimal solutions on a Riemannian manifold proceeds along curves rather than straight lines, so arc-search is the natural choice for optimization on Riemannian manifolds. Gabay [22] proposed several unified algorithms on Riemannian manifolds, but he did not implement them and no computational results are available. His idea nevertheless inspired many researchers to move forward along this line, although progress was slow at first: for about a decade, only a few papers were published (see [10, 13, 51]). In 1994, Udrişte published the first monograph on optimization over Riemannian manifolds [57].

An important breakthrough occurred in 1993, when Smith systematically studied the problem in his Ph.D. thesis [53]. Using precise Riemannian-manifold theory, he generalized the steepest descent method, the conjugate gradient method, and Newton's method from the unconstrained setting, so that the algorithms have a clear geometric meaning (geodesics and parallel transport) on the manifold, and he established convergence results for these algorithms. Most importantly, he pointed out that for certain manifolds the geodesics and parallel transports have very simple expressions, so that these algorithms are very effective there. He also wrote computer programs to solve such problems and showed that his algorithms are very effective for them. For example, a geodesic on the sphere can be expressed simply as

$$s(t)=x\cos(t)+g\sin(t), \tag{23}$$

where $x$ is the starting point of the geodesic and the unit vector $g$ is the search direction in the tangent plane at $x$. Smith's work not only revealed, in a mathematically elegant way, the intrinsic connection between unconstrained and equality-constrained optimization problems, but also gave concrete implementations for certain manifolds. Thus, his work soon attracted wide attention, and many generalizations emerged within a short time. For example, in 1996 Yang [61] dropped the assumption of one-dimensional global optimality in Smith's algorithm [53, 54] and introduced an Armijo-like search, which makes the improved algorithm easier to implement; Yang [61] proved that the improved algorithm is also globally convergent and showed that it can be used directly for engineering problems such as the robust pole assignment problem. The main results were published in 2007 [62]. Currently, the work of Edelman et al. [18] is the most famous, because it extended Smith's work to a special class of constraints, the orthogonality constraints; as of November 2016, it has been cited more than 1700 times according to Google Scholar. Shortly thereafter, Manton [40] published similar results. To simplify Smith's method, Adler et al. [3] introduced the concept of retraction to replace the computationally expensive exponential mapping. Absil et al. [1] introduced a trust region approach for optimization on manifolds, Ring and Wirth [52] gave convergence results for the BFGS algorithm on manifolds, and in 2008 Absil et al. [2] published the second monograph in this area.
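To illustrate how simple arc-search becomes when the geodesic has a closed form, the following sketch performs a backtracking steepest-descent step along the great-circle geodesic (23) of the unit sphere. The projection of the gradient onto the tangent space and the backtracking rule are standard textbook choices, not code from [53], and the example of minimizing $x^{T}Cx$ over the sphere (whose minimizers are eigenvectors of the smallest eigenvalue of $C$) is added here only for illustration.

```python
import numpy as np

def sphere_geodesic_step(f, egrad, x, t=1.0, shrink=0.5, max_iter=30):
    """One backtracking step along the geodesic (23) on the unit sphere."""
    g = egrad(x)
    g_tan = g - (g @ x) * x                 # project the gradient onto the tangent space at x
    norm = np.linalg.norm(g_tan)
    if norm < 1e-12:
        return x                            # stationary point on the sphere
    u = -g_tan / norm                       # unit descent direction in the tangent plane
    geo = lambda tt: x * np.cos(tt) + u * np.sin(tt)   # geodesic through x, cf. (23)
    for _ in range(max_iter):
        if f(geo(t)) < f(x):
            return geo(t)
        t *= shrink                         # shrink the arc length until f decreases
    return x

# Example: minimize f(x) = x^T C x on the unit sphere.
C = np.diag([3.0, 2.0, 1.0])
f = lambda x: x @ C @ x
egrad = lambda x: 2.0 * C @ x
x = np.ones(3) / np.sqrt(3.0)
for _ in range(100):
    x = sphere_geodesic_step(f, egrad, x)   # x approaches ±(0, 0, 1), the minimizer
```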

As theoretical research has progressed, more problems have been found that can be solved by Riemannian manifold optimization, and optimization algorithms on manifolds are more effective for certain problems. It is impossible to list all the examples, but we mention some applications in different fields. Ring and Wirth [52] studied the application of BFGS algorithms on manifolds to shape spaces. Ma et al. [39] discussed the application of manifold optimization algorithms to the estimation of object motion and structure. Lee [34] discussed applications of manifold optimization in computer vision. Love and Heath [29] applied optimization algorithms on manifolds to the precoding problem in information theory. Cai et al. [11] suggested solving linear discriminant analysis (LDA) problems by manifold optimization. Boscolo et al. [8] applied manifold optimization to independent component analysis (ICA) problems. Vasconcelos et al. [58] applied manifold optimization to optimize instrumentation settings. Yang used the conjugate gradient method on manifolds as a subroutine to obtain a globally and quadratically convergent algorithm for unconstrained nonlinear optimization [66], and demonstrated that Newton's method on manifolds is more effective than other existing methods for estimating satellite attitude [67]. Li et al. [35] studied planar target recognition based on Riemannian manifolds. Zhang et al. [71] discussed gradient algorithms on Stiefel manifolds and their application to feature extraction. Chen and Zhang [12] proposed a geometric algorithm for a class of convex programming problems on manifolds.

Recently, Henderson [26] considered the nonlinear optimization problem with only linear constraints, a special case of (1):

$$\min f(x) \quad \text{s.t.} \quad Ax\ge b, \tag{24}$$

where $A\in\mathbb{R}^{m\times n}$ is a matrix, $b\in\mathbb{R}^{m}$ is a vector, and $f$ is a twice differentiable function. Henderson introduced a very general expression for the arc search:

$$x_{k+1}=x_k+\phi_{k}(\alpha_k), \tag{25}$$

where $\phi_k\in C^{2}:\mathbb{R}\to\mathbb{R}^{n}$ is a twice differentiable arc and $\alpha_k$ is the search step. Such search arcs include McCormick's and Goldfarb's search arcs; in addition, Henderson defined a NEM search arc. He proposed several optimization algorithms based on the search arc (25) to solve (24), proved their convergence [26], and reported numerical test results.

Tits et al. [56] considered the most general case, the constrained nonlinear optimization Problem (1) with nonlinear equality and nonlinear inequality constraints. Their approach extends the interior point method for linear programming (Section 2) to nonlinear programming. Like many interior point algorithms for nonlinear programming, they use logarithmic barrier functions to iteratively improve the objective value; unlike other interior point algorithms, they employ an arc-search technique. Let $(\Delta x,\Delta z)$ be the proposed Newton search direction and $(\Delta\tilde{x},\Delta\tilde{z})$ be an auxiliary search direction associated with higher-order derivatives. Tits et al. suggested searching for the optimal solution along the following arc:

$$x^{+}=x+\alpha\Delta x+\alpha^{2}\Delta\tilde{x}, \tag{26}$$

where $\alpha\in(0,1]$ is the search step size. They also proved the global convergence and the local quadratic convergence rate of their algorithm, and these theoretical results make the algorithm attractive. Tits et al. tested their algorithm on the nonlinear optimization problems in [27] and reported impressive computational results.
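A minimal sketch of how the arc (26) might be used inside such a method is given below; the merit function, the strict-feasibility test for inequality constraints $g(x)\ge 0$, and all parameter values are illustrative assumptions and do not reproduce the globalization actually used in [56].

```python
import numpy as np

def arc_step(x, dx, dx_tilde, g, phi, alpha=1.0, shrink=0.5, max_iter=30):
    """Backtrack alpha in (0, 1] along the arc (26) until the trial point keeps the
    inequalities g(x) >= 0 strictly satisfied and reduces the merit function phi."""
    for _ in range(max_iter):
        trial = x + alpha * dx + alpha**2 * dx_tilde   # a point on the arc (26)
        if np.all(g(trial) > 0.0) and phi(trial) < phi(x):
            return trial, alpha
        alpha *= shrink
    return x, 0.0
```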

This section is expanded and modified from the author's previous work [69].

5 Conclusions and prospects

This paper has surveyed arc-search in numerical optimization. Although arc-search techniques are not as widely used in optimization algorithms as line search, the results surveyed here show that arc-search techniques outperform line search techniques in many cases. An important step in arc-search is choosing a suitable search arc. We expect that more people will adopt arc-search techniques in optimization algorithms.

We believe that arc-search has the potential to be extended to more problems in the context of interior point methods, such as linear complementarity problems and nonlinear programming problems. Theoretically, we expect to prove that arc-search algorithms have better convergence results than line search algorithms. For unconstrained nonlinear optimization there are many arc-search algorithms, but careful and comprehensive numerical comparisons between them are still lacking, in particular comparisons with the best line search algorithms. As for nonlinear equality-constrained optimization, the optimization algorithms on Riemannian manifolds have focused on problems with orthogonality constraints, and current results show that such algorithms outperform traditional line search. We expect researchers to find more problems suitable for Riemannian manifold optimization.

References

[1]

Absil P-A, Baker C G, Gallivan K A. Trust-region methods on Riemannian manifolds. Found Comput Math 2007; 7(3): 303–330

[2]

Absil P-A, Mahony R, Sepulchre R. Optimization Algorithms on Matrix Manifolds. Princeton, NJ: Princeton Univ Press, 2008

[3]

Adler R L, Dedieu J P, Margulies J Y, Martens M, Shub M. Newton’s method on Riemannian manifolds and a geometric model for the human spine. IMA J Numer Anal 2002; 22(3): 359–390

[4]

Apostolopoulou M S, Sotiropoulos D G, Botsaris C A. A curvilinear method based on minimal-memory BFGS updates. Appl Math Comput 2010; 217(2): 882–892

[5]

Bader B W, Schnabel R B. Curvilinear linesearch for tensor methods. SIAM J Sci Comput 2003; 25(2): 604–622

[6]

Beale E M L. On minimizing a convex function subject to linear inequalities. J R Stat Soc Ser B 1955; 17(2): 173–184

[7]

Bertsekas D P. Nonlinear Programming, 2nd ed. Belmont, MA: Athena Scientific, 1999

[8]

Boscolo R, Pan H, Roychowdhury V P. Independent component analysis based on nonparametric density estimation. IEEE Trans Neural Netw 2004; 15(1): 55–65

[9]

Botsaris C A. Constrained optimization along geodesics. J Math Anal Appl 1981; 79(2): 295–306

[10]

Brockett R W. Differential geometry and the design of gradient algorithms. In: Differential Geometry: Partial Differential Equations on Manifolds. Proc Sympos Pure Math, Vol 54, Part 1. Providence, RI: AMS, 1993, 69–92

[11]

Cai D, He X F, Han J W. Semi-supervised discriminant analysis. In: IEEE 11th International Conference on Computer Vision. IEEE, 2007, doi: 10.1109/ICCV.2007.4408856

[12]

Chen X Y, Zhang S G. A geometric method for a class of convex programs. J Fujian Norm Univ (Nat Sci Ed) 2012; 28(2): 7–10

[13]

Chu M T. Curves on $S^{n-1}$ that lead to eigenvalues or their means of a matrix. SIAM J Algebraic Discrete Methods 1986; 7(3): 425–432

[14]

Conforti D, Grandinetti L, Musmanno R. A parallel tensor algorithm for nonlinear optimization. Optim Method Softw 1994; 3(1/2/3): 125–142

[15]

Conforti D, Mancini M. A curvilinear search algorithm for unconstrained optimization by automatic differentiation. Optim Method Softw 2001; 15(3/4): 283–297

[16]

Dantzig G B. Maximization of a linear function of variables subject to linear inequalities. In: Activity Analysis of Production and Allocation. Cowles Commission Monograph, No 13. New York: John Wiley & Sons, 1951, 339–347

[17]

Dennis J E Jr, Echebest N, Guardarucci M T, Martinez J M, Scolnik H D, Vacchino C. A curvilinear search using tridiagonal secant updates for unconstrained optimization. SIAM J Optim 1991; 1(3): 333–357

[18]

Edelman A, Arias T A, Smith S T. The geometry of algorithms with orthogonality constraints. SIAM J Matrix Anal Appl 1998; 20(2): 303–353

[19]

Ferris M C, Lucid S, Roma M. Nonmonotone curvilinear line search methods for unconstrained optimization. Comput Optim Appl 1996; 6(2): 117–136

[20]

Fiacco A V, McCormick G P. Nonlinear Programming: Sequential Unconstrained Minimization Techniques. New York: John Wiley & Sons, 1968

[21]

Fletcher R. Practical Methods of Optimization, 2nd ed. Chichester: John Wiley & Sons, 1987

[22]

Gabay D. Minimizing a differentiable function over a differentiable manifold. J Optim Theory Appl 1982; 37(2): 177–219

[23]

Goldfarb D. Curvilinear path steplength algorithms for minimization which use directions of negative curvature. Math Program 1980; 18(1): 31–40

[24]

Gould N I M, Lucidi S, Roma M, Toint Ph L. Exploiting negative curvature directions in linesearch methods for unconstrained optimization. Optim Method Softw 2000; 14(1/2): 75–98

[25]

Grandinetti L. Nonlinear optimization by a curvilinear path strategy. In: System Modeling and Optimization. Lect Notes Control Inf Sci, Vol 59. Berlin: Springer-Verlag, 1984, 289–298

[26]

Henderson N W. Arc search methods for linearly constrained optimization. Ph D Thesis. Stanford, CA: Stanford University, 2012

[27]

Hock W, Schittkowski K. Test Examples for Nonlinear Programming Codes. Lecture Notes in Econom and Math Systems, Vol 187. Berlin: Springer-Verlag, 1981

[28]

Isidori A. Nonlinear Control Systems, 3rd ed. Berlin: Springer-Verlag, 1995

[29]

Love D J, Heath R W. Limited feedback unitary precoding for spatial multiplexing systems. IEEE Trans Inform Theory 2005; 51(8): 2967–2976

[30]

Kantorovich L V. Mathematical methods of organizing and planning production. Manag Sci 1960; 6(4): 366–422

[31]

Karmarkar N. A new polynomial-time algorithm for linear programming. Combinatorica 1984; 4(4): 373–395

[32]

Klee V, Minty G J. How good is the simplex algorithm? In: Inequalities, III (Shisha O ed). New York: Academic Press, 1972, 159–175

[33]

Kuhn H W, Tucker A W. Nonlinear programming. In: Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (Neyman J ed). Berkeley, CA: Univ California Press, 1951, 481–492

[34]

Lee P Y. Geometric optimization for computer vision. Ph D Thesis. Canberra: Austral Nat Univ, 2005

[35]

Li G W, Liu Y P, Yin J, Shi Z L. Planar object recognition based on Riemannian manifold. Acta Autom Sin 2010; 36(4): 465–474

[36]

Lucidi S, Rochetich F, Roma M. Curvilinear stabilization techniques for truncated Newton methods in large scale unconstrained optimization. SIAM J Optim 1998; 8(4): 916–939

[37]

Luenberger D G. Introduction to Linear and Nonlinear Programming. Reading, MA: Addison-Wesley, 1972

[38]

Luenberger D G, Ye Y Y. Linear and Nonlinear Programming, 3rd ed. New York: Springer Science+Business Media, 2008

[39]

Ma Y, Košecká J, Sastry S. Optimization criteria and geometric algorithms for motion and structure estimation. Int J Comput Vis 2001; 44(3): 219–249

[40]

Manton J H. Optimization algorithms exploiting unitary constraints. IEEE Trans Signal Process 2002; 50(3): 635–650

[41]

Martínez J M, Santos R F. An algorithm for solving nonlinear least-squares problems with a new curvilinear search. Computing 1990; 44(1): 83–90

[42]

McCormick G P. A modification of Armijo’s step-size rule for negative curvature. Math Program 1977; 13(1): 111–115

[43]

Megiddo N. Pathways to the optimal set in linear programming. In: Progress in Mathematical Programming: Interior-point and Related Methods. New York: Springer-Verlag, 1989, 131–158

[44]

Mehrotra S. On the implementation of a primal-dual interior point method. SIAM J Optim 1992; 2(4): 575–601

[45]

Mizuno S, Todd M, Ye Y Y. On adaptive-step primal-dual interior-point algorithms for linear programming. Math Oper Res 1993; 18(4): 964–981

[46]

Monteiro R D C, Adler I, Resende M G C. A polynomial-time primal-dual affine scaling algorithm for linear and convex quadratic programming and its power series extension. Math Oper Res 1990; 15(2): 191–214

[47]

Moré J J, Sorensen D C. On the use of directions of negative curvature in a modified Newton method. Math Program 1979; 16(1): 1–20

[48]

Netlib. Netlib linear programming library, 2016

[49]

Nocedal J, Wright S J. Numerical Optimization. New York: Springer-Verlag, 1999

[50]

Olivares A, Moguerza J M, Prieto F J. Nonconvex optimization using negative curvature within a modified linesearch. European J Oper Res 2008; 189(3): 706–722

[51]

Rapcsák T. Geodesic convexity in nonlinear optimization. J Optim Theory Appl 1991; 69(1): 169–183

[52]

Ring W, Wirth B. Optimization methods on Riemannian manifolds and their application to shape space. SIAM J Optim 2012; 22(2): 596–627

[53]

Smith S T. Geometric optimization methods for adaptive filtering. Ph D Thesis. Cambridge, MA: Harvard University, 1993

[54]

Smith S T. Optimization techniques on Riemannian manifolds. In: Hamiltonian and Gradient Flows, Algorithms and Control (Bloch A ed). Fields Inst Commun, Vol 3. Providence, RI: AMS, 1994, 113–136

[55]

Sun W Y, Zhou Q Y. An unconstrained optimization method using nonmonotone second order Goldstein’s line search. Sci China Math 2007; 50(10): 1389–1400

[56]

Tits A L, Wachter A, Bakhtiari S, Urban T J, Lawrence C T. A primal-dual interior-point method for nonlinear programming with strong global and local convergence properties. SIAM J Optim 2003; 14(1): 173–199

[57]

Udrişte C. Convex Functions and Optimization Methods on Riemannian Manifolds. Mathematics and Its Applications, Vol 297. Dordrecht: Kluwer Acad Publ, 1994

[58]

Vasconcelos J F, Elkaim G, Silvestre C, Oliveira P, Cardeira B. Geometric approach to strapdown magnetometer calibration in sensor frame. IEEE Trans Aerosp Electron Syst 2011; 47(2): 1293–1306

[59]

Wright S J. Primal-dual Interior-point Methods. Philadelphia, PA: SIAM, 1997

[60]

Yang X M, Liu H W, Liu C H. Arc-search interior-point algorithm. J Jilin Univ Sci 2014; 52(4): 693–697

[61]

Yang Y G. Robust system design: pole assignment approaches. Ph D Thesis. College Park, MD: Univ Maryland, 1996

[62]

Yang Y G. Globally convergent optimization algorithms on Riemannian manifolds: uniform framework for unconstrained and constrained optimization. J Optim Theory Appl 2007; 132(2): 245–265

[63]

Yang Y G. Arc-search path-following interior-point algorithms for linear programming. Optimization Online, 2009

[64]

Yang Y G. A polynomial arc-search interior-point algorithm for convex quadratic programming. European J Oper Res 2011; 215(1): 25–38

[65]

Yang Y G. A polynomial arc-search interior-point algorithm for linear programming. J Optim Theory Appl 2013; 158(3): 859–873

[66]

Yang Y G. A globally and quadratically convergent algorithm with efficient implementation for unconstrained optimization. Comput Appl Math 2015; 34(3): 1219–1236

[67]

Yang Y G. Attitude determination using Newton’s method on Riemannian manifold. Proc IMechE Part G. J Aerosp Eng 2015; 229(14): 2737–2742

[68]

Yang Y G. Curve LP—a MATLAB implementation of an infeasible interior-point algorithm for linear programming. Numer Algorithms, 2016, doi: 10.1007/s11075-016-0180-1

[69]

Yang Y G. Uniform framework for unconstrained and constrained optimization: optimization on Riemannian manifolds. In: 2010 International Conference on E-Product E-Service and E-Entertainment. Piscataway, NJ: IEEE, 2010 (in Chinese)

[70]

Ye Y Y. Interior-point Algorithms: Theory and Analysis. New York: John Wiley & Sons, 1997

[71]

Zhang J J, Cao J, Wang Y Y. Gradient algorithm on Stiefel manifold and application in feature extraction. J Radars 2013; 2(3): 309–313

[72]

Zhou Q Y, Sun W Y. A nonmonotone second-order steplength method for unconstrained minimization. J Comput Math 2007; 25(1): 104–112

RIGHTS & PERMISSIONS

Higher Education Press 2023
