A discussion of objective function representation methods in global optimization

Panos M. PARDALOS, Mahdi FATHI

Front. Eng, 2018, 5(4): 515–523. DOI: 10.15302/J-FEM-2018044

RESEARCH ARTICLE

Abstract

Non-convex optimization problems arise in several smart manufacturing systems. This paper presents a short review of global optimization (GO) methods. We examine decomposition techniques and classify GO problems on the basis of objective function representation and decomposition techniques. We then explain Kolmogorov's superposition and its application in GO. Finally, we conclude the paper by exploring the importance of objective function representation in integrated artificial intelligence, optimization, and decision support systems in smart manufacturing and Industry 4.0.

Keywords

global optimization / decomposition techniques / multi-objective / DC programming / Kolmogorov’s superposition / space-filling curve / smart manufacturing and Industry 4.0

Cite this article

Panos M. PARDALOS, Mahdi FATHI. A discussion of objective function representation methods in global optimization. Front. Eng, 2018, 5(4): 515–523. DOI: 10.15302/J-FEM-2018044


Global optimization (GO) methods

Global non-convex programs can be solved using several approaches according to recent advances in the GO literature (Pardalos and Rosen, 1986; Pardalos, 1991; Bomze et al., 1997; Pardalos and Wolkowicz, 1998; Horst et al., 2000; Nowak, 2005; Floudas and Pardalos, 2013; Horst and Pardalos, 2013; Floudas and Pardalos, 2014). These approaches can be divided into exact methods, which find and verify global solutions, and heuristic methods, which seek global solutions without checking optimality. Heuristics play a critical role in the optimization of large-scale non-convex problems: they can provide upper bounds for the global optimum, generate cuts and relaxations, and partition feasible sets. Approximation algorithms are heuristics that come with a performance guarantee, i.e., a bound on the estimation error (Fisher, 1980; Hochbaum et al., 1999; Ausiello et al., 2012; Vazirani, 2013). MIP approximation techniques approximate univariate functions by piecewise linear functions with a performance guarantee for MINLP methods. Goemans and Williamson (1995) gave the first such approximation algorithm by solving a quadratic binary program, MaxCut, via semidefinite relaxation.

In GO, an algorithm is called finite if it obtains and verifies a global solution in a finite number of steps. Exact methods are finite in finding and verifying solutions. Moreover, simplex, active set, and enumeration methods are finite for solving LPs, convex QPs, and bounded integer or concave problems, respectively. However, interior point methods and solution methods for SQP applied to nonlinear convex programs are not finite.

All GO methods build a crude model of the program for finding global solutions. A GO method is called a sampling heuristic if it uses a crude model based on a finite set of points. The regions of interest considered by sampling heuristics are bounded sets. The distribution of points in such a region is usually dense and should incorporate random behavior so that all possible solutions can be reached. Because the number of possible random samples in a continuous feasible region is infinite, a global solution is not guaranteed; however, one can often prove that the method converges with probability arbitrarily close to 1. A GO method is called a relaxation-based method if it uses a relaxation as the crude model, i.e., a mathematical model that is easier to solve than the original problem. The choice of crude model influences how the problem should be described: an aggregated formulation (few variables, a simple feasible set) suits sampling heuristics, whereas a disaggregated formulation (objective functions and constraints that can be relaxed) suits relaxation-based methods.

Relaxation-based methods fall into three classes. Branch-and-bound methods divide the GO problem into subproblems by partitioning the feasible set. Successive relaxation methods successively improve an initial relaxation without dividing it into subproblems. Relaxation-based heuristics retrieve potential solutions from a given relaxation without modifying the relaxation.

MINLP solver technology should be developed further; additional details on GO (Pardalos and Rosen, 1987; Pintér, 1996; Horst et al., 2000; Neumaier, 2004; Schichl, 2010; Horst and Pardalos, 2013; Horst and Tuy, 2013), MINLP methods (Floudas et al., 1989; Grossmann and Kravanja, 1997; Grossmann, 2002; Tawarmalani and Sahinidis, 2002; Floudas, 2013), and sampling heuristics (Torn and Zilinskas, 1989; Boender and Romeijn, 1995; Strongin and Sergeyev, 2000) can be found in the cited references. In summary, GO methods can be classified as follows:

Sampling heuristics (a minimal multistart sketch follows this classification): 1. Multistart (Strongin and Sergeyev, 2000), 2. Clustering method (Becker and Lago, 1970; Dixon and Szegö, 1974; Torn and Zilinskas, 1989), 3. Evolutionary algorithm (Forrest, 1993), 4. Simulated annealing (Metropolis et al., 1953; Kirkpatrick et al., 1983; Locatelli, 2002), 5. Tabu search (Glover and Laguna, 1997; Mart et al., 2018), 6. Statistical GO (Mockus, 2012), 7. Greedy randomized adaptive search procedure (Resende and Ribeiro, 2003; Hirsch et al., 2007)

Branch-and-bound methods: 1. Branch-and-bound (Smith and Pantelides, 1996; Vaidyanathan and El-Halwagi, 1996; Smith and Pantelides, 1999; Horst and Tuy, 2013), 2. Branch-and-cut (Padberg and Rinaldi, 1991), 3. Branch-and-reduce (Sahinidis, 1996), 4. Branch-and-price, 5. Branch-cut-and-price, 6. Branch-and-infer (Van Hentenryck et al., 1997; Bliek, 1998; Boddy and Johnson, 2002; Sellmann and Fahle, 2003; Hooker, 2011)

Successive approximation methods: 1. Extended cutting-plane method (Westerlund and Pettersson, 1994, 1995; Westerlund et al., 2001), 2. Generalized Benders decomposition (Geoffrion, 1972; Floudas et al., 1989; Paules and Floudas, 1989), 3. Outer approximation (Duran and Grossmann, 1986; Kocis and Grossmann, 1987; Viswanathan and Grossmann, 1990; Fletcher and Leyffer, 1994; Zamora and Grossmann, 1998a, 1998b; Grossmann, 2002; Kesavan et al., 2004), 4. Logic-based approach (Türkay and Grossmann, 1996; Vecchietti and Grossmann, 1999), 5. Generalized cross decomposition (Holmberg, 1990), 6. Successive semidefinite relaxation (Lasserre, 2001; Henrion and Lasserre, 2002; Kojima et al., 2003), 7. Lagrangian and domain cut method (Li et al., 2009)

Relaxation-based heuristics: 1. Rounding heuristics (Mawengkang and Murtagh, 1986; Goemans and Williamson, 1995; Burkard et al., 1997; Zwick, 1999), 2. Lagrangian heuristics (Holmberg and Ling, 1997; Nowak and Römisch, 2000), 3. Deformation heuristics (Moré and Wu, 1997; Schelstraete et al., 1999; Alperin and Nowak, 2005), 4. MIP approximation (Neumaier, 2004), 5. Successive linear programming (Palacios-Gomez et al., 1982)
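
To make the first class concrete, the following is a minimal multistart sketch: random starting points are drawn from a bounded box and a local solver is run from each, keeping the best local optimum found. The Rastrigin test function, the box bounds, the number of starts, and the local solver are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal multistart sketch; test function and parameters are assumptions.
def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def multistart(f, lo, hi, n_starts=50, seed=0):
    rng = np.random.default_rng(seed)
    best_x, best_val = None, np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)                  # random point in the box
        res = minimize(f, x0, method="L-BFGS-B",
                       bounds=list(zip(lo, hi)))  # local descent
        if res.fun < best_val:                    # keep the best local optimum
            best_x, best_val = res.x, res.fun
    return best_x, best_val

lo, hi = np.full(2, -5.12), np.full(2, 5.12)
print(multistart(rastrigin, lo, hi))              # global optimum is at 0
```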

Decomposition theory

Large-scale problems can be solved by splitting them into subproblems that are coupled through a master problem and solved either in parallel or in sequence. Dantzig–Wolfe decomposition employs separability to decompose a GO problem into subproblems that can be optimized in parallel; it is one of the first decomposition approaches for linear programming (Dantzig and Wolfe, 1960). The method treats the dual problem as a master problem, which coordinates the solutions and iteratively modifies the subproblems. Dantzig–Wolfe decomposition was later extended to nonlinear convex problems, in which the Lagrangian dual is solved by the cutting plane method. Details regarding decomposition methods for convex and non-convex GO problems can be found in (Kelly et al., 1998; Bertsekas, 1999; Horst et al., 2000; Babayev and Bell, 2001; Svanberg, 2002; Palomar and Chiang, 2006; Zhang and Wang, 2006; Boyd et al., 2007; Chiang et al., 2007; Zheng et al., 2013; Rockafellar, 2016; Rahmaniani et al., 2017; Nowak et al., 2018). In general, decomposition techniques can be classified into primal and dual decomposition methods.

Primal decomposition

Consider the following program:

$$\max_{y,\,x_i} \sum_i f_i(x_i) \quad \text{subject to: } x_i \in X_i, \tag{1}$$

where $\sum_i A_i x_i \le y$ and $y \in Y$. Primal decomposition can be applied whenever the coupling variable $y$ is fixed to a given value; the GO problem then decouples into one subproblem for each $i$:

$$\max_{x_i} f_i(x_i) \quad \text{subject to: } x_i \in X_i,\ A_i x_i \le y. \tag{2}$$

The master problem updates the coupling variable by solving:

$$\max_{y} \sum_i f_i^*(y) \quad \text{subject to: } y \in Y, \tag{3}$$

where $f_i^*(y)$ is the optimal objective value of (2). Problems (2) and (3) are convex optimization problems if Problem (1) is convex, and Problem (3) can be solved by a (sub)gradient method. The optimal Lagrange multiplier $\lambda_i^*(y)$ of the coupling constraint in (2) yields a subgradient of $f_i^*(y)$, namely $s_i(y) = \lambda_i^*(y)$, and the master problem (3) is solved over $y$ using the global subgradient $s(y) = \sum_i s_i(y) = \sum_i \lambda_i^*(y)$.
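
The following is a minimal primal-decomposition sketch for an illustrative resource-allocation instance of (1)-(3) with $f_i(x_i) = -(x_i - c_i)^2$, coupling constraints $x_i \le y_i$, and $Y = \{y \ge 0 : \sum_i y_i \le B\}$; all instance data are assumptions. Each subproblem (2) solves in closed form, its multiplier supplies $s_i(y)$, and the master (3) runs projected subgradient ascent on $y$.

```python
import numpy as np

# Primal decomposition sketch:
#   max sum_i -(x_i - c_i)^2  s.t.  x_i <= y_i,  y >= 0,  sum_i y_i <= B.
c = np.array([3.0, 5.0, 2.0])   # subproblem targets (assumed)
B = 6.0                          # coupling budget defining Y (assumed)

def solve_subproblem(y_i, c_i):
    """max -(x - c_i)^2 s.t. x <= y_i; returns (x*, lambda*)."""
    if y_i >= c_i:
        return c_i, 0.0                   # coupling constraint inactive
    return y_i, 2.0 * (c_i - y_i)         # active: lambda* = 2(c_i - y_i)

def project_onto_Y(y, B):
    """Euclidean projection onto Y = {y >= 0, sum(y) <= B} (sort-based)."""
    y = np.maximum(y, 0.0)
    if y.sum() <= B:
        return y
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    rho = np.max(np.where(u - (css - B) / (np.arange(len(y)) + 1) > 0)[0])
    return np.maximum(y - (css[rho] - B) / (rho + 1), 0.0)

y = np.full(3, B / 3)                     # feasible initial allocation
for k in range(200):
    s = np.array([solve_subproblem(yi, ci)[1] for yi, ci in zip(y, c)])
    y = project_onto_Y(y + s / (k + 1), B)  # master subgradient step on y

print("y* ≈", np.round(y, 3))             # converges to (5/3, 11/3, 2/3)
```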

Dual decomposition

Dual decomposition is suitable when a coupling constraint exists and can be relaxed. Consider the GO problem:

$$\max_{x_i} \sum_i f_i(x_i) \quad \text{subject to: } x_i \in X_i,\ \sum_i h_i(x_i) \le c. \tag{4}$$

Applying Lagrangian relaxation to the coupling constraint in Problem (4) yields:

$$\max_{x_i} \sum_i f_i(x_i) - \lambda^T\Big(\sum_i h_i(x_i) - c\Big) \quad \text{subject to: } x_i \in X_i\ \forall i. \tag{5}$$

Problem (5) decouples into a Lagrangian subproblem for each $i$:

$$\max_{x_i} f_i(x_i) - \lambda^T h_i(x_i) \quad \text{subject to: } x_i \in X_i. \tag{6}$$

The dual variables are updated from the master dual problem as follows:

$$\min_{\lambda} \sum_i g_i(\lambda) + \lambda^T c \quad \text{subject to: } \lambda \ge 0, \tag{7}$$

where $g_i(\lambda)$ is the dual function, i.e., the maximum value of the Lagrangian in Problem (6) for a given $\lambda$. A (sub)gradient method can solve Problem (7): the subgradient of each $g_i(\lambda)$ is $s_i(\lambda) = -h_i(x_i^*(\lambda))$, where $x_i^*(\lambda)$ is the optimal solution of Problem (6) for a given $\lambda$, and the global subgradient is $s(\lambda) = \sum_i s_i(\lambda) + c = c - \sum_i h_i(x_i^*(\lambda))$. Problem (6) can be solved independently and locally given knowledge of $\lambda$.
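
For comparison with the primal sketch above, the following is a minimal dual-decomposition sketch on the same illustrative instance, with $h_i(x_i) = x_i$ and coupling $\sum_i x_i \le C$ (all instance data are assumptions). Subproblem (6) solves in closed form, and the master (7) runs a projected subgradient method on $\lambda \ge 0$.

```python
import numpy as np

# Dual decomposition sketch: f_i(x) = -(x - c_i)^2, h_i(x) = x.
c = np.array([3.0, 5.0, 2.0])   # targets (assumed)
u, C = 10.0, 6.0                 # box X_i = [0, u] and budget (assumed)

def subproblems(lam):
    """argmax of f_i(x) - lam * h_i(x) over [0, u], in closed form."""
    return np.clip(c - lam / 2.0, 0.0, u)

lam = 0.0
for k in range(500):
    x = subproblems(lam)                 # decoupled subproblems (6)
    s = C - x.sum()                      # subgradient of the dual (7)
    lam = max(0.0, lam - s / (k + 1))    # projected subgradient step

print("lambda* ≈", round(lam, 3), "x* ≈", np.round(subproblems(lam), 3))
# lambda* -> 8/3 and x* -> (5/3, 11/3, 2/3), matching the primal sketch.
```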

Objective function representation based on decomposition methods

Separable optimization

The choice of decomposition (of objective function) influences the choice of the algorithm for solving the corresponding mathematical program.

Definition 1: Separable optimization Problem (Horst et al., 2000)

$$\min_{x \in \mathbb{R}^n} F_0(x) \quad \text{subject to: } F_i(x) \le b_i,\ l_i \le x_i \le u_i,\ i = 1,\dots,m,$$

where $F_i(x) = \sum_{j=1}^{n} F_{ij}(x_j)$, $i = 0, 1, \dots, m$.
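
A small sketch of why separability matters: when only the box constraints are present (no coupling constraints $F_i(x) \le b_i$), minimizing $F_0(x) = \sum_j F_{0j}(x_j)$ splits into $n$ independent one-dimensional problems. The component functions and bounds below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Coordinate-wise minimization of a box-constrained separable objective.
F0 = [lambda t: (t - 1.0) ** 2,
      lambda t: np.cos(t) + 0.1 * t ** 2,
      lambda t: abs(t - 2.0)]
box = [(-3.0, 3.0)] * 3

x_star = [minimize_scalar(fj, bounds=bj, method="bounded").x
          for fj, bj in zip(F0, box)]
print("coordinate-wise minimizer:", np.round(x_star, 3))
```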

Factorable optimization

McCormick (1974, 1976, 1983) introduced factorable programming. A factorable program takes the following form:

$$\min_{x \in \mathbb{R}^n} X_L(x) \quad \text{subject to: } l_i \le X_i(x) \le u_i,\ i = 1,\dots,L-1,$$

where $X_i : \mathbb{R}^n \to \mathbb{R}$ with $X_i(x) = x_i$ for $i = 1,\dots,n$ and, in terms of the preceding functions $X_p(x)$, $p = 1,\dots,i-1$, each subsequent function $X_i$ is

$$X_i(x) = \sum_{p=1}^{i-1} T_p^i\big(X_p(x)\big) + \sum_{p=1}^{i-1} \sum_{q=1}^{p} V_{q,p}^i\big(X_p(x)\big) \cdot U_{p,q}\big(X_q(x)\big),$$

where the $T$'s, $U$'s, and $V$'s are transformation functions of a single variable, and the lower and upper bounds $l_i \le u_i$ are given constants. The functions $X_i(x)$, $i = 1,\dots,L$, can be written as factorable functions. McCormick (1974) developed a factorable programming language integrated with SUMT (Mylander et al., 1971) for NLPs. The functions $X_i(x)$, $i = 1,\dots,L$, are called concomitant variable functions (cvfs); the cvfs include separable and quadratic terms.

Almost block separable optimization

The following problem is considered:

$$\min_{x \in \mathbb{R}^n} f(x) = f_1(u, y) + f_2(v, y),$$

where $x = (u, v, y) \in \mathbb{R}^n$ with $u \in \mathbb{R}^{n_1}$, $v \in \mathbb{R}^{n_2}$, $y \in \mathbb{R}^{n_3}$, and $n_1 + n_2 + n_3 = n$; the variables $y$ are called complicating variables (usually $n_1, n_2 \gg n_3$).

Let $\varphi_1(y) = \min_u f_1(u, y)$ and $\varphi_2(y) = \min_v f_2(v, y)$. The problem is equivalent to:

$$\min_{y} \varphi_1(y) + \varphi_2(y).$$

If $f_1$ and $f_2$ are convex, then $\varphi_1(y)$ and $\varphi_2(y)$ are convex.
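
A minimal sketch of this reduction: for each trial value of the complicating variable $y$, $\varphi_1(y)$ and $\varphi_2(y)$ are evaluated by inner minimizations over $u$ and $v$, and the outer problem minimizes $\varphi_1 + \varphi_2$ over the low-dimensional $y$. The quadratic test functions and dimensions are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def f1(u, y):
    return np.sum((u - y) ** 2) + y ** 2          # phi_1(y) = y^2

def f2(v, y):
    return np.sum((v + y) ** 2) + (y - 1.0) ** 2  # phi_2(y) = (y - 1)^2

def phi(y, f, dim):
    # Inner minimization over the block variable for fixed y.
    return minimize(lambda w: f(w, y), np.zeros(dim)).fun

res = minimize_scalar(lambda y: phi(y, f1, 4) + phi(y, f2, 5),
                      bounds=(-5.0, 5.0), method="bounded")
print("y* ≈", round(res.x, 3), "value ≈", round(res.fun, 3))  # 0.5, 0.5
```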

DC optimization problems

Continuous DC programming

DC (difference of convex) programming is a special class of non-convex programs. DC functions and DC programs are defined as follows:

Definition 2: DC function (Horst et al., 2000; Wu et al., 2018)

A real-valued function $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ of the form

$$f(x) = g(x) - h(x), \quad x \in \mathbb{R}^n,$$

where $g, h : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ are convex functions, is called a DC function.

Definition 3: DC program (Horst et al., 2000; Wu et al., 2018)

The following model is called a DC program

$$\min f_0(x) \quad \text{subject to: } f_i(x) \le 0,\ i = 1, 2, \dots, n,$$

if the $f_i(x)$ are DC functions ($i = 0, 1, 2, \dots, n$). It is equivalent to the following unconstrained DC program:

$$\inf_{x \in \mathbb{R}^n} f(x) = g(x) - h(x).$$

Hartman Theorem 1. The following DC programs are equivalent:

$$\sup\{f(x) : x \in C\}, \quad f, C \text{ convex};$$
$$\inf\{g(x) - h(x) : x \in \mathbb{R}^n\}, \quad g, h \text{ convex};$$
$$\inf\{g(x) - h(x) : x \in C,\ f_1(x) - f_2(x) \le 0\}, \quad g, h, f_1, f_2, C \text{ all convex}.$$

Hartman Theorem 2. A function $f$ is locally DC if there exists an $\varepsilon$-ball on which it is DC; every function that is locally DC is DC. Let $f_i$ be DC functions for $i = 1,\dots,m$. Then $\sum_i \lambda_i f_i(x)$ (for $\lambda_i \in \mathbb{R}$), $\max_i f_i(x)$, $\min_i f_i(x)$, $\prod_i f_i(x)$, and $|f_i(x)|$ are DC, and every twice continuously differentiable function is DC. Moreover, $g \circ f$ is DC if $f$ is DC and $g$ is convex, and every continuous function on a convex set $C$ is the limit of a sequence of uniformly converging DC functions.

Definition 4: Subgradient of convex function (Horst et al., 2000; Wu et al., 2018)

A vector $x^*$ is a subgradient of a convex function $h$ at a point $x$ if $h(z) \ge h(x) + \langle x^*, z - x \rangle$ for all $z$, where $\langle x, y \rangle = \sum_{i=1}^n x_i y_i$ is the inner product of two vectors of the same dimension. The subdifferential of $h$ at $x$, denoted $\partial h(x)$, is the set of all such subgradients.

Definition 5: Conjugate functions (Horst et al., 2000; Wu et al., 2018)

The conjugate function $h^* : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ of a convex function $h : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is:

$$h^*(p) := \sup_{x \in \mathbb{R}^n} \{\langle p, x \rangle - h(x)\}.$$

Theorem 3: The conjugate function $h^*(p)$ of $h(x)$ is convex. If $h(x)$ is a closed proper convex function, then the biconjugate of $h$ is $h$ itself, that is, $h^{**} = h$.

Theorem 4 (Toland–Singer duality): Given closed convex functions $g, h : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$, then:

$$\inf_{x \in \mathbb{R}^n} \{g(x) - h(x)\} = \inf_{p \in \mathbb{R}^n} \{h^*(p) - g^*(p)\}.$$

Definition 6: DC algorithm (Horst et al., 2000; Wu et al., 2018)

The DC algorithm (DCA) is used to obtain a local optimal solution of the DC program: starting from $x^0$, it alternately takes $y^k \in \partial h(x^k)$ and $x^{k+1} \in \operatorname{argmin}_x \{g(x) - \langle y^k, x \rangle\}$.
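
A minimal sketch of this iteration on an illustrative DC function $f(x) = \|x\|^4 - 2\|x\|^2$, with $g(x) = \|x\|^4$ and $h(x) = 2\|x\|^2$ both convex; the choice of $g$, $h$, and the starting point are assumptions for illustration. Since $h$ is smooth, $y^k = \nabla h(x^k)$, and the convex subproblem is solved numerically.

```python
import numpy as np
from scipy.optimize import minimize

def g(x):
    return np.sum(x ** 2) ** 2            # convex part kept

def grad_h(x):
    return 4.0 * x                        # h(x) = 2 ||x||^2

x = np.array([2.0, -1.5])                 # arbitrary starting point
for _ in range(100):
    y = grad_h(x)                         # linearize the subtracted part
    x_new = minimize(lambda z: g(z) - y @ z, x).x   # convex subproblem
    if np.linalg.norm(x_new - x) < 1e-8:
        break
    x = x_new

print("stationary point:", np.round(x, 4),
      "norm:", round(np.linalg.norm(x), 4))
# The iterates approach the non-convex minimizers on the unit sphere.
```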

Continuous relaxations for discrete DC programming

The positive support of $x \in \mathbb{Z}^n$ is defined as $\mathrm{supp}^+(x) := \{i \in \{1, 2, \dots, n\} : x_i > 0\}$.

The indicator vector $\chi_S$ of a set $S$ is defined by:

$$\chi_S(i) = \begin{cases} 1 & i \in S \\ 0 & i \notin S \end{cases}.$$

M-convex and L-convex functions are two common classes of discrete functions:

1) M-convex functions: $h : \mathbb{Z}^n \to \mathbb{R} \cup \{+\infty\}$ is M-convex if, for all $x, y \in \mathbb{Z}^n$ and all $i \in \mathrm{supp}^+(x - y)$, it satisfies:

$$h(x) + h(y) \ge \min_{j \in \mathrm{supp}^+(y - x)} \big\{ h(x - \chi_i + \chi_j) + h(y + \chi_i - \chi_j) \big\}.$$

2) L-convex functions: $h : \mathbb{Z}^n \to \mathbb{R} \cup \{+\infty\}$ is L-convex if, for all $x, y \in \mathbb{Z}^n$, it satisfies:

$$h(x) + h(y) \ge h\!\left(\left\lceil \frac{x + y}{2} \right\rceil\right) + h\!\left(\left\lfloor \frac{x + y}{2} \right\rfloor\right).$$

Consider the following discrete DC program:

$$\inf\ f(x) = g(x) - h(x) \quad \text{subject to: } x \in \mathbb{Z}^n.$$

Four kinds of discrete DC programs (M-L, M-M, L-L, and L-M) can be defined on the basis of the M- and L-convex function definitions; the first three are NP-hard, and the last one on $\{0,1\}^n$ is in P (Kobayashi, 2014; Maehara et al., 2018).

We assume functions $g, h : \mathbb{Z}^n \to \mathbb{R} \cup \{+\infty\}$. The effective domain of $g$ is $\mathrm{dom}_{\mathbb{Z}}\, g := \{x \in \mathbb{Z}^n : g(x) < +\infty\}$.

The convex closure $\bar{g} : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ of $g$ is:

$$\bar{g}(x) = \sup\{ s(x) : s \text{ is an affine function},\ s(y) \le g(y)\ \forall y \in \mathbb{Z}^n \}.$$

A convex extension $\hat{g} : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ of $g$ is a convex function that takes the same values as $g$ on $x \in \mathbb{Z}^n$.

We define $\tilde{f}(x) := \bar{g}(x) - \hat{h}(x)$. Then $\tilde{f}(x) = g(x) - h(x)$ for $x \in \mathbb{Z}^n$. Thus:

$$\inf_{x \in \mathbb{Z}^n} \{g(x) - h(x)\} = \inf_{x \in \mathbb{Z}^n} \tilde{f}(x) \ge \inf_{x \in \mathbb{R}^n} \tilde{f}(x).$$

Theorem 5: For convex extensible functions $g, h : \mathbb{Z}^n \to \mathbb{R} \cup \{+\infty\}$ with $\mathrm{dom}_{\mathbb{Z}}\, g$ bounded and $\mathrm{dom}_{\mathbb{Z}}\, g \subseteq \mathrm{dom}_{\mathbb{Z}}\, h$:

$$\inf_{z \in \mathbb{Z}^n} \{g(z) - h(z)\} = \inf_{x \in \mathbb{R}^n} \{\bar{g}(x) - \hat{h}(x)\},$$

where $\bar{g}(x)$ is the convex closure of $g(x)$, and $\hat{h}(x)$ is any convex extension of $h(x)$.

By Theorem 5, the discrete DC program above is equivalent to the corresponding continuous relaxation DC program.

DI optimization problems

Total and partial monotonicity refer to monotonicity with respect to all or some of the variables, and arise in many GO applications. The d.i. (difference of increasing) monotonic optimization problem with increasing functions on $\mathbb{R}^n_+$ can be described in general as:

$$\min\ f(x) - g(x) \quad \text{subject to: } f_i(x) - g_i(x) \le 0,\ i = 1,\dots,m.$$

Let $g(x) = 0$; then

$$\forall i,\ f_i(x) - g_i(x) \le 0 \iff \max_{1 \le i \le m} \{f_i(x) - g_i(x)\} \le 0 \iff F(x) - G(x) \le 0$$

with increasing $F$ and $G$, where $F(x) = \max_i \{f_i(x) + \sum_{j \ne i} g_j(x)\}$ and $G(x) = \sum_i g_i(x)$.

Then, the problem is reduced to:

$$\min f(x) \quad \text{subject to: } F(x) + t \le F(b),\ G(x) + t \ge F(b),\ 0 \le t \le F(b) - F(0),\ x \in [0, b] \subset \mathbb{R}^n_+.$$
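
A numeric check of the identity behind this reduction: with $F$ and $G$ defined as above, $F(x) - G(x) = \max_i \{f_i(x) - g_i(x)\}$, so the d.i. constraints collapse into one difference of increasing functions. The increasing test functions on $\mathbb{R}^2_+$ are illustrative assumptions.

```python
import numpy as np

fs = [lambda x: x[0] + 2 * x[1], lambda x: x[0] ** 2 + x[1]]
gs = [lambda x: 3 * x[0],        lambda x: x[0] + x[1] ** 2]

def F(x):
    gv = np.array([g(x) for g in gs])
    # f_i(x) + sum_{j != i} g_j(x), maximized over i
    return max(f(x) + gv.sum() - gv[i] for i, f in enumerate(fs))

def G(x):
    return sum(g(x) for g in gs)

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.uniform(0.0, 2.0, size=2)
    assert abs((F(x) - G(x)) - max(f(x) - g(x) for f, g in zip(fs, gs))) < 1e-9
print("F - G matches max_i (f_i - g_i) on 1000 random points")
```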

A set $G \subset \mathbb{R}^n_+$ is called normal if, for any $x, x'$ with $x' \le x$, $x \in G$ implies $x' \in G$.

Many GO problems, including polynomial, multiplicative, and Lipschitz optimization problems and non-convex quadratic programming, can be cast as monotonic optimization problems.

Decomposition and multi-objective optimization

We consider the following problems:

$$\text{P1:} \quad \min_{x \in D \subset \mathbb{R}^n} F(x) = f_1(x) + \dots + f_k(x)$$

$$\text{P2:} \quad \min_{x \in D \subset \mathbb{R}^n} f(x) = (f_1(x), \dots, f_k(x))$$

In many GO problems, the objective function $F(x)$ can be represented as the sum of $k$ relatively simple functions, $F(x) = f_1(x) + f_2(x) + \dots + f_k(x)$. P2 is a multi-objective optimization problem. Let $E(f, D) \subseteq D$ be the set of all Pareto optimal solutions in $D$. The following theorems relate the optimal solutions of P1 to the Pareto frontier of P2.

Theorem 6: If $x^*$ is an optimal solution of P1, then $x^* \in E(f, D)$ of P2.

Theorem 7: Let $h_i(t)$ be a monotonically increasing function for $i = 1,\dots,k$. Consider the multi-objective optimization problem $\min_{x \in D \subset \mathbb{R}^n} h(x) = (h_1(f_1(x)), \dots, h_k(f_k(x)))$. Then $E(f, D) = E(h, D)$ (Miettinen, 1999; Chinchuluun and Pardalos, 2007; Pardalos et al., 2008; Du and Pardalos, 2013; Migdalas et al., 2013; Pardalos et al., 2017).

Theorems 6 and 7 show that the extended Pareto optimal frontier set $E(h, D)$ can be obtained by solving P2, after which the optimal $x^*$ of P1 can be found by searching over $E(h, D)$.
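
A small numeric illustration of Theorem 6 on a grid: the minimizer of $F = f_1 + f_2$ over a discretized feasible set $D$ is Pareto optimal for $(f_1, f_2)$. The two objectives and the grid are illustrative assumptions.

```python
import numpy as np

f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2

D = np.linspace(-2.0, 2.0, 401)              # discretized feasible set
vals = np.stack([f1(D), f2(D)], axis=1)

def is_pareto(i):
    """True if no other grid point dominates point i."""
    dom = np.all(vals <= vals[i], axis=1) & np.any(vals < vals[i], axis=1)
    return not dom.any()

i_star = int(np.argmin(vals.sum(axis=1)))    # optimal solution of P1
print("x* =", D[i_star], "Pareto optimal:", is_pareto(i_star))  # 0.0, True
```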

P2 can be a many-objective optimization problem (MaOP). The algorithms for solving MaOPs can be classified as: 1. algorithm adaptation methods, which modify or extend classical EMO algorithms for solving MaOPs, including preference-based MOEAs (PICEA, PBEA), Pareto-based MOEAs (NSGA-II, SPEA2), indicator-based MOEAs (HypE, SMS-EMOA), and decomposition-based MOEAs (MOEA/D, M2M); and 2. problem transformation methods, which transform the MaOP into a problem with fewer objectives, including objective selection (δ-MOSS, k-EMOSS, L-PCA) and objective extraction (Gu, 2016). Refer to Gu (2016) and Mane and Rao (2017) for a review of solution algorithms and real-world applications of MaOPs, such as flight control systems, engineering design, data mining, nurse scheduling, car controller optimization, and water supply portfolio planning.

MOEA/D is one of the most widely used methods for solving P2. Its goals can be categorized as: 1) convergence, to detect solutions close to the Pareto frontier; 2) diversity, to determine well-distributed solutions; and 3) coverage, to cover the entire Pareto frontier. Several MOEAs addressing these goals are found in the literature and can be broadly grouped into three categories, namely, 1) domination-, 2) indicator-, and 3) decomposition-based frameworks (Ehrgott and Gandibleux, 2000; Trivedi et al., 2017).

In the MOEA/D literature, three decomposition methods are commonly used: the weighted sum (WS), the weighted Tchebycheff (TCH), and the penalty-based boundary intersection (PBI) approaches.

The ith subproblem of the WS approach is given as:

$$\min\ g^{ws}(x \mid \lambda^i) = \sum_{j=1}^{m} \lambda_j^i f_j(x).$$

This approach is efficient for finding Pareto optimal solutions when the Pareto frontier is convex (for minimization problems).

The ith subproblem of the TCH approach is defined as follows:

$$\min\ g^{te}(x \mid \lambda^i, z^*) = \max_{1 \le j \le m} \{\lambda_j^i \,|f_j(x) - z_j^*|\},$$

where $z^* = (z_1^*, \dots, z_m^*)^T$ is the ideal reference point with $z_j^* < \min\{f_j(x) \mid x \in \Omega\}$ for $j = 1, 2, \dots, m$.

The ith subproblem of the PBI approach is defined as follows:

$$\min\ g^{pbi}(x \mid \lambda^i, z^*) = d_1 + \theta d_2,$$

where $d_1 = \frac{\|(F(x) - z^*)^T \lambda^i\|}{\|\lambda^i\|}$ and $d_2 = \left\| F(x) - \left(z^* + d_1 \frac{\lambda^i}{\|\lambda^i\|}\right) \right\|$; $z^*$ is the ideal reference point defined for the TCH approach, and $\theta$ is a penalty parameter that should be tuned properly.
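
Minimal sketches of the three scalarizing functions above; the objective vector $F(x)$, weight $\lambda^i$, reference point $z^*$, and $\theta$ are illustrative assumptions.

```python
import numpy as np

def g_ws(F_x, lam):
    return float(lam @ F_x)                            # weighted sum

def g_tch(F_x, lam, z_star):
    return float(np.max(lam * np.abs(F_x - z_star)))   # weighted Tchebycheff

def g_pbi(F_x, lam, z_star, theta=5.0):
    lam_unit = lam / np.linalg.norm(lam)
    d1 = float((F_x - z_star) @ lam_unit)              # distance along lambda
    d2 = float(np.linalg.norm(F_x - (z_star + d1 * lam_unit)))  # to the ray
    return d1 + theta * d2

F_x = np.array([0.6, 0.8])
lam = np.array([0.5, 0.5])
z = np.zeros(2)
print(g_ws(F_x, lam), g_tch(F_x, lam, z), g_pbi(F_x, lam, z))
```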

Kolmogorov’s Superposition

Kolmogorov (1956) presented the following theorem, known as Kolmogorov's superposition theorem:

Theorem 8: For any integer $n \ge 2$, there exist continuous real functions $\psi_{pq}(x)$ on the closed unit interval $E^1 = [0, 1]$ such that every continuous real function $f(x_1, \dots, x_n)$ on the $n$-dimensional unit cube $E^n$ can be represented as:

$$f(x_1, \dots, x_n) = \sum_{q=1}^{2n+1} \chi_q\!\left[\sum_{p=1}^{n} \psi_{pq}(x_p)\right], \qquad y_q = \sum_{p=1}^{n} \psi_{pq}(x_p),$$

where the $\chi_q(y)$ are continuous real functions (refer to Arnol'd, 1959, and Tikhomirov, 1991, for a brief proof of the theorem). For $n = 3$, setting $\varphi_q(x_1, x_2) = \psi_{1q}(x_1) + \psi_{2q}(x_2)$ and $h_q(y, x_3) = \chi_q(y) + \psi_{3q}(x_3)$ yields $f(x_1, x_2, x_3) = \sum_{q=1}^{7} h_q[\varphi_q(x_1, x_2), x_3]$.

The application of Kolmogorov's theorem to GO via space-filling curves is an example of efficiently optimizing functions through their projection from $n$ dimensions to one dimension (Goertzel, 1999; Lera and Sergeyev, 2010; Sergeyev et al., 2013). Sprecher (Sprecher and Draghici, 2002; Sprecher, 2013; Sprecher, 2014) explored the link between this theorem and space-filling curves from the viewpoint of computational algorithms for real-valued continuous functions.

Conclusions

This paper reviewed GO and decomposition methods on the basis of objective function representation. Many GO methods are derived from the branch-and-bound method, which can be inefficient for finding good solutions. This paper opens opportunities for additional research on decomposition techniques based on objective function representation, multi-objective optimization, and Kolmogorov's superposition. The development of further parallel decomposition-based GO methods for MINLP based on objective function representation, such as the Decogo solver (Nowak et al., 2018), is a challenging area in MINLP solver development. The Kolmogorov theorem in GO will be discussed in future studies.

Industry 4.0 is regarded as the future of smart manufacturing and the industrial revolution. Making decentralized decisions is critical in Industry 4.0 (Marques et al., 2017). Horizontal and vertical integration are two principal characteristics of Industry 4.0. Decentralized decision support systems are needed for the different types of decisions, including operational, tactical, real-time, and strategic decisions. Many optimization problems in Industry 4.0 are integrated with artificial intelligence, and decision makers (DMs) must make decentralized decisions. This paper helps DMs in Industry 4.0 represent their objective functions with different GO techniques, such as Kolmogorov's superposition and DC programming, so that the resulting subproblems can be solved separately. Finally, Khakifirooz, Pardalos, et al. (2018) and Khakifirooz, Chien, et al. (2018) reported that applications of non-convex optimization in decision support system development for smart manufacturing and Industry 4.0 are a challenging direction for future research.

References

[1]

Alperin H, Nowak I (2005). Lagrangian smoothing heuristics for max-cut. Journal of Heuristics, 11(5–6): 447–463

[2]

Arnol’d V I (1959). On the representation of continuous functions of three variables by superpositions of continuous functions of two variables. Matematicheskii Sbornik, 90(1): 3–74

[3]

Ausiello G, Crescenzi P, Gambosi G, Kann V, Marchetti-Spaccamela A, Protasi M (2012). Complexity and Approximation: Combinatorial Optimization Problems and Their Approximability Properties.Berlin: Springer Science & Business Media

[4]

Babayev D A, Bell G I (2001). An optimization problem with a separable non-convex objective function and a linear constraint. Journal of Heuristics, 7(2): 169–184

[5]

Becker R W, Lago G (1970). A global optimization algorithm. In: Proceedings of the 8th Allerton Conference on Circuits and Systems Theory. 3–12

[6]

Bertsekas D P (1999). Nonlinear Programming.Belmont: Athena Scientific

[7]

Bliek C (1998). Coconut deliverable d1-algorithms for solving nonlinear and constrained optimization problems. The COCONUT Project

[8]

Boddy M S, Johnson D P (2002). A new method for the global solution of large systems of continuous constraints. In: Bliek C, Jermann C, Neumaier A, eds. International Workshop on Global Optimization and Constraint Satisfaction.Berlin: Springer

[9]

Boender C G E, Romeijn H E (1995). Stochastic methods. In: Pardalos P M, Romeijin H E, eds. Handbook of global optimization. Berlin: Springer

[10]

Bomze I M, Csendes T, Horst R, Pardalos P M (1997). Developments in Global Optimization.Berlin: Springer Science & Business Media

[11]

Boyd S, Xiao L, Mutapcic A, Mattingley J (2007). Notes on Decomposition Methods.Stanford: Stanford University

[12]

Burkard R E, Kocher M, Rüdolf R (1997). Rounding strategies for mixed integer programs arising from chemical production planning. Yugoslav Journal of Operations Research

[13]

Chiang M, Low S H, Calderbank A R, Doyle J C (2007). Layering as optimization decomposition: A mathematical theory of network architectures. Proceedings of the IEEE, 95(1): 255–312

[14]

Chinchuluun A, Pardalos P M (2007). A survey of recent developments in multiobjective optimization. Annals of Operations Research, 154(1): 29–50

[15]

Dantzig G B, Wolfe P (1960). Decomposition principle for linear programs. Operations Research, 8(1): 101–111

[16]

Dixon L C W, Szegö G P (1974). Towards global optimisation. In: Proceedings of a workshop at the university of Cagliari, Italy

[17]

Du D Z, Pardalos P M (2013). Handbook of Combinatorial Optimization: Supplement, Vol. 1.Berlin: Springer Science & Business Media

[18]

Duran M A, Grossmann I E (1986). An outer-approximation algorithm for a class of mixed-integer nonlinear programs. Mathematical Programming, 36(3): 307–339

[19]

Ehrgott M, Gandibleux X (2000). A survey and annotated bibliography of multiobjective combinatorial optimization. OR-Spektrum, 22(4): 425–460

[20]

Fisher M L (1980). Worst-case analysis of heuristic algorithms. Management Science, 26(1): 1–17

[21]

Fletcher R, Leyffer S (1994). Solving mixed integer nonlinear programs by outer approximation. Mathematical Programming, 66(1–3): 327–349

[22]

Floudas C, Aggarwal A, Ciric A (1989). Global optimum search for nonconvex nlp and minlp problems. Computers & Chemical Engineering, 13(10): 1117–1132

[23]

Floudas C A (2013). Deterministic Global Optimization: Theory, Methods and Applications, Vol. 37.Berlin: Springer Science & Business Media

[24]

Floudas C A, Pardalos P M (2013). State of the Art in Global Optimization: Computational Methods and Applications, Vol. 7.Berlin: Springer Science & Business Media

[25]

Floudas C A, Pardalos P M (2014). Recent Advances in Global Optimization.Princeton: Princeton University Press

[26]

Forrest S (1993). Genetic algorithms: principles of natural selection applied to computation. Science, 261(5123): 872–878

[27]

Geoffrion A M (1972). Generalized benders decomposition. Journal of Optimization Theory and Applications, 10(4): 237–260

[28]

Glover F, Laguna M (1997). Tabu Search.Berlin: Springer

[29]

Goemans M X, Williamson D P (1995). Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42(6): 1115–1145

[30]

Goertzel B (1999). Global optimization with space-filling curves. Applied Mathematics Letters, 12(8): 133–135

[31]

Grossmann I E (2002). Review of nonlinear mixed-integer and disjunctive programming techniques. Optimization and Engineering, 3(3): 227–252

[32]

Grossmann I E, Kravanja Z (1997). Mixed-integer nonlinear programming: A survey of algorithms and applications. In: Biegler L T, Colleman T F, Conn A R, Samtosa F N, eds. Large-scale Optimization with Applications. Berlin: Springer

[33]

Gu F Q (2016). Many objective optimization: Objective reduction and weight design. Dissertation for the Doctoral Degree. Hong Kong: Hong Kong Baptist University

[34]

Henrion D, Lasserre J B (2002). Solving global optimization problems over polynomials with gloptipoly 2.1. In: Proceedings of International Workshop on Global Optimization and Constraint Satisfaction. Berlin: Springer

[35]

Hirsch M J, Meneses C, Pardalos P M, Resende M G (2007). Global optimization by continuous grasp. Optimization Letters, 1(2): 201–212

[36]

Hochbaum D, Jansen K, Rolim J D, Sinclair A (1999). Randomization, Approximation, and Combinatorial Optimization. Algorithms and Techniques: In: Proceedings of Third International Workshop on Randomization and Approximation Techniques in Computer Science, and Second International Workshop on Approximation Algorithms for Combinatorial Optimization Problems.Berlin: Springer Science & Business Media

[37]

Holmberg K (1990). On the convergence of cross decomposition. Mathematical Programming, 47(1–3): 269–296

[38]

Holmberg K, Ling J (1997). A lagrangean heuristic for the facility location problem with staircase costs. European Journal of Operational Research, 97(1): 63–74

[39]

Hooker J (2011). Logic-Based Methods for Optimization: Combining Optimization and Constraint Satisfaction, Vol. 2.Hoboken: John Wiley & Sons

[40]

Horst R, Pardalos P M (2013). Handbook of Global Optimization, Vol. 2.Berlin: Springer Science & Business Media

[41]

Horst R, Pardalos P M, Van Thoai N (2000). Introduction to Global Optimization.Berlin: Springer Science & Business Media

[42]

Horst R, Tuy H (2013). Global Optimization: Deterministic Approaches.Berlin: Springer Science & Business Media

[43]

Kelly F P, Maulloo A K, Tan D K (1998). Rate control for communication networks: Shadow prices, proportional fairness and stability. Journal of the Operational Research Society, 49(3): 237–252

[44]

Kesavan P, Allgor R J, Gatzke E P, Barton P I (2004). Outer approximation algorithms for separable nonconvex mixed-integer nonlinear programs. Mathematical Programming, 100(3): 517–535

[45]

Khakifirooz M, Chien C F, Pardalos P M (2018). Management suggestions on semiconductor manufacturing engineering: An operations research and data science perspective. Berlin: Springer

[46]

Khakifirooz M, Pardalos P M, Fathi M, Power D J (2018). Decision support for smart manufacturing. Encyclopedia of IST, 5th Edition, IGI Global Book

[47]

Kirkpatrick S, Gelatt C D Jr, Vecchi M P (1983). Optimization by simulated annealing. Science, 220(4598): 671–680

[48]

Kobayashi Y (2014). The complexity of maximizing the difference of two matroid rank functions, METR2014–42. University of Tokyo

[49]

Kocis G R, Grossmann I E (1987). Relaxation strategy for the structural optimization of process flow sheets. Industrial & Engineering Chemistry Research, 26(9): 1869–1880

[50]

Kojima M, Kim S, Waki H (2003). A general framework for convex relaxation of polynomial optimization problems over cones. Journal of the Operations Research Society of Japan, 46(2): 125–144

[51]

Kolmogorov A (1956). On the representation of continuous functions of several variables as superpositions of functions of smaller number of variables.Berlin: Springer

[52]

Lasserre J B (2001). Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3): 796–817

[53]

Lera D, Sergeyev Y D (2010). Lipschitz and Hölder global optimization using space-filling curves. Applied Numerical Mathematics, 60(1–2): 115–129

[54]

Li D, Sun X, Wang J, McKinnon K I (2009). Convergent lagrangian and domain cut method for nonlinear knapsack problems. Computational Optimization and Applications, 42(1): 67–104

[55]

Locatelli M (2002). Simulated annealing algorithms for continuous global optimization. In: Horst R, Pardalos P M, eds. Handbook of global optimization.Berlin: Springer

[56]

Maehara T, Marumo N, Murota K (2018). Continuous relaxation for discrete DC programming. Mathematical Programming, 169(1): 199–219

[57]

Mane S U, Rao M N (2017). Many-objective optimization: Problems and evolutionary algorithms–A short review. International Journal of Applied Engineering Research, 12(20): 9774–9793

[58]

Marques M, Agostinho C, Zacharewicz G, Jardim-Goncalves R (2017). Decentralized decision support for intelligent manufacturing in industry 4.0. Journal of Ambient Intelligence and Smart Environments, 9(3): 299–313

[59]

Martí R, Pardalos P M, Resende M G C (2018). Handbook of Heuristics. Berlin: Springer

[60]

Mawengkang H, Murtagh B (1986). Solving nonlinear integer programs with large-scale optimization software. Annals of Operations Research, 5(2): 425–437

[61]

McCormick G P (1974). A Mini-manual for Use of the Sumt Computer Program and the Factorable Programming Language.Stanford: Stanford University

[62]

McCormick G P (1976). Computability of global solutions to factorable nonconvex programs: Part I: Convex underestimating problems. Mathematical Programming, 10(1): 147–175

[63]

McCormick G P (1983). Nonlinear Programming: Theory, Algorithms, and Applications.New York: Wiley

[64]

Metropolis N, Rosenbluth A W, Rosenbluth M N, Teller A H, Teller E (1953). Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21(6): 1087–1092

[65]

Miettinen K (1999). Nonlinear Multiobjective Optimization. International Series in Operations Research and Management Science.Berlin: Springer

[66]

Migdalas A, Pardalos P M, Värbrand P (2013). Multilevel Optimization: Algorithms and Applications, Vol. 20.Berlin: Springer Science & Business Media

[67]

Mockus J (2012). Bayesian Approach to Global Optimization: Theory and Applications, Vol. 37.Berlin: Springer Science & Business Media

[68]

Moré J J, Wu Z (1997). Global continuation for distance geometry problems. SIAM Journal on Optimization, 7(3): 814–836

[69]

Mylander W C, Holmes R L, McCormick G P (1971). A guide to sumt-version 4: The computer program implementing the sequential unconstrained minimization technique for nonlinear programming (Technical Report RAC-P-63).Mclean: Research Analysis Corporation

[70]

Neumaier A (2004). Complete search in continuous global optimization and constraint satisfaction. Acta Numerica, 13: 271–369

[71]

Nowak I (2005). Relaxation and decomposition methods for mixed integer nonlinear programming, Vol. 152.Berlin: Springer Science & Business Media

[72]

Nowak I, Breitfeld N, Hendrix E M, Njacheun-Njanzoua G (2018). Decomposition-based inner- and outer-refinement algorithms for global optimization. Journal of Global Optimization, (4–5): 1–17

[73]

Nowak M P, Römisch W (2000). Stochastic lagrangian relaxation applied to power scheduling in a hydrothermal system under uncertainty. Annals of Operations Research, 100(1–4): 251–272

[74]

Padberg M, Rinaldi G (1991). A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems. SIAM Review, 33(1): 60–100

[75]

Palacios-Gomez F, Lasdon L, Engquist M (1982). Nonlinear optimization by successive linear programming. Management Science, 28(10): 1106–1120

[76]

Palomar D P, Chiang M (2006). A tutorial on decomposition methods for network utility maximization. IEEE Journal on Selected Areas in Communications, 24(8): 1439–1451

[77]

Pardalos P M (1991). Global optimization algorithms for linearly constrained indefinite quadratic problems. Computers & Mathematics with Applications (Oxford, England), 21(6–7): 87–97

[78]

Pardalos P M, Migdalas A, Pitsoulis L (2008). Pareto optimality, game theory and equilibria, Vol. 17.Berlin: Springer Science & Business Media

[79]

Pardalos P M, Rosen J B (1986). Methods for global concave minimization: A bibliographic survey. SIAM Review, 28(3): 367–379

[80]

Pardalos P M, Rosen J B (1987). Constrained Global Optimization: Algorithms and Applications.New York: Springer-Verlag

[81]

Pardalos P M, Wolkowicz H (1998). Topics in semidefinite and interior-point methods. American Mathematical Society

[82]

Pardalos P M, Zilinskas A, Zilinskas J (2017). Non-convex multi-objective optimization, Vol. 123.Berlin: Springer

[83]

Paules G E IV, Floudas C A (1989). Apros: Algorithmic development methodology for discrete-continuous optimization problems. Operations Research, 37(6): 902–915

[84]

Pintér J D (1996). Global Optimization in Action.Dordrecht: Kluwer Academic Publishers

[85]

Rahmaniani R, Crainic T G, Gendreau M, Rei W (2017). The benders decomposition algorithm: A literature review. European Journal of Operational Research, 259(3): 801–817

[86]

Resende M G C, Ribeiro C C (2003). Greedy randomized adaptive search procedures. In: Glover F, Kochenberger G, eds. Hand Book of Metaheuristics.Dordrecht: Kluwer Academic Publishers

[87]

Rockafellar R T (2016). Problem decomposition in block-separable convex optimization: Ideas old and new. In: Proceedings of the 5th Asian Conference on Nonlinear Analysis and Optimization, Niigata, Japan

[88]

Sahinidis N V (1996). Baron: A general purpose global optimization software package. Journal of Global Optimization, 8(2): 201–205

[89]

Schelstraete S, Schepens W, Verschelde H (1999). Energy minimization by smoothing techniques: A survey.Molecular Dynamics: from Classical to Quantum Methods

[90]

Schichl H (2010). Mathematical Modeling and Global Optimization.Cambridge: Cambridge University Press

[91]

Sellmann M, Fahle T (2003). Constraint programming based lagrangian relaxation for the automatic recording problem. Annals of Operations Research, 118(1–4): 17–33

[92]

Sergeyev Y D, Strongin R G, Lera D (2013). Introduction to Global Optimization Exploiting Space-Filling Curves.Berlin: Springer Science & Business Media

[93]

Smith E M, Pantelides C C (1996). Global optimisation of general process models. In: Grossmann I E, eds. Global Optimization in Engineering Design.Berlin: Springer

[94]

Smith E M, Pantelides C C (1999). A symbolic reformulation/spatial branch-and-bound algorithm for the global optimisation of nonconvex minlps. Computers & Chemical Engineering, 23(4–5): 457–478

[95]

Sprecher D (2013). Kolmogorov superpositions: A new computational algorithm. In: Igelnik B, eds. Efficiency and Scalability Methods for Computational Intellect.New York: IGI Global

[96]

Sprecher D (2014). On computational algorithms for real-valued continuous functions of several variables. Neural Networks, 59: 16–22

[97]

Sprecher D A, Draghici S (2002). Space-filling curves and Kolmogorov superposition-based neural networks. Neural Networks, 15(1): 57–67

[98]

Strongin R, Sergeyev Y D (2000). Global Optimization with Non-Convex Constraints.Dordrecht: Kluwer Academic Publishers

[99]

Svanberg K (2002). A class of globally convergent optimization methods based on conservative convex separable approximations. SIAM Journal on Optimization, 12(2): 555–573

[100]

Tawarmalani M, Sahinidis N V (2002). Convexification and Global Optimization in Continuous and Mixedinteger Nonlinear Programming: Theory, Algorithms, Software, and Applications, Vol. 65.Berlin: Springer Science & Business Media

[101]

Tikhomirov V (1991). On the representation of continuous functions of several variables as superpositions of continuous functions of one variable and addition. In: Kolmogorov A N, Shiryaeu A, eds. Selected Works of AN Kolmogorov.Berlin: Springer

[102]

Torn A, Zilinskas A (1989). Global Optimization.New York: Springer-Verlag

[103]

Trivedi A, Srinivasan D, Sanyal K, Ghosh A (2017). A survey of multiobjective evolutionary algorithms based on decomposition. IEEE Transactions on Evolutionary Computation, 21(3): 440–462

[104]

Türkay M, Grossmann I E (1996). Logic-based minlp algorithms for the optimal synthesis of process networks. Computers & Chemical Engineering, 20(8): 959–978

[105]

Vaidyanathan R, El-Halwagi M (1996). Global optimization of nonconvex minlps by interval analysis. In: Grossmann I E, eds. Global Optimization in Engineering Design.Berlin: Springer

[106]

Van Hentenryck P, Michel L, Deville Y (1997). Numerica: A Modeling Language for Global Optimization.Boston: MIT Press

[107]

Vazirani V V (2013). Approximation Algorithms.Berlin: Springer Science & Business Media

[108]

Vecchietti A, Grossmann I E (1999). Logmip: A disjunctive 0–1 non-linear optimizer for process system models. Computers & Chemical Engineering, 23(4–5): 555–565

[109]

Viswanathan J, Grossmann I E (1990). A combined penalty function and outer-approximation method for minlp optimization. Computers & Chemical Engineering, 14(7): 769–782

[110]

Westerlund T, Lundqvist K (2001). Alpha-ECP, version 5.01: An interactive MINLP-solver based on the extended cutting plane method.

[111]

Westerlund T, Pettersson F (1995). An extended cutting plane method for solving convex minlp problems. Computers & Chemical Engineering, 19: 131–136

[112]

Westerlund T, Pettersson F, Grossmann I E (1994). Optimization of pump configurations as a minlp problem. Computers & Chemical Engineering, 18(9): 845–858

[113]

Wu C, Wang Y, Lu Z, Pardalos P M, Xu D, Zhang Z, Du D Z (2018). Solving the degree-concentrated fault-tolerant spanning subgraph problem by dc programming. Mathematical Programming, 169(1): 255–275

[114]

Zamora J M, Grossmann I E (1998a). A global minlp optimization algorithm for the synthesis of heat exchanger networks with no stream splits. Computers & Chemical Engineering, 22(3): 367–384

[115]

Zamora J M, Grossmann I E (1998b). Continuous global optimization of structured process systems models. Computers & Chemical Engineering, 22(12): 1749–1770

[116]

Zhang H, Wang S (2006). Global optimization of separable objective functions on convex polyhedra via piecewise-linear approximation. Journal of Computational and Applied Mathematics, 197(1): 212–217

[117]

Zheng Q P, Wang J, Pardalos P M, Guan Y (2013). A decomposition approach to the two-stage stochastic unit commitment problem. Annals of Operations Research, 210(1): 387–410

[118]

Zwick U (1999). Outward rotations: A tool for rounding solutions of semidefinite programming relaxations, with applications to max cut and other problems. In: Proceedings of the 31st annual ACM symposium on Theory of computing, ACM. 679–687

RIGHTS & PERMISSIONS

© The Author(s) 2018. Published by Higher Education Press. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0).
