
Wei-tian Wu
College of Mathematics, Sichuan University, Chengdu 610065, China
E-mail: [email protected]

🖂 Xin-min Yang
National Center for Applied Mathematics of Chongqing, Chongqing 401331, China
School of Mathematical Sciences, Chongqing Normal University, Chongqing 401331, China
E-mail: [email protected]

The Hybridization of Branch and Bound with Metaheuristics for Nonconvex Multiobjective Optimization

Wei-tian Wu1    Xin-min Yang2
(Received: date / Accepted: date)
Abstract

A hybrid framework combining the branch and bound method with multiobjective evolutionary algorithms is proposed for nonconvex multiobjective optimization. The hybridization exploits the complementary character of the two optimization strategies. A multiobjective evolutionary algorithm is employed to induce tight lower and upper bounds during the branch and bound procedure. Tight bounds derived in this way can reduce the number of subproblems that have to be solved. The branch and bound method guarantees the global convergence of the framework and improves the search capability of the multiobjective evolutionary algorithm. An implementation of the hybrid framework using NSGA-II and MOEA/D-DE as the embedded multiobjective evolutionary algorithms is presented. Numerical experiments verify that the hybrid algorithms benefit from the synergy of the branch and bound method and multiobjective evolutionary algorithms.

Keywords:
Multiobjective optimization · Global optimization · Nonconvex optimization · Branch and bound algorithm · Evolutionary algorithm
MSC:
90C26 · 90C29 · 90C30

1 Introduction

Many optimization problems in the fields of science ref1 and engineering ref2 require minimizing multiple conflicting objectives simultaneously. Researchers refer to such problems as multiobjective optimization problems (MOPs). For such problems, a set of mathematically equally good solutions, known as Pareto optimal solutions ref3 , can be identified. Generally, multiobjective optimization methods aim at finding one or a set of Pareto optimal solutions (the Pareto set for short), depending on the decision maker's preference.

Following a classification by Hwang and Masud ref4 , multiobjective optimization methods can be divided into three classes according to the intervention of the decision maker in the solution process: a priori methods, interactive methods and a posteriori methods. With the latter, an approximation of the entire Pareto set is generated first, and the decision maker is then supposed to select the most preferred of the candidate solutions. The a posteriori methods are usually considered computationally expensive; however, with the vast increase in computer performance, they have shown some significant advantages. On the one hand, having an approximation of the whole Pareto set means that all the trade-offs associated with the problem are presented to the decision maker, which reinforces the decision maker's confidence in the final decision. On the other hand, no re-optimization or further interaction is needed for the a posteriori methods.

Most multiobjective evolutionary algorithms (MOEAs) ref5 ; ref6 ; ref7 ; ref8 ; ref9 ; ref10 ; ref11 ; ref12 can be classified as a posteriori. These algorithms generate an approximation of the whole Pareto front by means of reproduction operations (crossover, mutation and selection) among the candidate solutions. In fact, the reproduction operations are stochastic search strategies inspired by biological evolution, which makes it difficult to discuss the convergence of MOEAs. Nevertheless, many researchers still choose MOEAs to solve practical problems ref13 ; ref14 ; ref15 , because these algorithms prove to be very flexible in handling problems with different types of variables and objectives ref6 ; ref10 ; ref12 .

Other a posteriori methods are based on the divide-and-conquer paradigm, for example ref16 ; ref17 ; ref18 ; ref19 ; ref20 . Among these approaches, the branch and bound method ref18 ; ref19 ; ref20 ; ref21 ; ref22 ; ref23 has been verified to be effective in handling nonconvex MOPs. The solution process of the branch and bound method implicitly searches for all possible solutions to the problem in a tree structure. The nodes in the tree generate children by subdividing the corresponding search region into smaller ones (branching); with the help of upper and lower bounds for the small regions (bounding), discarding tests are used to prune off the small regions that are provably suboptimal (pruning). Any branch and bound algorithm for MOPs consists of these three components. For instance, Fernández and Tóth ref18 design a branch and bound algorithm for bi-objective optimization problems that employs bisection and uses interval analysis as the bounding method, and propose three discarding tests to enhance the performance of the algorithm. Žilinskas et al. ref22 use trisection, Lipschitz lower bounds, and a dominance-based discarding test as parts of their branch and bound algorithm. Niebling and Eichfelder ref20 present a new discarding test for their branch and bound algorithm which combines the $\alpha$BB method ref24 with an extension of Benson's outer approximation technique ref25 .

The challenge for branch and bound algorithms arises when facing an exponential number of nodes. Hillermeier ref26 proves that the Pareto set of a continuous MOP is a low-dimensional manifold in the variable space, so that the majority of nodes at the same depth of the search tree are of equal importance; as a result, the computation time of a branch and bound algorithm grows when the discarding test is unable to prune nodes from exploration. It is important to note that the performance of the discarding test can be improved by obtaining tight bounds. On the other hand, even if tight bounds are not available, the branch and bound algorithm can still be parallelized to reduce computation time, not only because the Pareto solutions may be distributed among any nodes at the same depth level but also because nodes at the same depth level represent disjoint subproblems.

In this paper, we present a parallel algorithmic framework which hybridizes the branch and bound method with an MOEA (PBB-MOEA for short). The aim of the MOEA is to improve the solution quality by searching for tight upper bounds and inducing tight lower bounds during the branch and bound procedure; furthermore, the MOEA facilitates obtaining feasible upper bounds when handling MOPs with nonconvex constraints. Tight bounds derived in this way can reduce the number of subproblems that have to be solved. The branch and bound method not only guarantees the global convergence of the hybrid framework but also improves the search capability of the MOEA, which enables the resulting hybrid algorithm to handle some problems where pure MOEAs may fail. Furthermore, a technique called elitism is integrated in PBB-MOEA in order to reduce the computational cost.

This paper is organized as follows. We begin in Section 2 by recalling some basics of multiobjective optimization. Section 3 focuses on the new ideas behind PBB-MOEA, including the use of an MOEA, the improvement of lower bounds and the implementation of elitism. We devote Section 4 to some theoretical results. In Section 5, PBB-MOEA is compared to MOEAs and a branch and bound algorithm in order to demonstrate its performance.

2 Basics of Multiobjective Optimization

In this paper we consider the following constrained multiobjective optimization problem:

$$({\rm MOP})\qquad\min_{x\in\Omega}\quad F(x)=(f_1(x),\ldots,f_m(x))^T$$

with

$$\Omega=\{x\in\mathbb{R}^n:\ g_j(x)\geq 0,\ j=1,\ldots,p,\ \ x_i^{(L)}\leq x_i\leq x_i^{(U)},\ i=1,\ldots,n\},$$

where $f_i:\mathbb{R}^n\rightarrow\mathbb{R}$ ($i=1,\ldots,m$) are Lipschitz continuous and $g_j:\mathbb{R}^n\rightarrow\mathbb{R}$ ($j=1,\ldots,p$) are continuous. For a feasible point $x\in\Omega$, the objective vector $F(x)$ is said to be the image of $x$, while $x$ is called the preimage of $F(x)$.

The concept of Pareto optimality for (MOP) is based on the partial ordering of $\mathbb{R}^m$ defined for $z^1,z^2\in\mathbb{R}^m$ by

$$z^1\preccurlyeq z^2 \;\;(z^1\ \text{weakly dominates}\ z^2)\;\Longleftrightarrow\; z^1_i\leq z^2_i,\ i\in\{1,\ldots,m\},$$
$$z^1\preceq z^2 \;\;(z^1\ \text{dominates}\ z^2)\;\Longleftrightarrow\; z^1\preccurlyeq z^2\ \wedge\ z^1\neq z^2,$$
$$z^1\prec z^2 \;\;(z^1\ \text{strictly dominates}\ z^2)\;\Longleftrightarrow\; z^1_i<z^2_i,\ i\in\{1,\ldots,m\}.$$

When $z^1\npreceq z^2$, $z^2\npreceq z^1$ and $z^1\neq z^2$, we say that $z^1$ is indifferent to $z^2$ ($z^1\sim z^2$). A set $S\subseteq\mathbb{R}^m$ is called a nondominated set if for any $z^1,z^2\in S$, $z^1\npreceq z^2$.
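These dominance relations translate directly into code. The following Python helpers (an illustrative sketch with hypothetical function names, not part of the paper's implementation) implement the three relations and the filtering of a finite set down to its nondominated points:

```python
import numpy as np

def weakly_dominates(z1, z2):
    """z1 weakly dominates z2: every component is no worse (minimization)."""
    return bool(np.all(np.asarray(z1) <= np.asarray(z2)))

def dominates(z1, z2):
    """z1 dominates z2: weakly dominates and is strictly better somewhere."""
    z1, z2 = np.asarray(z1), np.asarray(z2)
    return bool(np.all(z1 <= z2) and np.any(z1 < z2))

def strictly_dominates(z1, z2):
    """z1 strictly dominates z2: strictly better in every component."""
    return bool(np.all(np.asarray(z1) < np.asarray(z2)))

def nondominated_filter(points):
    """Keep only the mutually nondominated points of a finite set."""
    pts = [np.asarray(p) for p in points]
    return [p for p in pts
            if not any(dominates(q, p) for q in pts if not np.array_equal(p, q))]
```

For instance, filtering `[(1, 3), (2, 2), (3, 1), (2, 3)]` removes `(2, 3)`, which is dominated by `(1, 3)`.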

A point $x^*\in\Omega$ is said to be a Pareto optimal solution of (MOP) if there is no point $x\in\Omega$ such that $F(x)\preceq F(x^*)$. The set of all Pareto optimal solutions is called the Pareto optimal set and is denoted by $X^*$ or PS. The image of $X^*$ under the map $F$ is known as the Pareto front of (MOP) and is denoted by PF.

The ideal point $z^*\in\mathbb{R}^m$ is defined as the objective vector whose components are the global minima of the objective functions on $\Omega$:

$$z^*=\big(\min_{x\in\Omega}f_1(x),\min_{x\in\Omega}f_2(x),\ldots,\min_{x\in\Omega}f_m(x)\big)^T.$$

A nonempty set $B\subseteq\Omega$ is called a subregion if $B=\{x\in\mathbb{R}^n:\underline{x}_i<x_i<\overline{x}_i,\ i=1,\ldots,n\}$, and its midpoint is defined as $c(B):=(\overline{x}+\underline{x})/2$. The Euclidean norm is denoted by $\|\cdot\|_2$, the $l_1$-norm by $\|\cdot\|_1$ and the $l_\infty$-norm by $\|\cdot\|_\infty$. We let $\omega(B)=\overline{x}-\underline{x}$; then the width of $B$ with respect to the $l_{(\cdot)}$-norm is defined as $\omega_{(\cdot)}:=\|\omega(B)\|_{(\cdot)}$.
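The box quantities above amount to a few lines of code. A minimal sketch (a hypothetical `Box` helper, introduced here only for illustration) computes the midpoint $c(B)$ and the widths $\omega_{(1)}$, $\omega_{(2)}$, $\omega_{(\infty)}$:

```python
import numpy as np

class Box:
    """Axis-aligned subregion B = {x : lo_i < x_i < up_i}."""
    def __init__(self, lo, up):
        self.lo = np.asarray(lo, float)
        self.up = np.asarray(up, float)

    def midpoint(self):
        # c(B) = (upper + lower) / 2
        return 0.5 * (self.lo + self.up)

    def width(self, norm=2):
        # omega_(p) = || up - lo ||_p for p in {1, 2, inf}
        return np.linalg.norm(self.up - self.lo, ord=norm)
```

For example, `Box([0, 0], [1, 3])` has midpoint `(0.5, 1.5)`, $\omega_{(1)}=4$ and $\omega_{(\infty)}=3$.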

We use $d(a,c)=\|a-c\|_2$ to quantify the distance between two points $a$ and $c$, and the distance between a point $a$ and a nonempty finite set $C$ is defined as $d(a,C):=\min_{c\in C}d(a,c)$. Let $A$ be another nonempty finite set; we define the Hausdorff distance between $A$ and $C$ by

$$d_H(A,C):=\max\{d_h(A,C),d_h(C,A)\},$$

where $d_h(A,C)$ is the directed Hausdorff distance from $A$ to $C$, defined by $d_h(A,C):=\max_{a\in A}d(a,C)$. In addition, the cardinality of a set is denoted by $|\cdot|$.
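For finite sets, the directed and symmetric Hausdorff distances defined above can be computed directly; a small illustrative sketch:

```python
import numpy as np

def directed_hausdorff(A, C):
    """d_h(A, C) = max over a in A of min over c in C of ||a - c||_2."""
    A, C = np.asarray(A, float), np.asarray(C, float)
    return max(np.min(np.linalg.norm(C - a, axis=1)) for a in A)

def hausdorff(A, C):
    """d_H(A, C) = max of the two directed distances."""
    return max(directed_hausdorff(A, C), directed_hausdorff(C, A))
```

For example, the Hausdorff distance between `{(0, 0)}` and `{(3, 4)}` is 5.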

3 A Hybrid Framework for MOPs

In this section, we present the hybrid framework, which consists of the branch and bound method and an MOEA. For simplicity of discussion we first introduce the branch and bound algorithm for the box-constrained problem, i.e., we allow $p=0$ in (MOP), which leads to the following problem:

$$({\rm P})\qquad\min_{x\in[a,b]}\quad F(x)=(f_1(x),\ldots,f_m(x))^T.$$

Algorithm 1 gives an implementation of the basic branch and bound algorithm for (P) (also see ref18 ).

Algorithm 1 outputs a covering of the Pareto optimal set and an approximation of the entire Pareto front of (P). At the $k$-th iteration, a subregion $B$ is selected from the subregion collection $\mathcal{B}_k$ and then bisected into two new subregions $B_1$ and $B_2$ along the $j$-th coordinate, where $j=\min\big\{\arg\max_{i=1,\ldots,n}\{w_i\}\big\}$. Algorithm 1 maintains a nondominated set $\mathcal{U}^{nds}$ to store upper bounds. For the subregion $B_1$, we check whether its upper bound $u(B_1)$ is dominated by any other upper bound in $\mathcal{U}^{nds}$. If so, $u(B_1)$ will not be stored in $\mathcal{U}^{nds}$; otherwise, $u(B_1)$ will be added to $\mathcal{U}^{nds}$ and all upper bounds dominated by $u(B_1)$ will be removed. A discarding test is used to check whether a subregion contains any Pareto solution. A common type of discarding test is based on the Pareto dominance relation: if there exists an upper bound $u\in\mathcal{U}^{nds}$ such that $u$ dominates $l(B_1)$, then $B_1$ does not contain any Pareto solution; otherwise, $B_1$ will be stored in $\mathcal{B}_{k+1}$.

In what follows, the set $X^*_B$ denotes the Pareto set of (P) with respect to $\Omega\cap B$. The collection of the Pareto sets of (P) with respect to $\mathcal{B}_k$ is denoted by $X^*_k:=\bigcup_{B\in\mathcal{B}_k}X^*_B$. The lower and upper bounds for $F(X^*_B)$ will be denoted by $l(B)$ and $u(B)$, respectively; the two bounds are also said to be the bounds with respect to $B$. The lower bound set with respect to $\mathcal{B}_k$ is defined as $\mathcal{L}_k:=\bigcup_{B\in\mathcal{B}_k}l(B)$, while the upper bound set $\mathcal{U}_k$ with respect to $\mathcal{B}_k$ is defined similarly.

Input : (P), termination criterion;
Output : $\mathcal{B}_k$, $\mathcal{U}^{nds}$;
1  $\mathcal{B}_0\leftarrow[a,b]$, $\mathcal{U}^{nds}\leftarrow\emptyset$, $k=0$;
2  while termination criterion is not satisfied do
3      $\mathcal{B}_{k+1}\leftarrow\emptyset$;
4      while $\mathcal{B}_k\neq\emptyset$ do
5          Select a subregion $B$ in $\mathcal{B}_k$, remove it from $\mathcal{B}_k$ and divide $B$ into $B_1$ and $B_2$;
6          for $i=1,2$ do
7              Calculate the lower bound $l(B_i)$ and upper bound $u(B_i)$ for $B_i$;
8              if $B_i$ cannot be discarded then
9                  Update $\mathcal{U}^{nds}$ by $u(B_i)$ and store $B_i$ in $\mathcal{B}_{k+1}$;
10             end if
11         end for
12     end while
13     $k\leftarrow k+1$;
14 end while
Algorithm 1 Basic branch and bound algorithm for (P)
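To make the control flow of Algorithm 1 concrete, the following Python sketch implements the loop under assumed bounding rules: the image of the midpoint serves as the upper bound $u(B)$, and a single-point Lipschitz bound of the form (3.1) serves as the lower bound $l(B)$, with per-objective Lipschitz constants supplied by the user. This is an illustration only, not the paper's implementation:

```python
import numpy as np

def dominates(u, v):
    """u dominates v in the Pareto sense (componentwise <=, at least one <)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return bool(np.all(u <= v) and np.any(u < v))

def branch_and_bound(F, lo, up, L, iterations=8):
    """Sketch of Algorithm 1 on the box [lo, up].
    F: objective vector function; L: per-objective Lipschitz constants.
    Assumed bounding rules: u(B) = F(c(B)), l(B) = u(B) - (L/2) * omega_(2)."""
    boxes = [(np.asarray(lo, float), np.asarray(up, float))]
    U = []  # provisional nondominated upper bound set U^nds
    for _ in range(iterations):
        next_boxes = []
        for a, b in boxes:
            j = int(np.argmax(b - a))          # bisect along the longest edge
            m = 0.5 * (a[j] + b[j])
            b1 = b.copy(); b1[j] = m           # child B1 = [a, b1]
            a2 = a.copy(); a2[j] = m           # child B2 = [a2, b]
            for ai, bi in ((a, b1), (a2, b)):
                c = 0.5 * (ai + bi)            # midpoint c(B_i)
                ub = np.asarray(F(c), float)   # upper bound u(B_i)
                lb = ub - 0.5 * np.asarray(L, float) * np.linalg.norm(bi - ai)
                if any(dominates(v, lb) for v in U):
                    continue                   # discarding test: prune B_i
                if not any(dominates(v, ub) for v in U):
                    U = [v for v in U if not dominates(ub, v)] + [ub]
                next_boxes.append((ai, bi))
        boxes = next_boxes
    return boxes, U
```

By construction, the returned upper bound set stays mutually nondominated after every update.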


3.1 Improved upper bounds by MOEA

In single-objective optimization, the global minimal value of the objective function over the feasible region is bounded from above by the smallest upper bound found during the branch and bound procedure. Obviously, the notion of the smallest upper bound does not make sense for representing the entire Pareto front of a multiobjective optimization problem. Therefore, in the course of branch and bound algorithms for the multiobjective case, a provisional nondominated set is used to store and update the upper bounds found so far. The source of upper bounds may be any points in $\Omega$; more often, the images of the midpoints ref18 or vertices ref23 of subregions are treated as upper bounds.

In PBB-MOEA, an MOEA is used to calculate upper bounds for each subproblem, and the upper bounds are stored in the provisional nondominated set $\mathcal{U}^{nds}$. Considering the computational cost of the MOEA, we prefer a mini MOEA to its full version, where "mini" implies a small initial population and a few generations.

In contrast to other bounding methods, the advantages of the mini MOEA are as follows: (i) it generates a large number of upper bounds to approximate the Pareto front instead of only one per subproblem, which benefits the discarding test in reducing the number of subregions; (ii) its global search capability favors finding tight upper bounds; (iii) it is able to obtain feasible upper bounds when solving nonconvex constrained MOPs (see Subsection 3.5). The comparative experiments in Section 5 will validate these advantages.

3.2 Improved lower bounds

A class of lower bounds based on the Lipschitz constant for single-objective optimization problems was proposed by Jones et al. in ref27 . As they mentioned, the lower bound for a Lipschitz continuous objective function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ with Lipschitz constant $L$ on a subregion $B$ can be calculated by

$$l(B)=f(c(B))-\frac{L}{2}\omega_{(2)}. \qquad (3.1)$$

Furthermore, Paulavičius and Žilinskas ref28 discussed the influence of different norms and the corresponding Lipschitz constants on the performance of branch and bound algorithms. As they suggested, a combination of two extreme norms can replace the Euclidean norm in (3.1) in order to enhance the performance, as follows:

$$l(B)=f(c(B))-\frac{1}{2}\min\{L_1\omega_{(\infty)},L_\infty\omega_{(1)}\}, \qquad (3.2)$$

where $L_1=\sup\big\{\|\nabla f(x)\|_1:x\in B\big\}$ and $L_\infty=\sup\big\{\|\nabla f(x)\|_\infty:x\in B\big\}$ are Lipschitz constants. In this paper, Lipschitz constants are estimated by the natural interval extension ref27 of $\nabla f(x)$ on $B$.

As in the approach proposed in ref18 ; ref20 ; ref23 , a lower bound for the vector-valued function $F(x)=(f_1(x),\ldots,f_m(x))^T$ on $B$ is the vector whose components are the lower bounds for each $f_i$ ($i=1,\ldots,m$). Therefore, by (3.2), we choose the point

$$l(B)=(l_1,\ldots,l_m)^T\quad\text{with}\quad l_i=f_i(c(B))-\frac{1}{2}\min\{L_{i,1}\omega_{(\infty)},L_{i,\infty}\omega_{(1)}\},\ i=1,\ldots,m \qquad (3.3)$$

as the lower bound for $F$ on $B$, where $L_{i,1}$ and $L_{i,\infty}$ are the Lipschitz constants of $f_i$ corresponding to the $l_1$-norm and the $l_\infty$-norm, respectively.
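The lower bound (3.3) can be sketched in a few lines, assuming the Lipschitz constants $L_{i,1}$ and $L_{i,\infty}$ have already been estimated (e.g., by interval arithmetic) and are passed in:

```python
import numpy as np

def lipschitz_lower_bound(F_mid, L1, Linf, lo, up):
    """Lower bound (3.3) for F on the box B = [lo, up].
    F_mid: F evaluated at the midpoint c(B) (length-m array);
    L1[i], Linf[i]: Lipschitz constants of f_i w.r.t. the l1- and l_inf-norms."""
    w = np.asarray(up, float) - np.asarray(lo, float)   # omega(B)
    w_inf = np.max(np.abs(w))                           # omega_(inf)
    w_1 = np.sum(np.abs(w))                             # omega_(1)
    slack = 0.5 * np.minimum(np.asarray(L1, float) * w_inf,
                             np.asarray(Linf, float) * w_1)
    return np.asarray(F_mid, float) - slack
```

On the unit box, with $L_{i,1}=2$ and $L_{i,\infty}=1$ for both objectives, the slack is 1 in each component.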

For the discarding test, tight lower bounds favor reducing the number of subproblems that must be solved. Before discussing the approach to improving the lower bound (3.3), we introduce a classification of dominating points considered by Skulimowski ref30 .

Definition 1.

(ref30 ) Let $y^*\in\mathbb{R}^m$ be an objective vector.

(i) $y^*$ is called a totally dominating point with respect to $F(X^*_B)$ if $F(X^*_B)\subseteq y^*+\mathbb{R}^m_+$. The set of totally dominating points will be denoted by $\mathcal{TD}(B)$;

(ii) $y^*$ is called a partly dominating point with respect to $F(X^*_B)$ if $(y^*+\mathbb{R}^m_+)\cap F(X^*_B)\neq\emptyset$. The set of partly dominating points will be denoted by $\mathcal{PD}(B)$.

Based on Definition 1, now we can introduce two kinds of lower bounds for Pareto front.

Definition 2.

A nonempty set $\mathcal{L}(B)$ is called an overall lower bound set for $F(X^*_B)$ if for every $x\in X^*_B$ there exists a point $l\in\mathcal{L}(B)$ such that $l\preccurlyeq F(x)$.

Remark 1.

Observe that a totally dominating point must be an overall lower bound set, but the converse is not true, because an overall lower bound set may consist of multiple partly dominating points. However, an overall lower bound set is a totally dominating point when it is a singleton. Therefore, the lower bound (3.3) is not only a totally dominating point with respect to $F(X^*_B)$ but also an overall lower bound set for $F(X^*_B)$.

Figure 1(a) illustrates the discussion in Remark 1 with an example of a bi-objective problem. In what follows, a singleton is still called a totally dominating point with respect to $F(X^*_B)$ and denoted by $l(B)$ if it satisfies Definition 2.

By the definition of the partly dominating point, we can present a new class of lower bounds, called partial lower bounds.

Definition 3.

Let $z^*$ be the ideal point of $F(X^*_B)$. An objective vector $l\in\mathbb{R}^m$ is called a partial lower bound for $F(X^*_B)$ if $l\in\mathcal{PD}(B)\cap(z^*+\mathbb{R}^m_+\backslash\{0\})$. The set of partial lower bounds will be denoted by $\mathcal{P}(B)$.

The next lemma gives an equivalent characterization.

Lemma 1.

Let $z^*$ be the ideal point of $F(X^*_B)$. An objective vector $l\in\mathbb{R}^m$ is a partial lower bound for $F(X^*_B)$ if

(i) there exists a point $x\in X^*_B$ such that $l\preccurlyeq F(x)$;

(ii) there exists a point $x\in X^*_B$ such that $l\sim F(x)$;

(iii) $z^*\preceq l$.

Proof.

The proof is obvious and so is omitted.

Remark 2.

According to Definition 3, it is easy to see that a partial lower bound is a partly dominating point; thus there exists a subset $\bar{X}\subseteq X^*_B$ such that $l\sim F(\bar{x})$ for every $\bar{x}\in\bar{X}$ and $l\preccurlyeq F(x)$ for every $x\in X^*_B\backslash\bar{X}$. Moreover, we have the following property:

$$\mathcal{P}(B)=(F(X^*_B)-\mathbb{R}_+^m)\cap(z^*+\mathbb{R}_+^m\backslash\{0\}). \qquad (3.4)$$

By Remark 1 and Figure 1(a), we know that an overall lower bound set can consist of multiple partly dominating points, which may be closer to the Pareto front than a totally dominating point. This inspires us to ask whether a totally dominating point can be improved to such an overall lower bound set.

Lemma 2.

Let a totally dominating point $l(B)=(l_1,\ldots,l_m)^T$ with respect to $F(X^*_B)$ be given. If $\hat{l}(B)=(\hat{l}_1,\ldots,\hat{l}_m)^T$ is a partial lower bound for $F(X^*_B)$, then the set

$$\mathcal{L}(B)=\big\{l^{(i)}:l^{(i)}=(l_1,\ldots,\hat{l}_i,\ldots,l_m)^T,\ i=1,\ldots,m\big\}$$

is an overall lower bound set for $F(X^*_B)$.

Proof.

By Remark 2, we know that there exists a set $\bar{X}\subseteq X^*_B$ such that $\hat{l}(B)\sim F(\bar{x})$ for every $\bar{x}\in\bar{X}$ and $\hat{l}(B)\preccurlyeq F(x)$ for every $x\in X^*_B\backslash\bar{X}$.

Since $\hat{l}(B)\sim F(\bar{x})$, for every $\bar{x}\in\bar{X}$ there exists a nonempty index set $\mathcal{I}\subseteq\{1,2,\ldots,m\}$ such that $\hat{l}_j<f_j(\bar{x})$ for every $j\in\mathcal{I}$ and $f_k(\bar{x})\leq\hat{l}_k$ for every $k\in\{1,2,\ldots,m\}\backslash\mathcal{I}$. Note that $l(B)$ is a totally dominating point with respect to $F(X^*_B)$; it follows that $l_k\leq f_k(\bar{x})\leq\hat{l}_k$ for each $k\in\{1,2,\ldots,m\}\backslash\mathcal{I}$. In this way, we can construct an objective vector $\bar{l}=(\bar{l}_1,\ldots,\bar{l}_m)^T$ such that $\bar{l}\preceq F(\bar{x})$, where

$$\bar{l}_i=\begin{cases}\hat{l}_i,& i\in\mathcal{I};\\ l_i,& i\in\{1,2,\ldots,m\}\backslash\mathcal{I},\end{cases}$$

and it is easy to see that $l^{(i)}\preccurlyeq\bar{l}\preceq F(\bar{x})$ for all $i\in\mathcal{I}$, where $l^{(i)}\in\mathcal{L}(B)$.

On the other hand, we have $\hat{l}(B)\preccurlyeq F(x)$ for each $x\in X^*_B\backslash\bar{X}$. Furthermore, it is easy to see that $l^{(i)}\preccurlyeq\hat{l}(B)$ for each $i\in\{1,\ldots,m\}$. By the transitivity of the Pareto dominance relation, we have $l^{(i)}\preccurlyeq F(x)$ for each $i\in\{1,\ldots,m\}$ and $x\in X^*_B\backslash\bar{X}$.

In Lemma 2, an overall lower bound set can be obtained by successively exchanging each component of a totally dominating point with the corresponding component of a partial lower bound. However, this is not always an effective way to improve a totally dominating point, as illustrated by the following example.

Example 1.

Consider the following multiobjective optimization problem:

$$F(x)=\begin{pmatrix}x_1x_2\\ x_1(1-x_2)\\ 1-x_1\end{pmatrix}\quad\text{with}\quad x_i\in[0,1],\ i=1,2.$$

It is easy to see that the Pareto front of this problem is contained in the hyperplane $\{(f_1,f_2,f_3)^T:f_1+f_2+f_3=1,\ f_i\geq 0,\ i=1,2,3\}$, and its ideal point $z^*$ is $(0,0,0)^T$. Furthermore, according to Lemma 1, we know that the objective vector $l=(0,0.2,0.2)^T$ is a partial lower bound. Based on Lemma 2, we can convert $z^*$ into an overall lower bound set

$$\mathcal{L}=\{(0,0,0)^T,(0,0.2,0)^T,(0,0,0.2)^T\}.$$

However, it is clear that $\mathcal{L}$ is not an improved lower bound set, because $z^*\in\mathcal{L}$.
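The failure mode of Example 1 can be reproduced with a few lines implementing the exchange of Lemma 2 (an illustrative sketch):

```python
import numpy as np

def exchange_lower_bounds(l, l_hat):
    """Lemma 2 construction: the i-th member of L(B) replaces the i-th
    component of the totally dominating point l by that of l_hat."""
    l, l_hat = np.asarray(l, float), np.asarray(l_hat, float)
    return [np.concatenate([l[:i], [l_hat[i]], l[i + 1:]])
            for i in range(len(l))]

z_star = np.array([0.0, 0.0, 0.0])   # ideal point of Example 1
l_hat = np.array([0.0, 0.2, 0.2])    # a partial lower bound
L = exchange_lower_bounds(z_star, l_hat)
# Because l_hat and z_star agree in the first coordinate, the exchange at
# i = 1 reproduces z_star itself, so L is not an improvement.
assert any(np.array_equal(p, z_star) for p in L)
```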

The above example shows that if the totally dominating point and the partial lower bound have the same value in one coordinate, then the totally dominating point will be an element of the overall lower bound set. This means that Lemma 2 cannot improve the totally dominating point in this situation. To avoid the invalid exchange operation, we should check the values of the totally dominating point and the partial lower bound at each coordinate. As a result of the above discussion, we have the following theorem.

Theorem 3.1.

Let a totally dominating point $l(B)=(l_1,\ldots,l_m)^T$ and a partial lower bound $\hat{l}(B)=(\hat{l}_1,\ldots,\hat{l}_m)^T$ for $F(X^*_B)$ be given. If $|\tau(\hat{l}(B),l(B))|=0$, then $l(B)$ can be improved to an overall lower bound set

$$\mathcal{L}(B)=\{l^{(i)}:l^{(i)}=(l_1,\ldots,\hat{l}_i,\ldots,l_m)^T,\ i=1,\ldots,m\},$$

where $\tau(\hat{l}(B),l(B))=\{i:\hat{l}_i=l_i,\ i=1,\ldots,m\}$.

Proof.

The proof is analogous to the proof of Lemma 2.

Figure 1(b) shows the improvement of a totally dominating point by Theorem 3.1 for a bi-objective problem. However, the improvement relies on a known partial lower bound, which is not directly attainable for conflicting objectives. In order to find a partial lower bound for Theorem 3.1, we present the following lemma.

Lemma 3.

Let $z^*\in\mathbb{R}^m$ be the ideal point of $F(X^*_B)$, and let $\hat{z}^*$ be the ideal point of a nonempty compact subset $S$ of $F(B)$. If $\hat{z}^*\in\mathcal{P}(B)$, then we have

$$(\hat{z}^*-\mathbb{R}_+^m\backslash\{0\})\cap(F(X^*_B)+\mathbb{R}_+^m)=\emptyset. \qquad (3.5)$$

Proof.

Assume there is an objective vector $z^{\prime}\in(\hat{z}^*-\mathbb{R}_+^m\backslash\{0\})\cap(F(X^*_B)+\mathbb{R}_+^m)$. On the one hand, we have $z^{\prime}\in\hat{z}^*-\mathbb{R}_+^m\backslash\{0\}$, and it follows that $z^{\prime}\preceq\hat{z}^*$. On the other hand, from $z^{\prime}\in F(X^*_B)+\mathbb{R}_+^m$, we know that there exists a point $x^*\in X^*_B$ such that $F(x^*)\preccurlyeq z^{\prime}$. Thus we have $F(x^*)\preceq\hat{z}^*$, which is a contradiction to Lemma 1.

Figure 1: Details of improving the lower bound: (a) an example of Remark 1; (b) improving an overall lower bound by a partial lower bound; (c) a partial lower bound induced by the upper bound set; (d) the ideal point of the upper bound set does not satisfy (3.5).

Recall that the upper bound set, which has already been found by the mini MOEA, is a nonempty compact set. Figure 1(c) shows an example of a partial lower bound induced by the upper bound set. Therefore, according to Lemma 3, whether the ideal point $\hat{z}^*$ of the upper bound set is a partial lower bound can be determined by checking whether (3.5) is satisfied. Inspired by ref31 , this can be done by exploring the search region

$$C(\hat{z}^*,B):=(\hat{z}^*-\mathbb{R}_+^m\backslash\{0\})\cap(F(X^*_B)+\mathbb{R}_+^m)$$

in order to determine whether $C(\hat{z}^*,B)$ contains feasible points; if so, we assert that $\hat{z}^*$ is not a partial lower bound. Such an exploration can be achieved by solving the following mathematical program associated with the search region $C(\hat{z}^*,B)$:

$$P(\hat{z}^*,B)\qquad\min\ \sum_{j=1}^{m}f_j(x)\quad {\rm s.t.}\quad F(x)\preceq\hat{z}^*,\ \ x\in B.$$

Figure 1(d) shows a case in which $\hat{z}^*$ is not a partial lower bound because $C(\hat{z}^*,B)$ contains feasible points. Note that a minimal solution $\tilde{x}$ of $P(\hat{z}^*,B)$ will be output by a solver even if $C(\hat{z}^*,B)$ is empty; hence we should determine the Pareto dominance relation between $F(\tilde{x})$ and $\hat{z}^*$. If $F(\tilde{x})$ dominates $\hat{z}^*$, then $\hat{z}^*$ is not a partial lower bound; otherwise, $\hat{z}^*$ is a partial lower bound.
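The program $P(\hat{z}^*,B)$ can be handed to any NLP solver; the following sketch uses `scipy.optimize.minimize` (an assumed solver choice, not prescribed by the paper) and then performs the dominance check on $F(\tilde{x})$ described above. Note that the inequality constraint encodes the weak relation $F(x)\leq\hat{z}^*$, which suffices for this illustration:

```python
import numpy as np
from scipy.optimize import minimize

def is_partial_lower_bound(F, z_hat, lo, up):
    """Feasibility check based on P(z_hat, B):
    minimize sum_j f_j(x) s.t. F(x) <= z_hat componentwise, x in B = [lo, up].
    If F(x~) dominates z_hat, reject z_hat as a partial lower bound."""
    z_hat = np.asarray(z_hat, float)
    x0 = 0.5 * (np.asarray(lo, float) + np.asarray(up, float))
    cons = [{"type": "ineq", "fun": lambda x: z_hat - np.asarray(F(x))}]
    res = minimize(lambda x: float(np.sum(F(x))), x0,
                   bounds=list(zip(lo, up)), constraints=cons)
    Fx = np.asarray(F(res.x), float)
    dominated = bool(np.all(Fx <= z_hat) and np.any(Fx < z_hat))
    return not dominated
```

For the bi-objective toy problem $F(x)=(x,1-x)$ on $[0,1]$, the point $(0.2,0.2)^T$ lies below the Pareto front (no feasible image can dominate it), while $(0.6,0.6)^T$ is dominated by feasible images and is rejected.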

Algorithm 2 gives an implementation of Theorem 3.1.

Input : $B$, $l(B)=(l_1,\ldots,l_m)^T$, $\hat{z}^*=(\hat{z}^*_1,\ldots,\hat{z}^*_m)^T$;
Output : $\mathcal{L}(B)$;
1  $\mathcal{L}(B)\leftarrow\emptyset$;
2  Obtain a minimal solution $\tilde{x}$ of $P(\hat{z}^*,B)$;
3  $\tau(\hat{z}^*,l(B))=\{i:\hat{z}^*_i=l(B)_i,\ i=1,\ldots,m\}$;
4  if $F(\tilde{x})\npreceq\hat{z}^*$ and $|\tau(\hat{z}^*,l(B))|=0$ then
5      for $i=1,\ldots,m$ do $\mathcal{L}(B)\leftarrow\mathcal{L}(B)\cup\{(l_1,\ldots,\hat{z}^*_i,\ldots,l_m)^T\}$;
6  end if
Algorithm 2 Improve lower bounds
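Given $F(\tilde{x})$ from the solver, the exchange step of Algorithm 2 reduces to a few array operations; an illustrative sketch:

```python
import numpy as np

def improve_lower_bound(l, z_hat, F_x_tilde):
    """Sketch of Algorithm 2: improve a totally dominating point l(B) using
    the ideal point z_hat of the upper bound set, given F(x~) for a minimal
    solution x~ of P(z_hat, B). Returns the improved set L(B), or []."""
    l, z_hat = np.asarray(l, float), np.asarray(z_hat, float)
    Fx = np.asarray(F_x_tilde, float)
    # F(x~) dominates z_hat  =>  z_hat is not a partial lower bound.
    f_dominates = bool(np.all(Fx <= z_hat) and np.any(Fx < z_hat))
    tau = np.flatnonzero(l == z_hat)   # coordinates where the bounds coincide
    if f_dominates or len(tau) > 0:
        return []                      # z_hat rejected or exchange invalid
    out = []
    for i in range(len(l)):            # exchange the i-th component
        li = l.copy()
        li[i] = z_hat[i]
        out.append(li)
    return out
```

When $\hat{z}^*$ shares a coordinate with $l(B)$, the function returns an empty set, matching the guard $|\tau(\hat{z}^*,l(B))|=0$ of Theorem 3.1.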

3.3 Elitism

In PBB-MOEA, the solvers (the mini MOEA and the solver for $P(\hat{z}^*,B)$) must be called multiple times in order to obtain tight bounds. As the number of subregions increases, the computational cost undoubtedly becomes expensive. To circumvent this problem, we introduce a strategy called elitism to reduce the number of solver calls.

For a clearer presentation, we first introduce data tracking in PBB-MOEA. A subregion obtained after subdivision is stored with an index that will be inherited by the data stemming from this subregion. For instance, if we set the index of a subregion to $j$, then the index of each upper bound and lower bound with respect to this subregion will be $j$; besides, the indexes of the ideal point of the upper bound set and of the improved lower bound set will also be $j$.

For upper bounds, the elitism determines the subregions in which tight upper bounds are easy to find. The elite subregion collection, denoted by $\mathcal{E}_U$, can be backtracked via the indexes stored in $\mathcal{U}^{nds}$. In this way it is only necessary to apply the mini MOEA to $\mathcal{E}_U$.

For lower bounds, the elitism finds the subregions whose lower bounds are more likely to be dominated by $\mathcal{U}^{nds}$. From the viewpoint of geometry, the elite lower bounds are closer to $\mathcal{U}^{nds}$ than the others; they can be found in $\mathcal{L}_k$ by the Pareto dominance relation and are denoted by $\mathcal{L}^-$. Then, by the indexes stored in $\mathcal{L}^-$, the collection $\mathcal{E}_L$ of these "worst" subregions can be constructed. As a result, we only need to apply Algorithm 2 to $\mathcal{E}_L$.
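The index-based backtracking of elite subregions can be sketched as a simple dictionary lookup (the data layout below is hypothetical, chosen only to illustrate the bookkeeping):

```python
def backtrack_elites(subregions, bounds_with_indexes):
    """subregions: dict mapping index -> subregion;
    bounds_with_indexes: list of (index, bound) pairs, e.g. the entries of
    U^nds or L^-. Returns the elite subregion collection (E_U or E_L)."""
    elite_indexes = {idx for idx, _ in bounds_with_indexes}
    return {idx: subregions[idx]
            for idx in elite_indexes if idx in subregions}
```

Because every bound inherits the index of the subregion it stems from, collecting the indexes attached to $\mathcal{U}^{nds}$ (or $\mathcal{L}^-$) recovers exactly the subregions that need another solver call.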

In addition, to preserve the stability of the mini MOEA, PBB-MOEA reselects U\mathcal{E}_{U} from all subregions whenever a certain criterion is satisfied. This is called a repair operation. More details about elitism are discussed in subsection 3.5.

3.4 Discarding test

PBB-MOEA still uses the dominance relation-based discarding test.

Theorem 3.2.

Let 𝒰nds\mathcal{U}^{nds} be a nondominated upper bound set with respect to the current subregion collection \mathcal{B}, let (B)\mathcal{L}(B) be an overall lower bound set with respect to subregion BB generated by Algorithm 2. If for every l(B)l\in\mathcal{L}(B) there exists an upper bound u𝒰ndsu\in\mathcal{U}^{nds} such that ulu\preceq l, then the subregion BB does not contain any Pareto solution.

Proof.

Because (B)\mathcal{L}(B) is an overall lower bound set with respect to subregion BB, for each point xBx\in B, there exists a lower bound l(B)l\in\mathcal{L}(B) such that lF(x)l\preccurlyeq F(x). Then it is easy to see that for each point xBx\in B, there exists an upper bound u𝒰ndsu\in\mathcal{U}^{nds} such that uF(x)u\preceq F(x). Because uu is a feasible objective vector, BB does not contain any Pareto solution.

The discarding test is described in Algorithm 3, where 𝒰nds\mathcal{U}^{nds} is a nondominated upper bound set, \mathcal{B} is a subregion collection and \mathcal{L} is a lower bound set. Since data are tracked by indexes, line 1 collects the indexes of the subregions in \mathcal{B} and stores them in the set \mathcal{I}. Within the for-loop, a subregion is removed from \mathcal{B}, together with its lower bound set from \mathcal{L}, if every point of its lower bound set is dominated by 𝒰nds\mathcal{U}^{nds}.

Input : ,𝒰nds,\mathcal{L},\mathcal{U}^{nds},\mathcal{B};
Output : ,\mathcal{L},\mathcal{B};
1 Collect all indexes from \mathcal{B} and store the indexes in \mathcal{I};
2 for each ii\in\mathcal{I} do
3       if for every l(i)l\in\mathcal{L}^{(i)} there exists u𝒰ndsu\in\mathcal{U}^{nds} such that ulu\preceq l then
4            \(i)\mathcal{L}\leftarrow\mathcal{L}\backslash\mathcal{L}^{(i)}, \(i)\mathcal{B}\leftarrow\mathcal{B}\backslash\mathcal{B}^{(i)}.
5       end if
6      
7 end for
Algorithm 3 DiscardingTest(,𝒰nds,)(\mathcal{L},\mathcal{U}^{nds},\mathcal{B})
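A minimal sketch of Algorithm 3 on finite bound sets (the dict-based bookkeeping and the function names are our illustrative assumptions, not the paper's implementation; the relation ⪯ is implemented here as Pareto dominance for minimization, i.e., componentwise ≤ with at least one strict inequality):

```python
def dominates(u, l):
    """Pareto dominance for minimization: u <= l componentwise, u != l."""
    return all(ui <= li for ui, li in zip(u, l)) and \
           any(ui < li for ui, li in zip(u, l))

def discarding_test(lower_sets, u_nds, boxes):
    """Remove every subregion whose entire lower bound set is dominated
    by the nondominated upper bound set u_nds (Theorem 3.2).

    lower_sets: dict index -> list of lower bound vectors L^(i)
    u_nds:      list of nondominated upper bound vectors
    boxes:      dict index -> subregion data
    """
    for i in list(boxes):  # iterate over a copy of the index set I
        if all(any(dominates(u, l) for u in u_nds) for l in lower_sets[i]):
            del lower_sets[i]
            del boxes[i]
    return lower_sets, boxes
```

For example, a box whose lower bounds all lie above an upper bound in u_nds is deleted together with its lower bound set, while boxes with at least one nondominated lower bound survive.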

3.5 The complete framework

PBB-MOEA is given in Algorithm 4. The repair operation (lines 7-10) is performed when the iteration counter kk reaches 3n3n; otherwise the elitism (lines 12-18) is applied. PBB-MOEA stops when the Hausdorff distance between 𝒰nds\mathcal{U}^{nds} and k\mathcal{L}_{k} drops below a desired accuracy chosen from (0,0.02](0,0.02], or when the iteration counter kk reaches 6n6n. An approximation of the Pareto optimal set can also be obtained if the preimages of 𝒰nds\mathcal{U}^{nds} are output by the mini MOEA. In addition, at each iteration 𝒰nds\mathcal{U}^{nds} is reset to the empty set to release memory for storing new upper bounds.

Input : problem (P), termination criteria, repair criterion;
Output : k\mathcal{B}_{k}, k\mathcal{L}_{k}, 𝒰nds\mathcal{U}^{nds};
1 Set the iteration counter kk to 11, set the flag of 0\mathcal{B}_{0} to 1;
2 while termination criteria are not satisfied do
3      𝒰k\mathcal{U}_{k}\leftarrow\emptyset, k\mathcal{L}_{k}\leftarrow\emptyset, 𝒵k\mathcal{Z}_{k}^{*}\leftarrow\emptyset, 𝒰nds\mathcal{U}^{nds}\leftarrow\emptyset;
4       Construct ^k\mathcal{\hat{B}}_{k} by bisecting all subregions in k1\mathcal{B}_{k-1};
5       Calculate the lower bound set k\mathcal{L}_{k} for the collection ^k\mathcal{\hat{B}}_{k};
6       if repair criterion is satisfied then
7            Set all flags to 1;
8             Obtain 𝒰k\mathcal{U}_{k} and the ideal point set 𝒵k\mathcal{Z}_{k}^{*} by applying mini MOEA to ^k\mathcal{\hat{B}}_{k};
9             Obtain the collection of improved lower bound sets ^\hat{\mathcal{L}} by applying Algorithm 2 to (^k,k,𝒵k)(\mathcal{\hat{B}}_{k},\mathcal{L}_{k},\mathcal{Z}_{k}^{*});
10             Update lower bound set k^\mathcal{L}_{k}\leftarrow\hat{\mathcal{L}};
11      else
12            Construct U\mathcal{E}_{U} according to flags ;
13             Determine \mathcal{L}^{-} from k\mathcal{L}_{k};
14             Construct L\mathcal{E}_{L} according to flags in \mathcal{L}^{-};
15             Obtain the upper bound set 𝒰k\mathcal{U}_{k} by applying mini MOEA to U\mathcal{E}_{U};
16             Obtain the ideal point set 𝒵k\mathcal{Z}^{*}_{k} by applying mini MOEA to L\mathcal{E}_{L};
17             Obtain the collection of improved lower bound sets ^\hat{\mathcal{L}} by applying Algorithm 2 to (L,,𝒵k)(\mathcal{E}_{L},\mathcal{L}^{-},\mathcal{Z}^{*}_{k});
18             Update lower bound set k(k\)^\mathcal{L}_{k}\leftarrow(\mathcal{L}_{k}\backslash\mathcal{L}^{-})\cup\hat{\mathcal{L}};
19       end if
20      Determine the nondominated upper bound set 𝒰nds\mathcal{U}^{nds} from 𝒰k\mathcal{U}_{k};
21       Update the flags according to 𝒰nds\mathcal{U}^{nds};
22       k,k\mathcal{L}_{k},\mathcal{B}_{k}\leftarrowDiscardingTest(k,𝒰nds,^k\mathcal{L}_{k},\mathcal{U}^{nds},\mathcal{\hat{B}}_{k});
23       kk+1k\leftarrow k+1.
24 end while
Algorithm 4 PBB-MOEA

Next we address several issues left open during the implementation of PBB-MOEA.

Elitism. Besides an index, each subregion stores a boolean, called its flag, stating whether it is an elite: the flag is 1 if the subregion is an elite and 0 otherwise. When a subregion is bisected, the two new subregions inherit its flag. To perform the repair operation, line 7 sets the flags of all subregions to 1. In line 21 the flags of the elite subregions whose upper bounds are not in 𝒰nds\mathcal{U}^{nds} are set to 0.
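The index and flag bookkeeping can be sketched as follows; how the two children's indexes are assigned after a bisection is an assumption for illustration (the paper leaves this to the implementation), while the flag inheritance follows the text:

```python
from dataclasses import dataclass

@dataclass
class Subregion:
    lower: list    # lower corner of the box
    upper: list    # upper corner of the box
    index: int     # index inherited by the bounds/data computed on this box
    flag: int = 1  # 1 = elite, 0 = non-elite

def bisect(box, axis, left_index, right_index):
    """Split `box` at the midpoint of coordinate `axis`;
    both children inherit the parent's flag."""
    mid = 0.5 * (box.lower[axis] + box.upper[axis])
    left_upper = box.upper[:];  left_upper[axis] = mid
    right_lower = box.lower[:]; right_lower[axis] = mid
    left = Subregion(box.lower[:], left_upper, left_index, box.flag)
    right = Subregion(right_lower, box.upper[:], right_index, box.flag)
    return left, right
```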

Parallelization. To achieve parallelization, in line 4 all subregions are bisected simultaneously. Since all subregions have the same width, the bisection is always performed along the same coordinate. The parallelization of PBB-MOEA is communication-free, i.e., there is no need to design a communication protocol providing the processors with rules for requesting subproblems and exchanging data. For example, in line 8, the algorithm first creates a pool of subregions corresponding to several subproblems, then assigns each processor one subproblem at a time and lets it run the mini MOEA on that subproblem, and finally collects the upper bounds. Lines 9, 15, 16 and 17 are parallelized similarly.
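This communication-free pattern can be sketched with a Python worker pool (a thread-backed Pool keeps the snippet self-contained; swapping in multiprocessing.Pool gives process-level parallelism; mini_moea here is a mock stand-in for the actual solver, not the paper's code):

```python
from multiprocessing.dummy import Pool  # thread-backed; multiprocessing.Pool for real parallelism

def mini_moea(box):
    """Mock stand-in for one mini MOEA call on a subregion:
    returns a list of (mock) upper bounds for that box."""
    lo, hi = box
    mid = 0.5 * (lo + hi)
    return [(mid, mid)]

def parallel_upper_bounds(boxes, workers=4):
    """Each worker receives whole subproblems, one at a time; no data is
    exchanged between workers, and results are only collected at the end."""
    with Pool(workers) as pool:
        per_box = pool.map(mini_moea, boxes)   # order of results matches `boxes`
    return [u for bounds in per_box for u in bounds]
```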

Constraint handling. Two difficulties need to be overcome when PBB-MOEA is employed for solving (MOP): (i) determining the feasibility of subregions; (ii) finding feasible upper bounds.

To overcome the first difficulty, a feasibility test, identical to the one mentioned in ref18 , is added between lines 4 and 5.

To overcome the second one, the l1l_{1} exact penalty function is used in the mini MOEA to calculate fitness values (see ref6 ). For example, if the Tchebycheff metric is used to calculate the fitness values of individuals for (P):

FV(x)=max1im{λi|fi(x)zi|},\displaystyle FV(x)=\mathop{\max}\limits_{1\leq i\leq m}\{\lambda_{i}|f_{i}(x)-z_{i}^{*}|\},

then for (MOP), the fitness value can be calculated by

FV(x)=max1im{λi|fi(x)zi|}+ρj=1p|min{gj(x),0}|,\displaystyle FV(x)=\mathop{\max}\limits_{1\leq i\leq m}\{\lambda_{i}|f_{i}(x)-z_{i}^{*}|\}+\rho\mathop{\sum}\limits_{j=1}^{p}|\min\{g_{j}(x),0\}|,

where ρ>0\rho>0 is the penalty coefficient.
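As a concrete sketch, the penalized fitness value above can be computed as follows (the function is our illustration; it assumes constraints given in the form gj(x) ≥ 0, as in the formula):

```python
def penalized_tchebycheff(f_vals, z_star, weights, g_vals, rho=1.0):
    """Tchebycheff scalarization plus an l1 exact penalty on the violated
    constraints g_j(x) >= 0: only negative g_j contribute to the penalty."""
    tch = max(w * abs(f - z) for w, f, z in zip(weights, f_vals, z_star))
    penalty = rho * sum(abs(min(g, 0.0)) for g in g_vals)
    return tch + penalty
```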

Although the exact penalty function is employed, the mini MOEA does not guarantee the feasibility of upper bounds. Hence, in lines 8 and 15 of Algorithm 4, a filtering operation is added to remove upper bounds whose preimages are infeasible. The preimages can be output together with the corresponding upper bounds by the mini MOEA.

Furthermore, PBB-MOEA still calculates lower bounds by (3.3) for constrained subproblems, because an overall lower bound for the unconstrained subproblem is also an overall one for the constrained subproblem. In addition, 𝒰nds\mathcal{U}^{nds} is not reset to the empty set, in order to increase the number of solutions retained.

4 Convergence Results

In this section, we analyze the convergence properties of PBB-MOEA. We first introduce two lemmas which play a crucial role in the convergence proofs.

Lemma 4.

Let BB be a subregion in the collection k\mathcal{B}_{k}. If limkwk(B)i=0\lim\limits_{k\rightarrow\infty}w_{k}(B)_{i}=0 for i{1,,m}i\in\{1,\ldots,m\}, then for each xXBx^{*}\in X^{*}_{B}, we have

limkd(l(B),F(x))=0,\displaystyle\lim\limits_{k\rightarrow\infty}d(l(B),F(x^{*}))=0,

where l(B)l(B) is the lower bound for F(XB)F(X^{*}_{B}) calculated by (3.3).

Proof.

Since limkwk(B)i=0\lim\limits_{k\rightarrow\infty}w_{k}(B)_{i}=0, we have limkωk,(1)=0\lim\limits_{k\rightarrow\infty}\omega_{k,(1)}=0 and limkωk,()=0\lim\limits_{k\rightarrow\infty}\omega_{k,(\infty)}=0. Thus it follows that

limkmin{Li,1ωk,(),Li,ωk,(1)}=0.\lim\limits_{k\rightarrow\infty}\min\{L_{i,1}\omega_{k,(\infty)},L_{i,\infty}\omega_{k,(1)}\}=0.

Then, for each xXBx^{*}\in X^{*}_{B}, we have

0limkd(l(B),F(x))limki=1m(min{Li,1ωk,(),Li,ωk,(1)})2=0.\displaystyle 0\leq\lim\limits_{k\rightarrow\infty}d(l(B),F(x^{*}))\leq\lim\limits_{k\rightarrow\infty}\sqrt{\sum\limits^{m}_{i=1}(\min\{L_{i,1}\omega_{k,(\infty)},L_{i,\infty}\omega_{k,(1)}\})^{2}}=0.

The second inequality follows from (3.3) and the Lipschitz continuity of fif_{i}.

Lemma 5.

Let BB be a subregion in the collection k\mathcal{B}_{k}. If limkwk(B)i=0\lim\limits_{k\rightarrow\infty}w_{k}(B)_{i}=0 for i{1,,m}i\in\{1,\ldots,m\}, then for each xXBx^{*}\in X^{*}_{B}, we have

limkd(F(x),u(B))=0,\displaystyle\lim\limits_{k\rightarrow\infty}d(F(x^{*}),u(B))=0,

where u(B)u(B) is the image of any point in BB.

Proof.

This is a direct consequence of the Lipschitz continuity of fif_{i}.

Remark 3.

Lemmas 4 and 5 show that, for a single subregion, the distance between its Lipschitz lower bound and its Pareto optimal values decreases as its width decreases, and the same holds for the distance between its upper bounds and the Pareto optimal values.

Remark 4.

The conclusion of Lemma 4 also holds for an improved lower bound set.

With the help of the preceding lemmas, we can now prove some convergence results.

Theorem 4.1.

Let {k}k\{\mathcal{B}_{k}\}_{k\in\mathbb{N}} be the sequence of subregion collections generated by PBB-MOEA, and let {k}k\{\mathcal{L}_{k}\}_{k\in\mathbb{N}} and {𝒰k}k\{\mathcal{U}_{k}\}_{k\in\mathbb{N}} be the sequences of lower and upper bound sets, respectively. If limkwi=0\lim\limits_{k\rightarrow\infty}w_{i}=0 for i{1,,m}i\in\{1,\ldots,m\}, then we have

limkdH(F(Xk),k)=0\displaystyle\lim\limits_{k\rightarrow\infty}d_{H}(F(X^{*}_{k}),\mathcal{L}_{k})=0 (4.1)

and

limkdH(F(Xk),𝒰k)=0.\displaystyle\lim\limits_{k\rightarrow\infty}d_{H}(F(X^{*}_{k}),\mathcal{U}_{k})=0. (4.2)
Proof.

First, let us prove (4.1). For each xXkx^{*}\in X^{*}_{k} there exists a subregion BkB\in\mathcal{B}_{k} such that xBx^{*}\in B. Lemma 4 then yields

0limkd(F(x),k)limkd(F(x),l(B))=0,xXk,\displaystyle 0\leq\lim\limits_{k\rightarrow\infty}d(F(x^{*}),\mathcal{L}_{k})\leq\lim\limits_{k\rightarrow\infty}d(F(x^{*}),l(B))=0,\quad\forall x^{*}\in X^{*}_{k},

thus we obtain limkdh(F(Xk),k)=0\lim\limits_{k\rightarrow\infty}d_{h}(F(X^{*}_{k}),\mathcal{L}_{k})=0.

On the other hand, for each lkl\in\mathcal{L}_{k}, there exists a subregion BkB\in\mathcal{B}_{k}, such that l(B)=ll(B)=l. Using Lemma 4, we have

0limkd(l,F(Xk))limkd(l(B),F(x^))=0,x^XB,\displaystyle 0\leq\lim\limits_{k\rightarrow\infty}d(l,F(X^{*}_{k}))\leq\lim\limits_{k\rightarrow\infty}d(l(B),F(\hat{x}))=0,\quad\forall\hat{x}\in X^{*}_{B},

which means that limkdh(k,F(Xk))=0\lim\limits_{k\rightarrow\infty}d_{h}(\mathcal{L}_{k},F(X^{*}_{k}))=0. As a result, we obtain

limkdH(k,F(Xk))=0.\lim\limits_{k\rightarrow\infty}d_{H}(\mathcal{L}_{k},F(X^{*}_{k}))=0.

The proof of (4.2) is similar to the proof above and so is omitted.

Corollary 1.

Let {k}k\{\mathcal{B}_{k}\}_{k\in\mathbb{N}} be the sequence of subregion collections generated by PBB-MOEA, and let {k}k\{\mathcal{L}_{k}\}_{k\in\mathbb{N}} and {𝒰k}k\{\mathcal{U}_{k}\}_{k\in\mathbb{N}} be the sequences of lower and upper bound sets, respectively. If limkwi=0\lim\limits_{k\rightarrow\infty}w_{i}=0 for i{1,,m}i\in\{1,\ldots,m\}, then we have

limkdH(k,𝒰k)=0.\lim\limits_{k\rightarrow\infty}d_{H}(\mathcal{L}_{k},\mathcal{U}_{k})=0.
Proof.

The conclusion can be derived from Lemmas 4 and 5.
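This is exactly the quantity monitored by PBB-MOEA's stopping rule (subsection 3.5): the Hausdorff distance between the finite sets 𝒰nds and ℒk. For finite point sets it can be computed directly (a minimal sketch; function names are our own):

```python
import math

def hausdorff(A, B):
    """Hausdorff distance d_H between two finite point sets A and B."""
    def one_sided(P, Q):
        # d_h(P, Q): largest distance from a point of P to the set Q
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(one_sided(A, B), one_sided(B, A))
```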

Now, we show that PBB-MOEA creates a covering of the Pareto set in each iteration.

Lemma 6.

Let {k}k\{\mathcal{B}_{k}\}_{k\in\mathbb{N}} be a sequence of subregion collections generated by PBB-MOEA. Then, for the Pareto set XX^{*} of (P), we have

XBkBB1BB0B.\displaystyle X^{*}\subset\cdots\subset\bigcup_{B\in\mathcal{B}_{k}}B\subset\cdots\subset\bigcup_{B\in\mathcal{B}_{1}}B\subset\bigcup_{B\in\mathcal{B}_{0}}B.
Proof.

This conclusion is guaranteed by Theorem 3.2 and the way k\mathcal{B}_{k} is constructed.

Based on Lemma 6, we have the following theorem.

Theorem 4.2.

Let {k}k\{\mathcal{B}_{k}\}_{k\in\mathbb{N}} be a sequence of subregion collections generated by PBB-MOEA. For each Pareto solution xXx^{*}\in X^{*}, for all iteration counters kk\in\mathbb{N}, there exists a subregion B(x,k)kB(x^{*},k)\in\mathcal{B}_{k}, such that xB(x,k)x^{*}\in B(x^{*},k). Furthermore, Xk=0xXB(x,k).X^{*}\subset\bigcap\limits_{k=0}^{\infty}\bigcup\limits_{x\in X^{*}}B(x,k).

Proof.

The existence of B(x,k)B(x^{*},k) is guaranteed by Lemma 6. The first conclusion can be proved by mathematical induction, and the second conclusion follows from the first.

Theorem 4.3.

Let {k}k\{\mathcal{B}_{k}\}_{k\in\mathbb{N}} be the sequence of subregion collections generated by PBB-MOEA. If limkwk(B)i=0\lim\limits_{k\rightarrow\infty}w_{k}(B)_{i}=0 for i{1,,m}i\in\{1,\ldots,m\} and BkB\in\mathcal{B}_{k}, then there must exist a subcollection kk\mathcal{B}^{*}_{k}\subset\mathcal{B}_{k} for all kk\in\mathbb{N}, such that the following hold:

limkdH(X,k)=0,\displaystyle\lim\limits_{k\rightarrow\infty}d_{H}(X^{*},\mathcal{B}^{*}_{k})=0, (4.3)
limkdH(F(X),𝒰(k))=0,\displaystyle\lim\limits_{k\rightarrow\infty}d_{H}(F(X^{*}),\mathcal{U}(\mathcal{B}^{*}_{k}))=0, (4.4)
limkdH(F(X),(k))=0,\displaystyle\lim\limits_{k\rightarrow\infty}d_{H}(F(X^{*}),\mathcal{L}(\mathcal{B}^{*}_{k}))=0, (4.5)

where 𝒰(k):=Bku(B)\mathcal{U}(\mathcal{B}^{*}_{k}):=\bigcup_{B\in\mathcal{B}^{*}_{k}}u(B) and (k):=Bkl(B)\mathcal{L}(\mathcal{B}^{*}_{k}):=\bigcup_{B\in\mathcal{B}^{*}_{k}}l(B).

Proof.

Set k=xXB(x,k)\mathcal{B}^{*}_{k}=\bigcup_{x\in X^{*}}B(x,k). By Theorem 4.2, it is easy to see that

limkd(x,k)=0xX,\displaystyle\lim\limits_{k\rightarrow\infty}d(x^{*},\mathcal{B}^{*}_{k})=0\quad\forall x^{*}\in X^{*},

this entails that limkdh(X,k)=0\lim\limits_{k\rightarrow\infty}d_{h}(X^{*},\mathcal{B}^{*}_{k})=0. On the other hand, for each xkx\in\mathcal{B}_{k}^{*}, there exists a subregion B(x^,k)kB(\hat{x}^{*},k)\in\mathcal{B}^{*}_{k}, such that xB(x^,k)x\in B(\hat{x}^{*},k). Therefore, we have

0limkd(x,X)limkd(x,x^)=limki=1mwk(B(x^,k))i2=0,\displaystyle 0\leq\lim\limits_{k\rightarrow\infty}d(x,X^{*})\leq\lim\limits_{k\rightarrow\infty}d(x,\hat{x}^{*})=\lim\limits_{k\rightarrow\infty}\sqrt{\sum_{i=1}^{m}w_{k}(B(\hat{x}^{*},k))_{i}^{2}}=0,

which means that limkdh(k,X)=0\lim\limits_{k\rightarrow\infty}d_{h}(\mathcal{B}_{k}^{*},X^{*})=0. This proves (4.3).

Now, we turn to the relation between the upper bound set of k\mathcal{B}^{*}_{k} and the Pareto front. By the construction of k\mathcal{B}^{*}_{k} and Lemma 5, we have the following inequality

0limkd(F(x),𝒰(k))limkd(F(x),u(B(x,k)))=0xX.\displaystyle 0\leq\lim\limits_{k\rightarrow\infty}d(F(x^{*}),\mathcal{U}(\mathcal{B}^{*}_{k}))\leq\lim\limits_{k\rightarrow\infty}d(F(x^{*}),u(B(x^{*},k)))=0\quad\forall x^{*}\in X^{*}.

On the other hand, for each u𝒰(k)u\in\mathcal{U}(\mathcal{B}^{*}_{k}), there exists a point x^X\hat{x}\in X^{*}, such that u(B(x^,k))=uu(B(\hat{x},k))=u. Using Lemma 5 again, we obtain

0limkd(u,F(X))limkd(u(B(x^,k)),F(x^))=0u𝒰(k).\displaystyle 0\leq\lim\limits_{k\rightarrow\infty}d(u,F(X^{*}))\leq\lim\limits_{k\rightarrow\infty}d(u(B(\hat{x},k)),F(\hat{x}))=0\quad\forall u\in\mathcal{U}(\mathcal{B}^{*}_{k}).

This shows (4.4). The proof of (4.5) is similar to the above proof.

Theorem 4.3 shows that, at each iteration, there exists a subcollection of the subregion collection such that the sequence of these subcollections converges to the Pareto set. In practice, it is hard to determine such a subcollection exactly, so PBB-MOEA uses the elite subregions to approximate it. Intuitively, the collection of elite subregions will closely resemble this subcollection once the width of the subregions is small enough.

5 Experimental Results

PBB-MOEA has been implemented in Python 3.8 using the packages numpy, scipy and multiprocessing. All experiments were run on a computer with an Intel(R) Core(TM) i7-10700 CPU and 32 GB of RAM under Windows 10 Professional.

We hybridize the branch and bound method with MOEA/D-DEref7 and NSGA-IIref10 ; the resulting algorithms are called PBB-MOEA/D and PBB-NSGAII, respectively. For all test instances, the population size of the two mini MOEAs is set to 10 and the maximum number of generations to 20. Other parameter settings of the mini MOEAs can be found in ref7 ; ref10 .

Test instance 5.1 Consider the following optimization problem from ref23 :

F(x)=(x1min(|x11|,1.5x1)+x2+1)withxi[0,2],i=1,2.F(x)=\begin{pmatrix}x_{1}\\ \min(|x_{1}-1|,1.5-x_{1})+x_{2}+1\end{pmatrix}\quad with\quad x_{i}\in[0,2],\;i=1,2.

The Pareto optimal set of this problem is disconnected, which leads to its Pareto front consisting of two segments: one joins the points (0, 2) and (1, 1), and the other, the points (1.5, 1) and (2, 0.5).
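For reference, the objective of test instance 5.1 can be evaluated as follows (the function name is our choice); the segment endpoints quoted above can be checked directly:

```python
def f_instance_51(x1, x2):
    """Objective of test instance 5.1; its Pareto front consists of two
    disconnected segments because of the min(.) term in f2."""
    return (x1, min(abs(x1 - 1.0), 1.5 - x1) + x2 + 1.0)
```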

Figure 2 shows the results of PBB-MOEA/D on this instance. After 12 iterations, PBB-MOEA/D outputs 96 subregions covering the Pareto set; these subregions are visualized as grey boxes in Figure 2(a). In Figure 2(b), a total of 479 upper bounds (blue dots) found by the mini MOEA and 96 lower bounds (red dots) enclose the Pareto front from above and below.

Refer to caption
Refer to caption
Figure 2: The results of test instance 5.1.: (a) A covering of the PS of test instance 5.1 generated by PBB-MOEA/D; (b) Lower and upper bounds for the PF of test instance 5.1 generated by PBB-MOEA/D.

In the following, we use two instances to illustrate the difference between PBB-MOEA/D and a basic branch and bound (BB) algorithm. The basic BB uses the midpoints of subregions to compute upper bounds and uses (3.3) to compute lower bounds. Here the number of subregions (BNV) serves as a performance measure for the tightness of the bounds.
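The basic BB's bounding step can be sketched as follows. The upper bound is simply the image of the box midpoint; the lower bound shown here shifts the midpoint image down by min{L_{i,1}ω(∞), L_{i,∞}ω(1)}, mirroring the term that appears in the proof of Lemma 4 — the exact form of (3.3) is not reproduced in this chunk, so this expression is an assumption for illustration only:

```python
def midpoint_upper_bound(F, lo, hi):
    """Upper bound: image of the box midpoint (a feasible objective vector)."""
    mid = [0.5 * (a + b) for a, b in zip(lo, hi)]
    return F(mid)

def lipschitz_lower_bound(F, lo, hi, L1, Linf):
    """Hypothetical Lipschitz lower bound in the spirit of (3.3): shift the
    midpoint image down by min{L_{i,1}*w_inf, L_{i,inf}*w_1}, where w_1 and
    w_inf are the 1- and infinity-norms of the half-width vector."""
    half = [0.5 * (b - a) for a, b in zip(lo, hi)]
    w1, winf = sum(half), max(half)
    mid_img = midpoint_upper_bound(F, lo, hi)
    return [fi - min(L1[i] * winf, Linf[i] * w1) for i, fi in enumerate(mid_img)]
```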

Test instance 5.2 Consider the biobjective optimization problem from ref32 :

F(x)=(1ei=1n(xi1n)21ei=1n(xi+1n)2)withxi[2,2],n=3.F(x)=\begin{pmatrix}1-e^{-\sum\limits_{i=1}^{n}(x_{i}-\frac{1}{\sqrt{n}})^{2}}\\ 1-e^{-\sum\limits_{i=1}^{n}(x_{i}+\frac{1}{\sqrt{n}})^{2}}\end{pmatrix}\quad with\quad x_{i}\in[-2,2],\;n=3.

Figure 3 gives a visual comparison of the two algorithms on test instance 5.2. Figure 3(a) shows the evolution of the BNVs over the iterations: although the basic BB algorithm handles this instance well, the curve of PBB-MOEA/D mostly lies below that of the basic BB algorithm, indicating that PBB-MOEA/D still performs slightly better. Figure 3(b) presents the upper bounds found by the two algorithms at the 9-th iteration. The upper bounds found by PBB-MOEA/D (blue dots) mostly dominate those found by the basic BB algorithm (red cubes), which implies that they are tighter. Figures 3(c) and 3(d) illustrate the influence of the different upper and lower bounds on the discarding test. PBB-MOEA/D finds a total of 884 upper bounds, clearly more than the number of upper bounds in Figure 3(d). As a result, PBB-MOEA/D is able to remove more subregions in the same discarding test.

Refer to caption
Refer to caption
Refer to caption
Refer to caption
Figure 3: PBB-MOEA/D versus the basic BB algorithm on test instance 5.2.:(a) The curves of BNVs of PBB-MOEA and the basic BB algorithm; (b) Upper bounds generated by two algorithms at the 9-th iteration; (c) Bounds generated by PBB-MOEA/D; (d) Bounds generated by the basic BB algorithm.

Test instance 5.3 Consider the following biobjective optimization problem from ref10 , where the dimension nn\in\mathbb{N} of the variable space can be chosen arbitrarily:

F(x)=(x1g(x)(1(f1(x)g(x))2))F(x)=\begin{pmatrix}x_{1}\\ g(x)(1-(\frac{f_{1}(x)}{g(x)})^{2})\end{pmatrix}

where g(x)=1+9(i=2nxi)n1g(x)=1+\frac{9(\sum\limits_{i=2}^{n}x_{i})}{n-1} and xi[0,1],n.x_{i}\in[0,1],\;n\in\mathbb{N}. This instance is the well-known ZDT2 problem, which has a nonconvex Pareto front; its Pareto set is given by x1[0,1]x_{1}\in[0,1] and xi=0x_{i}=0 for i2i\geq 2.
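A plain-Python evaluation of ZDT2 (the function name is ours); on the Pareto set g(x) = 1, so the front is f2 = 1 - f1², which is nonconvex:

```python
def zdt2(x):
    """ZDT2 objective; Pareto set: x1 in [0, 1], x_i = 0 for i >= 2."""
    n = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (n - 1)
    f2 = g * (1.0 - (f1 / g) ** 2)
    return (f1, f2)
```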

The illustrations in Figure 4 show the results for this instance with n=10n=10. Figures 4(a) and 4(b) show that PBB-MOEA/D handles this instance very well. In Figure 4(c), the BNVs grow exponentially when the basic BB algorithm is used, which means that adequate bounds are not found and the algorithm fails to solve this instance. In contrast, with the help of the MOEA, PBB-MOEA/D successfully reduces the number of subregions, indicating its superior performance.

Refer to caption
Refer to caption
Refer to caption
Figure 4: The results of test instance 5.3 with n=10n=10: (a) A covering of the PS of test instance 5.3 generated by PBB-MOEA/D; (b) Lower and upper bounds for the PF of test instance 5.3 generated by PBB-MOEA/D; (c) The curves of BNVs of PBB-MOEA and the basic BB algorithm.

Since MOEAs alone can also handle the above instances well, one may ask whether the hybridization of the branch and bound method with MOEAs is necessary at all. In the following, we compare PBB-MOEA with two well-known MOEAs: NSGA-II and MOEA/D-DE.

Test instance 5.4 Consider the biobjective optimization problem from ref33 :

F(x)=(i=1nxi1i=1n(1wi(xi)))F(x)=\begin{pmatrix}\sum\limits_{i=1}^{n}x_{i}\\ 1-\sum\limits_{i=1}^{n}(1-w_{i}(x_{i}))\end{pmatrix}

where wj(z)={0.01e(z/20)2.5if j=1,20.01e(z/15)if j>2w_{j}(z)=\left\{\begin{array}[]{ll}0.01e^{-(z/20)^{2.5}}&\mbox{if }j=1,2\\ 0.01e^{-(z/15)}&\mbox{if }j>2\end{array}\right. and xi[0,40]x_{i}\in[0,40], nn\in\mathbb{N}.
This test instance is multimodal, i.e., different Pareto solutions may have the same objective values.

Figure 5 compares the three algorithms on test instance 5.4 with n=3n=3. For both NSGA-II and MOEA/D-DE, the quality of a solution set is evaluated in the objective space, so the distribution of solutions in the variable space does not receive enough attention. As a result, although they approximate the whole Pareto front well in Figures 5(d) and 5(e), only a part of the Pareto set is approximated by the solutions they find, as shown in Figures 5(a) and 5(b). In contrast, the branch and bound method tackles the problem from the variable space; accordingly, Figures 5(c) and 5(f) show that PBB-MOEA/D approximates not only the Pareto front but also the whole Pareto set.

Refer to caption
Refer to caption
Refer to caption
Refer to caption
Refer to caption
Refer to caption
Figure 5: PBB-MOEA/D versus two MOEAs on test instance 5.4 with n=3n=3: (a) The solutions generated by NSGA-II; (b) The solutions generated by MOEA/D-DE; (c) The solutions generated by PBB-MOEA/D; (d) The upper bounds generated by NSGA-II; (e) The upper bounds generated by MOEA/D-DE; (f) The upper bounds generated by PBB-MOEA/D.

In the next test instance we validate the contribution of the branch and bound method to the search capability of MOEAs.

Test instance 5.5 Consider the three-objective optimization problem from ref34 :

F(x)=(0.5(x12+x22)+sin(x12+x22)(3x12x2+4)28+(x1x2+1)227+151x12+x22+11.1exp(x12x22))withxi[3,3],i=1,2.F(x)=\begin{pmatrix}0.5(x_{1}^{2}+x_{2}^{2})+\sin(x_{1}^{2}+x_{2}^{2})\\ \frac{(3x_{1}-2x_{2}+4)^{2}}{8}+\frac{(x_{1}-x_{2}+1)^{2}}{27}+15\\ \frac{1}{x_{1}^{2}+x_{2}^{2}+1}-1.1\exp(-x_{1}^{2}-x_{2}^{2})\end{pmatrix}\quad with\quad x_{i}\in[-3,3],\;i=1,2.

The comparison results are shown in Figure 6. Compared to the two pure MOEAs, the two hybrid algorithms approximate both the Pareto set and the Pareto front very well. It is worth noting that the hybrid algorithms outperform the two pure MOEAs even though only mini MOEAs are used inside them, indicating that the branch and bound method enhances the MOEAs' search capability by limiting the search region.

Refer to caption
Refer to caption
Refer to caption
Refer to caption
Refer to caption
Refer to caption
Refer to caption
Refer to caption
Figure 6: PBB-MOEA versus MOEA/D-DE and NSGA-II on test instance 5.5.: (a) The solutions generated by MOEA/D-DE; (b) Upper bounds generated by MOEA/D-DE; (c) The solutions generated by PBB-MOEA/D; (d) Upper bounds generated by PBB-MOEA/D; (e) The solutions generated by NSGA-II; (f) Upper bounds generated by NSGA-II; (g) The solutions generated by PBB-NSGAII; (h) Upper bounds generated by PBB-NSGAII.

Finally, we consider a multiobjective optimization problem with nonconvex constraints.

Test instance 5.6 Consider the constrained optimization problem from ref10 :

F(x)=(x1x2)F(x)=\begin{pmatrix}x_{1}\\ x_{2}\end{pmatrix}

subject to the constraints

g1=x12+x2210.1cos(16arctan(x1/x2))0,\displaystyle g_{1}=x_{1}^{2}+x_{2}^{2}-1-0.1\cos(16\arctan(x_{1}/x_{2}))\geq 0,
g2=0.5(x10.5)2(x20.5)20,\displaystyle g_{2}=0.5-(x_{1}-0.5)^{2}-(x_{2}-0.5)^{2}\geq 0,
0x1π,\displaystyle 0\leq x_{1}\leq\pi,
0x2π.\displaystyle 0\leq x_{2}\leq\pi.
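A direct feasibility check for this instance can be sketched as follows; math.atan2(x1, x2) replaces arctan(x1/x2) to avoid the division by zero at x2 = 0 (the two expressions agree for x2 > 0):

```python
import math

def feasible_56(x1, x2):
    """Feasibility check for test instance 5.6: both constraints must hold."""
    g1 = x1**2 + x2**2 - 1.0 - 0.1 * math.cos(16.0 * math.atan2(x1, x2))
    g2 = 0.5 - (x1 - 0.5)**2 - (x2 - 0.5)**2
    return g1 >= 0.0 and g2 >= 0.0
```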

We compare PBB-MOEA/D with MOEA/D-DE and the basic BB algorithm on this instance. The population size of MOEA/D-DE is set to 200, the maximal number of generations to 300, and the penalty coefficient ρ\rho to 1. The natural interval extension mentioned in ref27 is employed in the feasibility test.

Since FF is the identity, the Pareto optimal set and the Pareto front coincide. Figure 7 compares the three algorithms on this instance. Unlike in the box-constrained cases, the basic BB algorithm does not find enough feasible upper bounds to approximate the complete Pareto front. By contrast, thanks to the mini MOEA, PBB-MOEA/D finds far more feasible points than the basic BB algorithm does, and it performs similarly to MOEA/D-DE. This again confirms that the branch and bound method enhances the MOEAs' search capability.

Refer to caption
Refer to caption
Refer to caption
Figure 7: The comparison of three algorithms on test instance 5.6.: (a) The solutions/feasible upper bounds generated by MOEA/D-DE; (b) The solutions/feasible upper bounds generated by PBB-MOEA/D; (c) The solutions/feasible upper bounds generated by the basic BB algorithm.

6 Conclusions

An algorithmic framework (PBB-MOEA), which hybridizes the branch and bound method with MOEAs, has been presented for solving nonconvex MOPs. PBB-MOEA combines the advantages of the branch and bound method and MOEAs. On the one hand, PBB-MOEA inherits global convergence results and easy parallelization from the branch and bound method. On the other hand, PBB-MOEA obtains tight bounds with the help of the search power of an MOEA, and these tight bounds in turn reduce the number of subregions. Numerical experiments show that the resulting hybrid algorithms solve multiobjective optimization problems more effectively than either the branch and bound algorithm or MOEAs alone.

Acknowledgements.
This work is supported by the Major Program of National Natural Science Foundation of China (Nos. 11991020, 11991024), the General Program of National Natural Science Foundation of China (No. 11971084) and the Natural Science Foundation of Chongqing (No. cstc2019jcyj-zdxmX0016).

References

  • (1) Androulakis, I.P., Maranas, C.D., Floudas, C.A. α\alphaBB: A global optimization method for general constrained nonconvex problems. J. Glob. Optim., 7(4): 337-363 (1995)
  • (2) Chipperfield, A., Fleming, P. Multi-objective gas turbine engine controller design using genetic algorithms. IEEE. Trans. Ind. Electron., 43(5): 583-587 (1996)
  • (3) Coello, C.A.C., Lamont, G.B. Applications of Multi-Objective Evolutionary Algorithms. Singapore: World Scientific, 2004
  • (4) Deb, K., Jain, H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. IEEE. Trans. Evol. Comput., 18(4): 577-601 (2013)
  • (5) Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms. John Wiley & Sons, Inc, 2001
  • (6) Deb, K., Pratap, A., Agarwal, S., et al. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE. Trans. Evol. Comput., 6(2): 182-197 (2002)
  • (7) Dellnitz, M., Schütze, O., Hestermeyer, T. Covering Pareto sets by multilevel subdivision techniques. J. Optim. Theory. Appl., 124(1): 113-136 (2005)
  • (8) Ehrgott, M., Shao, L.Z., Schöbel, A. An approximation algorithm for convex multi-objective programming problems. J. Glob. Optim., 50(3): 397-416 (2011)
  • (9) Eichfelder, G., Kirst, P., Meng, L., et al. A general branch-and-bound framework for continuous global multiobjective optimization. J. Glob. Optim., 80(1): 195-227 (2021)
  • (10) Fernández, J., Tóth, B. Obtaining the efficient set of nonlinear biobjective optimization problems via interval branch-and-bound methods. Comput. Optim. Appl., 42(3): 393-419 (2009)
  • (11) Fonseca, C.M., Fleming, P.J. Multiobjective genetic algorithms made easy: selection sharing and mating restriction. First International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications. IEEE Press, Piscataway, NJ: 45-52 (1995)
  • (12) Hillermeier, C. Nonlinear Multiobjective Optimization: A Generalized Homotopy Approach. Springer Science & Business Media, 2001
  • (13) Hillermeier, C., Jahn, J. Multiobjective optimization: survey of methods and industrial applications. Surv. Math. Ind., 11: 1-42 (2005)
  • (14) Hwang, C.L., Masud, A.S.M. Multiple Objective Decision Making–Methods and Applications: A State-of-the-art Survey. Springer Science & Business Media, 2012
  • (15) Jahn, J. Multiobjective search algorithm with subdivision technique. Comput. Optim. Appl., 35(2): 161-175 (2006)
  • (16) Jaimes, A.L., Coello, C.A.C. Multi-objective evolutionary algorithms: a review of the state-of-the-art and some of their applications in chemical engineering. MULTI-OBJECTIVE OPTIMIZATION: Techniques and Application in Chemical Engineering, Rangaiah G P, Singapore: World Scientific, 63-92 (2017)
  • (17) Jain, H., Deb, K. An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: handling constraints and extending to an adaptive approach. IEEE. Trans. Evol. Comput., 18(4): 602-622 (2013)
  • (18) Jones, D.R., Perttunen, C.D., Stuckman, B.E. Lipschitzian optimization without the Lipschitz constant. J. Optim. Theory. Appl., 79(1): 157-181 (1993)
  • (19) Klamroth, K., Lacour, R., Vanderpooten, D. On the representation of the search region in multi-objective optimization. Eur. J. Oper. Res., 245(3): 767-778 (2015)
  • (20) Li, H., Zhang, Q. Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II. IEEE. Trans. Evol. Comput., 13(2): 284-302 (2008)
  • (21) Neumaier, A. Interval Methods for Systems of Equations. Cambridge university press, 1990
  • (22) Niebling, J., Eichfelder, G. A branch–and–bound-based algorithm for nonconvex multiobjective optimization. SIAM. J. Optim., 29(1): 794-821 (2019)
  • (23) Pareto, V. Manual of Political Economy: A Critical and Variorum Edition. OUP Oxford, 2014
  • (24) Paulavičius, R., Žilinskas, J. Analysis of different norms and corresponding Lipschitz constants for global optimization. Technol. Econ. Dev. Econ, 12(4): 301-306 (2006)
  • (25) Qi, Y., Ma, X., Liu, F., et al. MOEA/D with adaptive weight adjustment. Evol. Comput., 22(2): 231-264 (2014)
  • (26) Schäffler, S., Schultz, R., Weinzierl, K. Stochastic method for the solution of unconstrained vector optimization problems. J. Optim. Theory. Appl., 114(1): 209-222 (2002)
  • (27) Skulimowski, A.M.J. Classification and properties of dominating points in vector optimization. In Kleinschmidt P, Radermacher F J, Scweitzer W and Wildermann H editors, Methods of Operations Research 58: 99-112. Frankfurt am Main, Germany: Athenum Verlag, 1989
  • (28) Tapia, M.G.C., Coello, C.A.C. Applications of multi-objective evolutionary algorithms in economics and finance: a survey. Proceedings of 2007 IEEE CEC: 532-539 (2007)
  • (29) Vlennet, R., Fonteix, C., Marc, I. Multicriteria optimization using a genetic algorithm for determining a Pareto set. Int. J. Syst. Sci., 27(2): 255-260 (1996)
  • (30) Zhang, Q., Li, H. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE. Trans. Evol. Comput., 11(6): 712-731 (2007)
  • (31) Žilinskas, A. A one-step worst-case optimal algorithm for bi-objective univariate optimization. Optim. Lett., 8(7): 1945-1960 (2014)
  • (32) Žilinskas, A., Gimbutienė, G. On one-step worst-case optimal trisection in univariate bi-objective Lipschitz optimization. Commun. Nonlinear. Sci. Numer. Simul., 35: 123-136 (2016)
  • (33) Žilinskas, A., Žilinskas, J. Adaptation of a one-step worst-case optimal univariate algorithm of bi-objective Lipschitz optimization to multidimensional problems. Commun. Nonlinear. Sci. Numer. Simul., 21(1-3): 89-98 (2015)
  • (34) Zitzler, E., Laumanns, M., Thiele, L. SPEA2: Improving the strength Pareto evolutionary algorithm. TIK-report 103, (2001)