
Nguyen Ngoc Luan
Department of Mathematics and Informatics, Hanoi National University of Education, 136 Xuan Thuy, Hanoi, Vietnam
[email protected]

Do Sang Kim
Department of Applied Mathematics, Pukyong National University, Busan, Korea
[email protected]

Nguyen Dong Yen
Institute of Mathematics, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Hanoi 10307, Vietnam
[email protected]

Two Optimal Value Functions in Parametric Conic Linear Programming
Dedicated to Professor Franco Giannessi on the occasion of his 85th birthday.

Nguyen Ngoc Luan · Do Sang Kim · Nguyen Dong Yen
(Received: date / Accepted: date)
Abstract

We consider the conic linear program defined by a closed convex cone in a Euclidean space and a matrix, in which the right-hand-side vector of the constraint system and the vector defining the objective function are subject to change. Using the strict feasibility condition, we prove the local Lipschitz continuity and obtain some differentiability properties of the optimal value function of the problem under right-hand-side perturbations. For the optimal value function under linear perturbations of the objective function, similar differentiability properties are obtained under the assumption that both the primal problem and the dual problem are strictly feasible.

Keywords:
Conic linear programming · Primal problem · Dual problem · Optimal value function · Lipschitz continuity · Differentiability properties · Increment estimates
MSC:
49K40 · 90C31 · 90C25 · 90C30

1 Introduction

If the feasible region or the objective function of a mathematical programming problem depends on a parameter, then the optimal value of the problem is a function of the parameter. In general, the optimal value function is a fairly complicated function. Continuity properties (resp., differentiability properties) of the optimal value function in parametric mathematical programming are usually classified as results on stability (resp., on differential stability) of optimization problems. Many results in this direction can be found in the books (Bonnans_Shapiro_2000, Chapters 4, 5), (Mordukhovich_Book_I, Chapter 4) and (Mordukhovich_Book_II, Chapter 5), the papers [4-13], the dissertation An_Thesis, and the references therein.

In 2001, Gauvin Gauvin_2001 investigated the sensitivity of the optimal value of a parametric linear programming problem. He gave separate formulas for the increment of the optimal value with respect to perturbations of the right-hand-side vector of the constraint system, the cost vector, and the coefficients of the matrix defining the linear constraint system. In 2016, Thieu Thieu_2016 established lower and upper bounds for the increment of the optimal value of a parametric linear program in which both the objective vector and the right-hand-side vector of the constraint system are subject to perturbations. He also showed that the appearance of an extra second-order term in these estimates cannot be avoided.

Conic linear programming is a natural extension of linear programming. In a linear program, a feasible point usually has nonnegative components, i.e., it belongs to the nonnegative orthant of a Euclidean space, and the constraints are written as linear inequalities. In a conic linear program, the constraint is written as an inequality in which the ordering cone is a closed convex cone in a Euclidean space. Conic linear programs can be used to model many applications that cannot be formulated as linear programs (see, e.g., (BN_2001, Section 2.2) and (Luenberger_Ye_2016, Chapter 6)). This is the reason why these special convex optimization problems have attracted much attention from researchers (see Ben-Tal and Nemirovski BN_2001, Bonnans and Shapiro Bonnans_Shapiro_2000, Luenberger and Ye Luenberger_Ye_2016, Chuong and Jeyakumar Chuong_Jeyakumar_2017, and the references therein).

It is of interest to know whether results similar to those of Gauvin Gauvin_2001 and Thieu Thieu_2016 can be obtained for conic linear programs. The aim of this paper is to show that the results of Gauvin_2001 admit certain generalizations in conic linear programming, and that some differentiability properties, as well as a local Lipschitz continuity property, can be established for the two related optimal value functions. The obtained results are analyzed via four concrete examples, which show that the optimal value functions of a conic linear program are more complicated than the corresponding optimal value functions in linear programming.

The remaining part of our paper has six sections. Section 2 contains some definitions, a strong duality theorem in conic linear programming, and a lemma. Section 3 is devoted to the locally Lipschitz continuity property of the optimal value function of a conic linear program under right-hand-side perturbations. Differentiability properties of this optimal value function are studied in Section 4. Some increment formulas for the function are established in Section 5. The optimal value function of a conic linear program under linear perturbations of the objective function is studied in Section 6. The final section presents some concluding remarks.

2 Preliminaries

One says that a nonempty subset $K$ of the Euclidean space $\mathbb{R}^m$ is a cone if $tK\subset K$ for all $t\geq 0$. The (positive) dual cone $K^*$ of a cone $K\subset\mathbb{R}^m$ is given by $K^* = \{v\in\mathbb{R}^m \;:\; v^T y \geq 0\ \ \forall y\in K\}$, where $^T$ denotes matrix transposition. Herein, vectors of Euclidean spaces are written as rows of real numbers in the text, but they are interpreted as columns of real numbers in matrix calculations. The open (resp., closed) ball centered at $a\in\mathbb{R}^m$ with radius $\varepsilon>0$ is denoted by $B(a,\varepsilon)$ (resp., $\bar{B}(a,\varepsilon)$). By $\overline{\mathbb{R}}$ we denote the set $\mathbb{R}\cup\{\pm\infty\}$ of extended real values.

Let $f:\mathbb{R}^k\to\overline{\mathbb{R}}$ be a proper convex function and $\bar{x}\in\mathbb{R}^k$ be such that $f(\bar{x})$ is finite. According to Theorem 23.1 in Rockafellar_1970, the directional derivative
$$f'(\bar{x};h) := \lim_{t\to 0^+}\frac{f(\bar{x}+th)-f(\bar{x})}{t}$$
of $f$ at $\bar{x}$ with respect to a direction $h\in\mathbb{R}^k$ always exists (it can take the values $-\infty$ or $+\infty$), and one has
$$f'(\bar{x};h) = \inf_{t>0}\frac{f(\bar{x}+th)-f(\bar{x})}{t}.$$
The subdifferential of $f$ at $\bar{x}$ is defined by
$$\partial f(\bar{x}) = \{p\in\mathbb{R}^k \;:\; p^T(x-\bar{x}) \leq f(x)-f(\bar{x})\ \ \forall x\in\mathbb{R}^k\}.$$
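For instance, for $f(x)=x^2$, $\bar{x}=1$, and $h=1$, the difference quotient equals $2+t$, which decreases to $f'(1;1)=2$ as $t\to 0^+$. The following minimal Python sketch (our illustration; the test function is chosen here only for this purpose) displays this monotone convergence:

```python
# Difference quotients of a convex function decrease as t -> 0+,
# so the directional derivative equals their infimum over t > 0.
# Here f(x) = x**2, xbar = 1, h = 1, and f'(1; 1) = 2.
f = lambda x: x ** 2
xbar, h = 1.0, 1.0

for t in [1.0, 0.1, 0.01, 0.001]:
    q = (f(xbar + t * h) - f(xbar)) / t  # equals 2 + t here
    print(t, q)
```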

For a convex set $C$ in a Euclidean space $\mathbb{R}^k$, the normal cone of $C$ at $\bar{x}\in C$ is given by $N(\bar{x};C) = \{p\in\mathbb{R}^k \;:\; p^T(x-\bar{x}) \leq 0\ \ \forall x\in C\}$.

Let $F:\mathbb{R}^k\rightrightarrows\mathbb{R}^\ell$ be a multifunction. The domain and the graph of $F$ are given, respectively, by ${\rm dom}\,F = \{x\in\mathbb{R}^k \;:\; F(x)\neq\emptyset\}$ and
$${\rm gph}\,F = \{(x,y)\in\mathbb{R}^k\times\mathbb{R}^\ell \;:\; y\in F(x)\}.$$

If ${\rm gph}\,F$ is a convex set in $\mathbb{R}^k\times\mathbb{R}^\ell$, then $F$ is said to be a convex multifunction. In that case, the coderivative of $F$ at $(\bar{x},\bar{y})\in{\rm gph}\,F$ is the multifunction $D^*F(\bar{x},\bar{y}):\mathbb{R}^\ell\rightrightarrows\mathbb{R}^k$ with
$$D^*F(\bar{x},\bar{y})(v) := \{u\in\mathbb{R}^k \;:\; (u,-v)\in N((\bar{x},\bar{y});{\rm gph}\,F)\}$$
for all $v\in\mathbb{R}^\ell$ (see, e.g., (Mordukhovich_Book_I, Section 1.2) and An_Yen_2015).

From now on, let $K$ be a closed convex cone in $\mathbb{R}^m$. For any $y^1, y^2\in\mathbb{R}^m$, one writes $y^1 \geq_K y^2$ if $y^1 - y^2\in K$. Similarly, one writes $y^1 >_K y^2$ if $y^1 - y^2\in{\rm int}\,K$, where ${\rm int}\,K$ denotes the interior of $K$.

Given a matrix $A\in\mathbb{R}^{m\times n}$ and vectors $b\in\mathbb{R}^m$, $c\in\mathbb{R}^n$, we consider the primal conic linear optimization problem
$$\min\{c^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b\}. \qquad ({\rm P})$$
Following Ben-Tal and Nemirovski (BN_2001, p. 52), we call
$$\max\{b^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0\} \qquad ({\rm D})$$
the dual problem of (P). The feasible region, the solution set, and the optimal value of (P) are denoted, respectively, by $\mathcal{F}({\rm P})$, $\mathcal{S}({\rm P})$, and $v({\rm P})$; those of (D) are denoted, respectively, by $\mathcal{F}({\rm D})$, $\mathcal{S}({\rm D})$, and $v({\rm D})$. By definition, one has
$$v({\rm P}) = \inf\{c^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b\}$$
and $v({\rm D}) = \sup\{b^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0\}$. By a standard convention, $\inf\emptyset = +\infty$ and $\sup\emptyset = -\infty$.
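To fix ideas, the following minimal Python sketch (assuming the cvxpy package is available; the data are those of Example 2 in Section 5, where $K$ is the self-dual 3D ice cream cone) sets up the pair (P) and (D) and checks the duality relation numerically:

```python
import numpy as np
import cvxpy as cp

# K = {y in R^3 : y3 >= sqrt(y1^2 + y2^2)} (3D ice cream cone, K* = K).
A = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
b = np.array([-1.0, 0.0, 0.0])
c = np.array([1.0, 0.0])

# Primal (P): min c^T x  s.t.  Ax - b in K.
x = cp.Variable(2)
s = A @ x - b
primal = cp.Problem(cp.Minimize(c @ x), [cp.norm(s[:2]) <= s[2]])
primal.solve()

# Dual (D): max b^T y  s.t.  A^T y = c, y in K* (= K here).
y = cp.Variable(3)
dual = cp.Problem(cp.Maximize(b @ y),
                  [A.T @ y == c, cp.norm(y[:2]) <= y[2]])
dual.solve()

# Weak duality guarantees v(P) >= v(D); here both values equal 1,
# in accordance with the strong duality theorem (Lemma 1 below).
print(primal.value, dual.value)
```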

Thanks to the weak duality theorem in conic linear programming (BN_2001, Proposition 2.3.1), one has $c^T x \geq b^T y$ for any $x\in\mathcal{F}({\rm P})$ and $y\in\mathcal{F}({\rm D})$. Hence, $v({\rm P}) \geq v({\rm D})$. By constructing suitable examples, one can show that the strict inequality $v({\rm P}) > v({\rm D})$ is possible, i.e., a conic linear program may have a duality gap. Thus, the equality $v({\rm P}) = v({\rm D})$ is guaranteed only if one imposes a certain regularity condition. In what follows, we rely on the regularity condition called strict feasibility, which was intensively employed by Ben-Tal and Nemirovski BN_2001.

Definition 1

(See (BN_2001, Section 2.4)) If there exists a point $x^0\in\mathbb{R}^n$ satisfying $Ax^0 >_K b$, then one says that problem (P) is strictly feasible. If there is some $y^0\in\mathbb{R}^m$ satisfying $A^T y^0 = c$ and $y^0 >_{K^*} 0$, then the dual problem (D) is said to be strictly feasible.

The main assertion of the strong duality theorem in conic linear programming can be stated as follows.

Lemma 1

(See (BN_2001, Theorem 2.4.1 (assertions 3a and 3b))) If (P) is strictly feasible and $v({\rm P}) > -\infty$, then (D) has a solution and $v({\rm D}) = v({\rm P})$. If (D) is strictly feasible and $v({\rm D}) < +\infty$, then (P) has a solution and $v({\rm D}) = v({\rm P})$.

The following analogues of the Farkas lemma (Rockafellar_1970, p. 200) will be used repeatedly in the sequel.

Lemma 2

Let there be given a closed convex cone $K$ in $\mathbb{R}^m$, a matrix $A\in\mathbb{R}^{m\times n}$, vectors $b\in\mathbb{R}^m$ and $c\in\mathbb{R}^n$, and a real number $\alpha$.
(a) Suppose that there exists a point $x^0\in\mathbb{R}^n$ satisfying $Ax^0 >_K b$. Then, the inequality $c^T x \geq \alpha$ is a consequence of the conic linear inequality $Ax \geq_K b$ iff there exists a vector $y\in K^*$ satisfying $A^T y = c$ and $b^T y \geq \alpha$.
(b) Suppose that there exists a point $y^0\in\mathbb{R}^m$ satisfying $A^T y^0 = c$ and $y^0 >_{K^*} 0$. Then, the inequality $b^T y \leq \alpha$ is a consequence of the system
$$A^T y = c,\qquad y \geq_{K^*} 0$$
iff there exists a vector $x\in\mathbb{R}^n$ satisfying $Ax \geq_K b$ and $c^T x \leq \alpha$.

Proof. To prove assertion (a), suppose first that $Ax \geq_K b$ and there exists a vector $y\in K^*$ satisfying $A^T y = c$ and $b^T y \geq \alpha$. Then we have
$$c^T x - b^T y = x^T c - b^T y = x^T(A^T y) - b^T y = (Ax)^T y - b^T y = (Ax-b)^T y \geq 0, \qquad (1)$$
where the last inequality in (1) is valid because $Ax - b\in K$ and $y\in K^*$. Combining this with the assumption $b^T y \geq \alpha$ yields $c^T x \geq \alpha$. Conversely, suppose that $c^T x \geq \alpha$ for all $x$ satisfying $Ax \geq_K b$. It is clear that (P) is strictly feasible and $v({\rm P}) \geq \alpha$. By the first assertion of Lemma 1, (D) has a solution and $v({\rm P}) = v({\rm D})$. Let $y$ be a solution of (D). Then we have $y\in K^*$, $A^T y = c$, and $b^T y = v({\rm D}) = v({\rm P}) \geq \alpha$. So, assertion (a) holds true.

Now, to prove assertion (b), suppose first that $A^T y = c$, $y \geq_{K^*} 0$, and there exists a vector $x\in\mathbb{R}^n$ satisfying $Ax \geq_K b$ and $c^T x \leq \alpha$. Then we have
$$b^T y - c^T x = b^T y - x^T c = b^T y - x^T(A^T y) = b^T y - (Ax)^T y = -(Ax-b)^T y \leq 0, \qquad (2)$$
where the last inequality in (2) is valid because $Ax \geq_K b$ and $y \geq_{K^*} 0$. Since $c^T x \leq \alpha$, this yields $b^T y \leq \alpha$. Conversely, suppose that $b^T y \leq \alpha$ for all $y$ satisfying $A^T y = c$, $y \geq_{K^*} 0$. By our assumption, (D) is strictly feasible, and $v({\rm D}) \leq \alpha$. By the second assertion of Lemma 1, (P) has a solution and $v({\rm P}) = v({\rm D})$. Let $x$ be a solution of (P). Then we have $Ax \geq_K b$ and $c^T x = v({\rm P}) = v({\rm D}) \leq \alpha$. This justifies the validity of (b). $\Box$

Remark 1

The result in Lemma 2(a) is known (see (BN_2001, Proposition 2.4.3)). The assertions of Lemma 1 and Lemma 2(a) may be false if the assumption on the strict feasibility of the conic linear inequality $Ax \geq_K b$ is removed (see (BN_2001, Example 2.4.2)). Similarly, the assertion of Lemma 2(b) may be false if the assumption on the strict feasibility of the system $A^T y = c$, $y \geq_{K^*} 0$ is removed.

The optimal value $v({\rm P})$ depends on the parameters $A$, $b$, and $c$. Following Gauvin Gauvin_2001, who studied differential stability properties of linear programming problems, we consider the functions
$$\varphi(b) := \inf\{c^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b\} \quad\text{and}\quad \psi(c) := \inf\{c^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b\},$$
where $\varphi(\cdot)$ is viewed as a function of $b$ (with $A$ and $c$ fixed) and $\psi(\cdot)$ as a function of $c$ (with $A$ and $b$ fixed). Thus, $\varphi(\cdot)$ is the optimal value function of (P) under right-hand-side perturbations of the constraint system and $\psi(\cdot)$ is the optimal value function of (P) under linear perturbations of the objective function.

3 Locally Lipschitz Continuity of $\varphi(\cdot)$

According to a remark given in the paper of An and Yen (An_Yen_2015, p. 113), $\varphi(\cdot)$ is a convex function. The next theorem presents additional continuity properties of $\varphi$.

Theorem 3.1

Suppose that (P) is strictly feasible and $v({\rm P}) > -\infty$. Then, there exists $\varepsilon>0$ such that $\varphi(b')$ is finite for every $b'\in\bar{B}(b,\varepsilon)$ and $\varphi$ is Lipschitz on $\bar{B}(b,\varepsilon)$.

Proof. On one hand, since (P) is strictly feasible, one can select a point $x^0\in\mathbb{R}^n$ satisfying $Ax^0 >_K b$, i.e., $Ax^0 - b\in{\rm int}\,K$. Then there exists a convex open neighborhood $V$ of $b$ such that $Ax^0 - b'\in{\rm int}\,K$ for every $b'\in V$. So, $x^0$ is a feasible point of the problem
$$\min\{c^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b'\} \qquad ({\rm P}_{b'})$$
for all $b'\in V$; moreover, $\varphi(b') \leq c^T x^0 < +\infty$. On the other hand, applying the weak duality theorem (BN_2001, Proposition 2.3.1) to the problem $({\rm P}_{b'})$ and its dual
$$\max\{(b')^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0\}, \qquad ({\rm D}_{b'})$$
one has $c^T x \geq (b')^T y$ for every $x\in\mathbb{R}^n$ satisfying $Ax \geq_K b'$ and every $y\in\mathbb{R}^m$ satisfying $A^T y = c$, $y \geq_{K^*} 0$. Therefore,
$$\inf\{c^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b'\} \geq \sup\{(b')^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0\},$$
that is,
$$\varphi(b') \geq \sup\{(b')^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0\}.$$
So, we have $\varphi(b') > -\infty$ because
$$\sup\{(b')^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0\} \geq \sup\{(b')^T y \;:\; y\in\mathcal{S}({\rm D})\} > -\infty, \qquad (3)$$
where the last inequality in (3) is valid because $\mathcal{S}({\rm D})$ is nonempty by Lemma 1. It follows that $\varphi(b')$ is finite for every $b'\in V$, i.e., $V\subset{\rm int}({\rm dom}\,\varphi)$. Combining this with the convexity of $\varphi$, by (Rockafellar_1970, Theorem 24.7) we can assert that $\varphi$ is locally Lipschitz on $V$. Choosing $\varepsilon>0$ with $\bar{B}(b,\varepsilon)\subset V$, we conclude that $\varphi(b')$ is finite for every $b'\in\bar{B}(b,\varepsilon)$ and $\varphi$ is Lipschitz on $\bar{B}(b,\varepsilon)$. $\Box$

4 Differentiability Properties of $\varphi(\cdot)$

First, let us show that the solution set of (D) possesses a remarkable stability property, provided that the assumptions of Theorem 3.1 are satisfied.

Proposition 1

Suppose that (P) is strictly feasible and $v({\rm P}) > -\infty$. Then there exists $\varepsilon>0$ such that $\mathcal{S}({\rm D}_{b'})$ is nonempty and compact for every $b'\in B(b,\varepsilon)$.

Proof. Since (P) is strictly feasible, there is $x^0\in\mathbb{R}^n$ satisfying $Ax^0 - b\in{\rm int}\,K$. Therefore, there is a real number $\varepsilon>0$ such that $Ax^0 - b'\in{\rm int}\,K$ for all $b'\in B(b,\varepsilon)$. According to Theorem 3.1, without loss of generality we can assume that $\varphi(b')$ is finite for all $b'\in B(b,\varepsilon)$. Fix a vector $b'\in B(b,\varepsilon)$. Then, the problem $({\rm P}_{b'})$ is strictly feasible and $v({\rm P}_{b'}) > -\infty$. So, applying Lemma 1 to the pair of problems $({\rm P}_{b'})$ and $({\rm D}_{b'})$, we see that $\mathcal{S}({\rm D}_{b'})$ is nonempty and
$$\mathcal{S}({\rm D}_{b'}) = \{y\in\mathcal{F}({\rm D}_{b'}) \;:\; (b')^T y = \varphi(b')\}.$$
Since $\mathcal{F}({\rm D}_{b'}) = \mathcal{F}({\rm D})$, this implies $\mathcal{S}({\rm D}_{b'}) = \{y\in\mathcal{F}({\rm D}) \;:\; (b')^T y = \varphi(b')\}$. Clearly, $\mathcal{S}({\rm D}_{b'})$ is a closed set. To prove by contradiction that $\mathcal{S}({\rm D}_{b'})$ is compact, suppose that $\mathcal{S}({\rm D}_{b'})$ is unbounded. Select a sequence $\{y^k\}$ in $\mathcal{S}({\rm D}_{b'})$ with $\lim_{k\to\infty}\|y^k\| = +\infty$, and define $\widetilde{y}^k = \|y^k\|^{-1}y^k$. Since $\|\widetilde{y}^k\| = 1$, without loss of generality we can assume that the sequence $\{\widetilde{y}^k\}$ converges to a vector $\widetilde{y}$ with $\|\widetilde{y}\| = 1$. Since the sequence $\{y^k\}$ is contained in the closed cone $K^*$, one has $\widetilde{y}\in K^*$. Observe that
$$A^T\widetilde{y} = \lim_{k\to\infty}A^T\widetilde{y}^k = \lim_{k\to\infty}\frac{A^T y^k}{\|y^k\|} = \lim_{k\to\infty}\frac{c}{\|y^k\|} = 0$$
and
$$(b')^T\widetilde{y} = \lim_{k\to\infty}(b')^T\widetilde{y}^k = \lim_{k\to\infty}\frac{(b')^T y^k}{\|y^k\|} = \lim_{k\to\infty}\frac{\varphi(b')}{\|y^k\|} = 0.$$
It follows that $(Ax^0 - b')^T\widetilde{y} = (x^0)^T A^T\widetilde{y} - (b')^T\widetilde{y} = 0$. Meanwhile, since $Ax^0 - b'\in{\rm int}\,K$ and $\widetilde{y}\in K^*\setminus\{0\}$, one has $(Ax^0 - b')^T\widetilde{y} > 0$. We have arrived at a contradiction. Thus, $\mathcal{S}({\rm D}_{b'})$ is bounded; hence it is compact. $\Box$

The following question arises naturally from Proposition 1: if (D) is strictly feasible and $v({\rm D}) < +\infty$, is the solution set $\mathcal{S}({\rm P})$ nonempty and compact? By the second assertion of Lemma 1, if (D) is strictly feasible and $v({\rm D}) < +\infty$, then $\mathcal{S}({\rm P})$ is nonempty. However, $\mathcal{S}({\rm P})$ may fail to be compact. The next example illustrates this statement.

Example 1

Consider the problem (P) with $n = m = 2$, $K = \mathbb{R}^2_+$, $A = \begin{bmatrix} 1 & 0\\ 0 & 0\end{bmatrix}$, $b = (1,-1)^T$, $c = (1,0)^T$. Here we have $K^* = K = \mathbb{R}^2_+$. For $y^0 := (1,1)^T$, one has $A^T y^0 = c$ and $y^0 >_{K^*} 0$. Thus, (D) is strictly feasible. Meanwhile, $\mathcal{S}({\rm P}) = \{1\}\times\mathbb{R}$ is unbounded.
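The claims of this example can be checked numerically; in the following sketch (assuming SciPy, whose `linprog` solves LPs of the form $\min c^T x$ s.t. $A_{ub}x \leq b_{ub}$ and reports status 3 for unbounded problems), the second run minimizes $-x_2$ over the optimal face and is reported unbounded:

```python
import numpy as np
from scipy.optimize import linprog

# Example 1: min x1  s.t.  x1 >= 1, 0 >= -1 (the second row of A is zero).
A = np.array([[1.0, 0.0], [0.0, 0.0]])
b = np.array([1.0, -1.0])
c = np.array([1.0, 0.0])

res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2)
print(res.fun)  # optimal value v(P) = 1

# S(P) = {1} x R: minimizing -x2 over the optimal face x1 = 1 is unbounded.
res2 = linprog(np.array([0.0, -1.0]), A_ub=-A, b_ub=-b,
               A_eq=np.array([[1.0, 0.0]]), b_eq=np.array([1.0]),
               bounds=[(None, None)] * 2)
print(res2.status)  # status 3: the problem is unbounded
```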

The following result gives a formula for the directional derivative of $\varphi$ at $b$ in a direction $d$.

Theorem 4.1

Suppose that (P) is strictly feasible and (P) has a solution. Then, for every direction $d\in\mathbb{R}^m$, one has
$$\varphi'(b;d) = \max_{y\in\mathcal{S}({\rm D})} y^T d. \qquad (4)$$

Proof. Fix a direction $d\in\mathbb{R}^m$ and consider the function $g:\mathbb{R}\to\overline{\mathbb{R}}$ given by
$$g(t) = \inf\{c^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b+td\},$$
where $t\in\mathbb{R}$. Let $f(x,t) = c^T x$ for all $(x,t)\in\mathbb{R}^n\times\mathbb{R}$. Clearly, $f$ is convex and continuous on $\mathbb{R}^n\times\mathbb{R}$. Define the multifunction $G:\mathbb{R}\rightrightarrows\mathbb{R}^n$ by setting
$$G(t) = \{x\in\mathbb{R}^n \;:\; Ax \geq_K b+td\}.$$
Note that $G$ is a convex multifunction. Consider the parametric convex optimization problem $\min\{f(x,t) \;:\; x\in G(t)\}$, which depends on the parameter $t\in\mathbb{R}$, and observe that $g(\cdot)$ is the optimal value function of this problem. By a remark given in (An_Yen_2015, p. 113) we know that $g(\cdot)$ is a convex function. Applying a result of An and Yen (An_Yen_2015, Theorem 4.2) with $\bar{t} := 0$ and a solution $\bar{x}$ of (P), we have
$$\partial g(0) = \bigcup_{(x^*,t^*)\in\partial f(\bar{x},\bar{t})}\{t^* + D^*G(\bar{t},\bar{x})(x^*)\}.$$
Since $\partial f(x,t) = \left\{\begin{bmatrix} c\\ 0\end{bmatrix}\right\}$, this equality yields
$$\partial g(0) = D^*G(0,\bar{x})(c). \qquad (5)$$
By the definition of the coderivative of a convex multifunction,
$$\begin{aligned} D^*G(0,\bar{x})(c) &= \{t^*\in\mathbb{R} \;:\; (t^*,-c)\in N((0,\bar{x});{\rm gph}\,G)\}\\ &= \{t^*\in\mathbb{R} \;:\; t^*t - c^T(x-\bar{x}) \leq 0\ \ \forall(t,x)\in{\rm gph}\,G\}\\ &= \{t^*\in\mathbb{R} \;:\; c^T x - t^*t \geq c^T\bar{x}\ \ \forall(t,x)\ \text{with}\ Ax \geq_K b+td\}\\ &= \{t^*\in\mathbb{R} \;:\; c^T x - t^*t \geq c^T\bar{x}\ \ \forall(x,t)\ \text{with}\ Ax - td \geq_K b\}. \end{aligned} \qquad (6)$$
Let
$$\widetilde{c} := \begin{bmatrix} c\\ -t^*\end{bmatrix},\quad \widetilde{A} := \begin{bmatrix} A & -d\end{bmatrix},\quad \widetilde{x} := \begin{bmatrix} x\\ t\end{bmatrix},\quad \alpha := c^T\bar{x}.$$
Clearly, the inequality $Ax - td \geq_K b$ is equivalent to $\widetilde{A}\widetilde{x} \geq_K b$. So, from (6) one gets
$$D^*G(0,\bar{x})(c) = \{t^*\in\mathbb{R} \;:\; \widetilde{c}^T\widetilde{x} \geq \alpha\ \ \forall\widetilde{x}\in\mathbb{R}^n\times\mathbb{R}\ \text{with}\ \widetilde{A}\widetilde{x} \geq_K b\}. \qquad (7)$$
Since (P) is strictly feasible, there exists $x^0\in\mathbb{R}^n$ satisfying $Ax^0 >_K b$. Setting $\widetilde{x}^0 = \begin{bmatrix} x^0\\ 0\end{bmatrix}$, we have $\widetilde{A}\widetilde{x}^0 >_K b$. Then, applying the Farkas-type result in Lemma 2(a) to the inequality system $\widetilde{A}\widetilde{x} \geq_K b$ and the vector $\widetilde{c}$, we can assert that $\widetilde{c}^T\widetilde{x} \geq \alpha$ for all $\widetilde{x}\in\mathbb{R}^n\times\mathbb{R}$ with $\widetilde{A}\widetilde{x} \geq_K b$ iff there exists a vector $y\in K^*$ satisfying $\widetilde{A}^T y = \widetilde{c}$ and $b^T y \geq \alpha$. Since
$$\widetilde{A}^T y = \begin{bmatrix} A^T\\ -d^T\end{bmatrix}y = \begin{bmatrix} A^T y\\ -d^T y\end{bmatrix},$$
the equality $\widetilde{A}^T y = \widetilde{c}$ is equivalent to the conditions $A^T y = c$ and $d^T y = t^*$. Therefore, combining (7) with the description of the feasible region of the dual problem (D), we have
$$\begin{aligned} D^*G(0,\bar{x})(c) &= \{t^*\in\mathbb{R} \;:\; \exists y\in K^*\ \text{s.t.}\ \widetilde{A}^T y = \widetilde{c},\ b^T y \geq \alpha\}\\ &= \{t^*\in\mathbb{R} \;:\; \exists y\in K^*\ \text{s.t.}\ A^T y = c,\ d^T y = t^*,\ b^T y \geq c^T\bar{x}\}\\ &= \{d^T y \;:\; y\in K^*,\ A^T y = c,\ b^T y \geq c^T\bar{x}\}\\ &= \{d^T y \;:\; y\in\mathcal{F}({\rm D}),\ b^T y \geq c^T\bar{x}\}\\ &= \{d^T y \;:\; y\in\mathcal{F}({\rm D}),\ b^T y \geq v({\rm P})\}. \end{aligned} \qquad (8)$$
Since (P) is strictly feasible and has a solution, by the strong duality theorem in Lemma 1, (D) has a solution and $v({\rm D}) = v({\rm P})$. Hence, by (8),
$$\begin{aligned} D^*G(0,\bar{x})(c) &= \{d^T y \;:\; y\in\mathcal{F}({\rm D}),\ b^T y \geq v({\rm D})\}\\ &= \{d^T y \;:\; y\in\mathcal{F}({\rm D}),\ b^T y = v({\rm D})\}\\ &= \{d^T y \;:\; y\in\mathcal{S}({\rm D})\}. \end{aligned} \qquad (9)$$
Combining (9) with (5) gives $\partial g(0) = \{d^T y \;:\; y\in\mathcal{S}({\rm D})\}$. Then, by the well-known result on the relationship between the directional derivative and the subdifferential of a convex function (Rockafellar_1970, Theorem 23.4), we obtain
$$g'(0;1) = \sup_{s\in\partial g(0)}s = \sup_{y\in\mathcal{S}({\rm D})}y^T d = \max_{y\in\mathcal{S}({\rm D})}y^T d, \qquad (10)$$
where the last equality in (10) is valid because $\mathcal{S}({\rm D})$ is compact (see Proposition 1). Finally, using (10) and the fact that $\varphi'(b;d) = g'(0;1)$, we get the desired formula (4). $\Box$

Based on Theorem 4.1, the next proposition provides a formula for the subdifferential of $\varphi$ at $b$.

Proposition 2

If (P) is strictly feasible and possesses a solution, then $\partial\varphi(b) = \mathcal{S}({\rm D})$.

Proof. Take a vector $p\in\partial\varphi(b)$. For any $d\in\mathbb{R}^m$ and $t>0$, from the inequality $\varphi(b+td) - \varphi(b) \geq p^T(td)$ it follows that $\frac{1}{t}(\varphi(b+td) - \varphi(b)) \geq p^T d$. Letting $t\to 0^+$, one has $\varphi'(b;d) \geq p^T d$. Combining this with formula (4) gives
$$\max_{y\in\mathcal{S}({\rm D})}y^T d \geq p^T d. \qquad (11)$$
To show that $p$ belongs to $\mathcal{S}({\rm D})$, suppose on the contrary that $p\notin\mathcal{S}({\rm D})$. Since $\mathcal{S}({\rm D})$ is a nonempty closed convex set, by the strong separation theorem (see, e.g., (Rockafellar_1970, Corollary 11.4.2)) there exists a vector $d\in\mathbb{R}^m$ such that $\max_{y\in\mathcal{S}({\rm D})}y^T d < p^T d$. This contradicts (11). We have thus proved that $\partial\varphi(b)\subset\mathcal{S}({\rm D})$. To obtain the opposite inclusion, take any $p\in\mathcal{S}({\rm D})$. For all $d\in\mathbb{R}^m$, by (4),
$$\varphi'(b;d) = \max_{y\in\mathcal{S}({\rm D})}y^T d \geq p^T d. \qquad (12)$$
Besides, by (Rockafellar_1970, Theorem 23.1) one gets
$$\varphi(b+d) - \varphi(b) \geq \varphi'(b;d). \qquad (13)$$
Combining (12) with (13), we have $\varphi(b+d) - \varphi(b) \geq p^T d$ for all $d\in\mathbb{R}^m$. So, $p\in\partial\varphi(b)$. The proof is complete. $\Box$

5 Increment Estimates for $\varphi(\cdot)$

Roughly speaking, the following statement gives an analogue of (Gauvin_2001, Theorem 1) for conic linear programs.

Theorem 5.1

Suppose that (P) is strictly feasible and $v({\rm P}) > -\infty$. Then, for every $d\in\mathbb{R}^m$ and every $t>0$, one has
$$\varphi(b+td) \geq \varphi(b) + t\sup\{d^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0,\ b^T y = \varphi(b)\}.$$
In addition, for every $d\in\mathbb{R}^m$, there exists $\tau>0$ such that
$$\varphi(b+td) \leq \varphi(b) + t\sup\{d^T y \;:\; y\in\mathcal{F}({\rm D})\} \qquad (14)$$
for all $t\in(0,\tau]$.

Proof. By Lemma 1, (D) has a solution and $v({\rm D}) = v({\rm P})$; hence $v({\rm D}) = \varphi(b)$. On one hand, applying the weak duality theorem (BN_2001, Proposition 2.3.1) to the problems
$$\min\{c^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b+td\} \qquad ({\rm P}_t)$$
and
$$\max\{(b+td)^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0\}, \qquad ({\rm D}_t)$$
one has $c^T x \geq (b+td)^T y$ for every $x\in\mathbb{R}^n$ satisfying $Ax \geq_K b+td$ and every $y\in\mathbb{R}^m$ satisfying $A^T y = c$, $y \geq_{K^*} 0$. Consequently,
$$\inf\{c^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b+td\} \geq \sup\{(b+td)^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0\}. \qquad (15)$$
On the other hand, it is clear that
$$\sup\{(b+td)^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0\} \geq \sup\{(b+td)^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0,\ b^T y = \varphi(b)\}. \qquad (16)$$
Combining (15) and (16), one gets
$$\varphi(b+td) = \inf\{c^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b+td\} \geq \sup\{(b+td)^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0,\ b^T y = \varphi(b)\}.$$
It follows that
$$\begin{aligned} \varphi(b+td) &\geq \sup\{(b+td)^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0,\ b^T y = \varphi(b)\}\\ &= \sup\{b^T y + t\,d^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0,\ b^T y = \varphi(b)\}\\ &= \varphi(b) + t\sup\{d^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0,\ b^T y = \varphi(b)\}. \end{aligned} \qquad (17)$$
Combining (17) with the fact that $\varphi(b) = v({\rm P}) > -\infty$, one also sees that $\varphi(b+td) > -\infty$.

Since (P) is strictly feasible, there is a point $x^0\in\mathbb{R}^n$ satisfying $Ax^0 >_K b$. Then one can find $\tau>0$ such that $Ax^0 >_K b+td$ for all $t\in[0,\tau]$; hence $({\rm P}_t)$ is strictly feasible for such $t$. Applying Lemma 1 to the pair of problems $({\rm P}_t)$ and $({\rm D}_t)$, where $t\in[0,\tau]$, we have
$$\begin{aligned} \varphi(b+td) &= \sup\{(b+td)^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0\}\\ &\leq \sup\{b^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0\} + \sup\{t\,d^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0\}\\ &= \varphi(b) + t\sup\{d^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0\}. \end{aligned}$$
Thus, the inequality (14) holds for all $t\in(0,\tau]$. $\Box$

We now consider the case where $K$ is a polyhedral convex cone.

Theorem 5.2

Suppose that $K$ is a polyhedral convex cone and $\varphi(b)$ is finite. Then, for every $d\in\mathbb{R}^m$, there exists $\tau>0$ such that
$$\varphi(b+td) = \varphi(b) + t\sup\{d^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c,\ y \geq_{K^*} 0,\ b^T y = \varphi(b)\} \qquad (18)$$
for all $t\in\,]0,\tau]$.

Proof. Since $K$ is a polyhedral convex cone in $\mathbb{R}^m$, there exists a matrix $B\in\mathbb{R}^{k\times m}$ such that $K = \{y\in\mathbb{R}^m \;:\; By \geq_{\mathbb{R}^k_+} 0\}$. Then, the problem (P) can be written in the form
$$\min\{c^T x \;:\; x\in\mathbb{R}^n,\ BAx \geq_{\mathbb{R}^k_+} Bb\}. \qquad ({\rm P'})$$
The dual problem of (P') is
$$\max\{(Bb)^T z \;:\; z\in\mathbb{R}^k,\ (BA)^T z = c,\ z \geq_{\mathbb{R}^k_+} 0\}. \qquad ({\rm D'})$$
Since $\varphi(b)$ is finite, the linear program (P') has a solution. Moreover, both problems (P') and (D') have solutions and their optimal values are equal. Then, for every $d\in\mathbb{R}^m$, using Theorem 1 in Gauvin_2001 for the problem (P'), one gets
$$\begin{aligned} \varphi(b+td) &= \inf\{c^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b+td\}\\ &= \inf\{c^T x \;:\; x\in\mathbb{R}^n,\ BAx \geq_{\mathbb{R}^k_+} Bb + tBd\}\\ &= \varphi(b) + t\sup\{(Bd)^T z \;:\; z\in\mathbb{R}^k_+,\ (BA)^T z = c,\ (Bb)^T z = \varphi(b)\}\\ &= \varphi(b) + t\sup\{d^T(B^T z) \;:\; z\in\mathbb{R}^k_+,\ A^T(B^T z) = c,\ b^T(B^T z) = \varphi(b)\} \end{aligned}$$
for any $t>0$ sufficiently small. Denoting $y = B^T z$, we have
$$\varphi(b+td) = \varphi(b) + t\sup\{d^T y \;:\; y\in B^T(\mathbb{R}^k_+),\ A^T y = c,\ b^T y = \varphi(b)\}. \qquad (19)$$
On the other hand, since $K = \{y\in\mathbb{R}^m \;:\; \langle B_i^T, y\rangle \geq 0,\ i = 1,2,\dots,k\}$, where $B_i$ denotes the $i$-th row of the matrix $B$, one has
$$K^* = \left\{\sum_{i=1}^k \lambda_i B_i^T \;:\; \lambda_i \geq 0,\ i = 1,2,\dots,k\right\}$$
by Proposition 2.42 in Bonnans_Shapiro_2000; hence $K^* = B^T(\mathbb{R}^k_+)$. Combining the latter with (19), we can assert that (18) holds. $\Box$

Remark 2

Theorem 5.2 is a generalization of Theorem 1 in Gauvin_2001, where the case $K = \mathbb{R}^m_+$ was treated.
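As a quick numerical illustration of formula (18) in the polyhedral case, one can reuse the data of Example 1 (a minimal sketch, assuming SciPy is available; the direction $d$ is chosen here for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Data of Example 1: K = R^2_+, a polyhedral cone, so Theorem 5.2 applies.
A = np.array([[1.0, 0.0], [0.0, 0.0]])
b = np.array([1.0, -1.0])
c = np.array([1.0, 0.0])

def phi(b_prime):
    res = linprog(c, A_ub=-A, b_ub=-b_prime, bounds=[(None, None)] * 2)
    return res.fun

d = np.array([2.0, 1.0])
# The set {y : A^T y = c, y >= 0, b^T y = phi(b)} reduces to {(1, 0)},
# so (18) predicts phi(b + t d) = phi(b) + t * d1 for small t > 0.
for t in [0.1, 0.01]:
    print(phi(b + t * d), phi(b) + t * d[0])
```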

The following two examples illustrate Theorem 5.1.

Example 2

Consider the conic problem (P) with $x = (x_1,x_2)^T\in\mathbb{R}^2$, where $K$ is the 3D ice cream cone in $\mathbb{R}^3$, i.e., $K = \{(y_1,y_2,y_3)^T\in\mathbb{R}^3 \;:\; y_3 \geq \sqrt{y_1^2+y_2^2}\}$, $A = \begin{bmatrix} 0 & 0\\ 0 & 1\\ 1 & 0\end{bmatrix}$, $b = (-1,0,0)^T$, $c = (1,0)^T$. We have
$$\mathcal{F}({\rm P}) = \{x\in\mathbb{R}^2 \;:\; x_1 \geq \sqrt{1+x_2^2}\}. \qquad (20)$$
Clearly, $\varphi(b) = 1$ and $\mathcal{S}({\rm P}) = \{(1,0)^T\}$. Note that $A^T y = c$ iff $y_2 = 0$ and $y_3 = 1$. In addition, $K^* = K$. It follows that
$$\mathcal{F}({\rm D}) = \{y = (y_1,0,1)^T\in\mathbb{R}^3 \;:\; -1 \leq y_1 \leq 1\}.$$
Obviously, $\mathcal{S}({\rm D}) = \{(-1,0,1)^T\}$. Given $d\in\mathbb{R}^3$ with $d_1 > 0$, for every $t>0$ one has $Ax \geq_K b+td$ iff $x_1 \geq \sqrt{(1-td_1)^2 + (x_2-td_2)^2} + td_3$. On one hand, for $t$ small enough,
$$\varphi(b+td) = \sqrt{(1-td_1)^2} + td_3 = |1-td_1| + td_3 = 1 + t(d_3-d_1).$$
On the other hand, $\varphi(b) + t\sup\{d^T y \;:\; y\in\mathcal{S}({\rm D})\} = 1 + t(d_3-d_1)$ and
$$\varphi(b) + t\sup\{d^T y \;:\; y\in\mathcal{F}({\rm D})\} = 1 + t\sup\{d_1 y_1 + d_3 \;:\; -1 \leq y_1 \leq 1\} = 1 + t(|d_1| + d_3).$$
Thus, for $d_1 > 0$ the lower estimate of Theorem 5.1 holds with equality, while the upper estimate (14) is strict.
Example 3

Consider the conic problem (P) with $x\in\mathbb{R}$, where $K$ is the 3D ice cream cone in $\mathbb{R}^3$, $A = \begin{bmatrix} 1\\ 0\\ 0\end{bmatrix}$, $b = (0,0,-1)^T$, $c = 1$, and $d = (0,1,-1)^T$. Clearly, the condition $Ax \geq_K b+td$ is equivalent to $1+t \geq \sqrt{x^2+t^2}$. For $t>0$, the latter can be rewritten as $x^2 \leq 2t+1$. It follows that $\mathcal{F}({\rm P}_t) = \left[-\sqrt{2t+1},\sqrt{2t+1}\right]$ and $\varphi(b+td) = -\sqrt{2t+1}$. Meanwhile, $\mathcal{F}({\rm D}) = \{y = (1,y_2,y_3)^T\in\mathbb{R}^3 \;:\; y_3 \geq \sqrt{1+y_2^2}\}$ and $\mathcal{S}({\rm D}) = \{(1,0,1)^T\}$. Since $\sup\{d^T y \;:\; y\in\mathcal{F}({\rm D})\} = \sup\{y_2 - y_3 \;:\; y_3 \geq \sqrt{1+y_2^2}\} = 0$, one has $\varphi(b) + t\sup\{d^T y \;:\; y\in\mathcal{F}({\rm D})\} = -1$, while $\varphi(b) + t\sup\{d^T y \;:\; y\in\mathcal{S}({\rm D})\} = -1-t$. It is clear that $-1-t \leq -\sqrt{2t+1} < -1$ for every $t\in(0,1)$, so both estimates of Theorem 5.1 are strict in this example.
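The two estimates of Theorem 5.1 can be checked on this example as follows (a sketch assuming cvxpy is available):

```python
import numpy as np
import cvxpy as cp

# Example 3: scalar x, A = (1,0,0)^T, b = (0,0,-1)^T, c = 1, d = (0,1,-1)^T.
b = np.array([0.0, 0.0, -1.0])
d = np.array([0.0, 1.0, -1.0])

def phi(b_prime):
    x = cp.Variable()
    s = cp.hstack([x, 0.0, 0.0]) - b_prime  # s = Ax - b'
    prob = cp.Problem(cp.Minimize(x), [cp.norm(s[:2]) <= s[2]])
    prob.solve()
    return prob.value

for t in [0.5, 0.1, 0.01]:
    lower, upper = -1.0 - t, -1.0  # bounds from Theorem 5.1
    print(lower, phi(b + t * d), -np.sqrt(2 * t + 1), upper)
    # lower < phi(b + t d) = -sqrt(2t + 1) < upper for t in (0, 1)
```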

6 Properties of the Function $\psi(\cdot)$

In this section, differentiability properties of the function $\psi(\cdot)$ and some increment estimates will be obtained.

Theorem 6.1

Suppose that both problems (P) and (D) are strictly feasible. Then, one has
$$\psi'(c;h) = \begin{cases} \inf\limits_{x\in\mathcal{S}({\rm P})}x^T h & \text{if}\ \ h\in A^T(\mathbb{R}^m),\\ -\infty & \text{if}\ \ h\notin A^T(\mathbb{R}^m). \end{cases} \qquad (21)$$

Proof. By the assumed strict feasibility of (P) and (D) (see Definition 1), there exist $x^0\in\mathbb{R}^n$ and $y^0\in\mathbb{R}^m$ such that $Ax^0 >_K b$, $y^0 >_{K^*} 0$, and $A^T y^0 = c$. Since $\psi'(c;0) = 0$, formula (21) is valid for $h = 0$. Fix a vector $h\in\mathbb{R}^n\setminus\{0\}$ and define the function $g:\mathbb{R}\to\overline{\mathbb{R}}$ by
$$g(t) := -\psi(c+th) = -\inf\{(c+th)^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b\}\quad(t\in\mathbb{R}).$$
Note that $g(t)\neq-\infty$ for all $t\in\mathbb{R}$ and $\psi'(c;h) = -g'(0;1)$. Thanks to the weak duality theorem in (BN_2001, Theorem 2.4.1 (assertion 2)), we have
$$c^T x^0 \geq v({\rm P}) \geq v({\rm D}) \geq b^T y^0.$$
It follows that $v({\rm P}) > -\infty$ and $v({\rm D}) < +\infty$. So, by Lemma 1, (P) and (D) have solutions, and $v({\rm D}) = v({\rm P})$. Hence, the relations $g(0) = -\psi(c) = -v({\rm P})$ imply that the value $g(0)$ is finite. Clearly, $g(t) \geq -(c+th)^T x^0$ for any $t\in\mathbb{R}$.

If $h\in A^T(\mathbb{R}^m)$, then $h = A^T v$ for some $v\in\mathbb{R}^m$. Note that
$$A^T(y^0+tv) = A^T y^0 + tA^T v = c + th.$$
Since $y^0 >_{K^*} 0$, one can find $\tau>0$ such that $y^0 + tv >_{K^*} 0$ for every $t\in\,]-\tau,\tau[$. For any $t\in\,]-\tau,\tau[$, applying the weak duality theorem in (BN_2001, Theorem 2.4.1 (assertion 2)) to the conic linear problem
$$\min\{(c+th)^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b\} \qquad (22)$$
and its dual
$$\max\{b^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c+th,\ y \geq_{K^*} 0\}, \qquad (23)$$
one has $(c+th)^T x \geq b^T(y^0+tv)$ for all $x\in\mathbb{R}^n$ satisfying $Ax \geq_K b$. Consequently, $\inf\{(c+th)^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b\} \geq b^T(y^0+tv)$. So, we have $g(t) \leq -b^T(y^0+tv)$. Since $g(t) \geq -(c+th)^T x^0$ for all $t\in\mathbb{R}$, this implies that $g(t)$ is finite for every $t\in\,]-\tau,\tau[$. Now, fixing a number $t\in\,]-\tau,\tau[$ and applying Lemma 1 to the problems (22) and (23), one has
$$\inf\{(c+th)^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b\} = \sup\{b^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c+th,\ y \geq_{K^*} 0\}.$$
Therefore,
$$\begin{aligned} g(t) &= -\sup\{b^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c+th,\ y \geq_{K^*} 0\}\\ &= \inf\{(-b)^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c+th,\ y \geq_{K^*} 0\}. \end{aligned} \qquad (24)$$

To apply a result from An_Yen_2015, we define a function $f:\mathbb{R}\times\mathbb{R}^m\to\mathbb{R}$ by setting
$$f(t,y) = (-b)^T y\quad\forall(t,y)\in\mathbb{R}\times\mathbb{R}^m.$$
Clearly, $f$ is convex and continuous on $\mathbb{R}\times\mathbb{R}^m$. Let $G:\mathbb{R}\rightrightarrows\mathbb{R}^m$ be the multifunction given by $G(t) := \{y\in\mathbb{R}^m \;:\; A^T y = c+th,\ y \geq_{K^*} 0\}$. Note that $G$ is a convex multifunction. Consider the convex optimization problem
$$\min\{f(t,y) \;:\; y\in G(t)\}, \qquad (25)$$
which depends on the parameter $t\in\mathbb{R}$, and observe by (24) that $g(\cdot)$ is the optimal value function of this problem. According to a remark given in (An_Yen_2015, p. 113), $g(\cdot)$ is a convex function. Let $\bar{y}$ be a solution of (D). Then, it is easy to verify that $\bar{y}$ is a solution of the parametric problem (25) at $\bar{t} := 0$. Hence, applying (An_Yen_2015, Theorem 4.2) to (25) yields
$$\partial g(0) = \bigcup_{(t^*,y^*)\in\partial f(0,\bar{y})}\{t^* + D^*G(0,\bar{y})(y^*)\}.$$
Since $\partial f(t,y) = \left\{\begin{bmatrix} 0\\ -b\end{bmatrix}\right\}$, this implies that
$$\partial g(0) = D^*G(0,\bar{y})(-b). \qquad (26)$$
By the definition of the coderivative of a convex multifunction,
$$\begin{aligned} D^*G(0,\bar{y})(-b) &= \{t^*\in\mathbb{R} \;:\; (t^*,b)\in N((0,\bar{y});{\rm gph}\,G)\}\\ &= \{t^*\in\mathbb{R} \;:\; t^*t + b^T(y-\bar{y}) \leq 0\ \ \forall(t,y)\in{\rm gph}\,G\}. \end{aligned}$$
Hence,
$$D^*G(0,\bar{y})(-b) = \{t^*\in\mathbb{R} \;:\; b^T y + t^*t \leq b^T\bar{y}\ \ \forall(t,y)\ \text{s.t.}\ A^T y - th = c,\ y \geq_{K^*} 0\}. \qquad (27)$$
Let
$$\widetilde{b} := \begin{bmatrix} b\\ t^*\end{bmatrix},\quad \widetilde{A} := \begin{bmatrix} A\\ -h^T\end{bmatrix},\quad \widetilde{y} := \begin{bmatrix} y\\ t\end{bmatrix},\quad \widetilde{K} := K\times\{0\},\quad \alpha := b^T\bar{y}.$$
Since $\widetilde{K}^* = K^*\times\mathbb{R}$, the system $A^T y - th = c$, $y \geq_{K^*} 0$ is equivalent to $\widetilde{A}^T\widetilde{y} = c$, $\widetilde{y} \geq_{\widetilde{K}^*} 0$. Hence, from (27) one gets
$$D^*G(0,\bar{y})(-b) = \{t^*\in\mathbb{R} \;:\; \widetilde{b}^T\widetilde{y} \leq \alpha\ \ \forall\widetilde{y}\ \text{with}\ \widetilde{A}^T\widetilde{y} = c,\ \widetilde{y} \geq_{\widetilde{K}^*} 0\}. \qquad (28)$$

Setting $\widetilde{y}^0 := \begin{bmatrix} y^0\\ 0\end{bmatrix}$, we have $\widetilde{A}^T\widetilde{y}^0 = c$ and $\widetilde{y}^0 >_{\widetilde{K}^*} 0$. Hence, by the Farkas-type result in Lemma 2(b), the inequality $\widetilde{b}^T\widetilde{y} \leq \alpha$ holds for every $\widetilde{y}\in\mathbb{R}^m\times\mathbb{R}$ satisfying $\widetilde{A}^T\widetilde{y} = c$, $\widetilde{y} \geq_{\widetilde{K}^*} 0$ iff there exists $x\in\mathbb{R}^n$ satisfying $\widetilde{A}x \geq_{\widetilde{K}} \widetilde{b}$ and $c^T x \leq \alpha$. Since $\widetilde{A}x = \begin{bmatrix} Ax\\ -h^T x\end{bmatrix}$, one sees that the inequality $\widetilde{A}x \geq_{\widetilde{K}} \widetilde{b}$ is equivalent to the system of conditions $Ax \geq_K b$ and $-h^T x = t^*$. Therefore, combining (28) with the description of the feasible region of (P), we have
$$\begin{aligned} D^*G(0,\bar{y})(-b) &= \{t^*\in\mathbb{R} \;:\; \exists x\in\mathbb{R}^n\ \text{s.t.}\ \widetilde{A}x \geq_{\widetilde{K}} \widetilde{b},\ c^T x \leq \alpha\}\\ &= \{t^*\in\mathbb{R} \;:\; \exists x\in\mathbb{R}^n\ \text{s.t.}\ Ax \geq_K b,\ -h^T x = t^*,\ c^T x \leq \alpha\}\\ &= \{-h^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b,\ c^T x \leq b^T\bar{y}\}\\ &= \{-x^T h \;:\; x\in\mathcal{F}({\rm P}),\ c^T x \leq b^T\bar{y}\}. \end{aligned}$$
Hence,
$$D^*G(0,\bar{y})(-b) = \{-x^T h \;:\; x\in\mathcal{F}({\rm P}),\ c^T x \leq v({\rm D})\}. \qquad (29)$$
Since $v({\rm D}) = v({\rm P})$, by (29) one has
$$D^*G(0,\bar{y})(-b) = \{-x^T h \;:\; x\in\mathcal{F}({\rm P}),\ c^T x \leq v({\rm P})\} = \{-x^T h \;:\; x\in\mathcal{S}({\rm P})\}.$$
From this and (26) it follows that $\partial g(0) = \{-x^T h \;:\; x\in\mathcal{S}({\rm P})\}$. Then, since $g(\cdot)$ is a proper convex function and $0\in{\rm int}({\rm dom}\,g)$, by (Rockafellar_1970, Theorem 23.4) we have
$$g'(0;1) = \sup_{s\in\partial g(0)}s = \sup_{x\in\mathcal{S}({\rm P})}(-x^T h). \qquad (30)$$
As $\psi'(c;h) = -g'(0;1)$, (30) yields
$$\psi'(c;h) = -\sup_{x\in\mathcal{S}({\rm P})}(-x^T h) = \inf_{x\in\mathcal{S}({\rm P})}x^T h.$$
We have thus proved the first assertion in (21).

Now, let $h\notin A^T(\mathbb{R}^m)$. In this case, we have $g(t) = +\infty$ for every $t\in\mathbb{R}\setminus\{0\}$. Indeed, if $g(t)\in\mathbb{R}$ for some $t\in\mathbb{R}\setminus\{0\}$, then the objective function of the minimization problem (22) is bounded from below on the feasible set. So, applying Lemma 1 to the problems (22) and (23), we can assert that (23) has a solution $y^1$. Since $A^T y^1 = c+th$ and $A^T y^0 = c$, one gets $h = A^T\left[t^{-1}(y^1-y^0)\right]$. This is impossible because $h\notin A^T(\mathbb{R}^m)$. Thus, $g(t) = +\infty$ for every $t\in\mathbb{R}\setminus\{0\}$. It follows that $g'(0;1) = +\infty$, and hence $\psi'(c;h) = -g'(0;1) = -\infty$. $\Box$

Proposition 3

For every $h\in\mathbb{R}^n$ and $t>0$, one has
$$\psi(c+th) \geq \psi(c) + t\inf_{x\in\mathcal{F}({\rm P})}h^T x \qquad (31)$$
and
$$\psi(c+th) \leq \psi(c) + t\inf_{x\in\mathcal{S}({\rm P})}h^T x. \qquad (32)$$

Proof. Let $h\in\mathbb{R}^n$ and $t>0$ be given arbitrarily. Clearly,
$$\begin{aligned} \psi(c+th) &= \inf\{(c+th)^T x \;:\; x\in\mathcal{F}({\rm P})\}\\ &\geq \inf\{c^T x \;:\; x\in\mathcal{F}({\rm P})\} + \inf\{th^T x \;:\; x\in\mathcal{F}({\rm P})\}\\ &= \psi(c) + t\inf\{h^T x \;:\; x\in\mathcal{F}({\rm P})\}. \end{aligned}$$
So, the inequality (31) is valid. In addition,
$$\begin{aligned} \psi(c+th) &= \inf\{(c+th)^T x \;:\; x\in\mathcal{F}({\rm P})\}\\ &\leq \inf\{(c+th)^T x \;:\; x\in\mathcal{S}({\rm P})\}\\ &= \psi(c) + t\inf_{x\in\mathcal{S}({\rm P})}h^T x. \end{aligned}$$
This means that the estimate (32) holds. $\Box$

Example 4

Consider problem (P) in the setting and notations of Example 2. Choose $h = (0,-1)^T$. For $0 < t < 1$, using (20) one has
$$\begin{aligned} \psi(c+th) &= \inf\{x_1 - tx_2 \;:\; x = (x_1,x_2)\in\mathbb{R}^2,\ x_1 \geq \sqrt{1+x_2^2}\}\\ &= \inf\{\sqrt{1+x_2^2} - tx_2 \;:\; x_2\in\mathbb{R}\}\\ &= \sqrt{1-t^2}. \end{aligned}$$
Note that $\psi(c) = \varphi(b) = 1$. Since $\mathcal{F}({\rm P}) = \{x\in\mathbb{R}^2 \;:\; x_1 \geq \sqrt{1+x_2^2}\}$ and $\inf\{h^T x \;:\; x\in\mathcal{F}({\rm P})\} = \inf\{-x_2 \;:\; x_2\in\mathbb{R}\} = -\infty$, (31) gives only the trivial lower estimate $\psi(c+th) \geq -\infty$ for every $t>0$. Meanwhile, as $\mathcal{S}({\rm P}) = \{(1,0)^T\}$, (32) gives the upper estimate $\psi(c+th) \leq 1$ for every $t>0$. Therefore, both estimates (31) and (32) are strict for all $t\in\,]0,1[$.
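The value $\psi(c+th) = \sqrt{1-t^2}$ and the upper estimate (32) can be confirmed numerically (a minimal sketch, assuming cvxpy is available):

```python
import numpy as np
import cvxpy as cp

# psi(c') over F(P) = { x in R^2 : x1 >= sqrt(1 + x2^2) } (Example 2 data).
def psi(c_vec):
    x = cp.Variable(2)
    s = cp.hstack([1.0, x[1], x[0]])  # s = Ax - b for Example 2's data
    prob = cp.Problem(cp.Minimize(c_vec @ x), [cp.norm(s[:2]) <= s[2]])
    prob.solve()
    return prob.value

c = np.array([1.0, 0.0])
h = np.array([0.0, -1.0])
for t in [0.2, 0.5, 0.9]:
    # psi(c + t h) = sqrt(1 - t^2), strictly below the upper bound
    # psi(c) + t * inf{ h^T x : x in S(P) } = 1 given by (32).
    print(psi(c + t * h), np.sqrt(1 - t ** 2))
```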

The next increment formula for $\psi$ is a generalization of the corresponding result stated in (Gauvin_2001, p. 119), where the case $K = \mathbb{R}^m_+$ was considered.

Theorem 6.2

Suppose that $K$ is a polyhedral convex cone, (P) is feasible, and (D) is strictly feasible. Then, for every $h\in A^T(\mathbb{R}^m)$, there exists $\tau>0$ such that
$$\psi(c+th) = \psi(c) + t\inf_{x\in\mathcal{S}({\rm P})}h^T x \qquad (33)$$
for all $t\in\,]0,\tau]$.

Proof. By our assumptions, there exists $y^0\in\mathbb{R}^m$ such that $A^T y^0 = c$ and $y^0 >_{K^*} 0$. Since $h\in A^T(\mathbb{R}^m)$, there is $v\in\mathbb{R}^m$ satisfying $h = A^T v$. Then, one has $A^T(y^0+tv) = A^T y^0 + tA^T v = c+th$. As $y^0 >_{K^*} 0$, there exists a number $\tau' > 0$ such that $y^0 + tv >_{K^*} 0$ for all $t\in\,]-\tau',\tau'[$. For each $t\in\,]-\tau',\tau'[$, the linear optimization problem $\min\{(c+th)^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b\}$ and its dual $\max\{b^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c+th,\ y \geq_{K^*} 0\}$ are both feasible. Therefore, by the well-known strong duality theorem in linear programming, one has
$$\psi(c+th) = \inf\{(c+th)^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b\} = \sup\{b^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c+th,\ y \geq_{K^*} 0\}$$
and $\psi(c+th)$ is finite. So,
$$\begin{aligned} \psi(c+th) &= -\inf\{(-b)^T y \;:\; y\in\mathbb{R}^m,\ A^T y = c+th,\ y \geq_{K^*} 0\}\\ &= -\inf\{(-b)^T y \;:\; y\in\mathbb{R}^m,\ A^T y \geq c+th,\ -A^T y \geq -(c+th),\ y \geq_{K^*} 0\}\\ &= -\inf\{(-b)^T y \;:\; y\in\mathbb{R}^m,\ My \geq_L \widetilde{c}+t\widetilde{h}\}, \end{aligned} \qquad (34)$$
where
$$M := \begin{bmatrix} A^T\\ -A^T\\ E_m\end{bmatrix}\in\mathbb{R}^{(2n+m)\times m},\quad \widetilde{c} := \begin{bmatrix} c\\ -c\\ 0\end{bmatrix}\in\mathbb{R}^{2n+m},\quad \widetilde{h} := \begin{bmatrix} h\\ -h\\ 0\end{bmatrix}\in\mathbb{R}^{2n+m},$$
$E_m$ denotes the unit matrix in $\mathbb{R}^{m\times m}$, and $L := \mathbb{R}^n_+\times\mathbb{R}^n_+\times K^*$. Clearly, $L$ is a polyhedral convex cone. So, applying Theorem 5.2 to the problem $\inf\{(-b)^T y \;:\; y\in\mathbb{R}^m,\ My \geq_L \widetilde{c}\}$, one finds a number $\tau\in\,]0,\tau'[$ such that for any $t\in\,]0,\tau[$ one has
$$\inf\{(-b)^T y \;:\; y\in\mathbb{R}^m,\ My \geq_L \widetilde{c}+t\widetilde{h}\} = \inf\{(-b)^T y \;:\; y\in\mathbb{R}^m,\ My \geq_L \widetilde{c}\} + t\sup\Big\{\widetilde{h}^T z \;:\; z\in\mathbb{R}^{2n+m},\ M^T z = -b,\ z \geq_{L^*} 0,\ \widetilde{c}^T z = \inf_{My \geq_L \widetilde{c}}(-b)^T y\Big\}.$$

Combining this with (34), we have
$$\psi(c+th) = \psi(c) - t\sup\{\widetilde{h}^T z \;:\; z\in\mathbb{R}^{2n+m},\ M^T z = -b,\ z \geq_{L^*} 0,\ \widetilde{c}^T z = -\psi(c)\}, \qquad (35)$$
where the equality $-\psi(c) = \inf\{(-b)^T y \;:\; y\in\mathbb{R}^m,\ My \geq_L \widetilde{c}\}$ follows from (34) with $t = 0$. Substituting $z = \begin{bmatrix} u\\ v\\ w\end{bmatrix}$, with $u,v\in\mathbb{R}^n$ and $w\in\mathbb{R}^m$, into (35), one gets
$$\psi(c+th) = \psi(c) - t\sup\{h^T u - h^T v \;:\; u,v\in\mathbb{R}^n_+,\ w\in K,\ Au - Av + w = -b,\ c^T u - c^T v = -\psi(c)\}.$$
Setting $x := v-u$, one has
$$\begin{aligned} \psi(c+th) &= \psi(c) - t\sup\{-h^T x \;:\; x\in\mathbb{R}^n,\ w\in K,\ w = Ax - b,\ -c^T x = -\psi(c)\}\\ &= \psi(c) + t\inf\{h^T x \;:\; x\in\mathbb{R}^n,\ w\in K,\ w = Ax - b,\ c^T x = \psi(c)\}\\ &= \psi(c) + t\inf\{h^T x \;:\; x\in\mathbb{R}^n,\ Ax \geq_K b,\ c^T x = \psi(c)\}\\ &= \psi(c) + t\inf_{x\in\mathcal{S}({\rm P})}h^T x. \end{aligned}$$
This proves that (33) holds. $\Box$

7 Conclusions

Based on strict feasibility conditions and duality theory in conic linear programming, we have shown that the two optimal value functions of a conic linear program, under linear perturbations of either the right-hand side of the constraint system or the objective function, have nice continuity and differentiability properties. It turns out that if the convex cone is non-polyhedral, then the behavior of the optimal value functions in question is more complicated than that of the corresponding optimal value functions of a linear program.

Properties of the solution maps of parametric conic linear programs and generalizations of our results to infinite-dimensional conic linear programming deserve further investigation.

Acknowledgements.
This work was supported by the National Foundation for Science & Technology Development (Vietnam) and Pukyong National University (Busan, Korea). The first author was partially supported by the Hanoi National University of Education under grant number SPHN20-07: “Differential Stability in Conic Linear Programming”. The second author was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2019R1A2C1008672).

References

(1) Bonnans J.F., Shapiro A.: Perturbation Analysis of Optimization Problems. Springer-Verlag, New York (2000)
(2) Mordukhovich B.S.: Variational Analysis and Generalized Differentiation, I: Basic Theory. Springer, Berlin (2006)
(3) Mordukhovich B.S.: Variational Analysis and Generalized Differentiation, II: Applications. Springer, Berlin (2006)
(4) Gauvin J., Tolle J.W.: Differential stability in nonlinear programming. SIAM J. Control Optim. 15, 294–311 (1977)
(5) Auslender A.: Differential stability in nonconvex and nondifferentiable programming. Math. Program. Study 10, 29–41 (1979)
(6) Gauvin J., Dubeau F.: Differential properties of the marginal function in mathematical programming. Math. Program. Study 19, 101–119 (1982)
(7) Rockafellar R.T.: Lagrange multipliers and subderivatives of optimal value functions in nonlinear programming. Math. Program. Study 17, 28–66 (1982)
(8) Gauvin J., Dubeau F.: Some examples and counterexamples for the stability analysis of nonlinear programming problems. Math. Program. Study 21, 69–78 (1984)
(9) Thibault L.: On subdifferentials of optimal value functions. SIAM J. Control Optim. 29, 1019–1036 (1991)
(10) Yen N.D.: Stability of the solution set of perturbed nonsmooth inequality systems and application. J. Optim. Theory Appl. 93, 199–225 (1997)
(11) Lee G.M., Tam N.N., Yen N.D.: Stability of a class of quadratic programs with a conic constraint. Taiwanese J. Math. 13, 1823–1836 (2009)
(12) Mordukhovich B.S., Nam N.M., Yen N.D.: Subgradients of marginal functions in parametric mathematical programming. Math. Program. Ser. B 116, 369–396 (2009)
(13) An D.T.V., Yen N.D.: Differential stability of convex optimization problems under inclusion constraints. Appl. Anal. 94, 108–128 (2015)
(14) An D.T.V.: Subdifferentials of Optimal Value Functions in Parametric Convex Optimization Problems. Ph.D. Dissertation, Institute of Mathematics, Vietnam Academy of Science and Technology, Hanoi (2018)
(15) Gauvin J.: Formulae for the sensitivity analysis of linear programming problems. In: Approximation, Optimization and Mathematical Economics, 117–120. Physica, Heidelberg (2001)
(16) Thieu N.N.: Some differential estimates in linear programming. Acta Math. Vietnam. 41, 243–249 (2016)
(17) Ben-Tal A., Nemirovski A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. SIAM, Philadelphia (2001)
(18) Luenberger D.G., Ye Y.: Linear and Nonlinear Programming. Internat. Ser. Oper. Res. Management Sci. 228, 4th ed., Springer, Cham (2016)
(19) Chuong T.D., Jeyakumar V.: Convergent conic linear programming relaxations for cone-convex polynomial programs. Oper. Res. Lett. 45, 220–226 (2017)
(20) Rockafellar R.T.: Convex Analysis. Princeton University Press, Princeton (1970)