136 Xuan Thuy, Hanoi, Vietnam
[email protected]

Do Sang Kim
Department of Applied Mathematics, Pukyong National University, Busan, Korea
[email protected]

Nguyen Dong Yen
Institute of Mathematics, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet, Hanoi 10307, Vietnam
[email protected]
Two Optimal Value Functions in Parametric Conic Linear Programming

Dedicated to Professor Franco Giannessi on the occasion of his 85th birthday.
Abstract
We consider the conic linear program given by a closed convex cone in a Euclidean space and a matrix, where the vector on the right-hand side of the constraint system and the vector defining the objective function are subject to change. Using the strict feasibility condition, we prove the local Lipschitz continuity and obtain some differentiability properties of the optimal value function of the problem under right-hand-side perturbations. For the optimal value function under linear perturbations of the objective function, similar differentiability properties are obtained under the assumption that both the primal problem and the dual problem are strictly feasible.
Keywords: Conic linear programming · Primal problem · Dual problem · Optimal value function · Lipschitz continuity · Differentiability properties · Increment estimates

MSC: 49K40 · 90C31 · 90C25 · 90C30

1 Introduction
If the feasible region or the objective function of a mathematical programming problem depends on a parameter, then the optimal value of the problem is a function of the parameter. In general, the optimal value function is a fairly complicated function. Continuity properties (resp., differentiability properties) of the optimal value function in parametric mathematical programming are usually classified as results on stability (resp., on differential stability) of optimization problems. Many results in this direction can be found in the books (Bonnans_Shapiro_2000, Chapters 4 and 5), (Mordukhovich_Book_I, Chapter 4), and (Mordukhovich_Book_II, Chapter 5), the papers [4-13], the dissertation An_Thesis, and the references therein.
In 2001, Gauvin Gauvin_2001 investigated the sensitivity of the optimal value of a parametric linear programming problem. He gave separate formulas for the increment of the optimal value with respect to perturbations of the right-hand-side vector in the constraint system, the cost vector, and the coefficients of the matrix defining the linear constraint system. In 2016, Thieu Thieu_2016 established some lower and upper bounds for the increment of the optimal value of a parametric linear program, where both the objective vector and the right-hand-side vector in the constraint system are subject to perturbations. He showed that the appearance of an extra second-order term in these estimates is unavoidable.
Conic linear programming is a natural extension of linear programming. In a linear program, a feasible point usually has nonnegative components, i.e., it belongs to the nonnegative orthant of a Euclidean space. In addition, the constraints are written as linear inequalities. Meanwhile, in a conic linear program, the constraint is written as an inequality with the ordering cone being a closed convex cone in a Euclidean space. Conic linear programs can be used in many applications that cannot be modeled by linear programs (see, e.g., (BN_2001, Section 2.2) and (Luenberger_Ye_2016, Chapter 6)). This is the reason why these special convex optimization problems have attracted much attention from researchers (see Ben-Tal and Nemirovski BN_2001, Bonnans and Shapiro Bonnans_Shapiro_2000, Luenberger and Ye Luenberger_Ye_2016, Chuong and Jeyakumar Chuong_Jeyakumar_2017, and the references therein).
It is of interest to know whether results similar to those of Gauvin Gauvin_2001 and Thieu Thieu_2016 can be obtained for conic linear programs. The aim of this paper is to show that the results of Gauvin_2001 admit certain generalizations in conic linear programming, and that some differentiability properties, as well as a local Lipschitz continuity property, can be established for the two related optimal value functions. The obtained results are analyzed via four concrete examples, which show that the optimal value functions of a conic linear program are more complicated than their counterparts in linear programming.
The remaining part of our paper has six sections. Section 2 contains some definitions, a strong duality theorem in conic linear programming, and a lemma. Section 3 is devoted to the locally Lipschitz continuity property of the optimal value function of a conic linear program under right-hand-side perturbations. Differentiability properties of this optimal value function are studied in Section 4. Some increment formulas for the function are established in Section 5. The optimal value function of a conic linear program under linear perturbations of the objective function is studied in Section 6. The final section presents some concluding remarks.
2 Preliminaries
One says that a nonempty subset of the Euclidean space is a cone if for all . The (positive) dual cone of a cone is given by , where T denotes the matrix transposition. Herein, vectors of Euclidean spaces are written as rows of real numbers in the text, but they are interpreted as columns of real numbers in matrix calculations. The closed ball centered at with radius is denoted by . By we denote the set of extended real values.
Let be a proper convex function and be such that is finite. According to Theorem 23.1 in Rockafellar_1970 , the directional derivative
of at w.r.t. a direction , always exists (it can take values or ), and one has The subdifferential of at is defined by
For a convex set in a Euclidean space , the normal cone of at is given by
Let be a multifunction. The domain and the graph of are given, respectively, by the formulas and
If is a convex set in , then is said to be a convex multifunction. In that case, the coderivative of at is the multifunction with
for all (see, e.g., (Mordukhovich_Book_I, Section 1.2) and An_Yen_2015).
From now on, let be a closed convex cone in . For any from , one writes if . Similarly, one writes if , where denotes the interior of .
Given a matrix , vectors and , we consider the primal conic linear optimization problem
(P)
Following Ben-Tal and Nemirovski (BN_2001, p. 52), we call
(D)
the dual problem of (P). The feasible region, the solution set, and the optimal value of (P) are denoted respectively by , , and . The feasible region, the solution set, and the optimal value of (D) are denoted respectively by , , and . By definitions, one has
and By a standard convention, and .
Thanks to the weak duality theorem in conic linear programming (BN_2001, Proposition 2.3.1), one has for any and . Hence, . By constructing suitable examples, we can show that the strict inequality is possible, i.e., a conic linear program may have a duality gap. Thus, the equality is guaranteed only if one imposes a certain regularity condition. In what follows, we will rely on the regularity condition called strict feasibility, which was intensively employed by Ben-Tal and Nemirovski BN_2001.
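In the special case where the cone is the nonnegative orthant, (P) reduces to an ordinary linear program and the weak duality relation can be checked numerically. The following sketch is an illustration of ours (the data are randomly generated, not taken from the paper): it builds a primal-feasible point of min {<c, x> : Ax >= b, x >= 0} and a dual-feasible point of max {<b, y> : A^T y <= c, y >= 0}, and verifies that the dual value never exceeds the primal value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem data for min <c, x> s.t. Ax >= b, x >= 0; here the ordering
# cone is the nonnegative orthant, so the conic program is an LP.
A = rng.standard_normal((3, 4))

# Construct a primal-feasible x: pick x >= 0, then choose b <= Ax.
x = rng.random(4)
b = A @ x - rng.random(3)          # nonnegative slack makes Ax >= b hold

# Construct a dual-feasible y: pick y >= 0, then choose c >= A^T y.
y = rng.random(3)
c = A.T @ y + rng.random(4)        # nonnegative slack makes A^T y <= c hold

# Weak duality: every dual-feasible value bounds every primal-feasible one.
primal_value = c @ x
dual_value = b @ y
assert dual_value <= primal_value + 1e-12
```

The inequality holds because c @ x >= (A.T @ y) @ x = y @ (A @ x) >= y @ b for any feasible pair, exactly the chain used in the weak duality argument.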
Definition 1
(See (BN_2001, Section 2.4)) If there exists a point satisfying , then one says that problem (P) is strictly feasible. If there is some satisfying and , then the dual problem (D) is said to be strictly feasible.
The main assertion of the strong duality theorem in conic linear programming can be stated as follows.
Lemma 1
(See (BN_2001, Theorem 2.4.1, assertions 3a and 3b)) If (P) is strictly feasible and , then (D) has a solution and . If (D) is strictly feasible and one has , then (P) has a solution and .
The following analogues of the Farkas lemma (Rockafellar_1970, p. 200) will be used repeatedly in the sequel.
Lemma 2
Let there be given a closed convex cone in , a matrix in , vectors and , and a real number .
(a) Suppose that there exists a point satisfying . Then, the inequality is a consequence of the conic-linear inequality iff there exists a vector satisfying and .
(b) Suppose that there exists a point satisfying and . Then, the inequality is a consequence of the system
iff there exists a vector satisfying and .
Proof To prove assertion (a), suppose that and there exists a vector satisfying and . Then we have
(1)
where the last inequality in (1) is valid because and . Combining this with the assumption , yields . Conversely, suppose that for all satisfying . It is clear that (P) is strictly feasible and . By the first assertion of Lemma 1, (D) has a solution and one has . Let be a solution of (D). Then we have , and . So, assertion (a) holds true.
Now, to prove assertion (b), suppose that and there exists a vector satisfying and . Then, we have
(2)
where the last inequality in (2) is valid because and . Since , this yields . Conversely, suppose that for all satisfying . So, (D) is strictly feasible by our assumption and . By the second assertion of Lemma 1, (P) has a solution and . Let be a solution of (P). Then we have and . This justifies the validity of (b).
Remark 1
The result in Lemma 2(a) is a known one (see (BN_2001, Proposition 2.4.3)). The assertions of Lemma 1 and Lemma 2(a) may be false if the assumption on the strict feasibility of the conic-linear inequality is removed (see (BN_2001, Example 2.4.2)). Similarly, the assertion of Lemma 2(b) may be false if the assumption on the strict feasibility of the conic-linear inequality is removed.
The optimal value depends on the parameters , , and . Following Gauvin Gauvin_2001 , who studied differential stability properties of linear programming problems, we consider the functions and Note that is the optimal value function of (P) under right-hand-side perturbations of the constraint set and is the optimal value function of (P) under perturbations of the objective function.
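To build intuition for the function studied in the next section, consider a toy one-dimensional instance of ours (not taken from the paper): minimize c*x subject to x >= b and x >= 0 with c > 0, whose optimal value in closed form is c*max(b, 0). The sketch below evaluates this optimal value function on a grid and checks numerically that it is convex and Lipschitz with modulus c, the two properties established for the optimal value function under right-hand-side perturbations below.

```python
import numpy as np

def phi(b, c=2.0):
    """Optimal value of the toy problem min {c*x : x >= b, x >= 0}
    for c > 0; the minimizer is x = max(b, 0)."""
    return c * max(b, 0.0)

bs = np.linspace(-3.0, 3.0, 121)
vals = np.array([phi(b) for b in bs])

# Convexity: a midpoint value never exceeds the average of its neighbors.
for i in range(len(bs) - 2):
    assert vals[i + 1] <= (vals[i] + vals[i + 2]) / 2 + 1e-12

# Lipschitz continuity with modulus c = 2 on the whole grid.
slopes = np.abs(np.diff(vals)) / np.diff(bs)
assert slopes.max() <= 2.0 + 1e-12
```

The kink at b = 0 also shows that the optimal value function need not be differentiable everywhere, which is why directional derivatives and subdifferentials are the right tools in Section 4.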
3 Locally Lipschitz Continuity of
According to a remark given in the paper of An and Yen (An_Yen_2015, p. 113), we know that is a convex function. The next theorem presents additional continuity properties of .
Theorem 3.1
Suppose that (P) is strictly feasible and . Then, there exists such that is finite for every and is Lipschitz on .
Proof On one hand, since (P) is strictly feasible, one selects a point satisfying , i.e., . Then there exists a convex open neighborhood of satisfying for every . So, is a feasible point of the problem
()
for all . Moreover . On the other hand, applying the weak duality theorem (BN_2001, Proposition 2.3.1) to the problems () and
()
one has for all satisfying and for every satisfying , . Therefore,
This means that
So, we have because
(3)
where the last inequality in (3) is valid as is nonempty (see Lemma 1). It follows that is finite for every , i.e., . Combining this with the convexity of , by (Rockafellar_1970, Theorem 24.7) we can assert that is locally Lipschitz on .
4 Differentiability Properties of
First, let us show that the solution set of (D) possesses a remarkable stability property, provided that the assumptions of Theorem 3.1 are satisfied.
Proposition 1
Suppose that (P) is strictly feasible and . Then there exists such that is nonempty and compact for every in .
Proof Since (P) is strictly feasible, there is satisfying . Therefore, there is a real number satisfying for all . According to Theorem 3.1, without loss of generality we can assume that is finite for all . Fix a vector . Then, the problem () is strictly feasible and . So, applying Lemma 1 to the pair of problems () and (), we deduce that is nonempty and
Since , this implies Clearly, is a closed set. To prove by contradiction that is compact, suppose that is unbounded. Select a sequence in satisfying the condition . Define . Since , without loss of generality we can assume that the sequence converges to a vector with . Since the sequence is contained in the closed cone , one has . Observe that
and
It follows that . Meanwhile, since and , . We have obtained a contradiction. Thus, is bounded; hence it is compact.
The following question arises naturally from Proposition 1: if (D) is strictly feasible and , is the solution set nonempty and compact? By the second assertion of Lemma 1, if (D) is strictly feasible and , then is nonempty. However, may not be a compact set. The next example illustrates this statement.
Example 1
Consider the problem (P) with , , , , . Here we have . For , one has and . Thus, (D) is strictly feasible. Meanwhile, is unbounded.
The following result gives a formula for the directional derivative of at in a direction .
Theorem 4.1
Suppose that (P) is strictly feasible and (P) has a solution. Then, for every direction , one has
(4)
Proof Fix a direction and consider the function given by
where . Let for all . Clearly, is convex and continuous on . Define the multifunction by setting
Note that is a convex multifunction. Consider the parametric convex optimization problem which depends on the parameter , and observe that is the optimal value function of this problem. By a remark given in (An_Yen_2015, p. 113) we know that is a convex function. Applying a result of An and Yen (An_Yen_2015, Theorem 4.2) to and to a solution of , we have
Since , this equality yields
(5)
By the definition of coderivative for convex multifunctions,
(6)
Let
Clearly, the inequality is equivalent to . So, from (6) one gets
(7)
Since (P) is strictly feasible, there exists satisfying . Setting , we have . Then, applying the Farkas-type result in Lemma 2(a) to the inequality system and the vector , we can assert that for all such that iff there exists a vector satisfying and . Since
the equality is equivalent to the conditions and . Therefore, combining (7) with the description of the feasible region of the dual problem (D), we have
(8)
Since (P) is strictly feasible and (P) has a solution, by the strong duality theorem in Lemma 1, (D) has a solution and . Hence, by (8),
(9)
Combining (9) with (5) gives Then, by the well-known result on the relationship between the directional derivative and the subdifferential of a convex function (Rockafellar_1970, Theorem 23.4), we obtain
(10)
where the last equality in (10) is valid because is compact. Finally, using (10) and the fact that , we get the desired formula (4).
Based on Theorem 4.1, the next proposition provides us with a formula for the subdifferential of at .
Proposition 2
If (P) is strictly feasible and (P) possesses a solution, then .
Proof Take a vector . For any and , from the inequality it follows that Letting , one has Combining this with the equation (4) gives
(11)
To show that belongs to , suppose the contrary: . Since is a nonempty convex set, by the strong separation theorem (see, e.g., (Rockafellar_1970, Corollary 11.4.2)), there exists a vector such that . This contradicts (11). We have thus proved that . To obtain the opposite inclusion, take any . For all , by (4),
(12)
Besides, by (Rockafellar_1970, Theorem 23.1) one gets
(13)
Combining (12) with (13) we have for all . So, . The proof is complete.
5 Increment Estimates for
Roughly speaking, the following statement gives an analogue of (Gauvin_2001, Theorem 1) for conic linear programs.
Theorem 5.1
Suppose that (P) is strictly feasible and . Then, for every and , one has
In addition, for every , there exists such that
(14)
for all .
Proof By Lemma 1, (D) has a solution and ; hence . On one hand, applying the weak duality theorem (BN_2001, Proposition 2.3.1) to the problems
()
and
()
one has for all satisfying and for every satisfying , . Consequently,
(15)
On the other hand, it is clear that
(16)
Combining (15) and (16), one gets
It follows that
(17)
Combining (17) with the fact that , one has .
Since (P) is strictly feasible, there is a point satisfying . Then one can find such that for all . Hence, () is strictly feasible. Applying Lemma 1 to the pair of problems () and (), where , we have
Thus, the inequality (14) has been proved for all .
We now consider the case where is a polyhedral convex cone.
Theorem 5.2
Suppose that is a polyhedral convex cone and is finite. Then for every , there exists such that
(18)
for all .
Proof Since is a polyhedral convex cone in , there exists a matrix satisfying Then, the problem (P) can be written in the form
(P’)
The dual problem of (P’) is the problem
(D’)
Since is finite, (P’) has a solution. Moreover, both problems (P’) and (D’) have solutions and their optimal values are equal. Then, for every , applying Theorem 1 in Gauvin_2001 to the problem (P’), one gets
for any sufficiently small. Denoting , we have
(19)
On the other hand, since , where is the -th row of the matrix , one has
by Proposition 2.42 in Bonnans_Shapiro_2000; hence . Combining the latter with the equation (19), we can assert that (18) holds.
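The reduction used in this proof, which replaces the conic constraint for a polyhedral cone K = {x : Mx >= 0} by extra linear inequalities, can be sketched as follows. The function name and the sample data are illustrative assumptions of ours, not notation from the paper; the construction simply stacks M under A with a zero right-hand side.

```python
import numpy as np

def to_linear_program(A, b, M):
    """Rewrite min {<c,x> : Ax >= b, x in K}, where K = {x : Mx >= 0}
    is a polyhedral cone, as the ordinary LP min {<c,x> : Gx >= h}.
    (Illustrative construction; the objective vector c is untouched.)"""
    G = np.vstack([A, M])                       # conic rows stacked under A
    h = np.concatenate([b, np.zeros(M.shape[0])])
    return G, h

# Tiny example: one linear constraint in R^2 with K the nonnegative
# orthant, i.e., M is the identity matrix.
A = np.array([[1.0, 2.0]])
b = np.array([1.0])
M = np.eye(2)
G, h = to_linear_program(A, b, M)
assert G.shape == (3, 2) and h.shape == (3,)
```

With M the identity this recovers the classical LP form, which is the special case treated by Gauvin_2001 according to Remark 2.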
Remark 2
Theorem 5.2 is a generalization of Theorem 1 in Gauvin_2001 , where the case was treated.
The following two examples are designed as illustrations for Theorem 5.1.
Example 2
Consider the conic problem (P) with , is the 3D ice cream cone in , i.e., , , , . We have
(20)
Clearly, and Note that iff and . In addition . It follows that
Obviously, . Given with , and for every , one has iff . On one hand, for small enough,
On the other hand, and
Example 3
Consider the conic problem (P) with , is the 3D ice cream cone in , , , , and . Clearly, the condition is equivalent to . When is small enough, the latter can be rewritten as . It follows that and . Meanwhile, and . Therefore, and
It is clear that for every .
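Since the formulas in Examples 2 and 3 are driven by the 3D ice cream cone, a small numerical check may be helpful. The sketch below assumes the common convention that the cone consists of the points whose last coordinate dominates the Euclidean norm of the first two; it tests membership and verifies on random samples the self-duality property <u, v> >= 0 for u, v in the cone, consistent with the dual cone coinciding with the cone itself.

```python
import numpy as np

def in_ice_cream_cone(x, tol=1e-12):
    """Membership test for the 3D second-order (Lorentz) cone, written
    here as {(x1, x2, x3) : x3 >= sqrt(x1**2 + x2**2)}."""
    return x[2] >= np.hypot(x[0], x[1]) - tol

assert in_ice_cream_cone(np.array([3.0, 4.0, 5.0]))       # boundary point
assert not in_ice_cream_cone(np.array([3.0, 4.0, 4.0]))   # outside

# Self-duality check: the inner product of any two cone points is >= 0.
rng = np.random.default_rng(1)
for _ in range(100):
    g = rng.standard_normal(2)
    u = np.array([g[0], g[1], np.hypot(g[0], g[1]) + rng.random()])
    g = rng.standard_normal(2)
    v = np.array([g[0], g[1], np.hypot(g[0], g[1]) + rng.random()])
    assert u @ v >= -1e-9
```

The inequality follows from the Cauchy-Schwarz bound: the product of the last coordinates dominates the magnitude of the inner product of the first two coordinates.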
6 Properties of the Function
In this section, differentiability properties of the function and some increment estimates will be obtained.
Theorem 6.1
Suppose that both problems (P) and (D) are strictly feasible. Then, one has
(21)
Proof By the assumed strict feasibility of (P) and (D) (see Definition 1), there exist and such that , , and . Since , formula (21) is valid for . Fix a vector and define the function by
Note that for all and . Thanks to the weak duality theorem in (BN_2001, Theorem 2.4.1, assertion 2), we have
It follows that and . So, by Lemma 1, (P) and (D) have solutions, and . Hence, the relations imply that the value is finite. Clearly, for any .
If , then for some . Note that
Since , one can find such that for every . For any , applying the weak duality theorem in (BN_2001, Theorem 2.4.1, assertion 2) to the conic linear problem
(22)
and its dual
(23)
one has for all satisfying . Consequently, So, we have . Since for all , this implies that is finite for every . Now, fixing a number and applying Lemma 1 for the problems (22) and (23), one has
Therefore,
(24)
To apply a result from An_Yen_2015 , we define a function by setting
Clearly, is convex and continuous on . Let be the multifunction given by Note that is a convex multifunction. Consider the convex optimization problem
(25)
which depends on the parameter , and observe by (24) that is the optimal value function of this problem. According to a remark given in (An_Yen_2015, p. 113), is a convex function. Let be a solution of . Then, it is easy to verify that is a solution of the parametric problem (25) at . Hence, applying (An_Yen_2015, Theorem 4.2) to (25) yields
Since , this implies that
(26)
By the definition of coderivative for convex multifunctions,
Hence,
(27)
Let
Since , the system is equivalent to Hence, from (27) one gets
(28)
Setting , we have and . Hence, by the Farkas-type result in Lemma 2(b), we can assert that the inequality holds for every satisfying iff there exists satisfying and . Since , one sees that the inequality is equivalent to the system of conditions and . Therefore, combining (28) with the description of the feasible region of (P), we have
Hence,
(29)
Since , by (29) one has
From this and (26) it follows that Then, since is a proper convex function and , by (Rockafellar_1970, Theorem 23.4) we have
(30)
As , (30) yields
We have thus proved the first assertion in (21).
Now, let . In this case, we have for every . Indeed, if for some , then the objective function of the minimization problem (22) is bounded from below. So, applying Lemma 1 for the problems (22) and (23), we can assert that (23) has a solution . Since and , one gets . This is impossible because . Thus, for every . It follows that . Then we have .
Proposition 3
For every and , one has
(31)
and
(32)
Proof Let and be given arbitrarily. Clearly,
So, the inequality in (31) is valid. In addition,
This means that the estimate (32) holds.
Example 4
The next increment formula for is a generalization of the corresponding result stated in (Gauvin_2001, p. 119), where the case was considered.
Theorem 6.2
Suppose that is a polyhedral convex cone, (P) is feasible, and (D) is strictly feasible. Then, for every , there exists such that
(33)
for all .
Proof By our assumptions, there exists such that and . Since , there is satisfying . Then, one has As , there exists a number such that for all . For each , the linear optimization problem and its dual are feasible. Therefore, by the well-known strong duality theorem in linear programming one has
and is finite. So,
(34)
where
denotes the unit matrix in , and . Clearly, is a polyhedral convex cone. So, applying Theorem 5.2 to the problem one finds a number such that for any one has
Combining this with (34), we have
(35)
where the equality follows from (34) if one takes . Substituting with and into (35), one gets
Setting , one has
This proves that (33) holds.
7 Conclusions
Based on strict feasibility conditions and duality theory in conic linear programming, we have shown that the two optimal value functions of a conic linear program, under linear perturbations of either the constraint system or the objective function, have some nice continuity and differentiability properties. It turns out that if the convex cone is non-polyhedral, then the behavior of the optimal value functions in question is more complicated than that of the corresponding optimal value functions of a linear program.
Properties of the solution maps of parametric conic linear programs and generalizations of our results to infinite-dimensional conic linear programming deserve further investigation.
Acknowledgements.
This work was supported by the National Foundation for Science and Technology Development (Vietnam) and Pukyong National University (Busan, Korea). The first author was partially supported by the Hanoi National University of Education under grant number SPHN20-07: “Differential Stability in Conic Linear Programming”. The second author was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2019R1A2C1008672).

References
- (1) Bonnans J.F., Shapiro A.: Perturbation Analysis of Optimization Problems. Springer-Verlag, New York (2000)
- (2) Mordukhovich B.S.: Variational Analysis and Generalized Differentiation, I: Basic Theory. Springer, Berlin (2006)
- (3) Mordukhovich B.S.: Variational Analysis and Generalized Differentiation, II: Applications. Springer, Berlin (2006)
- (4) Gauvin J., Tolle J.W.: Differential stability in nonlinear programming. SIAM J. Control Optim. 15, 294–311 (1977)
- (5) Auslender A.: Differential stability in nonconvex and nondifferentiable programming. Math. Program. Study 10, 29–41 (1979)
- (6) Gauvin J., Dubeau F.: Differential properties of the marginal function in mathematical programming. Math. Program. Study 19, 101–119 (1982)
- (7) Rockafellar R.T.: Lagrange multipliers and subderivatives of optimal value functions in nonlinear programming. Math. Program. Study 17, 28–66 (1982)
- (8) Gauvin J., Dubeau F.: Some examples and counterexamples for the stability analysis of nonlinear programming problems. Math. Program. Study 21, 69–78 (1984)
- (9) Thibault L.: On subdifferentials of optimal value functions. SIAM J. Control Optim. 29, 1019–1036 (1991)
- (10) Yen N.D.: Stability of the solution set of perturbed nonsmooth inequality systems and application. J. Optim. Theory Appl. 93, 199–225 (1997)
- (11) Lee G.M., Tam N.N., Yen N.D.: Stability of a class of quadratic programs with a conic constraint. Taiwanese J. Math. 13, 1823–1836 (2009)
- (12) Mordukhovich B.S., Nam N.M., Yen N.D.: Subgradients of marginal functions in parametric mathematical programming. Math. Program. Ser. B 116, 369–396 (2009)
- (13) An D.T.V., Yen N.D.: Differential stability of convex optimization problems under inclusion constraints. Appl. Anal. 94, 108–128 (2015)
- (14) An D.T.V.: Subdifferentials of Optimal Value Functions in Parametric Convex Optimization Problems. Ph.D. Dissertation, Institute of Mathematics, Vietnam Academy of Science and Technology, Hanoi (2018)
- (15) Gauvin J.: Formulae for the sensitivity analysis of linear programming problems. In: Approximation, Optimization and Mathematical Economics, pp. 117–120. Physica, Heidelberg (2001)
- (16) Thieu N.N.: Some differential estimates in linear programming. Acta Math. Vietnam. 41, 243–249 (2016)
- (17) Ben-Tal A., Nemirovski A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. SIAM, Philadelphia (2001)
- (18) Luenberger D.G., Ye Y.: Linear and Nonlinear Programming. Internat. Ser. Oper. Res. Management Sci. 228, 4th ed., Springer, Cham (2016)
- (19) Chuong T.D., Jeyakumar V.: Convergent conic linear programming relaxations for cone-convex polynomial programs. Oper. Res. Lett. 45, 220–226 (2017)
- (20) Rockafellar R.T.: Convex Analysis. Princeton University Press, Princeton (1970)