Stochastic Decomposition Method for Two-Stage Distributionally Robust Optimization
Abstract. In this paper, we present a sequential sampling-based algorithm for the two-stage distributionally robust linear programming (2-DRLP) models. The 2-DRLP models are defined over a general class of ambiguity sets with discrete or continuous probability distributions. The algorithm is a distributionally robust version of the well-known stochastic decomposition algorithm of Higle and Sen (Math. of OR 16(3), 650-669, 1991) for a two-stage stochastic linear program. We refer to the algorithm as the distributionally robust stochastic decomposition (DRSD) method. The key features of the algorithm include (1) it works with data-driven approximations of ambiguity sets that are constructed using samples of increasing size and (2) efficient construction of approximations of the worst-case expectation function that requires solving only two second-stage subproblems in every iteration. We identify conditions under which the ambiguity set approximations converge to the true ambiguity sets and show that the DRSD method asymptotically identifies an optimal solution, with probability one. We also computationally evaluate the performance of the DRSD method for solving distributionally robust versions of instances considered in the stochastic programming literature. The numerical results corroborate the analytical behavior of the DRSD method and illustrate the computational advantage over an external sampling-based decomposition approach (the distributionally robust L-shaped method).
1 Introduction
Many applications of practical interest have been formulated as stochastic programming (SP) models. The models with recourse, particularly in a two-stage setting, have gained wide acceptance across application domains. These two-stage stochastic linear programs (2-SLPs) can be stated as follows:
(1)  $\min_{x \in X} \ c^\top x + \mathbb{E}_P\big[h(x, \tilde{\omega})\big].$
One needs complete knowledge of the probability distribution to formulate the above problem. Alternatively, one must have an appropriate means to simulate observations of the random variable so that a sample average approximation (SAA) problem with a finite number of scenarios can be formulated and solved. In many practical applications, the distribution associated with the random parameters in the optimization model is not precisely known. It either has to be estimated from data or constructed by expert judgments, which tend to be subjective. In any case, identifying a distribution using available information may be cumbersome at best. Stochastic min-max programming, which has gained significant attention in recent years under the name of distributionally robust optimization (DRO), is intended to alleviate this ambiguity in distributional information.
In this paper, we study a particular manifestation of the DRO problem in the two-stage setting, viz., the two-stage distributionally robust linear programming (2-DRLP) problem. This problem is stated as:
(2)  $\min_{x \in X} \ f(x) := c^\top x + \mathcal{H}(x).$
Here, $c$ is the coefficient vector of a linear cost function and $X$ is the feasible set of the first-stage decision vector $x$. The feasible region takes the form of a compact polyhedron $X := \{x \in \mathbb{R}^{n_1} : Ax \le b\}$, where $A \in \mathbb{R}^{m_1 \times n_1}$ and $b \in \mathbb{R}^{m_1}$. The function $\mathcal{H}$ is the worst-case expected recourse cost, which is formally defined as follows:
(3)  $\mathcal{H}(x) := \max_{P \in \mathfrak{P}} \ \mathbb{E}_P\big[h(x, \tilde{\omega})\big].$
The random vector $\tilde{\omega}$ is defined on a measurable space $(\Omega, \mathcal{F})$, where $\Omega$ is the sample space equipped with the sigma-algebra $\mathcal{F}$, and $\mathfrak{P}$ is a set of continuous or discrete probability distributions defined on this measurable space. The set of probability distributions $\mathfrak{P}$ is often referred to as the ambiguity set. The expectation operation is taken with respect to the probability distribution $P \in \mathfrak{P}$. For a given $x$, we refer to the optimization problem in (3) as the distribution separation problem. For a given realization $\omega$ of the random vector $\tilde{\omega}$, the recourse cost $h(x,\omega)$ in (3) is the optimal value of the following second-stage linear program:
(4)  $h(x,\omega) := \min \ \big\{ d^\top y \ : \ Dy = \xi(\omega) - C(\omega)\,x, \ y \ge 0 \big\}.$
In general, the second-stage parameters $d$, $D$, $\xi$, and $C$ can depend on the uncertainty.
Most data-driven approaches for 2-SLP, such as SAA, tackle the problem in two steps. In the first step, an uncertainty representation is generated using a finite set of observations that serves as an approximation of the sample space, and the corresponding empirical distribution serves as an approximation of the true distribution $P$. For a given uncertainty representation, one obtains a deterministic approximation of (1). In the second step, the approximate problem is solved using deterministic optimization methods. Such a two-step approach may lead to poor out-of-sample performance, forcing the entire process to be repeated from scratch with an improved uncertainty representation. Since sampling is performed prior to the optimization step, the two-step approach is also referred to as an external sampling procedure.
The data-driven approaches for DRO problems avoid working with a fixed approximation of the distribution in the first step. However, the ambiguity set is still defined either over the original sample space $\Omega$ or a finite approximation of it. Therefore, the resulting ambiguity set is a deterministic representation of the true ambiguity set $\mathfrak{P}$ in (2). Once again, deterministic optimization tools are employed to solve the DRO problem. In many data-driven settings, prior knowledge of the sample space may not be available, and using a finite sample to approximate the original sample space may result in out-of-sample performance similar to that of the external sampling approach for 2-SLP.
1.1 Contributions
In light of the above observations regarding the two-step procedure for data-driven optimization, the main contributions of this manuscript are highlighted in the following.
1. A Sequential Sampling Algorithm: We present a sequential sampling approach for solving 2-DRLP. We refer to this algorithm as the distributionally robust stochastic decomposition (DRSD) method, following its risk-neutral predecessor, the two-stage stochastic decomposition (SD) method [20]. The algorithm uses a sequence of ambiguity sets that evolve over the course of the algorithm due to the sequential inclusion of new observations. While the simulation step improves the representation of the ambiguity set, the optimization step improves the solution in an online manner. Therefore, the DRSD method concurrently performs simulation and optimization steps. Moreover, the algorithm design does not depend on any specific ambiguity set description, and hence, is suitable for a general family of ambiguity sets.
2. Convergence Analysis: The DRSD method is an inexact bundle method that creates an outer linearization for the dynamically evolving approximation of the first-stage problem. We provide an asymptotic analysis of DRSD and identify conditions on the ambiguity sets under which the sequential sampling approach identifies an optimal solution to the 2-DRLP problem in (2) with probability one.
3. Computational Evidence of Performance: We provide the first evidence illustrating the advantages of a sequential sampling approach for DRO through computational experiments conducted on well-known instances from the SP literature.
1.2 Related work
The DRSD method has its roots in two-stage SP, in particular two-stage SD. In this subsection, we review the related two-stage SP literature along with decomposition and reformulation-based approaches for 2-DRLP.
For 2-SLP problems with finite support, including the SAA problem, the L-shaped method due to Van Slyke and Wets [40] has proven to be very effective. Other algorithms for 2-SLPs, such as the Dantzig-Wolfe decomposition [12] and the progressive hedging (PH) algorithm [29], also operate on problems with finite support. The well-established theory of SAA (see Chapter 5 in [37]) supports the external sampling procedure for 2-SLP. The quality of the solution obtained by solving an SAA problem is assessed using procedures developed, e.g., in [4]. When the quality of the SAA solution is not acceptable, a new SAA is constructed with a larger number of observations. Prior works, such as [5] and [31], provide rules on how to choose the sequence of sample sizes in a sequential SAA procedure.
In contrast to the above, SD-based methods incorporate one new observation in every iteration to create approximations of the dynamically updated SAA of (1). First proposed in [20], this method has seen significant development in the past three decades with the introduction of a quadratic regularization term in [21], statistical optimality rules [23], and extensions to multistage stochastic linear programs [18, 34]. The DRSD method presented in this manuscript extends the sequential sampling approach (i.e., SD) for 2-SLPs to DRO problems. Since the simulation of new observations and the optimization steps are carried out in every iteration of the SD-based methods, they can also be viewed as internal sampling methods.
The concept of DRO dates back to the work of Scarf [33], and it has gained significant attention in recent years. We refer the reader to [27] for a comprehensive treatment of various aspects of DRO. The algorithmic works on DRO follow either decomposition-based or reformulation-based approaches. The decomposition-based methods for 2-DRLP mimic the two-stage SP approach of using a deterministic representation of the sample space with a finite number of observations. As a consequence, the SP solution methods, with suitable adaptation, can be applied to solve the 2-DRLP problems. For instance, Breton and El Hachem [10, 11] apply the PH algorithm to a 2-DRLP model with a moment-based ambiguity set. Riis and Andersen [28] extend the L-shaped method for 2-DRLP with continuous recourse and a moment-based ambiguity set. Bansal et al. [1] extend the algorithm in [28], which they refer to as the distributionally robust (DR) L-shaped method, to solve 2-DRLPs, with an ambiguity set defined over a polytope, in finitely many iterations. Further extensions of this decomposition approach are presented in [1] and [2] for DRO with mixed-binary recourse and disjunctive programs, respectively.
Another predominant approach to solve 2-DRLP problems is to reformulate the distribution separation problem in (3) as a minimization problem, pose the problem in (2) as a single deterministic optimization problem, and use off-the-shelf deterministic optimization tools to solve the reformulation. For example, Shapiro and Kleywegt [38] and Shapiro and Ahmed [35] provided approaches for the 2-DRLP problem with a moment matching set to derive an equivalent stochastic program with a certain reference distribution. Bertsimas et al. [7] provided tight semidefinite programming reformulations for 2-DRLP where the ambiguity set is defined using multivariate distributions with known first and second moments. Likewise, Hanasusanto and Kuhn [19] provided a conic programming reformulation for the 2-DRLP problem where the ambiguity set comprises a 2-Wasserstein ball centered at a discrete distribution. Xie [41] provided similar reformulations to tractable convex programs for 2-DRLP problems with an ambiguity set defined using the Wasserstein metric. Jiang and Guan [24] reduced the worst-case expectation in 2-DRLP, where the ambiguity set is defined using the $L_1$-norm on the space of all (continuous and discrete) probability distributions, to a convex combination of CVaR and an essential supremum. By taking the dual of the inner maximization, Bayraksan and Love [3] demonstrated that a 2-DRLP where the ambiguity set is defined using $\phi$-divergences and a finite sample space is equivalent to a 2-SLP with a coherent risk measure. When reformulations result in equivalent stochastic programs (in [24, 3, 36], for instance), an SAA of the reformulation is used to obtain an approximate solution.
Data-driven approaches for DRO have been presented for specific ambiguity sets. In [13], problems with an ellipsoidal moment-based ambiguity set whose parameters are estimated using sampled data are addressed. Mohajerin Esfahani and Kuhn tackled data-driven problems with Wasserstein metric-based ambiguity sets via convex reformulations in [26]. In both of these works, the authors provide finite-sample performance guarantees that probabilistically bound the gap between the approximate and true DRO problems. Sun and Xu presented an asymptotic convergence analysis of DRO problems with ambiguity sets based on moments and mixture distributions constructed using a finite set of observations in [39]. A practical approach to incorporate the results of these works to identify a high-quality DRO solution would be similar to the sequential SAA procedure for SP in [5]. Such an approach involves the following steps performed in series: a deterministic representation of the ambiguity set using sampled observations, applying an appropriate reformulation, and solving the resulting deterministic optimization problem. If the quality of the solution is deemed insufficient, then the entire series of steps is repeated with an improved representation of the ambiguity set (possibly with a larger number of observations).
Organization
The remainder of the paper is organized as follows. In §2, we present the two key ideas of the DRSD, viz., the sequential approximation of the ambiguity set and of the recourse function. We provide a detailed description of the DRSD method in §3. We show the convergence of the value functions and solutions generated by the DRSD method in §4. We present the results of our computational experiments in §5, and finally we conclude and discuss potential extensions of this work in §6.
Notations and Definitions
We define the ambiguity sets over $\mathcal{M}$, the set of all finite signed measures on the measurable space $(\Omega, \mathcal{F})$. A nonnegative measure $P \in \mathcal{M}$ (written as $P \ge 0$) that satisfies $P(\Omega) = 1$ is a probability measure. For probability distributions $P$ and $Q$, we define
(5)  $d_{\mathcal{G}}(P, Q) := \sup_{g \in \mathcal{G}} \ \Big| \int_\Omega g \,\mathrm{d}P - \int_\Omega g \,\mathrm{d}Q \Big|$
as the uniform distance of expectation, where $\mathcal{G}$ is a class of measurable functions. The above is a distance with $\zeta$-structure that is used for stability analysis in SP [30]. The distance from a single probability distribution $P$ to a set of distributions $\mathfrak{P}$ is given as $d_{\mathcal{G}}(P, \mathfrak{P}) := \inf_{Q \in \mathfrak{P}} d_{\mathcal{G}}(P, Q)$. The deviation of a set of probability distributions $\mathfrak{A}$ from another set $\mathfrak{B}$ is given as
$\mathbb{D}(\mathfrak{A}, \mathfrak{B}) := \sup_{P \in \mathfrak{A}} \inf_{Q \in \mathfrak{B}} d_{\mathcal{G}}(P, Q).$
Finally, the Hausdorff distance between $\mathfrak{A}$ and $\mathfrak{B}$ is defined as
$\mathbb{H}(\mathfrak{A}, \mathfrak{B}) := \max\big\{ \mathbb{D}(\mathfrak{A}, \mathfrak{B}), \ \mathbb{D}(\mathfrak{B}, \mathfrak{A}) \big\}.$
With suitable definitions of the set $\mathcal{G}$, the distance in (5) recovers the bounded Lipschitz, the Kantorovich, and the $p$-th order Fortet-Mourier metrics (see [30]). For instance, taking $\mathcal{G}$ to be the set of functions with $|g(\omega) - g(\omega')| \le \|\omega - \omega'\|$ for all $\omega, \omega' \in \Omega$ yields the Kantorovich metric.
2 Approximating Ambiguity Set and Recourse Function
In this section, we present the building blocks for the DRSD method. Specifically, we present the procedures to obtain approximations of the ambiguity set $\mathfrak{P}$ and the recourse function $h$. These procedures will be embedded within a sequential sampling-based approach. Going forward, we make the following assumptions on the 2-DRLP models:
(A1) The first-stage feasible region $X$ is a non-empty and compact set.
(A2) The recourse function satisfies relatively complete recourse, i.e., the second-stage problem (4) is feasible for every $x \in X$ and $\omega \in \Omega$. The dual feasible region of the recourse problem is a non-empty, compact polyhedral set. The transfer (or technology) matrix satisfies $\sup_{\omega \in \Omega} \|C(\omega)\| < \infty$.
(A3) The randomness affects only the right-hand sides of the constraints in (4).
(A4) The sample space $\Omega$ is a compact metric space and the ambiguity set $\mathfrak{P}$ is non-empty.
As a consequence of (A2), the recourse function satisfies $h(x, \tilde{\omega}) < \infty$ with probability one for all $x \in X$. It also implies that the second-stage feasible region, i.e., $\{y \ge 0 : Dy = \xi(\omega) - C(\omega)x\}$, is non-empty for all $x \in X$ and every $\omega \in \Omega$. The non-empty dual feasible region implies that there exists a $\pi$ such that $D^\top \pi \le d$. Without loss of generality, we assume that $0 \in \Pi$, so that $h(x,\omega) \ge 0$. As a consequence of (A3), the cost coefficient vector $d$ and the recourse matrix $D$ are not affected by uncertainty. Problems that satisfy (A3) are said to have fixed recourse. Finally, the compactness of the support guarantees that every probability measure in $\mathfrak{P}$ is tight.
2.1 Approximating the Ambiguity Set
The DRO approach assumes only partial knowledge about the underlying uncertainty, which is captured by a suitable description of the ambiguity set. An ambiguity set must capture the true distribution with absolute certainty or a high degree of confidence, and it must be computationally manageable. In this section, we present a family of ambiguity sets that are of interest to us in this paper.
The computational aspects of solving a DRO problem rely heavily on the structure of the ambiguity set. The descriptions of these structures involve parameters that are determined based on the practitioner's risk preferences. The ambiguity set descriptions that are prevalent in the literature include moment-based ambiguity sets with linear constraints (e.g., [14]) or conic constraints (e.g., [13]); Kantorovich distance or Wasserstein metric-based ambiguity sets [25]; $\zeta$-structure metrics [42]; $\phi$-divergences such as the $\chi^2$ distance and the Kullback-Leibler divergence [6]; and Prokhorov metrics [15], among others. Although the design of the DRSD method can work with any ambiguity set description defined over a compact sample space, we use 2-DRLPs with moment-based ambiguity sets and Wasserstein distance-based ambiguity sets to illustrate the algorithm in detail.
In a data-driven setting, the parameters used in the description of ambiguity sets are estimated using a finite set of independent observations, which can either be past realizations of the random variable or simulated by an oracle. We will denote such a sample by $\hat\Omega^k := \{\omega^1, \dots, \omega^k\}$ with $|\hat\Omega^k| = k$. Naturally, we can view $\hat\Omega^k$ as a random sample and define the empirical frequency
(6)  $p^k(\omega) := \frac{N^k(\omega)}{k}, \qquad \omega \in \hat\Omega^k,$
where $N^k(\omega)$ denotes the number of times $\omega$ is observed in the sample. Since, in the sequential sampling setting, the sample set is updated within the optimization algorithm, it is worthwhile to note that the empirical frequency can be updated using the following recursive equations:
(10)  $p^{k+1}(\omega) = \begin{cases} \dfrac{k}{k+1}\, p^k(\omega) & \text{if } \omega \in \hat\Omega^k \setminus \{\omega^{k+1}\}, \\[4pt] \dfrac{k}{k+1}\, p^k(\omega) + \dfrac{1}{k+1} & \text{if } \omega = \omega^{k+1}, \end{cases}$
where $\hat\Omega^{k+1} = \hat\Omega^k \cup \{\omega^{k+1}\}$ and, by convention, $p^k(\omega^{k+1}) = 0$ if $\omega^{k+1} \notin \hat\Omega^k$. We will succinctly denote the above operation using the mapping $\Lambda^k$, so that $p^{k+1} = \Lambda^k(p^k)$.
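The following is a minimal sketch of this recursive update, assuming observations are hashable objects (e.g., tuples); the function name and data layout are illustrative, not part of the authors' implementation.

```python
def update_frequency(p, k, new_obs):
    """Sketch of the recursive update (10): rescale the existing weights
    by k/(k+1) and give the new observation an extra weight of 1/(k+1).
    `p` maps observations to their empirical frequencies p^k."""
    p_next = {omega: k / (k + 1) * w for omega, w in p.items()}
    p_next[new_obs] = p_next.get(new_obs, 0.0) + 1.0 / (k + 1)
    return p_next  # weights still sum to k/(k+1) + 1/(k+1) = 1
```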
In the remainder of this section, we will present alternate descriptions of ambiguity sets and show the construction of what we will refer to as approximate ambiguity sets, denoted by $\hat{\mathfrak{P}}^k$. Let $\mathcal{F}^k$ be the $\sigma$-algebra generated by the observations in the sample $\hat\Omega^k$. Notice that $\mathcal{F}^k \subseteq \mathcal{F}^{k+1}$, and hence, $\{\mathcal{F}^k\}$ is a filtration. We will define the approximate ambiguity sets over the measurable space $(\hat\Omega^k, \mathcal{F}^k)$. These sets should be interpreted to include all distributions that could have generated the observations in $\hat\Omega^k$ and that share a certain relationship with the sample statistics. We will use $\hat{\mathcal{M}}^k$ to denote the set of finite signed measures on $(\hat\Omega^k, \mathcal{F}^k)$.
2.1.1 Moment-based Ambiguity Sets
Given the first $q$ moments associated with the random variable $\tilde{\omega}$, the moment-based ambiguity set can be defined as

(13)  $\mathfrak{P}_{\mathrm{mom}} := \big\{ P \in \mathcal{M} \ : \ P \ge 0, \ P(\Omega) = 1, \ \mathbb{E}_P[\psi_j(\tilde{\omega})] \le b_j \ \ j = 1,\dots,q \big\}.$
While the first two constraints ensure that $P$ is a probability measure, the moment requirements are guaranteed by the last set of constraints. Here, $\psi_j$ denotes a real-valued measurable function on $(\Omega, \mathcal{F})$ and $b_j$ is a scalar for $j = 1,\dots,q$. Existence of the moments ensures that $|b_j| < \infty$ for all $j$. Notice that the description of the ambiguity set requires explicit knowledge of the following statistics: the support $\Omega$ and the moment bounds $b_j$ for $j = 1,\dots,q$. In the data-driven setting, the support is approximated by $\hat\Omega^k$ and the sample moments $\hat b_j^k := \frac{1}{k}\sum_{i=1}^k \psi_j(\omega^i)$ are used to define the following approximate ambiguity set:
(16)  $\hat{\mathfrak{P}}^k_{\mathrm{mom}} := \big\{ P \in \hat{\mathcal{M}}^k \ : \ P \ge 0, \ P(\hat\Omega^k) = 1, \ \mathbb{E}_P[\psi_j(\tilde{\omega})] \le \hat b_j^k \ \ j = 1,\dots,q \big\}.$
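To make the role of (16) concrete, the following sketch sets up the associated distribution separation problem (formally introduced in (27) below) as a linear program over the observed support. SciPy's `linprog` is used purely for illustration; the function name and data shapes are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def moment_separation(h, psi, b_hat):
    """Sketch of the separation problem over the approximate moment set (16):
    maximize sum_i p_i h_i  s.t.  sum_i p_i psi_j(omega^i) <= b_hat_j for all j,
    sum_i p_i = 1, p >= 0.  Inputs: h, shape (k,), recourse values at the
    observations; psi, shape (q, k), moment functions evaluated at the sample;
    b_hat, shape (q,), sample moment bounds."""
    k = len(h)
    res = linprog(
        c=-np.asarray(h),                      # maximize <=> minimize the negative
        A_ub=np.asarray(psi), b_ub=np.asarray(b_hat),
        A_eq=np.ones((1, k)), b_eq=[1.0],
        bounds=[(0, None)] * k, method="highs",
    )
    return res.x                               # maximal distribution (p_1, ..., p_k)
```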
The following result characterizes the relationship between distributions drawn from successive approximate ambiguity sets, as well as the asymptotic behavior of the sequence $\{\hat{\mathfrak{P}}^k_{\mathrm{mom}}\}$.
Proposition 2.1.
For any $P \in \hat{\mathfrak{P}}^k_{\mathrm{mom}}$, we have $\Lambda^k(P) \in \hat{\mathfrak{P}}^{k+1}_{\mathrm{mom}}$, where $\Lambda^k$ is the update mapping defined by (10). Further, suppose $\hat b_j^k \to b_j$ as $k \to \infty$ for all $j = 1,\dots,q$; then $\mathbb{H}(\hat{\mathfrak{P}}^k_{\mathrm{mom}}, \mathfrak{P}_{\mathrm{mom}}) \to 0$ as $k \to \infty$, almost surely.
Proof.
See Appendix §A. ∎
In the context of DRO, similar ambiguity sets have been studied in [8, 14], where only the first moment was considered. The form of the ambiguity set above also relates to those used in [13, 28, 33, 39], where constraints were imposed only on the mean and covariance. In the data-driven settings of [13] and [39], statistical estimates are used in constructing the approximate ambiguity set, as in the case of (16). However, the ambiguity sets in these previous works are defined over the original sample space $\Omega$, as opposed to $\hat\Omega^k$ that is used in (16). This marks a critical deviation in the way the approximate ambiguity sets are constructed.
Remark 2.1.
When moment information is available about the underlying distribution, an approximate moment-based ambiguity set with constant parameters in (16) (i.e., with $\hat b_j^k = b_j$ for all $k$) can be constructed. Such approximate ambiguity sets defined over $\hat\Omega^k$ are studied in [28]. Notice that these approximate ambiguity sets satisfy $\hat{\mathfrak{P}}^k \subseteq \hat{\mathfrak{P}}^{k+1}$ and $\hat{\mathfrak{P}}^k \subseteq \mathfrak{P}$, for all $k$.
2.1.2 Wasserstein distance-based Ambiguity Sets
We next present approximations of another class of ambiguity sets that has gained significant attention in the DRO literature, viz., the Wasserstein distance-based ambiguity sets. Consider probability distributions $P$ and $Q$, and a function $c : \Omega \times \Omega \to \mathbb{R}_+$ such that $c$ is symmetric, satisfies the triangle inequality, and $c(\omega, \omega') = 0$ whenever $\omega = \omega'$. If $\eta$ denotes a joint distribution of random vectors $\tilde{\omega}$ and $\tilde{\omega}'$ with marginals $P$ and $Q$, respectively, then the Wasserstein metric of order $p \ge 1$ is given by
(17)  $d_W^p(P, Q) := \Big( \inf_{\eta \in \mathcal{J}(P,Q)} \int_{\Omega \times \Omega} c(\omega, \omega')^p \ \eta(\mathrm{d}\omega, \mathrm{d}\omega') \Big)^{1/p},$ where $\mathcal{J}(P,Q)$ denotes the set of joint distributions with marginals $P$ and $Q$.
In the above definition, the decision variable $\eta$ can be viewed as a plan to transport goods/mass from an entity whose spatial distribution is given by the measure $P$ to another entity with spatial distribution $Q$. Therefore, $d_W^p$ measures the optimal transport cost between the measures. Notice that an arbitrary norm on $\mathbb{R}^{n_\omega}$ satisfies the requirements on the function $c$. In this paper, we will use $c(\omega, \omega') = \|\omega - \omega'\|$ as the choice of our metric, in which case we obtain the Wasserstein metric of order 1, which we denote simply by $d_W$. Using this metric, we define an ambiguity set as follows:
(18)  $\mathfrak{P}_W := \big\{ P \ : \ P \ge 0, \ P(\Omega) = 1, \ d_W(P, P^\star) \le \epsilon \big\}$
for a given $\epsilon > 0$ and a reference distribution $P^\star$. As was done in §2.1.1, we define approximate ambiguity sets over the measurable space $(\hat\Omega^k, \mathcal{F}^k)$ as follows:
(19)  $\hat{\mathfrak{P}}^k_W := \big\{ P \in \hat{\mathcal{M}}^k \ : \ P \ge 0, \ P(\hat\Omega^k) = 1, \ d_W(P, p^k) \le \epsilon \big\},$ where $p^k$ is the empirical distribution in (6).
For the above approximate ambiguity set, the distribution separation problem in (3) takes the following form:
(20)  $\begin{aligned} \max \ & \sum_{i=1}^{k} p_i \, h(x, \omega^i) \\ \text{subject to} \ & \sum_{j=1}^{k} \eta_{ij} = p_i, \quad i = 1,\dots,k, \\ & \sum_{i=1}^{k} \eta_{ij} = p^k(\omega^j), \quad j = 1,\dots,k, \\ & \sum_{i=1}^{k}\sum_{j=1}^{k} \|\omega^i - \omega^j\| \, \eta_{ij} \le \epsilon, \\ & p_i \ge 0, \ \eta_{ij} \ge 0, \quad i,j = 1,\dots,k. \end{aligned}$
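The LP in (20) can be assembled directly from the pairwise distances and the empirical weights. The sketch below is a dense transcription under the same illustrative assumptions as the moment-based sketch above; note that the marginal constraints make the normalization $\sum_i p_i = 1$ automatic.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_separation(h, dist, p_emp, eps):
    """Sketch of (20) with variables z = [p, vec(eta)]: maximize sum_i p_i h_i
    subject to the transport-plan marginals (rows of eta sum to p, columns sum
    to the empirical weights) and the budget sum_{ij} dist_{ij} eta_{ij} <= eps."""
    k = len(h)
    n = k + k * k                                   # p followed by row-major eta
    c = np.zeros(n)
    c[:k] = -np.asarray(h)                          # maximize => minimize -h^T p
    A_eq, b_eq = [], []
    for i in range(k):                              # sum_j eta_ij - p_i = 0
        row = np.zeros(n)
        row[i] = -1.0
        row[k + i * k : k + (i + 1) * k] = 1.0
        A_eq.append(row); b_eq.append(0.0)
    for j in range(k):                              # sum_i eta_ij = p_emp_j
        row = np.zeros(n)
        row[k + j : n : k] = 1.0
        A_eq.append(row); b_eq.append(p_emp[j])
    budget = np.zeros(n)                            # transport cost budget
    budget[k:] = np.asarray(dist).ravel()
    res = linprog(c, A_ub=budget[None, :], b_ub=[eps],
                  A_eq=np.asarray(A_eq), b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.x[:k]                                # maximal distribution p
```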
The following result characterizes the distributions drawn from approximate ambiguity sets of the form in (19), or equivalently (20).
Proposition 2.2.
The sequence of Wasserstein distance-based approximate ambiguity sets $\{\hat{\mathfrak{P}}^k_W\}$ satisfies the following properties: (1) for any $P \in \hat{\mathfrak{P}}^k_W$, we have $\Lambda^k(P) \in \hat{\mathfrak{P}}^{k+1}_W$, where $\Lambda^k$ is the update mapping defined by (10); and (2) $\mathbb{H}(\hat{\mathfrak{P}}^k_W, \mathfrak{P}_W) \to 0$ as $k \to \infty$, almost surely.
Proof.
See Appendix §A. ∎
The approximate ambiguity set in [26] is a ball constructed in the space of probability distributions that are defined over the sample space $\Omega$ and whose radius reduces with an increase in the number of observations. Using Wasserstein balls of shrinking radii, the authors of [26] show that the optimal value of the sequence of DRO problems converges to the optimal value of the expectation-valued SP problem in (1) associated with the true distribution. The approximate ambiguity set in (19), on the other hand, uses a constant radius $\epsilon$ for all $k$. In this regard, we consider settings where the ambiguity is not necessarily resolved with an increasing number of observations. This is reflected in the approximate ambiguity sets (16) and (19), which converge to the true ambiguity sets (13) and (18), respectively.
2.2 Approximating the Recourse Problem
Cutting plane methods for 2-SLPs use an outer linearization-based approximation of the first-stage objective function in (1). In such algorithms, the challenging aspect of computing the expectation is addressed by taking advantage of the structure of the recourse problem (4). Specifically, for a given $\omega$, the recourse value $h(\cdot,\omega)$ is known to be convex in the right-hand side parameters, which include the first-stage decision vector $x$. Additionally, if the support of $\tilde{\omega}$ is finite and (A2) holds, then the function $h(\cdot,\omega)$ is polyhedral. Under assumptions (A2) and (A4), these structural properties of convexity extend to the expected recourse value $\mathbb{E}_P[h(\cdot,\tilde{\omega})]$.
Due to the strong duality of linear programs, the recourse value $h(x,\omega)$ is also equal to the optimal value of the dual of (4), i.e.,

(21)  $h(x,\omega) = \max \ \big\{ \pi^\top \big( \xi(\omega) - C(\omega)\,x \big) \ : \ D^\top \pi \le d \big\}.$
Due to (A2) and (A3), the dual feasible region $\Pi := \{\pi : D^\top \pi \le d\}$ is a polytope that is not affected by the uncertainty. If $\Pi^V$ denotes the set of all extreme points of the polytope $\Pi$, then the recourse value can also be expressed as the pointwise maximum of affine functions computed using elements of the set $\Pi^V$, as given below.
(22)  $h(x,\omega) = \max_{\pi \in \Pi^V} \ \pi^\top \big( \xi(\omega) - C(\omega)\,x \big).$
The outer linearization approaches tend to approximate the above form of the recourse function by identifying extreme points (optimal solutions to (21)) at a sequence of candidate (or trial) solutions, and generating the corresponding affine functions. If $\pi^k$ is the optimal dual obtained by solving (21) with $(x^k, \omega)$ as input, then the affine function is obtained by computing the coefficients $\alpha = (\pi^k)^\top \xi(\omega)$ and $\beta = -C(\omega)^\top \pi^k$. Following linear programming duality, notice that this affine function $\alpha + \beta^\top x$ is a supporting hyperplane of $h(\cdot,\omega)$ at $x^k$, and it lower bounds the function at every other $x \in X$.
If the support is finite, then one can solve a dual subproblem for all $\omega \in \Omega$ with the candidate solution as input, generate the affine functions, and collate them together to obtain an approximation of the first-stage objective function. This is the essence of the L-shaped method applied to the 2-SLP in (1). In each iteration of the L-shaped method, the affine functions generated using a candidate solution and information gathered from individual observations are weighed by the probability density of the observation to update the approximate first-stage objective function. Notice that the L-shaped method can also be applied to an SAA of the first-stage objective function of the 2-SLP built using a sample of a given size. A similar approximation strategy is used in the DR L-shaped method for 2-DRLP problems.
Alternatively, we can consider the following approximation of the recourse function expressed in the form given in (22):
(23)  $h^k(x,\omega) := \max_{\pi \in \Pi^k} \ \pi^\top \big( \xi(\omega) - C(\omega)\,x \big).$
Notice that the above approximation is built using only a subset of dual feasible solutions, and therefore it satisfies $h^k(x,\omega) \le h(x,\omega)$ for all $x \in X$ and $\omega \in \Omega$. Since $0 \in \Pi$, we begin with $\Pi^0 = \{0\}$. Subsequently, we construct a sequence of sets $\{\Pi^k\}$ such that $\Pi^k \subseteq \Pi^{k+1}$, which ensures that $h^k(x,\omega) \le h^{k+1}(x,\omega)$ for all $(x,\omega)$ and all $k$. The following result from [20] captures the behavior of the sequence of approximations $\{h^k\}$.
Proposition 2.3.
The sequence $\{h^k(\cdot,\omega)\}$ converges uniformly to a continuous function on $X$ for any $\omega \in \Omega$.
Proof.
See Appendix A. ∎
The approximations of the form in (23) are one of the principal features of the SD algorithm (see [20, 21]). While the L-shaped and DR L-shaped methods require a finite support for $\tilde{\omega}$, SD is applicable even for problems with continuous support. The algorithm uses an "incremental" SAA of the first-stage objective function by adding one new observation in each iteration. Therefore, the first-stage objective function approximation used in SD is built using the recourse problem approximation in (23) and the incremental SAA. This approximation is given by:

(24)  $f^k_{\mathrm{SD}}(x) := c^\top x + \frac{1}{k} \sum_{i=1}^{k} h^k(x, \omega^i).$
The affine functions generated in SD provide an outer linearization of the approximation in (24). The sequence of sets that grow monotonically in size, viz., $\{\Pi^k\}$, is generated by adding one new vertex to the previous set $\Pi^{k-1}$ to obtain the updated set $\Pi^k$. The newly added vertex is the optimal dual solution obtained by solving (21) with the most recent observation and candidate solution as input.
We refer the reader to [9], [1, 28], and [20, 22] for a detailed exposition of the L-shaped, the DR L-shaped, and the SD methods, respectively, and note only the key differences between these methods. Firstly, the sample used in the (DR) L-shaped method is fixed prior to optimization. In SD, this sample is updated dynamically throughout the course of the algorithm. Secondly, exact subproblem optimization for all observations in the sample is performed in every iteration of the (DR) L-shaped method. On the contrary, in SD, exact optimization is used only for the subproblems corresponding to the latest observation, and an "argmax" procedure (described in the next section) is used for observations encountered in earlier iterations.
3 Distributionally Robust Stochastic Decomposition
In this paper, we focus on a setting where the ambiguity set $\mathfrak{P}$ is approximated by a sequence of ambiguity sets $\{\hat{\mathfrak{P}}^k\}$ such that the following properties are satisfied: (P1) for any $P \in \hat{\mathfrak{P}}^k$, we have $\Lambda^k(P) \in \hat{\mathfrak{P}}^{k+1}$, where $\Lambda^k$ is the update mapping defined by (10), and (P2) $\mathbb{H}(\hat{\mathfrak{P}}^k, \mathfrak{P}) \to 0$ as $k \to \infty$, almost surely. The moment-based ambiguity set and the Wasserstein distance-based ambiguity set are two families that satisfy these properties (Propositions 2.1 and 2.2, respectively). Recall that the approximate ambiguity set in an iteration (say $k$) is constructed using a finite set of observations $\hat\Omega^k$ that progressively grows in size. Note that the sequence of approximate ambiguity sets may not necessarily converge to a single distribution. In other words, we do not assume that increasing the sample size will overcome ambiguity asymptotically, as is the case in [26, 42].
The pseudocode of the DRSD method is given in Algorithm 1. We present the main steps of the DRSD method in iteration $k$ (Steps 4-31 of Algorithm 1). At the beginning of iteration $k$, we have a certain approximation of the first-stage objective function that we denote as $f^{k-1}$, a finite set of observations $\hat\Omega^{k-1}$, and an incumbent solution $\hat x^{k-1}$. We will use the term incumbent solution to refer to the best solution discovered by the algorithm until iteration $k$. The solution identified in the current iteration will be referred to as the candidate solution, denoted $x^k$.
Iteration $k$ begins by first identifying the candidate solution $x^k$ by solving the following master problem (Step 4):

(25)  $x^k \in \arg\min \Big\{ f^{k-1}(x) := c^\top x + \max_{j \in J^{k-1}} \big\{ \alpha_j + \beta_j^\top x \big\} \ : \ x \in X \Big\}.$

Following this, a new observation $\omega^k$ is realized and added to the current sample of observations to get $\hat\Omega^k = \hat\Omega^{k-1} \cup \{\omega^k\}$ (Step 5).
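For concreteness, the master problem (25) can be written as an LP in the lifted variables $(x, \theta)$, with one constraint per stored cut. The following is an illustrative sketch, assuming $X = \{x : Ax \le b\}$; it is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def solve_master(c, A, b, cuts):
    """Sketch of (25): minimize c^T x + theta over Ax <= b and
    theta >= alpha_j + beta_j^T x for every stored cut (alpha_j, beta_j).
    Assumes `cuts` contains at least the trivial cut (0.0, zeros) arising
    from Pi^0 = {0}, so theta is bounded below."""
    n = len(c)
    A_ub = [np.append(row, 0.0) for row in np.asarray(A)]   # first-stage rows
    b_ub = list(b)
    for alpha, beta in cuts:                  # beta_j^T x - theta <= -alpha_j
        A_ub.append(np.append(beta, -1.0))
        b_ub.append(-alpha)
    res = linprog(np.append(c, 1.0), A_ub=np.asarray(A_ub), b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1), method="highs")
    return res.x[:n], res.x[n]                # candidate x^k, recourse estimate
```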
In order to build the first-stage objective function approximation, we rely upon the recourse function approximation presented in Section 2.2. For the most recent observation $\omega^k$ and the candidate solution $x^k$, we evaluate the recourse function value $h(x^k, \omega^k)$ by solving (4), and we obtain the dual optimal solution $\pi^k$. Likewise, we obtain the dual optimal solution $\hat\pi^k$ by solving (4) for the incumbent solution $\hat x^{k-1}$ (Steps 7-10). These dual vectors are added to the set of previously discovered optimal dual vectors; in other words, we recursively update $\Pi^k = \Pi^{k-1} \cup \{\pi^k, \hat\pi^k\}$. For all other observations $\omega^i$ ($i = 1,\dots,k-1$), we identify a dual vector in $\Pi^k$ that provides the best lower bounding approximation at $(x^k, \omega^i)$ using the following operation (Steps 11-13):
(26)  $\pi_i^k \in \arg\max \big\{ \pi^\top \big( \xi(\omega^i) - C(\omega^i)\,x^k \big) \ : \ \pi \in \Pi^k \big\}, \qquad i = 1,\dots,k-1.$
Note that the calculations in (26) are carried out only for the previous observations, as exactly solving (4) already provides the best lower bound at $(x^k, \omega^k)$. Further, notice that
$(\pi_i^k)^\top \big( \xi(\omega^i) - C(\omega^i)\,x^k \big) = h^k(x^k, \omega^i),$
the approximate recourse function value at $(x^k, \omega^i)$ defined in (23), for all $i = 1,\dots,k-1$, and $h^k(x^k, \omega^k) = h(x^k, \omega^k)$.
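The argmax operation (26) reduces to one matrix-vector product over the stored dual vertices; a minimal sketch, in which `rhs` stands for the vector $\xi(\omega^i) - C(\omega^i)\,x^k$:

```python
import numpy as np

def argmax_dual(Pi, rhs):
    """Sketch of (26): among the dual vertices collected so far (rows of Pi),
    select the one maximizing pi^T rhs. Returns the selected vertex and the
    approximate recourse value h^k(x^k, omega^i)."""
    vals = Pi @ rhs                 # one inner product per stored vertex
    j = int(np.argmax(vals))
    return Pi[j], float(vals[j])
```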
Using $\{h^k(x^k, \omega^i)\}_{i=1}^{k}$, we solve a distribution separation problem (in Step 16):

(27)  $\max \Big\{ \sum_{i=1}^{k} p_i \, h^k(x^k, \omega^i) \ : \ (p_1,\dots,p_k) \in \hat{\mathfrak{P}}^k \Big\}.$
Let $P^k = (p_1^k, \dots, p_k^k)$ denote the optimal solution of the above problem, which we identify as the maximal (or extremal) probability distribution. Since the problem is solved over measures that are defined only over the observed set $\hat\Omega^k$, the maximal probability distribution has weights $p_i^k \ge 0$ for $\omega^i \in \hat\Omega^k$, and zero weight elsewhere. Notice that the problem in (3) differs from the distribution separation problem (27), as the latter uses the recourse function approximation $h^k$ and the approximate ambiguity set $\hat{\mathfrak{P}}^k$ as opposed to the true recourse function $h$ and ambiguity set $\mathfrak{P}$, respectively. For the ambiguity sets in §2.1, the distribution separation problem is a deterministic linear program. In general, the distribution separation problems associated with well-known ambiguity sets are deterministic convex optimization problems [27], and off-the-shelf solvers can be used to obtain the extremal distribution.
In Step 17 of Algorithm 1, we use the dual vectors $\{\pi_i^k\}_{i=1}^{k}$ and the maximal probability distribution $P^k$ to generate a lower bounding affine function:

(28)  $\sum_{i=1}^{k} p_i^k \, (\pi_i^k)^\top \big( \xi(\omega^i) - C(\omega^i)\,x \big) \ \le \ \mathbb{E}_{P^k}\big[ h^k(x, \tilde{\omega}) \big]$
for the worst-case expected recourse function measured with respect to the maximal probability distribution $P^k$, which is obtained by solving the distribution separation problem (27). We denote the coefficients of the affine function on the left-hand side of (28) by

(29)  $\alpha_k := \sum_{i=1}^{k} p_i^k \, (\pi_i^k)^\top \xi(\omega^i), \qquad \beta_k := -\sum_{i=1}^{k} p_i^k \, C(\omega^i)^\top \pi_i^k,$
and we succinctly write the affine function as $\ell^k(x) := \alpha_k + \beta_k^\top x$. Similar calculations are carried out using the incumbent solution $\hat x^{k-1}$ to identify a maximal probability distribution $\hat P^k$ and a lower bounding affine function, resulting in the incumbent affine function $\hat\ell^k(x) := \hat\alpha_k + \hat\beta_k^\top x$.
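The aggregation in (29) is a probability-weighted combination of the per-observation dual evaluations; a sketch, with all argument names illustrative:

```python
import numpy as np

def cut_coefficients(p, duals, xis, Cs):
    """Sketch of (29): combine the dual vertices pi_i chosen for each
    observation into one affine function alpha + beta^T x, weighted by the
    maximal distribution p from the separation problem (27). `duals`, `xis`,
    and `Cs` hold pi_i, xi(omega^i), and C(omega^i) per observation."""
    alpha = sum(p_i * float(pi @ xi) for p_i, pi, xi in zip(p, duals, xis))
    beta = -sum(p_i * (C.T @ pi) for p_i, pi, C in zip(p, duals, Cs))
    return alpha, beta              # cut: alpha + beta^T x
```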
While the latest affine functions provide lower bounds on the worst-case expected recourse function approximation, the affine functions generated in previous iterations are not guaranteed to lower bound it. To see this, let us consider the moment-based approximate ambiguity sets. Let $P^j \in \hat{\mathfrak{P}}^j_{\mathrm{mom}}$ be the maximal distribution identified in an iteration $j < k$ which was used to compute the affine function $\alpha_j + \beta_j^\top x$. By assigning zero weight to all new observations encountered after iteration $j$, i.e., $\omega^{j+1},\dots,\omega^k$, we can construct a probability distribution $\bar P$ on $\hat\Omega^k$. This reconstructed distribution satisfies the support and normalization constraints in (16). However, it is easy to see that it need not satisfy $\mathbb{E}_{\bar P}[\psi_r(\tilde{\omega})] \le \hat b_r^k$ for all $r = 1,\dots,q$, since the sample moments are updated with every new observation. Therefore, $\bar P \notin \hat{\mathfrak{P}}^k_{\mathrm{mom}}$ in general. In other words, while the coefficients $(\alpha_j, \beta_j)$ are $\mathcal{F}^k$-measurable, the corresponding measure is not feasible to the approximate ambiguity set $\hat{\mathfrak{P}}^k_{\mathrm{mom}}$. Therefore, $\alpha_j + \beta_j^\top x$ is not a valid lower bound to $\max_{P \in \hat{\mathfrak{P}}^k_{\mathrm{mom}}} \mathbb{E}_P[h^k(x,\tilde{\omega})]$. The arguments for the Wasserstein-based approximate ambiguity set are more involved, but the persistence of a similar issue can be demonstrated.
In order to address this, we update the previously generated affine functions $\alpha_j + \beta_j^\top x$, $j = 1,\dots,k-1$, as follows (Steps 19-22):

(30)  $\alpha_j \leftarrow \frac{k-1}{k}\,\alpha_j, \qquad \beta_j \leftarrow \frac{k-1}{k}\,\beta_j,$

so that each updated affine function provides a lower bound approximation of $\max_{P \in \hat{\mathfrak{P}}^k} \mathbb{E}_P[h^k(x,\tilde{\omega})]$ for all $x \in X$.
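Under this reading of (30), the update amounts to a single scaling of the stored coefficients; a sketch:

```python
def rescale_cuts(cuts, k):
    """Sketch of the update (30): scale previously generated cuts by
    (k-1)/k so that, together with the nonnegativity of h^k and the
    mapping in (10), they remain valid lower bounds with respect to the
    new approximate ambiguity set."""
    rho = (k - 1) / k
    return [(rho * alpha, rho * beta) for alpha, beta in cuts]
```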
Similarly, we update the affine functions associated with the incumbent solution (Step 18). Recall that for $P \in \hat{\mathfrak{P}}^{k-1}$ and $\omega \in \hat\Omega^{k-1}$, the reconstructed distribution satisfies $[\Lambda^{k-1}(P)](\omega) = \frac{k-1}{k}\,P(\omega)$ (Propositions 2.1 and 2.2), which is what makes this scaling valid. The candidate and the incumbent affine functions ($\ell^k$ and $\hat\ell^k$, respectively), as well as the updated collection of previously generated affine functions, are used to build the set of affine functions, which we denote by $J^k$ (Step 23). The lower bounding property of this first-stage objective function approximation is captured in the following result.
Theorem 3.1. Suppose (A1)-(A4) hold. Then, for every $k \ge 1$, each affine function in the collection $J^k$ satisfies $\alpha_j + \beta_j^\top x \le \max_{P \in \hat{\mathfrak{P}}^k} \mathbb{E}_P[h^k(x, \tilde{\omega})]$ for all $x \in X$ and $j \in J^k$.
Proof.
For a non-empty approximate ambiguity set $\hat{\mathfrak{P}}^1$ of the ambiguity set $\mathfrak{P}$, the construction of the affine function ensures that the claim holds for $k = 1$. Assume that $\alpha_j + \beta_j^\top x \le \max_{P \in \hat{\mathfrak{P}}^{k-1}} \mathbb{E}_P[h^{k-1}(x,\tilde{\omega})]$ for all $j \in J^{k-1}$ and $x \in X$. The maximal nature of the probability distribution $Q(x) \in \arg\max_{P \in \hat{\mathfrak{P}}^{k-1}} \mathbb{E}_P[h^{k-1}(x,\tilde{\omega})]$ satisfies:
$\mathbb{E}_{Q(x)}\big[h^{k-1}(x,\tilde{\omega})\big] \ \ge \ \alpha_j + \beta_j^\top x \qquad \forall\, j \in J^{k-1}.$
Using the above and the monotone property of the approximate recourse function ($h^k \ge h^{k-1}$), we have

(31)  $\mathbb{E}_{Q(x)}\big[h^{k}(x,\tilde{\omega})\big] \ \ge \ \alpha_j + \beta_j^\top x$

for all $x \in X$. Based on the properties of $\hat{\mathfrak{P}}^{k-1}$ and $\hat{\mathfrak{P}}^{k}$ (Propositions 2.1 and 2.2), we know that for every $P \in \hat{\mathfrak{P}}^{k-1}$ we can construct a probability distribution in $\hat{\mathfrak{P}}^{k}$ using the mapping $\Lambda^{k-1}$ defined by (10). Considering the probability distribution $\bar P = \Lambda^{k-1}(Q(x)) \in \hat{\mathfrak{P}}^{k}$, the inequality (31) reduces to
$\max_{P \in \hat{\mathfrak{P}}^{k}} \mathbb{E}_P\big[h^k(x,\tilde{\omega})\big] \ \ge \ \mathbb{E}_{\bar P}\big[h^k(x,\tilde{\omega})\big] \ = \ \frac{k-1}{k}\,\mathbb{E}_{Q(x)}\big[h^k(x,\tilde{\omega})\big] + \frac{1}{k}\, h^k(x, \omega^k) \ \ge \ \frac{k-1}{k}\,\mathbb{E}_{Q(x)}\big[h^k(x,\tilde{\omega})\big].$
The last inequality is due to assumption (A2), i.e., $h^k(x,\omega) \ge 0$, following the construction of the recourse function approximation described in §2.2. Since $\alpha_j + \beta_j^\top x$ lower bounds the term in the bracket, we have
$\max_{P \in \hat{\mathfrak{P}}^{k}} \mathbb{E}_P\big[h^k(x,\tilde{\omega})\big] \ \ge \ \frac{k-1}{k}\big( \alpha_j + \beta_j^\top x \big) \qquad \forall\, j \in J^{k-1},$
which is precisely the updated affine function in (30). Using the same arguments for all $j \in J^{k-1}$, and the fact that the candidate and incumbent affine functions $\ell^k$ and $\hat\ell^k$ are constructed as lower bounds to $\max_{P \in \hat{\mathfrak{P}}^{k}} \mathbb{E}_P[h^k(\cdot,\tilde{\omega})]$, the claim follows. This completes the proof by induction. ∎
Using the collection of affine functions in $J^k$, we update the approximation of the first-stage objective function in Step 24, as follows:

(32)  $f^k(x) := c^\top x + \max_{j \in J^k} \big\{ \alpha_j + \beta_j^\top x \big\}.$
Once the approximation is updated, the performance of the candidate solution is compared with that of the incumbent solution (Steps 25-32). This comparison is performed by verifying whether the following inequality

(33)  $f^k(x^k) - f^k(\hat x^{k-1}) \ \le \ \gamma\, \big[ f^{k-1}(x^k) - f^{k-1}(\hat x^{k-1}) \big],$

where the parameter $\gamma \in (0,1)$, is satisfied. If so, the candidate solution is designated to be the next incumbent solution, i.e., $\hat x^k = x^k$. If the inequality is not satisfied, the previous incumbent solution is retained, i.e., $\hat x^k = \hat x^{k-1}$. This completes an iteration of the DRSD method.
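The incumbent test (33) compares the predicted decrease under the new and old approximations; a sketch with an illustrative default for $\gamma$ (not the authors' setting):

```python
def accept_candidate(f_k_cand, f_k_inc, f_prev_cand, f_prev_inc, gamma=0.2):
    """Sketch of the incumbent rule (33): the candidate becomes the new
    incumbent if the updated approximation f^k retains at least a gamma
    fraction of the decrease predicted by f^{k-1}. The f_* arguments are
    evaluations of f^k and f^{k-1} at the candidate and incumbent."""
    return (f_k_cand - f_k_inc) <= gamma * (f_prev_cand - f_prev_inc)
```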
Remark 3.1.
The algorithm design can be extended to incorporate 2-DRLPs where the relatively complete recourse assumption in (A2) and/or assumption (A3) is not satisfied. For problems where the relatively complete recourse condition is not met, a candidate solution may lead to one or more infeasible subproblems. In this case, the dual extreme rays can be used to compute a feasibility cut that is included in the first-stage approximation. The argmax procedure in (26) is only valid when assumption (A3) is satisfied. In problems where the uncertainty also affects the cost coefficients, the argmax procedure presented in [17] can be utilized. These algorithmic enhancements can be incorporated without affecting the convergence properties of DRSD that we present in the next section.
4 Convergence Analysis
In this section, we provide the convergence results for the sequential sampling-based approach to solve DRO problems. In order to facilitate the exposition of our theoretical results, we will define certain quantities for notational convenience that are not necessarily computed during the course of the algorithm in the form presented in the previous section. Our convergence results are built upon the stability analyses presented in [39] and the convergence analysis of the SD algorithm in [20].
We define a function over the approximate ambiguity set $\hat{\mathfrak{P}}^k$ using the true recourse function $h$, that is,

(34)  $\hat f^k(x) := c^\top x + \max_{P \in \hat{\mathfrak{P}}^k} \mathbb{E}_P\big[ h(x, \tilde{\omega}) \big]$
for a fixed $x \in X$. We begin by analyzing the behavior of the sequence $\{\hat f^k\}$ as $k \to \infty$. In particular, we will assess the sequence of function evaluations at a converging subsequence of first-stage solutions. The result is captured in the following proposition.
Proposition 4.1.
Suppose $\{x^{k_n}\}$ denotes a subsequence of $\{x^k\}$ such that $x^{k_n} \to \bar x$; then $\lim_{n \to \infty} \hat f^{k_n}(x^{k_n}) = f(\bar x)$, with probability one.
Proof.
Consider the ambiguity set $\mathfrak{P}$. For $x, x' \in X$, let $P_x \in \arg\max_{P \in \mathfrak{P}} \mathbb{E}_P[h(x,\tilde{\omega})]$. Then,
$\max_{P \in \mathfrak{P}} \mathbb{E}_P[h(x,\tilde{\omega})] - \max_{P \in \mathfrak{P}} \mathbb{E}_P[h(x',\tilde{\omega})] \ \le \ \mathbb{E}_{P_x}[h(x,\tilde{\omega})] - \mathbb{E}_{P_x}[h(x',\tilde{\omega})].$
The inequality in the above follows from the optimality of $P_x$. The above implies that

(35)  $\max_{P \in \mathfrak{P}} \mathbb{E}_P[h(x,\tilde{\omega})] - \max_{P \in \mathfrak{P}} \mathbb{E}_P[h(x',\tilde{\omega})] \ \le \ \mathbb{E}_{P_x}\big[ |h(x,\tilde{\omega}) - h(x',\tilde{\omega})| \big] \ \le \ L\,\|x - x'\|.$

The second relationship is due to the triangle inequality. The third inequality follows from the uniform Lipschitz continuity of the recourse function $h(\cdot,\omega)$, with probability one, under assumption (A2), which implies that there exists a constant $L > 0$ such that $|h(x,\omega) - h(x',\omega)| \le L\,\|x - x'\|$ for any $x, x' \in X$. Therefore, the function $x \mapsto \max_{P \in \mathfrak{P}} \mathbb{E}_P[h(x,\tilde{\omega})]$ is equi-continuous on $X$. Starting with $P_{x'}$ and using the same arguments, we have

(36)  $\max_{P \in \mathfrak{P}} \mathbb{E}_P[h(x',\tilde{\omega})] - \max_{P \in \mathfrak{P}} \mathbb{E}_P[h(x,\tilde{\omega})] \ \le \ L\,\|x - x'\|.$

Now consider the ambiguity sets $\hat{\mathfrak{P}}^{k_n}$ and $\mathfrak{P}$. Note that
$\Big| \max_{P \in \hat{\mathfrak{P}}^{k_n}} \mathbb{E}_P[h(x,\tilde{\omega})] - \max_{P \in \mathfrak{P}} \mathbb{E}_P[h(x,\tilde{\omega})] \Big| \ \le \ \sup_{g \in \mathcal{G}} \Big| \max_{P \in \hat{\mathfrak{P}}^{k_n}} \mathbb{E}_P[g(\tilde{\omega})] - \max_{P \in \mathfrak{P}} \mathbb{E}_P[g(\tilde{\omega})] \Big|$
for a class of functions $\mathcal{G}$ containing $h(x,\cdot)$. Using the definition of the deviation between the ambiguity sets $\hat{\mathfrak{P}}^{k_n}$ and $\mathfrak{P}$ as well as its relationship with the Hausdorff distance, we have

(37)  $\big| \hat f^{k_n}(x) - f(x) \big| \ \le \ \mathbb{H}\big( \hat{\mathfrak{P}}^{k_n}, \mathfrak{P} \big) \qquad \forall\, x \in X.$

For $x^{k_n}$ and $\bar x$, combining (35), (36), and (37), we have
$\big| \hat f^{k_n}(x^{k_n}) - f(\bar x) \big| \ \le \ L\,\|x^{k_n} - \bar x\| + \mathbb{H}\big( \hat{\mathfrak{P}}^{k_n}, \mathfrak{P} \big).$
As $n \to \infty$, the family of ambiguity sets considered satisfies $\mathbb{H}(\hat{\mathfrak{P}}^{k_n}, \mathfrak{P}) \to 0$, almost surely, and the hypothesis informs us that $\|x^{k_n} - \bar x\| \to 0$. Therefore, we conclude that $\hat f^{k_n}(x^{k_n}) \to f(\bar x)$ as $n \to \infty$. ∎
Notice that the behavior of the approximate ambiguity sets defined in §2.1, in particular the condition $\mathbb{H}(\hat{\mathfrak{P}}^k, \mathfrak{P}) \to 0$ as $k \to \infty$, plays a central role in the above proof. Recall that for the moment- and Wasserstein distance-based ambiguity sets, this condition is established in Propositions 2.1 and 2.2, respectively. It is also worthwhile to note that, under the foregoing conditions, (37) also implies uniform convergence of the sequence $\{\hat f^k\}$ to $f$, with probability one.
The above result applies to any algorithm that generates a converging sequence of iterates and a corresponding sequence of extremal distributions. Such an algorithm is guaranteed to exhibit convergence to the optimal distributionally robust objective function value. Therefore, this result is applicable to the sequence of instances constructed using external sampling and solved, for example, using reformulation-based methods. Such an approach was adopted in [28] and [39]. The analysis in [28] relies upon two rather restrictive assumptions. The first assumption is that for every $P \in \mathfrak{P}$ there exists a sequence of measures $\{P^k\}$ with $P^k \in \hat{\mathfrak{P}}^k$ such that $P^k$ converges weakly to $P$. The second assumption requires the approximate ambiguity sets to be strict subsets of the true ambiguity set, i.e., $\hat{\mathfrak{P}}^k \subset \mathfrak{P}$. Both of these assumptions are very difficult to satisfy in a data-driven setting (also see Remark 2.1).
The analysis in [39], on the other hand, does not make the above assumptions. Therefore, their analysis is more broadly applicable in settings where external sampling is used to generate $\hat\Omega^k$. DRO instances are constructed based on statistics estimated using $\hat\Omega^k$ and solved to optimality for each $k$. They show the convergence of the optimal objective function values and optimal solution sets of the approximate problems to the optimal objective function value and solutions of the true DRO problem, respectively. In this regard, the result in Proposition 4.1 can alternatively be derived using Theorem 1(i) in [39]. While the function $\hat f^k$ is not computed during the course of the sequential sampling algorithm, it provides a necessary benchmark for our convergence analysis.
One of the main points of deviation in our analysis stems from the fact that we use objective function approximations that are built based on the approximate recourse function in (23). In order to study the piecewise affine approximation of the first-stage objective function, we introduce another benchmark function

(38)  $\bar f^k(x) := c^\top x + \max_{P \in \hat{\mathfrak{P}}^k} \mathbb{E}_P\big[ h^k(x, \tilde{\omega}) \big].$
Notice that the above function uses the approximation of the ambiguity set (as in the case of (34)) as well as the approximation of the recourse function. This construction ensures that $\bar f^k(x) \le \hat f^k(x)$ for all $x \in X$ and $k$, which follows from the fact that $h^k(x,\omega) \le h(x,\omega)$, almost surely. Further, the result in Theorem 3.1 ensures that $f^k(x) \le \bar f^k(x)$. Putting these together, we obtain the following relationship:

(39)  $f^k(x) \ \le \ \bar f^k(x) \ \le \ \hat f^k(x) \qquad \forall\, x \in X.$
While the previous proposition was focused on the upper bound in the above relationship, we present the asymptotic behavior of the sequence $\{f^k\}$ in the following results.
Lemma 4.2.
Suppose $\{x^{k_n}\}$ denotes a subsequence of $\{x^k\}$ such that $x^{k_n} \to \bar x$; then $\lim_{n \to \infty} f^{k_n}(x^{k_n}) = f(\bar x)$, with probability one.
Proof.
From Proposition 4.1, we have $\hat f^{k_n}(x^{k_n}) \to f(\bar x)$, with probability one. Therefore, for any $\epsilon > 0$ there exists $N_1 < \infty$ such that

(40)  $\big| \hat f^{k_n}(x^{k_n}) - f(\bar x) \big| < \epsilon/2 \qquad \forall\, n > N_1.$

Now consider,
$\big| \bar f^{k_n}(x^{k_n}) - \hat f^{k_n}(x^{k_n}) \big| \ \le \ \max_{P \in \hat{\mathfrak{P}}^{k_n}} \mathbb{E}_P\big[ \big| h^{k_n}(x^{k_n},\tilde{\omega}) - h(x^{k_n},\tilde{\omega}) \big| \big] \ = \ \max_{P \in \hat{\mathfrak{P}}^{k_n}} \mathbb{E}_P\big[ h(x^{k_n},\tilde{\omega}) - h^{k_n}(x^{k_n},\tilde{\omega}) \big].$
The last equality follows from the fact that $h^{k_n}(x,\omega) \le h(x,\omega)$ for all $x \in X$ and $\omega \in \Omega$, almost surely. Moreover, because of the uniform convergence of $\{h^k\}$ (Proposition 2.3), the sequence of approximate functions $\{\bar f^k\}$ also converges uniformly. This implies that there exists $N_2 < \infty$ such that

(41)  $\big| \bar f^{k_n}(x^{k_n}) - \hat f^{k_n}(x^{k_n}) \big| < \epsilon/2 \qquad \forall\, n > N_2.$

Let $N = \max\{N_1, N_2\}$. Using (40) and (41), we have for all $n > N$
$\big| \bar f^{k_n}(x^{k_n}) - f(\bar x) \big| \ \le \ \big| \bar f^{k_n}(x^{k_n}) - \hat f^{k_n}(x^{k_n}) \big| + \big| \hat f^{k_n}(x^{k_n}) - f(\bar x) \big| \ < \ \epsilon.$
This implies that $\bar f^{k_n}(x^{k_n}) \to f(\bar x)$ as $n \to \infty$. Based on (26), we have $(\pi_i^{k_n})^\top(\xi(\omega^i) - C(\omega^i)x^{k_n}) = h^{k_n}(x^{k_n}, \omega^i)$ for all $i = 1,\dots,k_n$. Let
$\ell^{k_n}(x) := \sum_{i=1}^{k_n} p_i^{k_n}\, (\pi_i^{k_n})^\top \big( \xi(\omega^i) - C(\omega^i)\,x \big),$
where $P^{k_n} = (p_i^{k_n})$ is an optimal solution of the distribution separation problem (27) where the index $k$ is replaced by $k_n$. Then, the affine function $c^\top x + \ell^{k_n}(x)$ provides a lower bound approximation for the function $\bar f^{k_n}$, i.e.,
$c^\top x + \ell^{k_n}(x) \ \le \ \bar f^{k_n}(x) \qquad \forall\, x \in X,$
with equality holding at $x = x^{k_n}$. Therefore, using the definition of $f^{k_n}$ in (32) and the relationship (39), we have $\bar f^{k_n}(x^{k_n}) = c^\top x^{k_n} + \ell^{k_n}(x^{k_n}) \le f^{k_n}(x^{k_n}) \le \bar f^{k_n}(x^{k_n})$, and hence $f^{k_n}(x^{k_n}) \to f(\bar x)$, almost surely. This completes the proof. ∎
The above result characterizes the behavior of the sequence of affine functions generated during the course of the algorithm. In particular, the sequence accumulates at the objective value of the original DRO problem (2). Recall that the candidate solution $x^k$ is a minimizer of $f^{k-1}$ and an affine function is generated at this point such that the approximation is tight at $x^k$. However, due to the update procedure in (30), the quality of this estimate gradually diminishes, leading to a larger gap as the iteration count increases. This emphasizes the role of the incumbent solution and of computing the incumbent affine function during the course of the algorithm. By updating the incumbent solution and frequently re-evaluating the affine functions at the incumbent solution, we can ensure that the approximation is "sufficiently good" in the neighborhood of the incumbent solution. In order to assess the improvement in approximation quality, we define

(42)  $\delta^k := f^k(x^{k+1}) - f^k(\hat x^{k}) \ \le \ 0.$

The inequality follows from the optimality of $x^{k+1}$ with respect to the objective function $f^k$. The quantity $\delta^k$ measures the error in the objective function estimate at the candidate solution with respect to the estimate at the current incumbent solution. The following result captures the asymptotic behavior of this error term.
Lemma 4.3.
Let $\mathcal{K}$ denote the sequence of iterations where the incumbent solution changes. There exists a subsequence of iterations, denoted as $\mathcal{K}'$, such that $\lim_{k \in \mathcal{K}'} \delta^k = 0$, with probability one.
Proof.
We will consider two cases depending on whether the set $\mathcal{K}$ is finite or not. First, suppose that $\mathcal{K}$ is not finite and let $k_1 < k_2 < \cdots$ enumerate its elements. By the incumbent update rule and (42), for every $k_i \in \mathcal{K}$ we have
$f^{k_i}(\hat x^{k_i}) - f^{k_i}(\hat x^{k_i - 1}) \ \le \ \gamma \big( f^{k_i - 1}(x^{k_i}) - f^{k_i - 1}(\hat x^{k_i - 1}) \big) \ = \ \gamma\, \delta^{k_i - 1} \ \le \ 0.$
Subsequently, we have $f^{k_i}(\hat x^{k_i}) \le f^{k_i}(\hat x^{k_{i-1}}) + \gamma\, \delta^{k_i - 1}$, since $\hat x^{k_i - 1} = \hat x^{k_{i-1}}$ and $\delta^{k_i - 1} \le 0$. The left-hand side of the above inequality captures the improvement in the objective function value at the current incumbent solution over the previous incumbent solution. Using the above, we can write the average improvement attained over $m$ incumbent changes as
$\frac{\gamma}{m} \sum_{i=1}^{m} \delta^{k_i - 1} \ \ge \ \frac{1}{m} \underbrace{\big( f^{k_m}(\hat x^{k_m}) - f^{k_1}(\hat x^{k_0}) \big)}_{(a)} \ + \ \frac{1}{m} \underbrace{\sum_{i=1}^{m-1} \big( f^{k_i}(\hat x^{k_i}) - f^{k_{i+1}}(\hat x^{k_i}) \big)}_{(b)}.$
Under the assumption that the dual feasible region is non-empty and bounded (this is ensured by relatively complete recourse, (A2)), $\{f^k\}$ is a sequence of Lipschitz continuous functions. This, along with the compactness of $X$ (A1), implies that $\{f^k\}$ is bounded. Hence, the term (a), scaled by $1/m$, reduces to zero as $m \to \infty$. The term (b), scaled by $1/m$, converges to zero, with probability one, due to the convergence of the sequence of approximations at the incumbent solutions. Since $\delta^k \le 0$ for all $k$, we have
$\lim_{m \to \infty} \frac{1}{m} \sum_{i=1}^{m} \delta^{k_i - 1} = 0$
with probability one. Further, an average of non-positive terms can vanish only if a subsequence of the terms vanishes. Thus, there exists a subsequence indexed by the set $\mathcal{K}'$ such that $\lim_{k \in \mathcal{K}'} \delta^k = 0$, with probability one.
Now if $\mathcal{K}$ is finite, then there exists $K < \infty$ and $\hat x$ such that for all $k \ge K$, we have $\hat x^k = \hat x$. Notice that, if $x^{k_n} \to \bar x$ along a subsequence, the uniform convergence of the sequence of approximations and Lemma 4.2 ensure that

(43a)  $\lim_{n \to \infty} \big( f^{k_n}(x^{k_n}) - f^{k_n - 1}(x^{k_n}) \big) = 0,$
(43b)  $\lim_{n \to \infty} \big( f^{k_n}(\hat x) - f^{k_n - 1}(\hat x) \big) = 0.$

Further, since the incumbent is not updated in iterations $k \ge K$, we must have from the update rule in (33) that
$f^{k_n}(x^{k_n}) - f^{k_n}(\hat x) \ > \ \gamma \big( f^{k_n - 1}(x^{k_n}) - f^{k_n - 1}(\hat x) \big) \ = \ \gamma\, \delta^{k_n - 1}.$
Using (43), we have
$\delta^{k_n - 1} + o(1) \ > \ \gamma\, \delta^{k_n - 1},$
which implies $(1 - \gamma)\, \liminf_{n \to \infty} \delta^{k_n - 1} \ge 0$. Since $\gamma \in (0,1)$ and $\delta^k \le 0$ for all $k$, we must have $\lim_{n \to \infty} \delta^{k_n - 1} = 0$. Hence, $\lim_{k \in \mathcal{K}'} \delta^k = 0$, with probability one. ∎
Equipped with the results in Lemmas 4.2 and 4.3, we state the main theorem, which establishes the existence of a subsequence of the incumbent solution sequence for which every accumulation point is an optimal solution to (2).
Theorem 4.4.
Let $\{x^k\}$ and $\{\hat x^k\}$ be the sequences of candidate and incumbent solutions generated by the algorithm. There exists a subsequence $\{\hat x^k\}_{k \in \mathcal{K}'}$ for which every accumulation point is an optimal solution of the 2-DRLP problem (2), with probability one.
Proof.
Let $x^\star$ be an optimal solution of (2). Consider a subsequence indexed by $\mathcal{K}'' \subseteq \mathcal{K}'$ such that $\hat x^k \to \hat x$ for $k \in \mathcal{K}''$. The compactness of $X$ ensures the existence of such an accumulation point, and therefore,

(44)  $\lim_{k \in \mathcal{K}''} f^k(\hat x^k) = f(\hat x)$, with probability one,

by Lemma 4.2. From Theorem 3.1, we have
$f^k(x) \ \le \ \bar f^k(x) \ \le \ \hat f^k(x) \qquad \forall\, x \in X.$
Thus, using the uniform convergence of $\{\hat f^k\}$ (Proposition 4.1), we have

(45)  $\limsup_{k \in \mathcal{K}''} f^k(x) \ \le \ f(x) \qquad \forall\, x \in X,$

for all subsequences indexed by $\mathcal{K}''$, with probability one. Recall that,
$\delta^k \ = \ f^k(x^{k+1}) - f^k(\hat x^k) \ \le \ f^k(x^\star) - f^k(\hat x^k).$
The inequality in the above follows from the optimality of $x^{k+1}$ with respect to $f^k$. Taking the limit over $k \in \mathcal{K}''$, we have
$\lim_{k \in \mathcal{K}''} \delta^k \ \le \ \limsup_{k \in \mathcal{K}''} f^k(x^\star) - \lim_{k \in \mathcal{K}''} f^k(\hat x^k) \ \le \ f(x^\star) - f(\hat x).$
The last inequality follows from (45) and (44). From Lemma 4.3, there exists a subsequence indexed by $\mathcal{K}'$ such that $\lim_{k \in \mathcal{K}'} \delta^k = 0$. Therefore, since $\mathcal{K}'' \subseteq \mathcal{K}'$, we have
$0 \ \le \ f(x^\star) - f(\hat x).$
Using the above inequality and the optimality of $x^\star$, we conclude that $\hat x$ is an optimal solution with probability one. ∎
5 Computational Experiment
In this section, we evaluate the effectiveness and efficiency of the DRSD method in solving 2-DRLP problems. For our preliminary experiments, we consider 2-DRLP problems with a moment-based ambiguity set defined using the first two moments. We report results from the computational experiments conducted on four well-known SP test problems: CEP, PGP, BAA, and STORM. The supports of these test problems range from modest in size (PGP and CEP) to very large (BAA and STORM).
We use an external sampling-based approach as a benchmark for comparison. The external sampling-based instances are constructed by building approximate problems of the form (34) with a pre-determined number of observations (which need not be unique). The resulting instances are solved using the DR L-shaped method. Specifically, we compare the solution quality provided by these methods along with the solution time.
We conduct multiple independent replications for each problem instance with different seeds for the random number generator. The algorithms are implemented in the C programming language, and the experiments are conducted on a 64-bit machine with an Intel Core i7-4770 CPU. All linear programs, i.e., the master problem, subproblems, and distribution separation problem, are solved using CPLEX 12.10 callable subroutines. The results from the experiments are presented in Tables 1 and 2. The results for the instances with a finite sample size $N$ obtained from the DR L-shaped method are labeled DRLS-$N$, where $N$ denotes the number of observations used to approximate the ambiguity set. The DRSD method is run for a minimum of 256 iterations, while no such limit is imposed on the DR L-shaped method.
Method | # Iterations | Objective Estimate | # Unique Obs. |
PGP | |||
DRLS-100 | () | () | () |
DRLS-250 | () | () | () |
DRLS-500 | () | () | () |
DRLS-1000 | () | () | () |
DRLS-2000 | () | () | () |
DRSD | () | () | () |
CEP | |||
DRLS-100 | () | () | () |
DRLS-250 | () | () | () |
DRLS-500 | () | () | () |
DRLS-1000 | () | () | () |
DRLS-2000 | () | () | () |
DRSD | () | () | () |
BAA | |||
DRLS-100 | () | () | () |
DRLS-250 | () | () | () |
DRLS-500 | () | () | () |
DRLS-1000 | () | () | () |
DRLS-2000 | () | () | () |
DRSD | () | () | () |
STORM | |||
DRLS-100 | () | () | () |
DRLS-250 | () | () | () |
DRLS-500 | () | () | () |
DRLS-1000 | () | () | () |
DRLS-2000 | () | () | () |
DRSD | () | () | () |
Table 1 shows the average number of iterations, the average objective function value, and the average number of unique observations in Columns 2, 3, and 4, respectively. The values in the parentheses are the half-widths of the corresponding confidence intervals. Notice that for DRSD, the number of iterations is also equal to the number of observations used to approximate the ambiguity set. The results show the ability of DRSD to dynamically determine the number of observations by assessing the progress made during the algorithm. The objective function estimate obtained using the DRSD is comparable to the objective function estimate obtained using the DR L-shaped method with a sample of comparable size. For instance, the DRSD objective function estimate for STORM is within a small margin of the objective function value estimate of DRLS-500. These results show that the optimal objective function estimates obtained from DRSD are comparable to those obtained using an external sampling-based approach.
Method | Total | Master | Subproblem | Optimality | Argmax | Separation |
PGP | ||||||
DRLS-100 | ||||||
DRLS-250 | ||||||
DRLS-500 | ||||||
DRLS-1000 | ||||||
DRLS-2000 | ||||||
DRSD | ||||||
CEP | ||||||
DRLS-100 | ||||||
DRLS-250 | ||||||
DRLS-500 | ||||||
DRLS-1000 | ||||||
DRLS-2000 | ||||||
DRSD | ||||||
BAA | ||||||
DRLS-100 | ||||||
DRLS-250 | ||||||
DRLS-500 | ||||||
DRLS-1000 | ||||||
DRLS-2000 | ||||||
DRSD | ||||||
STORM | ||||||
DRLS-100 | ||||||
DRLS-250 | ||||||
DRLS-500 | ||||||
DRLS-1000 | ||||||
DRLS-2000 | ||||||
DRSD |
Table 2 shows the average total computational time (Column 2) for each instance. The table also includes the average time spent to solve the master problem (first-stage approximate problem) and the subproblems, to verify the optimality conditions, to complete the argmax procedure (only for DRSD), and to solve the distribution separation problem (Columns 3-7, respectively). The results for the small-scale instances (PGP and CEP) show that both DRSD and the DR L-shaped method take a fraction of a second, but the computational time for DRSD is higher than that of the DR L-shaped method for all $N$. This behavior can be attributed to the facts that (i) the subproblems are relatively easy to solve and the computational effort to solve all the subproblems does not increase significantly with $N$, and (ii) the DRSD is run for a minimum number of iterations (256), thereby contributing to the total time taken to solve master problems and distribution separation problems. This observation is in line with our computational experience with the SD method for 2-SLPs. It is important to note that, while the computational time for the DR L-shaped method on an individual instance may be lower, the iterative procedure necessary to identify a sufficient sample size may require solving several instances with an increasing sample size. This may result in a significantly higher cumulative computational time. The DRSD method, and the sequential sampling idea in general, mitigates the need for this iterative process.
On the other hand, for the large-scale problems (BAA and STORM), we observe a noticeable increase in the computational time for the DR L-shaped method with an increase in $N$. A significant portion of this time is spent on solving the subproblems. Since the DRSD solves only two subproblems in each iteration, the time taken to solve the subproblems is significantly lower in comparison to the DR L-shaped method, where all subproblems corresponding to unique observations are solved in each iteration. Notice that for STORM, the average number of iterations taken by DRSD is several times the average number of iterations taken by the DR L-shaped method for each $N$. This is reflected in the significant increase in the computational time for solving the master problems and the distribution separation problems. In contrast, for BAA, the average number of DRSD iterations is only modestly higher than the number of DR L-shaped iterations. As a result, the increase in the computational times due to an increase in the master and distribution separation problem solution times is overshadowed by the computational gains attained by solving only two subproblems in each iteration. Moreover, the computational time associated with solving the distribution separation problem can be reduced by using column-generation procedures that take advantage of the problem structure. Such an implementation was not undertaken for our current experiments and is a fruitful future research avenue.
6 Conclusions
We presented a new decomposition approach for solving two-stage distributionally robust linear programs (2-DRLPs) with a general ambiguity set that is defined using continuous and/or discrete probability distributions with a very large sample space. Since this approach extends the stochastic decomposition approach of Higle and Sen [20], which corresponds to a 2-DRLP with a singleton ambiguity set, we referred to it as the distributionally robust stochastic decomposition (DRSD) method. The DRSD method is a sequential sampling-based approach that allows sampling within the optimization step, where only two second-stage subproblems are solved in each iteration. In contrast, an external sampling procedure utilizes the distributionally robust L-shaped method [1] for solving 2-DRLP with a finite number of scenarios, where all subproblems are solved in each iteration. While the design of DRSD accommodates general ambiguity sets, we provided its asymptotic convergence analysis for a family of ambiguity sets that includes the well-known moment-based and Wasserstein metric-based ambiguity sets. Furthermore, we performed computational experiments to evaluate the efficiency and effectiveness of the method in solving distributionally robust variants of four well-known stochastic programming test problems with supports of widely varying sizes. Based on our results, we observed that the objective function estimates obtained using the DRSD and the DR L-shaped method are statistically comparable. These DRSD estimates are obtained while providing computational improvements that are critical for large-scale problem instances.
Appendix A Proofs
In this appendix, we provide the proofs for the propositions related to the asymptotic behavior of the ambiguity set approximations defined in §2.1 and the recourse function approximation presented in §2.2.
Proof.
(Proposition 2.1) For $P \in \hat{\mathfrak{P}}^k_{\mathrm{mom}}$, it is easy to verify that $\Lambda^k(P)$ satisfies the support constraint, viz., $[\Lambda^k(P)](\hat\Omega^{k+1}) = 1$. Now consider, for $j = 1,\dots,q$, we have
$\mathbb{E}_{\Lambda^k(P)}\big[\psi_j(\tilde{\omega})\big] \ = \ \frac{k}{k+1}\,\mathbb{E}_P\big[\psi_j(\tilde{\omega})\big] + \frac{1}{k+1}\,\psi_j(\omega^{k+1}) \ \le \ \frac{k}{k+1}\,\hat b_j^k + \frac{1}{k+1}\,\psi_j(\omega^{k+1}) \ = \ \hat b_j^{k+1}.$
This implies that $\Lambda^k(P) \in \hat{\mathfrak{P}}^{k+1}_{\mathrm{mom}}$.
Using Proposition 4 in [39], there exists a positive constant $L$ such that
$\mathbb{H}\big( \hat{\mathfrak{P}}^k_{\mathrm{mom}}, \mathfrak{P}_{\mathrm{mom}} \big) \ \le \ L\, \|\hat b^k - b\|.$
Here, $\hat b^k = (\hat b_1^k, \dots, \hat b_q^k)$ and $b = (b_1, \dots, b_q)$, and $\|\cdot\|$ denotes the Euclidean norm. Since the approximate ambiguity sets are constructed using independent and identically distributed samples of $\tilde{\omega}$, using the law of large numbers, we have $\hat b_j^k \to b_j$, almost surely, for all $j = 1,\dots,q$. This completes the proof. ∎
Proof.
(Proposition 2.2) Consider approximate ambiguity sets $\hat{\mathfrak{P}}^{k}_{W}$ and $\hat{\mathfrak{P}}^{k+1}_{W}$ of the form given in (20). Let $P = (p_1,\dots,p_k) \in \hat{\mathfrak{P}}^{k}_{W}$, and let the reconstructed probability distribution be denoted by $\bar P = \Lambda^k(P)$. We can easily check that $\bar P$ is indeed a probability distribution. With $\bar P$ fixed, it suffices now to show that the polyhedron

(49)  $\mathcal{E} := \Big\{ \eta \ge 0 \ : \ \sum_{j} \eta_{ij} = \bar p_i \ \ \forall i; \ \ \sum_{i} \eta_{ij} = p^{k+1}(\omega^j) \ \ \forall j; \ \ \sum_{i}\sum_{j} \|\omega^i - \omega^j\|\,\eta_{ij} \le \epsilon \Big\}$

is non-empty. Since $P \in \hat{\mathfrak{P}}^{k}_{W}$, there exist $\eta^0_{ij} \ge 0$ for all $i,j = 1,\dots,k$ such that the constraints in the description of the approximate ambiguity set in (20) are satisfied. We will do this by analyzing two possibilities.

1. We encounter a previously seen observation, i.e., $\omega^{k+1} \in \hat\Omega^k$ with $\omega^{k+1} = \omega^\ell$ for some $\ell \le k$. Let $\eta_{ij} = \frac{k}{k+1}\,\eta^0_{ij}$ for $i,j \le k$; $\eta_{(k+1)\ell} = \frac{1}{k+1}$; and all remaining entries equal to zero. We will verify the feasibility of this choice by verifying the three sets of constraints in (49). The row sums satisfy $\sum_j \eta_{ij} = \frac{k}{k+1}\,p_i = \bar p_i$ for $i \le k$, and $\sum_j \eta_{(k+1)j} = \frac{1}{k+1} = \bar p_{k+1}$. The column sums satisfy $\sum_i \eta_{ij} = \frac{k}{k+1}\,p^k(\omega^j) + \frac{1}{k+1}\,\mathbb{1}(j = \ell) = p^{k+1}(\omega^j)$. Finally,
$\sum_i \sum_j \|\omega^i - \omega^j\|\,\eta_{ij} \ = \ \frac{k}{k+1} \sum_{i,j \le k} \|\omega^i - \omega^j\|\,\eta^0_{ij} + \frac{1}{k+1}\,\|\omega^{k+1} - \omega^\ell\| \ \le \ \frac{k}{k+1}\,\epsilon \ \le \ \epsilon,$
since $\omega^{k+1} = \omega^\ell$. Since all the three constraints are satisfied, the chosen $\eta$ is an element of the polyhedron (49), and therefore, $\bar P \in \hat{\mathfrak{P}}^{k+1}_{W}$.

2. We encounter a new observation, i.e., $\omega^{k+1} \notin \hat\Omega^k$. Let $\eta_{ij} = \frac{k}{k+1}\,\eta^0_{ij}$ for $i,j \le k$; $\eta_{(k+1)(k+1)} = \frac{1}{k+1}$; and all remaining entries equal to zero. Let us verify the three conditions defining (49) with this choice for the variables. The row and column sums follow exactly as in the previous case, and the transport cost satisfies
$\sum_i \sum_j \|\omega^i - \omega^j\|\,\eta_{ij} \ = \ \frac{k}{k+1} \sum_{i,j \le k} \|\omega^i - \omega^j\|\,\eta^0_{ij} \ \le \ \frac{k}{k+1}\,\epsilon \ \le \ \epsilon.$
Therefore, the values of the variables satisfy the constraints, and $\eta \in \mathcal{E}$. This implies that $\bar P \in \hat{\mathfrak{P}}^{k+1}_{W}$.

Next, let us consider a distribution $Q \in \hat{\mathfrak{P}}^{k}_{W}$. Then,
$d_W(Q, P^\star) \ \le \ d_W(Q, p^k) + d_W(p^k, P^\star) \ \le \ \epsilon + d_W(p^k, P^\star).$
The above inequality is a consequence of the triangle inequality of the Wasserstein distance. Since $d_W(Q, p^k) \le \epsilon$, we have $d_W(Q, \mathfrak{P}_W) \le d_W(p^k, P^\star)$; an analogous argument provides the same bound in the other direction. Under the compactness assumption for $\Omega$ (A4), Theorem 2 in [16] guarantees a concentration bound on $d_W(p^k, P^\star)$, which implies that $d_W(p^k, P^\star) \to 0$, almost surely. Consequently, we obtain that $\mathbb{H}(\hat{\mathfrak{P}}^{k}_{W}, \mathfrak{P}_W) \to 0$ as $k \to \infty$, almost surely. This completes the proof. ∎
Proof.
(Proposition 2.3) Recall that $X$ is a compact set because of assumptions (A1) and (A4), and $\{h^k(\cdot,\omega)\}$ is a sequence of continuous (piecewise linear and convex) functions. Further, the construction of the set of dual vertices satisfies $\Pi^k \subseteq \Pi^{k+1}$, which ensures that $h^k(x,\omega) \le h^{k+1}(x,\omega)$ for all $(x,\omega)$. Since $\{h^k(\cdot,\omega)\}$ increases monotonically and is bounded above by the finite function $h(\cdot,\omega)$ (due to (A2)), this sequence converges pointwise to some function $h^\infty(\cdot,\omega)$. Once again due to (A2), we know that the set of dual vertices $\Pi^V$ is finite, and since $\Pi^k \subseteq \Pi^V \cup \{0\}$, the set $\Pi^\infty := \bigcup_k \Pi^k$ is also a finite set. Clearly,
$h^\infty(x,\omega) = \max_{\pi \in \Pi^\infty} \ \pi^\top \big( \xi(\omega) - C(\omega)\,x \big)$
is the optimal value of an LP, and hence, is a continuous function. The compactness of $X$, and the continuity, monotonicity, and pointwise convergence of $\{h^k(\cdot,\omega)\}$ to $h^\infty(\cdot,\omega)$, guarantee that the sequence uniformly converges to $h^\infty(\cdot,\omega)$ (Theorem 7.13 in [32]). ∎
References
- [1] M. Bansal, K. Huang, and S. Mehrotra. Decomposition Algorithms for Two-Stage Distributionally Robust Mixed Binary Programs. SIAM Journal on Optimization, pages 2360–2383, January 2018.
- [2] M. Bansal and S. Mehrotra. On solving two-stage distributionally robust disjunctive programs with a general ambiguity set. European Journal of Operational Research, 279(2):296–307, December 2019.
- [3] G. Bayraksan and D. K. Love. Data-Driven Stochastic Programming Using Phi-Divergences. In The Operations Research Revolution, INFORMS TutORials in Operations Research, pages 1–19. INFORMS, September 2015.
- [4] G. Bayraksan and D.P. Morton. Assessing solution quality in stochastic programs. Mathematical Programming, 108(2):495–514, Sep 2006.
- [5] G. Bayraksan and D.P. Morton. A sequential sampling procedure for stochastic programming. Operations Research, 59(4):898–913, 2011.
- [6] A. Ben-Tal, D. den Hertog, A. De Waegenaere, B. Melenberg, and G. Rennen. Robust Solutions of Optimization Problems Affected by Uncertain Probabilities. Management Science, 59(2):341–357, November 2012.
- [7] D. Bertsimas, X. V. Doan, K. Natarajan, and C. Teo. Models for minimax stochastic linear optimization problems with risk aversion. Mathematics of Operations Research, 35(3):580–602, 2010.
- [8] D. Bertsimas and I. Popescu. Optimal inequalities in probability theory: A convex optimization approach. SIAM Journal on Optimization, 15(3):780–804, 2005.
- [9] J. R. Birge and F. Louveaux. Introduction to Stochastic Programming. Springer Series in Operations Research and Financial Engineering. Springer, 2011.
- [10] M. Breton and S. El Hachem. Algorithms for the solution of stochastic dynamic minimax problems. Computational Optimization and Applications, 4(4):317–345, October 1995.
- [11] M. Breton and S. El Hachem. A scenario aggregation algorithm for the solution of stochastic dynamic minimax problems. Stochastics and Stochastic Reports, 53(3-4):305–322, May 1995.
- [12] G. B. Dantzig and P. Wolfe. Decomposition principle for linear programs. Operations Research, 8(1):101–111, 1960.
- [13] E. Delage and Y. Ye. Distributionally robust optimization under moment uncertainty with application to data-driven problems. Operations Research, 58:595–612, 2010.
- [14] J. Dupacová. The minimax approach to stochastic programming and an illustrative application. Stochastics, 20:73–88, 1987.
- [15] E. Erdogan and G. Iyengar. Ambiguous chance constrained problems and robust optimization. Mathematical Programming, 107(1-2):37–61, 2006.
- [16] N. Fournier and A. Guillin. On the rate of convergence in wasserstein distance of the empirical measure. Probability Theory and Related Fields, 162(3-4):707–738, 2015.
- [17] H. Gangammanavar, Y. Liu, and S. Sen. Stochastic decomposition for two-stage stochastic linear programs with random cost coefficients. INFORMS Journal on Computing, 2020.
- [18] H. Gangammanavar and S. Sen. Stochastic dynamic linear programming: A sequential sampling algorithm for multistage stochastic linear programming. Technical report, Southern Methodist University, 2019. Available on Optimization Online: http://www.optimization-online.org/DB_HTML/2019/09/7395.html.
- [19] G. A. Hanasusanto and D. Kuhn. Conic programming reformulations of two-stage distributionally robust linear programs over wasserstein balls. Operations Research, 66(3):849–869, 2018.
- [20] J. L. Higle and S Sen. Stochastic decomposition: An algorithm for two-stage linear programs with recourse. Mathematics of Operations Research, 16(3):650–669, 1991.
- [21] J. L. Higle and S Sen. Finite master programs in regularized stochastic decomposition. Mathematical Programming, 67(1-3):143–168, 1994.
- [22] J. L. Higle and S Sen. Stochastic Decomposition: A Statistical Method for Large Scale Stochastic Linear Programming. Kluwer Academic Publishers, Boston, MA., 1996.
- [23] J. L. Higle and S Sen. Statistical approximations for stochastic linear programming problems. Annals of Operations Research, 85(0):173–193, 1999.
- [24] R. Jiang and Y. Guan. Risk-averse two-stage stochastic program with distributional ambiguity. Operations Research, 66(5):1390–1405, 2018.
- [25] S. Mehrotra and H. Zhang. Models and algorithms for distributionally robust least squares problems. Mathematical Programming, 148(1–2):123–141, 2014.
- [26] P. Mohajerin Esfahani and D. Kuhn. Data-driven distributionally robust optimization using the wasserstein metric: performance guarantees and tractable reformulations. Mathematical Programming, 171(1):115–166, Sep 2018.
- [27] H. Rahimian and S. Mehrotra. Distributionally robust optimization: A review. arXiv preprint arXiv:1908.05659, 2019.
- [28] M. Riis and K. A. Andersen. Applying the minimax criterion in stochastic recourse programs. European Journal of Operational Research, 165(3):569–584, September 2005.
- [29] R. T. Rockafellar and Roger J.-B. Wets. Scenarios and Policy Aggregation in Optimization Under Uncertainty. Mathematics of Operations Research, 16(1):119–147, February 1991.
- [30] W. Römisch. Stability of stochastic programming problems. Handbooks in operations research and management science, 10:483–554, 2003.
- [31] J. O. Royset and R. Szechtman. Optimal budget allocation for sample average approximation. Operations Research, 61(3):762–776, 2013.
- [32] W. Rudin. Principles of mathematical analysis. McGraw-Hill Book Co., New York, third edition, 1976. International Series in Pure and Applied Mathematics.
- [33] H. Scarf. A min-max solution of an inventory problem. In Studies in the Mathematical Theory of Inventory and Production, chapter 12, pages 201–209. RAND Corporation, Santa Monica CA, 1958.
- [34] S. Sen and Z. Zhou. Multistage Stochastic Decomposition: A bridge between Stochastic Programming and Approximate Dynamic Programming. SIAM Journal on Optimization, 24(1):127–153, 2014.
- [35] A. Shapiro and S. Ahmed. On a class of minimax stochastic programs. SIAM Journal on Optimization, 14(4):1237–1249, 2004.
- [36] A. Shapiro and S. Ahmed. On a class of minimax stochastic programs. SIAM Journal on Optimization, 14(4):1237–1249, 2004.
- [37] A. Shapiro, D. Dentcheva, and A. Ruszczynski. Lectures on Stochastic Programming: Modeling and Theory, Second Edition. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2014.
- [38] A. Shapiro and A. Kleywegt. Minimax analysis of stochastic problems. Optimization Methods and Software, 17(3):523–542, 2002.
- [39] H. Sun and H. Xu. Convergence analysis for distributionally robust optimization and equilibrium problems. Mathematics of Operations Research, 41(2):377–401, May 2016.
- [40] R. M. Van Slyke and R. J. B. Wets. L-shaped linear programs with applications to optimal control and stochastic programming. SIAM Journal on Applied Mathematics, 17(4):638–663, 1969.
- [41] W. Xie. Tractable reformulations of distributionally robust two-stage stochastic programs with wasserstein distance. arXiv preprint arXiv:1908.08454, 2019.
- [42] C. Zhao and Y. Guan. Data-driven risk-averse two-stage stochastic program with $\zeta$-structure probability metrics. Available at http://www.optimization-online.org/DB_FILE/2015/07/5014, 2015.