A Distributionally Robust Estimator that
Dominates the Empirical Average
Abstract
Contact: {nikolaos.koumpis,dionysis.kalogerias}@yale.edu
We leverage the duality between risk-averse and distributionally robust optimization (DRO) to devise a distributionally robust estimator that strictly outperforms the empirical average for all probability distributions with negative excess kurtosis. The aforesaid estimator solves the robust mean squared error problem in closed form.
1 Introduction
The aim of this paper is to deploy the basic insights of DRO in order to devise an empirical estimator that outperforms the (commonly used) sample average w.r.t. the mean squared error; the latter being a standard figure of merit for measuring losses in statistics and machine learning (Girshick and Savage, 1951; James and Stein, 1961; Devroye et al., 2003; Wang and Bovik, 2009; Verhaegen and Verdult, 2007; Kuhn et al., 2019).
To set the stage for our investigation, let be the set of Borel probability measures on and a set of independent and identically distributed (i.i.d.) random vectors realizing . The corresponding empirical measure is , where is the Delta measure of unit mass located at .
By defining the real-valued map , we obtain the following characterization of the empirical average
(1) |
as the solution to the surrogate of the “true” problem
(2) |
when is unknown. Plausibly then, we are interested in the performance of (1) on :
(3) |
where is the optimal value of (2) (we assume throughout that the first-order moment is finite). Although (1) dominates among unbiased estimators for normally distributed low-dimensional data, in the case of skewed or heavy-tailed measures it exhibits rather sub-optimal performance and therefore leaves room for improvement. Such an improvement could potentially be achieved by an appropriate re-weighting , on , such that
(4) |
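As a quick numerical companion to the characterization (1)–(3), the following minimal Python sketch (array shapes, the generating distribution, and the optimizer are illustrative choices, not taken from the text) verifies that the sample average is exactly the minimizer of the empirical squared loss.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=(500, 3))  # i.i.d. samples in R^3

# Empirical objective: z -> (1/N) * sum_i ||x_i - z||^2
def empirical_mse(z):
    return np.mean(np.sum((x - z) ** 2, axis=1))

z_star = minimize(empirical_mse, x0=np.zeros(3)).x
print(np.allclose(z_star, x.mean(axis=0), atol=1e-4))  # True: the minimizer is the sample mean
```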
The most prominent approach for doing so is via distributionally robust optimization. As a paradigm, DRO (Scarf et al., 1957) has gained interest due to its connections to regularization, generalization, robustness, and statistics (Staib and Jegelka, 2019; Blanchet et al., 2021; Bertsimas and Sim, 2004; Blanchet and Shapiro, 2023; Nguyen et al., 2023; Blanchet et al., 2024).
Let be the field of DRO problem instances of the form
(5) |
where is the uncertainty set of radius centered at . The two main approaches for representing the uncertainty set, recently unified by Blanchet et al. (2023), are the divergence approach and the Wasserstein approach. In the former, distributional shifts are measured in terms of likelihood ratios (Bertsimas and Sim, 2004; Bayraksan and Love, 2015; Namkoong and Duchi, 2016; Duchi and Namkoong, 2018; Van Parys et al., 2021), while in the latter, the uncertainty set is represented by a Wasserstein ball (Esfahani and Kuhn, 2015; Gao and Kleywegt, 2023; Lee and Raginsky, 2018; Kuhn et al., 2019).
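To fix ideas about these two ways of quantifying a distributional shift, the following hedged sketch (the exponential tilting used to generate the shifted weights is purely illustrative) compares a likelihood-ratio (chi-square) divergence with a one-dimensional Wasserstein distance between the empirical measure and a tail-tilted re-weighting of the same sample.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
x = rng.normal(size=200)                 # empirical sample (support points)
p = np.full(200, 1 / 200)                # empirical (uniform) weights

# A re-weighting of the same support: weights tilted towards the right tail.
q = np.exp(0.5 * (x - x.mean()))
q /= q.sum()

chi2 = np.sum((q - p) ** 2 / p)          # likelihood-ratio (chi-square) divergence
w1 = wasserstein_distance(x, x, u_weights=p, v_weights=q)  # shift measured by transport cost

print(f"chi-square divergence: {chi2:.4f}, 1-Wasserstein distance: {w1:.4f}")
```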
Let be a solution of . Based on the discussion so far, we are interested in an ideal sub-field of instances formed by the following design specifications: First, if , then there exists a saddle point . The first design specification then implies that necessarily
(6) |
In some cases, robustness against distributional shifts amounts to safeguarding against risky events (naturally encoded in the distribution tail). That is, solving a DRO problem is equivalent to solving a risk-averse problem. This brings us to the second specification, which requires to be the closed-form solution of a risk-averse problem (over ). Combining the aforesaid specifications would allow us to implement the empirical counterpart of from data drawn from . That being said, in this paper we study the following motivating question:
Is there an instance such that ?
The aforesaid dual relation between risk-averse optimization and DRO suggests a place to start our design. In particular, our methodology is as follows: we state a risk-averse problem (which is, in principle, easier to design) that is solvable in closed form; we then show that, under certain conditions, its solution also solves a certain, precisely defined DRO problem; finally, we verify our initial assertion by studying the performance of the corresponding empirical solution. We start by considering the following risk-constrained extension of (2):
(7) |
The risk-averse problem (7) is a quadratically constrained quadratic program and admits the closed form solution (see Section 2)
(8) |
where . Risk-averse optimization refers to the optimization of risk measures. Unlike expectations, these are (often) convex functionals on appropriate spaces of random variables and deliver optimizers that take into account the risky events of the underlying distribution. Moreover, a convex risk measure can be represented, through the Legendre transform, as the supremum of its affine minorants. This representation essentially characterizes the risk measure as a supremum over a certain class of affine functionals, specifically over a subset of the dual space of signed measures (Shapiro et al., 2021). For the particular class of coherent risk measures, this set contains probability measures and thus safeguarding against risky events amounts to being robust against distributional shifts. In other words, solving a risk-constrained problem on the population at hand amounts to solving a min-max problem in which expectations are taken under an appropriate re-weighting of that population.
That said, the challenge is that minimization of a risk measure does not always correspond to a DRO formulation. As an example, problem (7) is closely related to the mean-variance risk measure (Shapiro et al., 2021, p. 275), which does not lead to a DRO formulation in general. Mean-variance optimization was initially utilized by Markowitz (1952) in portfolio selection. Later on, Abeille et al. (2016) deployed mean-variance criteria for dynamic portfolio allocation problems, and Kalogerias et al. (2020) stressed the importance of safeguarding against risky events in mean squared error Bayesian estimation. DRO with -divergence is almost equivalent to controlling by the variance (Gotoh et al., 2018; Lam, 2016; Duchi and Namkoong, 2019), where the worst-case measure is computed exactly in (Staib and Jegelka, 2019). Although minimization of the mean-variance risk measure does not correspond to an element of , the mean-deviation risk measure of order does (see Theorem 1).
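As a small numerical companion to the two risk functionals just mentioned (a sketch with illustrative names and sensitivity constant, not taken from the text; the mean-deviation functional is written for order 2, matching the variance-type constraint in (7)), the snippet below evaluates both on a heavy-tailed sample of losses.

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # heavy right tail: risky events
c = 0.5                                              # illustrative sensitivity constant

mean_variance  = z.mean() + c * z.var()              # mean-variance functional
mean_deviation = z.mean() + c * np.sqrt(z.var())     # mean-deviation functional of order 2

print(f"E[Z]            : {z.mean():.3f}")
print(f"mean-variance   : {mean_variance:.3f}")
print(f"mean-deviation_2: {mean_deviation:.3f}")
```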
At this point, we state informally the first result of this paper (see Theorem 1), which establishes that the candidate problem (7) corresponds to an instance :
Informal Theorem 1.
There exists a constant such that, for every , has a saddle point and (8) solves the DRO problem
(9) |
As discussed later, Theorem 1 states that for the corresponding levels of risk of (7), the closed-form solution of (7) solves all . We answer the main question of the paper in the one-dimensional case (see Theorem 2): the empirical counterpart of (8) dominates the empirical average for all probability distributions with negative excess kurtosis, .
Informal Theorem 2.
Let be the empirical counterpart of (8), and be the empirical average. For all platykurtic probability distributions, there is such that, for every and every ,
(10) |
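For intuition on the platykurtic condition in Informal Theorem 2, the following minimal Python sketch (the distributions and sample sizes are illustrative choices, not from the text) computes the excess kurtosis of a few standard families; negative values are the platykurtic ones covered by the theorem.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
samples = {
    "uniform  (platykurtic)": rng.uniform(size=1_000_000),
    "gaussian (mesokurtic) ": rng.normal(size=1_000_000),
    "laplace  (leptokurtic)": rng.laplace(size=1_000_000),
}
for name, s in samples.items():
    # fisher=True returns the *excess* kurtosis; negative values are platykurtic
    print(f"{name}  excess kurtosis ~ {kurtosis(s, fisher=True):+.3f}")
```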
All proofs of the results presented hereafter can be found in the supplementary material.
2 Risk-constrained minimum mean squared error estimators
We begin analyzing the risk-constrained mmse problem (7) by first referring the reader to (Kalogerias et al., 2019, Lemma 1), where the well-definedness of problem (7) is ensured under the following regularity condition
Assumption 1.
It is true that
By noticing that the constraint in (7) can be written as a perfect square, we establish a quadratic reformulation that on the one hand, allows us to interpret the solution to (7) as a stereographic projection, and on the other, sets the stage for a derivative-free solution:
Lemma 1 (The risk-constrained estimator as stereographic projection).
Lemma 1 shows the equivalence of (7) to the convex quadratic program (11), allowing us to interpret the former in geometric terms based on the standard inner product in . Within this setting, the solution to (7) is a stereographic projection of the sphere onto . Further, we can do slightly more by applying Pythagoras’ theorem to the objective in (11) to obtain
(12) |
where by we mean the opposite vector to . According to Lemma 1, the solution of (12) (and therefore of (7)) is the point lying on the ellipse of radius and of center , with the least distance from the orthogonal projection . Guided by the geometric picture, the problem is feasible for any . In particular, for all , , while for , a solution exists only when . Moving forward, for all the solution is the unique point of tangency of the circle and the ellipse and is obtained by equating the “equilibrium forces”. However, based on Lemma 1, we may proceed derivative-free (see Appendix A).
Proposition 1 (The risk-constrained mmse estimator).
Because, by construction, (7) aims to reduce the estimation error variability, it is expected to do so by shifting the optimal estimates towards the areas suffering high loss, that is, towards the tail of the underlying distribution. The next result characterizes the bias of (13): the risk-constrained estimator is a shifted version of the conditional expectation by a fraction of Fisher’s moment coefficient of skewness plus the cross third-order statistics of .
Corollary 1 (Risk-constrained mmse estimator and Fisher’s skewness).
The estimator (13) operates according to
(14) |
where , and denotes the th component of , and , respectively, the orthonormal matrix with columns the eigen-vectors of , and
In an effort to minimize the squared error on average while maintaining its variance small, (13) attributes the estimation error made by the conditional expectation to the skewness of the conditional distribution along the eigen-directions of the conditional expected error. In particular, in the one-dimensional case, cross third-order statistics vanish and therefore,
(15) |
Corollary 1 reveals the “internal mechanism” with which (13) biases towards the tail of the underlying distribution. Per error eigen-direction, (13) compensates for the high losses relative to the expectation by a fraction of Fisher’s skewness coefficient and a term , which captures the cross third-order statistics of the state components. Although (14) involves the cross third-order statistics of the projected state, these statistics cannot emerge artificially from a coordinate change such as , but are intrinsic to the state itself. Besides, a meaningful measure of risk should always be invariant under coordinate changes.
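To illustrate the quantity driving this bias, the sketch below computes Fisher’s moment coefficient of skewness for a positively skewed sample and forms a tail-shifted estimate. The fraction c and the rescaling by the standard deviation (used here only to keep the shift in the units of the data) are assumptions for illustration; the exact fraction in (14)–(15) depends on the risk level and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(scale=1.0, size=100_000)    # positively skewed population

mu3 = np.mean((x - x.mean()) ** 3)              # third central moment
sigma = x.std()
gamma1 = mu3 / sigma ** 3                       # Fisher's moment coefficient of skewness (~2 here)

c = 0.1                                         # hypothetical fraction (depends on the risk level in (7))
shifted = x.mean() + c * gamma1 * sigma         # shift towards the right tail, in the units of x
print(f"gamma1 = {gamma1:.3f}, mean = {x.mean():.3f}, shifted estimate = {shifted:.3f}")
```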
Motivated by the geometric picture and the Lagrangian of (7), we note that there exists a unique random element such that (13) can be viewed as its corresponding average (see Appendix A). Fortunately, such an element can be expressed as a Radon-Nikodym derivative w.r.t. .
Lemma 2 (Risk constrained estimator as a weighted average).
The primal optimal solution of (7) can be written as
(16) |
where
(17) |
In particular, for , , it follows that .
Combining Lemma 2 and (15), we obtain (here we explicitly indicate the measure w.r.t. which expectations are taken)
(18) |
In words, averaging under the re-distribution is the same as biasing the expectation w.r.t. the initial measure towards the tail of by a fraction of Fisher’s moment coefficient of skewness. This fact can be considered a strong indication of another manifestation of the duality between risk-averse and distributionally robust optimization. In addition, in the one-dimensional case, assuming that is positively skewed, we obtain the bound
(19) |
That is, the expectations of neighboring populations that can be tracked with re-weighting are determined by a fraction of the skewness of the population at hand. In the case of empirical measures, (19) implies that lack of mass in the area of the tail is seen as the presence of risky events.
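As a rough numerical illustration of (16)–(19) (a sketch under assumptions, not the paper’s exact density (17)), the snippet below re-weights a positively skewed sample with weights of the illustrative form 1 + t((x_i − mean)² − variance) and shows that the re-weighted average moves towards the tail, consistent with the discussion above.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=1.0, size=100_000)   # positively skewed sample
m, v = x.mean(), x.var()

t = 0.2 / v                                    # illustrative tilt strength (keeps weights positive here)
w = 1.0 + t * ((x - m) ** 2 - v)               # illustrative density-ratio form, in the spirit of (17)
w = np.clip(w, 0.0, None)
w /= w.sum()

reweighted_mean = np.sum(w * x)
print(f"plain average {m:.4f}  <  re-weighted average {reweighted_mean:.4f}")
```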
3 Distributionally robust mean square error estimation
In this section we show that (16) (and therefore the corresponding solution of (7)) is the solution to a DRO problem for all radii less than a computable constant. We start by taking into account the equivalence between (7) and the mean-deviation risk measure of order (see, e.g., (Shapiro et al., 2021)). To avoid overloading notation, let us declare . Then, (7) reads
(20) |
or equivalently
(21) |
It is a standard exercise to verify that the constraint in (21) is convex w.r.t. which leads us to the following result.
Moving forward, strong Lagrangian duality yields
(25) |
where
(26) |
is the mean-deviation risk measure of order w.r.t. (see, e.g., (Shapiro et al., 2021)), with sensitivity constant given by the optimal Lagrange multiplier .
3.1 Dual representation of the mean-deviation risk measure of order
Equation (26) expresses the mean-deviation risk measure in its primal form, where the sensitivity constant is identified with the Lagrange multiplier associated with the constraint in (7). Aiming to study robustness to distributional shifts, the place to start is the dual representation of (26). Following (Shapiro et al., 2021), the variance in (26) can be expressed via the norm in
the latter being identified with its dual norm , where , and . Thus, we may write
and subsequently (26) as:
(27) |
From (27), , while implies . Therefore, , where
(28) |
In addition, let . Then , which implies that . Thus, we may write
(29) |
Based on the previous discussion, (7) is equivalent to
(30) |
The equality constraint in (28) can be absorbed and the constraint set may be re-expressed as the chi-square ball of radius centered at the probability measure and formed by the divergence from to :
Subsequently then, (30) reads
(31) |
Thus the inner optimization problem in (31) has a linear objective subject to a quadratic constraint over the infinite-dimensional (dual) vector space , and can be easily handled by variational calculus. In particular, by introducing an additional Lagrange multiplier (for dualizing the quadratic constraint), it is easy to show that the supremum is attained at
(32) |
Note that the objective in (31) is not convex over since the expectation is taken w.r.t. signed measures. At this point, the reader might find it instructive to compare (32) with (17) through (22), and ask the question: Is there a subset of admissible Lagrange multipliers for which (31) (and therefore (7)) can be formulated as a distributionally (over probability measures) robust optimization problem? We answer this question with the following result.
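To make the inner maximization in (31) concrete in the finite-sample case, the sketch below computes the worst-case re-weighting of the empirical measure within a chi-square ball for a linear objective. The closed form is a standard computation (cf. the exact worst-case measure in Staib and Jegelka, 2019) and mirrors the mean-deviation identity behind (27)–(31): the worst-case expectation equals the sample mean of the losses plus the square root of the radius times their standard deviation. The radius and variable names are illustrative assumptions, and the formula ignores the nonnegativity of the weights, which holds here for small enough radii.

```python
import numpy as np

rng = np.random.default_rng(6)
loss = rng.exponential(size=1000)            # per-sample losses l_i under the empirical measure
N = loss.size
rho = 0.05                                   # radius of the chi-square ball (small, so weights stay positive)

lbar, sigma = loss.mean(), loss.std()        # sample mean and (population-style) standard deviation

# Worst-case re-weighting: tilt the uniform weights 1/N along the centered losses.
q = 1.0 / N + np.sqrt(rho) * (loss - lbar) / (N * sigma)

chi2 = N * np.sum((q - 1.0 / N) ** 2)        # chi-square divergence of q from the empirical measure
print(np.isclose(chi2, rho), q.min() > 0)                         # constraint tight, weights valid
print(np.isclose(np.dot(q, loss), lbar + np.sqrt(rho) * sigma))   # sup = mean + sqrt(rho) * std
```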
Theorem 1 (Distributionally robust mean square error estimator).
According to Theorem 1, when the risk in (7) is relaxed more than (here denotes the corresponding level of risk), (33) can be viewed as a zero-sum game between the statistician (S), who plays , and its fictitious opponent (A), who chooses . Then, is interpreted as the best response of (S) to the best response of (A), with the objective resting on this equilibrium.
Then, according to Corollary 1, the optimal strategy of (S) is to bias (up to a change of coordinates) towards the areas of high loss (of ), with a fraction of Fisher’s moment coefficient of skewness plus some additional cross third-order statistics. The directionality in the play of (S) induces a directionality in the play of (A), as we formally discuss in the next section.
4 Skewness as the Wasserstein gradient of the divergence
According to the previous discussion, we observe that the shift of the expectation in (14) and (18) attributes directionality to the mass re-distribution play of (A) in Theorem 1. In this section we make this observation more formal through the smooth structure of the Wasserstein space of probability measures absolutely continuous w.r.t. the Lebesgue measure with finite second moment, equipped with the Wasserstein metric . In particular, the dynamic formulation of the Wasserstein distance (Benamou and Brenier, 2000) entails the definition of a (pseudo-)Riemannian structure (Chewi, 2023). At every point , the tangent space contains the re-allocation directions and is parameterized by functions on (by we denote the class of smooth real-valued maps defined on ), through the elliptic operator . The metric has value at the point (Otto, 2001):
Moving forward, let . Then, the gradient w.r.t. the defined metric is the unique vector field satisfying
where is the differential of . To derive the gradient , consider the curves progressing through and realizing the tangent vector for some . It is
We identify , or equivalently , where denotes the first variation of w.r.t. . For the particular case of the divergence from to , we obtain , which, when evaluated at (32), yields
(34) |
By plugging in (34) and subsequently taking expectations, we obtain (for simplicity in the one dimensional case)
(35) |
By combining (15) and (35) we may further write
Equation (35) characterizes infinitesimally the optimal strategy of (A): given the optimal play of (S), (A) changes the measure in (31) aiming to increase the divergence from to . For a more comprehensive review of the theory of optimal transport and its applications we refer the reader to (Ambrosio et al., 2005; Villani et al., 2009; Wibisono, 2018; Figalli and Glaudo, 2021; Chewi, 2023).
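For reference, the standard Otto-calculus identities underlying (34)–(35), written in assumed notation consistent with the discussion above (a sketch of well-known facts (Otto, 2001; Chewi, 2023), not a reproduction of the paper’s exact displays):

```latex
\[
  \langle \operatorname{grad} F(\mu), v_\psi \rangle_\mu
  = \int \nabla \frac{\delta F}{\delta \mu} \cdot \nabla \psi \, \mathrm{d}\mu ,
  \qquad\text{so that}\qquad
  \operatorname{grad} F(\mu) = \nabla \frac{\delta F}{\delta \mu}(\mu),
\]
\[
  F(\mu) = \chi^2(\mu \,\|\, \nu) = \int \Big( \tfrac{\mathrm{d}\mu}{\mathrm{d}\nu} - 1 \Big)^2 \mathrm{d}\nu
  \;\;\Longrightarrow\;\;
  \frac{\delta F}{\delta \mu} = 2\Big( \frac{\mathrm{d}\mu}{\mathrm{d}\nu} - 1 \Big),
  \qquad
  \operatorname{grad} F(\mu) = 2 \, \nabla \frac{\mathrm{d}\mu}{\mathrm{d}\nu}.
\]
```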
5 Mean squared error of the empirical distributionally robust mse estimator
Now that the risk-constrained estimator has all the desired features, we want to study its performance w.r.t. the mean squared error. We do so in two steps. First, we consider a simplified version of the empirical distributionally robust estimator and show that it outperforms the empirical average over a rather large class of models. Second (see Appendix B), we argue that with high probability this estimator is the empirical version of the distributionally robust estimator (15). The next results refer to the one-dimensional case (), with the extension to higher dimensions left open for future exploration.
By considering (15) w.r.t. , we define the following algorithm
(36) |
where is a non-negative and deterministic parameter that replaces the first factor in the second term of (15). For ease of notation, let us first denote , . We call (36) the simplified empirical distributionally robust estimator, which, with the above notation, reads , and achieves mean squared error
(37) |
for any . Strict domination follows for all those positive such that
(38) |
only under reasonable conditions that render the right-hand side of (38) strictly positive. Let be the set of all probability measures with finite second-order moment and , the real-valued map with , where and are the second and fourth central moments of , respectively. Note that the Gaussian measures are all in , and define the set . Strict domination of (36) is demonstrated by the following
Theorem 2 (Strict domination).
Let be as in (36), and be the empirical average. Fix an . Then, for probability measures , and for any ,
(39) |
for all .
As mentioned earlier, Gaussian measures are among those with . In this case , the condition (37) is satisfied with equality for any , and the mse performance of (36) reduces to that of the empirical average. On the contrary, when the data are generated from , (36) outperforms the average as soon as the parameter is chosen inside the prescribed interval. Interestingly, domination remains feasible for arbitrarily large data sets as soon as is sent to zero with a rate of at least . The distributionally robust estimator achieves better mse performance compared to the sample mean by leveraging its bias term. Further, if , then the slope of at zero
(40) |
It is for all , and the maximum value is attained for . In that case of course the best choice is . On the contrary, for all those probability measures strict domination follows for (see also Appendix B):
(41) |
The more negative the slope at zero is, the wider the feasible set becomes. A very negative slope at zero permits a large improvement in the mse performance even for very small values of , thus rendering, at the same time, the estimator as unbiased as possible. Since the slope at zero is determined by , models with large trade mse performance for unbiasedness more favorably. That said, for models with a large value of , values of close to zero achieve a non-trivial performance difference even for a large number of samples. However, on the basis of (40), large data sets tend to flatten out at zero, and to maintain performance has to be increased to
(42) |
thus rendering more biased. Lastly, approaches zero as the feasible set shrinks with a rate of at least , pushing towards the empirical average.
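To convey the flavor of Theorem 2 numerically, the sketch below uses a hypothetical instantiation of the simplified estimator (36), namely the sample mean plus ε times the empirical third central moment divided by the empirical variance; this specific form is an assumption for illustration only, since the exact expression in (36) is not reproduced here. Consistent with the discussion above, for platykurtic (uniform) data a small ε > 0 reduces the Monte Carlo MSE below that of the sample average, while for Gaussian data it does not.

```python
import numpy as np

rng = np.random.default_rng(7)

def mse_of_simplified_estimator(sampler, true_mean, eps, n=50, reps=100_000):
    """Monte Carlo MSE of a hypothetical instantiation of (36):
    sample mean + eps * (empirical third central moment) / (empirical variance)."""
    x = sampler((reps, n))
    m = x.mean(axis=1, keepdims=True)
    est = m[:, 0] + eps * np.mean((x - m) ** 3, axis=1) / x.var(axis=1)
    return np.mean((est - true_mean) ** 2)

experiments = [
    ("uniform(0,1)  (excess kurtosis -1.2)", lambda s: rng.uniform(size=s), 0.5),
    ("gaussian(0,1) (excess kurtosis  0.0)", lambda s: rng.normal(size=s), 0.0),
]
for name, sampler, mu in experiments:
    base = mse_of_simplified_estimator(sampler, mu, eps=0.0)
    print(name)
    for eps in (0.25, 0.5):
        ratio = mse_of_simplified_estimator(sampler, mu, eps=eps) / base
        print(f"  eps={eps:4.2f}: MSE relative to the sample mean = {ratio:.3f}")
```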
6 Conclusion and future work
We deployed the main insights of DRO to come up with a distributionally robust estimator; its empirical version serves as a better estimator for the mean than the empirical average, w.r.t. the mean squared error, for all platykurtic probability distributions. The aforesaid estimator biases (up to a change of coordinates) towards the tail of the distribution by Fisher’s moment coefficient of skewness plus additional cross third-order statistics and, on top of that, it can be written as an expectation w.r.t. an optimal re-weighting of the original measure. This optimal re-weighting, along with the optimal estimator, forms a saddle point of the mse cost, and the bias characterizes the directionality of the infinitesimal play through the Wasserstein geometry of the space of measures. Lastly, extending Theorem 2 to the multi-dimensional setting is left for future investigation.
References
- Abeille et al. [2016] Marc Abeille, Alessandro Lazaric, Xavier Brokmann, et al. LQG for portfolio optimization. arXiv preprint arXiv:1611.00997, 2016.
- Ambrosio et al. [2005] Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient flows: in metric spaces and in the space of probability measures. Springer Science & Business Media, 2005.
- Bayraksan and Love [2015] Güzin Bayraksan and David K Love. Data-driven stochastic programming using phi-divergences. In The operations research revolution, pages 1–19. INFORMS, 2015.
- Benamou and Brenier [2000] Jean-David Benamou and Yann Brenier. A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numerische Mathematik, 84(3):375–393, 2000.
- Bertsimas and Sim [2004] Dimitris Bertsimas and Melvyn Sim. The price of robustness. Operations research, 52(1):35–53, 2004.
- Blanchet and Shapiro [2023] Jose Blanchet and Alexander Shapiro. Statistical limit theorems in distributionally robust optimization. In 2023 Winter Simulation Conference (WSC), pages 31–45. IEEE, 2023.
- Blanchet et al. [2021] Jose Blanchet, Karthyek Murthy, and Viet Anh Nguyen. Statistical analysis of Wasserstein distributionally robust estimators. In Tutorials in Operations Research: Emerging Optimization Methods and Modeling Techniques with Applications, pages 227–254. INFORMS, 2021.
- Blanchet et al. [2023] Jose Blanchet, Daniel Kuhn, Jiajin Li, and Bahar Taskesen. Unifying distributionally robust optimization via optimal transport theory. arXiv preprint arXiv:2308.05414, 2023.
- Blanchet et al. [2024] Jose Blanchet, Jiajin Li, Sirui Lin, and Xuhui Zhang. Distributionally robust optimization and robust statistics. arXiv preprint arXiv:2401.14655, 2024.
- Chewi [2023] Sinho Chewi. An optimization perspective on log-concave sampling and beyond. PhD thesis, Massachusetts Institute of Technology, 2023.
- Devroye et al. [2003] Luc Devroye, Dominik Schäfer, László Györfi, and Harro Walk. The estimation problem of minimum mean squared error. Statistics & Decisions, 21(1):15–28, 2003.
- Duchi and Namkoong [2018] John Duchi and Hongseok Namkoong. Learning models with uniform performance via distributionally robust optimization. arXiv preprint arXiv:1810.08750, 2018.
- Duchi and Namkoong [2019] John Duchi and Hongseok Namkoong. Variance-based regularization with convex objectives. Journal of Machine Learning Research, 20(68):1–55, 2019.
- Esfahani and Kuhn [2015] Peyman Mohajerin Esfahani and Daniel Kuhn. Data-driven distributionally robust optimization using the Wasserstein metric: Performance guarantees and tractable reformulations. arXiv preprint arXiv:1505.05116, 2015.
- Figalli and Glaudo [2021] Alessio Figalli and Federico Glaudo. An invitation to optimal transport, Wasserstein distances, and gradient flows. 2021.
- Gao and Kleywegt [2023] Rui Gao and Anton Kleywegt. Distributionally robust stochastic optimization with Wasserstein distance. Mathematics of Operations Research, 48(2):603–655, 2023.
- Girshick and Savage [1951] MA Girshick and LJ Savage. Bayes and minimax estimates for quadratic loss functions. In Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, volume 2, pages 53–74. University of California Press, 1951.
- Gotoh et al. [2018] Jun-ya Gotoh, Michael Jong Kim, and Andrew EB Lim. Robust empirical optimization is almost the same as mean–variance optimization. Operations research letters, 46(4):448–452, 2018.
- James and Stein [1961] William James and Charles Stein. Estimation with quadratic loss. In Breakthroughs in statistics: Foundations and basic theory, pages 443–460. Springer, 1961.
- Kalogerias et al. [2019] Dionysios S Kalogerias, Luiz FO Chamon, George J Pappas, and Alejandro Ribeiro. Risk-aware mmse estimation. arXiv preprint arXiv:1912.02933, 2019.
- Kalogerias et al. [2020] Dionysios S Kalogerias, Luiz FO Chamon, George J Pappas, and Alejandro Ribeiro. Better safe than sorry: Risk-aware nonlinear Bayesian estimation. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5480–5484. IEEE, 2020.
- Kuhn et al. [2019] Daniel Kuhn, Peyman Mohajerin Esfahani, Viet Anh Nguyen, and Soroosh Shafieezadeh-Abadeh. Wasserstein distributionally robust optimization: Theory and applications in machine learning. In Operations research & management science in the age of analytics, pages 130–166. Informs, 2019.
- Lam [2016] Henry Lam. Robust sensitivity analysis for stochastic systems. Mathematics of Operations Research, 41(4):1248–1275, 2016.
- Lee and Raginsky [2018] Jaeho Lee and Maxim Raginsky. Minimax statistical learning with Wasserstein distances. Advances in Neural Information Processing Systems, 31, 2018.
- Markowitz [1952] Harry Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, 1952. ISSN 00221082, 15406261. URL http://www.jstor.org/stable/2975974.
- Namkoong and Duchi [2016] Hongseok Namkoong and John C Duchi. Stochastic gradient methods for distributionally robust optimization with f-divergences. Advances in neural information processing systems, 29, 2016.
- Nguyen et al. [2023] Viet Anh Nguyen, Soroosh Shafieezadeh-Abadeh, Daniel Kuhn, and Peyman Mohajerin Esfahani. Bridging Bayesian and minimax mean square error estimation via Wasserstein distributionally robust optimization. Mathematics of Operations Research, 48(1):1–37, 2023.
- Otto [2001] Felix Otto. The geometry of dissipative evolution equations: The porous medium equation. Communications in Partial Differential Equations, 26(1-2):101–174, 2001.
- Scarf et al. [1957] Herbert E Scarf, KJ Arrow, and S Karlin. A min-max solution of an inventory problem. Rand Corporation Santa Monica, 1957.
- Shapiro et al. [2021] Alexander Shapiro, Darinka Dentcheva, and Andrzej Ruszczynski. Lectures on stochastic programming: modeling and theory. SIAM, 2021.
- Staib and Jegelka [2019] Matthew Staib and Stefanie Jegelka. Distributionally robust optimization and generalization in kernel methods. Advances in Neural Information Processing Systems, 32, 2019.
- Van Parys et al. [2021] Bart PG Van Parys, Peyman Mohajerin Esfahani, and Daniel Kuhn. From data to decisions: Distributionally robust optimization is optimal. Management Science, 67(6):3387–3402, 2021.
- Verhaegen and Verdult [2007] Michel Verhaegen and Vincent Verdult. Filtering and system identification: a least squares approach. Cambridge university press, 2007.
- Villani et al. [2009] Cédric Villani et al. Optimal transport: old and new, volume 338. Springer, 2009.
- Wang and Bovik [2009] Zhou Wang and Alan C Bovik. Mean squared error: Love it or leave it? a new look at signal fidelity measures. IEEE signal processing magazine, 26(1):98–117, 2009.
- Wibisono [2018] Andre Wibisono. Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem. In Conference on Learning Theory, pages 2093–3027. PMLR, 2018.
Appendix A (Proofs of Section 2)
Proof of Lemma 1.
The square of the constraint of (7) is written as (all expectations are taken w.r.t. ):
(48) |
where
Thus, by taking expectations
(54) |
where , , and . Equation (54) rests on the permutation-invariance property of the . Without loss of generality, let us assume that the covariance matrix is strictly positive, and consider its Schur complement w.r.t. thus obtaining the following standard factorization (see e.g. [Verhaegen and Verdult, 2007, Lemma 2.3]):
(61) |
∎
Proof of Proposition 1.
From Lemma 1, problem (7) can equivalently be written as:
where is the Lagrangian of (12) given by
(62) |
and is the multiplier associated with the constraint of (12). Slater’s condition is directly verified from (12) and therefore, due to convexity, the dual-optimal value is attained, and (12) has zero duality gap:
The Lagrangian reads
(68) |
and by performing a similar decomposition as in (61) we obtain
(69) |
The last two terms in (69) do not depend on , and thus
(70) |
is primal-optimal. ∎
Proof of Corollary 1.
Let us denote . Differentiation of (70) w.r.t. , gives
(71) |
where the commutator , and therefore we may write
(72) |
It is worth noticing that the dependency on has transferred in , and that , and . Thus, we declare . By integrating (72) w.r.t. we obtain
(73) |
where , and refers to the spectral decomposition of . That is, the optimal estimates are shifted versions of the conditional expectation by a transformed version of the vector . Recall that this vector encodes the difference between the center of the circle and the center of the ellipsoid. From (73) we obtain
(74) |
Further,
(75) |
where . In addition, since
and is unitary,
(76) |
By , and we mean the th component of , and , respectively. Also, note that since the singular values of the covariance matrix are coordinate-change invariant, , or . Thus,
(77) |
Completing the third power in the second term of (77), and subsequently setting the last term equal to concludes the proof. ∎
Proof of Lemma 2.
The optimality condition for the Lagrangian corresponding to (7) yields:
or
or
or
or
(78) |
The Radon-Nikodym derivative in (78) takes positive as well as negative values. However, and therefore, renders the factor in (78) a positive density. This is guaranteed when
where the first equality follows because the objective increases with . ∎
Appendix B (Proofs of Sections 3, 5)
Proof of Lemma 3.
Proof of Theorem 1.
To keep the notation simple, we use just for the optimal Lagrange multiplier of the mean-deviation problem, and for the optimal Lagrange multiplier of the mean-variance problem. Then, on the basis of Lemma 3 and
(79)
(80)
(81)
Equation (80) follows from (79) because is the optimal solution of (29). On top of that,
(82)
(83)
where (83) follows from (82) because of Lemma 2. With , again from Lemma 2, the measure is positive and thus,
(84) |
Proof of Theorem 2.
It is , and . The condition for strict domination reads
(85) |
For the numerator of (85) we have:
(86) |
We compute each term of (86). For the first one we have
(87) |
We compute each term inside the parenthesis in (87): For the first we have , while for the second:
(88) |
For the third term inside the parenthesis of (87) we use
(89) |
to obtain
(90) |
Lastly, the fourth term reads:
(91) |
Thus, by (88), (90), (91), the parenthesis in (87) reads:
(92) |
The summation in (87) increases the order by :
(93) |
Therefore, for the first term in (86) we have:
(94) |
Moving forward, for the second term of (86) we have
(95) |
We compute the terms inside the parenthesis in (95). For the first one we have:
(96) |
For the second:
(97) |
The third term in (95) reads:
(98) |
where we expanded the third power and subsequently used that
(99) |
The last term in the parenthesis of (95) is
(100) |
where we used
(101) |
where are all combinations that sum up to . By gathering all the terms we have that the parenthesis in (95) reads:
(102) |
Lastly, the outer summation in (95) increases the order by .
(103) |
By plugging (94), and (103) into (86) we obtain
(104) |
or
(105) |
where , and denote the second and fourth central moments of the probability measure , respectively. Strictly positive values for are feasible for all those models with . Lastly, the condition (85) reads:
(106) |
For the case where the random variable is supported over a compact subset of , we can bound the polynomial terms of in (2), and also the variance in the denominator of the first factor of (15). This assumption is not absolutely necessary and can be avoided by showing that with high probability (36) is a DRO estimator. Since Theorem 2 allows , this can be done by controlling the empirical variance in the factor of (15) as well as the polynomial terms in the expectation (w.r.t. ) related to the threshold . Then, for a fixed model, as one has to trade performance for distributional robustness, the latter being favored by the logarithmic dependency in the number of samples. ∎