Minimax Regret Optimization for Robust Machine Learning under Distribution Shift
Abstract
In this paper, we consider learning scenarios where the learned model is evaluated under an unknown test distribution which potentially differs from the training distribution (i.e. distribution shift). The learner has access to a family of weight functions such that the test distribution is a reweighting of the training distribution under one of these functions, a setting typically studied under the name of Distributionally Robust Optimization (DRO). We consider the problem of deriving regret bounds in the classical learning theory setting, and require that the resulting regret bounds hold uniformly for all potential test distributions. We show that the DRO formulation does not guarantee uniformly small regret under distribution shift. We instead propose an alternative method called Minimax Regret Optimization (MRO), and show that under suitable conditions this method achieves uniformly low regret across all test distributions. We also adapt our technique to have stronger guarantees when the test distributions are heterogeneous in their similarity to the training data. Given the widespread optimization of worst-case risks in current approaches to robust machine learning, we believe that MRO can be a strong alternative to address distribution shift scenarios.
1 Introduction
Learning good models for scenarios where the evaluation happens under a test distribution that differs from the training distribution has become an increasingly important problem in machine learning and statistics. It is particularly motivated by the widespread practical deployment of machine learning models, and the observation that model performance deteriorates when the test distribution differs from the training distribution. Some motivating scenarios range from class imbalance between training and test data (Galar et al., 2011; Japkowicz, 2000) to algorithmic fairness (Dwork et al., 2012; Barocas and Selbst, 2016). Consequently, several recent formalisms have been proposed to develop algorithms for such settings, such as adversarial training (Goodfellow et al., 2014; Madry et al., 2017; Wang et al., 2019), Invariant Risk Minimization (Arjovsky et al., 2019; Ahuja et al., 2020a; Rosenfeld et al., 2020) and Distributionally Robust Optimization (Duchi and Namkoong, 2021; Namkoong and Duchi, 2016; Staib and Jegelka, 2019; Kuhn et al., 2019). While the specifics differ across these formulations, at a high-level they all minimize the worst-case risk of the learned model across some family of test distributions, and spend considerable effort on finding natural classes of test distributions and designing efficient algorithms to optimize the worst-case objectives.
In this paper, we consider this problem from a learning theory perspective, by asking if it is possible to obtain regret bounds that hold uniformly across all test distributions. Recall that in supervised learning, we are given a function class $\mathcal{F}$ and a loss function $\ell(f, z)$, where $z$ denotes a sample. For a test distribution $Q$, we are interested in learning a prediction function $f \in \mathcal{F}$ using the training data, so that the risk of $f$ under $Q$: $R_Q(f) = \mathbb{E}_{z \sim Q}[\ell(f, z)]$, is small. In classical learning theory, an estimator $\hat{f}$ is learned from the training data, and we are interested in bounding the regret (or excess risk) of $\hat{f}$ as follows:
$$R_Q(\hat{f}) - R_Q(f_Q),$$
where $f_Q$ minimizes $R_Q(f)$ over $f \in \mathcal{F}$. If the training data are drawn from the same distribution as the test distribution $Q$, then standard results imply that the empirical risk minimization (ERM) method, which finds an $f \in \mathcal{F}$ with a small training error, achieves small regret. A natural question we would like to ask in this paper is whether there exists an estimator that allows us to achieve small regret under distribution shift. More precisely, suppose we are given a family of distributions $\mathcal{P}$ with $Q \in \mathcal{P}$. Given access to samples from a training distribution $D$, we would like to find an estimator $\hat{f}$ from the training data so that the uniform regret
$$\sup_{Q \in \mathcal{P}} \Big[ R_Q(\hat{f}) - \inf_{f \in \mathcal{F}} R_Q(f) \Big] \qquad (1)$$
is small. Here the model class $\mathcal{F}$ can be statistically misspecified in that it may not contain the optimal model that attains the Bayes risk in a pointwise manner.
We note that this objective is different from the objective of Distributionally Robust Optimization (henceforth DRO), which directly minimizes the worst-case risk across all test distributions as follows:
$$\min_{f \in \mathcal{F}} \; \sup_{Q \in \mathcal{P}} \; R_Q(f). \qquad (2)$$
We observe that the objective (1) is less sensitive than DRO to heterogeneity in the amount of noise in test distributions. Moreover, when the model is well-specified, our criterion directly measures the closeness of the estimator $\hat{f}$ and the optimal model $f^\star$, which is often desirable in applications. We call the criterion of minimizing regret uniformly across test distributions Minimax Regret Optimization (MRO), and its population formulation seeks to minimize the worst-case regret (1):
$$f_{\mathrm{MRO}} \;=\; \operatorname*{argmin}_{f \in \mathcal{F}} \; \sup_{Q \in \mathcal{P}} \Big[ R_Q(f) - \inf_{f' \in \mathcal{F}} R_Q(f') \Big]. \qquad (3)$$
Compared to DRO, MRO evaluates the regret of a candidate model on each distribution as opposed to the raw risk. As we show in the following sections, regret is comparable across distributions in a more robust manner than the risk, since subtracting the minimum risk for each distribution removes the variance due to noise. We note that a similar approach of removing the residual variance to deal with heterogeneous noise arises in the offline Reinforcement Learning setting (Antos et al., 2008), which we comment on further in the discussion of related work. Similar to ERM, which replaces the regret minimization problem over a single test distribution by its empirical counterpart, we consider an empirical minimization counterpart of MRO, and analyze its generalization performance.
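As a concrete illustration (and not part of the formal development), the following short Python sketch contrasts the two population criteria when the risk of each candidate function under each test distribution is known exactly; the numerical values are illustrative placeholders, not taken from this paper.

```python
import numpy as np

# Hypothetical risks R_Q(f): rows = candidate functions, columns = test distributions.
risks = np.array([
    [0.00, 0.30],   # f1: best on Q1, small regret on the noisier Q2
    [0.05, 0.26],   # f2: slightly worse on Q1, best on the noisier Q2
])

# DRO picks the function with the smallest worst-case risk.
f_dro = risks.max(axis=1).argmin()

# MRO first subtracts the per-distribution minimum risk (the regret),
# then picks the function with the smallest worst-case regret.
regrets = risks - risks.min(axis=0, keepdims=True)
f_mro = regrets.max(axis=0 * 0 + 1).argmin() if False else regrets.max(axis=1).argmin()

print("DRO choice:", f_dro, "MRO choice:", f_mro)  # DRO chases the noisy Q2, MRO does not
```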
Our contributions.
With this context, our paper makes the following contributions.
1. The objective (3), which gives an alternative to DRO, alleviates its shortcomings related to noise overfitting, and provides a robust learning formulation for problems with distribution shift.
2. We show that the empirical counterpart to (3) can be analyzed using classical tools from learning theory. When the loss function is bounded and Lipschitz continuous, we show that the regret of our learned model $\hat{f}$ on each distribution can be bounded by the smallest regret attainable uniformly within $\mathcal{F}$, along with a deviation term that decays as $O(1/\sqrt{n})$. Concretely, we show that if some $f^\star \in \mathcal{F}$ has regret at most $\epsilon$ on every distribution in $\mathcal{P}$, then with high probability the regret of $\hat{f}$ on each distribution is at most $\epsilon + O(1/\sqrt{n})$, where we omit the dependence on the complexities of $\mathcal{F}$ and $\mathcal{W}$ as well as the failure probability, and use a simplified assumption relative to our formal result of Theorem 2 for ease of presentation. That is, $\hat{f}$ has a bounded regret on each domain, so long as there is at least one function with a small regret on each domain. For squared loss, if the class $\mathcal{F}$ is convex, we show that our rates improve to $O(1/n)$ (Theorem 4). The result follows naturally as empirical regret concentrates to population regret at a fast rate in this setting. We also show that the worst-case risk of the empirical DRO solution cannot approach the best worst-case risk in $\mathcal{F}$ at a rate faster than $1/\sqrt{n}$ (Proposition 6), which demonstrates a theoretical benefit of our selection criterion.
3. We present SMRO, an adaptation of our basic technique which further rescales the regret of each distribution differently. We show that SMRO retains the same worst-case guarantees as MRO, but markedly improves upon it when the distributions satisfy an alignment condition, which includes all well-specified settings (Theorems 7 and 8). The rescaling parameters are fully estimated from data.
4. Algorithmically, we show that our method can be implemented using access to a (weighted) ERM oracle for the function class $\mathcal{F}$, using the standard machinery of solving minimax problems as a two-player game involving a no-regret learner and a best response player. The computational complexity scales linearly in the size of $\mathcal{W}$ and linearly in the number of samples $n$.
We conclude this section with a discussion of related work.
1.1 Related Work
Distributional mismatch between training and test data has been studied in many settings (see e.g. (Quinonero-Candela et al., 2009)) and under several notions of mismatch. A common setting is that of covariate shift, where only the marginal distribution of the covariates can differ between training and test, while the conditional distribution of labels given covariates stays fixed (Shimodaira, 2000; Huang et al., 2006; Bickel et al., 2007). Others study changes in the proportions of the label or other discrete attributes between training and test (Dwork et al., 2012; Xu et al., 2020). More general shifts are considered in domain adaptation (Mansour et al., 2009; Ben-David et al., 2010; Patel et al., 2015) and transfer learning (Pan and Yang, 2009; Tan et al., 2018).
A recent line of work on Invariant Risk Minimization (IRM) (Arjovsky et al., 2019) considers particularly broad generalization to unseen domains from which no examples are available at training time, motivated by preventing the learning of spurious features in ERM based methods. The idea is to find an estimator which is invariant across all potential test distributions. Although most empirical works on IRM use gradient-penalty based formulations for algorithmic considerations, one interpretation of IRM leads to a minimax formulation similar to the MRO objective (3):
$$\min_{f \in \mathcal{F}} \; R_D(f) \quad \text{subject to} \quad \sup_{Q \in \mathcal{P}} \Big[ R_Q(f) - \inf_{f' \in \mathcal{F}} R_Q(f') \Big] \;\le\; 0, \qquad (4)$$
where the family $\mathcal{P}$ consists of all individual domains in the training data and $D$ is the entire training data pooled across domains. However, it is not obvious how to meaningfully analyze the distributional robustness of the IRM method (4) in the classical learning theory setting, and some subsequent works (Ahuja et al., 2020b; Kamath et al., 2021) formalize IRM as minimizing a DRO style worst-case risk objective, instead of regret as in the original paper. In contrast, our MRO formulation is motivated by extending regret-driven reasoning to settings with distribution shift.
Distributionally robust optimization, which forms a starting point for this work, has a long history in the optimization literature (see e.g. (Ben-Tal et al., 2009; Shapiro, 2017)) and has gained recent prominence in the machine learning literature (Namkoong and Duchi, 2016; Duchi and Namkoong, 2021; Duchi et al., 2021; Staib and Jegelka, 2019; Kuhn et al., 2019; Zhu et al., 2020). DRO has been applied with mixed success in language modeling (Oren et al., 2019), correcting class imbalance (Xu et al., 2020) and group fairness (Hashimoto et al., 2018). For most of the theoretical works, the emphasis is on efficient optimization of the worst-case risk objective. While Duchi and Namkoong (2021) provide sharp upper and lower bounds on the risk of the DRO estimator, they do not examine regret, which is the central focus of our study.
We note that the raw risk used in DRO is sensitive to heterogeneous noise, in that larger noise leads to larger risk. Such sensitivity can be undesirable for some applications. Challenges in learning across scenarios with heterogeneous noise levels have been previously studied in supervised learning (Crammer et al., 2005), and also arise routinely in reinforcement learning (RL). In the setting of offline RL, a class of techniques is designed to minimize the Bellman error criterion (Bertsekas and Tsitsiklis, 1996; Munos and Szepesvári, 2008), which has known challenges in direct unbiased estimation (see e.g. Example 11.4 in Sutton and Barto (1998)). To counter this, Antos et al. (2008) suggest removing the residual variance, which is akin to considering regret instead of risk in our objective and yields similar minimax objectives as this work.
On the algorithmic side, we build on the use of regret minimization strategies for solving two player zero-sum games, pioneered by Freund and Schapire (1996) to allow for general distribution families as opposed to relying on closed-form maximizations for specific families, as performed in many prior works (Namkoong and Duchi, 2016; Staib and Jegelka, 2019; Kuhn et al., 2019). Similar approaches have been studied in the presence of adversarial corruption in Feige et al. (2015) and in the context of algorithmic fairness for regression in Agarwal et al. (2019).
2 Setting
We consider a class of learning problems where the learner is given a dataset $\{z_i\}_{i=1}^n$ of i.i.d. samples drawn from a training distribution $D$. We also have a class of distributions $\mathcal{P}$ such that we want to do well relative to this entire class of distributions, even though we only have access to samples from $D$. In Minimax Regret Optimization, we formalize the property of doing uniformly well for all $Q \in \mathcal{P}$ through the objective (3). As mentioned in the introduction, this is closely related to the DRO objective (2). The following result shows that small risk does not lead to small regret, which means DRO does not solve the MRO objective (3). The construction also shows that under heterogeneous noise, DRO tends to focus on a distribution with a large noise level, which can be undesirable for many applications.
Proposition 1 (DRO is sensitive to noise heterogeneity)
There exists a family $\mathcal{P}$ of two distributions with heterogeneous noise levels, and a function class $\mathcal{F}$ with two elements, such that the worst-case regret over $\mathcal{P}$ of the DRO solution is much larger than the worst-case regret of the MRO solution.
Proof  Suppose $\mathcal{P}$ consists of two distributions $P_1$ and $P_2$ over $z \in \mathbb{R}$. The first distribution $P_1$ is degenerate, placing all of its mass on a single point, while the second distribution $P_2$ is Gaussian with a comparatively large variance. The goal is to estimate the mean of the distributions using the least squares loss. In this example, $P_1$ has zero noise and $P_2$ has a relatively high noise. We consider a function class consisting of two constant estimates of the mean, $f_1$ and $f_2$. Let us abbreviate $R_i(f)$ for $R_{P_i}(f)$. The true means of $P_1$ and $P_2$ differ, and one of the two candidates is a uniformly better estimate of both means.
It is easy to verify the four risks $R_1(f_1)$, $R_1(f_2)$, $R_2(f_1)$ and $R_2(f_2)$ directly from these choices. The DRO objective (2) then evaluates to the optimal value of 0.26, attained by the candidate that best fits the noisy distribution $P_2$. On the other hand, the MRO objective (3) is attained by the other candidate, which has a uniformly small regret on both distributions, while the DRO solution incurs a much higher MRO objective. As a result, $f_{\mathrm{DRO}}$ and $f_{\mathrm{MRO}}$ differ in this construction. Comparing regrets lets MRO remove the underlying noise level of $P_2$ when evaluating candidates.
Note that while the worst-case regret of MRO is always better than that of DRO, this does not imply that the function $f_{\mathrm{MRO}}$ is always preferable to $f_{\mathrm{DRO}}$ under each individual distribution. Next we give one such example where $f_{\mathrm{DRO}}$ achieves an arguably better trade-off in balanced performance across distributions. This situation can happen in real applications when the functions achieving minimum risks vary greatly across distributions, potentially due to overfitting. The example shows that both MRO and DRO may have advantages or disadvantages under different conditions.
Example 1 (Heterogeneous regrets across domains)
Suppose the target class $\mathcal{P}$ consists of two distributions $P_1$ and $P_2$, and our function class consists of three functions $f_1, f_2, f_3$. Let us assume that the risks of these functions under the two distributions are given by:
Then one of the three functions is easily disregarded under both MRO and DRO, since it has poor performance under one of the distributions. For the remaining two functions, the regrets on one distribution dominate, and MRO selects the function with the smaller worst-case regret. DRO, on the other hand, prefers the other function as it has a smaller worst-case risk, which is arguably the right choice in this scenario.
More generally, one can always find a distribution under which the regret of $f_{\mathrm{MRO}}$ is smaller than that of $f_{\mathrm{DRO}}$ and vice versa, but the ordering in the worst case is clear by definition. In the context of these observations, we next discuss how we might estimate $f_{\mathrm{MRO}}$ from samples, for which it is useful to rewrite the objective (3) in an equivalent weight-based formulation that we discuss next.
A weight-based reformulation.
Having shown the potential benefits of our population objective, we now consider how to optimize it using samples from $D$. Notice that the objective (3) does not depend on $D$ explicitly, which makes it unclear how we should approach the problem given our dataset. We address this issue by adopting an equivalent reweighting based formulation, as typically done in DRO. Concretely, let us assume that each $Q \in \mathcal{P}$ is absolutely continuous with respect to $D$, so that there exists a weighting function $w_Q$ such that $dQ = w_Q \, dD$, where $w_Q(z) = \frac{dQ}{dD}(z)$. We can equivalently rewrite the objective (3) as
$$f_{\mathrm{MRO}} \;=\; \operatorname*{argmin}_{f \in \mathcal{F}} \; \sup_{w \in \mathcal{W}} \Big[ R_w(f) - R_w(f_w) \Big], \qquad (5)$$
where $\mathcal{W} = \{w_Q : Q \in \mathcal{P}\}$, $R_w(f) = \mathbb{E}_{z \sim D}\big[w(z)\, \ell(f, z)\big]$ and $f_w \in \operatorname{argmin}_{f \in \mathcal{F}} R_w(f)$. It is also straightforward to define an empirical counterpart for this objective. Given samples $z_1, \ldots, z_n$ drawn i.i.d. from $D$, we can define the (weighted) empirical risk and its corresponding minimizer as $\hat{R}_w(f) = \frac{1}{n}\sum_{i=1}^n w(z_i)\, \ell(f, z_i)$ and $\hat{f}_w \in \operatorname{argmin}_{f \in \mathcal{F}} \hat{R}_w(f)$. With these notations, a natural empirical counterpart of the population objective (5) can be written as
$$\hat{f} \;=\; \operatorname*{argmin}_{f \in \mathcal{F}} \; \max_{w \in \mathcal{W}} \Big[ \hat{R}_w(f) - \hat{R}_w(\hat{f}_w) \Big]. \qquad (6)$$
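For concreteness, here is a minimal numpy sketch of the empirical objective (6) when both the function class and the weight class are finite, so that all quantities can be enumerated; the array layout and function name are our own choices rather than notation from the paper.

```python
import numpy as np

def empirical_mro(losses, weights):
    """Empirical MRO objective (6) for finite F and W.
    losses: array of shape (|F|, n), entry (f, i) = loss of f on sample z_i.
    weights: array of shape (|W|, n), entry (w, i) = importance weight w(z_i).
    Returns the index of the function minimizing the worst-case empirical regret."""
    n = losses.shape[1]
    risks = weights @ losses.T / n                       # (|W|, |F|): R_hat_w(f)
    regrets = risks - risks.min(axis=1, keepdims=True)   # subtract R_hat_w(f_hat_w)
    return regrets.max(axis=0).argmin()                  # min over f of max over w
```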
We now give a couple of concrete examples to instantiate this general formulation.
Example 2 (Covariate shift in supervised learning)
Suppose the samples are $z = (x, y)$, where $x$ are features and $y$ is a prediction target. Suppose that the class $\mathcal{W}$ contains weight functions which depend on $z$ only through $x$, indexed by some parameter class. That is, we allow only the marginal distribution of $x$ to change while the conditional distribution of $y$ given $x$ is identical across test distributions (Shimodaira, 2000; Huang et al., 2006). In our formulation, it is easy to incorporate the covariate shift situation with a general class $\mathcal{W}$ under the restriction that each $w \in \mathcal{W}$ is a function of $x$ alone. It is possible to add additional regularity on the class $\mathcal{W}$, such as taking the unit ball in an RKHS (Huang et al., 2006; Staib and Jegelka, 2019). We note that most prior studies of DRO imposed conditions on the joint perturbations of both $(x, y)$, which leads to closed form solutions. However, directly instantiating $\mathcal{W}$ in DRO with a bounded $f$-divergence or MMD perturbations for only the marginal of $x$ (instead of the joint distribution of $(x, y)$) does not yield a nice closed form solution. This results in additional computational and statistical challenges as discussed in Duchi et al. (2019).
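As a small illustration of this covariate-shift instantiation, the sketch below forms importance weights that depend only on $x$ from a user-supplied log density-ratio model; the Gaussian mean-shift example and the clipping constant enforcing Assumption 2 are our own assumptions.

```python
import numpy as np

def covariate_shift_weights(x_train, log_density_ratio, max_weight=20.0):
    """Weights w(x) = dQ_x/dD_x on the training covariates, clipped so that
    the boundedness condition of Assumption 2 holds with B = max_weight."""
    w = np.exp(log_density_ratio(x_train))
    return np.clip(w, 0.0, max_weight)

# Hypothetical example: D_x = N(0, 1), Q_x = N(mu, 1), so
# log dQ_x/dD_x(x) = mu * x - mu**2 / 2.
mu = 0.5
x_train = np.random.randn(1000)
w = covariate_shift_weights(x_train, lambda x: mu * x - mu**2 / 2)
```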
We now demonstrate the versatility of our setting with an example from reinforcement learning.
Example 3 (Offline contextual bandit learning)
In offline contextual bandit (CB) learning, the data consists of tuples $(x, a, r)$, with $x$ denoting a context, $a$ being an action out of $K$ possible choices, and $r$ a reward. There is a fixed and unknown joint distribution over contexts and rewards. The object of interest is a decision policy $\pi$ which maps a context to a distribution over actions, and we seek a policy which maximizes the expected reward under its action choices: $\max_{\pi \in \Pi} \mathbb{E}\big[r(\pi(x))\big]$, where $\Pi$ is some given policy class. In the offline setting, there is some fixed logging policy $\pi_0$ which is used to choose actions during the data collection process. Then the training data can be described as tuples $(x_i, a_i, r_i)$ where $a_i \sim \pi_0(\cdot \mid x_i)$, and this action distribution differs from the action distributions that other policies in $\Pi$ induce. Existing literature on this problem typically creates an estimator $\hat{r}(x, a)$ for the expected reward and outputs the policy maximizing the estimated value, and the quality of the resulting policy critically relies on the quality of the estimator $\hat{r}$. A particularly popular choice is the class of doubly robust estimators (Cassel et al., 1976; Dudík et al., 2014) which solve a (weighted) regression problem to estimate $\hat{r}$. Since we require the reward model to predict well on the actions selected according to different policies $\pi \in \Pi$, the ideal regression objective for any policy reweights the data as per that policy's actions, and doing this weighting results in a significant performance gain (Su et al., 2020; Farajtabar et al., 2018). Typically this weighting is simplified or skipped during policy learning, however, as one needs to simultaneously reweight for all policies $\pi \in \Pi$, and MRO provides an approach to do exactly this by choosing $\mathcal{W} = \{w_\pi(x, a) = \pi(a \mid x)/\pi_0(a \mid x) : \pi \in \Pi\}$ in the optimization for estimating $\hat{r}$.
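The reweighting described here can be written down directly; the sketch below (with names of our own choosing) computes per-policy importance weights $w_\pi(x, a) = \pi(a \mid x)/\pi_0(a \mid x)$ on logged data and the corresponding weighted regression objective for a candidate reward model.

```python
import numpy as np

def policy_weights(pi_probs, pi0_probs, actions):
    """w_pi(x_i, a_i) = pi(a_i | x_i) / pi_0(a_i | x_i).
    pi_probs, pi0_probs: arrays of shape (n, K) with action probabilities
    of the target and logging policies on each logged context."""
    idx = np.arange(len(actions))
    return pi_probs[idx, actions] / pi0_probs[idx, actions]

def weighted_regression_loss(r_hat, rewards, weights):
    """Weighted squared error (an instance of R_hat_w) for a reward model's
    predictions r_hat on the logged (context, action) pairs."""
    return np.mean(weights * (r_hat - rewards) ** 2)
```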
Boundedness and complexity assumptions
We now make standard technical assumptions on $\mathcal{F}$, the loss function and the weight class $\mathcal{W}$. To focus attention on the essence of the results, we omit relatively complex tools such as chaining or local Rademacher complexity, and use an $\ell_\infty$ covering for uniform convergence over $\mathcal{F}$, where we use $N_\infty(\mathcal{F}, \epsilon)$ to denote the $\ell_\infty$ covering number of $\mathcal{F}$ at accuracy $\epsilon$. For parametric classes, such an analysis is optimal up to log factors. We also make a boundedness and Lipschitz continuity assumption on the loss function.
Assumption 1 (Bounded and Lipschitz losses)
The loss function $\ell(f, z)$ is bounded in $[0, 1]$ for all $f \in \mathcal{F}$ and $z$, and is $L$-Lipschitz continuous with respect to $f$, uniformly over $z$.
We also assume that the weight class satisfies a boundedness condition.
Assumption 2 (Bounded importance weights)
All weights $w \in \mathcal{W}$ satisfy $0 \le w(z) \le B$ for all $z$ and some constant $B < \infty$.
To avoid the complexity of introducing an additional covering number, throughout this paper we assume that the weight class $\mathcal{W}$ is of finite cardinality $|\mathcal{W}|$, which is reasonable for many domain adaptation applications. Our bounds jointly capture the complexities of $\mathcal{F}$ and $\mathcal{W}$ through terms of the form $\log\big(N_\infty(\mathcal{F}, \epsilon)\,|\mathcal{W}|/\delta\big)$, given a failure probability $\delta$.
3 Regret Bounds for Bounded, Lipschitz Losses
We begin with the most general case with fairly standard assumptions on the loss function, and the function and weight classes, and show the following result on the regret of $\hat{f}$ for any $w \in \mathcal{W}$.
Theorem 2
The bound of Theorem 2 highlights the main difference of our objective compared with DRO style approaches. The bound states that our solution $\hat{f}$ has a small regret for each $w \in \mathcal{W}$, as long as at least one such function exists in the function class, up to the usual finite sample deviation terms. Thus, $\hat{f}$ attains the uniform regret guarantee we asked for. To better interpret our result, we state an easy corollary, before making some additional remarks.
Corollary 3
Under the conditions of Theorem 2, suppose further that there exists $f^\star \in \mathcal{F}$ such that $R_w(f^\star) - R_w(f_w) \le \epsilon$ for all $w \in \mathcal{W}$. Then with probability at least $1 - \delta$, the regret $R_w(\hat{f}) - R_w(f_w)$ is bounded by $\epsilon$ plus the finite sample deviation term of Theorem 2, simultaneously for all $w \in \mathcal{W}$.
Comparison with DRO approaches.
While Proposition 1 already illustrates the improved robustness of MRO to differing noise levels across target distributions, it is further instructive to compare the guarantees for the two approaches. Concretely, in our setting, it can be shown that with probability at least $1 - \delta$, we have for all $w \in \mathcal{W}$:
$$R_w(\hat{f}_{\mathrm{DRO}}) \;\lesssim\; \inf_{f \in \mathcal{F}} \sup_{w' \in \mathcal{W}} \bigg[ R_{w'}(f) + \sqrt{\frac{B\, R_{w'}(f)\, \log\big(N_\infty(\mathcal{F}, \epsilon)\,|\mathcal{W}|/\delta\big)}{n}} \bigg] + \frac{B \log\big(N_\infty(\mathcal{F}, \epsilon)\,|\mathcal{W}|/\delta\big)}{n}. \qquad (7)$$
Compared with Theorem 2, this bound can be inferior when the different distributions identified by $\mathcal{W}$ have vastly different noise levels. For instance, in the setting of Corollary 3, the bound (7) retains an additional dependence on the worst-case risk of $f^\star$ across all the $w \in \mathcal{W}$, rather than only on its regret. We can now recall that the assumption of Corollary 3 is satisfied with a small $\epsilon$ in the constructed example of Proposition 1, in which DRO tries to fit to the large noise. Similarly, if the data distribution is Gaussian with an identical mean but different variances across the $w \in \mathcal{W}$, then the deviation term of our bound scales with the regret of the comparator, while the DRO deviation term scales with the largest variance $\sigma_w^2$ of the Gaussian corresponding to importance weights $w$. Overall, these examples serve to illustrate that the DRO objective is reasonable when the different distributions have similar noise levels, but is not robust to heterogeneity in noise levels.
Dependence on the complexity of $\mathcal{W}$.
For discrete $\mathcal{W}$, our results incur a union bound penalty of $\log|\mathcal{W}|$. We suspect that this dependency is unavoidable in the general case. If we define $\mathcal{W}$ using a bound constraint such as $0 \le w \le B$, or an $f$-divergence based constraint as in the DRO literature, then the worst-case $w$ has a simple closed-form solution in the DRO setting, with no explicit dependence on the complexity of $\mathcal{W}$. These results are easily adapted to MRO as well, since the same closed-form maximization can be applied to the regret objective. For instance, if $\mathcal{W}$ is the class of all importance weights bounded by $B$, then the results of Shapiro (2017) adapted to MRO imply that
$$\sup_{w :\, 0 \le w \le B,\; \mathbb{E}_D[w] = 1} \mathbb{E}_D\big[w(z)\, g(z)\big] \;=\; \min_{\eta \in \mathbb{R}} \Big\{ \eta + B \, \mathbb{E}_D\big[(g(z) - \eta)_+\big] \Big\} \quad \text{for any integrand } g(z). \qquad (8)$$
Similar arguments hold for the family of Cressie-Read divergences studied in Duchi and Namkoong (2021). However, these closed form maximizations rely on doing joint perturbations over $(x, y)$ in the supervised learning setting and do not apply to covariate shifts (Duchi et al., 2019).
Proof [Sketch of the proof of Theorem 2] For a fixed $f$ and $w$, we can bound $|\hat{R}_w(f) - R_w(f)|$ using Bernstein's inequality. This requires bounding the range and variance of the random variable $w(z)\,\ell(f, z)$. Since the losses are bounded in $[0, 1]$ and the weights by $B$, the two quantities are bounded by $B$ and $B\, R_w(f)$ respectively. Assumption 1 allows us to further get a uniform bound over $\mathcal{F}$ with an additional dependence on its covering number. Standard arguments now yield closeness of the empirical regrets $\hat{R}_w(f) - \hat{R}_w(\hat{f}_w)$ to their population counterparts $R_w(f) - R_w(f_w)$ simultaneously for all $f \in \mathcal{F}$ and $w \in \mathcal{W}$, and utilizing the definition of $\hat{f}$ as the minimizer of the objective (6) completes the proof.
4 Fast Rates for Squared Loss and Convex Classes
A limitation of Theorem 2 is that it does not leverage any structure of the loss function beyond Assumption 1, and as a result obtains a $1/\sqrt{n}$ dependence on the number of samples. For the case of a fixed distribution, it is well known that self-bounding properties of the loss variance can be used to obtain faster bounds on the regret of the ERM solution, and here we show that this improvement extends to our setting. For ease of exposition, we specialize to the squared loss setting in this section. That is, our samples take the form $z = (x, y)$ with $x \in \mathcal{X}$, $y \in [0, 1]$, our functions map from $\mathcal{X}$ to $[0, 1]$, and the loss is $\ell(f, z) = (f(x) - y)^2$. We make an additional convexity assumption on the function class $\mathcal{F}$.
Assumption 3
The class $\mathcal{F}$ is convex: $\alpha f_1 + (1 - \alpha) f_2 \in \mathcal{F}$ for all $f_1, f_2 \in \mathcal{F}$ and $\alpha \in [0, 1]$.
Note that convexity of the set $\mathcal{F}$ is quite different from convexity of $f$ in its parameters, and can always be satisfied by taking the convex hull of a base class. The assumption can be avoided by replacing our ERM based solution with aggregation approaches (see e.g. Tsybakov, 2003; Dalalyan and Salmon, 2012). However, we focus on the convex case here to illustrate the key ideas.
Theorem 4 crucially leverages the following property of convex classes and squared loss.
Lemma 5
For a convex class $\mathcal{F}$ and the squared loss, we have for any distribution $P$ and $f \in \mathcal{F}$: $\mathbb{E}_{x \sim P}\big[(f(x) - f_P(x))^2\big] \le R_P(f) - R_P(f_P)$, where $f_P \in \operatorname{argmin}_{f' \in \mathcal{F}} R_P(f')$.
Note that a similar result holds for the squared loss and any class $\mathcal{F}$, if we replace $f_P$ with the unconstrained minimizer of $R_P$ over all measurable functions. However, using this property in our analysis would result in an additional approximation error term in the bound of Theorem 4, which is undesirable when considering regret within the function class.
Proof [Sketch of the proof of Theorem 4] The only difference with the proof of Theorem 2 is in the handling of the variance of $w(z)\big(\ell(f, z) - \ell(f_w, z)\big)$, which we use as our random variable of interest in this case. Since $w(z) \le B$ almost surely, the variance is at most a constant multiple of $B\, \mathbb{E}_D\big[w(z)\,(f(x) - f_w(x))^2\big]$, where the inequality follows from the boundedness of $f$ and $y$, and this quantity is in turn at most a constant multiple of $B\,\big(R_w(f) - R_w(f_w)\big)$ by Lemma 5. We can now follow the usual recipe for fast rates with a self-bounding property of the variance. The rest of the arguments mirror the proof of Theorem 2.
Comparison with DRO.
It is natural to ask if it is possible to obtain fast rates for DRO under the conditions of Theorem 4. Of course it is not possible to show a regret bound similar to Theorem 4 for DRO, since it does not optimize the worst-case regret, as Proposition 1 illustrates. Nevertheless, we investigate if DRO can achieve a fast rate for the raw risk. Unfortunately, the following result shows that even under the assumptions of Theorem 4, the worst-case risk criterion for DRO is still subject to the slower rate. The reason is that regret has a markedly smaller variance than the risk, and the latter usually deviates at a $1/\sqrt{n}$ rate even for the squared loss.
Proposition 6
Under the assumptions of Theorem 4, there is a family of two distributions $\mathcal{P}$ satisfying our assumptions, and a convex function class $\mathcal{F}$, such that with probability at least a constant, the worst-case risk of $\hat{f}_{\mathrm{DRO}}$ exceeds $\inf_{f \in \mathcal{F}} \sup_{w \in \mathcal{W}} R_w(f)$ by $\Omega(1/\sqrt{n})$.
5 Adapting to Misspecification through Non-uniform Scaling across Distributions
Our results so far guarantee uniform regret, with a learning complexity that depends on the worst-case complexity over all distributions in $\mathcal{P}$. For example, if the complexity of each distribution is characterized by the magnitude of its importance weights as in our analysis, then the bound depends on the largest such magnitude over $\mathcal{W}$. In this section, we show that it is possible to improve this dependence using an adaptive scaling of the objective function. To better illustrate the consequences of this approach, let us assume that we can rewrite our guarantees in the form that for each $f \in \mathcal{F}$ and $w \in \mathcal{W}$, we have with probability at least $1 - \delta$,
$$\Big| \big(\hat{R}_w(f) - \hat{R}_w(\hat{f}_w)\big) - \big(R_w(f) - R_w(f_w)\big) \Big| \;\le\; \sigma_w\, \Delta_n(\delta), \qquad (9)$$
where the scaling function $\sigma_w$ is a distribution dependent quantity that is known to the algorithm, and $\Delta_n(\delta)$ collects the complexity and failure-probability terms. For instance, under the conditions of Theorem 2, we get a bound of this form with $\sigma_w$ capturing the second moment of the importance weights $w$, and under the conditions of Theorem 4 we can use an analogous scaling by taking the second bound of Theorem 4. While this does worsen our dependence on $n$ in the first case, it is preferable to assuming that the population second moments are known to the algorithm.
Since we assume $\sigma_w$ is known, we can define the estimator (Scaled MRO, or SMRO):
$$\hat{f}_{\mathrm{SMRO}} \;=\; \operatorname*{argmin}_{f \in \mathcal{F}} \; \max_{w \in \mathcal{W}} \; \frac{\hat{R}_w(f) - \hat{R}_w(\hat{f}_w)}{\sigma_w}. \qquad (10)$$
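As a sketch of how the scaling enters computationally, the snippet below extends the earlier empirical MRO computation by dividing each empirical regret by $\hat{\sigma}_w$, the root empirical second moment of the weights used in Theorem 7; as before, the finite-class array layout is an illustrative assumption.

```python
import numpy as np

def empirical_smro(losses, weights):
    """SMRO objective (10) for finite F and W, with sigma_w estimated as
    the root empirical second moment of the importance weights (Theorem 7).
    losses: (|F|, n) per-sample losses; weights: (|W|, n) importance weights."""
    n = losses.shape[1]
    risks = weights @ losses.T / n
    regrets = risks - risks.min(axis=1, keepdims=True)
    sigma = np.sqrt((weights ** 2).mean(axis=1))   # hat{sigma}_w per weight function
    return (regrets / sigma[:, None]).max(axis=0).argmin()
```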
We now present our main results for the statistical properties of $\hat{f}_{\mathrm{SMRO}}$ under the assumptions of Theorems 2 and 4, before comparing the results and discussing some key consequences.
Theorem 7
Under the assumptions of Theorem 2, consider the estimator $\hat{f}_{\mathrm{SMRO}}$ with $\sigma_w = \hat{\sigma}_w$, where $\hat{\sigma}_w^2 = \frac{1}{n}\sum_{i=1}^n w(z_i)^2$ is the empirical second moment of the importance weights. Then with probability at least $1 - \delta$, for all $w \in \mathcal{W}$, the regret $R_w(\hat{f}_{\mathrm{SMRO}}) - R_w(f_w)$ is bounded by the best achievable scaled regret in $\mathcal{F}$ plus a deviation term proportional to $\hat{\sigma}_w$.
We can also state a version for squared loss and convex classes.
Theorem 8
Under the assumptions of Theorem 4, consider the estimator $\hat{f}_{\mathrm{SMRO}}$ with an appropriately chosen scaling $\sigma_w$. Then with probability at least $1 - \delta$, we obtain for each $w \in \mathcal{W}$ a fast-rate regret bound whose deviation term depends only on the particular $w$ under consideration.
Comparing Theorems 7 and 8 with their counterparts for MRO, we notice a subtle but important difference. The finite sample deviation term in Theorem 7 depends on the weights $w$ for which the bound is stated, while the corresponding term in Theorem 2 involves the worst case over all of $\mathcal{W}$. In other words, the heterogeneous scaling in SMRO allows us to obtain a regret bound for each distribution depending on the deviation properties of that particular distribution, unlike in the basic MRO approach. To see the benefits of this approach, we state a corollary of Theorems 7 and 8 next.
Corollary 9 (Aligned distribution class)
Both results follow by choosing the shared minimizer $f^\star$ as the comparator in Theorems 7 and 8. This result shows that the rescaling allows our bounds to adapt to the closeness of a target distribution to the data collection distribution, simultaneously for all target distributions. The alignment condition of a shared minimizer $f^\star$ is always satisfied in well-specified problems, and we illustrate a consequence of Corollary 9 for well-specified linear regression in Appendix D.
6 Algorithmic considerations
So far our development has focused on the statistical properties of MRO. In this section, we discuss how the MRO estimator can be computed from a finite dataset, given some reasonable computational assumptions on the function class $\mathcal{F}$.
Definition 10 (ERM oracle)
Given a dataset of the form $\{(u_i, z_i)\}_{i=1}^n$ with non-negative weights $u_i$, an ERM oracle for $\mathcal{F}$ solves the weighted empirical risk minimization problem: $\operatorname{argmin}_{f \in \mathcal{F}} \sum_{i=1}^n u_i\, \ell(f, z_i)$.
While we assume access to an exact ERM oracle, we can weaken the notion to an approximate oracle with an optimization error comparable to the statistical error from finite samples. Given such an oracle, we can now approximately solve the MRO objective (6) (or (10)) by using a well-known strategy of solving minimax problems as two player zero-sum games. To do so, we recall our assumption of a finite class $\mathcal{W}$, and let $\Delta(\mathcal{W})$ denote the set of all distributions over the importance weight family. Similarly $\Delta(\mathcal{F})$ is the set of distributions over our function class. Then we can rewrite the objective in Equation (6) as:
$$\min_{p \in \Delta(\mathcal{F})} \; \max_{Q \in \Delta(\mathcal{W})} \; \mathbb{E}_{f \sim p}\, \mathbb{E}_{w \sim Q} \Big[ \hat{R}_w(f) - \hat{R}_w(\hat{f}_w) \Big]. \qquad (11)$$
The objective is bilinear in $p$ and $Q$, so that the following result holds due to the exchangeability of minimum and maximum for convex-concave saddle point problems. It is also a direct consequence of Proposition 12.
Proposition 11
There exists a distribution $Q^\star \in \Delta(\mathcal{W})$ so that the solution of MRO is equivalent to the following solution of the weighted ERM method: $\hat{f} \in \operatorname{argmin}_{f \in \mathcal{F}} \mathbb{E}_{w \sim Q^\star}\big[\hat{R}_w(f)\big]$.
For the finite $\mathcal{W}$ which we consider in this paper, it is also possible to find the approximate solution efficiently, using the celebrated result of Freund and Schapire (1996) to solve this minimax problem through no-regret dynamics. We use the best response strategy for the $f$-player, as finding the best response given a specific $Q \in \Delta(\mathcal{W})$ is equivalent to finding the function that minimizes $\mathbb{E}_{w \sim Q}\big[\hat{R}_w(f)\big]$, which can be accomplished through one call to the ERM oracle with weights $u_i = \mathbb{E}_{w \sim Q}[w(z_i)]$. For optimizing $Q$, we use the exponentiated gradient algorithm of Kivinen and Warmuth (1997) (closely related to Hedge (Freund and Schapire, 1997)), which is a common no-regret strategy to learn a distribution over a finite collection of experts, with each $w \in \mathcal{W}$ being an "expert" in our setting. More formally, we initialize $Q_1$ to be the uniform distribution over $\mathcal{W}$ and repeatedly update:
$$f_t \;=\; \operatorname*{argmin}_{f \in \mathcal{F}} \; \mathbb{E}_{w \sim Q_t}\big[\hat{R}_w(f)\big], \qquad Q_{t+1}(w) \;\propto\; Q_t(w)\, \exp\Big( \eta \big[ \hat{R}_w(f_t) - \hat{R}_w(\hat{f}_w) \big] \Big). \qquad (12)$$
Let us denote by $\bar{Q} = \frac{1}{T}\sum_{t=1}^T Q_t$ the averaged distribution for the iterates generated by (12). Then the results of Freund and Schapire (1996) yield the following suboptimality bound on the resulting solution.
Proposition 12
For any $T \ge 1$ and using a step size $\eta$ of order $\sqrt{\log|\mathcal{W}|/T}$ in the updates (12), the averaged solution satisfies a suboptimality gap for the objective (11) of at most $O\big(\sqrt{\log|\mathcal{W}|/T}\big)$.
At a high level, Proposition 12 allows us to choose $T$ to ensure that the optimization error is no larger than the statistical error, allowing the same bounds to apply up to constants. The optimization strategy used here bears some resemblance to boosting techniques, which also seek to reweight the data in order to ensure a uniformly good performance on all samples, though the reweighting there is not constrained to a particular class, unlike here. Note that while the optimization error scales logarithmically in $|\mathcal{W}|$, the computational complexity is linear in the size of this set, meaning that our strategy is computationally feasible for sets of a modest size. On the other hand, our earlier discussion around (8) suggests that alternative reformulations of the objective might be computationally preferable in the case of classes $\mathcal{W}$ which are continuous. Handling the intermediate regime of a large discrete class in a computationally efficient manner is an interesting question for future research.
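To make the game dynamics concrete, here is a minimal sketch of the best-response / exponentiated-gradient loop for finite $\mathcal{F}$ and $\mathcal{W}$, where the ERM oracle reduces to exact enumeration; the step size, iteration count, and the choice to return the most frequently played function (rather than the uniform mixture over plays used in the analysis) are simplifications of our own.

```python
import numpy as np

def mro_game_solver(losses, weights, T=200, eta=0.1):
    """Approximately solve objective (11): the w-player runs exponentiated
    gradient over Delta(W), the f-player best-responds via weighted ERM.
    losses: (|F|, n) per-sample losses; weights: (|W|, n) importance weights."""
    n = losses.shape[1]
    risks = weights @ losses.T / n                       # R_hat_w(f) for all (w, f)
    regrets = risks - risks.min(axis=1, keepdims=True)   # subtract R_hat_w(f_hat_w)
    Q = np.full(weights.shape[0], 1.0 / weights.shape[0])
    plays = []
    for _ in range(T):
        f_t = int((Q @ regrets).argmin())                # best response: one ERM call
        plays.append(f_t)
        Q = Q * np.exp(eta * regrets[:, f_t])            # upweight w where f_t has high regret
        Q = Q / Q.sum()
    return max(set(plays), key=plays.count)
```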
7 Conclusion
In this paper, we introduce the MRO criterion for problems with distribution shift, and establish learning theoretic properties of optimizing the criterion from samples. We demonstrate the many benefits of reasoning with uniform regret as opposed to uniform risk guarantees, and we expect these observations to have implications beyond the setting of distribution shift.
On the technical side, it remains interesting to further develop scalable algorithms for large datasets and weight classes. Better understanding the statistical scaling with the size of the weight class, and refining our techniques for important scenarios such as covariate shift, are also important directions for future research.
References
- Agarwal et al. [2019] Alekh Agarwal, Miroslav Dudík, and Zhiwei Steven Wu. Fair regression: Quantitative definitions and reduction-based algorithms. In International Conference on Machine Learning, pages 120–129. PMLR, 2019.
- Ahuja et al. [2020a] Kartik Ahuja, Karthikeyan Shanmugam, Kush Varshney, and Amit Dhurandhar. Invariant risk minimization games. In International Conference on Machine Learning, pages 145–155. PMLR, 2020a.
- Ahuja et al. [2020b] Kartik Ahuja, Jun Wang, Amit Dhurandhar, Karthikeyan Shanmugam, and Kush R Varshney. Empirical or invariant risk minimization? a sample complexity perspective. arXiv preprint arXiv:2010.16412, 2020b.
- Antos et al. [2008] András Antos, Csaba Szepesvári, and Rémi Munos. Learning near-optimal policies with bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 71(1):89–129, 2008.
- Arjovsky et al. [2019] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
- Barocas and Selbst [2016] Solon Barocas and Andrew D Selbst. Big data’s disparate impact. Calif. L. Rev., 104:671, 2016.
- Ben-David et al. [2010] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine learning, 79(1):151–175, 2010.
- Ben-Tal et al. [2009] Aharon Ben-Tal, Laurent El Ghaoui, and Arkadi Nemirovski. Robust optimization. Princeton university press, 2009.
- Bertsekas and Tsitsiklis [1996] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1st edition, 1996. ISBN 1886529108.
- Bickel et al. [2007] Steffen Bickel, Michael Brückner, and Tobias Scheffer. Discriminative learning for differing training and test distributions. In Proceedings of the 24th international conference on Machine learning, pages 81–88, 2007.
- Cassel et al. [1976] Claes M Cassel, Carl E Särndal, and Jan H Wretman. Some results on generalized difference estimation and generalized regression estimation for finite populations. Biometrika, 63(3):615–620, 1976.
- Crammer et al. [2005] Koby Crammer, Michael Kearns, and Jennifer Wortman. Learning from data of variable quality. In NIPS, pages 219–226, 2005.
- Dalalyan and Salmon [2012] Arnak S Dalalyan and Joseph Salmon. Sharp oracle inequalities for aggregation of affine estimators. The Annals of Statistics, 40(4):2327–2355, 2012.
- Duchi and Namkoong [2021] John C Duchi and Hongseok Namkoong. Learning models with uniform performance via distributionally robust optimization. The Annals of Statistics, 49(3):1378–1406, 2021.
- Duchi et al. [2019] John C Duchi, Tatsunori Hashimoto, and Hongseok Namkoong. Distributionally robust losses against mixture covariate shifts. Under review, 2019.
- Duchi et al. [2021] John C Duchi, Peter W Glynn, and Hongseok Namkoong. Statistics of robust optimization: A generalized empirical likelihood approach. Mathematics of Operations Research, 2021.
- Dudík et al. [2014] Miroslav Dudík, Dumitru Erhan, John Langford, and Lihong Li. Doubly robust policy evaluation and optimization. Statistical Science, 29(4):485–511, 2014.
- Dwork et al. [2012] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, pages 214–226, 2012.
- Farajtabar et al. [2018] Mehrdad Farajtabar, Yinlam Chow, and Mohammad Ghavamzadeh. More robust doubly robust off-policy evaluation. In International Conference on Machine Learning, pages 1447–1456. PMLR, 2018.
- Feige et al. [2015] Uriel Feige, Yishay Mansour, and Robert Schapire. Learning and inference in the presence of corrupted inputs. In Conference on Learning Theory, pages 637–657. PMLR, 2015.
- Freund and Schapire [1996] Yoav Freund and Robert E Schapire. Game theory, on-line prediction and boosting. In Proceedings of the ninth annual conference on Computational learning theory, pages 325–332, 1996.
- Freund and Schapire [1997] Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences, 55(1):119–139, 1997.
- Galar et al. [2011] Mikel Galar, Alberto Fernandez, Edurne Barrenechea, Humberto Bustince, and Francisco Herrera. A review on ensembles for the class imbalance problem: bagging-, boosting-, and hybrid-based approaches. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(4):463–484, 2011.
- Goodfellow et al. [2014] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
- Hashimoto et al. [2018] Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. Fairness without demographics in repeated loss minimization. In International Conference on Machine Learning, pages 1929–1938. PMLR, 2018.
- Huang et al. [2006] Jiayuan Huang, Arthur Gretton, Karsten Borgwardt, Bernhard Schölkopf, and Alex Smola. Correcting sample selection bias by unlabeled data. Advances in neural information processing systems, 19:601–608, 2006.
- Japkowicz [2000] Nathalie Japkowicz. The class imbalance problem: Significance and strategies. In Proc. of the Int’l Conf. on Artificial Intelligence, volume 56. Citeseer, 2000.
- Kamath et al. [2021] Pritish Kamath, Akilesh Tangella, Danica Sutherland, and Nathan Srebro. Does invariant risk minimization capture invariance? In International Conference on Artificial Intelligence and Statistics, pages 4069–4077. PMLR, 2021.
- Kivinen and Warmuth [1997] Jyrki Kivinen and Manfred K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Inf. Comput., 132(1):1–63, 1997.
- Kuhn et al. [2019] Daniel Kuhn, Peyman Mohajerin Esfahani, Viet Anh Nguyen, and Soroosh Shafieezadeh-Abadeh. Wasserstein distributionally robust optimization: Theory and applications in machine learning. In Operations Research & Management Science in the Age of Analytics, pages 130–166. INFORMS, 2019.
- Madry et al. [2017] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
- Mansour et al. [2009] Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learning bounds and algorithms. arXiv preprint arXiv:0902.3430, 2009.
- Munos and Szepesvári [2008] Rémi Munos and Csaba Szepesvári. Finite-time bounds for fitted value iteration. Journal of Machine Learning Research, 9(5), 2008.
- Namkoong and Duchi [2016] Hongseok Namkoong and John C Duchi. Stochastic gradient methods for distributionally robust optimization with f-divergences. In NIPS, volume 29, pages 2208–2216, 2016.
- Oren et al. [2019] Yonatan Oren, Shiori Sagawa, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust language modeling. arXiv preprint arXiv:1909.02060, 2019.
- Pan and Yang [2009] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345–1359, 2009.
- Patel et al. [2015] Vishal M Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Visual domain adaptation: A survey of recent advances. IEEE signal processing magazine, 32(3):53–69, 2015.
- Quinonero-Candela et al. [2009] J. Quinonero-Candela, M. Sugiyama, A. Schwaighofer, and N.D. Lawrence. Dataset Shift in Machine Learning. Neural Information Processing series. MIT Press, 2009. ISBN 9780262170055. URL https://books.google.com/books?id=7zrhAAAAMAAJ.
- Rosenfeld et al. [2020] Elan Rosenfeld, Pradeep Kumar Ravikumar, and Andrej Risteski. The risks of invariant risk minimization. In International Conference on Learning Representations, 2020.
- Shalev-Shwartz [2012] Shai Shalev-Shwartz. Online learning and online convex optimization. Found. Trends Mach. Learn., page 107–194, 2012.
- Shapiro [2017] Alexander Shapiro. Distributionally robust stochastic programming. SIAM Journal on Optimization, 27(4):2258–2275, 2017.
- Shimodaira [2000] Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 90(2):227–244, 2000.
- Staib and Jegelka [2019] Matthew Staib and Stefanie Jegelka. Distributionally robust optimization and generalization in kernel methods. Advances in Neural Information Processing Systems, 32:9134–9144, 2019.
- Su et al. [2020] Yi Su, Maria Dimakopoulou, Akshay Krishnamurthy, and Miroslav Dudík. Doubly robust off-policy evaluation with shrinkage. In International Conference on Machine Learning, pages 9167–9176. PMLR, 2020.
- Sutton and Barto [1998] R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. Adaptive Computation and Machine Learning series. MIT Press, 1998. ISBN 9780262303842. URL https://books.google.com/books?id=U57uDwAAQBAJ.
- Tan et al. [2018] Chuanqi Tan, Fuchun Sun, Tao Kong, Wenchang Zhang, Chao Yang, and Chunfang Liu. A survey on deep transfer learning. In International conference on artificial neural networks, pages 270–279. Springer, 2018.
- Tsybakov [2003] Alexandre B Tsybakov. Optimal rates of aggregation. In Learning theory and kernel machines, pages 303–313. Springer, 2003.
- Wang et al. [2019] Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, and Quanquan Gu. On the convergence and robustness of adversarial training. In International Conference on Machine Learning, pages 6586–6595. PMLR, 2019.
- Xu et al. [2020] Ziyu Xu, Chen Dan, Justin Khim, and Pradeep Ravikumar. Class-weighted classification: Trade-offs and robust approaches. In International Conference on Machine Learning, pages 10544–10554. PMLR, 2020.
- Zhu et al. [2020] Jia-Jie Zhu, Wittawat Jitkrittum, Moritz Diehl, and Bernhard Schölkopf. Kernel distributionally robust optimization. arXiv preprint arXiv:2006.06981, 2020.
Appendix A Proof of Theorem 2
Recalling our assumptions that for any $f \in \mathcal{F}$ and $z$ we have $0 \le \ell(f, z) \le 1$ and $0 \le w(z) \le B$, we know that for a fixed $f \in \mathcal{F}$ and $w \in \mathcal{W}$, we have, with probability at least $1 - \delta$:
$$\big| \hat{R}_w(f) - R_w(f) \big| \;\le\; \sqrt{\frac{2 B\, R_w(f) \log(2/\delta)}{n}} + \frac{2 B \log(2/\delta)}{3 n}. \qquad (13)$$
This is a consequence of Bernstein's inequality applied to the random variable $w(z)\,\ell(f, z)$, which is bounded by $B$ almost surely, and has a second moment at most $B\, R_w(f)$ when $0 \le \ell(f, z) \le 1$. Since $\ell$ is $L$-Lipschitz in $f$, if $\|f - f'\|_\infty \le \epsilon$, we have for any $z$ and any $w \in \mathcal{W}$:
$$w(z)\,\big|\ell(f, z) - \ell(f', z)\big| \;\le\; L\, w(z)\, \epsilon \;\le\; L B \epsilon,$$
where the second inequality follows since $w(z) \le B$ for all $z$ by assumption. Hence, defining $\mathcal{F}_\epsilon$ to be an $\ell_\infty$ cover of $\mathcal{F}$ at a scale $\epsilon$, a union bound combined with the bound (13) yields that with probability at least $1 - \delta$, the bound holds for all $f \in \mathcal{F}_\epsilon$. Using the Lipschitz property of the loss and further taking a union bound over $\mathcal{W}$, this gives for all $f \in \mathcal{F}$ and $w \in \mathcal{W}$, with probability at least $1 - \delta$:
$$\big| \hat{R}_w(f) - R_w(f) \big| \;\lesssim\; \sqrt{B\, R_w(f)\, \kappa_n} \;+\; B\, \kappa_n \;+\; L B \epsilon, \qquad \text{where } \kappa_n := \frac{\log\big(N_\infty(\mathcal{F}, \epsilon)\,|\mathcal{W}|/\delta\big)}{n}, \qquad (14)$$
where $\kappa_n$ captures the joint complexity of $\mathcal{F}$ and $\mathcal{W}$ as introduced in Section 2. We now condition on the probability $1 - \delta$ event that the bound of Equation (14) holds, to avoid stating error probabilities in each bound. In particular, applying the bound above to the empirical minimizer $\hat{f}_w$ for any $w \in \mathcal{W}$, we observe that $\hat{R}_w(\hat{f}_w)$ and $R_w(f_w)$ are close as well.
Hence, for any $f \in \mathcal{F}$, we can chain the uniform deviation bound (14) with the fact that $\hat{f}$ minimizes the objective (6), so that its worst-case empirical regret is no larger than that of $f$, to bound the population regret $R_w(\hat{f}) - R_w(f_w)$ in terms of $R_w(f) - R_w(f_w)$ plus the deviation terms. Taking an infimum over $f \in \mathcal{F}$ completes the proof of the theorem.
Appendix B Proof of Theorem 4
Proof [Proof of Theorem 4] We begin with the deviation bound for a fixed $f \in \mathcal{F}$ and $w \in \mathcal{W}$ as before, but use the regret random variable $w(z)\big(\ell(f, z) - \ell(f_w, z)\big)$ this time. Then by Lemma 5, we have
$$\mathbb{E}_D\Big[w(z)^2\big(\ell(f, z) - \ell(f_w, z)\big)^2\Big] \;\le\; 4 B\, \mathbb{E}_D\Big[w(z)\,\big(f(x) - f_w(x)\big)^2\Big] \;\le\; 4 B\, \big(R_w(f) - R_w(f_w)\big),$$
where the first inequality uses the boundedness of $w$, $f$ and $y$, and the second applies Lemma 5 to the distribution reweighted by $w$.
Thus we see that for any fixed $f \in \mathcal{F}$ and $w \in \mathcal{W}$, we have with probability at least $1 - \delta$ a Bernstein-type deviation bound for the empirical regret $\hat{R}_w(f) - \hat{R}_w(f_w)$ around $R_w(f) - R_w(f_w)$, in which the variance term is controlled by the regret itself through the display above, and a free parameter trades off the variance and range contributions.
Hence, with probability at least $1 - \delta$, we have for a fixed $f \in \mathcal{F}$ and $w \in \mathcal{W}$ a bound on the population regret in terms of the empirical regret, and vice versa. In particular, choosing $f = f_w$ in the first inequality, we see that
$$\hat{R}_w(f_w) - \hat{R}_w(\hat{f}_w) \;\lesssim\; B\, \kappa_n. \qquad (15)$$
Using identical arguments as in the proof of Theorem 2, we can turn both statements into uniform bounds over all $f \in \mathcal{F}$ and $w \in \mathcal{W}$. Combining the uniform deviation bound with the fact that the worst-case empirical regret of $\hat{f}$ is no larger than that of any $f \in \mathcal{F}$, by the definition of the objective (6), and controlling the remaining empirical terms through Equation (15), we obtain the claimed fast-rate bound. Taking an infimum over $f \in \mathcal{F}$ and optimizing the free parameter completes the proof.
We now prove Lemma 5.
B.1 Proof of Lemma 5
Since $\mathcal{F}$ is convex, if $f \in \mathcal{F}$ and $f_P \in \mathcal{F}$, then $f_\alpha := (1 - \alpha) f_P + \alpha f \in \mathcal{F}$ for $\alpha \in [0, 1]$. Then using $R_P(f_\alpha) \ge R_P(f_P)$, we have for any distribution $P$ and $f \in \mathcal{F}$:
$$0 \;\le\; R_P(f_\alpha) - R_P(f_P) \;=\; \alpha^2\, \mathbb{E}_P\big[(f(x) - f_P(x))^2\big] + 2\alpha\, \mathbb{E}_P\big[(f_P(x) - y)(f(x) - f_P(x))\big].$$
Since this holds for any $\alpha \in (0, 1]$, we divide by $\alpha$ and take the limit $\alpha \to 0$ to conclude that for all $f \in \mathcal{F}$
$$\mathbb{E}_{(x, y) \sim P}\big[(f_P(x) - y)\,(f(x) - f_P(x))\big] \;\ge\; 0. \qquad (16)$$
This inequality further allows us to conclude that for any $f \in \mathcal{F}$,
$$R_P(f) - R_P(f_P) \;=\; \mathbb{E}_P\big[(f(x) - f_P(x))^2\big] + 2\, \mathbb{E}_P\big[(f_P(x) - y)(f(x) - f_P(x))\big] \;\ge\; \mathbb{E}_P\big[(f(x) - f_P(x))^2\big],$$
which is the claimed bound.
B.2 Proof of Proposition 6
We consider a simple problem where there is a class consisting of two distributions that we want to do well under. Each distribution is such that takes the form where and is with probability . The function class for a constant that will be appropriately chosen in the proof. Let be the difference of means. Let us choose to be supported over so that where with probability 0.5 and with with probability 0.5. We set if and similarly if . Let us define for . Since is equivalent to with in this example, we stick to using the notation for the rest of the proof for consistency with our notation throughout. Let us use , to denote the empirical and expected risks on samples from (equivalently ). Then a direct calculation shows that
where with being the noise realization in . Then for sufficiently large , we see that
and the best worst case risk is given by . Now let us examine the empirical situation. We have
Let us denote as the minimizer of . Let us condition on the event where:
(17) |
Under this event, since both empirical risks are continuous functions of the candidate estimate, they are equal for some candidate in between. So conditioned on this event, we seek a solution to
so that we get
Plugging the minimizer back in, we obtain the best worst case population risk as
Recalling that , we further obtain
In order to complete the lower bound on this regret term, we now analyze the random variable , and we choose . Note that from the definitions of and , we have with probability at least , for :
Under this event, we have and . Consequently, under this event
Choosing , we further have
where we have used for . Note that we have also not set the constant defining so far. In order for the unconstrained minimizations above to happen for , we need . This means that and . The latter is satisfied under our event when .
Finally we recall that we have conditioned so far on the event defined in Equation 17. We now examine under the event, and focus on the first condition in (17). We have
under the probability event. Similar reasoning also shows that in this event, so that we get the conclusion of the proposition.
Appendix C Proofs for Section 5
We start with a technical lemma about the concentration of empirical estimates of the second moment of importance weights, which is required for Theorem 7.
Lemma 13
Under Assumption 2, we have with probability at least $1 - \delta$, for all $w \in \mathcal{W}$:
Proof  Consider the non-negative random variable $w(z)^2$, so that $0 \le w(z)^2 \le B^2$ with probability 1 when $w \in \mathcal{W}$. Then we have
Then for a fixed $w \in \mathcal{W}$, we have by Bernstein's inequality, with probability at least $1 - \delta$:
Rearranging terms and taking a union bound over $\mathcal{W}$ completes the proof.
We are now ready to prove Theorem 7.
C.1 Proof of Theorem 7
Recall the definition $\hat{\sigma}_w^2 = \frac{1}{n}\sum_{i=1}^n w(z_i)^2$. Plugging the result of Lemma 13 into Equation (13), we see that with probability at least $1 - \delta$, for a fixed $f \in \mathcal{F}$ and $w \in \mathcal{W}$ we have:
Now following the proof of Theorem 2, with probability at least $1 - \delta$, we have for all $f \in \mathcal{F}$ and $w \in \mathcal{W}$:
Now dividing through by $\hat{\sigma}_w$, we see that for all $f \in \mathcal{F}$ and $w \in \mathcal{W}$, we have with probability at least $1 - \delta$:
where the second inequality follows from the definition of SMRO as the empirical optimizer of the objective (10). As a result, we have for any $w \in \mathcal{W}$, with probability at least $1 - \delta$:
C.2 Proof of Theorem 8
We will be terse as the proof is largely a combination of the proofs of Theorems 4 and 7. Proceeding as in the proof of Theorem 4, we see that with probability at least $1 - \delta$, we have for all $f \in \mathcal{F}$ and $w \in \mathcal{W}$:
and
for all $w \in \mathcal{W}$.
Dividing through by $\sigma_w$ as before and taking a supremum over $w \in \mathcal{W}$, we obtain with probability at least $1 - \delta$:
Now following the remaining proof of Theorem 7 gives the desired bound.
Appendix D Well-specified linear regression with covariate shift
Let us consider a special case of Theorem 8, where $\mathcal{F}$ is the class of linear prediction functions $f_\theta(x) = \theta^\top x$ with unit norm weights, and the data satisfies a corresponding norm bound on $x$. We further assume that $y = \theta_\star^\top x + \omega$, where $f_{\theta_\star} \in \mathcal{F}$ and $\omega$ is zero-mean noise bounded by some constant (this just causes additional scaling with the noise level in the bounds of Theorems 4 and 8). Suppose further that the covariance $\Sigma = \mathbb{E}_D[x x^\top]$ is full rank with the smallest eigenvalue equal to $\lambda_{\min} > 0$. Let $\hat{\theta}$ be the ordinary least squares estimator, given $n$ samples from $D$. Then it is well-known that with probability at least $1 - \delta$,
$$\mathbb{E}_{x \sim D}\Big[\big(\hat{\theta}^\top x - \theta_\star^\top x\big)^2\Big] \;\le\; O\bigg( \frac{d + \log(1/\delta)}{n} \bigg), \qquad (18)$$
where $d$ is the dimension of $x$. Consequently, we have for any other distribution $Q$ over $x$:
where the inequality uses the bound (18) and the fact that the smallest eigenvalue of $\Sigma$ is at least $\lambda_{\min}$. That is, doing OLS under $D$ yields a strong guarantee for all target distributions $Q$, since we get pointwise accurate predictions in this scenario. However, directly applying the results of Theorem 4 would still have additional scaling with the importance weight bound $B$ for any target distribution with importance weights $w$. On the other hand, let us consider the bound of Corollary 9 for any class $\mathcal{W}$ such that the constant weight $w \equiv 1$ is in $\mathcal{W}$, so that we always include $D$ in our class of target distributions. Since the problem is well-specified and $f_{\theta_\star}$ is the shared minimizer across all $w \in \mathcal{W}$ in this case, the second bound of the corollary yields with probability at least $1 - \delta$:
This is comparable to the prediction error bound in (18) for ERM on $D$, incurring only an additional logarithmic term in comparison. Note that ERM is asymptotically efficient in this setting, so we cannot expect to do better, and we suffer only a small penalty for our worst-case robustness. The approach of learning on $D$ alone is of course not robust to misspecification of the linear regression assumption. The guarantee of Theorem 4, in contrast, incurs a bound that scales with the worst-case importance weight bound, which can be significantly worse.
Appendix E Proofs for Section 6
It is clearly seen that the updates of $Q_t$ correspond to the exponentiated gradient (EG) algorithm applied to the linear objective $\mathbb{E}_{w \sim Q}\big[\hat{R}_w(f_t) - \hat{R}_w(\hat{f}_w)\big]$ at round $t$, where the negative sign arises since the EG algorithm is designed for minimization problems, while we apply it to a maximization problem. The regret guarantee for EG, specifically Corollary 2.14 of Shalev-Shwartz [2012], states that for any $Q \in \Delta(\mathcal{W})$
Multiplying through by $1/T$ and substituting our choice of the step size $\eta$ gives
Now recalling the definition of the averaged distribution $\bar{Q}$, following the proof technique of Freund and Schapire [1996] gives that
where the final step follows from the best response property of $f_t$ in the updates (12).