Bridging Multicalibration and Out-of-distribution Generalization Beyond Covariate Shift
Abstract
We establish a new model-agnostic optimization framework for out-of-distribution generalization via multicalibration, a criterion that ensures a predictor is calibrated across a family of overlapping groups. Multicalibration has been shown to be associated with robustness of statistical inference under covariate shift. We further establish a link between multicalibration and robustness for prediction tasks both under and beyond covariate shift. We accomplish this by extending multicalibration to incorporate grouping functions that consider covariates and labels jointly. This leads to an equivalence between extended multicalibration and invariance, an objective for robust learning in the presence of concept shift. We further show that the grouping function class has a linear structure spanned by density ratios, yielding a unifying framework for robust learning via the design of specific grouping functions. We propose MC-Pseudolabel, a post-processing algorithm that achieves both extended multicalibration and out-of-distribution generalization. The algorithm, with lightweight hyperparameters and optimization through a series of supervised regression steps, achieves superior performance on real-world datasets with distribution shift.
1 Introduction
We revisit the problem of out-of-distribution generalization and establish new connections with multicalibration (Hébert-Johnson et al., 2018), a criterion originating from algorithmic fairness. Multicalibration is a strengthening of calibration, which requires a predictor f to be correct on average within each of its level sets: E[Y | f(X) = v] = v for every value v in the range of f.
Calibration is a relatively weak property, as it can be satisfied even by the uninformative constant predictor that always predicts the average outcome. More broadly, calibration provides only a marginal guarantee that does not extend to sub-populations. Multicalibration (Hébert-Johnson et al., 2018) mitigates this issue by requiring calibration to hold over a family of (overlapping) subgroups: for every subgroup S in the family and every value v, E[Y | f(X) = v, X ∈ S] = v.
Multicalibration was initially studied as a measure of subgroup fairness for boolean grouping functions, with a value of one indicating that a sample is a member of the group (Hébert-Johnson et al., 2018). Subsequently, Gopalan et al. (2022) and Kim et al. (2019) adopt a broader class of real-valued grouping functions that can identify sub-populations through reweighting. The formulation with real-valued grouping functions has enabled surprising connections between multicalibration and distribution shifts. Prior work (Kim et al., 2022; Roth, 2022) studied how distribution shift affects the measure of multicalibration, with a focus on covariate shift, where the relationship between covariates and outcomes remains fixed. Kim et al. (2022) show that whenever the set of real-valued grouping functions includes the density ratio between the source and target distributions, a multicalibrated predictor with respect to the source remains calibrated on the shifted target distribution.
Our work substantially expands the connections between multicalibration and distribution shifts. At a high level, our results show that robust prediction under distribution shift can actually be facilitated by multicalibration. We extend the notion of multicalibration by incorporating grouping functions that simultaneously consider both covariates and outcomes. This extension enables us to go beyond covariate shift and account for concept shift, which is prevalent in practice due to spurious correlations, missing variables, or confounding (Liu et al., 2023).
Our contributions. Based on the introduction of joint grouping functions, we establish new connections between our extended multicalibration notion and algorithmic robustness in the general setting of out-of-distribution generalization, where the target distribution on which the model is assessed differs from the source distribution on which the model is learned.
1. We first revisit the setting of covariate shift and show that multicalibration implies Bayes optimality under covariate shift, provided the class of grouping functions is sufficiently rich. Then, in the setting of concept shift, we show the equivalence of multicalibration and invariance (Arjovsky et al., 2019), a learning objective that searches for a predictor that is Bayes optimal on top of a representation of the covariates, even though the conditional distribution of the outcome differs across target distributions. We show a correspondence between an invariant representation and a multicalibrated predictor, with a grouping function class containing all density ratios between the target distributions and the source distribution.
2. As part of our structural analysis of the new multicalibration concept, we investigate the maximal grouping function class that allows for a nontrivial multicalibrated predictor. For traditional covariate-based grouping functions, the Bayes optimal predictor is always multicalibrated, which is no longer the case for joint grouping functions. We show that the maximal grouping function class is a linear space spanned by the density ratios of the target distributions on which the predictor is invariant. As a structural characterization of distribution shift, this leads to an efficient parameterization of the grouping functions as linear combinations of a spanning set of density ratios. The spanning set can be flexibly designed to incorporate implicit assumptions of various methodologies for robust learning, including multi-environment learning (Shi et al., 2021) and hard sample learning (Liu et al., 2021).
3. We devise a post-processing algorithm that multicalibrates predictors and simultaneously produces invariant predictors. As a multicalibration algorithm, we prove its convergence under Gaussian data distributions and certify multicalibration upon convergence. As a robust learning algorithm, the procedure is plainly supervised regression with respect to the models' hypothesis class and the grouping function class, introducing only an overhead of linear regression. This stands in contrast to heavy optimization techniques for out-of-distribution generalization, such as bi-level optimization (Finn et al., 2017; Liu et al., 2022) and multi-objective learning (Ahuja et al., 2021; Arjovsky et al., 2019; Koyama and Yamaguchi, 2020), which typically involve high-order gradients (Ramé et al., 2022). The algorithm introduces no extra hyperparameters. This simplifies model selection, which is a significant challenge for out-of-distribution generalization since validation data is unavailable from the distribution where the model is deployed (Gulrajani and Lopez-Paz, 2021). Under the standard model selection protocol of DomainBed (Gulrajani and Lopez-Paz, 2021), the algorithm achieves superior performance to existing methods on real-world datasets with concept shift, including poverty estimation (Yeh et al., 2020), personal income prediction (Ding et al., 2021), and power consumption prediction (Malinin et al., 2021, 2022).
2 Multicalibration and Bayes Optimality under Covariate Shift
2.1 Multicalibration with Joint Grouping Functions
Settings of Out-of-distribution Generalization
We are concerned with prediction tasks under distribution shift, where covariates are denoted by a random vector X and the target by Y. Values taken by random variables are written in lowercase. We have an uncertainty set of absolutely continuous probability measures, containing an accessible source measure and unknown target measures. We use capital letters to denote a single probability measure and lowercase letters to denote its probability density function. We define predictors as real-valued functions of the covariates, learned on the source distribution and assessed on the target distribution. Given a loss function, we evaluate the average risk of a predictor w.r.t. a probability measure as the expected loss under that measure. We focus on the regression setting with squared loss in our theoretical analyses.
We propose a new definition of approximate multicalibration with joint grouping functions.
Definition 2.1 (Multicalibration with Joint Grouping Functions).
For a probability measure and a predictor , let be a real-valued grouping function class. We say that is -approximately multicalibrated w.r.t. and if for all :
(1)
is the pushforward measure. We say is -approximately calibrated for . We say is multicalibrated (calibrated) for . If the grouping function is defined on , which implies for any and , we abbreviate by .
In the special case of a constant grouping function, the multicalibration error recovers the overall calibration error. For boolean grouping functions defined on the covariates (Hébert-Johnson et al., 2018), it computes the calibration error of the corresponding subgroup. For real-valued grouping functions defined on the covariates (Gopalan et al., 2022; Kim et al., 2019), it evaluates a reweighted calibration error, whose weights are proportional to the likelihood of a sample belonging to the subgroup. Most importantly, we propose an extended domain of grouping functions defined on covariates and outcomes jointly, which is useful for capturing more general distribution shifts. The multicalibration error quantifies the maximal calibration error over all subgroups associated with grouping functions in the class. We discuss multicalibration with covariate-based grouping functions in this section and with joint grouping functions in the next section.
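To make the quantity concrete, the following sketch estimates a multicalibration error of this kind on a finite sample: predictions are split into bins that stand in for level sets, the weighted residual of each grouping function is averaged within each bin, and the worst grouping function determines the error. The binning scheme, the weighting by bin mass, and the function names are our own choices rather than the paper's reference implementation.

```python
import numpy as np

def multicalibration_error(x, y, f_pred, grouping_fns, n_bins=20):
    """Rough empirical estimate of a multicalibration error in the spirit of
    Definition 2.1: for each grouping function c, average the weighted residual
    c(x, y) * (y - v) within each level set (approximated by bins of the
    predictions), aggregate bins by their sample mass, and take the maximum
    over the grouping class."""
    edges = np.linspace(f_pred.min(), f_pred.max(), n_bins + 1)
    bin_idx = np.clip(np.digitize(f_pred, edges) - 1, 0, n_bins - 1)
    worst = 0.0
    for c in grouping_fns:
        w = c(x, y)                              # per-sample group weights
        err = 0.0
        for b in range(n_bins):
            mask = bin_idx == b
            if not mask.any():
                continue
            v = f_pred[mask].mean()              # representative level-set value
            err += mask.mean() * abs(np.mean(w[mask] * (y[mask] - v)))
        worst = max(worst, err)
    return worst
```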
2.2 Multicalibration Implies Bayes Optimality under Covariate Shift
In this subsection we focus on grouping functions defined on covariates. We prove that an approximately multicalibrated predictor simultaneously approaches Bayes optimality in every target distribution under covariate shift, bridging the results of Kim et al. (2022) and Globus-Harris et al. (2023). To recap, Kim et al. (2022) study multicalibration under covariate shift and show that a multicalibrated predictor remains calibrated on the target distribution for a sufficiently large grouping function class. Further, multicalibrated predictors remain multicalibrated under covariate shift (Kim et al., 2022; Roth, 2022), assuming the grouping function class is closed under transformation by density ratios (Assumption 2.2.1). Second, Globus-Harris et al. (2023) show that multicalibration implies Bayes optimal accuracy, assuming the grouping function class satisfies a weak learning condition (Assumption 2.2.2). Detailed discussion of other related work is deferred to section A in the appendix.
Assumption 2.2 (Sufficiency of Grouping Function Class (informal, see Assumption F.1)).
1. (Closure under Covariate Shift) For a set of probability measures containing the source measure, whenever a grouping function belongs to the class, its product with the covariate density ratio between any distribution in the set and the source also belongs to the class.
2. (Weak Learning Condition) For any distribution in the set, and every subset with positive measure, if the Bayes optimal predictor has lower risk than the constant predictor by a margin, there exists a function in the grouping class that is also better than the constant predictor by a (possibly smaller) margin.
Theorem 2.3 (Risk Bound under Covariate Shift).
For a source measure and a set of probability measures containing it, given a predictor with finite range, consider a grouping function class closed under affine transformation and satisfying Assumption 2.2. If the predictor is approximately multicalibrated w.r.t. the grouping function class and the source, then for any target measure,
(2)
3 Multicalibration and Invariance under Concept Shift
Theorem 2.3 shows that multicalibration implies Bayes optimal accuracy for target distributions under covariate shift. However, in practical scenarios there are both marginal distribution shifts of the covariates and concept shifts of the conditional distribution of the outcome given the covariates. Concept shift is especially prevalent in tabular data due to missing variables and confounding (Liu et al., 2023). In order to go beyond covariate shift, we focus on grouping functions defined on covariates and outcomes jointly. We show that the multicalibration notion w.r.t. joint grouping functions is equivalent to invariance, a criterion for robust prediction under concept shift. Extending the robustness of multicalibration to general shift is non-trivial: the fundamental challenge is that no shared predictor is optimal in every target distribution, because the Bayes optimal predictor varies across distributions. As a first step, we show that predictors multicalibrated w.r.t. joint grouping functions are robust in that they are optimal over all post-processing functions in each target distribution.
Theorem 3.1 (Risk Bound under Concept Shift).
For a set of absolutely continuous probability measures containing the source measure , consider a predictor . Assume the grouping function class satisfies the following condition:
(3)
If is -approximately multicalibrated w.r.t. and , then for any measure ,
(4)
The theorem shows that an approximately multicalibrated predictor on the source distribution can hardly be improved by post-processing on any target distribution. To ensure such robustness, the grouping function class must include all density ratios between the target and source measures, which are functions of covariates and outcomes jointly. This characterization of robustness in terms of post-processing echoes Invariant Risk Minimization (IRM) (Arjovsky et al., 2019), a paradigm for out-of-distribution generalization; however, their analysis focuses on representation learning.
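In symbols, the condition just described can be written schematically as follows, where P is the source measure with density p, Q a target measure in the uncertainty set with density q, and the right-hand side is the grouping function class; the symbols are our own shorthand and this is a paraphrase of the condition in Equation 3 rather than its verbatim statement.

```latex
\left\{\, (x,y) \mapsto \frac{q(x,y)}{p(x,y)} \;:\; Q \in \mathcal{P} \right\} \subseteq \mathcal{C}
```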
Definition 3.2 (Invariant Predictor).
Consider data selected from multiple environments in the set where the probability measure in an environment is denoted by . Denote the representation over covariates by a measurable function . We say that elicits an -approximately invariant predictor across if there exists a function such that for all :
(5)
Remark 3.3.
(1) Predictors in this definition take a representation extracted from the covariates as input. For a general predictor, if we take the representation to be the identity function, Equation 5 reduces to the form of Equation 4. Therefore, the predictor in Equation 4 is an approximately invariant predictor across environments collected from the uncertainty set. (2) We give an approximate definition of invariant predictors, which recovers the original definition (Arjovsky et al., 2019) when the approximation error vanishes. In this case, there exists a shared Bayes optimal predictor across environments, taking the representation as input. This implies that the conditional expectation of the outcome given the representation is identical almost surely across environments.
IRM searches for a representation such that the optimal predictors built on the representation are invariant across environments. Motivated by causality, the relationship between outcomes and their causes is also assumed invariant, so IRM learns a representation of causal variables for stable prediction. We extend Theorem 3.1 to representation learning and prove an equivalence between multicalibrated and invariant predictors.
Theorem 3.4 (Equivalence of Multicalibration and Invariance).
Assume samples are drawn from an environment with a prior such that and . The overall population satisfies where is the environment-specific absolutely continuous measure. With a measurable function , define a function class as:
(6)
1. If there is a bijection such that is -approximately multicalibrated w.r.t. and , then elicits an -approximately invariant predictor across .
2. If there is such that elicits an -approximately invariant predictor across , then is -approximately multicalibrated w.r.t. and , where .
Remark 3.5.
(1) In the first statement, assuming the map is a bijection avoids degenerate cases where the representation contains redundant information; for example, every predictor built on one representation might coincide with a predictor built on another. Confining to bijections ensures a unique decomposition into predictors and representations. (2) Wald et al. (2021) prove an equivalence between exact invariance and simultaneous calibration in each environment. We strengthen their result to show that multicalibration on a single source distribution suffices for invariance. Moreover, our results extend directly beyond their multi-environment setting to a general uncertainty set of target distributions, via the mapping between grouping functions and density ratios. Further, our theorem is established for both exact and approximate invariance.
The theorem bridges approximate multicalibration with approximate invariance for out-of-distribution generalization beyond covariate shift. The equivalence property indicates that the density ratios of target and source distributions constitute the minimal grouping function class required for robust prediction in terms of invariance.
4 Structure of Grouping Function Classes
Section 3 suggests constructing richer grouping function classes for stronger generalizability. However, fewer predictors are multicalibrated with respect to a richer function class. For covariate-based grouping functions, the Bayes optimal predictor is always multicalibrated, but the existence of a multicalibrated predictor is nontrivial for joint grouping functions. For example, for a grouping function that depends directly on the outcome, multicalibration can force the predictor to match the outcome exactly, which is impossible for regression with label noise. In this section, we first study the maximal grouping function class for which a multicalibrated predictor exists. Then, we leverage our structural results to inform the design of grouping functions.
4.1 Maximal Grouping Function Space
We focus on continuous grouping functions defined on a compact set , i.e., , and consider absolutely continuous probability measures supported on with continuous density functions. Our first proposition shows that the maximal grouping function class for any predictor is a linear space.
Proposition 4.1 (Maximal Grouping Function Class).
Given an absolutely continuous probability measure and a predictor, define the maximal grouping function class with respect to which the predictor is multicalibrated:
(7)
Then is a linear space.
In the following, we further analyze the spanning set of maximal grouping function classes for nontrivial predictors which are at least calibrated.
Theorem 4.2 (Spanning Set).
Consider an absolutely continuous probability measure and a calibrated predictor . Then its maximal grouping function class is given by:
(8)
A predictor's maximal grouping function class is spanned by the density ratios of the target distributions on which the predictor is invariant. Correspondingly, Theorem 3.1 gives the minimal grouping function class, comprising density ratios between target and source distributions, required to ensure the predictor is invariant. In contrast, Theorem 4.2 states that the maximal grouping function class is exactly the linear space spanned by those density ratios. Next, we further investigate sub-structures of the maximal grouping function class, focusing on the representation learning setting of IRM.
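Schematically, the spanning-set characterization reads as follows, where invariance of the predictor f under a target measure Q is expressed through the conditional expectation; the notation is our own shorthand for the statement of Theorem 4.2, not its verbatim Equation 8.

```latex
\mathcal{C}_{\max}(f, P) \;=\; \operatorname{span}\left\{\, \frac{q(x,y)}{p(x,y)} \;:\; Q \ll P,\;\; \mathbb{E}_{Q}\big[\, Y \mid f(X) \,\big] = f(X)\ \text{a.s.} \right\}
```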
Theorem 4.3 (Decomposition of Grouping Function Space).
Consider an absolutely continuous probability measure and a measurable function with . We define the Bayes optimal predictor over as . We abbreviate with . Then can be decomposed as a Minkowski sum of .
(9)
(10) |
1. If a predictor is multicalibrated with , then .
2. is an invariant predictor elicited by across a set of environments where for any . If a predictor is multicalibrated with , then is also an invariant predictor across elicited by some representation.
Remark 4.4.
and contain functions defined on which can both be rewritten as functions on by variable substitution. Thus, are still subspaces of grouping functions. is spanned by the density ratio of where the Bayes optimal predictor over must be invariant on the distribution of . is spanned by general density ratio of .
Multicalibration w.r.t. one subspace ensures at least the accuracy of the Bayes optimal predictor over the representation, and multicalibration w.r.t. the other ensures at least the invariance of this predictor. However, we show in Proposition F.20 that the sizes of the two subspaces are negatively correlated: when the representation is a variable selector, one subspace expands with more selected covariates while the other shrinks. By choosing a combination of the two, we strike a balance between accuracy and invariance of the multicalibrated predictor.
Proposition 4.5 (Monotonicity).
Consider which could be sliced as and . Define . and are similarly defined. We have:
1.
2.
is a constant value function.
4.2 Design of Grouping Function Classes
The objective of a robust learning method can be represented by a tuple consisting of an assumption about the extent of distribution shift and a metric of robustness. Multicalibration is equivalent to invariance as a metric of robustness, while the grouping function class provides a unifying view of assumptions about potential distribution shift. Given any uncertainty set of target distributions, Theorem 4.2 implies an efficient and reasonable construction of grouping functions as linear combinations of density ratios induced by the uncertainty set. We implement two designs of grouping functions, for learning settings with and without environment annotations respectively.
From Environments If samples are drawn from multiple environments and environment annotations are available, we take the uncertainty set to be the union of the individual environments' distributions. This completely recovers IRM's objective, but we approach it with a different optimization technique in the next section. Taking the pooled data as the source, the density ratios spanning the grouping function class are the ratios between each environment's density and the pooled density, which can be estimated by an environment classifier. A grouping function is then represented as a linear combination of these ratios:
(11) |
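A minimal sketch of this construction, assuming the density ratios are obtained from an environment classifier fit on covariates and outcomes jointly: by Bayes' rule, the ratio between environment e's density and the pooled density equals P(e | x, y) / P(e). The classifier choice, the function names, and the integer environment coding are our assumptions, not the released implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def environment_grouping_fns(x, y, env):
    """Spanning set of grouping functions from environment annotations.

    x: (n, d) covariates, y: (n,) outcomes, env: (n,) integer labels in {0..E-1}.
    Fits a classifier for P(e | x, y); the density ratio p_e(x, y) / p(x, y)
    equals P(e | x, y) / P(e) by Bayes' rule, giving one ratio per environment.
    Returns a list of callables c_e(x, y) -> (n,) weights whose linear
    combinations form the grouping function class of Equation 11.
    """
    clf = LogisticRegression(max_iter=1000).fit(np.column_stack([x, y]), env)
    priors = np.bincount(env) / len(env)

    def make_ratio(e):
        def c(x_new, y_new):
            post = clf.predict_proba(np.column_stack([x_new, y_new]))
            return post[:, e] / priors[e]
        return c

    return [make_ratio(e) for e in range(len(priors))]
```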
From Hard Samples When data contain latent sub-populations without annotations, the uncertainty set can be constructed by identifying those sub-populations. Hard sample learning (Li et al., 2021; Lin et al., 2020; Liu et al., 2021) suggests that risk is an indicator of sub-population structure: samples from the minority sub-population are more likely to incur high risks. For example, JTT (Liu et al., 2021) identifies the minority subgroup by thresholding the risk of a trained predictor. We instead adopt a continuous grouping in which the minority sub-population's density grows with the risk of the trained predictor. We construct the uncertainty set as the union of the source and the minority sub-population, resulting in a grouping function represented as:
(12) |
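An analogous sketch for the single-environment setting, assuming (as one continuous relaxation of JTT) that the minority sub-population's density ratio grows with the squared residual of a reference ERM model; the exact weighting behind Equation 12 may differ.

```python
import numpy as np

def hard_sample_grouping_fns(erm_model):
    """Grouping functions spanned by the source itself (constant ratio of one)
    and a hard-sample sub-population whose density ratio is taken to be
    proportional to the squared residual of a reference ERM model."""
    def minority_ratio(x, y):
        resid = (y - erm_model.predict(x)) ** 2
        return resid / resid.mean()      # normalize to mean one, like a density ratio

    def source_ratio(x, y):
        return np.ones(len(y))

    return [source_ratio, minority_ratio]
```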
Another design, based on the uncertainty set assumption of Distributionally Robust Optimization (Duchi et al., 2019), is given in section B.
5 MC-PseudoLabel: An Algorithm for Extended Multicalibration
In this section, we introduce an algorithm for multicalibration with respect to joint grouping functions. Simultaneously, the algorithm provides a new optimization paradigm for invariant prediction under distribution shift. The algorithm, called MC-PseudoLabel, post-processes a trained model by supervised learning with pseudolabels generated by grouping functions. As shown in Algorithm 1, given a predictor function class and a dataset with an empirical distribution, a regression oracle returns the empirical risk minimizer over the class. We take as input a model, possibly trained by Empirical Risk Minimization, whose output has a finite range following conventions of prior work on multicalibration (Globus-Harris et al., 2023). For continuous predictors, we discretize the model output, introducing a small rounding error (see section C). In each iteration, the algorithm performs regression with the grouping functions on each level set of the model. The predictions of the grouping functions rectify the uncalibrated model and serve as pseudolabels for model updates.
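The sketch below illustrates our reading of this procedure, since Algorithm 1 itself is not reproduced here: within every level set of the current (discretized) model, the outcome is regressed on the grouping functions, the fitted values serve as pseudolabels, and the model is refit on those pseudolabels; the loop stops once the model barely changes, in line with Theorem 5.1. The function signature, the stopping rule, and the handling of tiny level sets are simplifications on our part.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def mc_pseudolabel(fit_model, model, x, y, grouping_fns, n_rounds=50, tol=1e-4):
    """Post-processing sketch in the spirit of MC-PseudoLabel.

    fit_model: callable(x, targets) -> fitted predictor with .predict(x)
    model:     initial predictor with a finite (discretized) range
    grouping_fns: list of callables c(x, y) -> (n,) values
    """
    feats = np.column_stack([c(x, y) for c in grouping_fns])  # grouping features
    for _ in range(n_rounds):
        pred = model.predict(x)
        pseudo = pred.copy()
        # Regress the outcome on the grouping functions within each level set.
        for v in np.unique(pred):
            mask = pred == v
            if mask.sum() < 2:
                continue                                      # keep tiny level sets as-is
            reg = LinearRegression().fit(feats[mask], y[mask])
            pseudo[mask] = reg.predict(feats[mask])           # level-set pseudolabels
        # Project the pseudolabels back into the model class by supervised regression.
        new_model = fit_model(x, pseudo)
        shift = np.mean((new_model.predict(x) - pred) ** 2)
        model = new_model
        if shift < tol:       # small update suggests approximate multicalibration
            break
    return model
```

Because the per-level-set regressions are independent, they can be run in parallel, which is the source of the lightweight overhead discussed in section D.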
Since we regress with grouping functions defined on covariates and outcomes jointly, a poor design of groupings that violates Theorem 4.2 can produce trivial outputs. For example, if the grouping functions contain the outcome itself, no multicalibrated predictor exists, and the algorithm is driven toward a degenerate output that merely reproduces the labels. However, the algorithm certifies a multicalibrated output whenever it converges.
Theorem 5.1 (Certified Multicalibration).
In Algorithm 1, for , if , the output is -approximately multicalibrated w.r.t. .
MC-PseudoLabel reduces to LSBoost (Globus-Harris et al., 2023), a boosting algorithm for multicalibration, if the class contains only covariate-based grouping functions. In this case, Line 14 of Algorithm 1 reduces to a direct model update, since the pseudolabel does not depend on the outcome. For joint grouping functions, the pseudolabel does depend on the outcome, so we project it into the model space by learning the model with the pseudolabels as regression targets. The projection substantially changes the optimization dynamics. LSBoost monotonically decreases the risk of the model across iterations; the projection step disrupts this monotonicity, implying that MC-Pseudolabel can output a predictor with a higher risk than its input. This is because multicalibration with joint grouping functions implies a balance between accuracy and invariance, as discussed in Theorem 4.3. The convergence of LSBoost relies on the monotonicity of risks, which no longer applies to MC-Pseudolabel. We therefore study the algorithm's convergence in the context of representation learning. Assume we are given a grouping function class built on a latent representation. If a predictor is multicalibrated w.r.t. each of two sub-classes separately, then it is also multicalibrated w.r.t. their combination, so we study convergence separately for the two grouping function classes. In Proposition F.26, we show convergence for the subset consisting of covariate-based grouping functions, which is a corollary of Globus-Harris et al.'s result. As a greater challenge, we derive convergence for the joint grouping functions when the data follows a multivariate normal distribution.
Theorem 5.2 (Convergence (informal, see Theorem F.27)).
Consider with . Assume that follows a multivariate normal distribution where the random variables are in general position such that is positive definite. For any distribution supported on , take the grouping function class and the predictor class . For an initial predictor , run without rounding, then as with a convergence rate of where .
MC-Pseudolabel is also an optimization paradigm for invariance: certified multicalibration in Theorem 5.1 implies certified invariance. Furthermore, MC-Pseudolabel introduces no extra hyperparameters to trade off between risk and robustness. Both certified invariance and lightweight hyperparameters simplify model selection, which is challenging for out-of-distribution generalization because validation data from target distributions is unavailable (Gulrajani and Lopez-Paz, 2021). MC-Pseudolabel also has lightweight optimization consisting of a series of supervised regression steps. It introduces an overhead relative to Empirical Risk Minimization by performing regression on level sets, but the extra burden is only linear regression when the grouping function class is designed as a linear space. Furthermore, the regressions on different level sets can be parallelized. Computational complexity is analyzed further in section D.
6 Experiments
6.1 Settings
We benchmark MC-Pseudolabel on real-world regression datasets with distributional shift. We adopt two experimental settings. For the multi-environment setting, algorithms are provided with training data collected from multiple annotated environments. Thereafter, the trained model is assessed on new environments. For the single-environment setting, algorithms are trained on a single source distribution. There could be latent sub-populations in training data, but environment annotations are unavailable. The trained model is assessed on a target dataset with distribution shift from the source.
Datasets We experiment on PovertyMap (Yeh et al., 2020) and ACSIncome (Ding et al., 2021) for the multi-environment setting, and VesselPower (Malinin et al., 2022) for the single-environment setting. As the only regression task in WILDS (Koh et al., 2021), a popular benchmark for in-the-wild distribution shift, PovertyMap estimates a poverty index for different spatial regions from satellite images. Data are collected from both urban and rural regions, by which the environment is annotated. The test dataset also covers both environments, but is collected from different countries. The primary metric is Worst-U/R Pearson, the worst Pearson correlation of predictions across urban and rural regions. The other two datasets are tabular, where natural concept shift (i.e., a shift in the conditional distribution of the outcome given covariates) is more common due to missing variables and hidden confounders (Liu et al., 2023). ACSIncome (Ding et al., 2021) concerns personal income prediction with data collected from US Census sources across different US states. The task is usually converted to binary classification by an income threshold, but we take the raw data for regression. Environments are partitioned by occupations with similar average income. VesselPower comes from Shifts (Malinin et al., 2021, 2022), a benchmark focusing on regression tasks with real-world distributional shift. The objective is to predict the power consumption of a merchant vessel given navigation and weather data. Training and test data are sampled at different times and wind speeds, causing distribution shift between them.
Baselines For the multi-environment setting, baselines include ERM (Empirical Risk Minimization); methods for invariance learning, which mostly adopt multi-objective optimization: IRM (Arjovsky et al., 2019), MIP (Koyama and Yamaguchi, 2020), IB-IRM (Ahuja et al., 2021), CLOvE (Wald et al., 2021), MRI (Huh and Baidya, 2022), REX (Krueger et al., 2021), and Fishr (Ramé et al., 2022); an alignment-based method from domain generalization: IDGM (Shi et al., 2021); and Group DRO (Sagawa et al., 2019). Notably, CLOvE learns a calibrated predictor simultaneously on all environments, but it is optimized by multi-objective learning with a differentiable regularizer for calibration. For the single-environment setting, baselines include reweighting-based techniques: CVaR (Levy et al., 2020), JTT (Liu et al., 2021), and Tilted-ERM (Li et al., 2021); a Distributionally Robust Optimization method, -DRO (Duchi and Namkoong, 2019); and a data augmentation method, C-Mixup (Yao et al., 2022). Other methods are not included because they are specific to classification (Zhang et al., 2018, 2022) or require exposure to target-distribution data during training (Idrissi et al., 2022; Kirichenko et al., 2023). For all experiments, we also train an Oracle ERM on data sampled from the target distribution.
Implementation We implement the predictor with an MLP for ACSIncome and VesselPower, and with Resnet18-MS (He et al., 2016) for PovertyMap, following WILDS' default architectures. The grouping function class is implemented according to Equation 11 and Equation 12 for the multi-environment and single-environment settings respectively. We follow DomainBed's protocol (Gulrajani and Lopez-Paz, 2021) for model selection. Specifically, we randomly sample 20 sets of hyperparameters for each method, containing both the training hyperparameters and any extra hyperparameters of the robust learning algorithm. We select the best model across hyperparameters under three model selection criteria: in-distribution validation on the average of the training data, worst-environment validation using the worst performance across training environments, and oracle validation on target data. Oracle validation is not recommended by DomainBed, which allows only a limited number of accesses to target data. The entire run is repeated three times with different seeds to measure the standard errors of performance. For PovertyMap specifically, an OOD validation set is provided for oracle validation, and we perform 5-fold cross-validation instead of three repeated runs, following WILDS' setup.
6.2 Results
| Method | ACSIncome RMSE (ID) | ACSIncome RMSE (Worst) | ACSIncome RMSE (Oracle) | PovertyMap Worst-U/R Pearson (ID) | PovertyMap Worst-U/R Pearson (Worst) | PovertyMap Worst-U/R Pearson (Oracle) |
|---|---|---|---|---|---|---|
| ERM | | | | | | |
| IRM | | | | | | |
| MIP | | | | | | |
| IB-IRM | | | | | | |
| CLOvE | | | | | | |
| MRI | | | | | | |
| REX | | | | | | |
| Fishr | | | | | | |
| IDGM | | | | | | |
| GroupDRO | | | | | | |
| MC-Pseudolabel | | | | | | |
| Oracle ERM | | | | | | |
Results are shown in Table 1 for the multi-environment setting and Table 2 for the single-environment setting. MC-Pseudolabel achieves superior performance on all datasets under in-distribution and worst-environment validation, which do not access test data. Under oracle validation, MC-Pseudolabel achieves performance comparable to the best method. For example, CLOvE, which also learns invariance through calibration, achieves the best performance under oracle validation on PovertyMap, but it degrades sharply when target validation data is unavailable. This is because CLOvE tunes its regularizer coefficient to trade off against the ERM risk, and the optimal value depends on the target distribution shift.
| Method | VesselPower RMSE (ID) | VesselPower RMSE (Oracle) |
|---|---|---|
| ERM | | |
| CVaR | | |
| JTT | | |
| Tilted-ERM | | |
| -DRO | | |
| C-Mixup | | |
| MC-Pseudolabel | | |
| Oracle ERM | | |
In contrast, MC-Pseudolabel exhibits an advantage under in-distribution model selection. This is further supported by Figure 1, which shows that MC-Pseudolabel's out-of-distribution errors correlate strongly with its in-distribution errors. The experiment spans different hyperparameters and seeds with the same model architecture on VesselPower. This phenomenon, known as accuracy-on-the-line (Miller et al., 2021), is well documented for a broad class of models under covariate shift. However, Liu et al. (2023) show that accuracy-on-the-line does not hold under concept shift, which is the case for ERM and C-Mixup, posing a significant challenge for model selection. MC-Pseudolabel restores the accuracy-on-the-line behavior.
7 Conclusion
To conclude, we establish a new optimization framework for out-of-distribution generalization through extended multicalibration with joint grouping functions. While the current algorithm focuses on regression, there is potential for future work to extend our approach to general forms of tasks, particularly in terms of classification.
References
- Ahuja et al. (2021) Kartik Ahuja, Ethan Caballero, Dinghuai Zhang, Jean-Christophe Gagnon-Audet, Yoshua Bengio, Ioannis Mitliagkas, and Irina Rish. Invariance principle meets information bottleneck for out-of-distribution generalization. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 3438–3450, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/1c336b8080f82bcc2cd2499b4c57261d-Abstract.html.
- Arjovsky et al. (2019) Martín Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. CoRR, abs/1907.02893, 2019. URL http://arxiv.org/abs/1907.02893.
- Blasiok et al. (2023a) Jaroslaw Blasiok, Parikshit Gopalan, Lunjia Hu, and Preetum Nakkiran. A unifying theory of distance from calibration. In Barna Saha and Rocco A. Servedio, editors, Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, Orlando, FL, USA, June 20-23, 2023, pages 1727–1740. ACM, 2023a. doi: 10.1145/3564246.3585182. URL https://doi.org/10.1145/3564246.3585182.
- Blasiok et al. (2023b) Jaroslaw Blasiok, Parikshit Gopalan, Lunjia Hu, and Preetum Nakkiran. When does optimizing a proper loss yield calibration? In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023b. URL http://papers.nips.cc/paper_files/paper/2023/hash/e4165c96702bac5f4962b70f3cf2f136-Abstract-Conference.html.
- Creager et al. (2021) Elliot Creager, Jörn-Henrik Jacobsen, and Richard S. Zemel. Environment inference for invariant learning. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 2189–2200. PMLR, 2021. URL http://proceedings.mlr.press/v139/creager21a.html.
- Deng et al. (2023) Zhun Deng, Cynthia Dwork, and Linjun Zhang. Happymap : A generalized multicalibration method. In Yael Tauman Kalai, editor, 14th Innovations in Theoretical Computer Science Conference, ITCS 2023, January 10-13, 2023, MIT, Cambridge, Massachusetts, USA, volume 251 of LIPIcs, pages 41:1–41:23. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2023. doi: 10.4230/LIPICS.ITCS.2023.41. URL https://doi.org/10.4230/LIPIcs.ITCS.2023.41.
- Ding et al. (2021) Frances Ding, Moritz Hardt, John Miller, and Ludwig Schmidt. Retiring adult: New datasets for fair machine learning. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 6478–6490, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/32e54441e6382a7fbacbbbaf3c450059-Abstract.html.
- Duchi and Namkoong (2019) John C. Duchi and Hongseok Namkoong. Variance-based regularization with convex objectives. J. Mach. Learn. Res., 20:68:1–68:55, 2019. URL http://jmlr.org/papers/v20/17-750.html.
- Duchi and Namkoong (2021) John C Duchi and Hongseok Namkoong. Learning models with uniform performance via distributionally robust optimization. The Annals of Statistics, 49(3):1378–1406, 2021.
- Duchi et al. (2019) John C Duchi, Tatsunori Hashimoto, and Hongseok Namkoong. Distributionally robust losses against mixture covariate shifts. Under review, 2(1), 2019.
- Duchi et al. (2021) John C. Duchi, Peter W. Glynn, and Hongseok Namkoong. Statistics of robust optimization: A generalized empirical likelihood approach. Math. Oper. Res., 46(3):946–969, 2021. doi: 10.1287/MOOR.2020.1085. URL https://doi.org/10.1287/moor.2020.1085.
- Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1126–1135. PMLR, 2017. URL http://proceedings.mlr.press/v70/finn17a.html.
- Globus-Harris et al. (2023) Ira Globus-Harris, Declan Harrison, Michael Kearns, Aaron Roth, and Jessica Sorrell. Multicalibration as boosting for regression. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 11459–11492. PMLR, 2023. URL https://proceedings.mlr.press/v202/globus-harris23a.html.
- Gopalan et al. (2022) Parikshit Gopalan, Adam Tauman Kalai, Omer Reingold, Vatsal Sharan, and Udi Wieder. Omnipredictors. In Mark Braverman, editor, 13th Innovations in Theoretical Computer Science Conference, ITCS 2022, January 31 - February 3, 2022, Berkeley, CA, USA, volume 215 of LIPIcs, pages 79:1–79:21. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022. doi: 10.4230/LIPICS.ITCS.2022.79. URL https://doi.org/10.4230/LIPIcs.ITCS.2022.79.
- Gulrajani and Lopez-Paz (2021) Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=lQdXeXDoWtI.
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. IEEE Computer Society, 2016. doi: 10.1109/CVPR.2016.90. URL https://doi.org/10.1109/CVPR.2016.90.
- Hébert-Johnson et al. (2018) Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, and Guy N. Rothblum. Multicalibration: Calibration for the (computationally-identifiable) masses. In Jennifer G. Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 1944–1953. PMLR, 2018. URL http://proceedings.mlr.press/v80/hebert-johnson18a.html.
- Huh and Baidya (2022) Dongsung Huh and Avinash Baidya. The missing invariance principle found - the reciprocal twin of invariant risk minimization. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/91b482312a0845ed86e244adbd9935e4-Abstract-Conference.html.
- Idrissi et al. (2022) Badr Youbi Idrissi, Martín Arjovsky, Mohammad Pezeshki, and David Lopez-Paz. Simple data balancing achieves competitive worst-group-accuracy. In Bernhard Schölkopf, Caroline Uhler, and Kun Zhang, editors, 1st Conference on Causal Learning and Reasoning, CLeaR 2022, Sequoia Conference Center, Eureka, CA, USA, 11-13 April, 2022, volume 177 of Proceedings of Machine Learning Research, pages 336–351. PMLR, 2022. URL https://proceedings.mlr.press/v177/idrissi22a.html.
- Kim et al. (2019) Michael P. Kim, Amirata Ghorbani, and James Y. Zou. Multiaccuracy: Black-box post-processing for fairness in classification. In Vincent Conitzer, Gillian K. Hadfield, and Shannon Vallor, editors, Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES 2019, Honolulu, HI, USA, January 27-28, 2019, pages 247–254. ACM, 2019. doi: 10.1145/3306618.3314287. URL https://doi.org/10.1145/3306618.3314287.
- Kim et al. (2022) Michael P Kim, Christoph Kern, Shafi Goldwasser, Frauke Kreuter, and Omer Reingold. Universal adaptability: Target-independent inference that competes with propensity scoring. Proceedings of the National Academy of Sciences, 119(4):e2108097119, 2022.
- Kirichenko et al. (2023) Polina Kirichenko, Pavel Izmailov, and Andrew Gordon Wilson. Last layer re-training is sufficient for robustness to spurious correlations. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/pdf?id=Zb6c8A-Fghk.
- Koh et al. (2021) Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton Earnshaw, Imran S. Haque, Sara M. Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. WILDS: A benchmark of in-the-wild distribution shifts. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 5637–5664. PMLR, 2021. URL http://proceedings.mlr.press/v139/koh21a.html.
- Koyama and Yamaguchi (2020) Masanori Koyama and Shoichiro Yamaguchi. When is invariance useful in an out-of-distribution generalization problem? arXiv preprint arXiv:2008.01883, 2020.
- Krueger et al. (2021) David Krueger, Ethan Caballero, Jörn-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Rémi Le Priol, and Aaron C. Courville. Out-of-distribution generalization via risk extrapolation (rex). In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 5815–5826. PMLR, 2021. URL http://proceedings.mlr.press/v139/krueger21a.html.
- Levy et al. (2020) Daniel Levy, Yair Carmon, John C. Duchi, and Aaron Sidford. Large-scale methods for distributionally robust optimization. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/64986d86a17424eeac96b08a6d519059-Abstract.html.
- Li et al. (2021) Tian Li, Ahmad Beirami, Maziar Sanjabi, and Virginia Smith. Tilted empirical risk minimization. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=K5YasWXZT3O.
- Lin et al. (2020) Tsung-Yi Lin, Priya Goyal, Ross B. Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. IEEE Trans. Pattern Anal. Mach. Intell., 42(2):318–327, 2020. doi: 10.1109/TPAMI.2018.2858826. URL https://doi.org/10.1109/TPAMI.2018.2858826.
- Liu et al. (2021) Evan Zheran Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 6781–6792. PMLR, 2021. URL http://proceedings.mlr.press/v139/liu21f.html.
- Liu et al. (2022) Jiashuo Liu, Jiayun Wu, Bo Li, and Peng Cui. Distributionally robust optimization with data geometry. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/da535999561b932f56efdd559498282e-Abstract-Conference.html.
- Liu et al. (2023) Jiashuo Liu, Tianyu Wang, Peng Cui, and Hongseok Namkoong. On the need for a language describing distribution shifts: Illustrations on tabular datasets. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023.
- Malinin et al. (2021) Andrey Malinin, Neil Band, Yarin Gal, Mark J. F. Gales, Alexander Ganshin, German Chesnokov, Alexey Noskov, Andrey Ploskonosov, Liudmila Prokhorenkova, Ivan Provilkov, Vatsal Raina, Vyas Raina, Denis Roginskiy, Mariya Shmatova, Panagiotis Tigas, and Boris Yangel. Shifts: A dataset of real distributional shift across multiple large-scale tasks. In Joaquin Vanschoren and Sai-Kit Yeung, editors, Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual, 2021. URL https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/ad61ab143223efbc24c7d2583be69251-Abstract-round2.html.
- Malinin et al. (2022) Andrey Malinin, Andreas Athanasopoulos, Muhamed Barakovic, Meritxell Bach Cuadra, Mark J. F. Gales, Cristina Granziera, Mara Graziani, Nikolay Kartashev, Konstantinos Kyriakopoulos, Po-Jui Lu, Nataliia Molchanova, Antonis Nikitakis, Vatsal Raina, Francesco La Rosa, Eli Sivena, Vasileios Tsarsitalidis, Efi Tsompopoulou, and Elena Volf. Shifts 2.0: Extending the dataset of real distributional shifts. CoRR, abs/2206.15407, 2022. doi: 10.48550/ARXIV.2206.15407. URL https://doi.org/10.48550/arXiv.2206.15407.
- Miller et al. (2021) John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, and Ludwig Schmidt. Accuracy on the line: on the strong correlation between out-of-distribution and in-distribution generalization. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 7721–7735. PMLR, 2021. URL http://proceedings.mlr.press/v139/miller21b.html.
- Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019.
- Ramé et al. (2022) Alexandre Ramé, Corentin Dancette, and Matthieu Cord. Fishr: Invariant gradient variances for out-of-distribution generalization. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 18347–18377. PMLR, 2022. URL https://proceedings.mlr.press/v162/rame22a.html.
- Roth (2022) Aaron Roth. Uncertain: Modern topics in uncertainty estimation, 2022.
- Sagawa et al. (2019) Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks. In International Conference on Learning Representations, 2019.
- Shi et al. (2021) Yuge Shi, Jeffrey Seely, Philip HS Torr, N Siddharth, Awni Hannun, Nicolas Usunier, and Gabriel Synnaeve. Gradient matching for domain generalization. arXiv preprint arXiv:2104.09937, 2021.
- Sinha et al. (2018) Aman Sinha, Hongseok Namkoong, and John C. Duchi. Certifying some distributional robustness with principled adversarial training. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=Hk6kPgZA-.
- Wald et al. (2021) Yoav Wald, Amir Feder, Daniel Greenfeld, and Uri Shalit. On calibration and out-of-domain generalization. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 2215–2227, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/118bd558033a1016fcc82560c65cca5f-Abstract.html.
- Wang et al. (2022) Haoxiang Wang, Bo Li, and Han Zhao. Understanding gradual domain adaptation: Improved analysis, optimal path and beyond. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 22784–22801. PMLR, 2022. URL https://proceedings.mlr.press/v162/wang22n.html.
- Yao et al. (2022) Huaxiu Yao, Yiping Wang, Linjun Zhang, James Y. Zou, and Chelsea Finn. C-mixup: Improving generalization in regression. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/1626be0ab7f3d7b3c639fbfd5951bc40-Abstract-Conference.html.
- Yeh et al. (2020) Christopher Yeh, Anthony Perez, Anne Driscoll, George Azzari, Zhongyi Tang, David Lobell, Stefano Ermon, and Marshall Burke. Using publicly available satellite imagery and deep learning to understand economic well-being in africa. Nature Communications, 2020.
- Zhang et al. (2018) Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=r1Ddp1-Rb.
- Zhang et al. (2022) Michael Zhang, Nimit S Sohoni, Hongyang R Zhang, Chelsea Finn, and Christopher Re. Correct-n-contrast: a contrastive approach for improving robustness to spurious correlations. In International Conference on Machine Learning, pages 26484–26516. PMLR, 2022.
Appendix A Related Work
Multicalibration Multicalibration was first proposed by Hébert-Johnson et al. [2018] with binary grouping functions. Kim et al. [2019] and Gopalan et al. [2022] extend the grouping functions to real-valued functions. Globus-Harris et al. [2023] show that with a sufficiently rich class of real-valued grouping functions, multicalibration actually implies accuracy, and they provide a boosting algorithm for both regression and multicalibration. The connection between multicalibration and distribution shift is first studied by Kim et al. [2022], who prove that multicalibration is preserved under covariate shift, given a sufficiently large real-valued grouping function class. Kim et al. further show that under covariate shift, a multicalibrated predictor can perform statistical inference of the average outcome of a sample batch. In contrast, we derive a robustness result for individual prediction of outcomes by multicalibrated predictors. In addition, Wald et al. [2021] study the equivalence of Invariant Risk Minimization and simultaneous calibration on each environment. Our equivalence results for multicalibration can be viewed as a generalization of Wald et al.'s results beyond the multi-environment setting, obtained by deriving a mapping between density ratios and grouping functions. We also extend the equivalence to approximately multicalibrated and approximately invariant predictors. Furthermore, we move beyond Wald et al.'s multi-objective optimization with Lagrangian regularization by proposing a new post-processing optimization framework consisting of a series of supervised regression steps. Meanwhile, Blasiok et al. [2023b] discuss connections between calibration and post-processing, which is an equivalent expression of invariance. There are other extensions of multicalibration, such as Deng et al. [2023], who generalize the residual term in multicalibration's definition to a class of general functions, while our work is the first to generalize the grouping functions to consider the outcomes.
Out-of-distribution Generalization Beyond Covariate Shift Despite abundant literature on domain generalization focusing on image classification, where covariate shift dominates, research on algorithmic robustness for regression tasks beyond covariate shift is relatively limited. The setting can be categorized according to whether the source distribution is partitioned into several environments. For the multi-environment generalization setting, Invariant Risk Minimization and its variants assume that outcomes are generated by a common causal structural equation across all environments, and aim to recover such an invariant (or causal) predictor [Ahuja et al., 2021, Arjovsky et al., 2019, Huh and Baidya, 2022, Koyama and Yamaguchi, 2020, Krueger et al., 2021, Ramé et al., 2022]. Group DRO [Sagawa et al., 2019] is a simple but surprisingly strong technique that optimizes the worst group risk by reweighting environments. There are also meta-learning methods [Finn et al., 2017] that handle multi-environment generalization with bi-level optimization. For the single-environment setting, Distributionally Robust Optimization optimizes the worst-case risk over an uncertainty set of distributions centered around the source distribution [Duchi and Namkoong, 2019, 2021, Duchi et al., 2021, Levy et al., 2020, Sinha et al., 2018]. Another branch of research targets spurious correlation under an assumption of simplicity bias: a simple model is used to discover latent sub-populations, and the biased predictor is then corrected by sample reweighting [Li et al., 2021, Lin et al., 2020, Liu et al., 2021], by retraining on a subgroup-balanced dataset or a small batch from the target distribution [Idrissi et al., 2022, Kirichenko et al., 2023, Zhang et al., 2022], or by performing Invariant Risk Minimization on discovered subgroups [Creager et al., 2021]. Data augmentation is a prevalent technique for enhancing algorithmic robustness in vision tasks, though many of these methods are tailored to classification. For example, Mixup [Zhang et al., 2018] interpolates between features of samples with the same label; the approach is extended to regression by C-Mixup [Yao et al., 2022]. Pseudolabelling is a common technique for out-of-distribution generalization, but it is typically adopted in settings with access to unlabelled samples from the target distribution, known as domain adaptation [Wang et al., 2022]. In contrast, MC-Pseudolabel generates pseudolabels for the source distribution itself.
Appendix B Grouping Functions for Distributionally Robust Optimization
Distributionally Robust Optimization assumes the target distribution resides in an uncertainty set of distributions centered around the source distribution. For example, Duchi et al. [2019] formulate the uncertainty set as arbitrary subgroups whose proportion is at least a given threshold. Duchi et al. only consider subgroups of covariates:
(13)
(14)
By the correspondence between density ratios and grouping functions, the equivalent design of a grouping function class is given by:
(15) |
We can also extend the grouping functions to consider both covariates and outcomes, such that general subgroups are incorporated into the uncertainty set:
(16) |
In the case of grouping functions defined on and jointly, the grouping function class is not closed under affine transformation and is not a linear space spanned by density ratios, which suggests that a perfectly multicalibrated solution might not exist. However, approximately multicalibrated predictors can still be pursued.
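To make the correspondence concrete, the following sketch writes down one common form of the alpha-subgroup uncertainty set over covariates and the grouping functions it induces; the mixture decomposition and the 1/alpha bound are illustrative assumptions rather than the exact constants of Equations 13–16.

```latex
% Sketch of an \alpha-subgroup uncertainty set over covariates and the
% induced grouping functions (one common formulation; constants may differ).
\begin{align}
  \mathcal{Q}_\alpha
    &= \Big\{ Q_X \;:\; P_X = \alpha\, Q_X + (1-\alpha)\, Q'_X
       \ \text{for some distribution } Q'_X \Big\}, \\
  \mathcal{H}_\alpha
    &= \Big\{ h(x) = \tfrac{dQ_X}{dP_X}(x) \;:\; Q_X \in \mathcal{Q}_\alpha \Big\}
     \;\subseteq\; \Big\{ h \;:\; 0 \le h(x) \le \tfrac{1}{\alpha},\ \mathbb{E}_{P}[h(X)] = 1 \Big\}.
\end{align}
```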
Appendix C Model Discretization
For continuous predictors, we take a preprocessing step to discretize the model into as many bins as possible, such that the rounding error is negligible while individual bins still contain enough samples. Specifically, we split the predictor's outputs into bins with equal intervals from the minimum to the maximum of the model output. We start from a minimum bin number , and keep increasing it as long as of the samples reside in a bin with at least 30 samples. When the criterion is violated, we stop increasing and select the last valid value as the final bin number. The model discretization procedure is fixed across all experiments.
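A minimal sketch of this binning procedure is given below; the starting bin number and the coverage threshold are hypothetical placeholders, since the exact constants are not restated here.

```python
import numpy as np

def discretize_predictions(preds, min_bins=10, min_bin_size=30, coverage=0.9):
    """Adaptive equal-width binning of model outputs (sketch).

    `min_bins` and `coverage` are hypothetical placeholders; the paper's exact
    starting bin number and coverage threshold are not restated here. The bin
    count keeps growing while at least `coverage` of the samples fall into
    bins containing at least `min_bin_size` samples.
    """
    best = n_bins = min_bins
    max_bins = max(min_bins, len(preds) // min_bin_size)   # average bin size ~ min_bin_size
    while n_bins <= max_bins:
        edges = np.linspace(preds.min(), preds.max(), n_bins + 1)
        bin_ids = np.digitize(preds, edges[1:-1])           # equal-width bins 0..n_bins-1
        counts = np.bincount(bin_ids, minlength=n_bins)
        if counts[counts >= min_bin_size].sum() / len(preds) < coverage:
            break                                            # criterion violated: stop
        best = n_bins
        n_bins += 1
    return best
```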
Appendix D Computational Complexity
We assume that the predictor's outcomes are uniformly distributed. Denote the average bin size by , which is a constant of around 30 in our implementation. The bin number is given by , where is the sample size. For neural networks, represents the batch size. The overhead of MC-Pseudolabel compared to Empirical Risk Minimization is a linear regression on each bin, whose complexity is with OLS. Note that an individual linear regression on around 30 samples is extremely cheap. A non-parallel implementation of regression on every bin scales linearly with the bin number , so the overall complexity is . However, since the regressions on different bins are independent, we adopt a multi-processing implementation. Denoting the number of jobs by , the overall time cost of MC-Pseudolabel is . As a comparison, OLS on samples has a computational complexity of .
In conclusion, the complexity of MC-Pseudolabel scales linearly with sample size (or batch size for neural networks). Counterintuitively, increasing the bin number (and thus decreasing the bin size) actually decreases the computational complexity. This is because linear regression scales cubically with sample size, so decreasing the sample size in each bin is preferred to decreasing the bin number.
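The sketch below illustrates the per-bin regression overhead and its parallelization with joblib; the use of the features X as regression inputs within each level set is an illustrative assumption, not the exact regression targets of Algorithm 1.

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.linear_model import LinearRegression

def fit_bin(X_bin, y_bin):
    # OLS on a single bin of roughly constant size is cheap:
    # its cost does not grow with the overall sample size.
    return LinearRegression().fit(X_bin, y_bin)

def per_bin_regression(X, y, bin_ids, n_bins, n_jobs=8):
    """Sketch of the per-bin regression overhead of MC-Pseudolabel.

    The features X stand in for whatever quantities are regressed within
    each level set of the predictor; this is an assumption for illustration.
    """
    groups = [np.where(bin_ids == b)[0] for b in range(n_bins)]
    # Bins are independent, so the B regressions parallelize across jobs;
    # total time scales roughly as (B / n_jobs) times the cost of one small OLS.
    models = Parallel(n_jobs=n_jobs)(
        delayed(fit_bin)(X[idx], y[idx]) for idx in groups if len(idx) > 0
    )
    return models
```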
Appendix E Experiments
E.1 An Additional Experiment: Synthetic Dataset
We start from a multi-environment synthetic dataset with a multivariate normal distribution corresponding to Theorem 5.2. In this experiment, we examine the optimization dynamics of MC-Pseudolabel. The data generation process is inspired by Arjovsky et al. [2019]. The covariates can be sliced into with and , where is the causal variable for and is the spurious variable. The data is generated by the following structural equations:
(17)–(19)
The covariate is spuriously correlated with because the coefficient depends on the specific environment . We set respectively for two training environments while extrapolates to during testing. A robust algorithm is supposed to bypass the spurious variable and output a predictor in order to survive the test distribution where the correlation between and is negated.
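A minimal sketch of this generation process is shown below; the noise scales and the concrete values of the environment-dependent coefficient are placeholders, since the exact constants used in our simulation are not restated here.

```python
import numpy as np

def generate_environment(n, beta_e, rng):
    """Sketch of the synthetic generation process described above.

    The structural equations follow the Arjovsky-style template
    (x1 -> y -> x2); the noise scales and the concrete values of the
    environment coefficient beta_e are illustrative placeholders.
    """
    x1 = rng.normal(size=(n, 1))                              # causal covariate
    y = x1 + rng.normal(scale=1.0, size=(n, 1))               # outcome generated from x1
    x2 = beta_e * y + rng.normal(scale=1.0, size=(n, 1))      # spurious covariate
    X = np.concatenate([x1, x2], axis=1)
    return X, y.ravel()

rng = np.random.default_rng(0)
# Two training environments with different (hypothetical) spurious coefficients,
# and a test environment where the spurious correlation is negated.
train_envs = [generate_environment(1000, beta_e, rng) for beta_e in (1.0, 2.0)]
test_env = generate_environment(1000, -1.0, rng)
```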
The predictor class for this dataset is linear models, and the environment classifier is implemented by an MLP with a single hidden layer. In this experiment, we fix the training hyperparameters of the base linear model and perform a grid search over the extra hyperparameter introduced by each robust learning method. Baselines other than ERM and Group DRO share a hyperparameter, the regularizer's coefficient, while Group DRO introduces a temperature hyperparameter. We search over their hyperparameter space and report the RMSE on the test set in Figure 2. Most baselines exhibit a U-shaped curve as the hyperparameter increases, and the minimum point varies across methods. This sensitivity to hyperparameters implies a dependence on a strong model selection criterion, such as oracle model selection on the target distribution. However, the dashed line for MC-Pseudolabel's error is tangent to all the baselines' U-shaped curves, indicating competitive performance of MC-Pseudolabel both with and without oracle model selection.
We also investigate the evolution of pseudolabels in Algorithm 1 to recover the dynamics of MC-Pseudolabel. The first row of Figure 3 demonstrates how pseudolabelling results in a multicalibrated predictor: pseudolabels for the two environments deviate from the model's predictions at Step 0, but the gap quickly closes by Step 4, implying multicalibrated prediction. The second row provides insight into how pseudolabelling contributes to an invariant predictor. We observe that the curves of the two environments gradually merge because the pseudolabel introduces a particular noise into the original label that weakens the correlation between the pseudolabel and the spurious variable. As a result, the predictor comes to depend on the causal variable, which is relatively more strongly correlated with the pseudolabels.
E.2 VesselPower
In Figure 4, we report the correlation between models' in-distribution validation performance and out-of-distribution test performance across all methods.
E.3 Training Details
We follow DomainBed's protocol [Gulrajani and Lopez-Paz, 2021] for model selection. Specifically, we randomly sample 20 sets of hyperparameters for each method, containing both the training hyperparameters of the base models in Table 3 and the extra hyperparameters of the robust learning algorithms in Table 4. We select the best model across hyperparameters based on three model selection criteria. In-distribution (ID) validation selects the model with the best average metric on an in-distribution validation dataset, which is sampled from the same distribution as the training data. Worst-environment (Worst) validation selects the best model by the worst performance across all environments in the in-distribution validation dataset; it is applicable only to the multi-environment setting. Oracle validation selects the best model by an out-of-distribution validation dataset sampled from the target distribution of the test data. Oracle validation leaks the test distribution, so it is not recommended by DomainBed. However, most robust learning methods rely on out-of-distribution validation, so DomainBed suggests a limited number of accesses to target data when using Oracle validation. Though MC-Pseudolabel already performs well under ID and Worst validation, we still report its performance under Oracle validation to compare the limits of robust learning methods regardless of model selection. Specifically for PovertyMap, an OOD validation set is provided for Oracle validation, and the results under Oracle validation recover PovertyMap's public benchmark.
Following DomainBed, the entire model selection procedure is repeated three times with different seeds to measure the standard errors of performance. Thus, we have a total of 60 runs per method per dataset. Specifically for PovertyMap, we follow WILDS' setup [Koh et al., 2021] and perform 5-fold cross validation instead of three repeated experiments. For each fold of the dataset, we conduct the model selection procedure with a single seed, and we report the average and standard error of performance across the 5 folds. Thus, the standard error measures both the difficulty disparity across folds and the model's instability.
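The following sketch illustrates the three selection criteria on a hypothetical list of hyperparameter runs; field names such as id_val_per_env and ood_val are illustrative, not part of the DomainBed codebase.

```python
import numpy as np

def select_model(runs):
    """Sketch of the three DomainBed-style selection criteria.

    `runs` is a hypothetical list of dicts, one per hyperparameter draw, with
    per-environment in-distribution validation errors ("id_val_per_env") and
    an out-of-distribution validation error ("ood_val"); lower is better.
    """
    id_choice = min(runs, key=lambda r: np.mean(r["id_val_per_env"]))    # ID validation
    worst_choice = min(runs, key=lambda r: np.max(r["id_val_per_env"]))  # Worst-environment
    oracle_choice = min(runs, key=lambda r: r["ood_val"])                # Oracle (leaks target)
    return id_choice, worst_choice, oracle_choice
```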
The grouping function class of MC-Pseudolabel is implemented according to Equation 11 and Equation 12 for the multi-environment and single-environment settings, respectively. For the multi-environment setting, the environment classifier is implemented as an MLP with a single hidden layer of size 100 for tabular datasets, including Simulation and ACSIncome. For PovertyMap, the environment classifier is implemented by Resnet18-MS with the same architecture as the predictor. For the single-environment setting of VesselPower, the identification model is implemented as a Ridge regression model.
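A minimal sketch of the backbone models behind the grouping function class is given below; the mapping from these models to grouping functions (Equations 11 and 12) is not reproduced, and the factory interface is an illustrative assumption.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import Ridge

def make_grouping_backbone(setting, image_model_factory=None):
    """Sketch of the models backing the grouping function class.

    Multi-environment: an environment classifier (single hidden layer of
    width 100 for tabular data, or a ResNet18-MS factory for image data).
    Single-environment: a Ridge identification model, as for VesselPower.
    """
    if setting == "multi_env_tabular":
        return MLPClassifier(hidden_layer_sizes=(100,), max_iter=500)
    if setting == "multi_env_image":
        assert image_model_factory is not None   # e.g. a Resnet18-MS constructor
        return image_model_factory()
    if setting == "single_env":
        return Ridge(alpha=1.0)
    raise ValueError(setting)
```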
| Hyperparameter | Simulation | ACSIncome | VesselPower | PovertyMap |
|---|---|---|---|---|
| Architecture | Linear | MLP | MLP | Resnet18-MS |
| Hidden Layer Dimensions | None | 16 | 32, 8 | Standard [Koh et al., 2021] |
| Optimizer | Adam | Adam | Adam | Adam |
| Weight Decay | 0 | 0 | 0 | 0 |
| Loss | MSE | MSE | MSE | MSE |
| Learning Rate | 0.1 | | | |
| Batch Size | 1024 | [256, 512, 1024, 2048] | [256, 512, 1024, 2048] | [32, 64, 128] |
| Hyperparameter | Range |
|---|---|
| Regularizer Coefficient | |
| (GroupDRO) | |
| (JTT) | [0.1, 0.2, 0.5, 0.7] |
| (JTT) | [5, 10, 20, 50] |
| (-DRO) | [0.2, 0.5, 1, 1.5] |
| (Tilted-ERM) | [0.1, 0.5, 1, 5, 10, 50, 100, 200] |
| (C-Mixup) | [0.5, 1, 1.5, 2] |
| (C-Mixup) | [0.01, 0.1, 1, 10, 100] |
E.4 Software and Hardware
Our experiments are built on PyTorch [Paszke et al., 2019]. Each experiment with a single set of hyperparameters is run on one NVIDIA GeForce RTX 3090 with 24GB of memory, taking at most 15 minutes.
Appendix F Theory
F.1 Multicalibration and Bayes Optimality under Covariate Shift
Assumption F.1 (Restatement of Assumption 2.2).
1. (Closure under Covariate Shift) For a set of probability measures containing the source measure ,
(20) |
2. (()-Weak Learning Condition) For any with the source conditional measure and every measurable set satisfying , if
(21) |
then there exists satisfying
(22) |
Lemma F.2 (Globus-Harris et al. [2023]).
Fix any distribution , any model , and any class of real valued functions that is closed under affine transformation. Let:
be the set of functions in upper-bounded by 1 on . Let , and . Then if satisfies the -weak learning condition and is -approximately multicalibrated with respect to and , then has squared error
where .
Definition F.3.
For a probability measure and a predictor , let be a function class. We say that is -approximately multicalibrated w.r.t. and if for all :
(23)–(25)
Lemma F.4 ( Roth [2022]).
Suppose have the same conditional label distribution, and suppose is -approximately multicalibrated with respect to and . If satisfies Equation 20, then is also -approximately multicalibrated with respect to and :
Lemma F.5.
For a predictor and a grouping function satisfying where ,
(26) |
Remark F.6.
The lemma is extended from Roth [2022]’s result for .
Proof.
First we prove . For any ,
(27)–(31)
Equation 29 follows from the Cauchy-Schwarz inequality.
(32)–(34)
Then we prove . Still from the Cauchy-Schwarz inequality:
(35)–(37)
∎
Theorem F.7 (Restatement of Theorem 2.3).
For a source measure and a set of probability measures containing , given a predictor with finite range , consider a grouping function class closed under affine transformation satisfying Assumption 2.2 with . If is -approximately multicalibrated w.r.t and ’s bounded subset , then for any target measure ,
(38) |
where is the optimal regression function in each target distribution.
F.2 Multicalibration and Invariance under Concept Shift
Theorem F.8 (Restatement of Theorem 3.1).
For a set of absolutely continuous probability measures containing the source measure , consider a predictor . Assume the grouping function class satisfies the following condition:
(44) |
If is -approximately multicalibrated w.r.t. and , then for any measure ,
(45) |
Proof.
For any where , since is -approximately multicalibrated, . It follows from Lemma F.5 that .
(46)–(51)
Thus, we have for any . We will prove an equivalent form of calibration error:
(52)–(53)
can be proved by taking . On the other hand,
(54)–(55)
The right-hand side of Equation 52 resembles smooth calibration [Blasiok et al., 2023a], which restricts the test functions to Lipschitz functions. Based on smooth calibration, Blasiok et al. [2023b] show that approximately calibrated predictors cannot be improved much by post-processing. Above, we present a similar proof for calibration error.
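For orientation, the smooth calibration error is commonly written as the following supremum over 1-Lipschitz test functions of the prediction; this is a standard form, and the normalization in Blasiok et al. [2023a] may differ.

```latex
% Smooth calibration error: supremum of the correlation between the residual
% and Lipschitz functions of the prediction (standard form; normalization may differ).
\mathrm{smCE}(f) \;=\; \sup_{w \,:\, \|w\|_{\mathrm{Lip}} \le 1}
  \; \mathbb{E}\big[\, w(f(X)) \,\big( Y - f(X) \big) \,\big]
```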
For any , there exists such that for any .
(56)–(60)
∎
Theorem F.9 (Restatement of Theorem 3.4).
Assume samples are drawn from an environment with a prior such that and . The overall population satisfies where is the environment-specific absolutely continuous measure. For a representation over features, define a function class as:
(61) |
1. If there exists a bijection such that is -approximately multicalibrated w.r.t. and , then for any ,
(62) |
2. For , if there exists such that Equation 62 is satisfied for any , then is -approximately multicalibrated w.r.t. and , where .
Proof.
We first prove statement 1.
For any , since is a bijection, where . Since is -approximately multicalibrated, it follows from Theorem F.8 that for any ,
(63)–(64)
Then we give a proof of statement 2, which is inspired by Blasiok et al. [2023b].
For simplicity let . For any and any , define . Construct , where .
(65)–(69)
Rearranging the inequality above gives:
(70) |
Since , it follows from Equation 62 that:
(71) |
Combining Equation 70 and Equation 71 gives for any . From Equation 52, it follows that:
(72) |
From Equation 51, for any . Further, for any ,
(73)–(76)
By Lemma F.5, it follows that ∎
F.3 Structure of Grouping Function Classes
In this subsection, we focus on Euclidean space where is compact and measurable for some and . Grouping functions are assumed to be continuous, i.e., . We consider absolutely continuous probability measures with continuous density functions.
Proposition F.10 (Restatement of Proposition 4.1).
Consider an absolutely continuous probability measure and a predictor . Define the maximal grouping function class with respect to which is multicalibrated:
(77) |
Then is a linear space.
Particularly for where is a measurable function, we abbreviate with . Then any finite subset implies , where denotes a constant function.
Proof.
For any and any , .
(78)–(82)
According to Lemma F.5, is equivalent to for bounded . Thus, , which implies . This finishes the proof that is a linear space.
For ,
(83)–(86)
Thus, which implies . Since is a linear space, implies . ∎
Lemma F.11.
For any absolutely continuous probability measure with a continuous density function , and any grouping function , there exist and such that , and is the density ratio between some absolutely continuous probability measure and , i.e., , where also has a continuous density function.
Proof.
Since is bounded, there exists such that for any . Define:
(87)–(88)
is still continuous.
We have for any .
(89)–(91)
Thus, is an absolutely continuous probability measure.
Its density function is continuous. ∎
Theorem F.12.
Consider an absolutely continuous probability measure and a predictor where is a measurable function. We abbreviate with .
(92)–(93)
Proof.
For any ,
(99)–(102)
Next, we prove
(103) |
This is equivalent to saying that for any absolutely continuous probability measure satisfying almost surely, for almost every .
(104)–(110)
Next, we prove
(111) |
By Lemma F.11, any grouping function could be rewritten as for some continuous density functions and . Thus, we just need to prove the statement that implies .
(112)–(117)
Corollary F.13 (Restatement of Theorem 4.2).
Consider an absolutely continuous probability measure and a calibrated predictor .
(118)–(119)
Remark F.14.
almost surely implies , which is equivalent to , since we adopt squared error.
Proof. The statement follows directly from Theorem F.12. ∎
Theorem F.15 (Restatement of Theorem 4.3 (first part)).
Consider an absolutely continuous probability measure and a predictor where is a measurable function. can be decomposed as .
(121)–(124)
Remark F.16.
contains functions defined on which can be rewritten as functions on by variable substitution. Thus, is still a set of grouping functions. For , is a constant depending on .
Proof.
First we prove .
For any and ,
(125)–(132)
Next we prove .
For any , let
(133)–(134)
Then we have
(135)–(140)
Thus, .
(141)–(143)
Thus, . Following a similar proof of Theorem F.12, we have
(144) |
Next, we prove .
This is equivalent to saying that for any continuous density function .
(145)–(147)
Thus, we have .
Next, we prove .
By Lemma F.11, any grouping function could be rewritten as for some continuous density function and . Thus, we just need to prove the statement that implies .
(148)–(151)
Thus, we have which is a constant. Since , we have .
(152)–(153)
∎
Lemma F.17 (Theorem 3.2 from Globus-Harris et al. [2023]).
If is calibrated and there exists an such that
(154) |
then:
(155) |
Proposition F.18 (Restatement of Theorem 4.3 (second part)).
If a predictor is multicalibrated with , then .
Proof.
We prove by contradiction. If , then
(156) |
Let .
Proposition F.19 (Restatement of Theorem 4.3 (third part)).
is an invariant predictor elicited by across a set of environments where for any . If a predictor is multicalibrated with , then is also an invariant predictor across elicited by some representation.
Proof.
Since , we have for every .
Thus,
(161) |
This implies is an invariant predictor elicited by across .
For any , we have:
(162) |
Since is multicalibrated with , it follows from Theorem F.8:
(163) |
This implies that is an invariant predictor across elicited by . ∎
Proposition F.20 (Restatement of Proposition 4.5).
Consider which could be sliced as and . Define . and are similarly defined. We have:
1.
2.
is a constant value function.
Remark F.21.
The proposition shows that , as a subspace of , evolves monotonically and in the opposite direction to . If we perceive the representation as a filter, gaining more information from the covariates facilitates multicalibration w.r.t. (and accuracy) but hampers multicalibration w.r.t. (and invariance). With and combined, a multicalibrated predictor searches for an appropriate level of information filtering to balance the tradeoff between accuracy and invariance.
Proof.
1. For any , since , is also a function of . Thus, we have . It follows that . Similarly we have and .
2. For any such that for any values of , we have
(164)–(167)
Thus, . It follows that . Similarly, we have and . Particularly for , we have is a constant for any values of . ∎
Lemma F.22 (Globus-Harris et al. [2023]).
Let be such a grouping function class that implies for any and . If satisfies the -weak learning condition in Assumption F.1, a predictor is multicalibrated w.r.t. if and only if almost surely.
Proposition F.23.
For a measurable function , a predictor is multicalibrated w.r.t. if and only if is multicalibrated w.r.t. .
Proof.
Since , ’s multicalibration w.r.t. implies ’s multicalibration w.r.t. .
On the other hand, satisfies the -weak learning condition with the pushforward measure on , because . It follows from Lemma F.22 that is multicalibrated w.r.t. implies almost surely. By the definition of , is multicalibrated w.r.t. . ∎
F.4 MC-PseudoLabel: An Algorithm for Extended Multicalibration
Lemma F.24.
Fix a model . Suppose for some there is an such that:
Let for .
Then:
Proof. Following [Globus-Harris et al., 2023], we have
Theorem F.25 (Restatement of Theorem 5.1).
In Algorithm 1, for , if the following is satisfied:
(168) |
the output is -approximately multicalibrated w.r.t. .
Proof.
We prove by contradiction. Assume that is not -approximately multicalibrated with respect to . Then there exists such that:
For each define
Then we have .
which contradicts the condition in Equation 168. ∎
The following proposition is a direct corollary of Globus-Harris et al. [2023]'s Theorem 4.3.
Proposition F.26.
For any distribution supported on and , take the grouping function class and the predictor class . For any and an initial predictor with , then halts after at most steps and outputs a model that is -approximately multicalibrated w.r.t and .
Theorem F.27 (Restatement of Theorem 5.2).
Consider with . Assume that follows a multivariate normal distribution where the random variables are in general position such that is positive definite. We partition into blocks:
(172) |
For any distribution supported on , take the grouping function class and the predictor class . For an initial predictor , run without rounding, then as with a convergence rate of , where
(173)–(174)
We have .
Remark F.28.
are always linear. Thus, the limit of the functions is equivalent to the limit of their coefficients.
Remark F.29.
is multicalibrated with respect to . Furthermore, any calibrated predictor on , denoted by , is multicalibrated with respect to . This is because:
(175)–(177)
However, is the most accurate predictor among all multicalibrated .
Remark F.30.
The convergence rate does not depend on the dimension of covariates. When implying that is sufficient for prediction, following from :
(178) |
It follows that and the algorithm will converge in one step.
On the other hand, when and are linearly dependent given such that is singular, which violates positive definiteness, following from the proof below:
(179) |
It follows that and the algorithm cannot converge.
So the convergence rate depends on the singularity of the problem. Since the algorithm converges to a predictor that does not depend on , the stronger the "spurious" correlation between and given in the distribution , the longer the algorithm takes to converge.
Proof.
Without loss of generality, assume .
Let . Denote dimensions of by .
Let and with
.
We have:
(180)–(181)
According to Theorem F.15, is a constant for different values of .
(182)–(183)
This implies:
(184) |
Rearranging to:
(185) |
Let and . We claim that . Otherwise, consider .
(186) |
Thus, .
On the other hand,
(187)–(189)
The inequality follows from the fact that is the covariance matrix of , which is positive definite because is positive definite. The inequality contradicts the definition of . Thus, .
Define a matrix whose columns span the solution space of Equation 185. Then is a random vector. According to the definition of ,
(190)–(192)
In the above equation, .
Since , we have . Substituting into Equation 184:
(193) |
Substituting into Equation 192:
(194)–(195)
Then we have .
This is equivalent to:
(196)–(197)
Combining the two equations above, we have:
(198)–(199)
Thus,
(200)–(201)
In the above equation, .
Define , where
(202) |
In the following we show . Since , and has exactly one nonzero eigenvalue , we just have to show .
(203)–(204)
Since and are both Schur complements of principal submatrices of , they are still positive definite.
Thus,
(205)–(206)
It follows that . So we just have to show .
Since , by applying row addition on , we have:
(207) |
It follows that:
(208)–(209)
Rearranging to:
(210)–(212)
Thus, .
In the following, we show . By Equation 202 and , we have:
(213)–(216)
Equation 215 follows from the fact that .
Define such that . We have
(217)–(222)
Thus, . Subsequently, . The convergence ratio is .
We have:
(223) |
where
(224)–(225)
Define , where
(226) |
Thus, . Since , we have .
Subsequently, .
∎
Appendix G Limitations
Both our theory and algorithm focus on the bounded regression setting. The definition of extended multicalibration does not depend on the risk function. However, the analysis of the maximal grouping function class as a linear space assumes a continuous probability distribution of observations, implying a continuous target domain. The convergence of MC-Pseudolabel is also established in a regression setting, and all experiments are performed on regression tasks. As most algorithms for out-of-distribution generalization are set up for classification problems, we fill the gap for regression and leave an extension to general risk functions for future work.