Few-shot Domain Adaptation by Causal Mechanism Transfer
Abstract
We study few-shot supervised domain adaptation (DA) for regression problems, where only a few labeled target domain data and many labeled source domain data are available. Many of the current DA methods base their transfer assumptions on either parametrized distribution shift or apparent distribution similarities, e.g., identical conditionals or small distributional discrepancies. However, these assumptions may preclude the possibility of adaptation from intricately shifted and apparently very different distributions. To overcome this problem, we propose mechanism transfer, a meta-distributional scenario in which a data generating mechanism is invariant across domains. This transfer assumption can accommodate nonparametric shifts resulting in apparently different distributions while providing a solid statistical basis for DA. We take the structural equations in causal modeling as an example and propose a novel DA method, which is shown to be useful both theoretically and experimentally. Our method can be seen as the first attempt to fully leverage structural causal models for DA.
1 Introduction
Learning from a limited amount of data is a long-standing yet actively studied problem of machine learning. Domain adaptation (DA) Ben-David et al. (2010) tackles this problem by leveraging auxiliary data sampled from related but different domains. In particular, we consider few-shot supervised DA for regression problems, where only a few labeled target domain data and many labeled source domain data are available.
A key component of DA methods is the transfer assumption (TA) used to relate the source and the target distributions. Many of the previously explored TAs rely on certain direct distributional similarities, e.g., identical conditionals Shimodaira (2000) or small distributional discrepancies Ben-David et al. (2007). However, these TAs may preclude the possibility of adaptation from apparently very different distributions. Many others assume parametric forms of the distribution shift Zhang et al. (2013) or of the distribution family Storkey and Sugiyama (2007), which can severely restrict the set of distributions considered. (We further review related work in Section 5.1.)
To alleviate the intrinsic limitation of previous TAs, namely their reliance on apparent distribution similarities or parametric assumptions, we focus on a meta-distributional scenario in which there exists a common generative mechanism behind the data distributions (Figures 1 and 2). Such a common mechanism is particularly plausible in applications involving structured tabular data such as medical records Yadav et al. (2018). For example, in medical record analysis for disease risk prediction, it can be reasonable to assume that there is a pathological mechanism that is common across regions or generations, even though the data distributions may vary due to differences in culture or lifestyle. Such a hidden structure (the pathological mechanism, in this case), once estimated, may provide portable knowledge that enables DA, allowing one to obtain accurate predictors for under-investigated regions or new generations.
Concretely, our assumption relies on the generative model of nonlinear independent component analysis (nonlinear ICA; Figure 1), where the observed labeled data are generated by first sampling latent independent components (ICs) and later transforming them by a nonlinear invertible mixing function, denoted by $f$ Hyvärinen et al. (2019). Under this generative model, our TA is that $f$, which represents the mechanism, is identical across domains (Figure 2). This TA allows us to formally relate the domain distributions and to develop a novel DA method without assuming their apparent similarities or making parametric assumptions.

Our contributions.
Our key contributions can be summarized in three points as follows.
1. We formulate the flexible yet intuitively accessible TA of a shared generative mechanism and develop a few-shot regression DA method (Section 3). The idea is as follows. First, from the source domain data, we estimate the mixing function $f$ by nonlinear ICA Hyvärinen et al. (2019), because $f$ is the only relation among the domains that we assume. Then, to transfer the knowledge, we perform data augmentation on the target domain data using the estimated $\hat{f}$, exploiting the independence of the IC distributions. Finally, the augmented data are used to fit a target predictor (Figure 3).
2. We theoretically justify the augmentation procedure by invoking the theory of generalized U-statistics (Lee, 1990). The theory shows that the proposed data augmentation procedure yields the uniformly minimum variance unbiased risk estimator in an ideal case. We also provide an excess risk bound Mohri et al. (2012) to cover a more realistic case (Section 4).
3. We demonstrate the effectiveness of the proposed method through proof-of-concept experiments on real-world data (Section 6).
2 Problem Setup
In this section, we describe the problem setup and the notation. To summarize, our problem setup is homogeneous, multi-source, few-shot supervised domain-adapting regression. That is, respectively, all data distributions are defined on the same data space, there are multiple source domains, and only a limited number of labeled data points are available from the target distribution (and we do not assume the availability of unlabeled data). In this paper, we use the terms domain and distribution interchangeably.
Notation.
Let us denote the set of real (resp. natural) numbers by $\mathbb{R}$ (resp. $\mathbb{N}$). For $n \in \mathbb{N}$, we define $[n] := \{1, \ldots, n\}$. Throughout the paper, we fix $d \in \mathbb{N}$ and suppose that the input space $\mathcal{X}$ is a subset of $\mathbb{R}^{d-1}$ and the label space $\mathcal{Y}$ is a subset of $\mathbb{R}$. As a result, the overall data space $\mathcal{Z} := \mathcal{X} \times \mathcal{Y}$ is a subset of $\mathbb{R}^{d}$. We generally denote a labeled data point by $z = (x, y) \in \mathcal{Z}$. We denote by $\mathcal{P}_{\mathrm{ind}}$ the set of independent distributions on $\mathbb{R}^{d}$ with absolutely continuous marginals. For a distribution $p$, we denote its induced expectation operator by $\mathbb{E}_{p}$. Table 3 in Supplementary Material provides a summary of notation.
Basic setup: Few-shot domain adapting regression.
Let $p_{\mathrm{Tar}}$ be a distribution (the target distribution) over $\mathcal{Z}$, and let $\mathcal{H}$ be a hypothesis class of predictors $h: \mathcal{X} \to \mathcal{Y}$. Let $\ell: \mathcal{Y} \times \mathcal{Y} \to [0, B_\ell]$ be a loss function, where $B_\ell > 0$ is a constant. Our goal is to find a predictor $h \in \mathcal{H}$ which performs well for $p_{\mathrm{Tar}}$, i.e., one whose target risk $R(h) := \mathbb{E}_{p_{\mathrm{Tar}}} \ell(h(x), y)$ is small. We denote a minimizer of the target risk over $\mathcal{H}$ by $h^{*}$. To this goal, we are given an independent and identically distributed (i.i.d.) sample $\mathcal{D}_{\mathrm{Tar}} := \{(x_i, y_i)\}_{i=1}^{n_{\mathrm{Tar}}}$ from $p_{\mathrm{Tar}}$. In a fully supervised setting where $n_{\mathrm{Tar}}$ is large, a standard procedure is to select $h$ by empirical risk minimization (ERM), i.e., $\hat{h} \in \arg\min_{h \in \mathcal{H}} \widehat{R}(h)$, where $\widehat{R}(h) := \frac{1}{n_{\mathrm{Tar}}} \sum_{i=1}^{n_{\mathrm{Tar}}} \ell(h(x_i), y_i)$. However, when $n_{\mathrm{Tar}}$ is not sufficiently large, $\widehat{R}$ may not accurately estimate $R$, resulting in a high generalization error of $\hat{h}$. To compensate for the scarcity of data from the target distribution, let us assume that we have data from $K$ distinct source distributions $\{p_{\mathrm{Src}}^{(k)}\}_{k=1}^{K}$ over $\mathcal{Z}$, that is, we have independent i.i.d. samples $\mathcal{D}_{\mathrm{Src}}^{(k)} := \{(x_i^{(k)}, y_i^{(k)})\}_{i=1}^{n_{\mathrm{Src}}^{(k)}}$ from $p_{\mathrm{Src}}^{(k)}$ ($k \in [K]$) whose relations to $p_{\mathrm{Tar}}$ are described shortly. For simplicity of presentation, we assume that the source sample sizes are all equal.
Key assumption.
In this work, the key transfer assumption is that all domains follow nonlinear ICA models with identical mixing functions (Figure 2). To be precise, we assume that there exists a set of IC distributions $q_{\mathrm{Tar}}, q_{\mathrm{Src}}^{(1)}, \ldots, q_{\mathrm{Src}}^{(K)} \in \mathcal{P}_{\mathrm{ind}}$ and a smooth invertible function $f: \mathbb{R}^{d} \to \mathbb{R}^{d}$ (the transformation, or mixing) such that $z \sim p_{\mathrm{Tar}}$ is generated by first sampling $s \sim q_{\mathrm{Tar}}$ and later transforming it by

(1)   $z = f(s)$,

and similarly for each $p_{\mathrm{Src}}^{(k)}$ with $q_{\mathrm{Src}}^{(k)}$ in place of $q_{\mathrm{Tar}}$. The above assumption allows us to formally relate $p_{\mathrm{Tar}}$ and $\{p_{\mathrm{Src}}^{(k)}\}_{k=1}^{K}$. It also allows us to estimate $f$ when sufficient identification conditions required by the theory of nonlinear ICA are met. Due to space limitation, we provide a brief review of the nonlinear ICA method used in this paper and the known theoretical conditions in Supplementary Material A. Having multiple source domains is assumed here for the identifiability of $f$: it comes from the currently known identification condition of nonlinear ICA Hyvärinen et al. (2019). Note that complex changes in the IC distributions are allowed, hence the assumption of an invariant $f$ can accommodate intricate shifts in the apparent distributions. We discuss this further in Section 5.3 by taking a simple example.
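To make the assumption concrete, the following is a minimal NumPy sketch (illustrative only; the mixing $f$ and the IC distributions are hand-picked and not those of any experiment in this paper) that draws data from Eq. (1) for a source-like and a target-like domain sharing the mechanism but differing in their IC distributions.

```python
# Minimal NumPy sketch of the shared-mechanism generative model in Eq. (1).
# The mixing f and the IC distributions below are hand-picked for illustration.
import numpy as np

rng = np.random.default_rng(0)
d = 2                                   # joint dimension of a data point (x, y)

def f(s):
    """Smooth invertible mixing shared by all domains (lower-triangular form)."""
    return np.stack([s[:, 0], s[:, 1] + np.tanh(s[:, 0])], axis=1)

def sample_domain(n, ic_sampler):
    """Draw independent components for a domain, then push them through f."""
    return f(ic_sampler(n))

# Source-like domain: Gaussian ICs.  Target-like domain: heavy-tailed / bimodal ICs.
src = sample_domain(1000, lambda n: rng.normal(size=(n, d)))
tar = sample_domain(20, lambda n: np.stack(
    [rng.laplace(size=n),
     rng.choice([-2.0, 2.0], size=n) + 0.3 * rng.normal(size=n)], axis=1))
# The apparent joint distributions of src and tar differ substantially,
# yet both are generated by the same mechanism f.
```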
Example: Structural equation models
A salient example of generative models expressed as Eq. (1) is structural equation models (SEMs; Pearl, 2009; Peters et al., 2017), which are used to describe data generating mechanisms involving the causality of random variables (Pearl, 2009). More precisely, the generative model of Eq. (1) corresponds to the reduced form Reiss and Wolak (2007) of a Markovian SEM Pearl (2009), i.e., a form in which the structural equations determining $z$ from the independent exogenous variables $s$ are solved so that $z$ is expressed as a function of $s$. Such a conversion is always possible because a Markovian SEM induces an acyclic causal graph Pearl (2009), hence the structural equations can be solved by elimination of variables. This interpretation of reduced-form SEMs as Eq. (1) has been exploited in methods of causal discovery, e.g., in the linear non-Gaussian additive-noise models and their successors (Kano and Shimizu, 2003; Shimizu et al., 2006; Monti et al., 2019). In the case of SEMs, the key assumption of this paper translates into the invariance of the structural equations across domains, which enables an intuitive assessment of the assumption based on prior knowledge. For instance, if all domains have the same causal mechanism and are in the same intervention state (including the intervention-free case), the modeling choice is deemed plausible. Note that the proposed method (Section 3) does not estimate the original structural equations; we only require estimating the reduced form, which is an easier problem than causal discovery.
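As a toy illustration (not taken from the paper), consider a two-variable Markovian SEM with independent exogenous variables $s_1, s_2$ and a structural function $m$: $x = s_1$ and $y = m(x) + s_2$. Solving the structural equations for $(x, y)$ yields the reduced form

$(x, y) \;=\; f(s_1, s_2) \;:=\; \big(s_1,\; m(s_1) + s_2\big),$

which is precisely an instance of Eq. (1): an invertible map from the independent components to the observed variables. The invariance of $f$ across domains then amounts to the invariance of the structural equations, while the exogenous distributions may differ between domains.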
3 Proposed Method: Mechanism Transfer
In this section, we detail the proposed method, mechanism transfer (Algorithm 1). The method first estimates the common generative mechanism from the source domain data and then uses it to perform data augmentation of the target domain data to transfer the knowledge (Figure 3).

Figure 3 (panels): (b) find ICs; (c) shuffle; (d) pseudo target data.
3.1 Step 1: Estimate $f$ using the source domain data
The first step estimates the common transformation $f$ by nonlinear ICA, namely via generalized contrastive learning (GCL; Hyvärinen et al., 2019). GCL uses auxiliary information for training a certain binary classification function equipped with a parametrized feature extractor $g$. The trained feature extractor is used as an estimator of $f^{-1}$. The auxiliary information we use in our problem setup is the domain index $k \in [K]$. The classification function to be trained in GCL is $r(z, k) := \sum_{j=1}^{d} \psi_j(g_j(z), k)$, consisting of $g = (g_1, \ldots, g_d)$ and $\psi = (\psi_1, \ldots, \psi_d)$, and the classification task of GCL is logistic regression to classify pairs $(z, k)$ carrying the correct domain index as positive and pairs $(z, \tilde{k})$ with a randomly drawn domain index $\tilde{k}$ as negative. This yields the following domain-contrastive learning criterion to estimate $f^{-1}$:

$\displaystyle \min_{g \in \mathcal{G},\, \psi \in \Psi} \ \widehat{\mathbb{E}}_{(z, k)}\big[\phi\big(r(z, k)\big)\big] \;+\; \widehat{\mathbb{E}}_{(z, k)}\,\mathbb{E}_{\tilde{k} \sim \mathrm{U}([K])}\big[\phi\big(-r(z, \tilde{k})\big)\big],$

where $\mathcal{G}$ and $\Psi$ are sets of parametrized functions, $\widehat{\mathbb{E}}_{(z, k)}$ denotes the empirical expectation over the pooled source domain data, $\mathbb{E}_{\tilde{k}}$ denotes the expectation with respect to $\tilde{k} \sim \mathrm{U}([K])$ ($\mathrm{U}$ denotes the uniform distribution), and $\phi(v) := \log(1 + e^{-v})$ is the logistic loss. We use the solution $\hat{g}$ as an estimator of $f^{-1}$, i.e., we set $\hat{f} := \hat{g}^{-1}$. In experiments, $g$ is implemented by invertible neural networks Kingma and Dhariwal (2018), $\psi$ by multi-layer perceptrons, and the expectation over $\tilde{k}$ is replaced by random sampling renewed for every mini-batch.
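The following is a minimal PyTorch sketch of this domain-contrastive training. It is not the authors' implementation: a small affine-coupling invertible network stands in for the Glow model, the penultimate networks are simple MLPs, and all sizes and hyperparameters are illustrative.

```python
# Minimal PyTorch sketch of Step 1 (domain-contrastive GCL training).
# Not the authors' implementation: a small affine-coupling invertible network
# stands in for the Glow model, and all sizes/hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineCoupling(nn.Module):
    """(x1, x2) -> (x1, x2 * exp(s(x1)) + t(x1)); invertible by construction."""
    def __init__(self, dim, hidden=64, flip=False):
        super().__init__()
        self.d1, self.flip = dim // 2, flip
        self.net = nn.Sequential(nn.Linear(self.d1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * (dim - self.d1)))
    def _coeffs(self, x1):
        s, t = self.net(x1).chunk(2, dim=-1)
        return torch.tanh(s), t          # bounded log-scale for stability
    def forward(self, x):
        if self.flip:
            x = x.flip(-1)
        x1, x2 = x[:, :self.d1], x[:, self.d1:]
        s, t = self._coeffs(x1)
        y = torch.cat([x1, x2 * torch.exp(s) + t], dim=-1)
        return y.flip(-1) if self.flip else y
    def inverse(self, y):
        if self.flip:
            y = y.flip(-1)
        y1, y2 = y[:, :self.d1], y[:, self.d1:]
        s, t = self._coeffs(y1)
        x = torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)
        return x.flip(-1) if self.flip else x

class InvertibleNet(nn.Module):
    """g: data point z -> candidate independent components."""
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [AffineCoupling(dim, flip=(i % 2 == 1)) for i in range(n_layers)])
    def forward(self, z):
        for layer in self.layers:
            z = layer(z)
        return z
    def inverse(self, s):
        for layer in reversed(self.layers):
            s = layer.inverse(s)
        return s

class PenultimateHeads(nn.Module):
    """r(z, k) = sum_j psi_j(g_j(z), k), with one small MLP per IC dimension."""
    def __init__(self, dim, n_domains, hidden=32):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(1 + n_domains, hidden), nn.ReLU(),
                           nn.Linear(hidden, 1)) for _ in range(dim)])
    def forward(self, s, k_onehot):
        outs = [head(torch.cat([s[:, j:j + 1], k_onehot], dim=-1))
                for j, head in enumerate(self.heads)]
        return torch.stack(outs).sum(dim=0).squeeze(-1)

def gcl_loss(g, psi, z, k, n_domains):
    """Logistic regression: (z, true domain) positive vs. (z, random domain) negative."""
    s = g(z)
    pos = psi(s, F.one_hot(k, n_domains).float())
    neg = psi(s, F.one_hot(torch.randint(0, n_domains, k.shape), n_domains).float())
    return F.softplus(-pos).mean() + F.softplus(neg).mean()

if __name__ == "__main__":
    dim, n_domains = 4, 9                         # toy sizes
    z = torch.randn(256, dim)                     # pooled source data points (x, y)
    k = torch.randint(0, n_domains, (256,))       # domain index of each point
    g, psi = InvertibleNet(dim), PenultimateHeads(dim, n_domains)
    opt = torch.optim.Adam(list(g.parameters()) + list(psi.parameters()), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss = gcl_loss(g, psi, z, k, n_domains)
        loss.backward()
        opt.step()
    # After training, g plays the role of the estimated inverse mechanism f^{-1}.
```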
3.2 Step 2: Extract and inflate the target ICs using $\hat{f}$
The second step extracts and inflates the target domain ICs using the estimated $\hat{f}$. We first extract the ICs of the target domain data by applying the inverse of $\hat{f}$ as

$\hat{s}^{(i)} := \hat{f}^{-1}(x_i, y_i), \qquad i \in [n_{\mathrm{Tar}}].$

After the extraction, we inflate the set of IC values by taking all dimension-wise combinations of the estimated ICs:

$\tilde{s}^{(i_1, \ldots, i_d)} := \big(\hat{s}^{(i_1)}_{1}, \hat{s}^{(i_2)}_{2}, \ldots, \hat{s}^{(i_d)}_{d}\big), \qquad (i_1, \ldots, i_d) \in [n_{\mathrm{Tar}}]^{d},$

to obtain new plausible IC values $\{\tilde{s}^{(i_1, \ldots, i_d)}\}$. The intuitive motivation of this procedure stems from the independence of the IC distributions. Theoretical justifications are provided in Section 4. In our implementation, we use invertible neural networks Kingma and Dhariwal (2018) to model the function $\hat{f}$, which enables the computation of the inverse $\hat{f}^{-1}$.
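A minimal NumPy sketch of the extraction and inflation follows; the invertible map is a hand-crafted stand-in for the trained network, and the data are synthetic.

```python
# Minimal NumPy sketch of Step 2: extract target ICs with the inverse of the
# estimated mechanism and inflate them by dimension-wise combinations.
# f_hat / f_hat_inv are hand-crafted stand-ins for the trained invertible network.
import itertools
import numpy as np

def f_hat(s):       # toy invertible mechanism: ICs -> data points (x, y)
    return np.stack([s[:, 0], s[:, 1] + np.tanh(s[:, 0])], axis=1)

def f_hat_inv(z):   # its exact inverse: data points -> ICs
    return np.stack([z[:, 0], z[:, 1] - np.tanh(z[:, 0])], axis=1)

rng = np.random.default_rng(0)
target_z = f_hat(rng.normal(size=(6, 2)))     # the few labeled target points

s_hat = f_hat_inv(target_z)                   # (n, d) estimated target ICs
n, d = s_hat.shape

# All n^d dimension-wise combinations: coordinate j may come from any data point.
inflated = np.array([[s_hat[idx[j], j] for j in range(d)]
                     for idx in itertools.product(range(n), repeat=d)])
print(inflated.shape)                         # (n**d, d) new plausible IC values
# Step 3 pushes these inflated ICs back through f_hat to synthesize pseudo data.
```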
3.3 Step 3: Synthesize target data from the inflated ICs
The third step estimates the target risk by the empirical distribution of the augmented data:

(2)   $\displaystyle \widehat{R}_{\mathrm{aug}}(h) := \frac{1}{n_{\mathrm{Tar}}^{d}} \sum_{(i_1, \ldots, i_d) \in [n_{\mathrm{Tar}}]^{d}} \ell\big(h(\tilde{x}^{(i_1, \ldots, i_d)}), \tilde{y}^{(i_1, \ldots, i_d)}\big), \quad \text{where } (\tilde{x}^{(i_1, \ldots, i_d)}, \tilde{y}^{(i_1, \ldots, i_d)}) := \hat{f}\big(\tilde{s}^{(i_1, \ldots, i_d)}\big),$

and performs empirical risk minimization. In experiments, we use a regularization term $\Omega(h)$ to control the complexity of $h$ and select

$\hat{h} \in \arg\min_{h \in \mathcal{H}} \ \widehat{R}_{\mathrm{aug}}(h) + \Omega(h).$

The generated hypothesis $\hat{h}$ is then used to make predictions in the target domain. In our experiments, we use $\Omega(h) = \lambda \|h\|^{2}$ with $\lambda > 0$, where the norm is that of the reproducing kernel Hilbert space (RKHS) from which we take the subset $\mathcal{H}$. Note that we may well subsample only a subset of the combinations in Eq. (2) to mitigate the computational cost, similarly to Clémençon et al. (2016) and Papa et al. (2015).
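A minimal sketch of this step, including the subsampling variant mentioned above, is given below: a random subset of index tuples is drawn instead of enumerating all combinations, pseudo target data are synthesized through a toy stand-in mechanism, and kernel ridge regression is fit on them. All values are illustrative.

```python
# Minimal sketch of Step 3 with the subsampling variant mentioned above:
# draw a random subset of index tuples from [n]^d, synthesize pseudo target
# data through a toy stand-in mechanism, and fit kernel ridge regression.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def f_hat(s):       # toy stand-in for the estimated mechanism: ICs -> (x, y)
    return np.stack([s[:, 0], s[:, 1] + np.tanh(s[:, 0])], axis=1)

rng = np.random.default_rng(1)
s_hat = rng.normal(size=(6, 2))                 # estimated target ICs, shape (n, d)
n, d = s_hat.shape

m = 200                                         # number of sampled combinations
idx = rng.integers(0, n, size=(m, d))           # random index tuples in [n]^d
pseudo_s = s_hat[idx, np.arange(d)]             # coordinate j taken from point idx[:, j]
pseudo_z = f_hat(pseudo_s)                      # synthesized (x, y) pairs
X_aug, y_aug = pseudo_z[:, :1], pseudo_z[:, 1]  # x is one-dimensional in this toy

# Regularized ERM on the augmented sample; alpha plays the role of the RKHS penalty.
predictor = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1.0).fit(X_aug, y_aug)
print(predictor.predict(X_aug[:3]))
```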
4 Theoretical Insights
In this section, we state two theorems to investigate the statistical properties of the method proposed in Section 3, providing justification beyond the intuition that we merely take advantage of the independence of the IC distributions.
4.1 Minimum variance property: Idealized case
The first theorem provides an insight into the statistical advantage of the proposed method: in the ideal case, the method attains the minimum variance among all possible unbiased risk estimators.
Theorem 1 (Minimum variance property of $\widehat{R}_{\mathrm{aug}}$).
Assume that $\hat{f} = f$. Then, for each $h \in \mathcal{H}$, the proposed risk estimator $\widehat{R}_{\mathrm{aug}}(h)$ is the uniformly minimum variance unbiased estimator of $R(h)$, i.e., for any unbiased estimator $\widetilde{R}(h)$ of $R(h)$,

$\mathrm{Var}\big[\widehat{R}_{\mathrm{aug}}(h)\big] \le \mathrm{Var}\big[\widetilde{R}(h)\big]$

as well as $\mathbb{E}\big[\widehat{R}_{\mathrm{aug}}(h)\big] = R(h)$ holds.
The proof of Theorem 1 is immediate once we rewrite $R(h)$ as a $d$-variate regular statistical functional and $\widehat{R}_{\mathrm{aug}}(h)$ as its corresponding generalized U-statistic (Lee, 1990). Details can be found in Supplementary Material D. Theorem 1 implies that the proposed risk estimator can have superior statistical efficiency, in terms of the variance, over the ordinary empirical risk estimator $\widehat{R}$.
4.2 Excess risk bound: More realistic case
In real situations, one has to estimate $f$. The following theorem characterizes the statistical gain and loss arising from the estimation error of $\hat{f}$. The intuition is that the increased number of data points suppresses the possibility of overfitting because the hypothesis has to fit the majority of the inflated data, but the estimator $\hat{f}$ has to be accurate so that fitting to the inflated data is meaningful. Note that the theorem is agnostic to how $\hat{f}$ is obtained, hence it applies to more general problem setups as long as $f$ can be estimated.
Theorem 2 (Excess risk bound).
Let be a minimizer of Eq. (2). Under appropriate assumptions (see Theorem 3 in Supplementary Material), for arbitrary , we have with probability at least ,
Here, is the -Sobolev norm, and we define the effective Rademacher complexity by
(3) |
where are independent sign variables, is the expectation with respect to , the dummy variables are i.i.d. copies of , and is defined by using the degree- symmetric group as
and and are higher order terms. The constants and depend only on and , respectively, while depends only on , and .
Details of the statement and the proof can be found in Supplementary Material C. The Sobolev norm Adams and Fournier (2003) emerges from the evaluation of the difference between the estimated IC distribution and the ground-truth IC distribution. In Theorem 2, the utility of the proposed method appears in the effective complexity measure. The complexity is defined by a set of functions which are marginalized over all but one argument, resulting in mitigated dependence on the input dimensionality from exponential to linear (Supplementary Material C, Remark 3).
5 Related Work and Discussion
In this section, we review some existing TAs for DA to clarify the relative position of the paper. We also clarify the relation to the literature of causality-related transfer learning.
5.1 Existing transfer assumptions
Here, we review some of the existing work and TAs. See Table 1 for a summary.
(1) Parametric assumptions.
Some TAs assume parametric distribution families, e.g., Gaussian mixture model in covariate shift Storkey and Sugiyama (2007). Some others assume parametric distribution shift, i.e., parametric representations of the target distribution given the source distributions. Examples include location-scale transform of class conditionals Zhang et al. (2013); Gong et al. (2016), linearly dependent class conditionals Zhang et al. (2015), and low-dimensional representation of the class conditionals after kernel embedding Stojanov et al. (2019). In some applications, e.g., remote sensing, some parametric assumptions have proven useful Zhang et al. (2013).
(2) Invariant conditionals and marginals.
Some methods assume invariance of certain conditionals or marginals Qui (2009): e.g., the conditional $p(y \mid x)$ in the covariate shift scenario Shimodaira (2000), the marginal distribution of appropriately transformed features in transfer component analysis Pan et al. (2011), the conditional given a selected subset of features Rojas-Carulla et al. (2018); Magliacane et al. (2018), the class-conditional $p(x \mid y)$ in the target shift (TarS) scenario Zhang et al. (2013); Nguyen et al. (2016), and a few components of regular-vine copulas and marginals in Lopez-paz et al. (2012). For example, the covariate shift scenario has been shown to fit well to brain-computer interface data Sugiyama et al. (2007).
(3) Small discrepancy or integral probability metric.
Another line of work relies on certain distributional similarities, e.g., integral probability metrics Courty et al. (2017) or hypothesis-class dependent discrepancies Ben-David et al. (2007); Blitzer et al. (2008); Ben-David et al. (2010); Kuroki et al. (2019); Zhang et al. (2019); Cortes et al. (2019). These methods assume the existence of an ideal joint hypothesis Ben-David et al. (2010), corresponding to a relaxation of the covariate shift assumption. These TAs are suited for unsupervised or semi-supervised DA in computer vision applications Courty et al. (2017).
(4) Transferable parameter.
Some others consider parameter transfer Kumagai (2016), where the TA is the existence of a parameterized feature extractor that performs well in the target domain for linear-in-parameter hypotheses and its learnability from the source domain data. For example, such a TA has been known to be useful in natural language processing or image recognition Lee et al. (2009); Kumagai (2016).
TA | AD | NP | Suited app. example |
---|---|---|---|
(1) Parametric | - | Remote sensing | |
(2) Invariant dist. | - | BCI | |
(3) Disc. / IPM | - | Computer vision | |
(4) Param-transfer | Computer vision | ||
(Ours) Mechanism | Medical records |
5.2 Causality for transfer learning
Our method can be seen as the first attempt to fully leverage structural causal models for DA. Most of the causality-inspired DA methods express their assumptions at the level of graphical causal models (GCMs), which carry much coarser information than the structural causal models (SCMs) (Peters et al., 2017, Table 1.1) exploited in this paper. Compared to previous work, our method takes one step further to assume and exploit the invariance of SCMs. Specifically, many studies assume the GCM $Y \to X$ (the anticausal scenario) following the seminal meta-analysis of Schölkopf et al. (2012) and use it to motivate their parametric distribution shift assumptions or parameter estimation procedures Zhang et al. (2013, 2015); Gong et al. (2016, 2018). Although such assumptions on the GCM have the virtue of being more robust to misspecification, they tend to require parametric assumptions to obtain theoretical justifications. On the other hand, our assumption enjoys a theoretical guarantee without relying on parametric assumptions.
One notable work in the existing literature is Magliacane et al. (2018) that considered the domain adaptation among different intervention states, a problem setup that complements ours that considers an intervention-free (or identical intervention across domains) case. To model intervention states, Magliacane et al. (2018) also formulated the problem setup using SCMs, similarly to the present paper. Therefore, we clarify a few key differences between Magliacane et al. (2018) and our work here. In terms of the methodology, Magliacane et al. (2018) takes a variable selection approach to select a set of predictor variables with an invariant conditional distribution across different intervention states. On the other hand, our method estimates the SEMs (in the reduced form) and applies a data augmentation procedure to transfer the knowledge. To the best of our knowledge, the present paper is the first to propose a way to directly use the estimated SEMs for domain adaptation, and the fine-grained use of the estimated SEMs enables us to derive an excess risk bound. In terms of the plausible applications, their problem setup may be more suitable for application fields with interventional experiments such as genomics, whereas ours may be more suited for fields where observational studies are more common such as health record analysis or economics. In Appendix E, we provide a more detailed comparison.
5.3 Plausibility of the assumptions
Checking the validity of the assumption.
As is often the case in DA, the scarcity of data precludes data-driven testing of the TAs, and we need domain knowledge to judge their validity. For our TA, the intuitive interpretation as the invariance of causal mechanisms (Section 2) can be used.
Invariant causal mechanisms.
The invariance of causal mechanisms has been exploited in recent work on causal discovery such as Xu et al. (2014) and Monti et al. (2019), or under the name of the multi-environment setting in Ghassami et al. (2017). Moreover, SEMs are normally assumed to remain invariant unless explicitly intervened on Hünermund and Bareinboim (2019). However, the invariance assumption presumes that the intervention states do not vary across domains (allowing for the intervention-free case), which can be limiting for applications where different interventions are likely to be present, e.g., different treatment policies being put in place in different hospitals. Nevertheless, the present work can already be of practical interest when combined with the effort to find suitable data or situations. For instance, one may find medical records in group hospitals where the same treatment criteria are in place, or local surveys in the same district enforcing identical regulations. Relaxing this requirement to facilitate the data-gathering process is an important direction for future work. For such future extensions, the present theoretical analyses can serve as a landmark establishing what can be guaranteed in the basic case without mechanism alterations.
Fully observed variables.
As this is the first algorithm in the approach of fully exploiting SCMs for DA, we consider the case where all variables are observed. Although it is often assumed in causal inference problems that there are unobserved confounding variables, we leave the extension to such a case for future work.
Required number of source domains.
A potential drawback of the proposed method is that it requires a number of source domains in order to satisfy the identification condition of the nonlinear ICA, namely GCL in this paper (Supplementary Material A). The requirement solely arises from the identification condition of the ICA method and therefore has the possibility to be made less stringent by the future development of nonlinear ICA methods. Moreover, if one can accept other identification conditions, one-sample ICA methods (e.g., linear ICA) can be also used in the proposed approach in a straightforward manner, and our theoretical analyses still hold regardless of the method chosen.
Flexibility of the model.
The relation between $x$ and $y$ can change drastically while $f$ is invariant. For example, even in a simple additive noise model $y = m(x) + e$ with independent noise $e$, the conditional $p(y \mid x)$ can shift drastically if the distribution of $e$ changes in a complex manner, e.g., from unimodal to multimodal.
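The following minimal NumPy sketch illustrates this point with hand-picked choices (the structural function and the noise laws are illustrative): the mechanism $y = m(x) + e$ is held fixed, yet the conditional $p(y \mid x)$ changes from unimodal to bimodal when only the noise distribution changes.

```python
# Illustration (hand-picked functions): the mechanism y = m(x) + e is fixed
# across domains, but p(y | x) changes shape when only the noise law changes.
import numpy as np

rng = np.random.default_rng(0)
m = np.tanh  # fixed structural function

def sample(n, noise_sampler):
    x = rng.normal(size=n)            # x = s_1
    e = noise_sampler(n)              # independent noise s_2
    return x, m(x) + e

x_a, y_a = sample(5000, lambda n: 0.3 * rng.normal(size=n))              # domain A
x_b, y_b = sample(5000, lambda n: rng.choice([-1.5, 1.5], size=n)
                                  + 0.3 * rng.normal(size=n))            # domain B
# The residual y - m(x) equals the noise, so its histogram is the shape of
# p(y | x) up to a shift: unimodal in domain A, bimodal in domain B.
print(np.histogram(y_a - m(x_a), bins=7)[0])
print(np.histogram(y_b - m(x_b), bins=7)[0])
```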
6 Experiment
In this section, we provide proof-of-concept experiments to demonstrate the effectiveness of the proposed approach. Note that the primary purpose of the experiments is to confirm whether the proposed method can properly perform DA in real-world data, and it is not to determine which DA method and TA are the most suited for the specific dataset.
6.1 Implementation details of the proposed method
Estimation of (Step 1).
We model by an -layer Glow neural network (Supplementary Material B.2). We model by a -hidden-layer neural network with a varied number of hidden units, output units, and the rectified linear unit activation LeCun et al. (2015). We use its -th output () as the value for . For training, we use the Adam optimizer Kingma and Ba (2017) with fixed parameters , fixed initial learning rate , and the maximum number of epochs . The other fixed hyperparameters of and its training process are described in Supplementary Material B.
Augmentation of target data (Step 3).
For each evaluation step, we take all combinations (with replacement) of the estimated ICs to synthesize target domain data. After we synthesize the data, we filter it by applying a novelty detection technique with respect to the union of the source domain data; namely, we use a one-class support vector machine Schölkopf et al. (2000) with a fixed parameter $\nu$ and a radial basis function (RBF) kernel with a fixed bandwidth. This is because the estimated transform $\hat{f}$ is not expected to be trained well outside the union of the supports of the source distributions. After performing the filtration, we combine the original target training data with the augmented data to ensure that the original data points are always included.
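A minimal sketch of this filtering step with scikit-learn's one-class SVM is given below; the data and the parameter values are illustrative, not those fixed in the paper.

```python
# Minimal sketch of the filtering step: keep only synthesized points that a
# one-class SVM trained on the pooled source data deems inliers.
# The data and parameter values below are illustrative, not the paper's.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
source_pool = rng.normal(size=(500, 2))                 # pooled source data (x, y)
pseudo_target = rng.normal(scale=2.0, size=(200, 2))    # synthesized target data

detector = OneClassSVM(kernel="rbf", nu=0.1, gamma=0.5).fit(source_pool)
keep = detector.predict(pseudo_target) == 1             # +1 = inlier, -1 = novelty
filtered = pseudo_target[keep]
print(f"kept {len(filtered)} of {len(pseudo_target)} synthesized points")
```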
Predictor hypothesis class $\mathcal{H}$.
As the predictor model, we use the kernel ridge regression (KRR) with RBF kernel. The bandwidth is chosen by the median heuristic similarly to Yamada et al. (2011) for simplicity. Note that the choice of the predictor model is for the sake of comparison with the other methods tailored for KRR Cortes et al. (2019), and that an arbitrary predictor hypothesis class and learning algorithm can be easily combined with the proposed approach.
Hyperparameter selection.
We perform grid-search for hyperparameter selection. The number of hidden units for is chosen from and the coefficient of weight-decay from . The regularization coefficient of KRR is chosen from following Cortes et al. (2019). To perform hyperparameter selection as well as early-stopping, we record the leave-one-out cross-validation (LOOCV) mean-squared error on the target training data every epochs and select its minimizer. The leave-one-out score is computed using the well-known analytic formula instead of training the predictor for each split. Note that we only use the original target domain data as the held-out set and not the synthesized data. In practice, if the target domain data is extremely few, one may well use percentile-cv Ng (1997) to mitigate overfitting of hyperparameter selection.
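For reference, the sketch below shows the closed-form leave-one-out computation for kernel ridge regression that avoids refitting per split, a standard identity for ridge-type smoothers: with hat matrix $H = K(K + \lambda I)^{-1}$, the LOO residual of point $i$ is $(y_i - \hat{y}_i) / (1 - H_{ii})$. The bandwidth is set by a median heuristic here, and all values are illustrative.

```python
# Minimal sketch of the closed-form leave-one-out error for kernel ridge
# regression: with hat matrix H = K (K + lam*I)^{-1}, the LOO residual of point i
# is (y_i - yhat_i) / (1 - H_ii), so no refitting per split is needed.
# Bandwidth via a median heuristic; all values are illustrative.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = X[:, 0] + 0.1 * rng.normal(size=20)

dists = squareform(pdist(X))                     # pairwise Euclidean distances
sigma = np.median(dists[dists > 0])              # median-heuristic bandwidth
K = np.exp(-(dists ** 2) / (2.0 * sigma ** 2))   # RBF Gram matrix
lam = 1e-2                                       # ridge regularization coefficient

H = K @ np.linalg.solve(K + lam * np.eye(len(y)), np.eye(len(y)))
loo_residuals = (y - H @ y) / (1.0 - np.diag(H))
print(np.mean(loo_residuals ** 2))               # LOOCV mean-squared error
```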
Computation environment
6.2 Experiment using real-world data
Dataset.
We use the gasoline consumption data (Greene, 2012, p. 284, Example 9.5), which is a panel dataset of gasoline usage in 18 OECD countries over 19 years. We consider each country as a domain, and we disregard the time-series structure and treat the data as i.i.d. samples for each country in this proof-of-concept experiment. The dataset contains four variables, all of which are log-transformed: motor gasoline consumption per car (the predicted variable), per-capita income, motor gasoline price, and the stock of cars per capita (the predictor variables) Baltagi and Griffin (1983). For further details of the data, see Supplementary Material B. We used this dataset because there are very few public datasets for domain-adapting regression tasks Cortes and Mohri (2014), especially for multi-source DA, and also because the dataset has been used in econometric analyses involving SEMs Baltagi (2005), conforming to our approach.
Compared methods.
We compare the following DA methods, all of which apply to regression problems. Unless explicitly specified, the predictor class is chosen to be KRR with the same hyperparameter candidates as the proposed method (Section 6.1). Further details are described in Supplementary Material B.5.
-
•
Naive baselines (SrcOnly, TarOnly, and S&TV): SrcOnly (resp. TarOnly) trains a predictor on the source domain data (resp. target training data) alone, without any adaptation. SrcOnly can be effective if the source domains and the target domain have highly similar distributions. The S&TV baseline trains on both source and target domain data, but the LOOCV score is computed only from the target domain data.
-
•
TrAdaBoost: Two-stage TrAdaBoost.R2; a boosting method tailored for few-shot regression transfer proposed in Pardoe and Stone (2010). It is an iterative method with early-stopping Pardoe and Stone (2010), for which we use the leave-one-out cross-validation score on the target domain data as the criterion. As suggested in Pardoe and Stone (2010), we set the maximum number of outer loop iterations at . The base predictor is the decision tree regressor with the maximum depth Hastie et al. (2009). Note that although TrAdaBoost does not have a clarified transfer assumption, we compare the performance for reference.
-
•
IW: Importance-weighted KRR using RuLSIF Yamada et al. (2011). The method directly estimates the relative joint density ratio function $p_{\mathrm{Tar}}(z) / (\alpha\, p_{\mathrm{Tar}}(z) + (1 - \alpha)\, p_{\mathrm{Src}}(z))$ for $\alpha \in [0, 1)$, where $p_{\mathrm{Src}}$ is a hypothetical source distribution created by pooling all source domain data. Following Yamada et al. (2011), we experiment on $\alpha \in \{0, 0.5, 0.95\}$ and report the results separately. The regularization coefficient is selected via importance-weighted cross-validation Sugiyama et al. (2007).
-
•
GDM: Generalized discrepancy minimization Cortes et al. (2019). This method performs instance-weighted training on the source domain data with weights that minimize the generalized discrepancy (via quadratic programming). We select the hyper-parameters as suggested by Cortes et al. (2019). The selection criterion is the performance of the trained predictor on the target training labels, as the method trains on the source domain data and the target unlabeled data.
-
•
Copula: Non-parametric regular-vine copula method Lopez-paz et al. (2012). This method presumes using a specific joint density estimator called regular-vine (R-vine) copulas. Adaptation is realized in two steps: the first step estimates which components of the constructed R-vine model are different by performing two-sample tests based on maximum mean discrepancy Lopez-paz et al. (2012), and the second step re-estimates the components in which a change is detected using only the target domain data.
-
•
LOO (reference score): The leave-one-out cross-validated error estimate is also calculated for reference. It is the average error of predicting a single held-out test point when the predictor is trained on all of the remaining target domain data, including the points that serve as the test set for the other algorithms.
Evaluation procedure.
The prediction accuracy was measured by the mean squared error (MSE). For each train-test split, we randomly select one-third (6 points) of the target domain dataset as the training set and use the rest as the test set. All experiments were repeated 10 times with different train-test splits of target domain data.
Results.
The results are reported in Table 2. We report the MSE scores normalized by that of LOO to facilitate the comparison, similarly to Cortes and Mohri (2014). In many of the target domain choices, the naive baselines (SrcOnly and S&TV) suffer from negative transfer, i.e., higher average MSE than TarOnly (in 12 out of 18 domains). On the other hand, the proposed method successfully performs better than TarOnly or is more resistant to negative transfer than the other compared methods. The performances of GDM, Copula, and IW are often inferior even to the baseline performance of S&TV. For GDM and IW, this can be attributed to the fact that these methods presume the availability of abundant (unlabeled) target domain data, which is unavailable in the current problem setup. For Copula, the performance inferior to the naive baselines is possibly due to the restriction of the predictor model to its accompanying probability model Lopez-paz et al. (2012). TrAdaBoost works reasonably well for many but not all domains. For some domains, it suffered from negative transfer similarly to the others, possibly because of the very small number of training data points. Note that the transfer assumption of TrAdaBoost has not been stated Pardoe and Stone (2010), and it is not understood when the method is reliable. The domains on which the baselines perform better than the proposed method can be explained by the following two cases: (1) easier domains allow naive baselines to perform well, and (2) some domains may have a deviated mechanism $f$. Case (1) implies that estimating $f$ is unnecessary, hence the proposed method can be suboptimal (more likely for JPN, NLD, NOR, and SWE, where SrcOnly or S&TV improve upon TarOnly). On the other hand, case (2) implies that an approximation error was induced, as in Theorem 2 (more likely for IRL and ITA). In this case, the other methods also perform poorly, implying the difficulty of the problem instance. In either case, in practice, one may well perform cross-validation to fall back to the baselines.
Target | (LOO) | TarOnly | Prop | SrcOnly | S&TV | TrAda | GDM | Copula | IW(.0) | IW(.5) | IW(.95) |
---|---|---|---|---|---|---|---|---|---|---|---|
AUT | 1 | 5.88 (1.60) | 5.39 (1.86) | 9.67 (0.57) | 9.84 (0.62) | 5.78 (2.15) | 31.56 (1.39) | 27.33 (0.77) | 39.72 (0.74) | 39.45 (0.72) | 39.18 (0.76) |
BEL | 1 | 10.70 (7.50) | 7.94 (2.19) | 8.19 (0.68) | 9.48 (0.91) | 8.10 (1.88) | 89.10 (4.12) | 119.86 (2.64) | 105.15 (2.96) | 105.28 (2.95) | 104.30 (2.95) |
CAN | 1 | 5.16 (1.36) | 3.84 (0.98) | 157.74 (8.83) | 156.65 (10.69) | 51.94 (30.06) | 516.90 (4.45) | 406.91 (1.59) | 592.21 (1.87) | 591.21 (1.84) | 589.87 (1.91) |
DNK | 1 | 3.26 (0.61) | 3.23 (0.63) | 30.79 (0.93) | 28.12 (1.67) | 25.60 (13.11) | 16.84 (0.85) | 14.46 (0.79) | 22.15 (1.10) | 22.11 (1.10) | 21.72 (1.07) |
FRA | 1 | 2.79 (1.10) | 1.92 (0.66) | 4.67 (0.41) | 3.05 (0.11) | 52.65 (25.83) | 91.69 (1.34) | 156.29 (1.96) | 116.32 (1.27) | 116.54 (1.25) | 115.29 (1.28) |
DEU | 1 | 16.99 (8.04) | 6.71 (1.23) | 229.65 (9.13) | 210.59 (14.99) | 341.03 (157.80) | 739.29 (11.81) | 929.03 (4.85) | 817.50 (4.60) | 818.13 (4.55) | 812.60 (4.57) |
GRC | 1 | 3.80 (2.21) | 3.55 (1.79) | 5.30 (0.90) | 5.75 (0.68) | 11.78 (2.36) | 26.90 (1.89) | 23.05 (0.53) | 47.07 (1.92) | 45.50 (1.82) | 45.72 (2.00) |
IRL | 1 | 3.05 (0.34) | 4.35 (1.25) | 135.57 (5.64) | 12.34 (0.58) | 23.40 (17.50) | 3.84 (0.22) | 26.60 (0.59) | 6.38 (0.13) | 6.31 (0.14) | 6.16 (0.13) |
ITA | 1 | 13.00 (4.15) | 14.05 (4.81) | 35.29 (1.83) | 39.27 (2.52) | 87.34 (24.05) | 226.95 (11.14) | 343.10 (10.04) | 244.25 (8.50) | 244.84 (8.58) | 242.60 (8.46) |
JPN | 1 | 10.55 (4.67) | 12.32 (4.95) | 8.10 (1.05) | 8.38 (1.07) | 18.81 (4.59) | 95.58 (7.89) | 71.02 (5.08) | 135.24 (13.57) | 134.89 (13.50) | 134.16 (13.43) |
NLD | 1 | 3.75 (0.80) | 3.87 (0.79) | 0.99 (0.06) | 0.99 (0.05) | 9.45 (1.43) | 28.35 (1.62) | 29.53 (1.58) | 33.28 (1.78) | 33.23 (1.77) | 33.14 (1.77) |
NOR | 1 | 2.70 (0.51) | 2.82 (0.73) | 1.86 (0.29) | 1.63 (0.11) | 24.25 (12.50) | 23.36 (0.88) | 31.37 (1.17) | 27.86 (0.94) | 27.86 (0.93) | 27.52 (0.91) |
ESP | 1 | 5.18 (1.05) | 6.09 (1.53) | 5.17 (1.14) | 4.29 (0.72) | 14.85 (4.20) | 33.16 (6.99) | 152.59 (6.19) | 53.53 (2.47) | 52.56 (2.42) | 52.06 (2.40) |
SWE | 1 | 6.44 (2.66) | 5.47 (2.63) | 2.48 (0.23) | 2.02 (0.21) | 2.18 (0.25) | 15.53 (2.59) | 2706.85 (17.91) | 118.46 (1.64) | 118.23 (1.64) | 118.27 (1.64) |
CHE | 1 | 3.51 (0.46) | 2.90 (0.37) | 43.59 (1.77) | 7.48 (0.49) | 38.32 (9.03) | 8.43 (0.24) | 29.71 (0.53) | 9.72 (0.29) | 9.71 (0.29) | 9.79 (0.28) |
TUR | 1 | 1.65 (0.47) | 1.06 (0.15) | 1.22 (0.18) | 0.91 (0.09) | 2.19 (0.34) | 64.26 (5.71) | 142.84 (2.04) | 159.79 (2.63) | 157.89 (2.63) | 157.13 (2.69) |
GBR | 1 | 5.95 (1.86) | 2.66 (0.57) | 15.92 (1.02) | 10.05 (1.47) | 7.57 (5.10) | 50.04 (1.75) | 68.70 (1.25) | 70.98 (1.01) | 70.87 (0.99) | 69.72 (1.01) |
USA | 1 | 4.98 (1.96) | 1.60 (0.42) | 21.53 (3.30) | 12.28 (2.52) | 2.06 (0.47) | 308.69 (5.20) | 244.90 (1.82) | 462.51 (2.14) | 464.75 (2.08) | 465.88 (2.16) |
#Best | - | 2 | 10 | 2 | 4 | 0 | 0 | 0 | 0 | 0 | 0 |
7 Conclusion
In this paper, we proposed a novel few-shot supervised DA method for regression problems based on the assumption of a shared generative mechanism. Through theoretical and experimental analyses, we demonstrated the effectiveness of the proposed approach. By considering the latent common structure behind the domain distributions, the proposed method successfully induces positive transfer even when a naive usage of the source domain data suffers from negative transfer. Our future work includes an experimental comparison with more extensive datasets and methods, as well as an extension to the case where the underlying mechanisms are not exactly identical but similar.
Acknowledgments
The authors would like to thank the anonymous reviewers for their insightful comments and thorough discussions. We would also like to thank Yuko Kuroki and Taira Tsuchiya for proofreading the manuscript. This work was supported by RIKEN Junior Research Associate Program. TT was supported by Masason Foundation. IS was supported by KAKEN 17H04693. MS was supported by JST CREST Grant Number JPMJCR18A2.
References
- Qui (2009) Joaquin Quiñonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D. Lawrence, editors. Dataset Shift in Machine Learning. Neural Information Processing Series. MIT Press, Cambridge, Mass, 2009.
- Adams and Fournier (2003) Robert A. Adams and John JF Fournier. Sobolev Spaces. Academic press, 2003.
- Arjovsky et al. (2020) Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv:1907.02893 [cs, stat], March 2020.
- Baltagi (2005) Badi Baltagi. Econometric Analysis of Panel Data. New York: John Wiley and Sons, 3rd edition, 2005.
- Baltagi and Griffin (1983) Badi H. Baltagi and James M. Griffin. Gasoline demand in the OECD: An application of pooling and testing procedures. European Economic Review, 22(2):117–137, 1983.
- Ben-David et al. (2007) Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. In Advances in Neural Information Processing Systems 19, pages 137–144. MIT Press, 2007.
- Ben-David et al. (2010) Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79(1-2):151–175, 2010.
- Blitzer et al. (2008) John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman. Learning bounds for domain adaptation. In Advances in Neural Information Processing Systems 20, pages 129–136. Curran Associates, Inc., 2008.
- Clémençon et al. (2016) Stephan Clémençon, Igor Colin, and Aurélien Bellet. Scaling-up empirical risk minimization: Optimization of incomplete U-statistics. Journal of Machine Learning Research, 17(76):1–36, 2016.
- Cortes and Mohri (2014) Corinna Cortes and Mehryar Mohri. Domain adaptation and sample bias correction theory and algorithm for regression. Theoretical Computer Science, 519:103–126, 2014.
- Cortes et al. (2019) Corinna Cortes, Mehryar Mohri, and Andrés Muñoz Medina. Adaptation based on generalized discrepancy. Journal of Machine Learning Research, 20(1):1–30, 2019.
- Courty et al. (2017) Nicolas Courty, Rémi Flamary, Amaury Habrard, and Alain Rakotomamonjy. Joint distribution optimal transportation for domain adaptation. In Advances in Neural Information Processing Systems 30, pages 3730–3739. Curran Associates, Inc., 2017.
- Ghassami et al. (2017) AmirEmad Ghassami, Saber Salehkaleybar, Negar Kiyavash, and Kun Zhang. Learning causal structures using regression invariance. In Advances in Neural Information Processing Systems 30, pages 3011–3021. Curran Associates, Inc., 2017.
- Golub and Van Loan (2013) Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. The Johns Hopkins University Press, Baltimore, 4th edition, 2013.
- Gong et al. (2016) Mingming Gong, Kun Zhang, Tongliang Liu, Dacheng Tao, Clark Glymour, and Bernhard Schölkopf. Domain adaptation with conditional transferable components. In Proceedings of the 33rd International Conference on Machine Learning, pages 2839–2848, New York, USA, 2016. PMLR.
- Gong et al. (2018) Mingming Gong, Kun Zhang, Biwei Huang, Clark Glymour, Dacheng Tao, and Kayhan Batmanghelich. Causal generative domain adaptation networks. arXiv:1804.04333 [cs, stat], April 2018.
- Greene (2012) William H. Greene. Econometric Analysis. Prentice Hall, Boston, 7th edition, 2012.
- Hastie et al. (2009) Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2nd edition, 2009.
- Hayfield and Racine (2008) Tristen Hayfield and Jeffrey S. Racine. Nonparametric econometrics: The np package. Journal of Statistical Software, 27(5), 2008.
- Hünermund and Bareinboim (2019) Paul Hünermund and Elias Bareinboim. Causal inference and data-fusion in econometrics. arXiv:1912.09104 [econ], December 2019.
- Hyvärinen and Pajunen (1999) A. Hyvärinen and P. Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. Neural networks, 12(3):429–439, 1999.
- Hyvärinen and Morioka (2016) Aapo Hyvärinen and Hiroshi Morioka. Unsupervised feature extraction by time-contrastive learning and nonlinear ICA. In Advances in Neural Information Processing Systems 29, pages 3765–3773. Curran Associates, Inc., 2016.
- Hyvärinen and Morioka (2017) Aapo Hyvärinen and Hiroshi Morioka. Nonlinear ICA of temporally dependent stationary sources. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pages 460–469, 2017.
- Hyvärinen et al. (2019) Aapo Hyvärinen, Hiroaki Sasaki, and Richard Turner. Nonlinear ICA using auxiliary variables and generalized contrastive learning. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, pages 859–868, 2019.
- Ipsen and Rehman (2008) Ilse C. F. Ipsen and Rizwana Rehman. Perturbation bounds for determinants and characteristic polynomials. SIAM Journal on Matrix Analysis and Applications, 30(2):762–776, 2008.
- Kano and Shimizu (2003) Yutaka Kano and Shohei Shimizu. Causal inference using nonnormality. In Proceedings of the International Symposium on the Science of Modeling, the 30th Anniversary of the Information Criterion,, pages 261–270, 2003.
- Khemakhem et al. (2019) Ilyes Khemakhem, Diederik P. Kingma, Ricardo Pio Monti, and Aapo Hyvärinen. Variational autoencoders and nonlinear ICA: A unifying framework. arXiv:1907.04809 [cs, stat], July 2019.
- Kingma and Ba (2017) Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980 [cs], January 2017.
- Kingma and Dhariwal (2018) Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems 31, pages 10215–10224. Curran Associates, Inc., 2018.
- Kumagai (2016) Wataru Kumagai. Learning bound for parameter transfer learning. In Advances in Neural Information Processing Systems 29, pages 2721–2729. Curran Associates, Inc., 2016.
- Kuroki et al. (2019) Seiichi Kuroki, Nontawat Charoenphakdee, Han Bao, Junya Honda, Issei Sato, and Masashi Sugiyama. Unsupervised domain adaptation based on source-guided discrepancy. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4122–4129, 2019.
- LeCun et al. (2015) Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
- Lee (1990) A. J. Lee. U-Statistics: Theory and Practice. M. Dekker, New York, 1990.
- Lee et al. (2009) Honglak Lee, Rajat Raina, Alex Teichman, and Andrew Y. Ng. Exponential family sparse coding with applications to self-taught learning. In Proceedings of the 21st International Jont Conference on Artifical Intelligence, pages 1113–1119, San Francisco, CA, USA, 2009. Morgan Kaufmann Publishers Inc.
- Lopez-paz et al. (2012) David Lopez-paz, Jose M. Hernández-lobato, and Bernhard Schölkopf. Semi-supervised domain adaptation with non-parametric copulas. In Advances in Neural Information Processing Systems 25, pages 665–673. Curran Associates, Inc., 2012.
- Magliacane et al. (2018) Sara Magliacane, Thijs van Ommen, Tom Claassen, Stephan Bongers, Philip Versteeg, and Joris M Mooij. Domain adaptation by using causal inference to predict invariant conditional distributions. In Advances in Neural Information Processing Systems 31, pages 10846–10856. Curran Associates, Inc., 2018.
- Mohri et al. (2012) Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. Adaptive Computation and Machine Learning Series. MIT Press, Cambridge, MA, 2012.
- Monti et al. (2019) Ricardo Pio Monti, Kun Zhang, and Aapo Hyvärinen. Causal discovery with general non-linear relationships using non-linear ICA. In Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence, 2019.
- Ng (1997) Andrew Y. Ng. Preventing “overfitting” of cross-validation data. In Proceedings of the Fourteenth International Conference on Machine Learning, pages 245–253, San Francisco, CA, USA, 1997.
- Nguyen et al. (2016) Tuan Duong Nguyen, Marthinus Christoffel, and Masashi Sugiyama. Continuous Target Shift Adaptation in Supervised Learning. In Asian Conference on Machine Learning, volume 45 of Proceedings of Machine Learning Research, pages 285–300. PMLR, 2016.
- Pan et al. (2011) Sinno Jialin Pan, Ivor W. Tsang, James T. Kwok, and Qiang Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199–210, 2011.
- Papa et al. (2015) Guillaume Papa, Stéphan Clémençon, and Aurélien Bellet. SGD Algorithms based on Incomplete U-statistics: Large-Scale Minimization of Empirical Risk. In Advances in Neural Information Processing Systems 28, pages 1027–1035. Curran Associates, Inc., 2015.
- Pardoe and Stone (2010) David Pardoe and Peter Stone. Boosting for regression transfer. In Proceedings of the Twenty-Seventh International Conference on Machine Learning, pages 863–870, Haifa, Israel, 2010.
- Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc., 2019.
- Pearl (2009) Judea Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, Cambridge, U.K. ; New York, 2nd edition, 2009.
- Peters et al. (2017) Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Elements of Causal Inference: Foundations and Learning Algorithms. Adaptive Computation and Machine Learning Series. The MIT Press, Cambridge, Massachuestts, 2017.
- R Core Team (2018) R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria, 2018.
- Reiss and Wolak (2007) Peter C. Reiss and Frank A. Wolak. Structural econometric modeling: Rationales and examples from industrial organization. In Handbook of Econometrics, volume 6, pages 4277–4415. Elsevier, 2007.
- Rejchel (2012) Wojciech Rejchel. On ranking and generalization bounds. Journal of Machine Learning Research, 13(May):1373–1392, 2012.
- Rojas-Carulla et al. (2018) Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, and Jonas Peters. Invariant models for causal transfer learning. Journal of Machine Learning Research, 19(36):1–34, 2018.
- Schölkopf et al. (2000) Bernhard Schölkopf, Robert C Williamson, Alex J. Smola, John Shawe-Taylor, and John C. Platt. Support vector method for novelty detection. In Advances in Neural Information Processing Systems 12, pages 582–588. MIT Press, 2000.
- Schölkopf et al. (2012) Bernhard Schölkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, and Joris Mooij. On causal and anticausal learning. In Proceedings of the 29th International Coference on Machine Learning, pages 459–466. Omnipress, 2012.
- Sherman (1994) Robert P. Sherman. Maximal inequalities for degenerate U-processes with applications to optimization estimators. The Annals of Statistics, 22(1):439–459, 1994.
- Shimizu et al. (2006) Shohei Shimizu, Patrik O Hoyer, Aapo Hyvärinen, and Antti J Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7(October):2003–2030, 2006.
- Shimodaira (2000) Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000.
- Stojanov et al. (2019) Petar Stojanov, Mingming Gong, Jaime Carbonell, and Kun Zhang. Data-driven approach to multiple-source domain adaptation. In Proceedings of Machine Learning Research, volume 89, pages 3487–3496. PMLR, 2019.
- Storkey and Sugiyama (2007) Amos J Storkey and Masashi Sugiyama. Mixture regression for covariate shift. In Advances in Neural Information Processing Systems 19, pages 1337–1344. MIT Press, 2007.
- Sugiyama et al. (2007) Masashi Sugiyama, Matthias Krauledat, and Klaus-Robert Müller. Covariate shift adaptation by importance weighted cross validation. Journal of Machine Learning Research, 8(May):985–1005, 2007.
- Wainwright (2019) Martin J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge University Press, 1st edition, 2019.
- Xu et al. (2014) Lele Xu, Tingting Fan, Xia Wu, KeWei Chen, Xiaojuan Guo, Jiacai Zhang, and Li Yao. A pooling-LiNGAM algorithm for effective connectivity analysis of fMRI data. Frontiers in Computational Neuroscience, 8(October):125, 2014.
- Yadav et al. (2018) Pranjul Yadav, Michael Steinbach, Vipin Kumar, and Gyorgy Simon. Mining electronic health records (EHRs): A survey. ACM Computing Surveys, 50(6):1–40, 2018.
- Yamada et al. (2011) Makoto Yamada, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya, and Masashi Sugiyama. Relative density-ratio estimation for robust distribution comparison. In Advances in Neural Information Processing Systems 24, pages 594–602. Curran Associates, Inc., 2011.
- Zhang et al. (2015) K. Zhang, M. Gong, and B. Schölkopf. Multi-source domain adaptation: A causal view. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pages 3150–3157. AAAI Press, 2015.
- Zhang et al. (2013) Kun Zhang, Bernhard Schölkopf, Krikamol Muandet, and Zhikun Wang. Domain adaptation under target and conditional shift. In Proceedings of the 30th International Conference on Machine Learning, pages 819–827, 2013.
- Zhang et al. (2019) Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael Jordan. Bridging theory and algorithm for domain adaptation. In Proceedings of the 36th International Conference on Machine Learning, pages 7404–7413, Long Beach, California, USA, 2019. PMLR.
This is the Supplementary Material for “Few-shot Domain Adaptation by Causal Mechanism Transfer.” Table 3 summarizes the abbreviations and the symbols used in the paper.
Abbreviation / Symbol | Meaning |
---|---|
DA | Domain adaptation |
TA | Transfer assumption |
SEM | Structural equation model |
GCM | Graphical causal model |
SCM | Structural causal model |
IC | Independent component |
ICA | Independent component analysis |
GCL | Generalized contrastive learning |
i.i.d. | Independent and identically distributed |
where | |
The -Sobolev norm | |
The predictor random vector (-valued) | |
The predicted random variable (-valued) | |
The joint random variable (-valued) | |
The independent component vector (-valued) | |
The space of | |
The space of | |
The space of | |
Predictor hypothesis class | |
Loss function | |
Target domain risk | |
Minimizer of target domain risk | |
The set of independent distributions | |
Ground truth mixing function | |
The target joint distribution | |
The joint distribution of source domain | |
The target independent component (IC) distribution | |
The IC distribution of source domain | |
The dimension of | |
The number of source domains | |
The size of the target labeled sample | |
The size of the labeled sample from source domain | |
Target labeled data set | |
Source labeled data set of source domain | |
The ordinary empirical risk estimator | |
The proposed risk estimator (Eq. (2)) | |
The estimator of | |
The penultimate layer functions composed with during GCL |
Appendix A Preliminary: Nonlinear ICA
Here, we use the same notation as in the main text. The recently developed nonlinear ICA provides algorithms to estimate the mixing function $f$. For the case of nonlinear $f$, the impossibility of identification (i.e., consistent estimation) of $f$ in the one-sample i.i.d. case was established more than two decades ago (Hyvärinen and Pajunen, 1999). However, recently, various conditions have been proposed under which $f$ can be identified with the help of auxiliary information Hyvärinen and Morioka (2016, 2017); Hyvärinen et al. (2019); Khemakhem et al. (2019).
The identification condition that is directly relevant to this paper is that of the generalized contrastive learning (GCL) proposed in Hyvärinen et al. (2019). Hyvärinen et al. (2019) assumes that an auxiliary variable $u$ from some measurable set $\mathcal{U}$ is obtained for each data point as $(z, u)$ and that the ICs are conditionally independent given $u$:

$q(s \mid u) = \prod_{j=1}^{d} q_j(s_j \mid u).$

Under such conditions, GCL estimates $f^{-1}$ by training a classification function

(4)   $r(z, u) := \sum_{j=1}^{d} \psi_j(g_j(z), u)$

parametrized by $g = (g_1, \ldots, g_d)$ and $\psi = (\psi_1, \ldots, \psi_d)$ with the logistic loss for classifying

$(z, u) \ \text{(positive)} \quad \text{versus} \quad (z, u^{*}) \ \text{(negative)},$

where $u^{*}$ is drawn from the marginal distribution of $u$ independently of $z$. The key condition for the identification of $f$ is the following.
Assumption 1 (Assumption of variability; Hyvärinen et al., 2019, Theorem 1).
For any $s \in \mathbb{R}^{d}$, there exist $2d + 1$ distinct points in $\mathcal{U}$, denoted by $u_0, u_1, \ldots, u_{2d}$, such that the set of $2d$-dimensional vectors $\{w(s, u_j) - w(s, u_0)\}_{j=1}^{2d}$ is linearly independent, where

$w(s, u) := \Big(\dfrac{\partial \log q_1(s_1 \mid u)}{\partial s_1}, \ldots, \dfrac{\partial \log q_d(s_d \mid u)}{\partial s_d},\ \dfrac{\partial^2 \log q_1(s_1 \mid u)}{\partial s_1^2}, \ldots, \dfrac{\partial^2 \log q_d(s_d \mid u)}{\partial s_d^2}\Big).$
Under Assumption 1 and some regularity conditions, Theorem 1 of Hyvärinen et al. (2019) states that the feature extractor $g$ in Eq. (4) trained by GCL is a consistent estimator of $f^{-1}$ up to additional dimension-wise invertible transformations. Note that the assumption is intrinsically difficult to confirm based on data due to the unsupervised nature of the problem setting. In this paper, we use the source domain index as the auxiliary variable and employ GCL for domain adaptation. The present version of Assumption 1 requires that we have at least $2d + 1$ distinct source domains. Although this condition can be restrictive for high-dimensional data, we conjecture that there is a possibility for this assumption to be made less stringent in the future because the identification condition is only known to be a sufficient condition, not a necessary one. However, pursuing a refinement of the identification condition is out of the scope of this paper. Among the various methods for nonlinear ICA, we chose to use GCL Hyvärinen et al. (2019) because it can operate under a nonparametric assumption on the IC distributions, whereas other nonlinear ICA methods Hyvärinen and Morioka (2016, 2017); Khemakhem et al. (2019) may require parametric assumptions.
Appendix B Experiment Details
Here, we describe more implementation details of the experiment. Our experiment code can be found at https://github.com/takeshi-teshima/few-shot-domain-adaptation-by-causal-mechanism-transfer.
B.1 Dataset details
Gasoline consumption data.
The data was downloaded from http://bcs.wiley.com/he-bcs/Books?action=resource&bcsId=4338&itemId=1118672321&resourceId=13452.
B.2 Model details: Invertible neural networks
Here, we describe the details of the Glow architecture Kingma and Dhariwal (2018) used in our experiments. Glow consists of three types of layers which are invertible by design, namely affine coupling layers, invertible 1×1 convolution layers, and activation normalization (actnorm) layers. In our implementation, we use actnorm as the first layer, and each of the subsequent blocks consists of a 1×1 convolution layer followed by an affine coupling layer.
Affine coupling layers.
The coefficients $s$ (scale) and $t$ (shift) for the affine coupling layers, in the notation of Kingma and Dhariwal (2018), are parametrized by two one-hidden-layer neural networks whose numbers of hidden units are the same and whose first-layer parameters are shared. The activation functions of the first layer, the second layer of $s$, and the second layer of $t$ are the rectified linear unit (ReLU) activation LeCun et al. (2015), the hyperbolic tangent function, and the linear activation function, respectively. A standard practice for affine coupling layers is to compose the coefficient $s$ with an exponential function so as to simplify the computation of the log-determinant of the Jacobian Kingma and Dhariwal (2018). In our implementation, since we do not require the computation of the log-determinant, we omit this device and instead add a constant offset to $s$; this addition shifts the parameter space so that the constant zero function corresponds to the identity map. The split of the affine coupling layers is fixed throughout.
1×1 convolution layers.
We initialize the parameters of the neural networks by where is the number of parameters of each layer and is the normal distribution.
B.3 Model details: Penultimate layer networks
We initialize the parameter for each layer of by , where is the number of input features and is the uniform distribution.
B.4 Training details
During the training of GCL, we fix the batch size at 32.
B.5 Compared methods details
Here, we detail the methods compared in the experiment. Note that the present paper focuses on regression problems because our approach is based on ICA; hence, methods for classification domain adaptation are not comparable.
TrAdaBoost.
As suggested in Pardoe and Stone (2010), we use the linear loss function and set the maximum number of internal boosting iterations at .
GDM.
We fix the number of samples used for approximating the maximization in the generalization discrepancy at . This method presumes hypothesis classes in a reproducing kernel Hilbert space (RKHS).
Copula.
For this model, the probabilistic model of a non-parametric R-vine copula of depth is used following Lopez-Paz et al. (2012). Kernel density estimators with the RBF kernel are used for estimating the marginal distributions and the copulas. The bandwidths of the RBF kernels are determined using the rule of thumb implemented as “normal-reference” in the np package of the R language Hayfield and Racine (2008). The predictions are made by numerically aggregating the estimated conditional distribution over the interval , where denotes the square root of the unbiased variance of . The aggregation is performed by discretizing the interval into a grid of points. The level of the two-sample test is fixed at for all combinations of the two-sample tests, following the experiment code of Lopez-Paz et al. (2012). This method is a single-source domain adaptation method, so we pool all source domain data for adaptation.
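As an illustration of this aggregation step, the sketch below approximates the conditional-mean prediction by discretizing an interval around a center value into a grid; the density-estimator interface, grid size, and interval width are hypothetical placeholders rather than the exact settings listed above.

```python
# Grid-based aggregation of an estimated conditional density p(y|x) into a
# point prediction (illustrative interface and settings).
import numpy as np

def predict_from_conditional_density(cond_density, x, y_center, y_scale,
                                     num_grid=100, width=3.0):
    """Approximate E[y|x] by discretizing an interval around y_center."""
    y_grid = np.linspace(y_center - width * y_scale,
                         y_center + width * y_scale, num_grid)
    weights = np.array([cond_density(y, x) for y in y_grid])
    weights = weights / (weights.sum() + 1e-12)   # guard against an all-zero density
    return float(np.sum(weights * y_grid))        # weighted average prediction
```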
Appendix C Details and Proofs of Theorem 2
Here, we detail the assumptions, the statement, and the proof of Theorem 2.
C.1 Notation
To make the proof self-contained, we first recall some general and problem-specific notation. In the notation here, we omit the domain identifiers, such as Tar or Src, from the distributions and the sample size, because only the target domain data and their distributions appear in the proofs. The theorem holds regardless of how is estimated as long as is independent of the target domain data. In the proof, we extend the maximal discrepancy bound for U-statistics, previously proved for the case of degree- in Rejchel (2012), to higher degrees.
General mathematical notation.
We denote the set of natural numbers (resp. real numbers) by (resp. ). For any , we define . We use to denote the number of -combinations of elements. For a finite set , the notation denotes the operator to take an average over , i.e., . For a -dimensional function , we denote its -th dimension () by suffixing . For a vector , we denote its -th element by . We denote the Jacobian determinant of a differentiable function at by . We denote the identity matrix by regardless of the size of the matrices when there is no ambiguity. For finite dimensional vectors, we denote the -norm by and the -norm by . For square matrices, we denote the operator- norm by and the operator- norm by . We use to denote the Sobolev space (on ) of order and define its associated norm by where is a multi-index and denotes the partial derivative (Adams and Fournier, 2003, Paragraph 3.1). We let be the degree- symmetric group, be the set of grouping of indices in , and be the set of all size- combinations (without replacement) of indices in .
Distributions and expectations.
We denote by the set of all factorized distributions on with absolutely continuous marginals. For a measure , we denote its -product measure by (repeated times). We assume that all measures appearing in this proof are absolutely continuous with respect to the Lebesgue measure. The push-forward of a distribution by a function is denoted by . The expectation of a function with respect to measure is denoted by (if it exists) by abuse of notation. We also abuse the notation to use as the shorthand for where .
C.2 Problem setup
We denote the target domain distribution by .
We fix a hypothesis class , and our goal is to find a such that the risk functional
is small, where is a loss function. We denote by a minimizer of (assuming it exists). To this end, we are given the training data . Throughout, we assume . To complement the smallness of , we assume the existence of a generative mechanism. Concretely, we assume that there exists a diffeomorphism such that satisfies . With this transform, the original risk functional is also expressed as
As an estimator of , we are given another diffeomorphism such that . With this , the proposed method converts the dataset by . We can regard , where . We use (resp. ) to denote the probability measure corresponding to the density (resp. ). This conversion results in the relation:
As a candidate hypothesis , the proposed method selects a minimizer of the proposed risk estimator defined as
(5)
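For concreteness, an estimator of the kind in Eq. (5), built by recombining the estimated independent components across target samples, can be written in generic notation as follows; the symbols below are illustrative stand-ins for the corresponding quantities defined above rather than the paper's exact notation.

```latex
% Generic-notation sketch of a V-statistic-type risk estimator built from
% estimated independent components; the symbols are illustrative placeholders.
\[
  \widehat{R}(h)
  \;=\; \frac{1}{n^{D}} \sum_{i_1=1}^{n} \cdots \sum_{i_D=1}^{n}
  \ell\!\left(h,\; \hat f\bigl(\hat e^{(1)}_{i_1}, \ldots, \hat e^{(D)}_{i_D}\bigr)\right),
\]
% where \hat e^{(j)}_{i} denotes the j-th estimated independent component of the
% i-th target sample, \hat f the estimated mixing, and D the data dimension.
```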
In the proof, we evaluate its concentration around the expectation . We use to denote the expectation with respect to . Let denote a hypothesis which minimizes (assuming it exists).
In what follows, for notational simplicity, we define the -variate symmetric function as
where indicates an averaging operation over all permutations (without replacement) of . We use to denote the sample average operator with respect to or , depending on the context.
C.3 Assumptions
Assumption 2 (The underlying density function is bounded and Lipschitz continuous).
Assume
Assumption 3 ( is Lipschitz continuous and Hölder continuous).
We assume where is the -Hölder space (Adams and Fournier, 2003, Paragraph 1.29) and
Assumption 4 (Bounded derivatives of and ).
Assume that
where denotes the maximum absolute value of the elements of a matrix.
Assumption 5 (Loss function is bounded and uniformly Lipschitz continuous in ).
The considered loss function takes values in a bounded interval:
where . Also assume
Assumption 6 (Estimated feature extractor).
Assume is independent of and that for all .
Although and are assumed to be diffeomorphisms in the classical sense (implying that they are strongly differentiable), we introduce the Sobolev space because we want to measure their difference and the difference of their derivatives in terms of integration.
Assumption 7 (Entropic condition: Euclidean class (Sherman, 1994)).
The function class is Euclidean for the envelope and constants and (Sherman, 1994), i.e., if is a measure for which , then
where is the pseudo metric defined by
for , and denotes the packing number of with respect to the pseudometric and radius . Without loss of generality, we take the envelope such that .
Assumption 8.
The hypothesis class is expressive enough so that the model approximation error does not expand due to , i.e.,
The following complexity measure of , which is a version of Rademacher complexity for our problem setting, is used to state the theorem.
Definition 1 (Effective Rademacher complexity).
Define
where are independent uniform sign variables and are independent of all other random variables.
We provide the definition of the ordinary Rademacher complexity in Section C.8 and make a comparison of the two complexity measures in terms of how they depend on the input dimensionality.
C.4 Theorem statement
Our goal is to prove the following theorem. This is a detailed version of the theorem appearing in the main body of the paper.
Theorem 3 (Excess risk bound).
Then for arbitrary , we have with probability at least ,
where
and are constants determined in Lemma 11.
Proof of Theorem 3.
As can be seen from the proof above, Theorem 3 is proved in two parts, corresponding to the two lemmas below. The first lemma evaluates the approximation error, which reflects the fact that we are approximating by .
Lemma 1 (Approximation error bound).
The second lemma evaluates the pseudo estimation error which reflects the fact that we rely on a finite sample to approximate the underlying distribution.
Lemma 2 (Pseudo estimation error bound).
C.5 V-statistic and U-statistic
The theoretical analysis is performed by interpreting the proposed risk estimator Eq. (5) as a V-statistic (explained shortly). The proofs will be based on applying the following facts in order:
1. The V-statistic can be represented as a weighted average of U-statistics of increasing degrees, and only the highest-degree term is the leading term.
2. The leading term is further decomposed into a degree-1 U-statistic and a set of degenerate U-statistics.
3. The degree-1 U-statistic is an i.i.d. sum admitting a Rademacher complexity bound.
4. The degenerate terms concentrate around zero following an exponential inequality under appropriate entropy conditions.
To consolidate the strategy given above, we describe what V- and U-statistics are and how they relate to each other. These estimators emerge when we estimate a functional that takes multiple data points from a single sample, allowing the same data point to be re-used repeatedly.
V-statistic.
For a given regular statistical functional of degree Lee (1990):
(6)
its associated von-Mises statistic (V-statistic) is the following quantity Lee (1990):
Note that Eq. (6) does not coincide with the expectation of in general, i.e., the V-statistic is generally not an unbiased estimator. However, it is known to be a consistent estimator of Eq. (6) (Lee, 1990).
U-statistic.
Similarly, for a -variate symmetric and integrable function , its corresponding U-statistic (Lee, 1990) of degree is
The V- and U-statistics are generalizations of the sample mean (which is the U- and V-statistic of degree 1). The important difference from the sample mean is that, for higher degrees, the summands may not be independent. To deal with this dependence, the following standard decompositions have been developed Lee (1990).
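To fix ideas, for an i.i.d. sample and a symmetric kernel of degree $k$ (the generic symbols $X_i$, $\psi$, and $k$ are used here only for illustration), the two statistics take the standard forms

```latex
% Standard forms of the V- and U-statistics for a symmetric kernel of degree k
% (generic notation used only for illustration).
\[
  V_n \;=\; \frac{1}{n^{k}} \sum_{i_1=1}^{n} \cdots \sum_{i_k=1}^{n}
            \psi(X_{i_1}, \ldots, X_{i_k}),
  \qquad
  U_n \;=\; \binom{n}{k}^{-1} \sum_{1 \le i_1 < \cdots < i_k \le n}
            \psi(X_{i_1}, \ldots, X_{i_k}).
\]
% For k = 1, both reduce to the sample mean; for k >= 2, the summands share
% arguments and are therefore not independent.
```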
Lemma 3 (Decomposition of a V-statistic (Lee, 1990)).
A V-statistic can be expressed as a sum of U-statistics of degrees from to (Lee, 1990, Section 4.2, Theorem 1):
where the weights and -variate functions are
Proof.
See (Lee, 1990, Section 4.2, Theorem 1 (p.183)). ∎
Remark 1.
The weights satisfy (Lee, 1990, Section 4.2, Theorem 1 (p.183)). We can also find the order of with respect to as:
Lemma 4 (Hoeffding decomposition of a U-statistic (Sherman, 1994, p.449)).
A U-statistic with a symmetric kernel can be decomposed as a sum of U-statistics of degrees from to as
(7)
where are -variate, symmetric, and degenerate functions. Note that . Here, a -variate symmetric function is said to be degenerate when
Specifically, is
(8)
For further details, see (Sherman, 1994, p.449). Note that in (Sherman, 1994, p.449), Eq. (7) is written using in place of . This is because
holds by linearity and symmetry.
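For reference, in generic notation (the symbols $\psi$, $\theta$, $g_c$, and $k$ below are illustrative, not the paper's own), the Hoeffding decomposition of a degree-$k$ U-statistic with symmetric kernel $\psi$ and mean $\theta = \mathbb{E}\,\psi$ reads

```latex
% Generic form of the Hoeffding decomposition (illustrative notation).
\[
  U_n \;=\; \theta \;+\; \sum_{c=1}^{k} \binom{k}{c}\, U_n^{(c)},
  \qquad
  U_n^{(c)} \;=\; \binom{n}{c}^{-1} \sum_{1 \le i_1 < \cdots < i_c \le n}
                   g_c(X_{i_1}, \ldots, X_{i_c}),
\]
% where g_1(x) = E[psi(x, X_2, ..., X_k)] - theta and the higher-order kernels
% g_c are defined by inclusion-exclusion so that each U_n^{(c)} with c >= 2 is
% degenerate. The c = 1 term, (k/n) * sum_i g_1(X_i), is an i.i.d. sum and
% carries the leading-order fluctuation.
```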
Remark 2 (Connecting the lemmas to Section C.6).
It can be easily checked by definition that the proposed risk estimator Eq. (5) takes the form of a V-statistic: for each . Let us denote . Then holds by definition. Substituting these into Eq. (8), we have that Eq. (7) applied to is equivalent to
where are symmetric degenerate functions. In Section C.6, we first decompose into a sum of U-statistics. After such conversion, we take a closer look at the leading term, .
C.6 Proof of pseudo estimation error bound
(Proof of Lemma 2).
In the last part of the proof we used the following lemmas. Because the leading term is an i.i.d. sum, the following Rademacher complexity bound can be proved.
Lemma 5 (U-process bound: the leading term).
Proof.
Applying the standard one-sided Rademacher complexity bound based on McDiarmid’s inequality (Mohri et al., 2012, Theorem 3.1) twice with the union bound, we obtain the lemma. ∎
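For reference, the standard one-sided bound being applied states that, for a class $\mathcal{G}$ of functions taking values in $[0, B]$ and an i.i.d. sample of size $n$ (generic notation, cf. Mohri et al., 2012, Theorem 3.1), with probability at least $1-\delta$,

```latex
% The standard one-sided Rademacher complexity bound, stated in generic
% notation (cf. Mohri et al., 2012, Theorem 3.1).
\[
  \sup_{g \in \mathcal{G}}
  \left( \mathbb{E}[g(X)] - \frac{1}{n}\sum_{i=1}^{n} g(X_i) \right)
  \;\le\; 2\,\mathfrak{R}_n(\mathcal{G})
  \;+\; B \sqrt{\frac{\log(1/\delta)}{2n}},
\]
% where R_n(G) denotes the Rademacher complexity of G. Applying this bound in
% both directions together with a union bound yields the two-sided statement
% (equivalently, replace delta by delta/2).
```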
The terms other than the leading term are degenerate U-statistics; hence, the following holds under appropriate entropy assumptions.
Lemma 6 (U-process bound: degenerate terms (Sherman, 1994, Corollary 7)).
Assume Assumption 7. Then for each , there exist constants such that for any , we have with probability at least ,
where depends only on , , and .
Proof.
The proof follows a path similar to that of (Sherman, 1994, Corollary 7), but we provide more explicit expressions to inspect the order with respect to . Let . Then is Euclidean for an envelope satisfying by Lemma 6 of Sherman (1994) and Assumption 7. In addition, is a set of functions degenerate with respect to . Without loss of generality, we can take such that . Similarly to the proof of (Sherman, 1994, Corollary 4), we can apply (Sherman, 1994, Main Corollary) with in their notation to obtain
where is a universal constant (Sherman, 1994, Main Corollary), and are chosen to satisfy , and . By applying Markov's inequality, we have for arbitrary ,
where denotes the probability of the event with respect to . Equating the right hand side with and solving for , we obtain the result. ∎
C.7 Proof of approximation error bound
(Proof of Lemma 1).
The above proof combined three approximation bounds, which are shown in the following lemmas. The first lemma reduces the difference in the expectation of the U-statistic to differences in the loss function and the density function. Although we apply Lemma 7 only with , we prove its general form for .
Lemma 7 (Approximation bound for U-statistic of degree-).
Fix . Assume Assumption 2. Then we have for any ,
Proof.
Now, the following lemmas bound each approximation term in terms of the difference between and .
Lemma 8 (Loss difference approximation).
Assume Assumption 5. Then we have for any ,
Proof.
∎
Lemma 9 (Density difference approximation).
Proof.
Proof.
We have
∎
Proof.
Applying Lemma 12 with , we obtain
Now, each term in the integrand can be bounded from above as
When raised to the power , this yields
where we used for , which follows from Hölder's inequality. Hence,
Therefore,
∎
Lemma 11 used the following lemma to bound the difference in Jacobian determinants.
Lemma 12 (Determinant perturbation bound (Ipsen and Rehman, 2008, Corollary 2.11)).
Let and be complex matrices. Then,
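As we recall the cited result, the bound takes roughly the following form for $d \times d$ matrices and any operator $p$-norm; the exact statement and constants should be taken from Ipsen and Rehman (2008, Corollary 2.11).

```latex
% Determinant perturbation bound of the type cited, stated from our reading of
% Ipsen and Rehman (2008); consult the original for the exact constants.
\[
  \bigl|\det(A) - \det(B)\bigr|
  \;\le\; d \,\max\{\|A\|_{p}, \|B\|_{p}\}^{\,d-1}\, \|A - B\|_{p},
\]
% where A and B are d x d complex matrices.
```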
C.8 Comparison of Rademacher complexities
The following consideration demonstrates how the effective complexity measure in Theorem 3 resulting from the proposed method may enjoy a relaxed dependence on the input dimensionality compared to the ordinary empirical risk minimization. To do so, we first recall the definition of the ordinary Rademacher complexity and a standard performance guarantee derived based on it.
Definition 2 (Ordinary Rademacher complexity).
The ordinary empirical risk minimization finds the candidate hypothesis by
where
and the corresponding ordinary Rademacher complexity is
where are independent uniform sign variables and we denoted by abuse of notation. This yields the standard Rademacher complexity based bound. Applying Lemma 5 and using the same proof technique, we have that with probability at least ,
Therefore, the corresponding complexity terms are and . In Remark 3, we compare these two complexity measures through an example. To recall, in terms of the notation in this section, the effective Rademacher complexity can be written as
Remark 3 (Comparison of Rademacher complexities).
As an example, consider , the set of -Lipschitz functions (with respect to infinity norm) on the unit cube . It is well-known that there exists a constant such that the following holds (Wainwright, 2019, Example 5.10, p.129) for sufficiently small :
(9)
Here, indicates that there exist such that, for sufficiently small , it holds that . On the other hand, the well-known discretization argument implies that there exist constants and such that for any , the following relation between the Rademacher complexity and the metric entropy holds:
(10)
Substituting Eq. (9) into Eq. (10), we find that, for large enough , the right-hand side is minimized at . This yields
(11)
with a new constant . Therefore, by substituting in Eq. (11), the metric-entropy based bound on the ordinary Rademacher complexity exhibits exponential dependence on the input dimension as
which is a manifestation of the curse of dimensionality. On the other hand, a similar calculation shows that the effective Rademacher complexity avoids an exponential dependence on the input dimension . By substituting in Eq. (11), we obtain
where . This is because the Lipschitz constant of functions in is at most (i.e., the Lipschitz constant does not increase under the marginalization procedure), since for any ,
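Under these assumptions, the comparison can be summarized schematically as follows; the rates are shown up to constants and Lipschitz factors, with $D$ denoting the input dimension, and the display is an illustrative summary rather than a formal statement.

```latex
% Schematic comparison obtained by substituting the metric entropy of
% Lipschitz classes into the discretization bound (rates up to constants).
\[
  \text{ordinary:}\quad
  \mathfrak{R}_n \;\lesssim\; n^{-\frac{1}{2 + D}},
  \qquad
  \text{effective:}\quad
  \widetilde{\mathfrak{R}}_n \;\lesssim\; D \, n^{-\frac{1}{3}},
\]
% since the marginalized functions appearing in the effective complexity are
% one-dimensional Lipschitz functions, for which the metric-entropy exponent
% corresponds to d = 1 instead of d = D.
```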
C.9 Remark on higher order Sobolev norms
Here, we comment on how the term is treated as a higher order term of .
Remark 4 (Higher order Sobolev norms).
Let us assume that is contained in a compact set for all considered. Note that for ,
by Hölder’s inequality, where we defined , hence we have . By applying the relation to each term in the definition of , we obtain
Thus we obtain
By also replacing with in Theorem 3, we can see more clearly that is a higher order term of .
Appendix D Details and Proofs of Theorem 1
Here, we provide the proof of Theorem 1. We reuse the notation and terminology from Section C of this Supplementary Material. We prove the uniformly minimum variance property of the proposed risk estimator under the ideal situation of .
Theorem 4 (Known causal mechanism case).
Assume . Then, for all , we have that is the uniformly minimum variance unbiased estimator of . As a special case, it has a smaller variance than the ordinary empirical risk estimator: .
Proof.
The proof rests on the following two facts. When , the estimator becomes the generalized U-statistic of the statistical functional Eq. (6). Furthermore, when , Eq. (6) coincides with because the approximation error is zero. Since we assume , we have , and hence both of the statements above hold. Therefore, by Lemma 13, the first assertion of the theorem follows. The last assertion of the theorem follows as a special case because is an unbiased estimator of for .
From here, we confirm the above statements by calculation. We first show that is the generalized U-statistic. To see this, observe that the statistical functional Eq. (6) allows the following expression given :
This is a regular statistical functional of degrees with the kernel . On the other hand, we have
because the summations run through all combinations with replacement. This, combined with the fact that are jointly independent when , yields that is the generalized U-statistic for Eq. (6).
The following well-known lemma states that a generalized U-statistic is a uniformly minimum variance unbiased estimator.
Lemma 13 (Uniformly minimum variance property of a generalized U-statistic).
Let be a regular statistical functional with kernel (Clémençon et al., 2016), i.e.,
Given samples , let be the corresponding generalized U-statistic
where denotes that the indices run through all possible combinations (without replacement) of the indices. Then, is the uniformly minimum variance unbiased estimator of on .
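In generic notation (the symbols $X^{(j)}_i$, $\psi$, $k_j$, and $m$ below are illustrative, not the paper's own), a generalized U-statistic built from jointly independent samples takes the form

```latex
% Generic form of a generalized (multi-sample) U-statistic for jointly
% independent samples of sizes n_1, ..., n_m and a kernel psi of degree
% (k_1, ..., k_m); notation is illustrative only.
\[
  U \;=\; \left( \prod_{j=1}^{m} \binom{n_j}{k_j}^{-1} \right)
  \sum_{I_1, \ldots, I_m}
  \psi\bigl( X^{(1)}_{I_1}, \ldots, X^{(m)}_{I_m} \bigr),
\]
% where each I_j ranges over all size-k_j subsets of {1, ..., n_j} and
% X^{(j)}_{I_j} denotes the corresponding sub-collection of the j-th sample.
```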
Proof.
The assertion can be proved in a manner parallel to the proof of (Lee, 1990, Section 1.1, Lemma B). ∎
Remark 5 (Relation to the UMVUE property of ).
The result in Theorem 4 does not contradict the fact that the sample average is a U-statistic of degree- and hence has the minimum variance among all unbiased estimators of on , where is a set of distributions containing all absolutely continuous distributions Lee (1990). Specifically, is not generally an unbiased estimator of on , even if . While satisfies the -sample symmetry condition, the same does not hold for . By restricting attention to , the estimator achieves a smaller variance than .
Appendix E Further Comparison with Related Work
Here, we provide an additional detailed comparison with the related work to complement Section 5 of the main text.
E.1 Comparison with Magliacane et al. (2018)
Magliacane et al. (2018) considered domain adaptation among different interventional states by using SCMs. Their problem setting and ours do not strictly include each other (the two settings are somewhat complementary), and their assumption may be more suitable for application fields with interventional experiments such as genomics, while ours may be more suited for fields with observational data such as health record analysis or economics. At the methodological level, Magliacane et al. (2018) takes a variable selection approach to find a subset so that the conditional distribution is invariant, whereas our paper takes a data augmentation approach via the estimation of the SEMs (in the reduced form).
The essential assumptions of Magliacane et al. (2018) are the existence of a separating set (with small “incomplete information bias”) and the identifiability of such a set (which follows from Proposition 1, Assumption 1, and Assumption 2 (iii) in Magliacane et al. (2018)). Particularly plausible applications conforming to the assumptions include, for example (but are not limited to), genomics experiments, partly because Assumption 2 (ii) and (iii) are likely to hold for well-targeted experiments Magliacane et al. (2018). The following is a detailed comparison.
(1) Modeling assumption and problem setup.
The two problem settings do not strictly include one another; rather, they are complementary, where ours corresponds to the intervention-free case and that of Magliacane et al. (2018) corresponds to the interventional case. If we tried to express the problem setting of Magliacane et al. (2018) within our formulation, we would be expressing the interventions as alterations to the SEMs. We assume that such alterations do not occur in our setting since our focus is on observational data; therefore, the problem formulation of Magliacane et al. (2018) is not a subset of ours. Conversely, if we tried to express our problem setting within the formulation of Magliacane et al. (2018), our problem setup would only have as the context variable, and would be a parent of all observed variables, e.g., switches the distribution of by switching among different quantile functions to perform the inverse transform. This potentially allows the existence of the effect and diverges from Assumption 2 (iii) in Magliacane et al. (2018). Also, even if such an edge does not exist, it is possible that there are no separating sets (in the extreme case) if is a parent of all ’s. In this case, conditioning on any of the ’s would make and dependent. From this consideration, our problem setting is not a subset of that of Magliacane et al. (2018), either.
(2) Plausible applications.
The problem setup of Magliacane et al. (2018) is suitable especially for applications in which various experiments are conducted such as genomics Magliacane et al. (2018), whereas our problem setting may be more suitable for some fields with observational data such as health record analysis or economics.
(3) Methodology.
Our proposed method actually estimates the SEMs (though in the reduced-form) and exploits the estimated SEMs in the domain adaptation algorithm. In fact, directly using the estimated SEMs as a tool to realize domain adaptation can be seen as the first attempt to fully leverage the structural causal models in the DA algorithm. On the other hand, Magliacane et al. (2018) approaches the problem of domain adaptation via variable selection to find a subset so that the conditional distribution is invariant.
E.2 Comparison with Gong et al. (2018)
In the present paper, we assumed an invariance of structural equations between domains. Here, we clarify the difference from a related but different assumption considered by Causal Generative Domain Adaptation Network (CG-DAN; Gong et al., 2018).
(1) Problem setup.
Gong et al. (2018) presumes the anticausal scenario (i.e., is the cause of ) and that given follows a structural equation model, whereas our paper considers more general SEMs of and .
(2) Theoretical justification.
The approach of Gong et al. (2018) does not have a theoretical guarantee in terms of the identifiability of , i.e., there has been no known theoretical condition under which the learned generator is applicable across different domains. On the other hand, our method enjoys a strong theoretical justification of nonlinear ICA including the identifiability of under known theoretical conditions.
(3) Methodology.
The method of Gong et al. (2018) estimates the GCM of given using source domain data and uses it to design a generator neural network. On the other hand, we more directly exploit the estimated reduced-form SEM in the method.
E.3 Comparison with Arjovsky et al. (2020)
Arjovsky et al. (2020) proposed invariant risk minimization (IRM) for the out-of-distribution (OOD) generalization problem. The IRM approach tries to learn a feature extractor that makes the optimal predictor invariant across domains, and its theoretical validity is argued based on SCMs. Here, we compare it with the present work in terms of the problem setup, theoretical justification, and the methodology.
(1) Basic assumption and problem setup.
The OOD generalization problem tackled in Arjovsky et al. (2020) assumes no access to the target domain data. In this respect, the problem is different from, and intrinsically more difficult than, the one considered in this paper, where a small labeled sample from the target domain is assumed to be available. In a nutshell, in order to solve the OOD generalization problem, Arjovsky et al. (2020) essentially assumes the existence of a feature extractor that elicits an invariant predictor, i.e., one that makes the optimal predictors of the different domains identical after the feature transformation. This can be seen as a variant of the representation learning approach for domain adaptation, where we assume there exists such that is invariant across domains. Indeed, for example, when the loss function is the cross-entropy, the condition corresponds to the invariance of across domains Arjovsky et al. (2020). More technically, in addition, (Arjovsky et al., 2020, Definition 7(ii)) requires the condition , which can be violated when the latent factors corresponding to have different distributions across domains. On the other hand, our assumption can be seen as the existence of a feature extractor that can simultaneously estimate the independent components in all domains, which does not necessarily imply the existence of a common feature transformer that induces a unique optimal predictor.
(2) Theoretical justification.
Arjovsky et al. (2020) formulated a condition under which the IRM principle leads to an appropriate predictor for OOD generalization, but only under a certain linearity assumption that is essentially a relaxation of linear SEMs. Furthermore, in the theoretical guarantee, the feature extractor is restricted to be linear. In addition, Arjovsky et al. (2020) only provides a population-level analysis showing that the solution of the IRM objective formulated using the underlying distributions enjoys OOD generalization, and it does not discuss the conditions under which the ideal feature extractor can be properly estimated by empirical IRM. The need for the strong linearity assumption likely stems from the intrinsic difficulty of the OOD problem in Arjovsky et al. (2020), namely, that its formulation does not assume specific types of interventions. On the other hand, our method enjoys a stronger theoretical guarantee in the form of an excess risk bound without such parametric assumptions on the models or the data generating process, by focusing on the case where the causal mechanisms are invariant across the domains.
(3) Methodology.
The methodology of IRM estimates a single predictor that generalizes well to all domains by finding a feature extractor that makes the predictor optimal in all domains. The approach shares the same spirit as representation learning approaches to domain adaptation, such as transfer component analysis Pan et al. (2011), which try to find a feature extractor that induces invariant conditional distributions. On the other hand, our method estimates the SEMs (in the reduced form) and exploits them to make the training on the few target domain data more efficient through data augmentation.