Importance Weight Estimation and Generalization in Domain Adaptation under Label Shift
Abstract
We study generalization under label shift for categorical and general normed label spaces. We propose a series of methods to estimate the importance weights from a labeled source to an unlabeled target domain and provide confidence bounds for these estimators. We deploy these estimators and provide generalization bounds on the unlabeled target domain.
keywords: label shift, domain adaptation, integral operator, generalization.
1 Introduction
In supervised learning, we usually make predictions on an unlabeled target set by training a predictive model on a labeled source set. This is a sensible approach, especially when the source and target sets are expected to have i.i.d. samples from the same distribution. However, this assumption does not hold in many real-world applications. For example, oftentimes, predictions for medical diagnostics are based on data from a particular population (or the same population a few years back) of patients and machines due to the limited variety of data that is available in the training phase. Let us consider the following hypothetical scenario. A classifier is trained to predict whether a patient has contracted a severe disease in country A based on the physical measurements and diagnosis (labeled source data) in country A. The disease is potentially even more prevalent in country B, where no experts are available to make a reliable diagnosis. Ergo, the following questions could be relevant in this scenario:
• Suppose we send a group of volunteers to country B with a preliminary set of equipment to record measurements of patients’ symptoms. How can the data in country A and the new unlabeled data from country B be used to update the classifier to give good diagnostic predictions for patients in country B?
• What if, along with the few diseases we are considering, we are also interested in predicting disease severity levels, where the task is to predict continuous labels, i.e., a regression problem?
Similar problems arise in many other domains besides health care, but we will use the medical example throughout the paper for ease of presentation. In the above example, country A denotes the source domain, and country B denotes the target domain. In this example, the covariate-label distribution in the source may differ from that of the target domain. Therefore, a predictor learned on the source domain might not carry over to the target domain. In the above hypothetical example, consider the case that, given a person having a disease, the distribution of their symptoms is similar in both source and target domains, but the proportion of people having different diseases differs.
Consider another example where the source domain is data of a city in Fall, and the target is data of the same city a few months later, in Spring. The disease distribution in Spring may differ from that of Fall, but given that a person has caught a disease, that person expresses similar symptoms in both seasons. In domain adaptation, label shift is the class of problems where there is a shift in the label distribution from the source to the target domain, but the conditional distribution of covariates given the label stays unchanged.
We used the medical example to motivate the setting; however, the label shift paradigm is common in practice. For example, one can use image data of one population to train a model that later is required to adapt to a different demographic environment. In scientific applications, to tackle inverse problems, e.g., sensing or partial differential equations, one may learn an inverse map that maps an outcome to a source. However, the model may later need to be adapted to new distributions of sources in different applications.
In a generic label shift domain adaptation problem, we are given a set of labeled samples from the source domain but unlabeled samples from the target domain. In such settings, the task is to learn a model with low loss in the target domain. In this paper, we study two settings: categorical label spaces and general normed label spaces. We propose a series of methods to estimate the importance weights from the source to the target domain in both settings. Prior works study categorical label spaces and propose to use a label classifier to estimate the importance weight vector, which is later used in empirical risk minimization (ERM) to learn a good classifier. Lipton et al. (2018) provide a method to estimate the importance weight vector and guarantee its estimation error under strong assumptions: the error in the estimation of the confusion matrix should be full rank, the confusion matrix should be a square matrix, and the number of samples should be larger than an unknown quantity. Azizzadenesheli et al. (2019) propose another estimator and relax the assumptions of the prior work, but their estimator is still based on a label classifier. In this paper, we first relax the requirement of using a classifier to estimate the importance weight vector. This step improves the conditioning of the inverse operator (the inverse of the transition operator from labels to predicted statistics) and exploits the spectrum of the forward operator appropriately. We propose a regularized estimator, which is an extension of Azizzadenesheli et al. (2019).
We further improve the analysis in Lipton et al. (2018) and propose an estimator using general functions (rather than being limited to classifiers). For this estimator, we relax the requirement that the confusion matrix be a square matrix and further relax the strong assumption that the error matrix is full rank. Note that this analysis also applies to the case where classifiers are used for importance weight estimation and thus constitutes an improvement over Lipton et al. (2018). Moreover, this analysis results in an estimation bound as tight as the one for the regularized estimator.
For the case of general normed label spaces, we propose two estimators to estimate the importance weight functions. The first estimator is based on the traditional approach used in inverse problems of compact operators, which directly estimates the inverse operator (Kress et al., 1989). Such an estimator requires the number of samples to be larger than an unknown quantity, which can be considered a limitation of the traditional methods in inverse problems. The second approach is a novel approach based on regularized optimization in Hilbert spaces, and the derived bound holds for any number of samples. Therefore, this approach has an advantage over the traditional approaches and has independent importance beyond domain adaptation. As mentioned before, such problems are common in partial differential equations and reinforcement learning.
For both of the proposed estimators, we provide concentration bounds on our estimates in a desirable norm and show that the estimation errors vanish as the number of samples grows. Finally, we deploy these importance weight estimates in importance weighted ERM and provide generalization guarantees for ERM in these settings. Using the existing generalization analysis based on Rademacher complexity, Rényi divergence, and the MDFR Lemma in Azizzadenesheli et al. (2019), we show the generalization property of importance weighted empirical risk minimization on the unseen target domain for both categorical and general normed label spaces.
2 Preliminaries
Let and be two sets, denoting the sets of covariates and labels, and denote a space of measures on a measure space . For the product space of , let denote the probability space on the source domain, and respectively, denote the probability space on the target domain where . For the label space , let , generated by subsets of , denote the sub-σ-algebra of , with and as restrictions of and to . Similarly for , i.e., and on . We consider the setting where is absolutely continuous with respect to . We define the shift between source and target domains to be of label shift type if, for any random variable , integrable under , there exists a version for which is equal to a version of almost surely.
When and are normed spaces, for a given complex separable Hilbert space accompanied with inner product and norm , let denote a linear bounded operator . We say an operator is Hilbert-Schmidt if for any set of bases of . Note that the separability of is required to have countable bases. The space of Hilbert-Schmidt operators () is also a Hilbert space under the inner product defined as for , with respectively as the corresponding norm (Lang, 2012; Kress et al., 1989). We define a reproducing kernel Hilbert space (RKHS) , a Hilbert space of functions , accompanied with a symmetric positive definite continuous reproducing kernel , such that is finite.
In this paper, in order to account for the shifts between the source and target domains, we consider the exponents of the infinite and second order Rényi divergences, notions also deployed in prior works (Cortes et al., 2010; Azizzadenesheli et al., 2019). For the restricted probability measures and , we have:
where , the importance weight function, is the Radon–Nikodym change of measure function, and and denote expectation with respect to and respectively. To simplify notation, let , denote the importance weight function.
In label shift setting, we are interested in the prediction task in the target domain. In other words, for a given loss function , and a function class , we are interested in finding a function , with small expected loss, . We note that,
Problem setting | Data regime
---|---
1) and can be different | Data set
2) For any random variable , a.s. | Data set
In the label shift setting, we have access to labeled data points from the source domain , but unlabeled samples from the target domain (see Table 1). In the case where we are provided with the importance weight function , we could deploy importance weighted ERM on the source domain,
where is the empirical measure induced by , to obtain a prediction function . However, in practice, we do not have access to , and need to estimate it using the knowledge of and . Having access to the data set of size , we split the data set into two subsets. We randomly select a portion of , and use these samples and their corresponding empirical measure , along with the samples in , to estimate the importance weight ; we utilize the remaining samples of for the importance weighted ERM to obtain , i.e.,
(1)
We propose a series of methods to estimate , up to their confidence bounds, and provide generalization guarantees for the importance weighted ERM of .
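As a concrete sketch of this pipeline (the function names, the random split, and the per-class weight lookup are illustrative, not from the paper), the data split and the importance weighted empirical risk can be computed as:

```python
import numpy as np

def split_source(X, y, frac, seed=0):
    """Randomly split the labeled source set: a `frac` portion is used to
    estimate the importance weights, the rest for weighted ERM."""
    idx = np.random.default_rng(seed).permutation(len(y))
    cut = int(frac * len(y))
    return (X[idx[:cut]], y[idx[:cut]]), (X[idx[cut:]], y[idx[cut:]])

def weighted_empirical_risk(losses, labels, w):
    """Importance weighted empirical risk: (1/n) * sum_i w(y_i) * loss_i,
    where `losses` are per-sample losses of a candidate predictor on the
    held-out source samples and `w[j]` is the estimated weight of class j."""
    return float(np.mean(w[labels] * losses))
```

The predictor is then chosen to minimize this weighted risk over the function class, as in Eq. 1.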
3 Problem Setup
We study estimation of importance weight functions in both classification and regression settings.
3.1 Categorical Label Spaces
Consider the task of multi-class classification, where the task is to predict a class given covariates. The label space is , where is the number of classes (for simplicity we describe the results for finite label sets, but the standard generalization of norms to countably infinite sets extends the results in this paper to general countable label sets). In this setting, we represent the importance weight function as a -dimensional vector such that for . For any dimension and measurable function , we show how relates and . Note that, to compute these expectations, we do not need the associated labels. In the following we denote . Using the rules of conditional expectations, we have,
(2)
Let denote the corresponding linear (matrix) operator in Eq. 3.1, i.e., , represented as .
Remark 3.1.
In Eq. 3.1, having access to and a well conditioned (full column rank) , one can compute using the Moore–Penrose inverse of , i.e., , and we have . In the case where a.s., a vector of all ones is feasible, i.e., the importance weights are all one. When approximating in the presence of no additional side information about and , a homogeneous regularization around the vector of all ones is desirable, i.e., finding a that satisfies and is closest to the vector of all ones. However, outputs an importance vector which satisfies but is closest to the zero vector in the -norm sense. Therefore, even in the case of no shift, using may result in estimates of the importance weight with many entries equal to zero, resulting in an undesirable importance weighted ERM classifier.
Reformulation technique: Instead of solving for in , we solve for , where , is the -dimensional vector of ones, and . Note that, when a.s., the vector of all zeros is feasible for , i.e., the importance weights are all one. Using this formulation, we have:
(3)
where the right hand side is equal to zero when a.s. It is important to note that accounts for the amount of shift: a small value denotes a small shift, while a large value represents a large shift, and potentially a harder problem. We denote by the maximum norm of allowed shifts. One can find a desirable estimate using , which is a solution to and is closest to the vector of all zeros (the origin) in the -norm sense. In this paper, we focus on estimating the importance weight through estimating .
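A minimal numerical sketch of this reformulation (names are illustrative; `C_hat` and `b_hat` stand for plug-in estimates of the forward matrix and target statistics):

```python
import numpy as np

def estimate_weights_reformulated(C_hat, b_hat):
    """Estimate importance weights via the reformulation w = 1 + theta.

    Solves C theta = b - C 1 in the least-squares sense, so that under
    no shift (b = C 1) the minimum-norm solution is theta = 0 and the
    recovered weights are exactly all ones.
    """
    ones = np.ones(C_hat.shape[1])
    theta = np.linalg.lstsq(C_hat, b_hat - C_hat @ ones, rcond=None)[0]
    return ones + theta
```

In contrast to pseudo-inverting the original system, the minimum-norm solution here shrinks toward all-ones weights rather than toward zero weights.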
3.2 General Normed Label Spaces
In the following, we consider the case where the label space is a normed vector space. Let denote the RKHS defined in the preliminaries section. For a given function , we define , an operator such that for any measure . In other words, , . Let denote a version of . We define an integral operator , given by a reproducing kernel and a positive measure on , as an operator such that,
denote general integral operators from to . Note that, for any , we have We define as . Let denote a constant function with value , and . Therefore, for , we have,
(4)
where we assume for all . Similar to the categorical setting, accounts for the amount of shift. We denote by the maximum norm of allowed shifts. In the following, we denote , and consider the case where has a bounded inverse.
4 Estimation and Generalization
In this section we provide a series of methods to estimate the effective shift weight .
4.1 Categorical Label Spaces
For categorical label spaces, we use Eq. 3 to approximate . However, as mentioned before, we need to estimate the vectors and matrix in the equality. Let and be the estimates of and , respectively. We use to denote an upper bound (possibly a high probability upper bound) on , on , and on . For , let denote the inverse of the smallest singular value of the matrix , and let denote an upper bound on that of the true .
Lemma 4.1.
Consider a non-degenerate matrix , vectors , where . Also consider the corresponding estimates, , and estimation errors . When , for ,
The proof is deferred to Appendix A.1. We refer to this estimator as E1. Note that the estimate depends on the choice of and how well-conditioned the forward operator is, i.e., how small is. For the sake of notational simplicity, we drop the dependence in (and later in ).
We obtain and by applying the function to the data points in and the data points in . In other words,
(5)
where is the ’th standard basis vector, with all elements zero except the ’th element, which is one.
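A sketch of these plug-in estimates in Eq. 5, assuming a vector-valued function `f` and integer class labels (names are illustrative):

```python
import numpy as np

def empirical_moments(f, X_src, y_src, X_tgt, k):
    """Plug-in estimates of the forward matrix C and target statistic b.

    C_hat[:, j] = (1/n) * sum of f(x_i) over source samples with label j,
    i.e., the empirical version of E_s[f(x) 1{y = j}]; b_hat is the
    average of f(x) over the unlabeled target samples.
    """
    F_src = np.stack([f(x) for x in X_src])              # shape (n, d)
    n = len(y_src)
    C_hat = np.zeros((F_src.shape[1], k))
    for j in range(k):
        C_hat[:, j] = F_src[y_src == j].sum(axis=0) / n
    b_hat = np.stack([f(x) for x in X_tgt]).mean(axis=0)
    return C_hat, b_hat
```

Only the source samples need labels; the target samples contribute through the statistics of `f` alone.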
Lemma 4.2.
Lemma 4.2 follows from a standard application of Hoeffding’s inequality to vectors, and from concentration of adjoint matrices and dilation (Thm. 1.3 and Section 2.6 in Tropp (2012)).
Theorem 4.1.
Using the direct estimator , when the number of samples ,
with probability at least .
Remark 4.1 (Comparison of Theorem 4.1 with Lipton et al. (2018)).
Lipton et al. (2018) propose to use to estimate the importance weight vector. When is large enough, which is similar to the condition in Theorem 4.1, the authors provide a concentration bound on which holds under the following additional conditions: (i) has to be a square matrix, i.e., ; (ii) the estimation error in should be full rank, which is a strong requirement; (iii) needs to be a classifier. The analysis proposed in this paper does not require any of these assumptions and, furthermore, improves the bound provided in Lipton et al. (2018).
An alternative approach to estimating is to use regularized approximation. A regularized approach has been proposed in Azizzadenesheli et al. (2019) for the special case where is a classifier. In this paper, we generalize this approach to general functions. The underlying optimization for the regularized approach is as follows,
(7)
We refer to this estimator as E2.
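For illustration, a closed-form ridge variant of the regularized estimator (Eq. 7 as stated regularizes unsquared norms; the squared form below is a simplification that admits a closed-form solution, and all names are illustrative):

```python
import numpy as np

def regularized_weight_estimate(C_hat, b_hat, lam):
    """Ridge-style regularized weight estimate (illustrative variant of E2).

    Solves min_theta ||C_hat theta - (b_hat - C_hat 1)||^2 + lam ||theta||^2,
    shrinking theta toward zero, i.e., the weights toward all ones.
    """
    ones = np.ones(C_hat.shape[1])
    residual = b_hat - C_hat @ ones
    k = C_hat.shape[1]
    theta = np.linalg.solve(C_hat.T @ C_hat + lam * np.eye(k),
                            C_hat.T @ residual)
    return ones + theta
```

Unlike the direct inverse, this estimate is well defined for any sample size: as the regularization grows, the weights fall back to all ones rather than degenerating.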
Lemma 4.3.
Consider a non-degenerate matrix , vectors , where . Also consider the corresponding estimates, , and estimation errors . For , a solution to Eq. 7, we have,
The proof is deferred to Appendix A.2. It is important to note that the right hand sides of the bounds in Lemma 4.1 and Lemma 4.3 are equal; the only difference is the burn-in (minimum sample size) required in Lemma 4.1. Such a direct comparison is not possible using the bound provided in Lipton et al. (2018). Lemma 4.1 and Lemma 4.3 indicate that the regularized estimation method is more favorable for two main reasons: it does not require a minimum number of samples, and the minimum number of samples required by the direct inverse approach depends on a priori unknown parameters of the problem, of which the regularization based approach requires no prior knowledge.
Theorem 4.2.
This theorem directly follows from Lemma 4.2 and Lemma 4.3. Similar to Lipton et al. (2018), Azizzadenesheli et al. (2019) also use , and the result in Theorem 4.2 is a generalization of the prior work to general functions .
Note that the above mentioned bounds depend on . Prior works (Lipton et al., 2018; Azizzadenesheli et al., 2019) realized Eq. 3.1 only for the special case of . In this case, the square matrix has the special form of a confusion matrix, i.e., its entries sum to one. In the best case, where is a perfect classifier with zero loss in the special case of the realizable setting, ; otherwise . However, in this paper, when we allow more general to exploit the spectrum more appropriately, e.g., orthogonal points on a sphere (or cube), then . Therefore, the bounds in Theorems 4.1 and 4.2 further improve the prior bounds in Lipton et al. (2018); Azizzadenesheli et al. (2019).
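The conditioning point can be illustrated numerically. The matrices below are hypothetical forward operators for three balanced classes (not from the paper): a noisy classifier yields a confusion-matrix-like operator, while an f with orthogonal per-class outputs yields a scaled identity with a larger smallest singular value:

```python
import numpy as np

# Hypothetical forward operators for 3 balanced classes (P(y = j) = 1/3).
# Column j of the operator is P(y = j) * E[f(x) | y = j].
C_classifier = np.array([[0.6, 0.2, 0.2],
                         [0.2, 0.6, 0.2],
                         [0.2, 0.2, 0.6]]) / 3.0   # noisy classifier on the simplex
C_orthogonal = np.eye(3) / 3.0                     # f with orthogonal per-class outputs

sigma_min_classifier = np.linalg.svd(C_classifier, compute_uv=False)[-1]
sigma_min_orthogonal = np.linalg.svd(C_orthogonal, compute_uv=False)[-1]
```

A larger smallest singular value of the forward operator directly tightens the bounds above, which depend on its inverse.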
4.2 General Normed Label Spaces
For normed label spaces, we use Eq. 3.2 to approximate . However, as mentioned before, we need to estimate the functions and the operator in the equality. Let and be the estimates, respectively. We use to denote an upper bound (e.g., a high probability bound) on , on , and on . Let denote an upper bound on that of the true . For small enough , when , we have that exists and,
Therefore, for sufficiently small , the direct estimate of is given by the following estimator,
(8)
Lemma 4.4.
Consider . For the estimates, , and estimation errors , when , then , the solution to Eq. 3.2, satisfies,
The proof is deferred to Appendix A.3. We obtain and by applying the function to the data points in and the data points in . Ergo,
(9)
Note that and are in the RKHS, and therefore in a Hilbert space with norm , and is in the Hilbert-Schmidt space , which itself is a Hilbert space with norm . In the following, we deploy concentration results in Hilbert spaces.
Lemma 4.5.
The proof, deferred to Appendix A.4, follows from concentration inequalities in Hilbert spaces (and in general Banach spaces) (Pinelis, 1992; Rosasco et al., 2010).
Theorem 4.3.
Using the direct estimator , when the number of samples ,
with probability at least .
Theorem 4.3 directly follows from the statements of Lemma 4.4 and Lemma 4.5. We propose an alternative approach to estimate , based on regularized approximation. We propose the following regularized optimization problem, the estimator E4,
(11)
This optimization is designed such that the outcome minimizes the error in the desired objective while regularizing the shift toward zero.
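A finite-dimensional sketch of this regularized approach: discretize the label space on a grid, approximate the integral operator by quadrature, and solve a ridge-regularized linear system (the Gaussian kernel, the grid, and the squared-norm objective below are illustrative simplifications of Eq. 11):

```python
import numpy as np

def discretized_integral_operator(kernel, grid, density):
    """Quadrature approximation of (T g)(y) = integral of k(y, y') g(y')
    against the label density p(y') on a uniform grid of label values."""
    dy = grid[1] - grid[0]
    K = kernel(grid[:, None], grid[None, :])   # Gram matrix on the grid
    return K * (density[None, :] * dy)

def solve_shift_function(T, b, lam):
    """Ridge-regularized solve for the discretized shift function theta,
    shrinking the shift toward zero as in the regularized objective."""
    n = T.shape[1]
    return np.linalg.solve(T.T @ T + lam * np.eye(n), T.T @ b)
```

The regularized solve remains well defined even when the discretized operator is ill-conditioned, mirroring the advantage of E4 over the direct inverse.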
Lemma 4.6.
Consider an operator , functions , where . Also consider the corresponding estimates, , and estimation errors . For , a solution to Eq. 11, we have:
The proof is deferred to Appendix A.5. Lemma 4.6 is of independent importance and is an infinite-dimensional extension of finite sample value learning in the field of reinforcement learning (Pires and Szepesvári, 2012). Similar to the categorical setting, it is important to note that the right hand sides of the bounds in Lemma 4.4 and Lemma 4.6 are equal; the only difference is the burn-in (minimum sample size) required in Lemma 4.4. Lemma 4.4 and Lemma 4.6 indicate that the regularized estimation method is more favorable for two main reasons: it has no minimum sample size requirement, and the minimum number of samples required by the direct inverse approach depends on a priori unknown parameters of the problem, which are not needed in the regularized approach of Eq. 11.
Theorem 4.4.
4.3 Generalization
Having access to an estimate of the importance weight, we deploy importance weighted ERM to learn a predictor. As motivated by Azizzadenesheli et al. (2019), when, for instance, the number of samples from the target domain is small or the maximum expected shift is high, i.e., large , but at the same time, the number of samples required to have a reasonably small is not much higher than the number of samples provided, the estimate may not be reliable enough to use. In this case, we may discard the estimate and stick to the best empirical risk minimizer on the source domain, i.e., we set to zero ( to ones) in Eq. 1. Motivated by this consideration, we use regularized importance weights in the empirical risk minimization, i.e., for we use,
(12)
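A sketch of this regularized weighting (the interpolation form and parameter name are illustrative; in practice the mixing coefficient would be chosen from the confidence bound on the weight estimate):

```python
import numpy as np

def shrunk_weights(w_hat, lam):
    """Interpolate the estimated importance weights with the all-ones vector.

    lam = 1 discards the estimate (plain source ERM); lam = 0 trusts it
    fully; intermediate values hedge against an unreliable estimate.
    """
    w_hat = np.asarray(w_hat, dtype=float)
    return lam * np.ones_like(w_hat) + (1.0 - lam) * w_hat
```

The resulting weights are then plugged into the weighted ERM objective in place of the raw estimate.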
For a function , define the set of weighted loss functions,
and its Rademacher complexity as follows,
where is a collection of independent Rademacher random variables (Bartlett and Mendelson, 2002). Employing any of the estimators in Algorithm 1 to obtain , together with the statement of Theorem 1 in Azizzadenesheli et al. (2019), we have,
Theorem 4.5 (Generalization Guarantee).
For two probability measures and on a measure space , consider samples from the source and samples from the target domain. Algorithm 1 outputs , for which we have,
with probability at least . If the direct inverse method, E1 or E3, is used, the above bound holds when satisfies for categorical label spaces, and for normed label spaces.
The proof of Theorem 4.5 follows from MDFR Lemma (Lemma 4), and Theorem 1 in Azizzadenesheli et al. (2019).
5 Experiment
We empirically study the performance of the proposed importance weight estimators, in particular E2 (Eq. 7) for categorical spaces and E4 (Eq. 11) for normed spaces, on synthetic data.
[Figure 1: relative importance weight estimation error; left: fixed number of samples with a growing number of classes; right: fixed number of classes with a growing number of samples.]
Prior works provide an empirical study for the estimator in Eq. 7 when the function is a classifier. As discussed in subsection 4.1, and prescribed by the generalization in Theorem 4.2, allowing to be a general function might enhance the weight estimation. In the following, we compare the estimation error of the estimator in Eq. 7 when is a deep neural network classifier with a softmax last layer, whose outputs lie on the simplex, trained using the cross entropy loss, and when is the same neural network with no softmax layer, trained on one-hot encodings of the labels using the L2 loss, whose outputs therefore lie at the corners of a hypercube. We first study the case where the number of data points is fixed but the number of classes varies (Fig. 1, left), with the y-axis showing the relative estimation error of the importance weights. As indicated in Fig. 1 (left), using the hypercube provides a more consistent weight estimation than using the simplex. As the number of classes grows, we observe that both methods result in high error, due to an insufficient number of samples. In the second study, we keep the number of classes fixed at and increase the number of samples (Fig. 1, right), and observe that the hypercube provides a much better sample complexity and recovers the importance weights with far fewer samples than the simplex.
[Figure 2: importance weight estimation error in the normed label space setting as the number of samples increases.]
In Fig. 2, we present the result when . In this case, we use GP regression methods to learn the mapping from to , and compute the quantities in Eq. 9. We use the squared version of Eq. 11 to estimate . Fig. 2 shows that, as the number of samples increases, the estimation error improves. Finally, we have made the code available at https://github.com/kazizzad/LabelShiftEstimator. Please refer to Appendix C for details.
6 Related Works
Domain adaptation is the study of adapting to a new domain (the target) under minimal access to labeled or unlabeled data from it (Ben-David et al., 2010). In standard supervised learning, the source and target follow the same measure (Vapnik, 1999; Bartlett and Mendelson, 2002). In the case of arbitrary shifts in the measures, Ben-David et al. (2010) introduce a notion of H-divergence to derive a generalization analysis, and the multiple sources setting has been studied (Crammer et al., 2008). Robustness against distributional shifts, without explicit modeling assumptions on the shift, has been widely studied in the literature (Esfahani and Kuhn, 2018; Namkoong and Duchi, 2016). Moreover, under covariate shift, adversarial approaches have been studied to develop robust models (Liu and Ziebart, 2014; Chen et al., 2016), where robustness is achieved only against very small changes in the distributions in order to maintain sufficient performance.
When the shift between the two domains is structured, the problems of covariate shift and label shift have been considered. These settings become apparent in the context of causality (Schölkopf et al., 2012): the covariates cause the label (the reward in a contextual bandit), denoted the causal direction, or the label causes the covariates (a disease causes symptoms), denoted the anti-causal direction. When there is a shift in the measures, the knowledge of can be deployed for importance weighted risk minimization. In the setting of covariate shift, when the weights are known to the learner, the generalization power of importance weighted empirical risk minimization has been studied (Zadrozny, 2004; Cortes et al., 2010; Cortes and Mohri, 2014; Shimodaira, 2000). When the weights are not known, kernel methods have been deployed (Huang et al., 2007; Gretton et al., 2009, 2012; Zhang et al., 2013; Zaremba et al., 2013; Shimodaira, 2000). Such approaches fall short in high dimensional settings, especially for images.
Under some regularity conditions, a measure over the covariates can be seen as a mixture of class-conditional covariate distributions. Prior works use this observation and, under a strong assumption of pairwise mutual irreducibility (Scott et al., 2013), show that, using the Neyman-Pearson criterion (Blanchard et al., 2010), the mixture weights can be recovered in special cases (Sanderson and Scott, 2014; Ramaswamy et al., 2016; Iyer et al., 2014), which impose strong assumptions and computational challenges. In the presence of label shift, Bayesian methods are among other methods that impose complicated computational requirements: e.g., Storkey (2009); Chan and Ng (2005) require computing the posterior of the label distribution for a given prior, treat the outcome of a classifier as a probability distribution over labels, and suffer from a lack of generalization guarantees.
To address these challenges in label shift, also known as target shift (Zhang et al., 2013) and prior probability shift (Moreno-Torres et al., 2012; Kull and Flach, 2014; Hofer, 2015; Tasche, 2017), Lipton et al. (2018) propose a black box correction method which is applicable to a wide range of label shift problems and is drawn from the classical grouping problem (Buck et al., 1966; Forman, 2008; Saerens et al., 2002). The authors provide a way to estimate the importance weights and use them for importance weighted empirical risk minimization. Following this idea, Azizzadenesheli et al. (2019) relax the assumptions required in Lipton et al. (2018), such as the burn-in period and the full-rank assumption on the error matrix, and propose a regularized optimization method to estimate the importance weights up to a confidence interval. Using this confidence interval, Azizzadenesheli et al. (2019) propose a novel analysis based on the second moment of the importance weights and provide the first generalization guarantee for label shift. We utilize these developments in our paper.
Under the label shift setting, when the target domain has a balanced label distribution but the source is imbalanced, Cao et al. (2019) propose a margin based method to improve the generalization bound on the target domain. Calibration has been deployed for the label shift problem (Shrikumar and Kundaje, 2019), and was later shown to be connected with the importance weight approach (Garg et al., 2020). Recently, Kalan et al. (2020) provide a study of transfer learning in the framework of deep neural networks, and Shui et al. (2020) extend the H-divergence based guarantees to entropy based bounds.
7 Conclusion
In this paper, we study label shift and consider two cases: categorical and normed label spaces. We propose a suite of methods to estimate the importance weights from the source to the target domain using only unlabeled samples from the target and labeled samples from the source. We show that using such estimates results in desirable generalization properties.
In our motivating examples, we discussed a medical setting where we use labeled data from a source country and send a group of volunteers to a target country with enough expertise and equipment to gather statistics of symptoms. Our task is to learn a good predictor for the target domain. Now consider the case where we have access to a limited number of specialists in the target country who can dedicate their time to diagnosing patients. If the diseases are not deadly, which patients should we send to the doctors to be diagnosed, with the goal of obtaining a good predictor with a small number of queries? In future work, we plan to study this setting, where we need to actively decide who is to be diagnosed/investigated. This is active learning domain adaptation under the label shift setting.
References
- Azizzadenesheli et al. (2019) Kamyar Azizzadenesheli, Anqi Liu, Fanny Yang, and Animashree Anandkumar. Regularized learning for domain adaptation under label shifts. arXiv preprint arXiv:1903.09734, 2019.
- Bartlett and Mendelson (2002) Peter L Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 2002.
- Ben-David et al. (2010) Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine learning, 2010.
- Blanchard et al. (2010) Gilles Blanchard, Gyemin Lee, and Clayton Scott. Semi-supervised novelty detection. Journal of Machine Learning Research, 11(Nov):2973–3009, 2010.
- Buck et al. (1966) AA Buck, JJ Gart, et al. Comparison of a screening test and a reference test in epidemiologic studies. ii. a probabilistic model for the comparison of diagnostic tests. American Journal of Epidemiology, 1966.
- Cao et al. (2019) Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. In Advances in Neural Information Processing Systems, pages 1567–1578, 2019.
- Chan and Ng (2005) Yee Seng Chan and Hwee Tou Ng. Word sense disambiguation with distribution estimation. In IJCAI, 2005.
- Chen et al. (2016) Xiangli Chen, Mathew Monfort, Anqi Liu, and Brian D Ziebart. Robust covariate shift regression. In Artificial Intelligence and Statistics, pages 1270–1279, 2016.
- Cortes and Mohri (2014) Corinna Cortes and Mehryar Mohri. Domain adaptation and sample bias correction theory and algorithm for regression. Theoretical Computer Science, 519:103–126, 2014.
- Cortes et al. (2010) Corinna Cortes, Yishay Mansour, and Mehryar Mohri. Learning bounds for importance weighting. In Advances in neural information processing systems, pages 442–450, 2010.
- Crammer et al. (2008) Koby Crammer, Michael Kearns, and Jennifer Wortman. Learning from multiple sources. Journal of Machine Learning Research, 9(Aug):1757–1774, 2008.
- Cybenko (1989) George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4):303–314, 1989.
- Esfahani and Kuhn (2018) Peyman Mohajerin Esfahani and Daniel Kuhn. Data-driven distributionally robust optimization using the wasserstein metric: Performance guarantees and tractable reformulations. Mathematical Programming, 2018.
- Forman (2008) George Forman. Quantifying counts and costs via classification. Data Mining and Knowledge Discovery, 17(2):164–206, 2008.
- Garg et al. (2020) Saurabh Garg, Yifan Wu, Sivaraman Balakrishnan, and Zachary C Lipton. A unified view of label shift estimation. arXiv preprint arXiv:2003.07554, 2020.
- Gretton et al. (2009) Arthur Gretton, Alexander J Smola, Jiayuan Huang, Marcel Schmittfull, Karsten M Borgwardt, and Bernhard Schölkopf. Covariate shift by kernel mean matching. In Dataset Shift in Machine Learning. MIT Press, 2009.
- Gretton et al. (2012) Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(Mar), 2012.
- Hofer (2015) Vera Hofer. Adapting a classification rule to local and global shift when only unlabelled data are available. European Journal of Operational Research, 243(1):177–189, 2015.
- Huang et al. (2007) Jiayuan Huang, Arthur Gretton, Karsten M Borgwardt, Bernhard Schölkopf, and Alex J Smola. Correcting sample selection bias by unlabeled data. In Advances in neural information processing systems, 2007.
- Iyer et al. (2014) Arun Iyer, Saketha Nath, and Sunita Sarawagi. Maximum mean discrepancy for class ratio estimation: Convergence bounds and kernel selection. In International Conference on Machine Learning, pages 530–538, 2014.
- Kalan et al. (2020) Seyed Mohammadreza Mousavi Kalan, Zalan Fabian, A Salman Avestimehr, and Mahdi Soltanolkotabi. Minimax lower bounds for transfer learning with linear and one-hidden layer neural networks. arXiv preprint arXiv:2006.10581, 2020.
- Kress et al. (1989) Rainer Kress, V Maz’ya, and V Kozlov. Linear integral equations, volume 82. Springer, 1989.
- Kull and Flach (2014) Meelis Kull and Peter Flach. Patterns of dataset shift. In First International Workshop on Learning over Multiple Contexts (LMCE) at ECML-PKDD, 2014.
- Lang (2012) Serge Lang. Real and functional analysis, volume 142. Springer Science & Business Media, 2012.
- Li et al. (2020a) Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020a.
- Li et al. (2020b) Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Graph kernel network for partial differential equations. arXiv preprint arXiv:2003.03485, 2020b.
- Li et al. (2020c) Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Andrew Stuart, Kaushik Bhattacharya, and Anima Anandkumar. Multipole graph neural operator for parametric partial differential equations. Advances in Neural Information Processing Systems, 33, 2020c.
- Lipton et al. (2018) Zachary C Lipton, Yu-Xiang Wang, and Alex Smola. Detecting and correcting for label shift with black box predictors. arXiv preprint arXiv:1802.03916, 2018.
- Liu and Ziebart (2014) Anqi Liu and Brian Ziebart. Robust classification under sample selection bias. In Advances in neural information processing systems, pages 37–45, 2014.
- Moreno-Torres et al. (2012) Jose G Moreno-Torres, Troy Raeder, Rocío Alaiz-Rodríguez, Nitesh V Chawla, and Francisco Herrera. A unifying view on dataset shift in classification. Pattern recognition, 2012.
- Namkoong and Duchi (2016) Hongseok Namkoong and John C Duchi. Stochastic gradient methods for distributionally robust optimization with f-divergences. In Advances in Neural Information Processing Systems, pages 2208–2216, 2016.
- Nyström (1930) Evert J Nyström. Über die praktische auflösung von integralgleichungen mit anwendungen auf randwertaufgaben. Acta Mathematica, 1930.
- Pinelis (1992) Iosif Pinelis. An approach to inequalities for the distributions of infinite-dimensional martingales. In Probability in Banach Spaces, 8: Proceedings of the Eighth International Conference, pages 128–134. Springer, 1992.
- Pires and Szepesvári (2012) Bernardo Avila Pires and Csaba Szepesvári. Statistical linear estimation with penalized estimators: an application to reinforcement learning. arXiv preprint arXiv:1206.6444, 2012.
- Ramaswamy et al. (2016) Harish Ramaswamy, Clayton Scott, and Ambuj Tewari. Mixture proportion estimation via kernel embeddings of distributions. In International Conference on Machine Learning, pages 2052–2060, 2016.
- Rosasco et al. (2010) Lorenzo Rosasco, Mikhail Belkin, and Ernesto De Vito. On learning with integral operators. Journal of Machine Learning Research, 11(2), 2010.
- Saerens et al. (2002) Marco Saerens, Patrice Latinne, and Christine Decaestecker. Adjusting the outputs of a classifier to new a priori probabilities: a simple procedure. Neural computation, 14(1):21–41, 2002.
- Sanderson and Scott (2014) Tyler Sanderson and Clayton Scott. Class proportion estimation with application to multiclass anomaly rejection. In Artificial Intelligence and Statistics, pages 850–858, 2014.
- Schölkopf et al. (2012) Bernhard Schölkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, and Joris Mooij. On causal and anticausal learning. arXiv preprint arXiv:1206.6471, 2012.
- Scott et al. (2013) Clayton Scott, Gilles Blanchard, and Gregory Handy. Classification with asymmetric label noise: Consistency and maximal denoising. In Conference On Learning Theory, 2013.
- Shimodaira (2000) Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 2000.
- Shrikumar and Kundaje (2019) Avanti Shrikumar and Anshul Kundaje. Calibration with bias-corrected temperature scaling improves domain adaptation under label shift in modern neural networks. arXiv preprint arXiv:1901.06852, 1, 2019.
- Shui et al. (2020) Changjian Shui, Qi Chen, Jun Wen, Fan Zhou, Christian Gagné, and Boyu Wang. Beyond H-divergence: Domain adaptation theory with jensen-shannon divergence. arXiv preprint arXiv:2007.15567, 2020.
- Storkey (2009) Amos Storkey. When training and test sets are different: characterizing learning transfer. Dataset shift in machine learning, pages 3–28, 2009.
- Tasche (2017) Dirk Tasche. Fisher consistency for prior probability shift. The Journal of Machine Learning Research, 18(1):3338–3369, 2017.
- Tropp (2012) Joel A Tropp. User-friendly tail bounds for sums of random matrices. Foundations of computational mathematics, 12(4):389–434, 2012.
- Vapnik (1999) Vladimir Naumovich Vapnik. An overview of statistical learning theory. IEEE transactions on neural networks, 1999.
- Zadrozny (2004) Bianca Zadrozny. Learning and evaluating classifiers under sample selection bias. In Proceedings of the twenty-first international conference on Machine learning. ACM, 2004.
- Zaremba et al. (2013) Wojciech Zaremba, Arthur Gretton, and Matthew Blaschko. B-test: A non-parametric, low variance kernel two-sample test. In Advances in neural information processing systems, 2013.
- Zhang et al. (2013) Kun Zhang, Bernhard Schölkopf, Krikamol Muandet, and Zhikun Wang. Domain adaptation under target and conditional shift. In International Conference on Machine Learning, 2013.
Appendix
Appendix A Proofs
A.1 Proof of Lemma 4.1
Proof of Lemma 4.1.
By definition, we have
and
Therefore, we have
(13)
Using the Cauchy–Schwarz inequality, we have
(14)
Using this statement, we have
(15)
∎
A.2 Proof of Lemma 4.3
Proof of Lemma 4.3.
Following Theorem 3.4 of Pires and Szepesvári [2012], we have that , the solution to the minimization in Eq. 7, satisfies
(16)
The right-hand side of Eq. A.2 is upper bounded by plugging in the true instead of approaching the infimum, i.e.,
(17)
where the last line follows since . Using the equality one more time on the left-hand side of Eq. A.2, we have
∎
A.3 Proof of Lemma 4.4
Proof of Lemma 4.4.
By definition, since has a bounded inverse, we have
and since
we have
Therefore, we have
(21)
Using the Cauchy–Schwarz inequality, we have
(22)
Using this statement, we have
(23)
∎
A.4 Proof of Lemma 4.5
Proof of Lemma 4.5.
For , a collection of mean-zero independent random variables in a measure space of , if , a.s., then,
(24)
with probability at least [Pinelis, 1992, Rosasco et al., 2010].
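For concreteness, one standard Hoeffding-type form of this Hilbert-space concentration bound, in notation we introduce here ($B$ an almost-sure bound on $\|\xi_i\|$, $n$ the number of variables; this is a reconstruction, and the constants may differ from those of Eq. 24), is:

```latex
\left\| \frac{1}{n}\sum_{i=1}^{n} \xi_i \right\|
\;\le\; B\,\sqrt{\frac{2\log(2/\delta)}{n}}
\qquad \text{with probability at least } 1-\delta .
```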
To develop the confidence interval for in , we set , and for the -th sample in the portion of data points in the source data set . Note that is a mean-zero random variable for all and . Ergo, and , resulting in the following
(25)
with probability at least . With a similar argument for data points in the target data set , we have
(26)
with probability at least , and
with probability at least . ∎
A.5 Proof of Lemma 4.6
Lemma A.1.
For any function , we have:
(27)
Proof of Lemma A.1.
Using the triangle inequality twice, we have
(28)
therefore,
(29)
Similarly, we have
(30)
therefore,
(31)
Putting these inequalities together, we have
(32)
which states the lemma. ∎
Now we consider the following optimization. For a given , we define as follows:
(33)
Lemma A.2.
For , the solution to Eq. 33, we have,
(36)
Proof of Lemma A.2.
Proof of Lemma 4.6.
We directly apply the statement of Lemma A.2 to derive Lemma 4.6. The solution to Eq. 33 and , the solution to Eq. 11, are equal when we set to . Therefore, using Lemma A.2, we get
(40)
Using the fact that, for the true , we have that , we obtain
(41)
which gives the first statement in Lemma 4.6. Again using the fact that, for the true , we have that , and plugging in the true on the right-hand side of Eq. A.5, we have
(42)
Using the above statement, we have
(43)
which gives the second statement in Lemma 4.6. ∎
Appendix B Neural Operator
In this section, we provide a discussion on how one may use neural networks to approximate operators.
Disclaimer: The following study is for the sake of discussion. We did not attempt to make the results tight and did not attempt to make them general. A further significantly involved study is required to generalize the following results.
This discussion is motivated by a series of works on neural operators, where the kernel is learned using a deep neural network and the Nyström approximation [Nyström, 1930] is deployed to approximate the integral [Li et al., 2020a, b, c].
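As a toy illustration of the Nyström idea behind these works — our own sketch, not the construction in the cited papers; the Gaussian kernel, the uniform measure, and all names here are illustrative choices — one can approximate the action of an integral operator by an empirical average over samples from the underlying measure:

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # k(x, y) = exp(-(x - y)^2 / (2 bandwidth^2)); a stand-in for a learned kernel
    return np.exp(-((x - y) ** 2) / (2.0 * bandwidth ** 2))

def nystrom_apply(kernel, f, samples, x):
    # Nystrom / Monte Carlo approximation of (K f)(x) = E_{y ~ mu}[k(x, y) f(y)]
    return np.mean(kernel(x, samples) * f(samples))

rng = np.random.default_rng(0)
ys_mu = rng.uniform(-1.0, 1.0, size=20000)  # samples from mu = Uniform(-1, 1)
f = np.cos                                   # a test function

approx = nystrom_apply(gaussian_kernel, f, ys_mu, x=0.3)
```

Since mu here is the uniform probability measure on [-1, 1], the empirical average converges to the integral of k(x, y) f(y) against mu as the number of samples grows; in the neural-operator works, the learned network plays the role of the kernel inside the same average.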
We provide this discussion for a general case of integral operators in spaces induced by a symmetric, positive definite, continuous reproducing kernel , such that is finite. Consider an integral operator such that,
for a measure . We assume, for simplicity, that is a finite measure. Note that, for any function in and , we have .
Since is continuous, for the compact space , using the universal approximation result of Cybenko [1989] for neural networks with non-polynomial activations, for any , there exists a neural network , itself a continuous function, such that,
We define the difference function . Using this result, we derive the deviations in the induced operators, , and , where is such that for any , we have,
which exists for the finite measure if exists.
Our first result elaborates on the approximation error in .
Proposition B.1.
For any , under the above construction, we have,
(44)
Proof of Proposition B.1.
For any , and we have,
With a similar argument we have,
Putting these two together results in the final statement.
∎
The result in Proposition B.1 states that, for any function in , we can expect the output of the neural operator to be close to that of . However, this result does not provide an approximation in the space of operators. One might be interested in the closeness of and in some sense.
Consider the function space , and a countable set of its basis functions . Also, for the product measure , let denote the corresponding set of basis functions for . Note that the set is not required to be complete.
Proposition B.2.
Under the above construction, for any countable set , we have,
Proof of Proposition B.2.
For any , we have
Note that, since , it is also in . Therefore, . On the other hand, we have since we did not require to form a complete basis. Putting these statements together, we have,
Ergo, we have,
∎
The result of Proposition B.2 states that a neural network can be used to construct a neural operator that approximates a class of integral operators well in the sense.
Appendix C Details in the Experimental Study
In the first part of the experiment, we study the setting where is a finite set. For this experiment, we have , and
we use a multilayer neural network, with hidden-layer sizes for . When is a classifier, we add an extra softmax layer at the end. We use the sklearn package to train these models. In particular, we use
from sklearn.neural_network import MLPRegressor

MLPRegressor(solver='lbfgs', alpha=1e-1,
             learning_rate='adaptive', learning_rate_init=1e-3,
             max_iter=5000, activation='relu',
             hidden_layer_sizes=(50, 200, 500, 200, 50))
When has a softmax last layer, i.e., its output lies on the simplex, we use the logistic-regression loss to train it. When there is no softmax layer, we use a one-hot encoding of the labels and the L2 loss for training.
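To make the regression target concrete, here is a minimal one-hot encoding sketch (our own helper; the label values and class count are made up for illustration):

```python
import numpy as np

def one_hot(labels, nc):
    # Map integer class labels in {0, ..., nc-1} to one-hot rows,
    # which serve as the regression targets under the L2 loss.
    Y = np.zeros((len(labels), nc))
    Y[np.arange(len(labels)), labels] = 1.0
    return Y

targets = one_hot(np.array([0, 2, 1]), nc=4)  # shape (3, 4)
```

A regressor fit on these targets then minimizes the squared error to the one-hot vectors, which is the L2 training described above.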
To generate the data, we first set the probability of labels as follows,
import numpy as np

Py = np.zeros(nc)
Py[0:int(nc):2] = 1 / nc
Py[1:int(nc):2] = 3 / nc
Py = Py / np.sum(np.abs(Py))

Qy = np.zeros(nc)
Qy[0:int(nc):2] = 3 / nc
Qy[1:int(nc):2] = 1 / nc
Qy = Qy / np.sum(np.abs(Qy))
where nc is the number of classes. We first draw samples of the labels, and for each label we draw a Gaussian random variable and set x to be the label plus this additive Gaussian noise.
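A minimal sketch of this sampling scheme (our own code; the class count, sample sizes, noise scale, and the name sample_domain are illustrative choices, and the Py/Qy vectors below hard-code the alternating 1/nc, 3/nc pattern for nc = 4):

```python
import numpy as np

def sample_domain(py, n, noise=0.5, rng=None):
    # Draw labels y ~ py, then set x = y + Gaussian noise, as described above.
    rng = np.random.default_rng() if rng is None else rng
    y = rng.choice(len(py), size=n, p=py)
    x = y + noise * rng.standard_normal(n)
    return x, y

Py = np.array([1.0, 3.0, 1.0, 3.0]); Py = Py / Py.sum()  # source label marginal
Qy = np.array([3.0, 1.0, 3.0, 1.0]); Qy = Qy / Qy.sum()  # target label marginal

xs, ys = sample_domain(Py, 5000, rng=np.random.default_rng(0))  # labeled source
xt, yt = sample_domain(Qy, 5000, rng=np.random.default_rng(1))  # unlabeled target
```

Under this generation, the importance weight of class i is Qy[i] / Py[i], which is what the estimators in the experiment aim to recover from xs, ys, and xt alone.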
The relative error is the L2 norm of error divided by the L2 norm of the importance weight vector.
For the case of regression, we have and . We use Gaussian process regression methods for . In particular, we use
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

K_rbf = RBF(length_scale=.9, length_scale_bounds=(1e-2, 1e3))
and
kernel = 1.0 * K_rbf + WhiteKernel(noise_level=0.01,
                                   noise_level_bounds=(1e-10, 1e+1))
for the choice of kernel, and use
GaussianProcessRegressor(kernel=kernel, alpha=0.0)
for the regression.
The kernel used to estimate is . For the estimation, we use the relaxed objective of
Using a similar argument as in the representer theorem, , the minimizer of the above optimization, has the following form,
Therefore, we are left with estimating , for which we deploy the kernel machinery to compute efficiently. Please refer to the code for the implementation.
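In standard representer-theorem notation (a reconstruction in symbols we introduce here: $k$ the chosen kernel, $y_1,\dots,y_n$ the observed labels, and $\alpha_i$ coefficients to be fit), such an expansion is of the form:

```latex
\widehat{w}(\cdot) \;=\; \sum_{i=1}^{n} \alpha_i\, k(y_i, \cdot),
```

so the infinite-dimensional minimization reduces to fitting the finite coefficient vector $\alpha$ through the kernel Gram matrices.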
For this setting, we draw samples by first drawing samples of the 's. In this case, we draw samples from a distribution with PDF for the source and for the target, for and . Note that, in this case, the importance weight function is . We set x to be y plus additive Gaussian noise.
Since the estimator is a function, in order to compute the relative error, we use a grid of 100 points in the interval of . The relative error is the L2 norm of the error computed on these points, divided by the L2 norm of the true importance weight function computed on the same grid.
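This evaluation can be sketched as follows (our own code; the interval endpoints and the placeholder true weight function are illustrative, since the paper's interval did not carry over):

```python
import numpy as np

def relative_error_on_grid(w_hat, w_true, lo=0.0, hi=1.0, n_points=100):
    # L2 norm of (estimate - truth) on a uniform grid of n_points,
    # divided by the L2 norm of the truth on the same grid.
    grid = np.linspace(lo, hi, n_points)
    err = w_hat(grid) - w_true(grid)
    return np.linalg.norm(err) / np.linalg.norm(w_true(grid))

# A perfect estimator has relative error exactly zero.
w_true = lambda y: 1.0 + y
assert relative_error_on_grid(w_true, w_true) == 0.0
```

An estimator off by a constant factor c has relative error |c - 1| under this metric, which makes the reported numbers easy to interpret.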