
Distributionally Robust Learning with Stable Adversarial Training

Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li

J. Liu, Z. Shen, P. Cui and L. Zhou are with the Department of Computer Science and Technology, Tsinghua University, Beijing, China. B. Li is with the School of Economics and Management, Tsinghua University, Beijing, China. K. Kuang is with the College of Computer Science and Technology, Zhejiang University, Hangzhou, China.
E-mail: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected].
Peng Cui and Bo Li are the corresponding authors.
Abstract

Machine learning algorithms with empirical risk minimization are vulnerable under distributional shifts due to the greedy adoption of all the correlations found in training data. There is an emerging literature on tackling this problem by minimizing the worst-case risk over an uncertainty set. However, existing methods mostly construct ambiguity sets by treating all variables equally regardless of the stability of their correlations with the target, resulting in overwhelmingly large uncertainty sets and low confidence of the learner. In this paper, we propose a novel Stable Adversarial Learning (SAL) algorithm that leverages heterogeneous data sources to construct a more practical uncertainty set and conducts differentiated robustness optimization, where covariates are differentiated according to the stability of their correlations with the target. We theoretically show that our method is tractable for stochastic gradient-based optimization and provide performance guarantees for our method. Empirical studies on both simulation and real datasets validate the effectiveness of our method in terms of uniformly good performance across unknown distributional shifts.

Index Terms:
Stable Adversarial Learning, Spurious Correlation, Distributionally Robust Learning, Wasserstein Distance

1 Introduction

Traditional machine learning algorithms that optimize the average empirical loss often suffer from poor generalization performance under distributional shifts induced by latent heterogeneity, unobserved confounders or selection biases in training data [1, 2, 3]. However, in high-stake applications such as medical diagnosis [4], criminal justice [5, 6] and autonomous driving [7], it is critical for learning algorithms to ensure robustness against potentially unseen data. Therefore, robust learning methods have recently attracted much attention due to their favorable property of robustness guarantees [8, 9, 10].

Instead of optimizing the empirical cost on training data, robust learning methods seek to optimize the worst-case cost over an uncertainty set; they can be further separated into two main branches, named adversarially and distributionally robust learning. In adversarially robust learning, the uncertainty set is constructed point-wise [9, 11, 10, 12]. Specifically, an adversarial attack is performed independently on each data point within an $L_2$ or $L_\infty$ norm ball around itself to maximize the loss of the current classification model. In distributionally robust learning, on the other hand, the uncertainty set is characterized on a distributional level [13, 14, 15]. A joint perturbation, typically measured by the Wasserstein distance or $f$-divergence between distributions, is applied to the entire distribution entailed by the training data. These methods can provide robustness guarantees under distributional shifts when the testing distribution is captured in the built uncertainty set. However, in real scenarios, to contain the possible true testing distribution, the uncertainty set is often overwhelmingly large, which results in learned models with fairly low confidence; this is also referred to as the over-pessimism or the low confidence problem [16, 17]. That is, with an overwhelmingly large set, the learner optimizes for implausible worst-case scenarios, resulting in meaningless results (e.g. the classifier assigns equal probability to all classes). Such a problem greatly hurts the generalization ability of robust learning methods in practice.

The essential problem of the above methods lies in the construction of the uncertainty set. To address the over-pessimism of the learning algorithm, one should form a more practical uncertainty set which is likely to contain the potential distributional shifts in the future and meanwhile is as small as possible. More specifically, in real applications, we observe that different covariates may be perturbed in a non-uniform way, which should be considered when building a practical uncertainty set. Take the problem of waterbirds and landbirds classification as an example [18]. There exist two types of covariates: the stable covariates (e.g. those representing the bird itself) preserve immutable correlations with the target across different environments, while the unstable ones (e.g. those representing the background) are quite likely to change (e.g. waterbirds on land). Therefore, for the example above, the construction of the uncertainty set should be anisotropic, mainly focusing on the perturbation of the unstable covariates (e.g. background) to generate more practical and meaningful samples.

Further, we illustrate the anisotropic uncertainty set in Figure 1, where blue points denote the observed training distribution ($\mathcal{N}(0,I_2)$), and orange points are sampled from all distributions in the uncertainty set captured by an isotropic Wasserstein ball around the observed distribution. We can see from Figure 1 that the original distribution is perturbed equally along both the stable and unstable directions. With the intuition above, we propose that the ideal uncertainty set should be like the green points, which only perturb the training distribution along unstable directions. Following this idea, there are several works [19, 20] based on adversarial attack which focus on perturbing the color or background of images to improve adversarial robustness. However, these methods mainly follow a step-by-step routine where segmentation is conducted first to separate the background from the foreground, and they cannot theoretically provide robustness guarantees under unknown distributional shifts, which greatly limits their application to more general settings.

Figure 1: Illustration of the anisotropic adversarial distribution set, where blue points denote the observed data distribution, orange points denote the adversarial distribution set produced by an isotropic Wasserstein ball, and the green ones show the ideal set that incorporates realistic distributional shifts.

In this paper, we propose the Stable Adversarial Learning (SAL) algorithm to address this problem in a more principled and unified way, leveraging heterogeneous data sources to construct a more practical uncertainty set. Specifically, we adopt the framework of Wasserstein distributionally robust learning (WDRL) and further characterize the uncertainty set to be anisotropic according to the stability of covariates across multiple environments, which induces stronger adversarial perturbations on unstable covariates than on stable ones. A synergistic algorithm is designed to jointly optimize the covariate differentiating process as well as the adversarial training process of the model's parameters. Compared with traditional robust learning techniques, the proposed method is able to provide robustness under strong distributional shifts without hurting much of the learner's confidence. Theoretically, we prove that our method constructs a more compact uncertainty set, which as far as we know is the first analysis of the compactness of adversarial sets in the WDRL literature. Empirically, the advantages of our SAL algorithm are demonstrated on both synthetic and real-world datasets in terms of uniformly good performance across distributional shifts.

2 Related Work

In this section, we investigate several strands of related literature more thoroughly, including domain adaptation, domain generalization, stable learning and distributionally robust learning.

Domain adaptation methods [21] leverage the data from the target domain to assist model training on the source domain, so that the resulting model can capture the possible distributional shift in testing. Shimodaira [22] proposes to assign each training datum a new weight equal to the density ratio between the source and target distributions, thereby guaranteeing the optimality of the learned model on the test distribution. Several techniques have since been proposed to estimate this ratio more accurately, such as discriminative estimation [23], kernel mean matching [24] and maximum entropy [25]. Apart from reweighting methods, deep learning based methods [26, 27] learn a transformation in feature space to characterize both source and target domains. However, the deployment of domain adaptation methods in real applications, where one can hardly access data from the target domain, is quite limited.

Compared with domain adaptation, domain generalization techniques do not require the availability of target domain data and have become increasingly popular in recent years due to their practicability. Different from domain adaptation, domain generalization methods propose to learn a domain-invariant classifier from multiple training domains. Muandet et al. [28] propose a kernel-based optimization algorithm to learn an invariant latent space of data across training domains. Through the lens of causality [29, 30], Arjovsky et al. [31] propose Invariant Risk Minimization to learn invariant representations with a theoretical guarantee of the optimality of out-of-distribution generalization, which has gained much attention recently. Also, stable learning methods [32, 3, 33] propose to decorrelate the covariates via sample reweighting to estimate the real causal effects, which enhances stability under distributional shifts. However, they only deal with the covariate shift problem and do not apply to other kinds of distributional shifts (e.g. concept shifts brought by anti-causal variables).

Distributionally robust learning (DRL), from the optimization literature, proposes to optimize the worst-case cost over an uncertainty distribution set, so as to protect the model against the potential distributional shifts in that set, which is constrained by moment or support conditions [34, 35] or $f$-divergence [36, 17]. As the uncertainty set formulated by a Wasserstein ball is much more flexible, Wasserstein Distributionally Robust Learning (WDRL) has been widely studied [37, 13, 14]. WDRL for logistic regression was proposed by Abadeh et al. [37]. Sinha et al. [13] achieved moderate levels of robustness with little computational cost relative to empirical risk minimization via a Lagrangian penalty formulation of WDRL. Esfahani and Kuhn [14] reformulated distributionally robust optimization problems over Wasserstein balls as finite convex programs under mild assumptions. Although DRL offers an alternative to empirical risk minimization for robust performance under distributional perturbations, there has been work questioning its real effect in practice. Hu et al. [38] proved that when DRL is applied to classification tasks, the obtained classifier ends up being optimal for the observed training distribution, and the core of the proof lies in the over-flexibility of the built uncertainty set. Frogner et al. [16] also pointed out the problem of the overwhelmingly large decision set, and they used a large number of unlabeled examples to further constrain the distribution set.

3 Problem Setting

As mentioned above, the uncertainty set built in WDRL is often overwhelmingly large in wild high-dimensional scenarios. To demonstrate this over-pessimism problem of WDRL, we design a toy example in Section 6.1.1 to show the necessity of constructing a more practical uncertainty set. Indeed, without any prior knowledge or structural assumptions, it is quite difficult to design a practical set for robustness under distributional shifts.

Therefore, in this work, we consider a dataset $D=\{D^e\}_{e\in\mathcal{E}_{tr}}$, which is a mixture of data $D^e=\{(x_i^e,y_i^e)\}_{i=1}^{n_e}$ collected from multiple training environments $e\in\mathcal{E}_{tr}$; $x_i^e\in\mathcal{X}$ and $y_i^e\in\mathcal{Y}$ are the $i$-th data point and label from environment $e$ respectively. Specifically, each dataset $D^e$ contains examples identically and independently distributed according to some joint distribution $P_{XY}^e$ on $\mathcal{X}\times\mathcal{Y}$. Given the observation that in real scenarios different covariates have different extents of stability, we propose Assumption 1.

Assumption 1.

There exists a decomposition of all the covariates $X=\{S,V\}$, where $S$ represents the stable covariate set and $V$ represents the unstable one, so that for all environments $e\in\mathcal{E}$, $\mathbb{E}[Y^e|S^e=s,V^e=v]=\mathbb{E}[Y^e|S^e=s]=\mathbb{E}[Y|S=s]$.

Intuitively, Assumption 1 indicates that the correlation between the stable covariates $S$ and the target $Y$ stays invariant across environments, which is quite similar to the invariance in [39, 31, 40]. Moreover, Assumption 1 also implies that the influence of $V$ on the target $Y$ can be wiped out as long as the whole information of $S$ is accessible. Under Assumption 1, the disparity among covariates revealed in the heterogeneous datasets can be leveraged for a better construction of the uncertainty set. Based on Assumption 1, we propose our problem:

Problem 1.

Given multi-environment training data $D=\{D^e\}_{e\in\mathcal{E}_{tr}}$, under Assumption 1, the goal is to build a more practical uncertainty set for distributionally robust learning and achieve stable performance across distributional shifts, in terms of a low $\mathrm{Mean\_Error}=\frac{1}{|\mathcal{E}_{te}|}\sum_{e\in\mathcal{E}_{te}}\mathcal{L}^{e}$ and a low $\mathrm{Std\_Error}=\sqrt{\frac{1}{|\mathcal{E}_{te}|-1}\sum_{e\in\mathcal{E}_{te}}\left(\mathcal{L}^{e}-\mathrm{Mean\_Error}\right)^{2}}$.

4 Method

In this work, we propose the Stable Adversarial Learning (SAL) algorithm, which leverages heterogeneous data to build a more practical uncertainty set with covariates differentiated according to their stability.

Firstly, we introduce the Wasserstein Distributionally Robust Learning (WDRL) framework which attempts to learn a model with minimal risk against the worst-case distribution in the uncertainty set characterized by Wasserstein distance:

Definition 1.

Let $\mathcal{Z}\subset\mathbb{R}^{m+1}$ with $\mathcal{Z}=\mathcal{X}\times\mathcal{Y}$. Given a transportation cost function $c:\mathcal{Z}\times\mathcal{Z}\rightarrow[0,\infty)$, which is nonnegative, lower semi-continuous and satisfies $c(z,z)=0$, for probability measures $P$ and $Q$ supported on $\mathcal{Z}$, the Wasserstein distance between $P$ and $Q$ is:

$W_c(P,Q)=\inf\limits_{M\in\Pi(P,Q)}\mathbb{E}_{(z,z^{\prime})\sim M}[c(z,z^{\prime})]$ (1)

where $\Pi(P,Q)$ denotes the set of couplings, i.e. measures $M$ on $\mathcal{Z}\times\mathcal{Z}$ with $M(A,\mathcal{Z})=P(A)$ and $M(\mathcal{Z},A)=Q(A)$.
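To make Definition 1 concrete, below is a minimal Python sketch that computes the Wasserstein distance between two equal-size empirical samples under a weighted squared-$L_2$ cost (the form used later in Equation 4); for uniform empirical measures the optimal coupling reduces to an optimal assignment. The helper name `empirical_wasserstein` is ours, not from the paper's code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def empirical_wasserstein(P, Q, w=None):
    """W_c between two equal-size empirical samples P, Q (each n x m)
    under c_w(z, z') = ||w * (z - z')||^2; with uniform mass on n points
    the optimal coupling is an optimal assignment."""
    w = np.ones(P.shape[1]) if w is None else np.asarray(w)
    diff = P[:, None, :] - Q[None, :, :]       # n x n x m pairwise differences
    cost = ((w * diff) ** 2).sum(-1)           # pairwise transport costs
    rows, cols = linear_sum_assignment(cost)   # optimal coupling
    return cost[rows, cols].mean()

# e.g. the distance grows with the scale of the perturbation:
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
print(empirical_wasserstein(X, X + rng.normal(scale=0.1, size=X.shape)))
```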

Following the intuition above that the uncertainty should not be isotropic along stable and unstable directions, we propose to learn an anisotropic uncertainty set with the help of heterogeneous environments. The objective function of our SAL algorithm is:

$\min\limits_{\theta\in\Theta}\sup\limits_{Q:W_{c_w}(Q,P_0)\leq\rho}\mathbb{E}_{X,Y\sim Q}[\ell(\theta;X,Y)]$ (2)
s.t. $w\in\arg\min\limits_{w\in\mathcal{W}}\left\{\frac{1}{|\mathcal{E}_{tr}|}\sum_{e\in\mathcal{E}_{tr}}\mathcal{L}^{e}(\theta)+\alpha\max\limits_{e_p,e_q\in\mathcal{E}_{tr}}\left(\mathcal{L}^{e_p}-\mathcal{L}^{e_q}\right)\right\}$ (3)

where $P_0$ denotes the training distribution and $W_{c_w}$ denotes the Wasserstein distance with transportation cost function $c_w$ defined as

$c_w(z,z^{\prime})=\|w\odot(z-z^{\prime})\|^{2}$ (4)

and $\mathcal{W}=\left\{w:w\in[1,+\infty)^{m+1}\ \text{and}\ \min(w)=1\right\}$ denotes the covariate weight space ($\min(w)$ denotes the minimal element of $w$), $\mathcal{L}^{e}$ denotes the average loss in environment $e\in\mathcal{E}_{tr}$, and $\alpha$ is a hyper-parameter to adjust the tradeoff between average performance and stability.

The core of our SAL is the covariate weight learning procedure in Equation 3. In our algorithm, the uncertainty set is built to achieve stable performance across heterogeneous environments. Intuitively, $w$ controls the perturbation level of each covariate and formulates an anisotropic uncertainty set, in contrast to conventional WDRL methods. The objective function of $w$ (Equation 3) contains two parts, the average loss in training environments and the maximum margin, and aims at learning a $w$ whose resulting uncertainty set leads to a learner with uniformly good performance across environments. Equation 2 is the objective function of the model's parameters via distributionally robust learning with the learnable covariate weight $w$. During training, the covariate weight $w$ and the model's parameters $\theta$ are optimized iteratively.

Details of the algorithm are delineated below. We will first introduce the optimization of the model's parameters in Section 4.1, and then the transportation cost function learning procedure in Section 4.2. The pseudo-code of the whole Stable Adversarial Learning (SAL) algorithm is shown in Algorithm 1.

Algorithm 1 Stable Adversarial Training
  Input: Multi-environment data $D^{e_1}, D^{e_2}, \dots, D^{e_n}$, where $D^e=(X^e,Y^e)$, $e\in\mathcal{E}$
  Hyperparameters: $T$, $T_\theta$, $T_w$, $m$, $\epsilon_x$, $\epsilon_\theta$, $\epsilon_w$, $\alpha$
  Initialize: $w=[1.0,\dots,1.0]$
  for $i=1$ to $T$ do
     for $j=0$ to $T_\theta-1$ do
        Initialize $\widetilde{X}_0$ as: $\widetilde{X}_0=X$
        for $k=0$ to $m-1$ do
           {Approximate the supremum of $s_\lambda(X)$ for $X^e$ from all $e\in\mathcal{E}$}
           $\widetilde{X}^e_{k+1}=\widetilde{X}^e_k+\epsilon_x\nabla_x\{\ell(\theta;\widetilde{X}^e_k)-\lambda c_w(\widetilde{X}^e_k,\widetilde{X}^e_0)\}$
        end for
        Update $\theta$ as: $\theta^{j+1}=\theta^j-\epsilon_\theta\nabla_\theta\ell(\theta^j;(\widetilde{X}_m,Y))$
     end for
     Calculate $R(\theta)$ as: $R(\theta)=\frac{1}{|\mathcal{E}|}\sum_{e\in\mathcal{E}}\mathcal{L}^e+\alpha\left(\sup\limits_{p,q\in\mathcal{E}}\mathcal{L}^p-\mathcal{L}^q\right)$
     $w^0=w^i$
     for $j=0$ to $T_w-1$ do
        Update $w$ as: $w^{j+1}=w^j-\epsilon_w\nabla_w R(\theta)$
     end for
     Update $w$ as: $w^{i+1}=\mathrm{Proj}_{\mathcal{W}}\left(w^{T_w}\right)$
  end for

4.1 Tractable Optimization

In the SAL algorithm, the model's parameters $\theta$ and the covariate weight $w$ are optimized iteratively. In each iteration, given the current $w$, the objective function for $\theta$ is:

$\min\limits_{\theta\in\Theta}\sup\limits_{Q:W_{c_w}(Q,P_0)\leq\rho}\mathbb{E}_{X,Y\sim Q}[\ell(\theta;X,Y)]$ (5)

The duality result in Lemma 1 shows that the infinite-dimensional optimization problem (5) can be reformulated as a finite-dimensional convex optimization problem [14]. Besides, inspired by [13], a Lagrangian relaxation is adopted for computational efficiency.

Lemma 1.

Let $\mathcal{Z}=\mathcal{X}\times\mathcal{Y}$ and let $\ell:\Theta\times\mathcal{Z}\rightarrow\mathbb{R}$ be continuous. For any distribution $Q$ and any $\rho\geq 0$, let $s_\lambda(\theta;(x,y))=\sup\limits_{\xi\in\mathcal{Z}}(\ell(\theta;\xi)-\lambda c_w(\xi,(x,y)))$ and $\mathcal{P}=\{Q:W_{c_w}(Q,P_0)\leq\rho\}$; we have:

$\sup\limits_{Q\in\mathcal{P}}\mathbb{E}_{Q}[\ell(\theta;x,y)]=\inf\limits_{\lambda\geq 0}\{\lambda\rho+\mathbb{E}_{P_0}[s_\lambda]\}$ (6)

and for any $\lambda\geq 0$, we have:

$\sup\limits_{Q}\{\mathbb{E}_{Q}[\ell(\theta;(x,y))]-\lambda W_{c_w}(Q,P_0)\}=\mathbb{E}_{P_0}[s_\lambda]$ (7)

The original problem (5) can first be reformulated as Equation 6 by duality. However, the infimum with respect to $\lambda$ is also intractable. Therefore, we give up the prescribed amount $\rho$ of robustness in Equation (5) and focus instead on the relaxed Lagrangian penalty function in Equation (7) for efficiency. Notice that only the inner supremum in $\mathbb{E}_{P_0}[s_\lambda(\theta;(x,y))]$ remains in Equation (7), which can be seen as a relaxed Lagrangian penalty version of the original objective (5). Following Lemma 1, we derive the loss function on the empirical distribution $\hat{P}_N$ as:

$\hat{\mathcal{L}}(\theta)=\frac{1}{N}\sum_{i=1}^{N}s_\lambda(\theta;(x_i,y_i))$ (8)

Recalling that $s_\lambda(\theta;(x,y))=\sup\limits_{\xi\in\mathcal{Z}}(\ell(\theta;\xi)-\lambda c_w(\xi,(x,y)))$, we convert the minimization of $\hat{\mathcal{L}}$ over $\theta$ into a minimax procedure as done in [13] to approximate the supremum in $s_\lambda$:

$\min\limits_{\theta}\max\limits_{\widetilde{X}}\mathbb{E}_{\hat{P}_N}\left[\ell(\theta;\widetilde{X},Y)-\lambda c_w((\widetilde{X},Y),(X,Y))\right]$ (9)

Specifically, given the predictor $X$, we adopt gradient ascent to obtain an approximate maximizer $\widetilde{X}$ of $s_\lambda(\theta;(X,Y))$ and optimize the model's parameters $\theta$ using $(\widetilde{X},Y)$. In the following parts, we simply use $\widetilde{X}$ to denote $\{\widetilde{x}\}_N$, the set of maximizers for the training data $\{x\}_N$. The convergence guarantee for this optimization can be found in [13].
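For intuition, the inner gradient-ascent maximization can be sketched in a few lines of Python with PyTorch; this is a minimal illustration assuming a differentiable `model`, a per-sample `loss_fn` and tensor inputs, with names of our own choosing rather than the paper's released code.

```python
import torch

def inner_maximize(model, loss_fn, X, Y, w, lam, eps_x, m):
    """Approximate the maximizer X_tilde of s_lambda(theta; (X, Y)) by m
    gradient-ascent steps on a perturbed copy of X (Equation 9)."""
    X_tilde = X.clone().requires_grad_(True)
    for _ in range(m):
        cost = ((w * (X_tilde - X)) ** 2).sum(dim=1)   # c_w from Equation 4
        obj = (loss_fn(model(X_tilde), Y) - lam * cost).sum()
        grad, = torch.autograd.grad(obj, X_tilde)
        X_tilde = (X_tilde + eps_x * grad).detach().requires_grad_(True)
    return X_tilde.detach()
```

The outer minimization then takes standard gradient steps on $\theta$ using $(\widetilde{X},Y)$.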

4.2 Learning the transportation cost $c_w$

We introduce the learning procedure for the transportation cost function $c_w$ in this section. In supervised scenarios, perturbations are typically only added to the predictor $X$ and not to the target $Y$. Therefore, we simplify $c_w:\mathcal{Z}\times\mathcal{Z}\rightarrow[0,+\infty)$ ($\mathcal{Z}=\mathcal{X}\times\mathcal{Y}$) to be:

$c_w(z_1,z_2)=c_w(x_1,x_2)+\infty\times\mathbb{I}(y_1\neq y_2)$ (10)
$\phantom{c_w(z_1,z_2)}=\|w\odot(x_1-x_2)\|_2^2+\infty\times\mathbb{I}(y_1\neq y_2)$ (11)

and omit the '$y$-part' in $c_w$ as well as in $w$, i.e. $w\in[1,+\infty)^m$ in the following parts. Intuitively, $w$ controls the strength of the adversary on each covariate: the higher the weight, the weaker the perturbation put on the corresponding covariate. Ideally, we hope the weights on stable covariates are extremely high, protecting them from being perturbed and maintaining the stable correlations, while the weights on unstable covariates stay near $1$ to encourage perturbations that break the harmful spurious correlations. With the goal of uniformly good performance across environments, we come up with the objective function $R(\theta(w))$ for learning $w$:

$R(\theta(w))=\frac{1}{|\mathcal{E}_{tr}|}\sum_{e\in\mathcal{E}_{tr}}\mathcal{L}^{e}(\theta(w))+\alpha\max\limits_{e_p,e_q\in\mathcal{E}_{tr}}\left(\mathcal{L}^{e_p}-\mathcal{L}^{e_q}\right)$ (12)

where $\alpha$ is a hyper-parameter. $R(\theta(w))$ contains two parts: the first is the average loss over the training environments; the second is the maximum margin among environments, which measures the stability of $\theta(w)$, since it is easy to prove that $\max\limits_{e_p,e_q\in\mathcal{E}_{tr}}\mathcal{L}^{e_p}(\theta(w))-\mathcal{L}^{e_q}(\theta(w))=0$ if and only if the errors in all training environments are the same. Here $\alpha$ adjusts the tradeoff between average performance and stability.
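Equation 12 is straightforward to compute from the per-environment losses; the sketch below (with a hypothetical helper name) makes the two parts explicit.

```python
import numpy as np

def stability_objective(env_losses, alpha):
    """R(theta(w)) from Equation 12: average environment loss plus the
    alpha-weighted maximum margin across training environments."""
    L = np.asarray(env_losses, dtype=float)
    return L.mean() + alpha * (L.max() - L.min())

# two environments with unequal losses incur a margin penalty:
print(stability_objective([0.45, 0.60], alpha=2.0))  # 0.525 + 2.0 * 0.15
```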

Given the current $\theta^t$, we can update $w$ as:

$w^{t+1}=\mathrm{Proj}_{\mathcal{W}}\left(w^{t}-\epsilon_w\frac{\partial R(\theta^{t})}{\partial w}\right)$ (13)

where $\mathrm{Proj}_{\mathcal{W}}$ means projecting onto the space $\mathcal{W}$. The remaining work is to calculate the gradient $\partial R(\theta(w))/\partial w$, which we introduce in detail in Section 4.2.1.
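The paper does not spell out $\mathrm{Proj}_{\mathcal{W}}$; one simple realization, under the assumption that the Euclidean projection is intended, is to clip the weights below at 1 and pin the smallest coordinate to 1 when every entry exceeds it:

```python
import numpy as np

def project_onto_W(w):
    """Euclidean projection onto W = {w in [1, inf)^m : min(w) = 1}
    (an assumed realization of Proj_W, not spelled out in the paper)."""
    w = np.maximum(np.asarray(w, dtype=float), 1.0)  # enforce w >= 1
    if w.min() > 1.0:
        w[np.argmin(w)] = 1.0                        # enforce min(w) = 1
    return w
```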

4.2.1 Calculation of $\partial R(\theta(w))/\partial w$

In order to optimize $w$, $\partial R(\theta(w))/\partial w$ can be approximated as follows:

$\frac{\partial R(\theta(w))}{\partial w}=\frac{\partial R}{\partial\theta}\frac{\partial\theta}{\partial\widetilde{X}}\frac{\partial\widetilde{X}}{\partial w}$ (14)

Note that the first term $\partial R/\partial\theta$ can be calculated easily. The second term $\partial\theta/\partial\widetilde{X}$ can be approximated during the gradient descent process of $\theta$ as:

$\theta^{t+1}=\theta^{t}-\epsilon_\theta\nabla_\theta\hat{\mathcal{L}}(\theta^{t};\widetilde{X},Y)$ (15)
$\frac{\partial\theta^{t+1}}{\partial\widetilde{X}}=\frac{\partial\theta^{t}}{\partial\widetilde{X}}-\epsilon_\theta\frac{\partial\nabla_\theta\hat{\mathcal{L}}(\theta^{t};\widetilde{X},Y)}{\partial\widetilde{X}}$ (16)
$\frac{\partial\theta}{\partial\widetilde{X}}\approx-\epsilon_\theta\sum_t\frac{\partial\nabla_\theta\hat{\mathcal{L}}(\theta^{t};\widetilde{X},Y)}{\partial\widetilde{X}}$ (17)

where $\frac{\partial\nabla_\theta\hat{\mathcal{L}}(\theta^{t};\widetilde{X},Y)}{\partial\widetilde{X}}$ can be calculated during the training process. The third term $\partial\widetilde{X}/\partial w$ can be approximated during the adversarial learning process of $\widetilde{X}$ as:

$\widetilde{X}^{t+1}=\widetilde{X}^{t}+\epsilon_x\nabla_{\widetilde{X}^{t}}\left\{\ell(\theta;\widetilde{X}^{t},Y)-\lambda c_w(\widetilde{X}^{t},X)\right\}$ (18)
$\frac{\partial\widetilde{X}^{t+1}}{\partial w}=\frac{\partial\widetilde{X}^{t}}{\partial w}-2\epsilon_x\lambda\,\mathrm{Diag}\left(\widetilde{X}^{t}-X\right)$ (19)
$\frac{\partial\widetilde{X}}{\partial w}\approx-2\epsilon_x\lambda\sum_t\mathrm{Diag}(\widetilde{X}^{t}-X)$ (20)

which can be accumulated during the adversarial training process.
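As a sketch, the accumulation in Equation 20 amounts to summing the per-coordinate displacements over the ascent iterates; the helper below (our naming) illustrates this under the paper's diagonal approximation.

```python
import numpy as np

def accumulate_dXtilde_dw(X_steps, X0, eps_x, lam):
    """Diagonal approximation of dX_tilde/dw from Equation 20: accumulate
    -2 * eps_x * lam * (X_tilde^t - X) over the ascent iterates."""
    acc = np.zeros_like(X0)
    for X_t in X_steps:               # iterates produced by Equation 18
        acc += X_t - X0
    return -2.0 * eps_x * lam * acc   # one entry per sample and coordinate
```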

4.2.2 Approximation precision

We approximate $\partial\theta/\partial\widetilde{X}$ and $\partial\widetilde{X}/\partial w$ during the gradient descent and ascent processes, using the average gradient as the approximate value. To quantify the precision of this approximation, we tested its reliability empirically. Since the negative gradient is the direction in which the function decreases fastest, we compare the $\Delta R$ obtained by updating along our approximate $\partial R/\partial w$ with the $\Delta R$ obtained along randomly selected directions with the same step size. Note that the $\Delta R$ brought by the exact gradient is the largest among all directions; therefore, the higher the probability that our $\Delta R$ exceeds that of a randomly picked direction, the more accurate our approximation is. We performed 1000 random runs, and the approximation of our SAL outperforms 99.4% of them, which validates its high precision.

5 Theoretical Analysis

Here we first provide the robustness guarantee for our method; then we analyze the rationality of our uncertainty set, which also demonstrates that the uncertainty set built in our SAL is more practical; and we finally derive generalization bounds for our method.

5.1 Robustness Guarantee

Recall that the original objective of this work is to optimize the worst-case error over a distribution set, given as $\min_{\theta\in\Theta}\sup_{Q:W_{c_w}(Q,P_0)\leq\rho}\mathbb{E}_Q[\ell(\theta)]$. However, for the tractable optimization in Section 4.1, we had to give up the prescribed amount $\rho$ of distributional robustness and focus on the relaxed Lagrangian penalty function:

$\sup\limits_{Q}\left\{\mathbb{E}_{Q}[\ell(\theta;(x,y))]-\lambda W_{c_w}(Q,P_0)\right\}$ (21)

Note that in Equation 21 we do not impose any constraint (e.g., lying within a Wasserstein ball) on the $Q$ over which we optimize. A natural question is then: can the relaxed Lagrangian reformulation, which we actually optimize, still provide some kind of robustness guarantee, or is it just an approximation? In this subsection, we derive the robustness guarantee for the relaxed Lagrangian reformulation to answer this question.

Theorem 1 (Robustness Guarantee for Relaxed Lagrangian Reformulation).

For fixed $\lambda\geq 0$, define the transportation map $T_\lambda(\theta;z_0)=\arg\max_{\xi\in\mathcal{Z}}\ell(\theta;\xi)-\lambda c_w(\xi,z_0)$, and let the empirical maximizer of the Lagrangian reformulation (Equation 21) be:

$P_n^*=\arg\max\limits_{Q}\left\{\mathbb{E}_{Q}[\ell(\theta;(x,y))]-\lambda W_{c_w}(Q,\hat{P}_n)\right\}$ (22)

Denoting the Wasserstein distance between the worst-case distribution $P_n^*$ and the training distribution $\hat{P}_n$ as $\hat{\rho}_n=W_{c_w}(P_n^*,\hat{P}_n)$, we have:

$\sup\limits_{P:W_{c_w}(P,\hat{P}_n)\leq\hat{\rho}_n}\mathbb{E}_{P}[\ell(\theta;Z)]=\mathbb{E}_{\hat{P}_n}[s_\lambda(\theta;Z)]+\lambda\hat{\rho}_n=\mathbb{E}_{P_n^*}[\ell(\theta;Z)]+\lambda\hat{\rho}_n$ (23)
Proof.

By choosing $\hat{\rho}_n$ as $\rho$ in Lemma 1, the result follows easily from strong duality. ∎

Theorem 1 justifies that our relaxed Lagrangian reformulation exactly guarantees distributional robustness inside a $\hat{\rho}_n$-radius ball: given $\lambda$, our algorithm finds a distribution $P_n^*$ whose distance from the original $\hat{P}_n$ is $\hat{\rho}_n$, and the learned $P_n^*$ is exactly the worst-case distribution in the $\hat{\rho}_n$-radius ball centered at $\hat{P}_n$. The only difference from the direct optimization is that we cannot guarantee robustness for a pre-given quantity $\rho$; instead, the Lagrangian parameter $\lambda$ serves as a qualitative factor controlling how much robustness to enforce.

5.2 Compactness of the Adversarial Set

We then analyze the rationality of our method in Theorem 2, where our major theoretical contribution lies. As far as we know, it is the first analysis of the compactness of adversary sets in the WDRL literature.

Assumption 2.

Given $\rho>0$, $\exists Q_0\in\mathcal{P}_0$ that satisfies:

(1) $\forall\epsilon>0$, $\left|\inf\limits_{M\in\Pi(P_0,Q_0)}\mathbb{E}_{(z_1,z_2)\sim M}\left[c(z_1,z_2)\right]\right|\leq\epsilon$; we refer to the coupling minimizing the expectation as $M_0$.

(2) $\mathbb{E}_{M\in\Pi(P_0,Q_0)-M_0}\left[c(z_1,z_2)\right]\geq\rho$, where $\Pi(P_0,Q_0)-M_0$ means excluding $M_0$ from $\Pi(P_0,Q_0)$.

(3) $Q_{0\#S}\neq P_{0\#S}$, where $S=\{i:w^{(i)}>1\}$, $w^{(i)}$ denotes the $i$th element of $w$, and $P_{\#S}$ denotes the marginal distribution on the dimensions in $S$.

Assumption 3.

Given $\rho\geq 0$ and $c_w$, there exists a distribution $V$ supported on $\mathcal{Z}_{\#U}$ such that

$W_{c_w}(V,P_{0\#U})=\rho$ (24)

Assumption 2 describes a boundary property of the original uncertainty set $\mathcal{P}_0=\{Q:W_c(Q,P_0)\leq\rho\}$: it assumes there exists at least one distribution on the boundary whose marginal distribution on $S$ differs from that of the center distribution $P_0$, which is easily satisfied. Assumption 3 assumes that there exists at least one marginal distribution $V$ whose distance from the original marginal distribution is $\rho$, and it is likewise easily satisfied. Based on these assumptions, we come up with the following theorem.

Theorem 2 (Compactness).

Under Assumption 2, assume the transportation cost function in the Wasserstein distance takes the form $c(x_1,x_2)=\|x_1-x_2\|_1$ or $c(x_1,x_2)=\|x_1-x_2\|_2^2$. Then, given an observed distribution $P_0$ supported on $\mathcal{Z}$ and $\rho\geq 0$, for the adversary set $\mathcal{P}=\{Q:W_{c_w}(Q,P_0)\leq\rho\}$ and the original $\mathcal{P}_0=\{Q:W_c(Q,P_0)\leq\rho\}$, given $c_w$ where $\min(w^{(1)},\dots,w^{(m)})=1$ and $\max(w^{(1)},\dots,w^{(m)})>1$, we have $\mathcal{P}\subset\mathcal{P}_0$. Furthermore, under Assumption 3, for the set $U=\{i\,|\,w^{(i)}=1\}$, $\exists Q_0\in\mathcal{P}$ that satisfies $W_{c_w}(P_{0\#U},Q_{0\#U})=\rho$.

Theorem 2 proves that the uncertainty set constructed by our method is smaller than the original one. Intuitively, in the adversarial learning paradigm, if stable covariates are perturbed, the target should also change correspondingly to maintain the underlying relationship. However, in practice we have no access to the target values corresponding to the perturbed stable covariates, so optimizing under an isotropic uncertainty set (e.g. $\mathcal{P}_0$) which contains perturbations on both stable and unstable covariates would generally lower the confidence of the learner and produce meaningless results. Therefore, from this point of view, by putting high weights on stable covariates in the cost function, we may construct a more reasonable and practical uncertainty set in which such ineffective perturbations are avoided.

Further, we theoretically analyze the properties of the learned covariate weights $w$ in linear regression, including the optimal point of Equation 3 and the reason why our method can to some extent mitigate the low confidence problem compared with the original WDRL. To begin with, we make further assumptions on the given multi-environment data.

Assumption 4 (Data Heterogeneity).

Under Assumption 1, we further assume that $\exists\delta_S\geq 0,\delta_V>0$, such that:
(1) $\forall e\in\mathcal{E}$, $|\min_\theta\mathcal{L}^e(\theta)-\min_{\theta_S}\mathcal{L}^e(\theta_S)|\leq\delta_S$;
(2) $\forall$ linear models $f_\theta(X)=\theta_S^T S+\theta_V^T V$ with $\theta_V>0$, $\exists e_i,e_j\in\mathcal{E}_{tr}$ such that $\mathcal{L}^{e_i}(\theta)-\mathcal{L}^{e_j}(\theta)>\delta_V$, where $\theta_S$ denotes the linear parameters on stable covariates and $\theta_V$ those on unstable covariates.

Assumption 4 states that (1) the prediction performance using all covariates and using only stable covariates will not differ much, and (2) using unstable features for prediction will hurt the model's stability across different environments, since $\mathbb{E}^e[Y|V]$ may change greatly.

Theorem 3 (Optimal $\theta^*(w^*)$).

Under Assumption 4, for $\alpha>\frac{\delta_S}{\delta_V}$, the optimal point $\theta^*(w^*)$ of Equation 3 satisfies $\theta_V^*=0$ and $w_V^*=1$. Further, choosing $c(z_1,z_2)=\|z_1-z_2\|_2$, with $\rho\rightarrow\infty$, $\rho^2/w_S^*\rightarrow 0$ and $w_V^*=1$, the minimizer $\theta'$ of Equation 2 will approach $\theta^*$.

Proof.

It is easy to prove that the parameters of unstable features in $\theta^*$ are 0 under Assumption 4. We move on to the property of $w^*$. For $c_w=\|w\odot(z_1-z_2)\|_2$, Equation 2 can be reformulated (following [14]) as

$\theta'=\arg\min\limits_{\theta}\frac{1}{N}\sum_{i=1}^{N}\ell_i+\rho\sqrt{(-\theta,1)^T\mathrm{Diag}^{-1}(w)(-\theta,1)}$ (25)

Then with $\rho\rightarrow\infty$, $\rho^2/w_S^*\rightarrow 0$ and $\rho^2 w_V^*\rightarrow\infty$, it is easy to prove that $\theta'_V\rightarrow 0$ and $\theta'_S=\arg\min_{\theta_S}\mathcal{L}$. ∎

In Theorem 3, we analyze the properties of the optimal points of our method, which verifies that the learned covariate weights will greatly restrict the perturbations on stable features ($w_S^*\rightarrow\infty$) to mitigate the over-pessimism problem. Although the scenario is simple, it also suggests why the original WDRL faces the low confidence problem. From the reformulation in Equation 25, we see that WDRL regularizes the predictor with $\|(-\theta,1)\|_2$ (by letting $w=1$), and the strength of regularization is controlled by the radius $\rho$ of the ball. As $\rho$ grows to contain more potential testing distributions, WDRL puts much more penalty on the parameters of both stable and unstable features, which shrinks both $\theta_S$ and $\theta_V$ until they are both 0, making the model refuse to make predictions and only output 0; this is the origin of the low confidence or over-pessimism. In our proposed method, by contrast, the learned covariate weights $w$ prevent the parameters $\theta_S$ of stable features from being affected, and such desired weights can be learned via Equation 3, as shown in Theorem 3.
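A quick numerical illustration of the regularizer in Equation 25 (helper name ours): with isotropic weights the penalty charges stable and unstable coefficients alike, while a large $w_S$ shields the stable coefficient, which is exactly the mechanism Theorem 3 exploits.

```python
import numpy as np

def wdrl_penalty(theta, w, rho):
    """rho * sqrt((-theta, 1)^T Diag(w)^{-1} (-theta, 1)) from Equation 25."""
    v = np.append(-np.asarray(theta, dtype=float), 1.0)
    return rho * np.sqrt((v ** 2 / np.asarray(w, dtype=float)).sum())

theta = np.array([1.0, 0.5])                                # [theta_S, theta_V]
print(wdrl_penalty(theta, w=[1.0, 1.0, 1.0], rho=10.0))     # isotropic WDRL: 15.0
print(wdrl_penalty(theta, w=[100.0, 1.0, 1.0], rho=10.0))   # large w_S: ~11.2
```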

5.3 Generalization Bounds

First, we provide the generalization bound in Theorem 4 with the help of Lemma 1 and Rademacher complexity [41].

Theorem 4 (Generalization Bounds).

Let $\Theta=\mathbb{R}^m$, $x\in\mathcal{X}$, $y\in\mathcal{Y}$. Assume $|\ell(\theta;z)|$ is bounded by $T_\ell\geq 0$ for all $\theta\in\Theta$, $z=(x,y)\in\mathcal{X}\times\mathcal{Y}$. Let $F:\mathcal{X}\rightarrow\mathcal{Y}$ be a class of prediction functions. Then for $\theta\in\Theta$, $\rho\geq 0$, $\lambda\geq 0$, with probability at least $1-\delta$, for $P\in\{P:W_{c_w}(P,P_0)\leq\rho\}$, we have:

$\sup\limits_{P}\mathbb{E}_{P}\left[\ell(\theta;Z)\right]\leq\lambda\rho+\mathbb{E}_{\hat{P}_n}\left[s_\lambda(\theta;Z)\right]+\mathcal{R}_n(\widetilde{\ell}\circ F)+kT_\ell\sqrt{\ln(1/\delta)/n}$ (26)

where $\widetilde{\ell}\circ F=\{(x,y)\mapsto\ell(f(x),y)-\ell(0,y):f\in F\}$, $\mathcal{R}_n$ denotes the Rademacher complexity [41], and $k$ is a numerical constant no less than 0.

Proof.

From Lemma 1, for all $\lambda\geq 0$, $\rho\geq 0$, we have

$\sup\limits_{P:W_{c_w}(P,P_0)\leq\rho}\mathbb{E}_{P}[\ell(\theta;X,Y)]\leq\lambda\rho+\mathbb{E}_{P_0}[s_\lambda(\theta;X,Y)]$ (27)

Applying the standard results on Rademacher complexity [41], with probability at least $1-\delta$, we have:

$\mathbb{E}_{P_0}[s_\lambda]\leq\mathbb{E}_{\hat{P}_n}[s_\lambda]+\mathcal{R}_n(\widetilde{\ell}\circ F)+kT_\ell\sqrt{\frac{\ln(1/\delta)}{n}}$ (28)

then combining with Equation 27, the result follows. ∎

Since the Rademacher complexity $\mathcal{R}_n$ also requires an expectation over the sample distribution, we further derive a bound on the Rademacher complexity appearing in Theorem 4 that depends only on empirical data points. We introduce the definitions of the $\epsilon$-cover and the $\epsilon$-covering number as follows, which can be used to measure the size of continuous function sets.

Definition 2 ($\epsilon$-cover).

$\mathcal{C}\subset\mathcal{U}$ is an $\epsilon$-cover of a functional class $\mathcal{G}\subset\mathcal{U}$ if and only if for all $g\in\mathcal{G}$, there exists some $h\in\mathcal{C}$ such that $d_n(g,h)\leq\epsilon$, where $d_n(\cdot,\cdot)$ is a function distance metric defined with respect to a tuple of data points $(z_1,\dots,z_n)\in\mathbb{R}^d$ as:

$d_n(g,h)=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(g(z_i)-h(z_i))^2}$ (29)
Definition 3 ($\epsilon$-covering number).

The $\epsilon$-covering number of a function class $\mathcal{G}$ is defined as:

$N(\mathcal{G},\epsilon,d_n(\cdot,\cdot))=\inf\{|\mathcal{C}|:\mathcal{C}\ \text{is an}\ \epsilon\text{-cover of}\ \mathcal{G}\}$ (30)

where $d_n(\cdot,\cdot)$ denotes the function distance metric of Equation 29.

Then we derive the bound of the Rademacher complexity $\mathcal{R}_n$ with respect to the $\epsilon$-covering number.

Theorem 5 ($\hat{\mathcal{R}}_n$).

For the Rademacher complexity in Theorem 4: for a function set $\mathcal{G}$, assume that every $g\in\mathcal{G}$, $g:\mathcal{Z}\rightarrow\mathbb{R}$, is bounded by $T_\ell\geq 0$; then with probability at least $1-\delta$, we have:

$\mathcal{R}_n(\mathcal{G})\leq\hat{\mathcal{R}}_n(\mathcal{G})+2T_\ell\sqrt{\log(1/\delta)/2n}$ (31)
Proof.

This is easy to prove with the bounded difference inequality. ∎

Finally, we derive the bound for $\hat{\mathcal{R}}_n$ with the $\epsilon$-covering number.

Theorem 6 (Bound of $\hat{\mathcal{R}}_n$).

For a function class $\mathcal{G}$ containing functions $G:\mathcal{Z}\rightarrow\mathbb{R}$, we have:

$\hat{\mathcal{R}}_n(\mathcal{G})\leq\inf\limits_{\epsilon\geq 0}\left\{4\epsilon+12\int_{\epsilon}^{\sup_{G\in\mathcal{G}}\sqrt{\hat{\mathbb{E}}[G^2]}}\sqrt{\log N(\mathcal{G},\tau,d_n(\cdot,\cdot))/n}\,d\tau\right\}$ (32)

Specifically, assuming that every $G\in\mathcal{G}$, $G:\mathcal{Z}\rightarrow\mathbb{R}$, satisfies $|G|\leq T_\ell$ for some $T_\ell\geq 0$, we have:

$\hat{\mathcal{R}}_n(\mathcal{G})\leq\inf\limits_{\epsilon\geq 0}\left\{4\epsilon+12\int_{\epsilon}^{T_\ell}\sqrt{\frac{\log N(\mathcal{G},\tau,d_n(\cdot,\cdot))}{n}}\,d\tau\right\}$ (33)
Proof.

Let $\tau_0=\sup_{G\in\mathcal{G}}\sqrt{\hat{\mathbb{E}}[G^2]}$ and for any $j\in\mathbb{Z}_+$ let $\tau_j=2^{-j}\tau_0$. For each $j$, let $\mathcal{C}_j$ be a $\tau_j$-cover of $\mathcal{G}$ with respect to $d_n(\cdot,\cdot)$. For each $G\in\mathcal{G}$ and $j$, pick $\hat{G}_j\in\mathcal{C}_j$ such that $\hat{G}_j$ is a $\tau_j$-approximation of $G$. Then for $N\in\mathbb{Z}_+$, $G$ can be expressed as $G=G-\hat{G}_N+\sum_{j=1}^{N}(\hat{G}_j-\hat{G}_{j-1})$, where $\hat{G}_0=0$. Then for any $N$, we have:

$\hat{\mathcal{R}}_n(\mathcal{G})=\frac{1}{n}\mathbb{E}_\sigma\left[\sup_{G\in\mathcal{G}}\sum_{i=1}^{n}\sigma_i\left(G(x_i)-\hat{G}_N(x_i)+\sum_{j=1}^{N}(\hat{G}_j(x_i)-\hat{G}_{j-1}(x_i))\right)\right]$ (34)
$\leq\tau_N+\sum_{j=1}^{N}\frac{1}{n}\mathbb{E}_\sigma\left[\sup_{G\in\mathcal{G}}\sum_{i=1}^{n}\sigma_i(\hat{G}_j(x_i)-\hat{G}_{j-1}(x_i))\right]$ (35)
Note that $d_n(\hat{G}_j-\hat{G}_{j-1})^2=d_n(\hat{G}_j-G+G-\hat{G}_{j-1})^2\leq(\tau_j+\tau_{j-1})^2=9\tau_j^2$ (36, 37)

Then, applying Massart's finite class lemma to the function classes $\{f-f^{\prime}:f\in\mathcal{C}_j,f^{\prime}\in\mathcal{C}_{j-1}\}$ (for each $j$), we have for any $N$ that

$\hat{\mathcal{R}}_n(\mathcal{G})\leq\tau_N+12\int_{\tau_{N+1}}^{\tau_0}\sqrt{\frac{\log N(\mathcal{G},\tau,d_n(\cdot,\cdot))}{n}}\,d\tau$ (38)

Then for any $\epsilon$, choose $N=\sup\{j:\tau_j>2\epsilon\}$. We have $\tau_N\leq 4\epsilon$ and

$\hat{\mathcal{R}}_n(\mathcal{G})\leq 4\epsilon+12\int_{\epsilon}^{\sup_{G\in\mathcal{G}}\sqrt{\hat{\mathbb{E}}[G^2]}}\sqrt{\frac{\log N(\mathcal{G},\tau,d_n(\cdot,\cdot))}{n}}\,d\tau$ (39)

Since $\epsilon$ is arbitrarily chosen, we take an infimum over $\epsilon$. ∎

Remark 1.

Combining Theorems 4, 5 and 6, we obtain the final bound:

$\sup\limits_{P}\mathbb{E}_{P}[\ell(\theta;Z)]\leq\lambda\rho+\mathbb{E}_{\hat{P}}[s_\lambda(\theta;Z)]+kT_\ell\sqrt{\frac{\log(1/\delta)}{n}}+\inf\limits_{\epsilon\geq 0}\left\{4\epsilon+12\int_{\epsilon}^{\sup_{G\in\mathcal{G}}\sqrt{\hat{\mathbb{E}}[G^2]}}\sqrt{\frac{\log N(\mathcal{G},\tau,d_n(\cdot,\cdot))}{n}}\,d\tau\right\}$ (40)

6 Experiments

In this section, we validate the effectiveness of our method on simulation data and real-world data.

Baselines    We compare our proposed SAL with the following methods.

  • Empirical Risk Minimization (ERM):

    $\min\limits_{\theta}\mathbb{E}_{P_0}\left[\ell(\theta;X,Y)\right]$ (41)
  • Wasserstein Distributionally Robust Learning (WDRL):

    $\min\limits_{\theta}\sup\limits_{Q:W(Q,P_0)\leq\rho}\mathbb{E}_{Q}\left[\ell(\theta;X,Y)\right]$ (42)
  • Invariant Risk Minimization (IRM [31]):

    $\min\limits_{\theta}\sum_{e\in\mathcal{E}}\mathcal{L}^e+\lambda\|\nabla_{w|w=1.0}\mathcal{L}^e(w\cdot\theta)\|^2$ (43)

For completeness, we also compare with LASSO [42] and Ridge regression [43].

For ERM and WDRL, we simply pool the data from the multiple environments for training. For fairness, we search the hyper-parameter $\lambda$ for IRM in $\{0.01, 0.1, 1e0, 1e1, \dots, 1e4\}$ and the hyper-parameter $\rho$ for WDRL in $\{1, 5, 10, 20, 50, 80, 100\}$. We search the hyper-parameter $\lambda$ for LASSO and Ridge in $\{1e{-}3, 1e{-}2, \dots, 1e1\}$. The best hyper-parameters are selected according to the validation set, which is sampled i.i.d. from the training environments.

Kinds of Distributional Shifts  To demonstrate the superiority of our method, we design two typical kinds of distributional shifts, namely selection bias [32, 33] and anti-causal effect [31]. In our simulation data, we introduce strong distributional shifts, where the spurious correlations vary greatly between training and testing data.

Evaluation Metrics  We use $\mathrm{Mean\_Error}=\frac{1}{|\mathcal{E}_{te}|}\sum_{e\in\mathcal{E}_{te}}\mathcal{L}^{e}$ and $\mathrm{Std\_Error}=\sqrt{\frac{1}{|\mathcal{E}_{te}|-1}\sum_{e\in\mathcal{E}_{te}}\left(\mathcal{L}^{e}-\mathrm{Mean\_Error}\right)^{2}}$, which are the mean and standard deviation of the errors across testing environments $e\in\mathcal{E}_{te}$.
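Both metrics are direct to compute from per-environment losses; a minimal sketch (helper name ours):

```python
import numpy as np

def mean_std_error(env_losses):
    """Mean_Error and Std_Error across test environments (sample std, ddof=1)."""
    L = np.asarray(env_losses, dtype=float)
    return L.mean(), L.std(ddof=1)
```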

Imbalanced Mixture  In our experiments, we perform non-uniform sampling among the different environments in the training set, following the natural phenomenon that empirical data follow a power-law distribution: it is widely accepted that only a few environments/subgroups are common and the rest, the majority, are rare [44, 17, 45].

6.1 Simulation Data

Firstly, we design a toy example to demonstrate the over-pessimism problem of conventional WDRL. Then, we design two mechanisms to simulate the varying correlations between unstable covariates and the target across environments, named selection bias and anti-causal effect.

6.1.1 Toy Example

Figure 2: Results of the toy example. (a) Testing performance for each environment; (b) testing performance with respect to the radius; (c) the learned coefficients of $S$ and $V$ w.r.t. the radius. The left figure shows the testing performance in different environments under a fixed radius, where $\mathrm{RMSE}$ is the root mean square error of the prediction. The middle and right figures show the prediction error and the learned coefficients of WDRL and SAL w.r.t. the radius.

Figure 3: Visualization of the toy example. We plot the observed data points as well as the learned worst-case distributions of WDRL, our SAL and the ideal case. (a) The stable covariate $S$ and the unstable covariate $V$; (b) $S$ and the target $Y$; (c) $V$ and $Y$.

In this setting, the goal is to predict $y\in\mathbb{R}$ from $x\in\mathbb{R}^d$, and we use $\ell(\theta;(x,y))=|y-\theta^Tx|$ as the loss function. We take $d=2$ and generate $X=[S,V]^T$, where $S\stackrel{iid}{\sim}\mathcal{N}(0,0.5)$. We then generate $Y$ and $V$ as follows:

$Y=5*S+S^2+\epsilon_1,\quad V=\alpha Y+\epsilon_2$ (44)

where $\epsilon_1\stackrel{iid}{\sim}\mathcal{N}(0,0.1)$ and $\epsilon_2\stackrel{iid}{\sim}\mathcal{N}(0,1.0)$. In this experiment, the effect of $S$ on $Y$ stays invariant, but the correlation between $V$ and $Y$, i.e. the parameter $\alpha$, varies across environments. In training, we generate 180 data points with $\alpha=1$ for environment 1 and 20 data points with $\alpha=-0.1$ for environment 2. We compare methods for linear regression across testing environments with $\alpha\in\{-2.0,-1.5,\dots,1.5,2.0\}$.
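For reference, the toy data can be generated as below; we read $\mathcal{N}(0,c)$ as a normal with variance $c$, which is an assumption about the paper's notation.

```python
import numpy as np

def toy_environment(n, alpha, rng):
    """One environment of the toy example (Equation 44)."""
    S = rng.normal(0.0, np.sqrt(0.5), size=n)  # assuming N(0, 0.5) means variance 0.5
    Y = 5.0 * S + S ** 2 + rng.normal(0.0, np.sqrt(0.1), size=n)
    V = alpha * Y + rng.normal(0.0, 1.0, size=n)
    return np.stack([S, V], axis=1), Y

rng = np.random.default_rng(0)
X1, Y1 = toy_environment(180, alpha=1.0, rng=rng)   # training environment 1
X2, Y2 = toy_environment(20, alpha=-0.1, rng=rng)   # training environment 2
```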

We first set the radius for WDRL and SAL to 20.0; the results are shown in Figure 2(a). We find that ERM induces a high estimation error, as it puts a high regression coefficient on $V$; therefore, it performs poorly in terms of prediction error under distributional shifts. While WDRL achieves more robust performance than ERM across environments, its prediction error is much higher than the others'. Our method SAL achieves not only the smallest prediction error but also the most robust performance across environments.

Furthermore, we train SAL and WDRL for linear regression with a varying radius $\rho\in\{0.0,0.01,\dots,20.0\}$. From the results shown in Figure 2(b), we can see that, as the radius grows larger, the robustness of WDRL becomes better, but meanwhile its performance remains poor in terms of a high $\mathrm{Mean\_Error}$, much worse than ERM ($\rho=0$). This further verifies the limitation of WDRL with respect to its overwhelmingly large adversary distribution set. In contrast, SAL achieves not only better prediction performance but also better robustness across environments. The plausible reason for the performance difference between WDRL and SAL can be explained by Figure 2(c). As the radius $\rho$ grows larger, WDRL tends to conservatively estimate small coefficients for both $S$ and $V$ so that the model can produce robust predictions over the overwhelmingly large uncertainty set. Comparatively, as our SAL provides a mechanism to differentiate covariates and focus the robustness optimization on unstable ones, the learned coefficient of the unstable covariate $V$ is gradually decreased to improve robustness, while the coefficient of the stable covariate $S$ does not change much, guaranteeing high prediction accuracy.

To better demonstrate the superiority of our proposed SAL, we further visualize the learned worst-case distributions of WDRL and our SAL compared with the observed data points in Figure 3. From Figure 3(a), we can see that WDRL (green points) perturbs the observed data greatly along both the stable and unstable directions, while the learned perturbations of our SAL (red points) mainly focus on the unstable direction, which is similar to the ideal case. To better understand why the original distribution set of WDRL is undesirable, we draw Figure 3(b), which shows the relationship between the stable covariate $S$ and the target $Y$: WDRL (green points) greatly distorts this stable relationship, while the proposed SAL does not hurt it much, analogous to the ideal case. From Figure 3(c), we can see that our proposed SAL greatly perturbs the relationship between the unstable feature $V$ and the target $Y$.

6.1.2 Selection Bias

In this setting, the correlations between unstable covariates and the target are perturbed through a selection bias mechanism. We assume $X=[S,V]^T\in\mathbb{R}^p$, where $S=[S_1,S_2,\dots,S_{n_s}]^T\in\mathbb{R}^{n_s}$ is independent of $V=[V_1,V_2,\dots,V_{n_v}]\in\mathbb{R}^{n_v}$, while the covariates in $S$ are dependent on each other. According to Assumption 1, we assume $Y=f(S)+\epsilon$ and that $P(Y|S)$ remains invariant across environments while $P(Y|V)$ can change arbitrarily.

Therefore, we generate training data points with the help of auxiliary variables $Z\in\mathbb{R}^d$ as follows:

$Z_1,\dots,Z_d\stackrel{iid}{\sim}\mathcal{N}(0,1.0),\quad V_1,\dots,V_{n_v}\stackrel{iid}{\sim}\mathcal{N}(0,1.0)$ (45)
$S_i=0.8*Z_i+0.2*Z_{i+1}\quad\text{for}\ i=1,\dots,n_s$ (46)

To induce model misspecification, we generate $Y$ as:

$Y=f(S)+\epsilon=\theta_s*S^T+\beta*S_1S_2S_3+\epsilon$ (47)

where $\theta_s=[\frac{1}{3},-\frac{2}{3},1,-\frac{1}{3},\frac{2}{3},-1,\dots]\in\mathbb{R}^{n_s}$ and $\epsilon\sim\mathcal{N}(0,0.3)$. As we assume that $P(Y|S)$ remains unchanged while $P(Y|V)$ can vary across environments, we design a data selection mechanism to induce this kind of distributional shift. For simplicity, we select data points according to a certain variable set $V_b\subset V$:

$\hat{P}=\Pi_{v_i\in V_b}|r|^{-5*|f(s)-sign(r)*v_i|},\quad\mu\sim\mathrm{Uni}(0,1)$ (48)
$M(r;(x,y))=\begin{cases}1,&\mu\leq\hat{P}\\ 0,&\text{otherwise}\end{cases}$ (49)

where $|r|>1$ and $V_b\in\mathbb{R}^{n_b}$. Given a certain $r$, a data point $(x,y)$ is selected if and only if $M(r;(x,y))=1$. Intuitively, $r$ controls the strength and direction of the spurious correlation between $V_b$ and $Y$ (e.g. if $r>0$, a data point whose $V_b$ is close to its $y$ is more likely to be selected). A larger $|r|$ means a stronger spurious correlation between $V_b$ and $Y$, and $r\geq 0$ means positive correlation and vice versa. Therefore, here we use $r$ to define different environments. In training, we generate $n$ data points, where $\kappa n$ points are from environment $e_1$ with a predefined $r$ and $(1-\kappa)n$ points are from $e_2$ with $r=-1.1$. In testing, we generate data points for 10 environments with $r\in[-3,-2,-1.7,\dots,1.7,2,3]$. $\beta$ is set to 1.0. A sketch of this selection mechanism is given below.
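A minimal sketch of the mechanism in Equations 48-49 (the array names `f_s` and `V_b` are ours): each point is kept when a uniform draw falls below $\hat{P}$.

```python
import numpy as np

def selection_mask(f_s, V_b, r, rng):
    """Equations 48-49: keep (x, y) with probability
    p_hat = prod_i |r|^(-5 |f(s) - sign(r) * v_i|)."""
    exponents = -5.0 * np.abs(f_s[:, None] - np.sign(r) * V_b)  # n x n_b
    p_hat = np.prod(np.abs(r) ** exponents, axis=1)
    return rng.uniform(size=f_s.shape[0]) <= p_hat
```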

TABLE I: Results in selection bias simulation experiments of different methods with varying selection bias rr, ratio κ\kappa, sample size nn and unstable covariates’ dimension nbn_{b} of training data, and each result is averaged over ten times runs.
Scenario 1: varying selection bias rate rr (n=2000,p=10,κ=0.95,nb=1n=2000,p=10,\kappa=0.95,n_{b}=1)
rr r=1.5r=1.5 r=1.7r=1.7 r=2.0r=2.0
Methods Mean_Error\mathrm{Mean\_Error} Std_Error\mathrm{Std\_Error} Mean_Error\mathrm{Mean\_Error} Std_Error\mathrm{Std\_Error} Mean_Error\mathrm{Mean\_Error} Std_Error\mathrm{Std\_Error}
ERM 0.484 0.058 0.561 0.124 0.572 0.140
LASSO 0.482 0.046 0.561 0.124 0.572 0.140
Ridge 0.483 0.045 0.560 0.125 0.572 0.140
WDRL 0.482 0.044 0.550 0.114 0.532 0.112
IRM 0.475 0.014 0.464 0.015 0.477 0.015
SAL 0.450 0.019 0.449 0.015 0.452 0.017
Scenario 2: varying ratio κ and sample size n (p=10, r=1.7, n_b=1)
κ, n κ=0.90, n=500 κ=0.90, n=1000 κ=0.975, n=4000
Methods Mean_Error Std_Error Mean_Error Std_Error Mean_Error Std_Error
ERM 0.580 0.103 0.562 0.113 0.555 0.110
LASSO 0.562 0.110 0.514 0.078 0.555 0.122
Ridge 0.561 0.107 0.517 0.080 0.555 0.121
WDRL 0.563 0.101 0.527 0.083 0.536 0.108
IRM 0.460 0.014 0.464 0.015 0.459 0.014
SAL 0.454 0.015 0.451 0.015 0.448 0.014
Scenario 3: varying ratio κ and sample size n (p=10, r=2.0, n_b=3)
κ, n κ=0.9, n=1000 κ=0.95, n=2000 κ=0.975, n=4000
Methods Mean_Error Std_Error Mean_Error Std_Error Mean_Error Std_Error
ERM 0.440 0.069 0.466 0.105 0.489 0.133
LASSO 0.433 0.059 0.460 0.097 0.482 0.124
Ridge 0.434 0.061 0.457 0.095 0.481 0.124
WDRL 0.433 0.058 0.459 0.095 0.481 0.122
IRM 0.458 0.007 0.458 0.008 0.458 0.008
SAL 0.415 0.019 0.411 0.015 0.411 0.016

We compare our SAL with ERM, LASSO, Ridge, IRM and WDRL for linear regression. We conduct extensive experiments with different settings of r, n, n_{b} and \kappa. In each setting, we repeat the procedure 10 times and report the average results, shown in Table I.

From the results, we have the following observations and analysis: ERM (as well as LASSO and Ridge) suffers from the distributional shifts in testing and yields poor performance in most settings. Compared with ERM, the three robust learning methods achieve better average performance due to the consideration of robustness during training. When the distributional shift becomes severe as r grows, WDRL suffers from the overwhelmingly-large distribution set and performs poorly in terms of prediction error, which is consistent with our analysis. IRM sacrifices average performance for stability across environments, which may be due to its strict requirements on the diversity of training environments. Compared with the other robust learning baselines, our SAL achieves the best results in terms of both average performance and stability, which reflects the effectiveness of assigning different weights to covariates when constructing the uncertainty set.

6.1.3 Illustration of the Confidence Problem

As mentioned above, WDRL faces the low confidence problem, also called the over-pessimism problem. We conduct a classification experiment to directly illustrate the confidence problem of WDRL as well as the superiority of our SAL. We make a slight modification to the selection bias setting to turn it into a classification problem. Specifically, we modify the generation of Y as:

Y=\mathrm{sign}(\theta_{s}^{T}S+\beta S_{1}S_{2}S_{3}+\epsilon) (50)

where \mathrm{sign}(x)=1_{x\geq 0}. In this experiment, we set n=2000, \kappa=0.95, p=10, n_{b}=1 and compare SAL with WDRL under uncertainty set radii in \{1e-2, 1e-1, 1e0, 1e1\}. The confidence of a binary classifier f_{\theta}(\cdot) is defined as the expected maximal prediction probability assigned to either class:

\mathrm{Conf}=\mathbb{E}[\max(f_{\theta}(x),1-f_{\theta}(x))] (51)
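As a minimal illustration of this metric (our own sketch; the `probs` array holding f_{\theta}(x) for each sample is an assumed interface), Conf can be estimated from the predicted probabilities:

```python
import numpy as np

def confidence(probs):
    # Empirical estimate of Eq. (51): the expected maximal probability the
    # binary classifier assigns to either class; 0.5 means random guessing.
    probs = np.asarray(probs)
    return float(np.mean(np.maximum(probs, 1.0 - probs)))

confidence([0.5, 0.5, 0.5])  # 0.5: fully unconfident, degenerate classifier
confidence([0.9, 0.1, 0.8])  # ~0.867: a fairly confident classifier
```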

We report the accuracy and confidence of SAL and WDRL in Table II. As the radius of the uncertainty set increases, the confidence of the WDRL classifier drops sharply towards 0.5, which means that the binary classifier cannot make a decision and simply guesses at random.

TABLE II: Results of the classification problem under the selection bias setting. Acc denotes the average accuracy and Conf the confidence.
Classification under selection bias (n=2000, p=10, κ=0.95, n_b=1)
Radius 1e-2 1e-1 1e0 1e1
Methods Acc Conf Acc Conf Acc Conf Acc Conf
WDRL 0.765 0.702 0.581 0.585 0.377 0.529 0.361 0.504
SAL 0.799 0.759 0.812 0.785 0.818 0.811 0.824 0.817

6.1.4 Anti-causal Effect

Inspired by [31], in this setting we introduce the spurious correlation through an anti-causal relationship from the target Y to the unstable covariates V. Assume X=[S,V]^{T}\in\mathcal{R}^{m} with S=[S_{1},\dots,S_{n_{s}}]^{T}\in\mathcal{R}^{n_{s}} and V=[V_{1},\dots,V_{n_{v}}]^{T}\in\mathcal{R}^{n_{v}}; the data generation process is as follows:

S\sim\sum_{i=1}^{k}z_{i}\mathcal{N}(\mu_{i},I),\quad Y=\theta_{s}^{T}S+\beta S_{1}S_{2}S_{3}+\mathcal{N}(0,0.3) (52)
V=\theta_{v}Y+\mathcal{N}(0,\sigma(\mu_{i})^{2}) (53)

where z_{i}\geq 0 and \sum_{i=1}^{k}z_{i}=1 are the mixture weights of the k Gaussian components, \sigma(\mu_{i}) indicates that the scale of the Gaussian noise added to V depends on which component the stable covariates S belong to, and \theta_{v}\in\mathcal{R}^{n_{v}}. Intuitively, the correlations between V and Y vary across Gaussian components due to the different values of \sigma(\mu_{i}): the larger \sigma(\mu_{i}) is, the weaker the correlation between V and Y.

We use the mixture weight Z=[z_{1},\dots,z_{k}]^{T} to define different environments, where different mixture weights represent different overall strengths of the effect of Y on V. In this experiment, we build 10 environments with varying \sigma and dimensions of S and V, the first three for training and the last seven for testing. Specifically, we set \beta=0.1, \mu_{1}=[0,0,0,1,1]^{T}, \mu_{2}=[0,0,0,1,-1]^{T}, \mu_{3}=[0,0,0,-1,1]^{T}, \mu_{4}=\mu_{5}=\dots=\mu_{10}=[0,0,0,-1,-1]^{T}, \sigma(\mu_{1})=0.2, \sigma(\mu_{2})=0.5, \sigma(\mu_{3})=1.0 and [\sigma(\mu_{4}),\sigma(\mu_{5}),\dots,\sigma(\mu_{10})]=[3.0,5.0,\dots,15.0]. \theta_{s} and \theta_{v} are randomly sampled from \mathcal{N}(1,I_{5}) and \mathcal{N}(0,0.1I_{5}) respectively in each run. We run the experiment 15 times and average the results.
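A minimal sketch of this anti-causal generating process follows (our own illustration; the function name and argument layout are assumptions, and \theta_{s}, \theta_{v} are sampled once per run and shared across environments as described above):

```python
import numpy as np

def generate_anticausal_env(n, mus, sigmas, weights, theta_s, theta_v,
                            beta=0.1, seed=0):
    # Sketch of Eqs. (52)-(53): S from a Gaussian mixture, Y caused by S, and
    # an anti-causal edge Y -> V whose noise scale sigma(mu_i) depends on the
    # mixture component; an environment corresponds to one weight vector Z.
    rng = np.random.default_rng(seed)
    ns, nv = len(theta_s), len(theta_v)
    comp = rng.choice(len(mus), size=n, p=weights)     # component assignment
    S = np.asarray(mus)[comp] + rng.normal(0.0, 1.0, (n, ns))
    Y = S @ theta_s + beta * S[:, 0] * S[:, 1] * S[:, 2] \
        + rng.normal(0.0, 0.3, n)                      # Eq. (52)
    V = np.outer(Y, theta_v) \
        + rng.normal(0.0, 1.0, (n, nv)) * np.asarray(sigmas)[comp][:, None]
    return np.hstack([S, V]), Y                        # X = [S, V]

rng = np.random.default_rng(0)
theta_s, theta_v = rng.normal(1.0, 1.0, 5), rng.normal(0.0, 0.1, 5)
# e.g. a training environment concentrated on the component with sigma = 0.2
X, Y = generate_anticausal_env(1000, mus=[[0, 0, 0, 1, 1]], sigmas=[0.2],
                               weights=[1.0], theta_s=theta_s, theta_v=theta_v)
```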

The average prediction errors are shown in Table III, where the first three environments are used for training and the last seven, with weaker correlations between V and Y, are unseen during training. ERM and IRM achieve the best training performance with respect to their prediction errors on training environments e_{1},e_{2},e_{3}, while their testing performance is poor. WDRL performs worst due to its over-pessimism problem. SAL achieves nearly uniformly good performance on both the training and the testing environments, which validates the effectiveness of our method and demonstrates the strong generalization ability of SAL.

TABLE III: Results of the anti-causal effect experiment. The average prediction errors of 15 runs are reported.
Scenario 1: n_s=5, n_v=5
e Training environments Testing environments
Methods e_1 e_2 e_3 e_4 e_5 e_6 e_7 e_8 e_9 e_10
ERM 0.281 0.305 0.341 0.461 0.555 0.636 0.703 0.733 0.765 0.824
LASSO 0.277 0.305 0.341 0.470 0.569 0.648 0.722 0.752 0.795 0.843
Ridge 0.258 0.306 0.347 0.483 0.588 0.673 0.751 0.783 0.828 0.879
IRM 0.287 0.293 0.329 0.345 0.382 0.420 0.444 0.461 0.478 0.504
WDRL 0.282 0.331 0.399 0.599 0.750 0.875 0.983 1.030 1.072 1.165
SAL 0.324 0.329 0.331 0.358 0.381 0.403 0.425 0.435 0.446 0.458
Scenario 2: n_s=9, n_v=1
e Training environments Testing environments
Methods e_1 e_2 e_3 e_4 e_5 e_6 e_7 e_8 e_9 e_10
ERM 0.272 0.280 0.298 0.526 0.362 0.411 0.460 0.504 0.534 0.580
LASSO 0.309 0.312 0.327 0.360 0.397 0.425 0.457 0.461 0.473 0.494
Ridge 0.309 0.313 0.330 0.367 0.408 0.439 0.474 0.479 0.493 0.517
IRM 0.306 0.312 0.325 0.328 0.343 0.358 0.365 0.374 0.377 0.394
WDRL 0.299 0.314 0.332 0.545 0.396 0.441 0.483 0.529 0.555 0.596
SAL 0.290 0.284 0.288 0.293 0.287 0.288 0.287 0.290 0.284 0.294

6.2 Real Data

In this section, we test our method on two real-world datasets. Given their similar performance in the simulations, we subsume LASSO and Ridge under ERM by treating the coefficient \lambda\geq 0 of the regularizer as a hyperparameter.
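Concretely, this amounts to treating the three baselines as one penalized objective (a minimal sketch with our own naming; \lambda=0 recovers plain ERM, and an \ell_1 or \ell_2 penalty recovers LASSO or Ridge):

```python
import numpy as np

def erm_objective(theta, X, y, lam=0.0, penalty="l2"):
    # Squared loss plus an optional regularizer: lam = 0 gives plain ERM,
    # penalty="l1" gives LASSO and penalty="l2" gives Ridge.
    resid = X @ theta - y
    reg = np.abs(theta).sum() if penalty == "l1" else (theta ** 2).sum()
    return float(np.mean(resid ** 2) + lam * reg)
```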

Figure 4: Results on the real-world datasets: (a) Mean_Error and Std_Error; (b) prediction error with respect to build year; (c) results on the Adult dataset. Panels (a) and (b) correspond to the regression data and panel (c) to the Adult dataset.

Regression  In this experiment, we use a real-world regression dataset from Kaggle of house sales prices in King County, USA, covering houses sold between May 2014 and May 2015 (https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). The target variable is the transaction price of the house, and each sample contains 17 predictive variables, such as the build year of the house, number of bedrooms, and square footage of the home. We normalize all the predictive covariates to remove the influence of their original scales.

To test the stability of different algorithms, we simulate different environments according to the build year of the house. It is reasonable to assume that the correlations between some of the covariates and the target vary over time, owing to changing popular styles of architecture. Specifically, the houses in this dataset were built between 1900 and 2015, and we split the dataset into 6 periods, where each period covers a time span of approximately two decades. In training, we train all methods on the first and second decades, where build year \in[1900,1910) and [1910,1920) respectively, and validate on 100 data points sampled from the second period.
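As a hedged sketch of this split (the file and column names `kc_house_data.csv` and `yr_built` follow the public King County release on Kaggle and are our assumptions, not details stated in the paper):

```python
import pandas as pd

df = pd.read_csv("kc_house_data.csv")            # assumed file name
# Six periods of roughly two decades each over build years 1900-2015.
bins = [1900, 1920, 1940, 1960, 1980, 2000, 2016]
df["period"] = pd.cut(df["yr_built"], bins=bins, right=False,
                      labels=[1, 2, 3, 4, 5, 6])
train = df[df["yr_built"] < 1920]                # [1900,1910) and [1910,1920)
val = df[df["period"] == 2].sample(100, random_state=0)
test_envs = {p: df[df["period"] == p] for p in range(1, 7)}
```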

From the results shown in Figure 4(a), we find that SAL achieves not only the smallest Mean_Error but also the lowest Std_Error among all baselines. From Figure 4(b), we find that from period 4 onward, where large distributional shifts occur, ERM performs poorly and incurs larger prediction errors. IRM performs stably across the first 4 environments but also fails on the last two, whose distributional shifts are stronger. WDRL remains stable across environments but its mean error is high, which is consistent with our analysis in Section 6.1.1 that WDRL equally perturbs all covariates and sacrifices accuracy for robustness. From period 3 onward, SAL performs better than ERM, IRM and WDRL, especially when distributional shifts are large. In periods 1-2, with only slight distributional shifts, SAL incurs a small performance drop compared with IRM and WDRL, while it performs much better when larger distributional shifts occur, which is consistent with our intuition that our method sacrifices a little performance in the nearly i.i.d. setting for superior robustness under unknown distributional shifts.

Classification  Finally, we validate the effectiveness of SAL on an income prediction task using the Adult dataset [46], which involves predicting whether personal income is above or below $50,000 per year based on personal details. We split the dataset into 10 environments according to demographic attributes, among which distributional shifts may exist. In the training phase, we train all methods on 693 data points from the first environment and 200 points from the second, and validate on 100 points sampled from both. We normalize all the predictive covariates to remove the influence of their original scales. In the testing phase, we test all methods on the 10 environments and report the misclassification rates in Figure 4(c). From the results, we find that SAL outperforms the baselines on almost all environments, with only a slight drop on the first; in particular, in the remaining 8 environments, where agnostic distributional shifts occur, SAL outperforms all the others.

7 Conclusion

In this paper, we address the practical problem of the overwhelmingly-large uncertainty set in robust learning, which often results in unsatisfactory performance under distributional shifts in real situations. We propose the Stable Adversarial Learning (SAL) algorithm, which treats covariates anisotropically to achieve more realistic robustness. We theoretically show that our method constructs a better uncertainty set and provide theoretical guarantees for our method. Empirical studies validate the effectiveness of our method in terms of uniformly good performance across differently distributed data.

Acknowledgments

We would like to thank the anonymous reviewers for their constructive suggestions and efforts to improve this paper. This work was supported in part by the National Key R&D Program of China (No. 2018AAA0102004), the National Natural Science Foundation of China (No. U1936219, 62141607), and the Beijing Academy of Artificial Intelligence (BAAI). Kun Kuang's research was supported by the National Natural Science Foundation of China (U20A20387, 62006207), the National Key Research and Development Program of China (2021YFC3340300), and the Young Elite Scientists Sponsorship Program by CAST (2021QNRC001). Bo Li's research was supported by the National Natural Science Foundation of China (No. 72171131), the Tsinghua University Initiative Scientific Research Grant (No. 2019THZWJC11), and the Technology and Innovation Major Project of the Ministry of Science and Technology of China under Grants 2020AAA0108400 and 2020AAA0108403.

References

  • [1] H. Daume and D. Marcu, “Domain adaptation for statistical classifiers,” Journal of Artificial Intelligence Research, vol. 26, no. 1, pp. 101–126, 2006.
  • [2] A. Torralba and A. A. Efros, “Unbiased look at dataset bias,” in 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).   IEEE, 2011, pp. 1521–1528.
  • [3] Z. Shen, P. Cui, T. Zhang, and K. Kuang, “Stable learning via sample reweighting,” arXiv: Learning, 2019.
  • [4] M. Kukar, “Transductive reliability estimation for medical diagnosis,” Artificial Intelligence in Medicine, vol. 29, no. 1-2, pp. 81–106, 2003.
  • [5] R. Berk, H. Heidari, S. Jabbari, M. Kearns, and A. Roth, “Fairness in criminal justice risk assessments: The state of the art,” Sociological Methods & Research, p. 0049124118782533, 2018.
  • [6] C. Rudin and B. Ustun, “Optimized scoring systems: Toward trust in machine learning for healthcare and criminal justice,” Interfaces, vol. 48, no. 5, pp. 449–466, 2018.
  • [7] B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue et al., “An empirical evaluation of deep learning on highway driving,” arXiv preprint arXiv:1504.01716, 2015.
  • [8] A. Ben-Tal and A. Nemirovski, “Robust convex optimization,” Mathematics of operations research, vol. 23, no. 4, pp. 769–805, 1998.
  • [9] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
  • [10] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” arXiv preprint arXiv:1706.06083, 2017.
  • [11] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, “The limitations of deep learning in adversarial settings,” in 2016 IEEE European symposium on security and privacy (EuroS&P).   IEEE, 2016, pp. 372–387.
  • [12] N. Ye and Z. Zhu, “Bayesian adversarial learning,” in Proceedings of the 32nd International Conference on Neural Information Processing Systems.   Curran Associates Inc., 2018, pp. 6892–6901.
  • [13] A. Sinha, H. Namkoong, and J. Duchi, “Certifying some distributional robustness with principled adversarial training,” International Conference on Learning Representations, 2018.
  • [14] P. M. Esfahani and D. Kuhn, “Data-driven distributionally robust optimization using the wasserstein metric: Performance guarantees and tractable reformulations,” Mathematical Programming, vol. 171, no. 1-2, pp. 115–166, 2018.
  • [15] J. Duchi and H. Namkoong, “Learning models with uniform performance via distributionally robust optimization,” arXiv preprint arXiv:1810.08750, 2018.
  • [16] C. Frogner, S. Claici, E. Chien, and J. Solomon, “Incorporating unlabeled data into distributionally robust learning,” arXiv preprint arXiv:1912.07729, 2019.
  • [17] S. Sagawa, P. W. Koh, T. B. Hashimoto, and P. Liang, “Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization,” arXiv preprint arXiv:1911.08731, 2019.
  • [18] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie, “The caltech-ucsd birds-200-2011 dataset,” 2011.
  • [19] A. Bhattad, M. J. Chong, K. Liang, B. Li, and D. A. Forsyth, “Big but imperceptible adversarial perturbations via semantic manipulation,” CoRR, vol. abs/1904.06347, 2019. [Online]. Available: http://arxiv.org/abs/1904.06347
  • [20] P. Vaishnavi, T. Cong, K. Eykholt, A. Prakash, and A. Rahmati, “Can attention masks improve adversarial robustness?” arXiv preprint arXiv:1911.11946, 2019.
  • [21] S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Trans. Knowl. Data Eng., vol. 22, no. 10, pp. 1345–1359, 2010.
  • [22] H. Shimodaira, “Improving predictive inference under covariate shift by weighting the log-likelihood function,” Journal of Statistical Planning and Inference, vol. 90, no. 2, pp. 227–244, 2000.
  • [23] S. Bickel, M. Brückner, and T. Scheffer, “Discriminative learning under covariate shift,” J. Mach. Learn. Res., vol. 10, pp. 2137–2155, 2009.
  • [24] M. Dudík, R. E. Schapire, and S. J. Phillips, “Correcting sample selection bias in maximum entropy density estimation,” in Advances in Neural Information Processing Systems 18 [Neural Information Processing Systems, NIPS 2005].
  • [25] J. Huang, A. J. Smola, A. Gretton, K. M. Borgwardt, and B. Schölkopf, “Correcting sample selection bias by unlabeled data,” in Advances in Neural Information Processing Systems 19, Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 4-7, 2006.
  • [26] B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars, “Unsupervised visual domain adaptation using subspace alignment,” in IEEE International Conference on Computer Vision, ICCV 2013, Sydney, Australia, December 1-8, 2013.   IEEE Computer Society, 2013, pp. 2960–2967.
  • [27] Y. Ganin and V. S. Lempitsky, “Unsupervised domain adaptation by backpropagation,” in Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, F. R. Bach and D. M. Blei, Eds.
  • [28] K. Muandet, D. Balduzzi, and B. Schölkopf, “Domain generalization via invariant feature representation,” in Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, ser. JMLR Workshop and Conference Proceedings.
  • [29] J. Peters, P. Bühlmann, and N. Meinshausen, “Causal inference using invariant prediction: identification and confidence intervals,” Statistics, vol. 78, no. 5, pp. 947–1012, 2015.
  • [30] M. Rojas-Carulla, B. Schölkopf, R. E. Turner, and J. Peters, “Invariant models for causal transfer learning,” J. Mach. Learn. Res., vol. 19, pp. 36:1–36:34, 2018.
  • [31] M. Arjovsky, L. Bottou, I. Gulrajani, and D. Lopez-Paz, “Invariant risk minimization,” arXiv preprint arXiv:1907.02893, 2019.
  • [32] K. Kuang, P. Cui, S. Athey, R. Xiong, and B. Li, “Stable prediction across unknown environments,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, pp. 1617–1626.
  • [33] P. Cui and S. Athey, “Stable learning establishes some common ground between causal inference and machine learning,” Nature Machine Intelligence, vol. 4, no. 2, pp. 110–115, 2022.
  • [34] E. Delage and Y. Ye, “Distributionally robust optimization under moment uncertainty with application to data-driven problems,” Oper. Res., vol. 58, no. 3, p. 595–612, May 2010.
  • [35] D. Bertsimas, V. Gupta, and N. Kallus, “Data-driven robust optimization,” Mathematical Programming, vol. 167, no. 2, pp. 235–292, 2018.
  • [36] H. Namkoong and J. C. Duchi, “Stochastic gradient methods for distributionally robust optimization with f-divergences,” Neural Information Processing Systems (NIPS), pp. 2208–2216, 2016.
  • [37] S. Abadeh, Shafieezadeh, P. M. Esfahani, and D. Kuhn, “Distributionally robust logistic regression,” arXiv: Optimization and Control, 2015.
  • [38] W. Hu, G. Niu, I. Sato, and M. Sugiyama, “Does distributionally robust supervised learning give robust classifiers?” in Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2029–2037, 2018.
  • [39] K. Kuang, P. Cui, S. Athey, R. Xiong, and B. Li, “Stable prediction across unknown environments,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, August 19-23, 2018.
  • [40] K. Kuang, R. Xiong, P. Cui, S. Athey, and B. Li, “Stable prediction with model misspecification and agnostic distribution shift,” in The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020.
  • [41] P. L. Bartlett and S. Mendelson, “Rademacher and gaussian complexities: Risk bounds and structural results,” Journal of Machine Learning Research, vol. 3, no. Nov, pp. 463–482, 2002.
  • [42] R. Tibshirani, “Regression shrinkage and selection via the lasso: a retrospective,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 73, no. 3, 2011.
  • [43] A. E. Hoerl and R. W. Kennard, “Ridge regression: applications to nonorthogonal problems,” Technometrics, vol. 12, no. 1, pp. 69–82, 1970.
  • [44] Z. Shen, P. Cui, K. Kuang, B. Li, and P. Chen, “Causally regularized learning with agnostic data selection bias,” in 2018 ACM Multimedia Conference, 2018.
  • [45] S. Sagawa, A. Raghunathan, P. W. Koh, and P. Liang, “An investigation of why overparameterization exacerbates spurious correlations,” 2020.
  • [46] D. Dua and C. Graff, “UCI machine learning repository,” 2017. [Online]. Available: http://archive.ics.uci.edu/ml
Jiashuo Liu received his BE degree from the Department of Computer Science and Technology, Tsinghua University in 2020. He is currently pursuing a Ph.D. degree in the Department of Computer Science and Technology at Tsinghua University. His research interests focus on causally-regularized machine learning and distributionally robust learning, especially in developing algorithms with stable performance under distributional shifts.
Zheyan Shen is a Ph.D. candidate in the Department of Computer Science and Technology, Tsinghua University. He received his B.S. from the Department of Computer Science and Technology, Tsinghua University in 2017. His research interests include causal inference, stable prediction under selection bias and the interpretability of machine learning.
Peng Cui is an Associate Professor with tenure at Tsinghua University. He got his PhD degree from Tsinghua University in 2010. His research interests include causally-regularized machine learning, network representation learning, and social dynamics modeling. He has published more than 100 papers in prestigious conferences and journals in data mining and multimedia. His recent research won the IEEE Multimedia Best Department Paper Award, SIGKDD 2016 Best Paper Finalist, ICDM 2015 Best Student Paper Award, SIGKDD 2014 Best Paper Finalist, IEEE ICME 2014 Best Paper Award, ACM MM12 Grand Challenge Multimodal Award, and MMM13 Best Paper Award. He is PC co-chair of CIKM2019 and MMM2020, SPC or area chair of WWW, ACM Multimedia, IJCAI, AAAI, etc., and an Associate Editor of IEEE TKDE, IEEE TBD, ACM TIST, and ACM TOMM, etc. He received the ACM China Rising Star Award in 2015, and the CCF-IEEE CS Young Scientist Award in 2018. He is now a Distinguished Member of ACM and CCF, and a Senior Member of IEEE.
Linjun Zhou received his BE degree from the Department of Computer Science and Technology of Tsinghua University in 2016. He is now a Ph.D. candidate in the Lab of Media and Network, Department of Computer Science and Technology, Tsinghua University. His research interests include few-shot learning and robust learning.
Kun Kuang is an Associate Professor in the College of Computer Science and Technology, Zhejiang University. He received his Ph.D. from the Department of Computer Science and Technology at Tsinghua University in 2019. He was a visiting scholar at Stanford University. His main research interests include causal inference and causally regularized machine learning. He has published over 30 papers in major international journals and conferences, including SIGKDD, ICML, ACM MM, AAAI, IJCAI, TKDE, TKDD, Engineering, and ICDM.
Bo Li received a Ph.D. degree in Statistics from the University of California, Berkeley, and a bachelor's degree in Mathematics from Peking University. He is an Associate Professor at the School of Economics and Management, Tsinghua University. His research interests are business analytics and risk-sensitive artificial intelligence. He has published widely in academic journals across a range of fields including statistics, management science and economics.
Proof of Theorem 2.

First, we prove that \mathcal{P}\subseteq\mathcal{P}_{0}. For any P\in\mathcal{P}, there exists a coupling M_{0} of P and P_{0} on \mathcal{Z}\times\mathcal{Z} satisfying:

\mathbb{E}_{(z,z^{\prime})\sim M_{0}}[c_{w}(z,z^{\prime})]\leq\rho (54)

Note that c_{w} is optimal if and only if \min(w^{(i)})=1 and \max(w^{(i)})>1. Therefore, for all z,z^{\prime}\in\mathcal{Z}, c(z,z^{\prime})<c_{w}(z,z^{\prime}), and we have:

W_{c}(P,P_{0})=\inf_{M\in\Pi(P,P_{0})}\mathbb{E}_{(z,z^{\prime})\sim M}[c(z,z^{\prime})] (55)
\leq\mathbb{E}_{(z,z^{\prime})\sim M_{0}}[c(z,z^{\prime})] (56)
<\mathbb{E}_{(z,z^{\prime})\sim M_{0}}[c_{w}(z,z^{\prime})]\leq\rho (57)

and therefore P\in\mathcal{P}_{0}, which gives \mathcal{P}\subseteq\mathcal{P}_{0}.

Second, we prove that there exists Q_{0}\in\mathcal{P}_{0} such that Q_{0}\notin\mathcal{P} under Assumptions 2 and 3. We have:

\mathbb{E}_{(z,z^{\prime})\sim M_{0}}[c_{w}(z,z^{\prime})]>\mathbb{E}_{(z,z^{\prime})\sim M_{0}}[c(z,z^{\prime})]\geq\rho (58)

and

\mathbb{E}_{(z,z^{\prime})\sim M}[c_{w}(z,z^{\prime})]>\rho\quad\text{for all }M\in\Pi(P_{0},Q_{0})\setminus\{M_{0}\} (59)

which leverages the property that \|\cdot\|_{1} and \|\cdot\|_{2}^{2} are strictly increasing in the absolute value of each coordinate of their argument.

For a distribution Q_{0} satisfying Assumption 3, we have:

\mathbb{E}_{(z,z^{\prime})\sim M_{0}}[c_{w}(z,z^{\prime})]>\rho (60)
\mathbb{E}_{(z,z^{\prime})\sim M}[c_{w}(z,z^{\prime})]>\rho\quad\text{for all }M\in\Pi(P_{0},Q_{0})\setminus\{M_{0}\} (61)

and therefore:

\inf_{M\in\Pi(P_{0},Q_{0})}\mathbb{E}_{(z,z^{\prime})\sim M}[c_{w}(z,z^{\prime})]>\rho (62)

which proves that Q_{0}\notin\mathcal{P}. Therefore, we have \mathcal{P}\subset\mathcal{P}_{0}.

Furthermore, we prove that for the set U=\{i\,|\,w^{(i)}=1\}, there exists Q_{0}\in\mathcal{P} that satisfies W_{c_{w}}(P_{0\#U},Q_{0\#U})=\rho with the help of Assumption 3. Assuming that distribution H satisfies Assumption 3, we first construct a distribution Q_{0} as follows:

Q_{0\#U}=H (63)
Q_{0}(s|v)=P_{0\#S}(s)\quad\text{for all }v\in\mathcal{Z}_{\#U},\ s\in\mathcal{Z}_{\#S} (64)

where S=\{i\,|\,w^{(i)}>1\}. Since W_{c_{w}}(Q_{0\#U},P_{0\#U})=\rho, we have:

\inf_{M\in\Pi(P_{0\#U},Q_{0\#U})}\mathbb{E}_{(z,z^{\prime})\sim M}[c(z,z^{\prime})]\leq\rho (65)

where we refer to the coupling minimizing \mathbb{E}_{(z,z^{\prime})\sim M}[c_{w}(z,z^{\prime})] as M_{0}. We then construct a joint coupling M supported on \mathcal{Z}\times\mathcal{Z}, where M(z,z^{\prime}) for z,z^{\prime}\in\mathcal{Z} denotes the probability of transporting z to z^{\prime}.

Assume Z=[S,V], where S\in\mathcal{Z}_{\#S} and V\in\mathcal{Z}_{\#U}. For any v_{1},v_{2}\in\mathcal{Z}_{\#U}, according to Equation (64), the distribution P_{0}(S|V=v_{1}) is the same as Q_{0}(S|V=v_{2}), so the optimal transportation cost between them is zero.

For a transportation scheme \hat{M} on \mathcal{Z}\times\mathcal{Z} induced by this construction, we have

\int_{z\in\mathcal{Z}}\int_{z^{\prime}\in\mathcal{Z}}c_{w}(z,z^{\prime})\hat{M}(z,z^{\prime})\,dz\,dz^{\prime} (66)
=\int_{v\in\mathcal{Z}_{\#U}}\int_{v^{\prime}\in\mathcal{Z}_{\#U}}c_{w}(v,v^{\prime})\hat{M}^{*}(v,v^{\prime})\,dv\,dv^{\prime} (67)

where \hat{M}^{*} in Equation (67) denotes the distribution induced by \hat{M} on \mathcal{Z}_{\#U}\times\mathcal{Z}_{\#U}. Therefore, we have

W_{c_{w}}(P_{0},Q_{0}) (68)
=\inf_{M\in\Pi(P_{0},Q_{0})}\int_{z\in\mathcal{Z}}\int_{z^{\prime}\in\mathcal{Z}}c_{w}(z,z^{\prime})M(z,z^{\prime})\,dz\,dz^{\prime} (69)
=\inf_{M\in\Pi(P_{0\#U},Q_{0\#U})}\int_{v\in\mathcal{Z}_{\#U}}\int_{v^{\prime}\in\mathcal{Z}_{\#U}}c_{w}(v,v^{\prime})M(v,v^{\prime})\,dv\,dv^{\prime} (70)
=W_{c_{w}}(P_{0\#U},Q_{0\#U})=\rho (71)