
Rethinking Multi-domain Generalization with A General Learning Objective

Zhaorui Tan1,2, Xi Yang1, Kaizhu Huang  3
1Xi’an Jiaotong-Liverpool University, 2University of Liverpool, 3Duke Kunshan University
[email protected], [email protected], [email protected]
Corresponding authors
Abstract

Multi-domain generalization (mDG) universally aims to minimize the discrepancy between training and testing distributions to enhance marginal-to-label distribution mapping. However, existing mDG literature lacks a general learning objective paradigm and often imposes constraints on static target marginal distributions. In this paper, we propose to leverage a 𝐘{\mathbf{Y}}-mapping to relax the constraint. We rethink the learning objective for mDG and design a new general learning objective to interpret and analyze most existing mDG wisdom. This general objective is bifurcated into two synergistic aims: learning domain-independent conditional features and maximizing a posterior. Explorations also extend to two effective regularization terms that incorporate prior information and suppress invalid causality, alleviating the issues that come with relaxed constraints. We theoretically contribute an upper bound for the domain alignment of domain-independent conditional features, disclosing that many previous mDG endeavors actually only partially optimize the objective and thus lead to limited performance. As such, our study distills the general learning objective into four practical components, providing a general, robust, and flexible mechanism to handle complex domain shifts. Extensive empirical results indicate that the proposed objective with 𝐘{\mathbf{Y}}-mapping leads to substantially better mDG performance in various downstream tasks, including regression, segmentation, and classification. Code is available at https://github.com/zhaorui-tan/GMDG/tree/main.

1 Introduction

Domain shift, which breaks the independent and identically distributed (i.i.d.) assumption amid training and test distributions [51], poses a common yet challenging problem in real-world scenarios. Multi-domain generalization (mDG) [3] is garnering increasing attention owing to its promising capacity to utilize multiple distinct but related source domains for model optimization, ultimately intending to generalize well to unseen domains. Intrinsically, the primary objective for mDG is the maximization of the joint distribution between observations 𝐗{\mathbf{X}} and targets 𝐘{\mathbf{Y}} across all domains 𝒟\mathcal{D}:

maxP(𝐗,𝐘𝒟)=P(𝐘𝒟)P(𝐗𝐘,𝒟)=P(𝐗𝒟)P(𝐘𝐗,𝒟).\begin{split}\max P({{\mathbf{X}},{\mathbf{Y}}\mid\mathcal{D}})=&P({{\mathbf{Y}}\mid\mathcal{D}})P({{\mathbf{X}}\mid{\mathbf{Y}},\mathcal{D}})\\ =&P({{\mathbf{X}}\mid\mathcal{D}})P({{\mathbf{Y}}\mid{\mathbf{X}},\mathcal{D}}).\end{split} (1)

A prevalent approach begins by maximizing the marginal distribution P(𝐗|𝒟)P({\mathbf{X}}|\mathcal{D}) before presuming an invariant P(𝐘|𝐗)=P(𝐘|𝐗,𝒟)P({\mathbf{Y}}|{\mathbf{X}})=P({\mathbf{Y}}|{\mathbf{X}},\mathcal{D}) across domains [58], anchored on an assumption that P(𝐘|𝒟)P({\mathbf{Y}}|\mathcal{D}) remains consistent across 𝒟\mathcal{D}.

Aim1: Learning domain invariance
Others None
DANN minϕH(P(ϕ(𝐗)𝒟))\min_{\phi}H(P(\phi({\mathbf{X}})\mid\mathcal{D}))
CDANN, CIDG, MDA minϕH(P(ϕ(𝐗),𝐘𝒟))\min_{\phi}H(P({\phi({\mathbf{X}}),{\mathbf{Y}}}\mid\mathcal{D}))
Ours: GAim1 minϕ,ψH(P(ϕ(𝐗),ψ(𝐘)𝒟))\min_{\phi,\psi}H(P({\phi({\mathbf{X}}),\psi({\mathbf{Y}})}\mid\mathcal{D}))
Reg1: Integrating prior
Others None
MIRO, SIMPLE minϕDKL(P(ϕ(𝐗),𝐘)𝒪){\min_{\phi}D_{\mathrm{KL}}(P({\phi({\mathbf{X}}),{\mathbf{Y}}})\|\mathcal{O})}
Ours: GReg1 minϕ,ψDKL(P(ϕ(𝐗),ψ(𝐘))𝒪){\min_{\phi,\psi}D_{\mathrm{KL}}(P({\phi({\mathbf{X}}),\psi{({\mathbf{Y}})}})\|\mathcal{O})}
Aim2: Maximizing A Posterior (MAP)
Others minϕH(P(𝐘,ϕ(𝐗)))\min_{\phi}H(P({{\mathbf{Y}},\phi({\mathbf{X}})}))
Ours: GAim2 minϕ,ψH(P(𝐘,ϕ(𝐗)))+H(P(𝐘,ψ(𝐘)))\min_{\phi,\psi}H(P({{\mathbf{Y}},\phi({\mathbf{X}})}))+H(P({{\mathbf{Y}},\psi({\mathbf{Y}})}))
Reg2: Suppressing invalid causality
Others None
CORAL minϕH(P(ϕ(𝐗)𝒟))+H(P(ϕ(𝐗)))\min_{\phi}-H(P({\phi({\mathbf{X}})\mid\mathcal{D}}))+H(P({\phi({\mathbf{X}})}))
MDA, RobustNet minϕH(P(ϕ(𝐗)𝐘))+H(P(ϕ(𝐗)))\min_{\phi}-H(P({\phi({\mathbf{X}})\mid{\mathbf{Y}}}))+H(P(\phi({\mathbf{X}})))
Ours: GReg2 minϕ,ψH(P(ϕ(𝐗)ψ(𝐘)))+H(P(ϕ(𝐗))){\min_{\phi,\psi}-H(P(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}})))+H(P(\phi({\mathbf{X}})))}
Table 1: A summary of objectives of ERM [14], DANN [13], CORAL [48], CDANN [29], CIDG [28], MDA [16], MIRO [19], SIMPLE [30], RobustNet [10], VA-DepthNet [32] and Ours. All constants are omitted here. ‘Others’ denotes methods not otherwise specified. For more details, see Supplementary Material 7.

Is P(𝐘|𝒟)P({\mathbf{Y}}|\mathcal{D}) truly static across domains? In other words, does 𝐘{\mathbf{Y}} truly lack domain-dependent features? In classification tasks, the influence of 𝒟\mathcal{D} on 𝐘{\mathbf{Y}} is typically marginal. However, this assumption is not universally applicable, particularly in tasks such as regression or segmentation. Consequently, MDA [16] relaxes the assumption of a stable P(𝐘|𝒟)P({\mathbf{Y}}|\mathcal{D}) by providing an average class discrepancy, allowing both P(𝐗|𝐘,𝒟)P({\mathbf{X}}|{\mathbf{Y}},\mathcal{D}) and P(𝐘|𝒟)P({\mathbf{Y}}|\mathcal{D}) to vary across 𝒟\mathcal{D}. However, MDA must conduct class-specific sample selection within each domain to obtain P(𝐗|𝐘,𝒟)P({\mathbf{X}}|{\mathbf{Y}},\mathcal{D}), which limits its objective’s universality and makes it struggle with tasks beyond basic classification, especially where 𝐘{\mathbf{Y}} is not discrete.

To better tackle the 𝒟\mathcal{D}-dependent variations in both 𝐗{\mathbf{X}} and 𝐘{\mathbf{Y}} for broader tasks besides classification, we introduce two learnable mappings, ϕ\phi and ψ\psi, that project 𝐗{\mathbf{X}} and 𝐘{\mathbf{Y}} into the same latent Reproducing Kernel Hilbert Space (RKHS) and are assumed to extract 𝒟\mathcal{D}-independent features from 𝐗,𝐘{\mathbf{X}},{\mathbf{Y}}. Incorporating these, Eq. 1 can be recast as

maxϕ,ψP(ϕ(𝐗),ψ(𝐘)), s.t., ϕ(𝐗),ψ(𝐘)𝒟.\displaystyle\max\nolimits_{\phi,\psi}P(\phi({\mathbf{X}}),\psi({\mathbf{Y}})),\;\text{ s.t., }\phi({\mathbf{X}}),\psi({\mathbf{Y}})\perp\!\!\!\perp\mathcal{D}. (2)

Built upon the optimization of Eq. 2, we further identify two additional issues that warrant consideration. 1). The synergy of integrating prior information and domain-invariant feature learning plays a crucial role: pre-trained (oracle) models can be used as priors [19, 30] to regulate feature learning. 2). Relaxing the static P(𝐘|𝒟)P({\mathbf{Y}}|\mathcal{D}) assumption while learning invariance brings an invalid causality predicament to light. Learning aligns with the causality premise ϕ(𝐗)ψ(𝐘)\phi({\mathbf{X}})\to\psi({\mathbf{Y}}) to maximize P(ϕ(𝐗),ψ(𝐘)𝒟)P({\phi({\mathbf{X}}),\psi({\mathbf{Y}})\mid\mathcal{D}}), so efforts must be made to suppress the invalid causality ψ(𝐘)ϕ(𝐗)\psi({\mathbf{Y}})\to\phi({\mathbf{X}}) during invariant-feature learning (refer to Eq. 8 for the derivation).

Considering these findings, a general objective for mDG that copes with the above issues and effectively relaxes the static target distribution assumption is crucial. Specifically, it should consist of four key parts: Aim1- Learning domain-invariant representations and Aim2- Maximizing the posterior, together with two regularizations: Reg1- Integrating prior information and Reg2- Suppressing invalid causality. In essence, the objective should ensure invariant representations of 𝐗,𝐘{\mathbf{X}},{\mathbf{Y}} across domains while preserving the prediction relationship 𝐗𝐘{\mathbf{X}}\!\to\!{\mathbf{Y}}. As a notable contribution, we redesign the conventional mDG paradigm and uniformly simplify most previous works’ empirical objectives, as summarized in Table 1; notations are shown in Table 2.

Symbols Descriptions
dn𝒟,nNd_{n}\in\mathcal{D},n\leq N ; d𝒟d^{\prime}\in\mathcal{D}: The nn-th observed domains in all domains; Unseen domains in all domains.
𝐗,𝐘{\mathbf{X}},{\mathbf{Y}}; 𝐗n,𝐘n{\mathbf{X}}_{n},{\mathbf{Y}}_{n}; 𝐗,𝐘{\mathbf{X}}^{\prime},{\mathbf{Y}}^{\prime}. All observations and targets; Observations and targets in dnd_{n}; Observations and targets in dd^{\prime}.
P(x)P(x): Distributions where xx corresponds to the random variables.
ϕ,ψ\phi,\psi: Learnable transformations that codify 𝐗,𝐘{\mathbf{X}},{\mathbf{Y}} into the same latent RKHS.
ϕ(𝐗),ψ(𝐘)\phi({\mathbf{X}}),\psi({\mathbf{Y}}): Mapped 𝐗,𝐘{\mathbf{X}},{\mathbf{Y}}. Within the RKHS realm, ϕ(𝐗),ψ(𝐘)\phi({\mathbf{X}}),\psi({\mathbf{Y}}) follow Multivariate Gaussian Distributions.
𝒪\mathcal{O}; R()R(\cdot); σ,\sigma_{\cdot,\cdot}: Prior knowledge (oracle model); Empirical risks; Covariance between two variables.
𝒞:ϕ(𝐗),ψ(𝐘)𝐘\mathcal{C}:\phi({\mathbf{X}}),\psi({\mathbf{Y}})\to{\mathbf{Y}}: Predictor that predicts 𝐘{\mathbf{Y}} from ϕ(𝐗),ψ(𝐘)\phi({\mathbf{X}}),\psi({\mathbf{Y}}).
DKL();H();Hc(,)D_{\mathrm{KL}}(\cdot\|\cdot);H(\cdot);H_{c}(\cdot,\cdot): KL divergence; Entropy; Cross-entropy.
Table 2: A summary of notations.

Most current mDG studies only focus on classification. SOTA methods such as MIRO [19] and SIMPLE [30] propose learning features similar to those of “oracle” models as a substitute for learning domain-invariant representations for mDG. Worth mentioning, we counter MIRO’s argument by confirming the persisting necessity of domain-invariant features, even under a prior distribution, through a theoretical derivation from minimizing the Generalized Jensen-Shannon Divergence (GJSD). MDA [16] pioneered the relaxation of the static P(𝐘|𝒟)P({\mathbf{Y}}|\mathcal{D}) assumption, yet without explicitly introducing a 𝐘{\mathbf{Y}}-mapping function, and overlooked the invalid causality that emerges upon the relaxation. Beyond classification, RobustNet [10] and VA-DepthNet [32] explore mDG settings in segmentation and regression but propose no explicit objective for mDG. Importantly, our theoretical analysis and empirical findings suggest that mere aggregation of all the aforementioned objectives fails to yield a comprehensive general objective for mDG. For instance, the term H(P(ϕ(𝐗)𝒟))\!-H(P({\phi({\mathbf{X}})\mid\mathcal{D}})), coupled with prior knowledge utilization, could inadvertently precipitate performance degradation.

In this paper, we introduce the General Multi-Domain Generalization Objective (GMDG) to overcome the limitations of current methods, relaxing the static assumption of P(𝐘|𝒟)P({\mathbf{Y}}|\mathcal{D}) (the overall formulation is shown in Section 3). Meanwhile, we propose an actionable solution to the invalid causality through the minimization of the Conditional Feature Shift (CFS). Our main contributions can be summarized as follows:

  • We theoretically prove that domain generalization can be improved through the minimization of Generalized Jensen-Shannon Divergence (GJSD), with the incorporation of prior knowledge, leading to the derivation of an alignment upper bound (PUB) (Section 3).

  • We analyze existing approaches, demonstrating their incomplete optimization against the GMDG and identifying unexpected terms they inadvertently introduce (Section 4).

  • Our approach is, to our knowledge, the first designed to be compatible with existing mDG frameworks; it exhibits performance improvements in a suite of tasks, including regression, segmentation, and classification, as confirmed by our comprehensive experiments (Section 5).

Notably, our results using only one pre-trained model as the prior in classification tasks exceed the SOTA SIMPLE++, which employs 283 pre-trained models as an ensemble oracle, while yielding consistent improvements in regression and segmentation, as shown in Figure 1. This further suggests the superiority of GMDG.

2 Related work

Multi-domain generalization. Most current mDG methods focus only on classification tasks. To learn better 𝒟\mathcal{D}-independent representations for mDG, DANN [13] minimizes feature divergences between the source domains. CDANN [29], CIDG [28], and MDA [16] additionally take conditions into consideration and aim to learn conditionally invariant features across domains. [5, 7, 19, 30] point out that learning representations invariant to source domains is insufficient for mDG. Thus, MIRO [19] and SIMPLE [30] adopt pre-trained models as an oracle for seeking better general representations across various domains, including unseen target domains. Meanwhile, RobustNet [10] constrains conditional covariance shifts and conducts mDG segmentation, while little exploration has focused on mDG regression, though many other works pay attention to single-domain generalization [37, 20, 24, 32]. Our study shows that their objectives partially optimize GMDG, leading to sub-optimal results.

Multi-domain generalization assumptions. In the literature, different assumptions are proposed to simplify the task as described by the original objective in Eq. 1. One assumption is that P(𝐘|𝐗,𝒟)P({\mathbf{Y}}|{\mathbf{X}},\mathcal{D}) is stable while only the marginal P(𝐗|𝒟)P({\mathbf{X}}|\mathcal{D}) changes across domains [45, 55]. [54] point out that 𝐗{\mathbf{X}} is usually caused by 𝐘{\mathbf{Y}}; thus P(𝐘|𝒟)P({\mathbf{Y}}|\mathcal{D}) changes while P(𝐗|𝐘,𝒟)P({\mathbf{X}}|{\mathbf{Y}},\mathcal{D}) is stable, or P(𝐗|𝐘,𝒟)P({\mathbf{X}}|{\mathbf{Y}},\mathcal{D}) changes but P(𝐘|𝒟)P({\mathbf{Y}}|\mathcal{D}) stays stable, or a combination of both. Thus, MDA [16] allows both P(𝐘|𝐗,𝒟)P({\mathbf{Y}}|{\mathbf{X}},\mathcal{D}) and P(𝐗|𝒟)P({\mathbf{X}}|\mathcal{D}) to change across domains but needs to select samples of each class for the calculation. Moreover, it considers no prior. This paper further relaxes these assumptions by extracting domain-invariant features in 𝐗,𝐘{\mathbf{X}},{\mathbf{Y}}.

Using pre-trained models as an oracle. Previous methods such as MIRO [19] have employed pre-trained models as the oracle to regularize ϕ\phi. SIMPLE [30] employs at most 283 pre-trained models as an ensemble and adaptively composes the most suitable oracle model. RobustNet [10] and VA-DepthNet [32] only use pre-trained models as initialization rather than additional supervision.

3 A general multi-Domain generalization objective

The General Multi-Domain Generalization Objective (GMDG) essentially comprises a weighted combination of four terms, each designated by an alias:

\begin{split}\min\nolimits_{\phi,\psi}&\underbrace{v_{A1}H(P({\phi({\mathbf{X}}),\psi({\mathbf{Y}})}\mid\mathcal{D}))}_{\text{{GAim1}}}\\ &+\underbrace{v_{A2}[H(P({\psi({\mathbf{Y}}),\phi({\mathbf{X}})}))+H(P({{\mathbf{Y}},\psi({\mathbf{Y}})}))]}_{\text{{GAim2}}}\\ &+\underbrace{v_{R1}D_{\mathrm{KL}}(P({\phi({\mathbf{X}}),\psi({\mathbf{Y}})})\|\mathcal{O})}_{\text{{GReg1}}}\\ &+\underbrace{v_{R2}[H(P(\phi({\mathbf{X}})))-H(P(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}})))]}_{\text{{GReg2}}}.\end{split}

(3)

Theoretically, we justify that GAim1 and GReg1 can be realized by minimizing the Generalized Jensen-Shannon Divergence (GJSD) with prior knowledge across observed domains. Meanwhile, we derive an upper bound, termed the alignment Upper Bound with Prior of mDG (PUB). Importantly, we demonstrate that GReg2 copes with the invalid causality brought by relaxing the static P(𝐘|𝒟)P({\mathbf{Y}}|\mathcal{D}) assumption. GReg2 can be simplified to minimizing the Conditional Feature Shift (CFS), i.e., the shift between unconditional and conditional features, which can be calculated via ψ\psi. More theoretical details are provided as follows.

Figure 1: Segmentation and regression results of baselines and +GMDG on samples in unseen domains.

3.1 Theoretical Details

Learning of 𝒟\mathcal{D}-independent conditional features under prior. The alignment upper bound with prior (PUB), a novel variational upper bound tied to domain generalization alignment, is derived from the generalized Jensen-Shannon divergence (GJSD) [31].

Definition 1 (GJSD).

Given JJ distributions, {P(𝐙j)}j=1J\{P({{\mathbf{Z}}_{j}})\}_{j=1}^{J} and a corresponding probability weight vector ww, GJSDw({P(𝐙j)}j=1J)GJSD_{w}(\{P({{\mathbf{Z}}_{j}})\}_{j=1}^{J}) is defined as:

j=1JwjDKL(P(𝐙j)j=1JwjP(𝐙j))H(j=1JwjP(𝐙j))j=1JwjH(P(𝐙j)).\begin{split}&\sum\nolimits_{j=1}^{J}w_{j}D_{\mathrm{KL}}(P({{\mathbf{Z}}_{j}})\|\sum\nolimits_{j=1}^{J}w_{j}P({{\mathbf{Z}}_{j}}))\\ \equiv&H(\sum\nolimits_{j=1}^{J}w_{j}P({{\mathbf{Z}}_{j}}))-\sum\nolimits_{j=1}^{J}w_{j}H(P({{\mathbf{Z}}_{j}})).\end{split} (4)
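The identity in Eq. 4 is easy to check numerically. Below is a small sketch (our illustration, not from the paper) that verifies the equivalence between the weighted-KL form and the entropy form of GJSD for random discrete distributions:

```python
import numpy as np

def entropy(p):
    return -np.sum(p * np.log(p))

def kl(p, q):
    return np.sum(p * np.log(p / q))

rng = np.random.default_rng(0)
J, K = 3, 5
P = rng.dirichlet(np.ones(K), size=J)    # J distributions over K outcomes
w = np.full(J, 1.0 / J)                  # uniform weights, as assumed in the paper

mix = w @ P                              # the mixture sum_j w_j P_j
lhs = sum(w[j] * kl(P[j], mix) for j in range(J))                  # weighted-KL form
rhs = entropy(mix) - sum(w[j] * entropy(P[j]) for j in range(J))   # entropy form
print(lhs, rhs)
```

Both forms agree, and the value is non-negative, which is what makes GJSD usable as a domain-gap measure.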

Our method addresses the standard scenario in which the weights are evenly distributed across domains: w1==wN=1/Nw_{1}=...=w_{N}=1/N. To achieve ϕ(𝐗),ψ(𝐘)𝒟\phi({\mathbf{X}}),\psi({\mathbf{Y}})\perp\!\!\!\perp\mathcal{D}, minimizing the domain gaps among P(ϕ(𝐗n),ψ(𝐘n))P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})}) can be converted to minimizing the GJSD across all domains:

minϕ,ψGJSD({P(ϕ(𝐗n),ψ(𝐘n))}n=1N)minϕ,ψH(P(ϕ(𝐗),ψ(𝐘)𝒟))𝔼[H(P(ϕ(𝐗n),ψ(𝐘n)))].\begin{split}&\min\nolimits_{\phi,\psi}GJSD(\{P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})\}_{n=1}^{N})\\ \equiv&\min\nolimits_{\phi,\psi}H(P({\phi({\mathbf{X}}),\psi({\mathbf{Y}})}\mid\mathcal{D}))\\ &-\mathbb{E}[H(P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})}))].\end{split} (5)

We further involve a prior knowledge distribution 𝒪\mathcal{O} under the consideration of a variational density model class 𝒬\mathcal{Q}. Drawing upon [9], we have a variational upper bound:

\begin{split}&GJSD(\{P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})\}_{n=1}^{N})\\ \leq&H_{c}(\mathbb{E}[P({\phi({\mathbf{X}}),\psi({\mathbf{Y}})}\mid\mathcal{D})],\mathcal{O})-a,\end{split} (6)

where an=1NH(P(ϕ(𝐗n),ψ(𝐘n)))a\triangleq\sum\nolimits_{n=1}^{N}H(P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})) is constant w.r.t. ϕ,ψ\phi,\psi and hence ignored during optimization. The novel PUB, derived from Eq. 6, is:

\begin{split}&\min\nolimits_{\phi,\psi}PUB(\{P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})\}_{n=1}^{N})\\ \triangleq&\min\nolimits_{\phi,\psi}H(P({\phi({\mathbf{X}}),\psi({\mathbf{Y}})}\mid\mathcal{D}))\\ &+D_{\mathrm{KL}}(P({\phi({\mathbf{X}}),\psi({\mathbf{Y}})})\|\mathcal{O})-a.\end{split} (7)

Minimizing PUB is the proposed objective for GAim1 and GReg1. This implies that methods like MIRO, solely minimizing GReg1, might result in substantial suboptimality, leaving the domain gap unresolved. We discuss two situations of 𝒪\mathcal{O} in Section 4.

Suppressing invalid causality. Relaxing the static P(𝐘|𝒟)P({\mathbf{Y}}|\mathcal{D}) assumption may lead to unexpected causality while learning invariance. GAim1 can be reformulated as:

GAim1=H(P(ϕ(𝐗)𝒟))+H(P(ψ(𝐘)ϕ(𝐗),𝒟))=H(P(ψ(𝐘)𝒟))+H(P(ϕ(𝐗)ψ(𝐘),𝒟)),\begin{split}\text{{GAim1}}=&H(P(\phi({\mathbf{X}})\!\mid\!\mathcal{D}))\!\!+\!\!H(P(\psi({\mathbf{Y}})\!\mid\!\phi({\mathbf{X}}),\mathcal{D}))\\ =&H(P(\psi({\mathbf{Y}})\!\mid\!\mathcal{D}))\!\!+\!\!H(P(\phi({\mathbf{X}})\!\mid\!\psi({\mathbf{Y}}),\mathcal{D})),\end{split} (8)

where minimizing GAim1 under the relaxed static P(𝐘|𝒟)P({\mathbf{Y}}|\mathcal{D}) assumption may lead to ϕ(𝐗)ψ(𝐘)\phi({\mathbf{X}})\!\to\!\psi({\mathbf{Y}}) through the term H(P(ψ(𝐘)|ϕ(𝐗),𝒟))H(P(\psi({\mathbf{Y}})|\phi({\mathbf{X}}),\mathcal{D})) and ψ(𝐘)ϕ(𝐗)\psi({\mathbf{Y}})\!\to\!\phi({\mathbf{X}}) through the term H(P(ϕ(𝐗)|ψ(𝐘),𝒟))H(P(\phi({\mathbf{X}})|\psi({\mathbf{Y}}),\mathcal{D})).

Figure 2: (a) Diagram of causality in the proposed method. (b) Depth predictions on the unseen domain sample between model trained without and with GReg2.

Figure 2 graphically demonstrates the causal diagram under this scenario. Given the prediction relationship ϕ(𝐗)𝐘\phi({\mathbf{X}})\to{\mathbf{Y}} and the causal path ψ(𝐘)𝐘\psi({\mathbf{Y}})\to{\mathbf{Y}}, the path ϕ(𝐗)ψ(𝐘)\phi({\mathbf{X}})\to\psi({\mathbf{Y}}) should be preserved for prediction. However, the causal path ψ(𝐘)ϕ(𝐗)\psi({\mathbf{Y}})\to\phi({\mathbf{X}}) may compromise the prediction from ϕ(𝐗)𝐘\phi({\mathbf{X}})\to{\mathbf{Y}} when ψ(𝐘)\psi({\mathbf{Y}}) is unknown during inference, leading to generalization degradation. This unveils that the invalid causality ψ(𝐘)ϕ(𝐗)\psi({\mathbf{Y}})\to\phi({\mathbf{X}}) that may arise while learning invariance needs to be suppressed as maxϕ,ψH(P(ϕ(𝐗)|ψ(𝐘),𝒟))\max_{\phi,\psi}H(P(\phi({\mathbf{X}})|\psi({\mathbf{Y}}),\mathcal{D})) while minϕ,ψH(P(ψ(𝐘)|ϕ(𝐗),𝒟))\min_{\phi,\psi}H(P(\psi({\mathbf{Y}})|\phi({\mathbf{X}}),\mathcal{D})), which can be simplified as:

\min_{\phi,\psi}H(P(\phi({\mathbf{X}})))-H(P(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}}))), (9)

which is GReg2. See more mathematical details in Supplementary 7. Our experiments also unveil the phenomenon of invalid causality within invariant feature learning, where suppressing it could improve generalizability. The investigation of previous objectives also discloses that, in addressing the varying P(𝐘|𝒟)P({\mathbf{Y}}|\mathcal{D}), constructs akin to GReg2 are often implicitly included (see Table 1), though their efficacy was not explicitly stated. Moreover, their efficacy may be compromised due to the lack of ψ\psi and other objective terms.

Then, we assume that ϕ(X),ψ(Y)\phi(X),\psi(Y) in the RKHS follow multivariate Gaussian-like distributions, denoted as 𝒩(ϕ(𝐗);μ𝐗,Σ𝐗𝐗),𝒩(ψ(𝐘);μ𝐘,Σ𝐘𝐘)\mathcal{N}(\phi({\mathbf{X}});\mu_{\mathbf{X}},\Sigma_{{\mathbf{X}}{\mathbf{X}}}),\mathcal{N}(\psi({\mathbf{Y}});\mu_{\mathbf{Y}},\Sigma_{{\mathbf{Y}}{\mathbf{Y}}}). P(ϕ(𝐗)|ψ(𝐘))P(\phi({\mathbf{X}})|\psi({\mathbf{Y}})) follows 𝒩(ϕ(𝐗)|ψ(𝐘);μ𝐗|𝐘,Σ𝐗𝐗|𝐘)\mathcal{N}(\phi({\mathbf{X}})|\psi({\mathbf{Y}});\mu_{{\mathbf{X}}|{\mathbf{Y}}},\Sigma_{{\mathbf{X}}{\mathbf{X}}|{\mathbf{Y}}}). GReg2 can be simplified as:

H(𝒩(ϕ(𝐗);μ𝐗,Σ𝐗𝐗))H(𝒩(ϕ(𝐗)ψ(𝐘);μ𝐗|𝐘,Σ𝐗𝐗|𝐘))=12ln(|Σ𝐗𝐗||Σ𝐗𝐗|𝐘|)0,\begin{split}&H(\mathcal{N}(\phi({\mathbf{X}});\mu_{\mathbf{X}},\Sigma_{{\mathbf{X}}{\mathbf{X}}}))\\ &-H(\mathcal{N}(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}});\mu_{{\mathbf{X}}|{\mathbf{Y}}},\Sigma_{{\mathbf{X}}{\mathbf{X}}|{\mathbf{Y}}}))\\ =&\frac{1}{2}\ln(\frac{|\Sigma_{{\mathbf{X}}{\mathbf{X}}}|}{|\Sigma_{{\mathbf{X}}{\mathbf{X}}|{\mathbf{Y}}}|})\geq 0,\end{split} (10)

where the inequality holds because conditioning reduces entropy. This implies H(\mathcal{N}(\phi({\mathbf{X}});\mu_{\mathbf{X}},\Sigma_{{\mathbf{X}}{\mathbf{X}}}))\geq H(\mathcal{N}(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}});\mu_{{\mathbf{X}}|{\mathbf{Y}}},\Sigma_{{\mathbf{X}}{\mathbf{X}}|{\mathbf{Y}}})), deduced from |\Sigma_{{\mathbf{X}}{\mathbf{X}}}|\geq|\Sigma_{{\mathbf{X}}{\mathbf{X}}|{\mathbf{Y}}}|\geq 0, considering they are positive semi-definite. Distinct from [52, 18], which decompose causal effects through extra networks, our method is based on transfer entropy (TE), ensuring TE(\phi({\mathbf{X}})\to\psi({\mathbf{Y}}))\geq TE(\psi({\mathbf{Y}})\to\phi({\mathbf{X}})), i.e., H(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}}))\geq H(\psi({\mathbf{Y}})\mid\phi({\mathbf{X}})). Thus, the minimum of Eq. 10 is attained iff |\Sigma_{{\mathbf{X}}{\mathbf{X}}}|=|\Sigma_{{\mathbf{X}}{\mathbf{X}}|{\mathbf{Y}}}|, reformulating the task as \min_{\phi,\psi}|\Sigma_{{\mathbf{X}}{\mathbf{X}}}|-|\Sigma_{{\mathbf{X}}{\mathbf{X}}|{\mathbf{Y}}}|, where \Sigma_{{\mathbf{X}}{\mathbf{X}}\mid{\mathbf{Y}}}=\Sigma_{{\mathbf{X}}{\mathbf{X}}}-\Sigma_{{\mathbf{X}}{\mathbf{Y}}}\Sigma_{{\mathbf{Y}}{\mathbf{Y}}}^{-1}\Sigma_{{\mathbf{Y}}{\mathbf{X}}}, per [21]. Therefore, GReg2 is simplified as minimizing the Conditional Feature Shift (CFS):

minϕ,ψ|Σ𝐗𝐘Σ𝐘𝐘1Σ𝐘𝐗|.\displaystyle\min_{\phi,\psi}|\Sigma_{{\mathbf{X}}{\mathbf{Y}}}\Sigma_{{\mathbf{Y}}{\mathbf{Y}}}^{-1}\Sigma_{{\mathbf{Y}}{\mathbf{X}}}|. (11)
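As a numerical sanity check (our illustration; the dimensions and synthetic data are arbitrary assumptions), the chain from Eq. 10 to Eq. 11 can be reproduced with sample covariances: the Schur complement gives the conditional covariance, and the resulting Gaussian entropy gap is non-negative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dx, dy = 5000, 3, 3
x = rng.standard_normal((n, dx))
# y depends linearly on x plus noise, so the two feature sets are correlated
y = x @ rng.standard_normal((dx, dy)) * 0.5 + rng.standard_normal((n, dy))

s = np.cov(np.hstack([x, y]), rowvar=False)     # joint sample covariance
s_xx, s_xy = s[:dx, :dx], s[:dx, dx:]
s_yx, s_yy = s[dx:, :dx], s[dx:, dx:]

cfs = s_xy @ np.linalg.inv(s_yy) @ s_yx          # the CFS term of Eq. 11
s_xx_given_y = s_xx - cfs                        # Schur complement = conditional covariance [21]
# entropy gap of Eq. 10: 0.5 * ln(|Sigma_XX| / |Sigma_XX|Y|) >= 0
gap = 0.5 * np.log(np.linalg.det(s_xx) / np.linalg.det(s_xx_given_y))
print(gap)
```

The gap shrinks toward zero exactly when the CFS matrix vanishes, which is why Eq. 11 can replace the entropy difference as the training target.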

3.2 Empirical loss derivations

This section presents the empirical losses used to implement Eq. 3. More detailed derivations can be found in Supplementary 7. We introduce the mapping ψ\psi to relax the static target distribution assumption. The implementation of ψ\psi varies across tasks, utilizing MLPs for classification and regression, and ResNet-50 for segmentation. To promote a consistent latent space, the mapped ψ(𝐘)\psi({\mathbf{Y}}) retains the same dimension as ϕ(𝐗)\phi({\mathbf{X}}). ψ(𝐘)\psi({\mathbf{Y}}) and ϕ(𝐗)\phi({\mathbf{X}}) are separately fed into 𝒞\mathcal{C} to make predictions and obtain A2\mathcal{L}_{A2} for posterior maximization:

A2(𝒞,ϕ,ψ)=Hc(ϕ(𝐗),𝐘)+Hc(ψ(𝐘),𝐘).\displaystyle\mathcal{L}_{A2}(\mathcal{C},\phi,\psi)=H_{c}(\phi({\mathbf{X}}),{\mathbf{Y}})+H_{c}(\psi({\mathbf{Y}}),{\mathbf{Y}}). (12)
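For classification, Eq. 12 amounts to two standard cross-entropy terms sharing the predictor 𝒞\mathcal{C}: one on logits produced from ϕ(𝐗)\phi({\mathbf{X}}) and one on logits produced from ψ(𝐘)\psi({\mathbf{Y}}). A minimal numpy sketch (our simplification; the real implementation feeds both feature sets through 𝒞\mathcal{C} to obtain the logits):

```python
import numpy as np

def cross_entropy(logits, labels):
    # numerically stable softmax cross-entropy, averaged over the batch
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def l_a2(logits_from_phi_x, logits_from_psi_y, labels):
    """Sketch of L_A2 (Eq. 12): the shared predictor scores both mapped
    observations and mapped targets against the ground-truth labels."""
    return cross_entropy(logits_from_phi_x, labels) + cross_entropy(logits_from_psi_y, labels)

# confident, correct logits give a near-zero loss
logits = np.array([[10.0, 0.0], [0.0, 10.0]])
labels = np.array([0, 1])
print(l_a2(logits, logits, labels))
```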

To mitigate domain shifts and learn domain invariance, we minimize cross-domain conditional feature distribution discrepancies. Specifically, the mean and variance of the joint distribution of (ϕ(𝐗),ψ(𝐘))(\phi({\mathbf{X}}),\psi({\mathbf{Y}})) in each domain are estimated using VAE encoders. Considering the nn pairs of means and variances from the nn domains, we derive a joint Gaussian distribution expression P(ϕ(𝐗n),ψ(𝐘n))𝒩(𝒙n,𝒚n;μn,Σn)P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n}))\triangleq\mathcal{N}({\bm{x}}_{n},{\bm{y}}_{n};\mu_{n},\Sigma_{n}). Accordingly, we establish 𝔼[P(ϕ(𝐗n),ψ(𝐘n))]𝒩(𝒙¯,𝒚¯;μ¯,Σ¯)\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})]\triangleq\mathcal{N}(\bar{{\bm{x}}},\bar{{\bm{y}}};\bar{\mu},\bar{\Sigma}) where μ¯=𝔼[μn],Σ¯=𝔼[Σn]\bar{\mu}=\mathbb{E}[\mu_{n}],\bar{\Sigma}=\mathbb{E}[\Sigma_{n}]. Based on the PUB in Eq. 7, we introduce A1\mathcal{L}_{A1} to minimize the conditional feature gap across domains:

A1(ϕ)=i=1n(log|Σi|+μ¯μiΣi12).\displaystyle\mathcal{L}_{A1}(\phi)=\sum\nolimits_{i=1}^{n}(\log|\Sigma_{i}|+||\bar{\mu}-\mu_{i}||^{2}_{\Sigma_{i}^{-1}}). (13)
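A minimal sketch of Eq. 13 (our illustration), assuming the VAE encoders output diagonal covariances so that log|Σ_i| and the Mahalanobis term both reduce to sums over dimensions:

```python
import numpy as np

def l_a1(mus, variances):
    """Sketch of the alignment loss L_A1 (Eq. 13): per-domain Gaussian
    statistics (mu_i, diagonal Sigma_i) are pulled toward the mean mu_bar."""
    mu_bar = mus.mean(axis=0)                       # mean of the per-domain means
    loss = 0.0
    for mu_i, var_i in zip(mus, variances):
        # log|Sigma_i| + Mahalanobis distance between mu_bar and mu_i
        loss += np.log(var_i).sum() + ((mu_bar - mu_i) ** 2 / var_i).sum()
    return loss

# three observed domains with a 4-dimensional latent space
rng = np.random.default_rng(0)
print(l_a1(rng.standard_normal((3, 4)), rng.uniform(0.5, 1.5, size=(3, 4))))
```

Perfectly aligned domains with unit variance incur zero loss, so minimizing it pushes the per-domain Gaussians toward a shared one.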

To integrate prior information, similar to MIRO, we utilize VAE encoders to capture the means and variances of 𝐗{\mathbf{X}}: P(ϕ(𝐗))𝒩(𝒙;μx,Σx)P(\phi({\mathbf{X}}))\triangleq\mathcal{N}({\bm{x}};\mu_{x},\Sigma_{x}) and the output features x𝒪x_{\mathcal{O}} from 𝒪\mathcal{O}. Given that 𝒪\mathcal{O} preserves the correlation between 𝐗{\mathbf{X}} (ϕ(𝐗)\phi({\mathbf{X}})) and 𝐘{\mathbf{Y}} (ψ(𝐘)\psi({\mathbf{Y}})) and is frozen during training, 𝐘,ψ(𝐘){\mathbf{Y}},\psi({\mathbf{Y}}) are omitted in the empirical loss. We propose R1\mathcal{L}_{R1} to minimize the divergence between features and 𝒪\mathcal{O}:

R1(ϕ)=log|Σx|+x𝒪μxΣx12.\displaystyle\mathcal{L}_{R1}(\phi)=\log|\Sigma_{x}|+||x_{\mathcal{O}}-\mu_{x}||^{2}_{\Sigma_{x}^{-1}}. (14)

For suppressing the invalid causality, derived from Eq. 11, the loss is designed to minimize the CFS:

R2(ϕ)=Σ𝐗𝐘Σ𝐘𝐘1Σ𝐘𝐗2,\displaystyle\mathcal{L}_{R2}(\phi)=||\Sigma_{{\mathbf{X}}{\mathbf{Y}}}\Sigma_{{\mathbf{Y}}{\mathbf{Y}}}^{-1}\Sigma_{{\mathbf{Y}}{\mathbf{X}}}||_{2}, (15)

where Σ𝐗𝐘=𝔼[(ϕ(𝐗)𝔼[ϕ(𝐗)])(ψ(𝐘)𝔼[ψ(𝐘)])]\Sigma_{{\mathbf{X}}{\mathbf{Y}}}=\mathbb{E}[(\phi({\mathbf{X}})-\mathbb{E}[\phi({\mathbf{X}})])^{\top}(\psi({\mathbf{Y}})-\mathbb{E}[\psi({\mathbf{Y}})])], and a similar calculation is done for Σ𝐘𝐘\Sigma_{{\mathbf{Y}}{\mathbf{Y}}} and Σ𝐘𝐗\Sigma_{{\mathbf{Y}}{\mathbf{X}}}. The final loss is a weighted combination of the above losses (see detailed hyper-parameter settings in Supplementary 10):

(𝒞,ϕ,ψ)=vA1A1+vA2A2+vR1R1+vR2R2.\displaystyle\mathcal{L}(\mathcal{C},\phi,\psi)\!\!=\!\!v_{A1}\mathcal{L}_{A1}\!\!+\!\!v_{A2}\mathcal{L}_{A2}\!\!+\!\!v_{R1}\mathcal{L}_{R1}\!\!+\!\!v_{R2}\mathcal{L}_{R2}. (16)
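A minimal numpy sketch of the CFS loss in Eq. 15 and the weighted combination in Eq. 16 (our illustration; the small ridge term on Σ_YY is our assumption for numerical stability, not part of the formulation above):

```python
import numpy as np

def l_r2(feat_x, feat_y, eps=1e-6):
    """Sketch of L_R2 (Eq. 15): norm of the conditional-feature-shift matrix
    Sigma_XY Sigma_YY^{-1} Sigma_YX, estimated from batch feature matrices."""
    fx = feat_x - feat_x.mean(axis=0)
    fy = feat_y - feat_y.mean(axis=0)
    n = len(fx)
    s_xy = fx.T @ fy / (n - 1)
    s_yy = fy.T @ fy / (n - 1) + eps * np.eye(fy.shape[1])  # ridge (assumption)
    cfs = s_xy @ np.linalg.inv(s_yy) @ s_xy.T               # Sigma_YX = Sigma_XY^T
    return np.linalg.norm(cfs, 2)                           # spectral norm

def gmdg_loss(l_a1, l_a2, l_r1, l_r2, v=(1.0, 1.0, 1.0, 1.0)):
    """Weighted combination of the four empirical losses (Eq. 16)."""
    return v[0] * l_a1 + v[1] * l_a2 + v[2] * l_r1 + v[3] * l_r2

rng = np.random.default_rng(0)
x = rng.standard_normal((512, 8))
independent = rng.standard_normal((512, 8))
# strongly dependent feature pairs incur a much larger CFS penalty
print(l_r2(x, x), l_r2(x, independent))
```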

4 Connection to previous methods

We validate our objective function's efficacy theoretically and demonstrate its connections with previous objectives, indicating that previous mDG efforts have partly optimized the proposed objective. Refer to Table 1 and Supplementary 8 for a detailed understanding of previous objectives.

Using ψ\psi v.s. not using ψ\psi. Previous works rarely employed ψ\psi to map 𝐘{\mathbf{Y}}, whereas we show its benefits for mDG tasks. Employing Jensen’s inequality, we obtain H(𝔼[P(ϕ(𝐗n),ψ(𝐘n))])H(𝔼[P(ϕ(𝐗n),𝐘n)])H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})])\geq H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),{\mathbf{Y}}_{n}})]). When other objectives remain the same, we compare the model with parameters θψ\theta^{\psi} optimized via the ψ\psi mapping, against another model without ψ\psi using parameters θnψ\theta^{n\psi}:

supR(θnψ)supR(θψ).\displaystyle\sup R({\theta}^{n\psi})\geq\sup R({\theta}^{\psi}). (17)

The equivalence is valid only if ψ\psi serves as a bijection, a condition prevalent in practical scenarios like classification. Thus, this mapping does not hinder model performance in classification tasks. It also implies that using ψ(𝐘)\psi({\mathbf{Y}}) can lower generalization risks after optimization, especially when 𝐘{\mathbf{Y}} contains features dependent on 𝒟\mathcal{D}. This could potentially yield superior generalization in segmentation and regression tasks. Detailed proof can be seen in Supplementary 7.

Remark 1 (Importance of 𝐘{\mathbf{Y}} mapping ψ\psi).

Besides relaxing the static distribution assumption of 𝐘{\mathbf{Y}}, ψ\psi conveys two other notable benefits: 1). 𝐗{\mathbf{X}} and 𝐘{\mathbf{Y}} may originate from different sample spaces with distinct shapes. By applying mappings, ψ(𝐘)\psi({\mathbf{Y}}) can be adapted to the same shape as ϕ(𝐗)\phi({\mathbf{X}}). In practice, concatenating ϕ(𝐗)\phi({\mathbf{X}}) and ψ(𝐘)\psi({\mathbf{Y}}) is often used as input for VAE encoders to capture P(ψ(𝐘),ϕ(𝐗))P({\psi({\mathbf{Y}})},\phi({\mathbf{X}})). 2). The derivation of Eq. 11 requires the computation of covariance, which mandates that two variables occupy the same sample space, a condition fulfilled by applying ψ(𝐘)\psi({\mathbf{Y}}).

Incorporating conditions leads to lower generalization risk when learning invariant representations. A few past works [13, 48] minimize domain gaps between features without condition consideration. Their objective for Aim1 is:

\begin{split}H(P({\phi({\mathbf{X}})\mid\mathcal{D}}))\leq&H(P({\phi({\mathbf{X}})\mid\mathcal{D}}))+\\ &H(P(\psi({\mathbf{Y}})\mid\phi({\mathbf{X}}),\mathcal{D}))=\text{{GAim1}}.\end{split}

(18)

While the other objectives are identical, we consider a model with parameters θnc{\theta}^{nc}, trained with minϕH(P(ϕ(𝐗n)))\min_{\phi}H(P({\phi({\mathbf{X}}_{n})})), against another model with θc{\theta}^{c} parameters, trained with GAim1. In this scenario, their empirical risks satisfy:

supR(θnc)supR(θc).\displaystyle\sup R({\theta}^{nc})\geq\sup R({\theta}^{c}). (19)

See the mathematical details in Supplementary 7. This reveals that without condition consideration, the minimization of generalization risk is merely partial due to the overlooked risk correlated to 𝐘{\mathbf{Y}}. Additional evidence supporting the importance of condition consideration is provided by CDANN [29] and CIDG [28]. Our experiments, conducted through a uniform implementation, also lend support to it.

Effect of oracle model 𝒪\mathcal{O}. As stated by MIRO [19] and SIMPLE [30], a generalized 𝒪\mathcal{O} comprising both seen and unseen domains yields significant improvements. During the derivation of Eq. 7, we find that disregarding the GAim1 term, as in MIRO [19] and SIMPLE [30], may result in outcomes inferior to those of our proposed objective.

Remark 2 (Synergy of learning invariance, integrating prior knowledge, and suppressing invalid causality).

For readability, we have divided the overall mDG objective into four aspects despite all terms being interconnected. Specifically, as shown by the PUB in Eq. 7, GReg1 collaborating with GAim1 brings more performance gains than when it is applied alone. Moreover, Eq. 8 shows that the side effect of invalid causality in GAim1 is alleviated by combining it with GReg2, underscoring the significance of combining learning invariance, integrating prior knowledge, and suppressing invalid causality. It also suggests that all terms are synergistic and contribute together to improved results.

Validating these assertions experimentally, the ablation study in Section 5.5 finds that simply constraining cross-domain feature covariance (iReg2) cannot ensure improved results when prior knowledge is available.

5 Experiments

Four groups of experiments are conducted to validate the proposed GMDG. A toy example validates the relaxation of the static $P(\mathbf{Y}\mid\mathcal{D})$ assumption brought by applying $\psi$ to $\mathbf{Y}$. Furthermore, we conduct experiments on regression, segmentation, and classification tasks using complex benchmark datasets. For brevity, please refer to Table 4 for the formulations of the terms and their aliases.

5.1 Toy experiments on synthetic datasets

We perform a regression task on synthetic data to illustrate the impact of using $\psi$, showcasing its potential for superior results when $\psi$ is not bijective.

              Affine transformations                                        Squared and cubed transformations
              ERM      $+\mathcal{L}_{A1}(\phi)$  $+\mathcal{L}_{A1}(\phi,\psi)$   ERM      $+\mathcal{L}_{A1}(\phi)$  $+\mathcal{L}_{A1}(\phi,\psi)$
No DCDS       0.3485   0.3537   0.3369                                      1.5150   0.4652   0.3370
With DCDS     0.4144   0.2290   0.1777                                      0.8720   1.5868   0.8241
Table 3: Toy experimental results: MSE losses on the testing set. $+\mathcal{L}_{A1}(\phi)$ denotes that $\mathcal{L}_{A1}$ is used without $\psi$, while $+\mathcal{L}_{A1}(\phi,\psi)$ denotes that $\psi$ is used. The best results are highlighted in bold. DCDS denotes domain-conditioned distribution shift.
GAim1   $H(P(\phi(\mathbf{X}),\mathbf{Y}\mid\mathcal{D}))$
GAim2   $H(P(\psi(\mathbf{Y})\mid\phi(\mathbf{X})))+H(P(\mathbf{Y}\mid\psi(\mathbf{Y})))$
GReg1   $D_{\mathrm{KL}}(P(\phi(\mathbf{X}),\mathbf{Y}\mid\mathcal{D})\,\|\,\mathcal{O})$
GReg2   $-H(P(\phi(\mathbf{X})\mid\psi(\mathbf{Y})))+H(P(\phi(\mathbf{X})))$
iAim1   $H(P(\phi(\mathbf{X})\mid\mathcal{D}))$
iReg2   $-H(P(\phi(\mathbf{X}),\mathcal{D}))+H(P(\phi(\mathbf{X})))$
Table 4: Notations for terms.
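As an illustrative sketch of how such entropy terms can be estimated, the snippet below treats features as Gaussians, for which differential entropy has a closed form, and shows that GReg2, $-H(P(\phi(\mathbf{X})\mid\psi(\mathbf{Y})))+H(P(\phi(\mathbf{X})))$, equals the mutual information $I(\phi(\mathbf{X});\psi(\mathbf{Y}))$ by the entropy chain rule. This is our simplified Gaussian sketch with made-up data and dimensions, not the paper's actual loss implementation.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a Gaussian: 0.5 * log((2*pi*e)^d * det(cov))."""
    cov = np.atleast_2d(cov)
    d = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * np.log(2.0 * np.pi * np.e) + logdet)

rng = np.random.default_rng(0)
z = rng.normal(size=(5000, 1))                      # shared latent factor
phi_x = np.hstack([z + 0.1 * rng.normal(size=(5000, 1)),
                   rng.normal(size=(5000, 1))])     # 2-d features phi(X)
psi_y = z + 0.1 * rng.normal(size=(5000, 1))        # 1-d mapped labels psi(Y)

h_x = gaussian_entropy(np.cov(phi_x.T))
h_y = gaussian_entropy(np.cov(psi_y.T))
h_joint = gaussian_entropy(np.cov(np.hstack([phi_x, psi_y]).T))
# H(phi(X) | psi(Y)) = H(phi(X), psi(Y)) - H(psi(Y)), hence
# GReg2 = -H(phi(X) | psi(Y)) + H(phi(X)) = I(phi(X); psi(Y)).
greg2 = -(h_joint - h_y) + h_x
print(greg2 > 0.0)  # True: the first feature coordinate tracks psi(Y)
```

Under this Gaussian view, driving GReg2 down reduces the statistical dependence of $\phi(\mathbf{X})$ on $\psi(\mathbf{Y})$, which is one way to read the "suppressing invalid causality" role of this term.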

Synthetic data. Supplementary Figure 3 illustrates the construction of the synthetic data, built on $\mathbf{X}$-$\mathbf{Y}$ paired latent features with a linear relationship, ensuring that an invariant exists. To explore this issue, we create four distinct data groups by crossing two factors: with or without distribution shift, and affine versus squared-and-cubed domain-conditioned transformations. More details can be found in Supplementary 10.
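A minimal sketch of such a construction (with hypothetical transform parameters, not the exact ones in Supplementary 10) shows how $Y$'s distribution can shift per domain even though the latent $X$-$Y$ relation stays linear:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_domain(n, a_x, b_x, a_y, b_y):
    """One synthetic domain: a shared latent z, a linear X-Y latent relation,
    then per-domain affine transforms (illustrative parameters)."""
    z = rng.normal(size=n)
    x = a_x * z + b_x                  # domain-conditioned transform of X
    y = a_y * (2.0 * z + 1.0) + b_y    # Y's latent is linear in z, shifted per domain
    return x, y

params = [(1.0, 0.0, 1.0, 0.0), (2.0, 1.0, 0.5, -1.0), (0.5, -2.0, 3.0, 2.0)]
domains = [make_domain(1000, *p) for p in params]
for x, y in domains:
    # Per-domain Y statistics differ, so a static P(Y|D) assumption breaks;
    # a learnable psi would map each domain's Y back toward the shared latent.
    print(round(float(y.mean()), 2), round(float(y.std()), 2))
```

Because each domain applies its own transform to $Y$, only a learnable mapping $\psi$ can recover a domain-invariant representation of the targets.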

Experimental setup. We use two of the three constructed domains for training and validation and the last one for testing. Validation and test losses are calculated by MSE. For fairness, all experiments adopt the same network, selected by the best validation results. Learning aims to find invariant hidden features of $X,Y$ while preserving the predictive ability from unseen $X$ to $Y$.

Results. Toy experiment results are reported in Table 3 and visualized in Figure 4. Across all settings, employing $\psi$ with $\mathcal{L}_{A1}$ yields superior results, outperforming ERM and ERM+$\mathcal{L}_{A1}(\phi)$ without $\psi$, validating the enhanced generalization brought by utilizing $\psi$ whenever $Y$ varies per domain and supporting Eq. 17. Supplementary Figure 4 shows that $\psi$ does learn invariant representations for $Y$, relaxing the previous $Y$-invariance assumption. Specifically, learning the invariance of $Y$ with $\psi$ results in superior invariant representations, as the learned latent representations of $X,Y$ are primarily linear, aligning with the linear relationship between $X$ and $Y$ used during data construction. The bottom-left figures reveal that, although ERM has learned the most invariant $\phi(X)$, it suffers the worst test loss, indicating that a well-learned invariant $\phi(X)$ is not sufficient when $Y$ also has domain-dependent traits. The results also suggest that, when $Y$ varies across domains, using $\mathcal{L}_{A1}$ without $\psi$ may not yield superior results.

Method                              SILog↓   Abs Rel↓  RMS↓   Sq Rel↓  RMS log↓  δ1↑    δ2↑

Test domain: School (S)
Backbone (Swin-L)                   11.1473  10.98     56.11  8.86     14.32     87.47  98.05
VA-DepthNet (GAim2)                 10.9357  11.15     56.36  9.02     14.41     87.73  98.02
GReg1+GAim2                         10.6548  10.49     52.63  8.04     13.72     89.75  98.12
GAim1+GReg1+GAim2                   10.1924  10.39     50.52  7.68     13.39     89.86  98.17
GAim1+GReg1+GAim2+GReg2 (GMDG)      10.1402  10.27     50.59  7.72     13.22     90.53  97.98

Test domain: Commercial (Co)
Backbone (Swin-L)                   14.2078  16.20     81.22  16.30    20.59     72.44  96.35
VA-DepthNet (GAim2)                 14.7080  16.76     83.17  17.07    21.44     71.46  95.11
GReg1+GAim2                         14.1600  16.41     80.78  16.56    21.02     72.06  95.38
GAim1+GReg1+GAim2                   13.9978  15.90     77.97  15.70    20.25     73.37  95.86
GAim1+GReg1+GAim2+GReg2 (GMDG)      14.2803  15.57     77.45  15.27    19.94     74.40  95.47

Test domain: Office (O)
Backbone (Swin-L)                   11.6132  12.87     44.51  8.01     15.58     84.57  97.87
VA-DepthNet (GAim2)                 11.5080  12.50     43.98  7.67     15.37     84.78  97.87
GReg1+GAim2                         10.4061  11.71     39.02  6.87     13.83     88.27  98.17
GAim1+GReg1+GAim2                   10.4907  11.53     38.43  6.69     13.68     88.46  98.17
GAim1+GReg1+GAim2+GReg2 (GMDG)      10.4438  11.33     38.95  6.66     13.67     88.86  98.16

Test domain: Home (H)
Backbone (Swin-L)                   14.7350  18.36     52.31  13.05    20.06     74.74  93.94
VA-DepthNet (GAim2)                 15.0300  17.99     56.54  13.20    20.64     72.40  94.38
GReg1+GAim2                         14.7377  17.02     55.39  12.06    19.86     74.05  95.17
GAim1+GReg1+GAim2                   14.5018  17.14     52.10  12.01    19.37     76.13  94.95
GAim1+GReg1+GAim2+GReg2 (GMDG)      14.1414  15.90     52.22  10.72    18.95     76.27  96.10

Average (Avg.)
Backbone (Swin-L)                   12.9258  14.60     58.54  11.56    17.64     79.81  96.55
VA-DepthNet (GAim2)                 13.0454  14.60     60.01  11.74    17.97     79.09  96.35
GReg1+GAim2                         12.4897  13.91     56.96  10.88    17.11     81.03  96.71
GAim1+GReg1+GAim2                   12.2957  13.74     54.76  10.52    16.67     81.96  96.79
GAim1+GReg1+GAim2+GReg2 (GMDG)      12.2514  13.27     54.80  10.09    16.45     82.52  96.93
Table 5: Regression results: Comparison between the proposed and previous methods. Terms added to the baseline are highlighted in blue. The best results for each group are highlighted in bold. TD: Test Domain.

5.2 Regression on benchmark datasets: Monocular depth estimation

We conduct monocular depth estimation as a real-world regression task to further verify GMDG.

Experimental setup. We employ VA-DepthNet [32] with a Swin-L [33] backbone as the baseline and follow its hyperparameter settings. Experiments are conducted on NYU Depth V2 [46]. To construct multiple domains, we split the dataset into four categories: 'School' (S), 'Office' (O), 'Home' (H), and 'Commercial' (Co). We conduct standard leave-one-out cross-validation for evaluation, using the best checkpoint on the seen domains. Note that all models are trained on the newly constructed dataset. We report popular metrics: the square root of the Scale Invariant Logarithmic error (SILog), Relative Absolute error (Abs Rel), Root Mean Squared error (RMS), Relative Squared error (Sq Rel), and threshold accuracy ($\delta_1,\delta_2$). See more experimental details in Supplementary 10.
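These metrics follow standard definitions in the depth-estimation literature; a compact reference implementation is sketched below (the SILog variance weight of 0.85 is common practice and an assumption on our part, not taken from the paper):

```python
import numpy as np

def depth_metrics(pred, gt, silog_lambda=0.85):
    """Standard monocular-depth metrics for positive depth arrays."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    err_log = np.log(pred) - np.log(gt)
    # SILog: sqrt of the scale-invariant log error, commonly scaled by 100.
    silog = np.sqrt(np.mean(err_log ** 2)
                    - silog_lambda * np.mean(err_log) ** 2) * 100
    abs_rel = np.mean(np.abs(pred - gt) / gt)           # Abs Rel
    sq_rel = np.mean((pred - gt) ** 2 / gt)             # Sq Rel
    rms = np.sqrt(np.mean((pred - gt) ** 2))            # RMS
    rms_log = np.sqrt(np.mean(err_log ** 2))            # RMS log
    ratio = np.maximum(pred / gt, gt / pred)
    d1, d2 = np.mean(ratio < 1.25), np.mean(ratio < 1.25 ** 2)  # threshold acc.
    return {"SILog": silog, "AbsRel": abs_rel, "SqRel": sq_rel,
            "RMS": rms, "RMSlog": rms_log, "d1": d1, "d2": d2}

m = depth_metrics([1.0, 2.0, 3.0], [1.1, 2.1, 2.8])
print(m["d1"])  # 1.0: all prediction/ground-truth ratios are within 1.25
```

Lower is better for the error metrics and higher is better for $\delta_1,\delta_2$, matching the arrows in Table 5.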

Results. The monocular depth estimation results are exhibited in Table 5. Using the terms proposed in GMDG leads to better generalization on unseen domains, and using the full GMDG leads to the best results on most metrics. The improvements suggest the feasibility of our GMDG in real-world regression tasks. Specifically, using GReg2 with the other terms significantly improves the results when Home is the unseen domain, which is the most difficult domain for VA-DepthNet to generalize to, while barely compromising performance when other domains are unseen. This reveals that suppressing invalid causality can improve the generalization of the model (refer to Figure 2 (b) for visual results). Note that, except for SILog, all metric results are scaled by $100$ for readability; due to the lack of an objective targeting the mDG problem, VA-DepthNet performs worse than its backbone baseline. See more visualizations in Supplementary 11.

5.3 Segmentation on benchmark datasets

Experimental setup. We follow the experimental setup of RobustNet [10] for mDG segmentation experiments, using DeepLabV3+ [8] as the semantic segmentation architecture with a ResNet-50 backbone and an SGD optimizer. As shown in Table 1, RobustNet's objective is equivalent to using GAim2 and GReg2. Consistent with previous methods, mean Intersection over Union (mIoU) serves as our evaluation metric. Datasets comprise real-world datasets (Cityscapes [11] (Ci), BDD-100K [53] (B), Mapillary [35] (M)) and synthetic datasets (GTAV [41], SYNTHIA [42]). Specifically, we train a model on GTAV and SYNTHIA and test it on the real-world datasets. We compare our results to DeepLabv3+ [8], IBN-Net [36], and RobustNet [10]. See Supplementary 10 for more experimental details.
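For reference, mIoU can be computed from a confusion matrix as in the sketch below (standard definition; this is our illustration, not the evaluation code used here):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection over Union from flat label arrays."""
    pred, gt = np.asarray(pred).ravel(), np.asarray(gt).ravel()
    # Confusion matrix via bincount: rows are ground truth, columns predictions.
    conf = np.bincount(num_classes * gt + pred,
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(conf).astype(float)
    union = conf.sum(0) + conf.sum(1) - inter
    iou = inter / np.maximum(union, 1)   # guard against empty classes
    return iou[union > 0].mean()         # average only over classes present

print(round(mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2), 4))  # 0.5833
```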

TD Ci B M Avg.
DeepLabv3+ 35.46 25.09 31.94 30.83
IBN-Net 35.55 32.18 38.09 35.27
RobustNet (GAim2, GReg2) 37.69 34.09 38.49 36.76
GAim1+GAim2+GReg2 38.58 34.72 39.11 37.47
GReg1+GAim2+GReg2 38.13 35.02 39.29 37.48
GAim1+GReg1+GAim2+GReg2 (GMDG) 38.62 34.71 39.63 37.65
Table 6: Segmentation results: Comparison of mIoU ($\%$) between the proposed and previous methods. The models are trained on the GTAV and SYNTHIA domains. The added objective terms are highlighted in blue. The best results are highlighted in bold.

Results. Table 6 shows the efficacy of our proposed objective in segmentation tasks upon introducing $\psi$. Ablation results highlight that using $\psi$ alongside GAim1 enhances baseline performance, experimentally substantiating that introducing $\psi$ to relax the assumption boosts generalization. Using GReg1 alone also improves the average mIoU. Importantly, the largest enhancement in average mIoU is observed when GReg1 and GAim1 are used together, which is consistent with the PUB derivation in Eq. LABEL:eq:PUB. See more results and visualizations in Supplementary 11.

5.4 Classification on benchmark datasets

Experimental setup. We operate on the DomainBed suite [14] and leverage standard leave-one-out cross-validation as the evaluation method. We experiment on 5 real-world benchmark datasets: PACS [25], VLCS [12], OfficeHome [50], TerraIncognita [2], and DomainNet [38]. The results are averages over three trials of each experiment. Following MIRO, two backbones are used for training: ResNet-50 [15] pre-trained on ImageNet, and a RegNetY-16GF backbone with SWAG pre-training [47]. The backbones are trained with our proposed objective alone and further combined with SWAD [6], respectively. See Supplementary 10 for more experimental details.
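The leave-one-out protocol over domains can be sketched as follows (domain names are those of PACS; the split helper is our illustration, not DomainBed's API):

```python
# Each domain is held out once as the unseen test domain while the model
# trains on the remaining domains; results are averaged over the splits.
def leave_one_out_splits(domains):
    return [([d for d in domains if d != held_out], held_out)
            for held_out in domains]

pacs_domains = ["Art", "Cartoon", "Photo", "Sketch"]
for train, test in leave_one_out_splits(pacs_domains):
    print(train, "->", test)  # 3 training domains, 1 unseen test domain
```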

Non-ensemble methods
TD PACS VLCS OfficeHome TerraInc DomainNet Avg.
MMD [27] 84.7±0.5 77.5±0.9 66.3±0.1 42.2±1.6 23.4±9.5 58.8
Mixstyle [57] 85.2±0.3 77.9±0.5 60.4±0.3 44.0±0.7 34.0±0.1 60.3
GroupDRO [43] 84.4±0.8 76.7±0.6 66.0±0.7 43.2±1.1 33.3±0.2 60.7
IRM [1] 83.5±0.8 78.5±0.5 64.3±2.2 47.6±0.8 33.9±2.8 61.6
ARM [56] 85.1±0.4 77.6±0.3 64.8±0.3 45.5±0.3 35.5±0.2 61.7
VREx [23] 84.9±0.6 78.3±0.2 66.4±0.6 46.4±0.6 33.6±2.9 61.9
CDANN [29] 82.6±0.9 77.5±0.1 65.8±1.3 45.8±1.6 38.3±0.3 62.0
DANN [13] 83.6±0.4 78.6±0.4 65.9±0.6 46.7±0.5 38.3±0.1 62.6
RSC [17] 85.2±0.9 77.1±0.5 65.5±0.9 46.6±1.0 38.9±0.5 62.7
MTL [4] 84.6±0.5 77.2±0.4 66.4±0.5 45.6±1.2 40.6±0.1 62.9
MLDG [26] 84.9±1.0 77.2±0.4 66.8±0.6 47.7±0.9 41.2±0.1 63.6
Fish [44] 85.5±0.3 77.8±0.3 68.6±0.4 45.1±1.3 42.7±0.2 63.9
ERM [49] 84.2±0.1 77.3±0.1 67.6±0.2 47.8±0.6 44.0±0.1 64.2
SagNet [34] 86.3±0.2 77.8±0.5 68.1±0.1 48.6±1.0 40.3±0.1 64.2
SelfReg [22] 85.6±0.4 77.8±0.9 67.9±0.7 47.0±0.3 42.8±0.0 64.2
CORAL [48] 86.2±0.3 78.8±0.6 68.7±0.3 47.6±1.0 41.5±0.1 64.5
mDSDI [5] 86.2±0.2 79.0±0.3 69.2±0.4 48.1±1.4 42.8±0.1 65.1
Use ResNet-50 [15] as oracle model.
Style Neophile [20] 89.11 - 65.89 - 44.60 -
MIRO [19] (GReg1) 85.4±0.4 79.0±0.3 70.5±0.4 50.4±1.1 44.3±0.2 65.9
GMDG 85.6±0.3 79.2±0.3 70.7±0.2 51.1±0.9 44.6±0.1 66.3
Use RegNetY-16GF [47] as oracle model.
MIRO 97.4±0.2 79.9±0.6 80.4±0.2 58.9±1.3 53.8±0.1 74.1
GMDG 97.3±0.1 82.4±0.6 80.8±0.6 60.7±1.8 54.6±0.1 75.1
Ensemble methods
PACS VLCS OfficeHome TerraInc DomainNet Avg.
Use multiple oracle models.
SIMPLE [30] 88.6±0.4 79.9±0.5 84.6±0.5 57.6±0.8 49.2±1.1 72.0
SIMPLE++ [30] 99.0±0.1 82.7±0.4 87.7±0.4 59.0±0.6 61.9±0.5 78.1
Use ResNet-50 [15] as oracle model.
MIRO + SWAD 88.4±0.1 79.6±0.2 72.4±0.1 52.9±0.2 47.0±0.0 68.1
GMDG + SWAD 88.4±0.1 79.6±0.1 72.5±0.2 53.0±0.7 47.3±0.1 68.2
Use RegNetY-16GF [47] as oracle model.
MIRO + SWAD 96.8±0.2 81.7±0.1 83.3±0.1 64.3±0.3 60.7±0.0 77.3
GMDG + SWAD 97.9±0.3 82.2±0.3 84.7±0.2 65.0±0.2 61.3±0.2 78.2
Table 7: Classification results: Comparison of results between the proposed and previous non-ensemble and ensemble mDG methods. The best results for each group are highlighted in bold.

Results. Table 7 displays the results of non-ensemble and ensemble algorithms that employ pre-trained models as oracle models. Our proposed objectives demonstrate more substantial improvements when a higher-quality pre-trained oracle model ($\mathcal{O}$) is applied. When employing the ResNet-50 model, our approach yields average improvements of approximately 0.3% and 0.1% without and with SWAD, respectively, compared to MIRO. In contrast, when RegNetY-16GF serves as the oracle, GMDG brings significant average improvements of 1.1% and 0.9% without and with SWAD, respectively. Remarkably, our approach outperforms the SOTA method SIMPLE++ by 0.1%, even though SIMPLE++ relies on an ensemble of 283 pre-trained models as oracle models whereas ours engages only a single pre-trained model. Overall, these results strongly support GMDG's effectiveness in classification tasks. See more results in Supplementary 11.

5.5 Ablation studies

To better compare our objective with previous objectives, we conduct a systematic ablation study on the classification task, since most previous objectives are only available for classification due to the lack of $\psi$.

Experimental setup. In the ablation studies, we test combinations of the terms (see Table 8) on the OfficeHome dataset using SWAG pre-training [47] and SWAD [6]. Every experiment is repeated over three trials sharing the same hyper-parameter settings. See Supplementary 10 for more details.

Results. Table 8 presents the ablation study results. The first column denotes previous methods that are equivalent to specific term combinations. The main findings are as follows; see Supplementary 11.1 for additional findings.

Used objectives Art Clipart Product Real Avg. Imp.
Without $\mathcal{O}$ (GReg1)
GAim2 (ERM) 78.4±0.7 68.3±0.5 85.8±0.4 85.8±0.3 79.6±0.2 0.0
GAim2 + iAim1 (DANN) 79.1±1.0 68.6±0.0 85.6±0.8 86.1±0.5 79.8±0.2 +0.2
GAim2 + GAim1 (CDANN, CIDG) 79.1±0.7 69.1±0.1 85.7±0.5 86.3±0.6 79.9±0.4 +0.3
GAim2 + iReg2 (CORAL+$\psi$) 79.1±0.1 69.9±0.4 86.0±0.1 86.3±0.4 80.3±0.2 +0.7
GAim2 + GReg2 79.2±0.1 69.9±1.4 86.1±0.5 86.1±0.1 80.3±0.3 +0.7
GAim2 + GAim1 + GReg2 (MDA+$\psi$) 79.5±1.1 69.2±1.2 86.2±0.2 86.5±0.2 80.3±0.0 +0.7
With $\mathcal{O}$ (GReg1)
GAim2 + GReg1 (MIRO, SIMPLE) 83.2±0.6 72.6±1.1 89.9±0.5 90.2±0.1 84.0±0.2 0.0
GAim2 + GReg1 +iAim1 83.4±0.5 73.1±0.8 89.7±0.4 90.1±0.3 84.1±0.2 +0.1
GAim2 + GReg1 + GAim1 83.7±0.3 74.0±0.6 90.1±0.3 90.3±0.2 84.5±0.2 +0.4
GAim2 + GReg1 + iReg2 82.9±0.5 72.5±0.3 90.3±0.3 90.0±0.3 83.9±0.1 -0.1
GAim2 + GReg1 + GReg2 83.4±0.2 72.3±0.2 90.1±0.3 90.1±0.3 84.0±0.2 +0.0
GAim2 + GReg1 + GAim1 + GReg2 (GMDG) 84.1±0.2 74.3±0.9 89.9±0.4 90.6±0.1 84.7±0.2 +0.7
Table 8: Ablation studies: Results of using different combinations of terms on OfficeHome. Imp. denotes the improvement gained over GAim2 and GAim2 + GReg1, respectively.

1). Previous methods that partially utilize our proposed objectives often yield suboptimal results. Note that iAim1 is the unconditional version of GAim1. With other factors eliminated, employing our full set of proposed objectives offers the most significant improvements, while previous partial objectives may lead to inferior results.

2). The effectiveness of using conditions. By conducting uniform implementation and testing, it can be observed that the use of conditions yields superior results compared to the unconditional approach. This observation aligns with Eq. 19, suggesting that minimizing the gap between conditional features across domains leads to improved generalization. The disparity in performance between CDANN and DANN might be attributed to differences in their implementation details.

3). Learning invariance is crucial, regardless of whether prior knowledge is integrated. Evidently, learning invariance facilitates improvement whether or not a prior is applied, as validated by the PUB derivation in Eq. LABEL:eq:PUB. This contradicts MIRO's argument that achieving representations similar to a prior can replace the need for learning invariance.

4). Impacts of using the prior. The significant improvement owes to the use of a pre-trained oracle model ($\mathcal{O}$) preserving correlations between $\mathbf{X}$ and $\mathbf{Y}$, a concept validated by MIRO and SIMPLE. However, utilizing our full set of objectives further enhances this improvement by an additional 0.7%. Notably, suppressing invalid causality alone may not help when prior knowledge is used but invariance across domains is not enforced. We hypothesize that such invalid causality is inherently eliminated within a 'good' feature space obtained by $\mathcal{O}$, but may be reintroduced when we minimize the domain gap with $\mathcal{O}$. Thus, using the full objective synergistically produces optimal results.

5). Constraining only the covariance shifts of features across domains (iReg2) does not guarantee better results when prior knowledge is available. We find that using the objectives of CORAL performs better than DANN, CDANN, and CIDG. The results suggest that considering the covariance shifts of features does lead to improvements, which we hypothesize are primarily driven by $H(P(\phi(\mathbf{X})))$. However, when a large pre-trained oracle model ($\mathcal{O}$) is provided, the performance actually degrades. This implies that the use of $\mathcal{O}$ implicitly minimizes the covariance shifts of features across domains. Under this scenario, the unexpected effect of $-H(P(\phi(\mathbf{X})\mid\mathcal{D}))$ hinders improvement, while the benefits brought by $H(P(\phi(\mathbf{X})))$ are diminished by the use of prior knowledge. In contrast, GReg2 continues to yield improvements. This suggests that GMDG is more versatile and suitable for various situations.
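For concreteness, the covariance-shift constraint discussed above can be sketched as a CORAL-style penalty between two domains' feature covariances (our simplified illustration; Deep CORAL additionally normalizes the distance by $1/(4d^2)$):

```python
import numpy as np

def coral_penalty(feat_a, feat_b):
    """Squared Frobenius distance between two domains' feature covariances."""
    return float(np.sum((np.cov(feat_a.T) - np.cov(feat_b.T)) ** 2))

rng = np.random.default_rng(0)
a = rng.normal(size=(2000, 3))          # features from one domain
b = 2.0 * rng.normal(size=(2000, 3))    # inflated covariance: a covariance shift
print(coral_penalty(a, a[::-1]) < coral_penalty(a, b))  # True
```

Minimizing such a penalty aligns second-order feature statistics across domains, which is exactly the effect that a strong pre-trained $\mathcal{O}$ already provides implicitly.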

6 Conclusion

In this paper, we propose a general objective, namely GMDG, by relaxing the static distribution assumption on $\mathbf{Y}$ through a learnable mapping $\psi$. GMDG is applicable to diverse mDG tasks, including regression, segmentation, and classification. Empirically, we design a suite of losses to achieve the overall GMDG, adaptable across various frameworks. Extensive experiments validate the viability of our objective across applications where previous objectives may yield suboptimal results compared to ours. Both theoretical analyses and empirical results demonstrate the synergistic effect of the distinct terms in the proposed objective. For simplicity, we assume equal domain weights while minimizing GJSD, leaving imbalanced situations that require unequal domain weights as future work.

Acknowledgments

The work was partially supported by the following: National Natural Science Foundation of China under No. 92370119, No. 62376113, and No. 62206225; Jiangsu Science and Technology Program (Natural Science Foundation of Jiangsu Province) under No. BE2020006-4; Natural Science Foundation of the Jiangsu Higher Education Institutions of China under No. 22KJB520039.

References

  • Arjovsky et al. [2019] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
  • Beery et al. [2018] Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In Proceedings of the European conference on computer vision (ECCV), pages 456–473, 2018.
  • Blanchard et al. [2011] Gilles Blanchard, Gyemin Lee, and Clayton Scott. Generalizing from several related classification tasks to a new unlabeled sample. Advances in neural information processing systems, 24, 2011.
  • Blanchard et al. [2021] Gilles Blanchard, Aniket Anand Deshmukh, Ürun Dogan, Gyemin Lee, and Clayton Scott. Domain generalization by marginal transfer learning. The Journal of Machine Learning Research, 22(1):46–100, 2021.
  • Bui et al. [2021] Manh-Ha Bui, Toan Tran, Anh Tran, and Dinh Phung. Exploiting domain-specific features to enhance domain generalization. Advances in Neural Information Processing Systems, 34:21189–21201, 2021.
  • Cha et al. [2021] Junbum Cha, Sanghyuk Chun, Kyungjae Lee, Han-Cheol Cho, Seunghyun Park, Yunsung Lee, and Sungrae Park. Swad: Domain generalization by seeking flat minima. Advances in Neural Information Processing Systems, 34:22405–22418, 2021.
  • Chattopadhyay et al. [2020] Prithvijit Chattopadhyay, Yogesh Balaji, and Judy Hoffman. Learning to balance specificity and invariance for in and out of domain generalization. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IX 16, pages 301–318. Springer, 2020.
  • Chen et al. [2018] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV), pages 801–818, 2018.
  • Cho et al. [2022] Wonwoong Cho, Ziyu Gong, and David I Inouye. Cooperative distribution alignment via jsd upper bound. Advances in Neural Information Processing Systems, 35:21101–21112, 2022.
  • Choi et al. [2021] Sungha Choi, Sanghun Jung, Huiwon Yun, Joanne T Kim, Seungryong Kim, and Jaegul Choo. Robustnet: Improving domain generalization in urban-scene segmentation via instance selective whitening. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11580–11590, 2021.
  • Cordts et al. [2016] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3213–3223, 2016.
  • Fang et al. [2013] Chen Fang, Ye Xu, and Daniel N Rockmore. Unbiased metric learning: On the utilization of multiple datasets and web images for softening bias. In Proceedings of the IEEE International Conference on Computer Vision, pages 1657–1664, 2013.
  • Ganin et al. [2016] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096–2030, 2016.
  • Gulrajani and Lopez-Paz [2020] Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. arXiv preprint arXiv:2007.01434, 2020.
  • He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • Hu et al. [2020] Shoubo Hu, Kun Zhang, Zhitang Chen, and Laiwan Chan. Domain generalization via multidomain discriminant analysis. In Uncertainty in Artificial Intelligence, pages 292–302. PMLR, 2020.
  • Huang et al. [2020] Zeyi Huang, Haohan Wang, Eric P Xing, and Dong Huang. Self-challenging improves cross-domain generalization. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16, pages 124–140. Springer, 2020.
  • Huang et al. [2022] Zhuo Huang, Xiaobo Xia, Li Shen, Bo Han, Mingming Gong, Chen Gong, and Tongliang Liu. Harnessing out-of-distribution examples via augmenting content and style. arXiv preprint arXiv:2207.03162, 2022.
  • Junbum et al. [2022] Cha Junbum, Lee Kyungjae, Park Sungrae, and Chun Sanghyuk. Domain generalization by mutual-information regularization with pre-trained models. European Conference on Computer Vision (ECCV), 2022.
  • Kang et al. [2022] Juwon Kang, Sohyun Lee, Namyup Kim, and Suha Kwak. Style neophile: Constantly seeking novel styles for domain generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7130–7140, 2022.
  • Kay [1993] Steven M Kay. Fundamentals of statistical signal processing: estimation theory. Prentice-Hall, Inc., 1993.
  • Kim et al. [2021] Daehee Kim, Youngjun Yoo, Seunghyun Park, Jinkyu Kim, and Jaekoo Lee. Selfreg: Self-supervised contrastive regularization for domain generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9619–9628, 2021.
  • Krueger et al. [2021] David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (rex). In International Conference on Machine Learning, pages 5815–5826. PMLR, 2021.
  • Lee et al. [2022] Suhyeon Lee, Hongje Seong, Seongwon Lee, and Euntai Kim. Wildnet: Learning domain generalized semantic segmentation from the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9936–9946, 2022.
  • Li et al. [2017] Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In Proceedings of the IEEE international conference on computer vision, pages 5542–5550, 2017.
  • Li et al. [2018a] Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy Hospedales. Learning to generalize: Meta-learning for domain generalization. In Proceedings of the AAAI conference on artificial intelligence, 2018a.
  • Li et al. [2018b] Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C Kot. Domain generalization with adversarial feature learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5400–5409, 2018b.
  • Li et al. [2018c] Ya Li, Mingming Gong, Xinmei Tian, Tongliang Liu, and Dacheng Tao. Domain generalization via conditional invariant representations. In Proceedings of the AAAI conference on artificial intelligence, 2018c.
  • Li et al. [2018d] Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, and Dacheng Tao. Deep domain generalization via conditional invariant adversarial networks. In Proceedings of the European conference on computer vision (ECCV), pages 624–639, 2018d.
  • Li et al. [2022] Ziyue Li, Kan Ren, Xinyang Jiang, Yifei Shen, Haipeng Zhang, and Dongsheng Li. Simple: Specialized model-sample matching for domain generalization. In The Eleventh International Conference on Learning Representations, 2022.
  • Lin [1991] Jianhua Lin. Divergence measures based on the shannon entropy. IEEE Transactions on Information theory, 37(1):145–151, 1991.
  • Liu et al. [2023] Ce Liu, Suryansh Kumar, Shuhang Gu, Radu Timofte, and Luc Van Gool. Va-depthnet: A variational approach to single image depth prediction. arXiv preprint arXiv:2302.06556, 2023.
  • Liu et al. [2021] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 10012–10022, 2021.
  • Nam et al. [2021] Hyeonseob Nam, HyunJae Lee, Jongchan Park, Wonjun Yoon, and Donggeun Yoo. Reducing domain gap by reducing style bias. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8690–8699, 2021.
  • Neuhold et al. [2017] Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulo, and Peter Kontschieder. The mapillary vistas dataset for semantic understanding of street scenes. In Proceedings of the IEEE international conference on computer vision, pages 4990–4999, 2017.
  • Pan et al. [2018] Xingang Pan, Ping Luo, Jianping Shi, and Xiaoou Tang. Two at once: Enhancing learning and generalization capacities via ibn-net. In Proceedings of the European Conference on Computer Vision (ECCV), pages 464–479, 2018.
  • Peng et al. [2022] Duo Peng, Yinjie Lei, Munawar Hayat, Yulan Guo, and Wen Li. Semantic-aware domain generalized segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2594–2605, 2022.
  • Peng et al. [2019] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1406–1415, 2019.
  • Perlaza et al. [2022] Samir M Perlaza, Gaetan Bisson, Iñaki Esnaola, Alain Jean-Marie, and Stefano Rini. Empirical risk minimization with relative entropy regularization: Optimality and sensitivity analysis. In 2022 IEEE International Symposium on Information Theory (ISIT), pages 684–689. IEEE, 2022.
  • Rame et al. [2022] Alexandre Rame, Matthieu Kirchmeyer, Thibaud Rahier, Alain Rakotomamonjy, Patrick Gallinari, and Matthieu Cord. Diverse weight averaging for out-of-distribution generalization. Advances in Neural Information Processing Systems, 35:10821–10836, 2022.
  • Richter et al. [2016] Stephan R Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. Playing for data: Ground truth from computer games. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, pages 102–118. Springer, 2016.
  • Ros et al. [2016] German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, and Antonio M Lopez. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3234–3243, 2016.
  • Sagawa et al. [2019] Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint arXiv:1911.08731, 2019.
  • Shi et al. [2021] Yuge Shi, Jeffrey Seely, Philip HS Torr, N Siddharth, Awni Hannun, Nicolas Usunier, and Gabriel Synnaeve. Gradient matching for domain generalization. arXiv preprint arXiv:2104.09937, 2021.
  • Shimodaira [2000] Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 90(2):227–244, 2000.
  • Silberman et al. [2012] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. In Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part V 12, pages 746–760. Springer, 2012.
  • Singh et al. [2022] Mannat Singh, Laura Gustafson, Aaron Adcock, Vinicius de Freitas Reis, Bugra Gedik, Raj Prateek Kosaraju, Dhruv Mahajan, Ross Girshick, Piotr Dollár, and Laurens Van Der Maaten. Revisiting weakly supervised pre-training of visual perception models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 804–814, 2022.
  • Sun and Saenko [2016] Baochen Sun and Kate Saenko. Deep coral: Correlation alignment for deep domain adaptation. In Computer Vision–ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part III 14, pages 443–450. Springer, 2016.
  • Vapnik [1998] Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, 1998.
  • Venkateswara et al. [2017] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5018–5027, 2017.
  • Wang et al. [2022] Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, and Philip Yu. Generalizing to unseen domains: A survey on domain generalization. IEEE Transactions on Knowledge and Data Engineering, 2022.
  • Yang et al. [2020] Mengyue Yang, Furui Liu, Zhitang Chen, Xinwei Shen, Jianye Hao, and Jun Wang. Causalvae: Structured causal disentanglement in variational autoencoder. arXiv preprint arXiv:2004.08697, 2020.
  • Yu et al. [2020] Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2636–2645, 2020.
  • Zhang et al. [2013] Kun Zhang, Bernhard Schölkopf, Krikamol Muandet, and Zhikun Wang. Domain adaptation under target and conditional shift. In International conference on machine learning, pages 819–827. PMLR, 2013.
  • Zhang et al. [2015] Kun Zhang, Mingming Gong, and Bernhard Schölkopf. Multi-source domain adaptation: A causal view. In Proceedings of the AAAI Conference on Artificial Intelligence, 2015.
  • Zhang et al. [2021] Marvin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, and Chelsea Finn. Adaptive risk minimization: Learning to adapt to domain shift. Advances in Neural Information Processing Systems, 34:23664–23678, 2021.
  • Zhou et al. [2021] Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang. Domain generalization with mixstyle. arXiv preprint arXiv:2104.02008, 2021.
  • Zhou et al. [2022] Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
Rethinking Multi-domain Generalization with A General Learning Objective

Supplementary Material

7 More mathematical details of our method

7.1 Derivation details of PUB

Details for Eq. LABEL:eq:bound1.

We denote PmixnwnP(ϕ(𝐗n),ψ(𝐘n))P_{mix}\triangleq\sum_{n}w_{n}P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})}). Therefore for GJSD, we have:

GJSD({P(ϕ(𝐗n),ψ(𝐘n))}n=1N)=nwnKL(P(ϕ(𝐗n),ψ(𝐘n))Pmix)=nwn[Hc(P(ϕ(𝐗n),ψ(𝐘n)),Pmix)H(P(ϕ(𝐗n),ψ(𝐘n)))]=nwnHc(P(ϕ(𝐗n),ψ(𝐘n)),Pmix)nwnH(P(ϕ(𝐗n),ψ(𝐘n)))=nwnϕ(𝐗n),ψ(𝐘n)P(x,y)lnPmix(x,y)d(x,y)nwnH(P(ϕ(𝐗n),ψ(𝐘n)))=ϕ(𝐗n),ψ(𝐘n)nwnP(x,y)lnPmix(x,y)d(x,y)nwnH(P(ϕ(𝐗n),ψ(𝐘n)))=ϕ(𝐗n),ψ(𝐘n)Pmix(x,y)lnPmix(x,y)d(x,y)nwnH(P(ϕ(𝐗n),ψ(𝐘n)))=H(Pmix)nwnH(P(ϕ(𝐗n),ψ(𝐘n))).\begin{split}&GJSD(\{P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})\}_{n=1}^{N})\\ =&\sum\nolimits_{n}w_{n}KL(P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n}))\|P_{mix})\\ =&\sum\nolimits_{n}w_{n}[H_{c}(P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})),P_{mix})\\ &-H(P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})))]\\ =&\sum\nolimits_{n}w_{n}H_{c}(P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})),P_{mix})\\ &-\sum\nolimits_{n}w_{n}H(P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})))\\ =&\sum\nolimits_{n}w_{n}\int_{\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})}-P(x,y)\ln P_{mix}(x,y)d(x,y)\\ &-\sum\nolimits_{n}w_{n}H(P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})))\\ =&\int_{\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})}-\sum\nolimits_{n}w_{n}P(x,y)\ln P_{mix}(x,y)d(x,y)\\ &-\sum\nolimits_{n}w_{n}H(P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})))\\ =&\int_{\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})}-P_{mix}(x,y)\ln P_{mix}(x,y)d(x,y)\\ &-\sum\nolimits_{n}w_{n}H(P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})))\\ =&H(P_{mix})-\sum\nolimits_{n}w_{n}H(P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n}))).\end{split} (20)

Therefore, the minimization of GJSD can be written as follows:

minϕ,ψGJSD({P(ϕ(𝐗n),ψ(𝐘n))}n=1N)minϕ,ψH(𝔼[P(ϕ(𝐗n),ψ(𝐘n))])𝔼[H(P(ϕ(𝐗n),ψ(𝐘n)))]minϕ,ψH(P(ϕ(𝐗),ψ(𝐘)𝒟))𝔼[H(P(ϕ(𝐗n),ψ(𝐘n)))].\begin{split}&\min_{\phi,\psi}GJSD(\{P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})\}_{n=1}^{N})\\ \equiv&\min_{\phi,\psi}H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})])-\mathbb{E}[H(P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})}))]\\ \equiv&\min_{\phi,\psi}H(P({\phi({\mathbf{X}}),\psi({\mathbf{Y}})}\mid\mathcal{D}))-\mathbb{E}[H(P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})}))].\end{split}

(21)
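As a numerical sanity check of the identity above, the KL-based definition of GJSD and the entropy form H(P_mix) − Σ_n w_n H(P_n) from Eq. (20) can be compared on small discrete distributions; the toy distributions below are arbitrary stand-ins for P(ϕ(𝐗_n), ψ(𝐘_n)):

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy discrete joint distributions standing in for P(phi(X_n), psi(Y_n)).
P = [[0.4, 0.3, 0.2, 0.1], [0.1, 0.2, 0.3, 0.4]]
w = [0.5, 0.5]
P_mix = [sum(wn * pn[i] for wn, pn in zip(w, P)) for i in range(4)]

# GJSD defined as a weighted sum of KL divergences to the mixture ...
gjsd_kl = sum(wn * kl(pn, P_mix) for wn, pn in zip(w, P))
# ... equals the entropy form derived in Eq. (20).
gjsd_entropy = entropy(P_mix) - sum(wn * entropy(pn) for wn, pn in zip(w, P))
assert abs(gjsd_kl - gjsd_entropy) < 1e-12
```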

Details for Eq. LABEL:eq:bound2. Taking 𝒪\mathcal{O} into account, similarly to [9], we obtain the following upper bound for GJSD:

GJSD({P(ϕ(𝐗n),ψ(𝐘n))}n=1N)=Hc(Pmix,𝒪)Hc(Pmix,𝒪)+H(Pmix)nwnH(P(ϕ(𝐗n),ψ(𝐘n)))=Hc(Pmix,𝒪)DKL(Pmix𝒪)nwnH(P(ϕ(𝐗n),ψ(𝐘n)))Hc(Pmix,𝒪)nwnH(P(ϕ(𝐗n),ψ(𝐘n))).\begin{split}&GJSD(\{P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})\}_{n=1}^{N})\\ =&H_{c}(P_{mix},\mathcal{O})-H_{c}(P_{mix},\mathcal{O})+H(P_{mix})\\ &-\sum_{n}w_{n}H(P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})))\\ =&H_{c}(P_{mix},\mathcal{O})-D_{\mathrm{KL}}(P_{mix}\|\mathcal{O})\\ &-\sum_{n}w_{n}H(P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})))\\ \leq&H_{c}(P_{mix},\mathcal{O})-\sum_{n}w_{n}H(P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n}))).\end{split} (22)

For the standard situation where w1=w2=⋯=wN=1/Nw_{1}=w_{2}=\dots=w_{N}=1/N, we further have:

GJSD({P(ϕ(𝐗n),ψ(𝐘n))}n=1N)Hc(Pmix,𝒪)nwnH(P(ϕ(𝐗n),ψ(𝐘n)))=Hc(𝔼[P(ϕ(𝐗n),ψ(𝐘n))],𝒪)𝔼[H(P(ϕ(𝐗n),ψ(𝐘n)))].\begin{split}&GJSD(\{P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})\}_{n=1}^{N})\\ \leq&H_{c}(P_{mix},\mathcal{O})-\sum_{n}w_{n}H(P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})))\\ =&H_{c}(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})],\mathcal{O})\\ &-\mathbb{E}[H(P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})))].\end{split} (23)
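The bound holds because H_c(P_mix, 𝒪) = H(P_mix) + D_KL(P_mix ∥ 𝒪) ≥ H(P_mix); a discrete sketch with a made-up oracle distribution illustrates it:

```python
import math

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def cross_entropy(p, o):
    return -sum(x * math.log(y) for x, y in zip(p, o) if x > 0)

P = [[0.4, 0.3, 0.2, 0.1], [0.1, 0.2, 0.3, 0.4]]
w = [0.5, 0.5]
P_mix = [sum(wn * pn[i] for wn, pn in zip(w, P)) for i in range(4)]
oracle = [0.4, 0.3, 0.2, 0.1]  # hypothetical oracle distribution O

gjsd = entropy(P_mix) - sum(wn * entropy(pn) for wn, pn in zip(w, P))
bound = cross_entropy(P_mix, oracle) - sum(wn * entropy(pn) for wn, pn in zip(w, P))
# H_c(P_mix, O) = H(P_mix) + KL(P_mix || O) >= H(P_mix), so gjsd <= bound.
assert gjsd <= bound + 1e-12
```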

Details for Eq. LABEL:eq:PUB. The above bound can be further reformulated as:

Hc(𝔼[P(ϕ(𝐗n),ψ(𝐘n))],𝒪)a=H(𝔼[P(ϕ(𝐗n),ψ(𝐘n))])+DKL(𝔼[P(ϕ(𝐗n),ψ(𝐘n))]𝒪)a,=H(𝔼[P(ϕ(𝐗n),ψ(𝐘n))])+DKL(P(ϕ(𝐗),ψ(𝐘))𝒪)a.\begin{split}&H_{c}(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})],\mathcal{O})-a\\ =&H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})])\\ &+D_{\mathrm{KL}}(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})]\|\mathcal{O})-a,\\ =&H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})])\\ &+D_{\mathrm{KL}}(P({\phi({\mathbf{X}}),\psi({\mathbf{Y}})})\|\mathcal{O})-a.\end{split} (24)

Derivation details of Eq. 8.

GAim1=H(P(ϕ(𝐗),ψ(𝐘)𝒟))=H(P(ϕ(𝐗),ψ(𝐘),𝒟))H(P(𝒟))=H(P(ϕ(𝐗),𝒟))H(P(𝒟))+H(P(ψ(𝐘)ϕ(𝐗),𝒟))=H(P(ϕ(𝐗)𝒟))+H(P(ψ(𝐘)ϕ(𝐗),𝒟))=H(P(ψ(𝐘)𝒟))+H(P(ϕ(𝐗)ψ(𝐘),𝒟)).\begin{split}\text{{GAim1}}=&H(P({\phi({\mathbf{X}}),\psi({\mathbf{Y}})}\mid\mathcal{D}))\\ =&H(P(\phi({\mathbf{X}}),\psi({\mathbf{Y}}),\mathcal{D}))-H(P(\mathcal{D}))\\ =&H(P(\phi({\mathbf{X}}),\mathcal{D}))-H(P(\mathcal{D}))\\ &+H(P(\psi({\mathbf{Y}})\mid\phi({\mathbf{X}}),\mathcal{D}))\\ =&H(P(\phi({\mathbf{X}})\mid\mathcal{D}))+H(P(\psi({\mathbf{Y}})\mid\phi({\mathbf{X}}),\mathcal{D}))\\ =&H(P(\psi({\mathbf{Y}})\mid\mathcal{D}))+H(P(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}}),\mathcal{D})).\end{split} (25)

Derivation details of Eq. 9. Due to Eq. 8, we want to maintain ϕ(𝐗)→ψ(𝐘)\phi({\mathbf{X}})\to\psi({\mathbf{Y}}) while suppressing ψ(𝐘)→ϕ(𝐗)\psi({\mathbf{Y}})\to\phi({\mathbf{X}}). Thus we want to maxϕ,ψH(P(ϕ(𝐗)∣ψ(𝐘),𝒟))\max_{\phi,\psi}H(P(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}}),\mathcal{D})) while minϕ,ψH(P(ψ(𝐘)∣ϕ(𝐗),𝒟))\min_{\phi,\psi}H(P(\psi({\mathbf{Y}})\mid\phi({\mathbf{X}}),\mathcal{D})). This problem can be simplified as:

minϕ,ψH(P(ψ(𝐘)|ϕ(𝐗),𝒟))H(P(ϕ(𝐗)|ψ(𝐘),𝒟))=minϕ,ψH(ϕ(𝐗),ψ(𝐘)|𝒟))I(ϕ(𝐗),ψ(𝐘)|𝒟))H(P(ϕ(𝐗)|ψ(𝐘),𝒟))+H(P(ϕ(𝐗)|ψ(𝐘),𝒟))=minϕ,ψH(ϕ(𝐗),ψ(𝐘)|𝒟))I(ϕ(𝐗),ψ(𝐘)|𝒟)),\begin{split}&\min_{\phi,\psi}H(P(\psi({\mathbf{Y}})|\phi({\mathbf{X}}),\mathcal{D}))-H(P(\phi({\mathbf{X}})|\psi({\mathbf{Y}}),\mathcal{D}))\\ =&\min_{\phi,\psi}H(\phi({\mathbf{X}}),\psi({\mathbf{Y}})|\mathcal{D}))-I(\phi({\mathbf{X}}),\psi({\mathbf{Y}})|\mathcal{D}))-\\ &H(P(\phi({\mathbf{X}})|\psi({\mathbf{Y}}),\mathcal{D}))+H(P(\phi({\mathbf{X}})|\psi({\mathbf{Y}}),\mathcal{D}))\\ =&\min_{\phi,\psi}H(\phi({\mathbf{X}}),\psi({\mathbf{Y}})|\mathcal{D}))-I(\phi({\mathbf{X}}),\psi({\mathbf{Y}})|\mathcal{D})),\end{split} (26)

where the first term is already included in GAim2; thus GReg2 should deal with the second term, which is:

minϕ,ψI(ϕ(𝐗),ψ(𝐘)|𝒟))=minϕ,ψH(P(ϕ(𝐗))|𝒟)H(P(ϕ(𝐗))|P(ψ(𝐘)),𝒟).\begin{split}&\min_{\phi,\psi}I(\phi({\mathbf{X}}),\psi({\mathbf{Y}})|\mathcal{D}))=\\ &\min_{\phi,\psi}H(P(\phi({\mathbf{X}}))|\mathcal{D})-H(P(\phi({\mathbf{X}}))|P(\psi({\mathbf{Y}})),\mathcal{D}).\end{split} (27)

Also, since the effect of 𝒟\mathcal{D} is alleviated through the mappings, the above equation can be approximated as

minϕ,ψH(P(ϕ(𝐗)))H(P(ϕ(𝐗))|P(ψ(𝐘))),\begin{split}\min_{\phi,\psi}H(P(\phi({\mathbf{X}})))-H(P(\phi({\mathbf{X}}))|P(\psi({\mathbf{Y}}))),\end{split} (28)

which is GReg2.

Derivation details of Eq. LABEL:eq:cfs_detial. For a D-dimensional Gaussian distribution 𝒩(x;μ,Σ)\mathcal{N}(x;\mu,\Sigma), its entropy is:

\begin{split}H(x)=&-\int p(x)\ln{p(x)}\,dx\\=&-\int p(x)\Big[\ln{\big((2\pi)^{-\frac{D}{2}}|\Sigma|^{-\frac{1}{2}}\big)}-\frac{1}{2}(x-\mu)^{\top}\Sigma^{-1}(x-\mu)\Big]\,dx\\=&\ln{\big((2\pi)^{\frac{D}{2}}|\Sigma|^{\frac{1}{2}}\big)}+\frac{1}{2}\int p(x)(x-\mu)^{\top}\Sigma^{-1}(x-\mu)\,dx\\=&\ln{\big((2\pi)^{\frac{D}{2}}|\Sigma|^{\frac{1}{2}}\big)}+\frac{1}{2}\sum_{d=1}^{D}\mathbb{E}[y_{d}^{2}],\quad y\triangleq\Sigma^{-\frac{1}{2}}(x-\mu)\\=&\ln{\big((2\pi)^{\frac{D}{2}}|\Sigma|^{\frac{1}{2}}\big)}+\frac{D}{2}\\=&\frac{D}{2}(1+\ln{(2\pi)})+\frac{1}{2}\ln{|\Sigma|}.\end{split} (29)
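As a quick check of this closed form, a one-dimensional numerical integration (a crude midpoint Riemann sum; the interval and step count below are arbitrary) recovers D/2(1+ln 2π) + ½ ln|Σ| with D = 1:

```python
import math

def gaussian_entropy_closed_form(sigma2):
    # D/2 * (1 + ln(2*pi)) + 1/2 * ln|Sigma|, here with D = 1 and |Sigma| = sigma2
    return 0.5 * (1 + math.log(2 * math.pi)) + 0.5 * math.log(sigma2)

def gaussian_entropy_numeric(sigma2, lo=-40.0, hi=40.0, n=160000):
    # crude midpoint Riemann sum of -p(x) ln p(x) for N(0, sigma2)
    dx = (hi - lo) / n
    h = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        p = math.exp(-x * x / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)
        if p > 0:
            h -= p * math.log(p) * dx
    return h

h_closed = gaussian_entropy_closed_form(2.0)
h_numeric = gaussian_entropy_numeric(2.0)
assert abs(h_closed - h_numeric) < 1e-4
```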

Then Eq. LABEL:eq:cfs_detial equals:

\begin{split}&H(\mathcal{N}(\phi({\mathbf{X}});\mu_{\mathbf{X}},\Sigma_{{\mathbf{X}}{\mathbf{X}}}))-H(\mathcal{N}(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}});\mu_{{\mathbf{X}}|{\mathbf{Y}}},\Sigma_{{\mathbf{X}}{\mathbf{X}}|{\mathbf{Y}}}))\\=&\frac{D}{2}(1+\ln(2\pi))+\frac{1}{2}\ln|\Sigma_{{\mathbf{X}}{\mathbf{X}}}|-\frac{D}{2}(1+\ln(2\pi))-\frac{1}{2}\ln|\Sigma_{{\mathbf{X}}{\mathbf{X}}|{\mathbf{Y}}}|\\=&\frac{1}{2}\big(\ln|\Sigma_{{\mathbf{X}}{\mathbf{X}}}|-\ln|\Sigma_{{\mathbf{X}}{\mathbf{X}}|{\mathbf{Y}}}|\big)\\=&\frac{1}{2}\ln\Big(\frac{|\Sigma_{{\mathbf{X}}{\mathbf{X}}}|}{|\Sigma_{{\mathbf{X}}{\mathbf{X}}|{\mathbf{Y}}}|}\Big).\end{split} (30)
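In one dimension this entropy gap reduces to the Gaussian mutual information; a sketch with assumed covariance values:

```python
import math

# 1-D Gaussian sketch (assumed covariances): the conditional variance is
# Sigma_XX|Y = Sigma_XX - Sigma_XY^2 / Sigma_YY, so the entropy gap
# (1/2) ln(|Sigma_XX| / |Sigma_XX|Y|) equals the Gaussian mutual
# information -(1/2) ln(1 - rho^2).
s_xx, s_yy, s_xy = 1.5, 2.0, 1.0
s_cond = s_xx - s_xy ** 2 / s_yy        # conditional variance Sigma_XX|Y
entropy_gap = 0.5 * math.log(s_xx / s_cond)
rho_sq = s_xy ** 2 / (s_xx * s_yy)      # squared correlation coefficient
assert abs(entropy_gap - (-0.5 * math.log(1 - rho_sq))) < 1e-12
```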

Empirical risk. The empirical risk introduced by the whole model θ\theta w.r.t. 𝐗,𝐘{\mathbf{X}},{\mathbf{Y}} is determined by a convex loss function L(θ)L(\theta). Following [39], the empirical risk considering 𝒪\mathcal{O} is:

R(θ)=\displaystyle R(\theta)= L(θ)𝑑P(θ)+H(𝔼[P(ϕ(𝐗n),ψ(𝐘n))])\displaystyle\int L(\theta)dP(\theta)+H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})])
+DKL(𝔼[P(ϕ(𝐗n),ψ(𝐘n))]𝒪)\displaystyle+D_{\mathrm{KL}}(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})]\|\;\mathcal{O}) (31)
H(P(ϕ(𝐗)ψ(𝐘)))+H(P(ϕ(𝐗))).\displaystyle-H(P(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}})))+H(P(\phi({\mathbf{X}}))).

Proof of using ψ\psi v.s. not using ψ\psi. Since ψ(𝐘)\psi({\mathbf{Y}}) carries at most the same amount of useful information as 𝐘{\mathbf{Y}}, by Jensen’s inequality we have:

H(𝐘)H(ψ(𝐘)).\displaystyle H({\mathbf{Y}})\geq H(\psi({\mathbf{Y}})). (32)

Therefore, we have

H(𝔼[P(ϕ(𝐗n),𝐘n)])=H(𝔼[P(ϕ(𝐗n))])+H(𝔼[P(𝐘nϕ(𝐗n))])H(𝔼[P(ϕ(𝐗n))])+H(𝔼[P(ψ(𝐘n)ϕ(𝐗n))])=H(𝔼[P(ϕ(𝐗n),ψ(𝐘n))]).\begin{split}&H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),{\mathbf{Y}}_{n}})])\\ =&H(\mathbb{E}[P(\phi({\mathbf{X}}_{n}))])+H(\mathbb{E}[P({{\mathbf{Y}}_{n}\mid\phi({\mathbf{X}}_{n})})])\\ \geq&H(\mathbb{E}[P(\phi({\mathbf{X}}_{n}))])+H(\mathbb{E}[P({\psi({\mathbf{Y}}_{n})\mid\phi({\mathbf{X}}_{n})})])\\ =&H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})]).\end{split} (33)
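The key step H(𝐘) ≥ H(ψ(𝐘)) is immediate for a deterministic, possibly many-to-one ψ on a discrete 𝐘; a short check with a hypothetical merging map:

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Y takes four values; a hypothetical many-to-one mapping psi merges them
# into two classes, so psi(Y) cannot carry more information than Y.
p_y = [0.1, 0.2, 0.3, 0.4]
psi = [0, 0, 1, 1]  # psi maps outcome i to class psi[i]
p_psi = [sum(p for p, c in zip(p_y, psi) if c == k) for k in (0, 1)]
assert entropy(p_psi) <= entropy(p_y)
```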

Therefore, for the risk of θnψ\theta^{n\psi}:

supR(θnψ)=sup\displaystyle\sup R({\theta}^{n\psi})=\sup minϕ[L(θ)dP(θ)\displaystyle\min_{\phi}[\int L(\theta)dP(\theta)
+H(𝔼[P(ϕ(𝐗n),𝐘n)])]+b,\displaystyle+H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),{\mathbf{Y}}_{n}})])]+b, (34)

and the risk of θψ\theta^{\psi}:

supR(θψ)=sup\displaystyle\sup R({\theta}^{\psi})=\sup minϕ[L(θ)dP(θ)\displaystyle\min_{\phi}[\int L(\theta)dP(\theta)
+H(𝔼[P(ϕ(𝐗n),ψ(𝐘n))])]+b,\displaystyle+H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})])]+b, (35)

where bDKL(𝔼[P(ϕ(𝐗n),ψ(𝐘n))]𝒪)H(P(ϕ(𝐗)ψ(𝐘)))+H(P(ϕ(𝐗))),b\triangleq D_{\mathrm{KL}}(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})]\|\;\mathcal{O})-H(P(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}})))+H(P(\phi({\mathbf{X}}))), we have:

supR(θnψ)supR(θψ).\displaystyle\sup R({\theta}^{n\psi})\geq\sup R({\theta}^{\psi}). (36)

Proof that incorporating conditions leads to lower generalization risk when learning invariant representations. For the risk of the model with parameters θc\theta^{c} trained using conditions, we have:

supR(θc)=sup\displaystyle\sup R({\theta}^{c})=\sup minϕ[L(θ)dP(θ)\displaystyle\min_{\phi}[\int L(\theta)dP(\theta)
+H(𝔼[P(ϕ(𝐗n),ψ(𝐘n))])]+b,\displaystyle+H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})])]+b, (37)

where bDKL(𝔼[P(ϕ(𝐗n),ψ(𝐘n))]𝒪)H(P(ϕ(𝐗)ψ(𝐘)))+H(P(ϕ(𝐗))).b\triangleq D_{\mathrm{KL}}(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})]\|\;\mathcal{O})-H(P(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}})))+H(P(\phi({\mathbf{X}}))). For R(θnc)R({\theta}^{nc}), the risk of the model trained without using conditions, we have:

supR(θnc)=sup\displaystyle\sup R({\theta}^{nc})=\sup minϕ,ψ[L(θ)𝑑P(θ)+H(𝔼[P(ϕ(𝐗n))])]\displaystyle\min_{\phi,\psi}[\;\int L(\theta)dP(\theta)+H(\mathbb{E}[P({\phi({\mathbf{X}}_{n})})])\;]
+H(𝔼[P(ϕ(𝐗n),ψ(𝐘n))])\displaystyle+H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})])
H(𝔼[P(ϕ(𝐗n))])+b\displaystyle-H(\mathbb{E}[P({\phi({\mathbf{X}}_{n})})])+b (38)
=sup\displaystyle=\sup minϕ,ψ[L(θ)𝑑P(θ)+H(𝔼[P(ϕ(𝐗n))])]\displaystyle\min_{\phi,\psi}[\;\int L(\theta)dP(\theta)+H(\mathbb{E}[P({\phi({\mathbf{X}}_{n})})])\;]
+H(𝔼[P(ψ(𝐘n)ϕ(𝐗n))])+b.\displaystyle+H(\mathbb{E}[P({\psi({\mathbf{Y}}_{n})\mid\phi({\mathbf{X}}_{n})})])+b.

Due to the inequality:

supminϕ[H(𝔼[P(ϕ(𝐗n),ψ(𝐘n))])]\displaystyle\sup\min_{\phi}[\;H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})])\;] (39)
=\displaystyle= supminϕ[H(𝔼[P(ϕ(𝐗n))])\displaystyle\sup\min_{\phi}[\;H(\mathbb{E}[P({\phi({\mathbf{X}}_{n})})])
+H(𝔼[P(ψ(𝐘n)ϕ(𝐗n))])Minimized.]\displaystyle\underbrace{+H(\mathbb{E}[P({\psi({\mathbf{Y}}_{n})\mid\phi({\mathbf{X}}_{n})})])}_{\text{Minimized.}}\;] (40)
\displaystyle\leq supminϕ,ψ[H(𝔼[P(ϕ(𝐗n))])]\displaystyle\sup\min_{\phi,\psi}[\;H(\mathbb{E}[P({\phi({\mathbf{X}}_{n})})])\;]
+H(𝔼[P(ψ(𝐘n)ϕ(𝐗n))])Remains.,\displaystyle\underbrace{+H(\mathbb{E}[P({\psi({\mathbf{Y}}_{n})\mid\phi({\mathbf{X}}_{n})})])}_{\text{Remains.}}, (41)

we have

supR(θnc)supR(θc).\displaystyle\sup R({\theta}^{nc})\geq\sup R({\theta}^{c}). (42)

8 Objective derivation details of many previous methods

This section shows how we uniformly simplify the objectives of previous methods.

ERM [14]: The basic method. ERM does not minimize GJSD; therefore, there are no terms for Aim 1. For Aim 2, it directly minimizes H(P(ϕ(𝐗),𝐘))H(P({\phi({\mathbf{X}}),{\mathbf{Y}}})).

DANN [13]: Minimize feature divergences of source domains. DANN [13] minimizes feature divergences of source domains adversarially, without considering conditions. Therefore, its empirical objective for Aim 1 is

minϕH(𝔼[P(ϕ(𝐗n))])a\displaystyle\min_{\phi}H(\mathbb{E}[P({\phi({\mathbf{X}}_{n})})])-a (43)

For Aim 2 it directly minimizes H(P(ϕ(𝐗),𝐘))H(P({\phi({\mathbf{X}}),{\mathbf{Y}}})).

CORAL [48]: Minimize the distance between the second-order statistics of source domains. Since CORAL [48] only minimizes the second-order distance between source feature distributions, its objective can be summarized as:

minϕH(P(ϕ(𝐗),𝐘))+H(P(ϕ(𝐗)))H(𝔼[P(ϕ(𝐗n))]).\displaystyle\min_{\phi}H(P({\phi({\mathbf{X}}),{\mathbf{Y}}}))+H(P({\phi({\mathbf{X}})}))-H(\mathbb{E}[P({\phi({\mathbf{X}}_{n})})]). (44)

By grouping it, CORAL [48] has H(𝔼[P(ϕ(𝐗n))])-H(\mathbb{E}[P({\phi({\mathbf{X}}_{n})})]) for Aim 1 and H(P(ϕ(𝐗),𝐘))+H(P(ϕ(𝐗)))H(P({\phi({\mathbf{X}}),{\mathbf{Y}}}))+H(P({\phi({\mathbf{X}})})) for Aim 2.
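Concretely, CORAL’s second-order alignment penalizes the squared Frobenius distance between per-domain feature covariance matrices (the original loss additionally scales this by 1/(4d²)); a pure-Python sketch on made-up 2-D features:

```python
# CORAL-style second-order penalty: squared Frobenius distance between the
# feature covariance matrices of two domains (2-D features, pure Python).
def cov(feats):
    n, d = len(feats), len(feats[0])
    mu = [sum(f[k] for f in feats) / n for k in range(d)]
    return [[sum((f[i] - mu[i]) * (f[j] - mu[j]) for f in feats) / (n - 1)
             for j in range(d)] for i in range(d)]

src = [[0.0, 1.0], [1.0, 0.0], [2.0, 1.0], [1.0, 2.0]]  # made-up features
tgt = [[0.5, 0.5], [1.5, 0.5], [0.5, 1.5], [1.5, 1.5]]
c_s, c_t = cov(src), cov(tgt)
coral = sum((c_s[i][j] - c_t[i][j]) ** 2 for i in range(2) for j in range(2))
assert coral >= 0.0
```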

CIDG [28]: Minimizing the conditioned domain gap. CIDG [28] tries to learn conditional domain invariant features:

minϕH(𝔼[P(ϕ(𝐗n),𝐘n)]).\displaystyle\min_{\phi}H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),{\mathbf{Y}}_{n}})]). (45)

For Aim 2 it directly minimizes H(P(ϕ(𝐗),𝐘))H(P({\phi({\mathbf{X}}),{\mathbf{Y}}})).

MDA [16]: Minimizing the domain gap relative to the decision gap. Some previous works, such as MDA [16], follow the hypothesis that generalization is guaranteed when the decision gap is larger than the domain gap. Therefore, instead of directly minimizing the domain gap, MDA minimizes the ratio between the domain gap and the decision gap. The overall objective of MDA can be summarized as:

minϕH(P(ϕ(𝐗),𝐘))+H(P(ϕ(𝐗)))+(H(𝔼[P(ϕ(𝐗n),𝐘n)])𝔼[H(P(ϕ(𝐗n),𝐘n))]constant)(H(𝔼[P(ϕ(𝐗n)𝐘n)])𝔼[H(P(ϕ(𝐗)𝐘))]constant)+𝔼[H(P(ϕ(𝐗),𝐘))]constant.\begin{split}&\min_{\phi}H(P({\phi({\mathbf{X}}),{\mathbf{Y}}}))+H(P(\phi({\mathbf{X}})))\\ &+({H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),{\mathbf{Y}}_{n}})])-\underbrace{\mathbb{E}[H(P({\phi({\mathbf{X}}_{n}),{\mathbf{Y}}_{n}}))]}_{constant}})\\ &-({H(\mathbb{E}[P({\phi({\mathbf{X}}_{n})\mid{\mathbf{Y}}_{n}})])-\underbrace{\mathbb{E}[H(P({\phi({\mathbf{X}})\mid{\mathbf{Y}}}))]}_{constant}})\\ &+\underbrace{\mathbb{E}[H(P({\phi({\mathbf{X}}),{\mathbf{Y}}}))]}_{constant}.\end{split} (46)

Since the entropy is non-negative and the constants can be omitted, Eq. LABEL:eq:MDA is equivalent to:

minϕH(P(ϕ(𝐗)𝐘))+H(P(ϕ(𝐗)))+H(𝔼[P(ϕ(𝐗n),𝐘n)])H(𝔼[P(ϕ(𝐗)𝐘)]))+a.\begin{split}&\min_{\phi}H(P({\phi({\mathbf{X}})\mid{\mathbf{Y}}}))+H(P(\phi({\mathbf{X}})))\\ &+H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),{\mathbf{Y}}_{n}})])-H(\mathbb{E}[P({\phi({\mathbf{X}})\mid{\mathbf{Y}}})]))+a.\end{split} (47)

By grouping Eq. LABEL:eq:MDA2, we have that for Aim 1 it minimizes H(𝔼[P(ϕ(𝐗n),𝐘n)])H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),{\mathbf{Y}}_{n}})]), and for Aim 2 it minimizes H(P(ϕ(𝐗)∣𝐘))−H(𝔼[P(ϕ(𝐗)∣𝐘)])+H(P(ϕ(𝐗)))H(P({\phi({\mathbf{X}})\mid{\mathbf{Y}}}))-H(\mathbb{E}[P({\phi({\mathbf{X}})\mid{\mathbf{Y}}})])+H(P(\phi({\mathbf{X}}))).

MIRO [19], SIMPLE [30]: Using pre-trained models as 𝒪\mathcal{O}. One feasible way to obtain 𝒪\mathcal{O} is to adopt pre-trained oracle models, as MIRO [19] and SIMPLE [30] do. Note that the pre-trained models are exposed to additional data beyond those provided. Therefore, for Aim 1 they have:

minϕDKL(P(ϕ(X)|Y)𝒪)𝔼[H(P(ϕ(𝐗n),𝐘n))]constant.\displaystyle\min_{\phi}D_{\mathrm{KL}}(P(\phi(X)|Y)\|\mathcal{O})-\underbrace{\mathbb{E}[H(P({\phi({\mathbf{X}}_{n}),{\mathbf{Y}}_{n}}))]}_{constant}. (48)

Differently, MIRO only uses one pre-trained model, as its 𝒪𝒪1\mathcal{O}\triangleq\mathcal{O}^{1}; meanwhile, SIMPLE combines KK pre-trained models as the oracle model: 𝒪k=1Kvk𝒪k\mathcal{O}\triangleq\sum_{k=1}^{K}v_{k}\mathcal{O}^{k} where vv is the weight vector. For Aim 2 it directly minimizes H(P(ϕ(𝐗),𝐘))H(P({\phi({\mathbf{X}}),{\mathbf{Y}}})).
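SIMPLE’s combined oracle 𝒪 ≜ Σ_k v_k𝒪^k is a convex combination of the K pre-trained models’ predictive distributions; a toy sketch with made-up outputs and weights confirms the combination remains a valid distribution:

```python
# Hypothetical predictive distributions from K = 2 pre-trained oracle models.
oracles = [[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]]
v = [0.6, 0.4]  # weight vector, sums to one

# Convex combination O = sum_k v_k * O^k, evaluated per class.
combined = [sum(vk * ok[i] for vk, ok in zip(v, oracles)) for i in range(3)]
assert abs(sum(combined) - 1.0) < 1e-12
assert all(c >= 0 for c in combined)
```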

RobustNet [10]. RobustNet employs the instance selective whitening loss, which disentangles domain-specific and domain-invariant properties from higher-order statistics of the feature representation and selectively suppresses domain-specific ones. Therefore, it implicitly whitens the 𝐘{\mathbf{Y}}-irrelevant features in 𝐗{\mathbf{X}}. Thus, its objective can be simplified as:

minϕ,ψH(P(ϕ(𝐗),ψ(𝐘)))H(P(ϕ(𝐗)ψ(𝐘)))+H(P(ϕ(𝐗))).\begin{split}&\min_{\phi,{\psi}}H(P({\phi({\mathbf{X}}),{\psi({\mathbf{Y}})}}))-H(P(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}})))\\ &+H(P(\phi({\mathbf{X}}))).\end{split} (49)

9 Aligning notations between paper and supplementary materials

9.1 More details about Table 1

For better understanding, we simplify some notations in Table 1. We present the simplified notations and their corresponding original formulations in Table 9.

Learning domain invariant representations
Aim1: Learning domain invariance Reg1: Integrating prior
DANN minϕH(P(ϕ(𝐗)𝒟))\min_{\phi}H(P(\phi({\mathbf{X}})\mid\mathcal{D})) None
minϕH(𝔼[P(ϕ(𝐗n))])\min_{\phi}H(\mathbb{E}[P(\phi({\mathbf{X}}_{n}))])
CDANN, CIDG, MDA minϕH(P(ϕ(𝐗),𝐘𝒟))\min_{\phi}H(P({\phi({\mathbf{X}}),{\mathbf{Y}}}\mid\mathcal{D})) None
minϕH(𝔼[P(ϕ(𝐗n),ψ(𝐘n))])\min_{\phi}H(\mathbb{E}[P(\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n}))])
Ours minϕ,ψH(P(ϕ(𝐗),ψ(𝐘)𝒟))\min_{\phi,\psi}H(P({\phi({\mathbf{X}}),\psi({\mathbf{Y}})}\mid\mathcal{D})) minϕ,ψDKL(P(ϕ(𝐗),ψ(𝐘))𝒪){\min_{\phi,\psi}D_{\mathrm{KL}}(P({\phi({\mathbf{X}}),\psi({\mathbf{Y}})})\|\mathcal{O})}
minϕ,ψH(𝔼[P(ϕ(𝐗n),ψ(𝐘n))])\min_{\phi,\psi}H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),\psi({\mathbf{Y}}_{n})})])
Maximizing A Posterior between representations and targets
Aim2: Maximizing A Posterior (MAP) Reg2: Suppressing invalid causality
CORAL minϕH(P(𝐘∣ϕ(𝐗)))\min_{\phi}H(P({{\mathbf{Y}}\mid\phi({\mathbf{X}})})) minϕ−H(P(ϕ(𝐗)∣𝒟))+H(P(ϕ(𝐗)))\min_{\phi}-H(P({\phi({\mathbf{X}})\mid\mathcal{D}}))+H(P({\phi({\mathbf{X}})}))
minϕH(𝔼[P(ϕ(𝐗n))])+H(P(ϕ(𝐗)))\min_{\phi}-H(\mathbb{E}[P({\phi({\mathbf{X}}_{n})})])+H(P({\phi({\mathbf{X}})}))
Table 9: Supplemental notations for Table 1. Refined notations and their original formulations are reported. The original formulations are highlighted in blue.

9.2 More details about Table 4

For simplicity, we uniformly simplify the formulations of the terms from their derivations. The simplified forms used in Table 4 and their original forms are shown in Table 10. Note that iAim1 is from CDANN [29], CIDG [28], and MDA [16], and iReg2 is from CORAL [48].

GAim2 H(P(ψ(𝐘)ϕ(𝐗)))+H(P(𝐘ψ(𝐘)))H(P({\psi({\mathbf{Y}})\mid\phi({\mathbf{X}})}))+H(P({{\mathbf{Y}}\mid\psi({\mathbf{Y}})}))
GReg1 DKL(P(ϕ(𝐗),𝐘∣𝒟)∥𝒪)D_{\mathrm{KL}}(P({\phi({\mathbf{X}}),{\mathbf{Y}}}\mid\mathcal{D})\|\mathcal{O})
iAim1 H(P(ϕ(𝐗)∣𝒟))H(P({\phi({\mathbf{X}})}\mid\mathcal{D}))
GAim1 H(P(ϕ(𝐗),𝐘∣𝒟))H(P({\phi({\mathbf{X}}),{\mathbf{Y}}}\mid\mathcal{D}))
iReg2 −H(P(ϕ(𝐗)∣𝒟))+H(P(ϕ(𝐗)))-H(P({\phi({\mathbf{X}})}\mid\mathcal{D}))+H(P({\phi({\mathbf{X}})}))
GReg2 H(P(ϕ(𝐗)ψ(𝐘)))+H(P(ϕ(𝐗))){-H(P(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}})))+H(P(\phi({\mathbf{X}})))}
GAim2 H(P(ψ(𝐘)ϕ(𝐗)))+H(P(𝐘ψ(𝐘)))H(P({\psi({\mathbf{Y}})\mid\phi({\mathbf{X}})}))+H(P({{\mathbf{Y}}\mid\psi({\mathbf{Y}})}))
GReg1 DKL(𝔼[P(ϕ(𝐗n),𝐘n)]∥𝒪)D_{\mathrm{KL}}(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),{\mathbf{Y}}_{n}})]\|\mathcal{O})
iAim1 H(𝔼[P(ϕ(𝐗n))])H(\mathbb{E}[P({\phi({\mathbf{X}}_{n})})])
GAim1 H(𝔼[P(ϕ(𝐗n),𝐘n)])H(\mathbb{E}[P({\phi({\mathbf{X}}_{n}),{\mathbf{Y}}_{n}})])
iReg2 H(𝔼[P(ϕ(𝐗n))])+H(P(ϕ(𝐗)))-H(\mathbb{E}[P({\phi({\mathbf{X}}_{n})})])+H(P({\phi({\mathbf{X}})}))
GReg2 H(P(ϕ(𝐗)ψ(𝐘)))+H(P(ϕ(𝐗))){-H(P(\phi({\mathbf{X}})\mid\psi({\mathbf{Y}})))+H(P(\phi({\mathbf{X}})))}
Table 10: Notations for terms in the paper (above) and its derived formulation (below) in the appendix.

10 Experimental details and parameters

We have conducted 248248 experiments in total, including 1212 toy experiments (33 objective settings on 44 domain settings), 2020 regression experiments on monocular depth estimation (55 objective settings on 44 domain settings), 99 segmentation experiments (33 objective settings trained on 11 domain setting and verified on 33 domain settings), 6363 classification experiments (11 objective setting on 55 datasets with 4,4,4,4,54,4,4,4,5 domain settings, for 33 trials), and 144144 ablation-study experiments (1212 objective settings on 11 dataset with 44 domain settings, for 33 trials). We believe that the consistent improvements yielded by GMDG in these experiments validate its superiority.
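The total of 248 experiments can be tallied directly from the counts above:

```python
toy = 3 * 4                               # 3 objective settings x 4 domain settings
regression = 5 * 4                        # 5 objective settings x 4 domain settings
segmentation = 3 * 3                      # 3 objective settings verified on 3 domain settings
classification = (4 + 4 + 4 + 4 + 5) * 3  # 21 domain settings x 3 trials
ablation = 12 * 4 * 3                     # 12 objective settings x 4 domain settings x 3 trials
assert toy + regression + segmentation + classification + ablation == 248
```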

Experimental details of these experiments can be found in the following subsections. Note that we set vA2=1v_{A2}=1 for all experiments.

10.1 Toy experiments: Synthetic regression experimental details.

We explore the efficacy of ψ\psi by using toy regression experiments with synthetic data.

Datasets. Distributional shifts are added to the latent features of all three domains, which serve as the first group of raw features (denoted as x^1_n, y^1_n, where n∈{1,2,3} indicates the domain). Then, domain-conditioned transformations are applied to the shifted features, or pure random noises are used, forming the second group of raw features (denoted as x^2_n, y^2_n). Therefore, the constructed X_{n∈{1,2,3}}=[x^1_n, x^2_n] and Y_{n∈{1,2,3}}=[y^1_n, y^2_n] both contain features that depend on 𝒟\mathcal{D}. Details of each synthetic dataset are exhibited in Table 11. We generate 1000010000 samples for the training set and 100100 samples each for the validation and testing sets.

Parameter settings. All experiments are conducted with vA1,vR1,vR2=0.1v_{A1},v_{R1},v_{R2}=0.1.

Experimental settings. For ϕ,ψ\phi,\psi, we use three-layer MLPs and one linear layer for regression prediction, with Mean Squared Error (MSE) as the loss. We use the best model on the validation set for testing.

Metric. We use the MSE between the predictions and the YY of the testing set as the evaluation metric.
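The metric is the plain mean squared error between predictions and the test-set Y; for reference:

```python
def mse(pred, target):
    # mean of squared element-wise differences
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

assert mse([1.0, 2.0], [1.0, 0.0]) == 2.0
```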

Figure 3: Toy experiments: Diagram of constructing the toy dataset.
hxhx hx𝒩(hx;0,1)hx\sim\mathcal{N}(hx;0,1)
hyhy hy=hxhy=hx
Data 1 Without distribution shift With affine transformations
X1X_{1} x11=hxx_{1}^{1}=hx x12=x11+ϵ𝒩(ϵ;0,0.3)x_{1}^{2}=x_{1}^{1}+\epsilon\sim\mathcal{N}(\epsilon;0,0.3)
Y1Y_{1} y11=hyy_{1}^{1}=hy y12=y11+ϵ𝒩(ϵ;0,0.3)y_{1}^{2}=y_{1}^{1}+\epsilon\sim\mathcal{N}(\epsilon;0,0.3)
X2X_{2} x21=hxx_{2}^{1}=hx x22=4×x21+ϵ𝒩(ϵ;0.5,0.3)x_{2}^{2}=4\times x_{2}^{1}+\epsilon\sim\mathcal{N}(\epsilon;0.5,0.3)
Y2Y_{2} y21=hyy_{2}^{1}=hy y22=4×y21+0.3y_{2}^{2}=4\times y_{2}^{1}+0.3
X3X_{3} x31=hxx_{3}^{1}=hx x32=2×x31+ϵ𝒩(ϵ;0.3,0.2)x_{3}^{2}=2\times x_{3}^{1}+\epsilon\sim\mathcal{N}(\epsilon;-0.3,0.2)
Y3Y_{3} y31=hyy_{3}^{1}=hy y32=0.5×y310.2y_{3}^{2}=0.5\times y_{3}^{1}-0.2
Data 2 With distribution shift With affine transformations
X1X_{1} x11=hxx_{1}^{1}=hx x12=x11+ϵ𝒩(ϵ;0,0.3)x_{1}^{2}=x_{1}^{1}+\epsilon\sim\mathcal{N}(\epsilon;0,0.3)
Y1Y_{1} y11=hyy_{1}^{1}=hy y12=y11+ϵ𝒩(ϵ;0,0.3)y_{1}^{2}=y_{1}^{1}+\epsilon\sim\mathcal{N}(\epsilon;0,0.3)
X2X_{2} x21=hx+ϵ𝒩(ϵ;0.1,0.1)x_{2}^{1}=hx+\epsilon\sim\mathcal{N}(\epsilon;-0.1,0.1) x22=4×x21+ϵ𝒩(ϵ;0.3,0.3)x_{2}^{2}=4\times x_{2}^{1}+\epsilon\sim\mathcal{N}(\epsilon;0.3,0.3)
Y2Y_{2} y21=hy+ϵ𝒩(ϵ;0.2,0.1)y_{2}^{1}=hy+\epsilon\sim\mathcal{N}(\epsilon;0.2,0.1) y22=8×y210.3y_{2}^{2}=8\times y_{2}^{1}-0.3
X3X_{3} x31=hx+ϵ𝒩(ϵ;0.4,0.2)x_{3}^{1}=hx+\epsilon\sim\mathcal{N}(\epsilon;0.4,0.2) x32=1×x31+ϵ𝒩(ϵ;0.3,0.2)x_{3}^{2}=-1\times x_{3}^{1}+\epsilon\sim\mathcal{N}(\epsilon;-0.3,0.2)
Y3Y_{3} y31=hy+ϵ𝒩(ϵ;0.4,0.2)y_{3}^{1}=hy+\epsilon\sim\mathcal{N}(\epsilon;-0.4,0.2) y32=ϵ𝒩(ϵ;0,0.2)y_{3}^{2}=\epsilon\sim\mathcal{N}(\epsilon;0,0.2)
Data 3 Without distribution shift With squared, cubed transformations or noises
X1X_{1} x11=hxx_{1}^{1}=hx x12=x11+ϵ𝒩(ϵ;0,0.3)x_{1}^{2}=x_{1}^{1}+\epsilon\sim\mathcal{N}(\epsilon;0,0.3)
Y1Y_{1} y11=hyy_{1}^{1}=hy y12=y11+ϵ𝒩(ϵ;0,0.3)y_{1}^{2}=y_{1}^{1}+\epsilon\sim\mathcal{N}(\epsilon;0,0.3)
X2X_{2} x21=hxx_{2}^{1}=hx x22=4×(x21)3+ϵ∼𝒩(ϵ;0.5,0.3)x_{2}^{2}=4\times(x_{2}^{1})^{3}+\epsilon\sim\mathcal{N}(\epsilon;0.5,0.3)
Y2Y_{2} y21=hyy_{2}^{1}=hy y22=4×(y21)2+0.3y_{2}^{2}=4\times(y_{2}^{1})^{2}+0.3
X3X_{3} x31=hxx_{3}^{1}=hx x32=2×(x31)2+ϵ∼𝒩(ϵ;−0.3,0.2)x_{3}^{2}=2\times(x_{3}^{1})^{2}+\epsilon\sim\mathcal{N}(\epsilon;-0.3,0.2)
Y3Y_{3} y31=hyy_{3}^{1}=hy y32=0.5×(y31)3−0.2y_{3}^{2}=0.5\times(y_{3}^{1})^{3}-0.2
Data 4 With distribution shift With squared, cubed transformations or noises
X1X_{1} x11=hxx_{1}^{1}=hx x12=x11+ϵ𝒩(ϵ;0,0.3)x_{1}^{2}=x_{1}^{1}+\epsilon\sim\mathcal{N}(\epsilon;0,0.3)
Y1Y_{1} y11=hyy_{1}^{1}=hy y12=y11+ϵ𝒩(ϵ;0,0.3)y_{1}^{2}=y_{1}^{1}+\epsilon\sim\mathcal{N}(\epsilon;0,0.3)
X2X_{2} x21=hx+ϵ∼𝒩(ϵ;−0.1,0.1)x_{2}^{1}=hx+\epsilon\sim\mathcal{N}(\epsilon;-0.1,0.1) x22=4×(x21)3+ϵ∼𝒩(ϵ;0.5,0.3)x_{2}^{2}=4\times(x_{2}^{1})^{3}+\epsilon\sim\mathcal{N}(\epsilon;0.5,0.3)
Y2Y_{2} y21=hy+ϵ∼𝒩(ϵ;0.2,0.1)y_{2}^{1}=hy+\epsilon\sim\mathcal{N}(\epsilon;0.2,0.1) y22=4×(y21)2+0.3y_{2}^{2}=4\times(y_{2}^{1})^{2}+0.3
X3X_{3} x31=hx+ϵ∼𝒩(ϵ;0.4,0.2)x_{3}^{1}=hx+\epsilon\sim\mathcal{N}(\epsilon;0.4,0.2) x32=2×(x31)2+ϵ∼𝒩(ϵ;−0.3,0.2)x_{3}^{2}=2\times(x_{3}^{1})^{2}+\epsilon\sim\mathcal{N}(\epsilon;-0.3,0.2)
Y3Y_{3} y31=hy+ϵ∼𝒩(ϵ;−0.4,0.2)y_{3}^{1}=hy+\epsilon\sim\mathcal{N}(\epsilon;-0.4,0.2) y32=0.5×(y31)3−0.2y_{3}^{2}=0.5\times(y_{3}^{1})^{3}-0.2
Table 11: Toy experiments: Synthetic data details for each experiment.
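For illustration, the construction of Data 1, domain 1 from Table 11 can be sketched as below; we read 𝒩(ε;0,0.3) as mean 0 with scale 0.3 and treat the scale as a standard deviation, which is an assumption here:

```python
import random

def make_data1_domain1(n, seed=0):
    # Data 1, domain 1 from Table 11: x^1 = hx, x^2 = x^1 + eps, eps ~ N(0, 0.3);
    # likewise for y with hy = hx. The 0.3 is assumed to be a standard deviation.
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        hx = rng.gauss(0.0, 1.0)   # latent hx ~ N(0, 1)
        hy = hx                    # hy = hx
        xs.append([hx, hx + rng.gauss(0.0, 0.3)])
        ys.append([hy, hy + rng.gauss(0.0, 0.3)])
    return xs, ys

X1, Y1 = make_data1_domain1(10000)
assert len(X1) == 10000 and len(X1[0]) == 2
```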
Use ResNet-50 without SWAD v2v2 v3v3 v1v1 lr mult lr dropout WD TR CF
TerraIncognita 0.1 0.1 0.2 12.5 - - - - -
OfficeHome 0.1 0.001 0.1 20.0 3e-5 0.1 1e-6 - -
VLCS 0.01 0.001 0.1 10.0 1e-5 - 1e-6 0.2 50
PACS 0.01 0.01 0.01 25.0 - - - - -
DomainNet 0.1 0.1 0.1 7.5 - - - - 500
Use ResNet-50 with SWAD v2v2 v3v3 v1v1 lr mult CF
TerraIncognita 0.1 0.001 0.01 10.0 -
OfficeHome 0.1 0.1 0.3 10.0 -
VLCS 0.01 0.001 0.1 10.0 50
PACS 0.01 0.001 0.1 20.0 -
DomainNet 0.1 0.1 0.1 7.5 500
Use RegNetY-16GF with and without SWAD v2v2 v3v3 v1v1 lr mult CF
TerraIncognita 0.01 0.01 0.01 2.5 -
OfficeHome 0.01 0.1 0.1 0.1 -
VLCS 0.01 0.01 0.1 2.0 50
PACS 0.01 0.1 0.1 0.1 -
DomainNet 0.1 0.1 0.1 7.5 500
Table 12: Classification experiments: Parameter settings of classification tasks. Notations: WD: weight decay; TR: tolerance ratio; CF: checkpoint frequency. - denotes that the default settings are used.
Ablation studies on OfficeHome v2v2 v3v3 v1v1 lr mult use iAim1 use iReg2
Base (ERM) 0.0 0.0 0.0 0.1 False False
Base +iAim1 (DANN) 0.0 0.0 0.1 0.1 True False
Base + GAim1 (CDANN, CIDG) 0.0 0.0 0.1 0.1 False False
Base +iReg2 (CORAL+ψ\psi) 0.0 0.1 0.0 0.1 False True
Base + GReg2 0.0 0.1 0.0 0.1 False False
Base + GAim1 + GReg2 (MDA+ψ\psi) 0.0 0.1 0.1 0.1 False False
Base + GReg1 (MIRO, SIMPLE) 0.01 0.0 0.0 0.1 False False
Base + GReg1 +iAim1 0.01 0.0 0.1 0.1 True False
Base + GReg1 + GAim1 0.01 0.0 0.1 0.1 False False
Base + GReg1 +iReg2 0.01 0.1 0.0 0.1 False True
Base + GReg1 + GReg2 0.01 0.1 0.0 0.1 False False
Base + GReg1 + GAim1 + GReg2 (Ours) 0.01 0.1 0.1 0.1 False False
Table 13: Ablation studies: Parameter settings of ablation studies. Notations: CF: checkpoint frequency. - denotes that the default settings are used.

10.2 Regression experiments: Monocular depth estimation details.

We explore the efficacy of ψ\psi with GMDG through monocular depth estimation experiments on the NYU Depth V2 dataset [46].

Datasets. NYU Depth V2 contains images with 480×640480\times 640 resolution and depth values ranging from 0 to 10 meters. We adopt the densely labeled pairs for training and testing.

Multi-domain construction. To construct multiple domains that fit the problem settings, we split the NYU Depth V2 dataset into four categories as four domains:

  • School: study room, study, student lounge, printer room, computer lab, classroom.

  • Office: reception room, office kitchen, office, nyu office, conference room.

  • Home: playroom, living room, laundry room, kitchen, indoor balcony, home storage, home office, foyer, dining room, dinette, bookstore, bedroom, bathroom, basement.

  • Commercial: furniture store, exercise room, cafe.

After filtering out unusable samples, the four domains contain 9595, 110110, 12091209, and 3535 data pairs for training, respectively.

Parameter settings. We follow all the hyperparameter settings of VA-DepthNet and set vA1=0.001,vR1=0.001,vR2=0.0001v_{A1}=0.001,v_{R1}=0.001,v_{R2}=0.0001. Note that the backbone is trained using VA-DepthNet, but without the Variational Loss it proposes.

Experimental settings. We use the final saved checkpoint for the leave-one-out cross-validation.

Metrics. Please see metric details in VA-DepthNet [32].

10.3 Segmentation experimental details.

We follow the experimental settings of RobustNet for segmentation experiments.

Datasets. There are two groups of datasets: synthetic and real-world. (1) Synthetic datasets: GTAV [41] is a large-scale dataset containing 24,966 driving-scene images generated from the Grand Theft Auto V game engine. SYNTHIA [42], composed of photo-realistic synthetic images, has 9,400 samples with a resolution of 960×720. (2) Real-world datasets: Cityscapes [11] is a large-scale dataset containing high-resolution urban scene images collected from 50 different cities, primarily in Germany; it provides 3,450 finely annotated images and 20,000 coarsely annotated images, and only the finely annotated set is adopted for our training and validation. BDD-100K [53] is another real-world dataset with a resolution of 1280×720, providing diverse urban driving-scene images collected from various locations in the US; we use the 7,000 training and 1,000 validation images of its semantic segmentation task. Mapillary is also a real-world dataset that contains worldwide street-view scenes with 25,000 high-resolution images.

Parameter settings. Specifically, we use all of RobustNet’s hyper-parameters and set v_{A1}=0.0001, v_{R1}=0.0001.

10.4 Classification experimental details.

Datasets. We use PACS (4 domains, 9,991 samples, 7 classes) [25], VLCS (4 domains, 10,729 samples, 5 classes) [12], OfficeHome (4 domains, 15,588 samples, 65 classes) [50], TerraIncognita (TerraInc, 4 domains, 24,778 samples, 10 classes) [2], and DomainNet (6 domains, 586,575 samples, 345 classes) [38].

Parameter settings. We list the hyper-parameters in Table 12 to reproduce our results.

Metric. We employ mean Intersection over Union (mIoU) as the measurement for the segmentation task.
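The mIoU measure can be sketched minimally as follows (an illustration, not RobustNet's exact evaluation code): per-class IoU is intersection over union of the predicted and ground-truth masks, averaged over classes present in either map.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection over Union over classes; classes absent from both
    prediction and ground truth are skipped."""
    pred, gt = np.asarray(pred).ravel(), np.asarray(gt).ravel()
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                     # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```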

10.5 Ablation studies experimental details.

Parameter settings. We run each experiment in three trials with seeds [0, 1, 2]. Full settings are reported in Table 13.
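Scores in the result tables are reported in the form mean±std, aggregated over such trials. A minimal sketch with hypothetical per-seed accuracies follows; whether the population or sample standard deviation is used is an assumption here (the population form, NumPy's default, is shown).

```python
import numpy as np

# Hypothetical per-seed accuracies from three trials (seeds 0, 1, 2).
acc_per_seed = np.array([85.2, 85.9, 85.7])

# Population standard deviation (ddof=0), NumPy's default.
mean, std = acc_per_seed.mean(), acc_per_seed.std()
print(f"{mean:.1f}±{std:.1f}")
```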

Experimental settings. We use SWAD for all ablation studies to alleviate sensitivity to hyper-parameters. All ablation studies share the same hyper-parameters but add different combinations of terms. CORAL’s [48] objective minimizes the discrepancy between the learned feature covariances of source and target, which requires target-data access and considers only second-order statistics. For a fair comparison, we adapt its approach to minimize feature-covariance discrepancies across seen domains.
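The adapted cross-domain covariance alignment can be sketched as follows: a CORAL-style penalty (squared Frobenius distance between feature covariances, normalized as in Deep CORAL) averaged over all pairs of seen domains, with no target-domain access. This is a minimal NumPy sketch under our naming, not the exact implementation used in the ablations.

```python
import numpy as np
from itertools import combinations

def covariance(features):
    """Sample covariance of an (n_samples, dim) feature matrix."""
    f = features - features.mean(axis=0, keepdims=True)
    return f.T @ f / (len(f) - 1)

def cross_domain_coral(domain_features):
    """Average CORAL-style discrepancy over all pairs of seen domains."""
    d = domain_features[0].shape[1]
    losses = [
        np.sum((covariance(a) - covariance(b)) ** 2) / (4 * d * d)
        for a, b in combinations(domain_features, 2)
    ]
    return float(np.mean(losses))
```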

11 More results

Visualization of toy experiments: Please see Figure 4.

Figure 4: Toy experiments: Visualization of learned latent representations of different methods. Each color represents a domain.

Regression results: Monocular depth estimation.

The regression results for each unseen domain of monocular depth estimation are visualized in Figures 6 and 7.

Visualizations of regression results for unseen domains of models trained with different objective settings are exhibited in Figures 8 and 9.

Segmentation results. The segmentation results for unseen samples are displayed in Figure 10.

Classification results. We show the results of each category for the classification experiments in Tables 15, 16, 17, 18, and 19.

11.1 Other findings and Analysis

What makes a better 𝒪. As demonstrated in Eq. LABEL:eq:PUB, 𝒪 plays a crucial role in PUB by anchoring a space in which the relationship between 𝐗 and 𝐘 is preserved. Ideally, an 𝒪 that provides general representations for all seen and unseen domains leads to the best results, a finding supported by MIRO and SIMPLE. However, even though SIMPLE++ combines 283 pre-trained models, the ‘perfect’ 𝒪 remains unattained. Therefore, this paper primarily discusses how our proposed objectives improve model performance when a fixed 𝒪 is provided.

Comparison with MDA: Minimizing the domain gap relative to the decision gap. MDA [16], guided by the hypothesis that generalization is “guaranteed only when the decision gap exceeds the domain gap”, aims to minimize the ratio between the domain gap and the decision gap. This approach facilitates learning 𝒟-independent conditional features, enhancing class separability across domains. As Table 1 illustrates, MDA’s Reg2 objective can also be interpreted as suppressing invalid causality, aligning with our approach. However, MDA’s implementation requires manual selection of ϕ(𝐗) from the same 𝐘 without using ψ and GReg2. Our method further relaxes MDA’s assumption, extending the objective’s applicability to tasks beyond classification, such as segmentation.

Cutting off the causality from ϕ(𝐗)→ψ(𝐘) may lead to model collapse. For monocular depth estimation with VA-DepthNet, we tried suppressing the causality from ϕ(𝐗) to ψ(𝐘) instead of that from ψ(𝐘) to ϕ(𝐗), and this caused the model to collapse.

Suppressing invalid causality: Why this design: Our GMDG introduces a mapping ψ for 𝐘 to relax the static assumption, corroborating more general and practical scenarios. Our empirical findings, as shown in Fig. 5, reveal that introducing ψ(𝐘) without any constraints may not guarantee a clear decision margin for classification. Upon examining our objective in Eq. 8 in the main manuscript, we hypothesize that this effect results from ‘ψ(𝐘) causing ϕ(𝐗)’, which we term ‘invalid causality’. Thus, we design a term to suppress such invalid causality. This term is derived from the prediction perspective, wherein 𝐘 should be predicted only from ϕ(𝐗); hence, ϕ(𝐗) should not be caused by ψ(𝐘). Consider the scenario wherein 𝐘 and ψ(𝐘) are absent during prediction: the hypothesized causality from ψ(𝐘) to ϕ(𝐗) would disrupt the causal chain, resulting in an ‘incomplete’ representation of ϕ(𝐗) and thus degraded prediction. Hence, it is critical to suppress the ψ(𝐘)→ϕ(𝐗) causality that may occur during joint training. Notably, the suppression is not symmetric and promotes ϕ(𝐗)→𝐘. Intuition: Intuitively, GReg2 further ‘erases’ the redundant information in ϕ(𝐗) that may be caused by ψ(𝐘), refining the latent space and yielding better invariant latent features for predicting 𝐘 (i.e., a larger decision margin for the unseen domain, as highlighted in Fig. 5).

Figure 5: t-SNE map of latent features from classification models trained without and with GReg2.

GMDG’s efficiency: We analyze the efficiency of GMDG in Tab. 14. Though theoretically superior in generality, GMDG may increase computational costs during training due to the additional loss functions and VAE encoders. However, these auxiliary components are discarded at the inference stage, so inference efficiency remains unaffected. We believe the training cost can be reduced through careful design, which remains a promising avenue for future work.

Cost Classification (Training / Inference) Depth estimation (Training / Inference) Segmentation (Training / Inference)
FLOPs (G), Baseline 99.16 / 49.58 584.44 / 573.78 209.97 / 167.34
FLOPs (G), With GMDG 124.00 / 49.58 1543.90 / 573.78 449.83 / 167.34
Parameters (M), Baseline 2.94 / 1.47 64.42 / 64.27 23.13 / 22.54
Parameters (M), With GMDG 4.93 / 1.47 123.76 / 64.27 82.48 / 22.54
Table 14: FLOPs and parameters of baselines without and with GMDG during training and inference.

Applicability, constraints, and limitations: GMDG is designed specifically for mDG with accessible P(𝐘), in which the model is trained on multiple seen domains and tested on one unseen domain. This task essentially requires learning cross-domain invariance for prediction. When used in single-domain generalization or in cases involving novel classes in the unseen domain, GMDG may not be directly applicable and thus requires further investigation. Meanwhile, as a general objective, GMDG involves additional modules/losses that may incur extra computational cost during training. We have discussed these aspects in the main paper and leave them as future work.

Figure 6: Regression results: Monocular depth estimation comparisons between VA-DepthNet and our GMDG on samples from unseen domains. GMDG obtains better generalization across domains.
Figure 7: Regression results: Monocular depth estimation comparisons between VA-DepthNet and our GMDG on samples from unseen domains. GMDG obtains better generalization across domains (continued).
Figure 8: Regression results: Monocular depth estimation results for unseen domains of models trained with different objective settings. With the full GMDG, the model achieves the best generalization across all unseen-domain settings.
Figure 9: Regression results: Monocular depth estimation results for unseen domains of models trained with different objective settings. With the full GMDG, the model achieves the best generalization across all unseen-domain settings (continued).
Figure 10: Segmentation results: Visualizations between RobustNet and our GMDG on samples from unseen domains. Better generalization is obtained with GMDG.
TerraIncognita Location 100 Location 38 Location 43 Location 46 Avg.
ERM [14] 54.3 42.5 55.6 38.8 47.8
MIRO [19] (use ResNet-50) - - - - 50.4
GMDG (use ResNet-50) 60.9±5.34 47.3±3.42 55.2±1.06 41.0±2.93 51.1±0.91
ERM + SWAD [6] 55.4 44.9 59.7 39.9 50.0
DIWA [40] 57.2 50.1 60.3 39.8 51.9
MIRO [19] + SWAD [6] (use ResNet-50) - - - - 52.9
GMDG + SWAD (use ResNet-50) 61.2±1.4 48.4±1.6 60.0±0.4 42.5±1.1 53.0±0.7
MIRO [19] (use RegNetY-16GF) - - - - 58.9
GMDG (use RegNetY-16GF) 73.3±3.3 54.7±1.4 67.1±0.3 48.6±6.5 60.7±1.8
MIRO [19] + SWAD [6] (use RegNetY-16GF) - - - - 64.3
GMDG + SWAD (use RegNetY-16GF) 74.3±1.5 59.2±1.2 70.6±1.1 56.0±0.8 65.0±0.2
Table 15: Classification experiments on TerraIncognita: More results of full GMDG for each category.
OfficeHome art clipart product real Avg.
ERM [14] 63.1 51.9 77.2 78.1 67.6
MIRO [19] (use ResNet-50) - - - - 70.5±0.4
GMDG (use ResNet-50) 68.9±0.3 56.2±1.7 79.9±0.6 82.0±0.4 70.7±0.2
ERM + SWAD [6] 66.1 57.7 78.4 80.2 70.6
DIWA [40] 69.2 59 81.7 82.2 72.8
MIRO [19] + SWAD [6] (use ResNet-50) - - - - 72.4±0.1
GMDG + SWAD (use ResNet-50) 68.9±0.6 58.2±0.6 80.4±0.3 82.6±0.4 72.5±0.2
MIRO [19] (use RegNetY-16GF) - - - - 80.4±0.2
GMDG (use RegNetY-16GF) 79.7±1.6 67.7±1.8 87.8±0.8 87.9±0.7 80.8±0.6
MIRO [19] + SWAD [6] (use RegNetY-16GF) - - - - 83.3±0.1
GMDG + SWAD (use RegNetY-16GF) 84.1±0.2 74.3±0.9 89.9±0.4 90.6±0.1 84.7±0.2
Table 16: Classification experiments on OfficeHome: More results of full GMDG for each category.
VLCS caltech101 labelme sun09 voc2007 Avg.
ERM [14] 97.7 64.3 73.4 74.6 77.3
MIRO [19] (use ResNet-50) - - - - 79.0±0.0
GMDG (use ResNet-50) 98.3±0.4 65.9±1 73.4±0.8 79.3±1.3 79.2±0.3
ERM + SWAD [6] 98.8 63.3 75.3 79.2 79.1
DIWA [40] 98.9 62.4 73.9 78.9 78.6
MIRO [19] + SWAD [6](use ResNet-50) - - - - 79.6±0.2
GMDG + SWAD (use ResNet-50) 98.9±0.4 63.6±0.2 76.4±0.5 79.5±0.6 79.6±0.1
MIRO [19] (use RegNetY-16GF) - - - - 79.9±0.6
GMDG (use RegNetY-16GF) 97.9±1.3 66.8±2.1 80.8±1 83.9±1.8 82.4±0.6
MIRO [19] + SWAD [6] (use RegNetY-16GF) - - - - 81.7±0.1
GMDG + SWAD (use RegNetY-16GF) 98.4±0.1 65.5±1.4 79.9±0.4 84.9±0.9 82.2±0.3
Table 17: Classification experiments on VLCS: More results of full GMDG for each category.
PACS art_painting cartoon photo sketch Avg.
ERM [14] 84.7 80.8 97.2 79.3 84.2
MIRO [19] (use ResNet-50) - - - - 85.4±0.4
GMDG (use ResNet-50) 84.7±1.0 81.7±2.4 97.5±0.4 80.5±1.8 85.6±0.3
ERM + SWAD [6] 89.3 83.4 97.3 82.5 88.1
DIWA [40] 90.6 83.4 98.2 83.8 89.0
MIRO [19] + SWAD [6] (use ResNet-50) - - - - 88.4±0.1
GMDG + SWAD (use ResNet-50) 90.1±0.6 83.9±0.2 97.6±0.5 82.3±0.7 88.4±0.1
MIRO [19] (use RegNetY-16GF) - - - - 97.4±0.2
GMDG (use RegNetY-16GF) 97.5±1.0 97.0±0.2 99.4±0.2 95.2±0.4 97.3±0.1
MIRO [19] + SWAD [6] (use RegNetY-16GF) - - - - 96.8±0.2
GMDG + SWAD (use RegNetY-16GF) 98.3±0.3 98.0±0.1 99.5±0.3 95.3±0.8 97.9±0.0
Table 18: Classification experiments on PACS: More results of full GMDG for each category.
DomainNet clipart info painting quickdraw real sketch Avg.
ERM [14] 50.1 63.0 21.2 63.7 13.9 52.9 44.0
MIRO [19] (use ResNet-50) - - - - - - 44.3±0.2
GMDG (use ResNet-50) 63.4±0.3 22.4±0.4 51.4±0.4 13.4±0.8 64.4±0.3 52.4±0.4 44.6±0.1
ERM + SWAD [6] 53.5 66.0 22.4 65.8 16.1 55.5 46.5
DIWA [40] 55.4 66.2 23.3 68.7 16.5 56.0 47.7
MIRO [19] + SWAD [6] (use ResNet-50) - - - - - - 47.0±0.0
GMDG + SWAD (use ResNet-50) 66.4±0.3 23.8±0.1 54.5±0.3 15.8±0.1 67.5±0.1 55.8±0.0 47.3±0.1
MIRO [19] (use RegNetY-16GF) - - - - - - 53.8±0.1
GMDG (use RegNetY-16GF) 74.0±0.3 39.5±1.5 61.5±0.3 16.3±1.2 73.9±1.5 62.8±2.4 54.6±0.1
MIRO [19] + SWAD [6] (use RegNetY-16GF) - - - - - - 60.7±0.0
GMDG + SWAD (use RegNetY-16GF) 79.0±0.0 46.9±0.4 69.9±0.4 20.7±0.6 81.1±0.3 70.3±0.4 61.3±0.2
Table 19: Classification experiments on DomainNet: More results of full GMDG for each category.