Rethinking Multi-domain Generalization with A General Learning Objective
Abstract
Multi-domain generalization (mDG) universally aims to minimize the discrepancy between training and testing distributions so as to enhance the marginal-to-label distribution mapping. However, the existing mDG literature lacks a general learning-objective paradigm and often imposes constraints on a static target marginal distribution. In this paper, we propose to relax this constraint by leveraging a learnable mapping on the targets (a Y-mapping). We rethink the learning objective for mDG and design a new general learning objective that can interpret and analyze most existing mDG wisdom. This general objective is bifurcated into two synergistic aims: learning domain-independent conditional features and maximizing a posterior. Explorations also extend to two effective regularization terms that incorporate prior information and suppress invalid causality, alleviating the issues that come with the relaxed constraint. We theoretically contribute an upper bound for the domain alignment of domain-independent conditional features, disclosing that many previous mDG endeavors actually optimize only part of the objective and thus achieve limited performance. As such, our study distills the general learning objective into four practical components, providing a general, robust, and flexible mechanism to handle complex domain shifts. Extensive empirical results indicate that the proposed objective with the Y-mapping leads to substantially better mDG performance in various downstream tasks, including regression, segmentation, and classification. Code is available at https://github.com/zhaorui-tan/GMDG/tree/main.
1 Introduction
Domain shift, which breaks the independent and identically distributed (i.i.d.) assumption between training and test distributions [51], poses a common yet challenging problem in real-world scenarios. Multi-domain generalization (mDG) [3] is garnering increasing attention owing to its promising capacity to utilize multiple distinct but related source domains for model optimization, ultimately intending to generalize well to unseen domains. Intrinsically, the primary objective for mDG is the maximization of the joint distribution between observations and targets across all domains:
(1) |
A prevalent approach starts by maximizing the marginal distribution of the observations while presuming an invariant conditional distribution across domains [58], anchored on the assumption that the conditional distribution remains consistent across domains.
| Component | Previous methods | Ours |
|---|---|---|
| Aim1: Learning domain invariance | Others: None. DANN: unconditional feature alignment across source domains. CDANN, CIDG, MDA: conditional feature alignment across domains. | GAim1 |
| Reg1: Integrating prior | Others: None. MIRO, SIMPLE: alignment with pre-trained oracle features. | GReg1 |
| Aim2: Maximizing A Posterior (MAP) | Others: standard posterior maximization (e.g., ERM). | GAim2 |
| Reg2: Suppressing invalid causality | Others: None. CORAL: alignment of second-order feature statistics. MDA, RobustNet: constraints on conditional covariance shifts. | GReg2 |
Is the target distribution truly static across domains? In other words, does Y truly lack domain-dependent features? In classification tasks, the influence of the domain on Y is typically marginal. However, this assumption is not universally applicable, particularly in tasks such as regression or segmentation. Consequently, MDA [16] relaxes the assumption of a stable conditional distribution by providing an average class discrepancy, allowing both the conditional and the marginal distributions to vary across domains. However, MDA has to conduct class-specific sample selection within domains to obtain these statistics, which constrains its objective's universality and struggles with tasks beyond basic classification, especially when Y is not discrete.
To better tackle the domain-dependent variations in both X and Y for broader tasks besides classification, we introduce two learnable mappings, φ and ψ, that project X and Y, respectively, into the same latent Reproducing Kernel Hilbert Space (RKHS) and are assumed to extract domain-independent features. Incorporating these, Eq. 1 can be rewritten as
(2) |
Built upon the optimization of Eq. 2, we further identify two additional issues that warrant consideration. 1) The synergy between integrating prior information and domain-invariant feature learning plays a crucial role: pre-trained (oracle) models can be used as priors [19, 30] to regularize feature learning. 2) An invalid-causality predicament arises when the static assumption is relaxed while learning the invariance, which conflicts with the causality premise behind maximizing the posterior. Efforts must be made to suppress invalid causality during invariant-feature learning (refer to Eq. 8 for the derivation).
Considering these findings, a general objective for mDG that copes with the above issues and effectively relaxes the static target distribution assumption is crucial. Specifically, it should consist of four key parts: Aim1, learning domain-invariant representations, and Aim2, maximizing the posterior, together with two regularizations: Reg1, integrating prior information, and Reg2, suppressing invalid causality. In essence, the objective should certify invariant representations across domains while preserving the prediction relationship. As a notable contribution, we redesign the conventional mDG paradigm and uniformly simplify most previous works' empirical objectives, as summarized in Table 1, while notations are shown in Table 2.
| Symbols | Descriptions |
|---|---|
| D_i; D_u | The i-th observed (seen) domain among all domains; the unseen domains among all domains. |
| X, Y; X_i, Y_i; X_u, Y_u | All observations and targets; observations and targets in D_i; observations and targets in D_u. |
| P(·) | Distributions, where the argument corresponds to the random variables. |
| φ, ψ | Learnable transformations that codify X and Y into the same latent RKHS. |
| φ(X), ψ(Y) | Mapped X and Y. Within the RKHS realm, they are assumed to follow multivariate Gaussian distributions. |
| O; R; Cov(·, ·) | Prior knowledge (oracle model); empirical risks; covariance between two variables. |
| h | Predictor that predicts the mapped targets from φ(X). |
| KL; H; CE | KL divergence; entropy; cross-entropy. |
Most current mDG studies focus only on classification. SOTA methods such as MIRO [19] and SIMPLE [30] propose learning features similar to those of "oracle" models as a substitute for learning domain-invariant representations for mDG. Notably, we counter MIRO's argument by confirming the persisting necessity of domain-invariant features, even under a prior distribution, through a theoretical derivation based on minimizing the Generalized Jensen-Shannon Divergence (GJSD). MDA [16] pioneered the relaxation of the static assumption, yet without explicitly introducing a mapping on Y, and overlooked the invalid causality that arises upon the relaxation. Beyond classification, RobustNet [10] and VA-DepthNet [32] explore mDG settings in segmentation and regression but propose no explicit objective for mDG. Importantly, our theoretical analysis and empirical findings suggest that the mere aggregation of all the aforementioned objectives fails to yield a comprehensive general objective for mDG. For instance, an unconditional covariance-alignment term, coupled with prior knowledge utilization, can inadvertently degrade performance (see Section 5.5).
In this paper, we introduce the General Multi-Domain Generalization Objective (GMDG) to overcome the limitations of current methods by relaxing the static assumption on the target distribution (the overall formulation is shown in Section 3). Meanwhile, we propose an actionable solution to the invalid causality through the minimization of the Conditional Feature Shift (CFS). Our main contributions can be summarized as follows:
- We theoretically prove that domain generalization can be improved through the minimization of the Generalized Jensen-Shannon Divergence (GJSD) with the incorporation of prior knowledge, leading to the derivation of an alignment upper bound (PUB) (Section 3).
- We analyze existing approaches, demonstrating their incomplete optimization of GMDG and identifying unexpected terms they inadvertently introduce (Section 4).
- Our approach is the first designed to be compatible with existing mDG frameworks, and it exhibits performance improvements in a suite of tasks, including regression, segmentation, and classification, as confirmed by our comprehensive experiments (Section 5).
Notably, our classification results, obtained using only one pre-trained model as the prior, exceed those of the SOTA SIMPLE++, which employs 283 pre-trained models as an ensemble oracle, while also yielding consistent improvements in regression and segmentation, as shown in Figure 1. This further suggests the superiority of GMDG.
2 Related work
Multi-domain generalization. Most current mDG methods focus only on classification tasks. To learn better domain-independent representations for mDG, DANN [13] minimizes feature divergences between the source domains. CDANN [29], CIDG [28], and MDA [16] additionally take conditions into consideration and aim to learn conditionally invariant features across domains. [5, 7, 19, 30] point out that learning representations invariant to the source domains is insufficient for mDG. Thus, MIRO [19] and SIMPLE [30] adopt pre-trained models as an oracle for seeking better general representations across various domains, including unseen target domains. Meanwhile, RobustNet [10] constrains conditional covariance shifts and conducts mDG segmentation, and little exploration focuses on mDG regression, though many other works pay attention to single-domain generalization [37, 20, 24, 32]. Our study shows that these objectives only partially optimize GMDG, leading to sub-optimal results.
Multi-domain generalization assumptions. In the literature, different assumptions are proposed to simplify the task described by the original objective in Eq. 1. One assumption is that the conditional distribution is stable while only the marginal distribution changes across domains [45, 55]. [54] point out that the marginal shift is usually caused by the class-conditional distribution; thus, either the class-conditional distribution changes while the label distribution stays stable, or the label distribution changes while the class-conditional distribution stays stable, or both change. Thus, MDA [16] allows both to change across domains but needs to select samples of each class for the calculation. Moreover, it considers no prior. This paper further relaxes these assumptions by extracting domain-invariant features from the targets as well.
Using pre-trained models as an oracle. Previous methods such as MIRO [19] have employed pre-trained models as the oracle to regularize feature learning. SIMPLE [30] employs up to 283 pre-trained models as an ensemble and adaptively composes the most suitable oracle model. RobustNet [10] and VA-DepthNet [32] only use pre-trained models as initialization rather than as additional supervision.
3 A general multi-domain generalization objective
The General Multi-Domain Generalization Objective (GMDG) essentially comprises a weighted combination of four terms, each designated by an alias:
|
(3) |
Theoretically, we justify that GAim1 and GReg1 can be effectively optimized by minimizing the Generalized Jensen-Shannon Divergence (GJSD), with prior knowledge, between the visible domains. Meanwhile, we derive an upper bound termed the alignment Upper Bound with Prior of mDG (PUB). Importantly, we demonstrate that GReg2 copes with the invalid causality brought by relaxing the static assumption. GReg2 can be simplified to minimizing the Conditional Feature Shift (CFS), i.e., the shift between the unconditional and conditional feature statistics. More theoretical details are provided as follows.
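To make the four-part structure of Eq. 3 concrete, the following is a minimal PyTorch-style sketch of one training step that combines the four components; the function names, the per-term loss callables, and the two trade-off weights are illustrative assumptions rather than the released implementation.

```python
import torch

def gmdg_step(phi, psi, head, oracle, batches_per_domain,
              losses, lambda_reg1=1.0, lambda_reg2=1.0):
    """One hypothetical GMDG training step over a list of per-domain batches.

    phi, psi : learnable mappings for observations X and targets Y
    head     : predictor producing (mapped) targets from phi(X)
    oracle   : frozen pre-trained model providing prior features
    losses   : dict of callables {'aim1', 'aim2', 'reg1', 'reg2'} implementing
               the four terms of Eq. 3 (their definitions follow in Sec. 3.2)
    """
    feats_x, feats_y, preds, prior_feats, targets = [], [], [], [], []
    for x, y in batches_per_domain:           # one mini-batch per seen domain
        fx, fy = phi(x), psi(y)               # map X and Y into the shared space
        feats_x.append(fx); feats_y.append(fy)
        preds.append(head(fx)); targets.append(fy)
        with torch.no_grad():
            prior_feats.append(oracle(x))     # frozen oracle features for GReg1

    l_aim2 = losses['aim2'](preds, targets)          # maximize the posterior
    l_aim1 = losses['aim1'](feats_x, feats_y)        # align conditional features
    l_reg1 = losses['reg1'](feats_x, prior_feats)    # stay close to the prior
    l_reg2 = losses['reg2'](feats_x, feats_y)        # suppress invalid causality
    return l_aim2 + l_aim1 + lambda_reg1 * l_reg1 + lambda_reg2 * l_reg2
```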

3.1 Theoretical Details
Learning domain-independent conditional features under a prior. The generalization alignment upper bound (PUB), a novel variational upper bound tied to domain-generalization alignment, is derived based on the generalized Jensen-Shannon divergence (GJSD) [31].
Definition 1 (GJSD).
Given N distributions P_1, …, P_N and a corresponding probability weight vector w = (w_1, …, w_N), the GJSD is defined as:
GJSD_w(P_1, …, P_N) = H( Σ_{i=1}^{N} w_i P_i ) − Σ_{i=1}^{N} w_i H(P_i)    (4)
Our method addresses the standard scenario in which the weights are evenly distributed across domains, i.e., w_i = 1/N. To obtain domain-invariant conditional features, minimizing the domain gap between the conditional feature distributions can be converted to minimizing the GJSD across all seen domains:
(5) |
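As a concrete reference for Definition 1, the sketch below computes the GJSD of a set of discrete distributions with NumPy, defaulting to the uniform weights used in Eq. 5; the variable names are illustrative.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def gjsd(dists, weights=None):
    """Generalized Jensen-Shannon divergence:
    H(sum_i w_i P_i) - sum_i w_i H(P_i)."""
    dists = np.asarray(dists, dtype=float)
    n = len(dists)
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, dtype=float)
    mixture = (w[:, None] * dists).sum(axis=0)
    return entropy(mixture) - sum(wi * entropy(p) for wi, p in zip(w, dists))

# Identical distributions give GJSD = 0; disjoint point masses give ln 2.
print(gjsd([[0.5, 0.5], [0.5, 0.5]]))   # ~0.0
print(gjsd([[1.0, 0.0], [0.0, 1.0]]))   # ~0.693 = ln 2
```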
We further involve a prior knowledge distribution within a variational density model class. Drawing upon [9], we obtain a variational upper bound:
(6) |
where the entropy term is constant w.r.t. the optimized parameters and hence ignored during optimization. The novel PUB, derived from Eq. 6, is:
(7) |
Minimizing the PUB is the proposed objective for GAim1 and GReg1. This implies that methods like MIRO, which solely minimize GReg1, might result in substantial suboptimality, leaving the domain gap unresolved. Two further situations are discussed in Section 4.
Suppressing invalid causality. The relaxation of the static assumption may lead to unexpected causality while learning the invariance. GAim1 can be reformulated as:
(8) |
where minimizing GAim1 under the relaxed static assumption may introduce undesired dependence between the mapped features and the mapped targets, in both directions, through the terms it contains.

Figure 2 graphically depicts the causal diagram under this scenario. Since the prediction relies on the causal path from φ(X) to ψ(Y), this path should be preserved. However, the reverse causal path from ψ(Y) to φ(X) may compromise the prediction when the targets are unknown during inference, leading to generalization degradation. This unveils that the invalid causality from ψ(Y) to φ(X) that may arise while learning invariance needs to be suppressed while the predictive path is retained, which can be simplified as:
(9) |
where the second term is GReg2. See more mathematical details in Supplementary 7. Our experiments also unveil the phenomenon of invalid causality within invariant feature learning, where suppressing it improves generalizability. The investigation of previous objectives also discloses that, in addressing the varying target distribution, constructs akin to GReg2 are often implicitly included (see Table 1), though their efficacy was not explicitly stated. Moreover, their efficacy may be compromised due to the lack of the mapping ψ and of the other objective terms.
Then, we assume that the mapped features in the RKHS follow multivariate Gaussian-like distributions, so that φ(X) conditioned on ψ(Y) is also Gaussian. GReg2 can then be simplified as:
(10) |
where the inequality holds because conditioning reduces entropy. This implies that the conditional covariance is dominated by the unconditional covariance, considering both are positive semi-definite. Distinct from [52, 18], which decompose causal effects through extra networks, our method is based on transfer entropy (TE): we require that conditioning on ψ(Y) does not change the distribution of φ(X). Thus, the minimum of Eq. 10 is attained iff the conditional covariance of φ(X) given ψ(Y) equals the unconditional covariance of φ(X), where the conditional covariance is Σ_XX − Σ_XY Σ_YY^{-1} Σ_YX, per [21]. Therefore, GReg2 is simplified to minimizing the Conditional Feature Shift (CFS):
(11) |
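Eq. 11 is not reproduced here, but under the Gaussian assumption one natural reading of the CFS is the gap between the unconditional covariance of φ(X) and the conditional covariance Σ_XX − Σ_XY Σ_YY^{-1} Σ_YX from [21]. The PyTorch sketch below follows this reading; the Frobenius-norm penalty and the ridge term `eps` are assumptions for illustration, not necessarily the exact empirical loss.

```python
import torch

def conditional_feature_shift(fx, fy, eps=1e-4):
    """Sketch of a CFS penalty: gap between Cov(phi(X)) and the Gaussian
    conditional covariance Cov(phi(X) | psi(Y)); it vanishes iff
    Cov(phi(X), psi(Y)) = 0, matching the condition derived above.

    fx: (n, d) mapped observation features phi(X)
    fy: (n, d) mapped target features psi(Y)
    eps: ridge added to Sigma_YY for a stable inverse (an assumption).
    """
    fx = fx - fx.mean(dim=0, keepdim=True)
    fy = fy - fy.mean(dim=0, keepdim=True)
    n = fx.shape[0]
    sxx = fx.T @ fx / (n - 1)                        # Sigma_{phi(X)}
    sxy = fx.T @ fy / (n - 1)                        # Sigma_{phi(X), psi(Y)}
    syy = fy.T @ fy / (n - 1)                        # Sigma_{psi(Y)}
    eye = torch.eye(syy.shape[0], device=syy.device, dtype=syy.dtype)
    cond = sxx - sxy @ torch.linalg.inv(syy + eps * eye) @ sxy.T  # Sigma_{phi(X)|psi(Y)}
    return torch.linalg.norm(sxx - cond, ord="fro")  # the conditional feature shift

# usage: loss_reg2 = conditional_feature_shift(phi_x, psi_y) on a mini-batch
```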
3.2 Empirical losses derivations
This section presents the empirical losses used to implement Eq. 3. More detailed derivations can be found in Supplementary 7. We introduce the mapping ψ to relax the static target distribution assumption. The implementation of ψ varies across tasks, utilizing MLPs for classification and regression and ResNet-50 for segmentation. To promote a consistent latent space, the mapped ψ(Y) retains the same dimension as φ(X). The observations and targets are separately fed into their mappings; predictions are then made from φ(X), and the posterior over ψ(Y) is maximized:
(12) |
To mitigate domain shifts and learn domain invariance, we minimize cross-domain conditional feature distribution discrepancies. Specifically, the mean and variance of the joint distribution of φ(X) and ψ(Y) in each domain are estimated using VAE encoders. Considering the pairs of means and variances from all seen domains, we derive a joint Gaussian expression for each domain and aggregate them across domains. Based on the PUB in Eq. 7, we introduce a loss to minimize the conditional feature gap across domains:
(13) |
To integrate prior information, similar to MIRO, we utilize VAE encoders to capture the means and variances of the learned features and of the features output by the oracle model. Given that the oracle preserves the correlation between observations and targets and is frozen during training, its constant term is omitted in the empirical loss. We propose to minimize the divergence between the learned features and the oracle features:
(14) |
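As an illustration of how the losses behind Eqs. 12–14 could be instantiated, the sketch below treats each per-domain (and oracle) feature distribution as a diagonal Gaussian whose mean and variance come from a VAE-style encoder and matches them with KL divergences; the interfaces and the exact divergence choices are assumptions made for exposition, not the released implementation.

```python
import torch
import torch.nn.functional as F

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL(N(mu_q, diag(var_q)) || N(mu_p, diag(var_p))), summed over dimensions."""
    return 0.5 * (torch.log(var_p / var_q)
                  + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0).sum(-1).mean()

def l_aim2(pred, target_feat):
    """Eq. 12 sketch: prediction loss driving posterior maximization."""
    return F.mse_loss(pred, target_feat)

def l_aim1(domain_stats):
    """Eq. 13 sketch: align each domain's conditional feature Gaussian with
    the average Gaussian across the seen domains."""
    mus = torch.stack([m for m, _ in domain_stats])
    variances = torch.stack([v for _, v in domain_stats])
    mu_bar, var_bar = mus.mean(0), variances.mean(0)
    return sum(gaussian_kl(m, v, mu_bar, var_bar) for m, v in domain_stats) / len(domain_stats)

def l_reg1(feat_stats, oracle_stats):
    """Eq. 14 sketch: keep learned feature statistics close to the frozen
    oracle's feature statistics (MIRO-style prior matching)."""
    (mu_f, var_f), (mu_o, var_o) = feat_stats, oracle_stats
    return gaussian_kl(mu_f, var_f, mu_o, var_o)
```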
4 Connection to previous methods
We validate our objective function’s efficiency theoretically and demonstrate its connections with previous objectives, indicating that previous mDG efforts have partly optimized the proposed objective. Refer to Table 1 and Supplementary 8 for a detailed understanding of previous objectives.
Using ψ vs. not using ψ. Previous works rarely employed a mapping ψ on Y, whereas we show its benefits for mDG tasks. Employing Jensen's inequality, we can bound the corresponding risks. When the other objectives remain the same, we compare a model whose parameters are optimized with the ψ mapping against another model optimized without it:
(17) |
The equivalence is valid only if ψ serves as a bijection, a condition prevalent in practical scenarios like classification. Thus, this mapping does not hinder model performance in classification tasks. It also implies that using ψ can lower the generalization risk after optimization, especially when Y contains domain-dependent features. This could potentially yield superior generalization in segmentation and regression tasks. A detailed proof can be seen in Supplementary 7.
Remark 1 (Importance of the mapping ψ).
Besides relaxing the static distribution assumption on Y, ψ conveys two other notable benefits: 1) X and Y may originate from different sample spaces with distinct shapes; by applying the mappings, ψ(Y) can be adapted to the same shape as φ(X). In practice, the concatenation of φ(X) and ψ(Y) is often used as input to the VAE encoders that capture the joint feature distribution. 2) The derivation of Eq. 11 requires the computation of covariance, which mandates that the two variables occupy the same sample space, a condition fulfilled by applying ψ.
Incorporating conditions leads to lower generalization risk when learning invariant representations. A few past works [13, 48] minimize domain gaps between features without considering conditions. Their objective for Aim1 is:
|
(18) |
While the other objectives are identical, we consider a model trained with the unconditional objective above against another model trained with GAim1. In this scenario, their empirical risks satisfy:
(19) |
See the mathematical details in Supplementary 7. This reveals that, without considering conditions, the minimization of generalization risk is merely partial due to the overlooked risk correlated with the targets. Additional evidence supporting the importance of considering conditions is provided by CDANN [29] and CIDG [28]. Our experiments, conducted through a uniform implementation, also lend support to it.
Effect of the oracle model. As stated by MIRO [19] and SIMPLE [30], a generalized oracle covering both seen and unseen domains yields significant improvements. During the derivation of Eq. 7, we find that disregarding the GAim1 term, as MIRO [19] and SIMPLE [30] do, may result in outcomes inferior to our proposed objective.
Remark 2 (Synergy of learning invariance, integrating prior knowledge, and suppressing invalid causality).
For readability, we have divided the overall mDG objective into four aspects, despite all terms being interconnected. Specifically, as shown by the PUB in Eq. 7, GReg1 collaborating with GAim1 brings more performance gains than when it is applied alone. Moreover, Eq. 8 shows that the side effect of invalid causality in GAim1 is alleviated by combining it with GReg2, underscoring the significance of combining learning invariance, integrating prior knowledge, and suppressing invalid causality. It also suggests that all terms are synergistic and contribute together to improved results.
Validating our assertions via experiments, the ablation study in Section 5.5 finds that a simple cross-domain covariance constraint (GReg2 alone) cannot ensure improved results when prior knowledge is used.
5 Experiments
Four groups of experiments are conducted to validate the proposed GMDG. A toy example validates the relaxation of the static assumption brought by the mapping ψ. Furthermore, we conduct experiments on regression, segmentation, and classification tasks using complex benchmark datasets. For simplicity, please refer to Table 4 for the formulations of the terms and their aliases.
5.1 Toy experiments on synthetic datasets
We perform a regression task on synthetic data to illustrate the impact of using ψ, showcasing its potential for superior results when the underlying relationship is not bijective.
| | Affine transformations | | | Squared and cubed transformations | | |
|---|---|---|---|---|---|---|
| | ERM | + (w/o ψ) | + (with ψ) | ERM | + (w/o ψ) | + (with ψ) |
| No DCDS | 0.3485 | 0.3537 | 0.3369 | 1.5150 | 0.4652 | 0.3370 |
| With DCDS | 0.4144 | 0.2290 | 0.1777 | 0.8720 | 1.5868 | 0.8241 |
GAim2 | GReg1 | ||
---|---|---|---|
iAim1 | GAim1 | ||
iReg2 | GReg2 |
Synthetic data. Supplementary Figure 3 illustrates the construction of the synthetic data, built on paired latent features of X and Y with a linear relationship, ensuring that an invariant exists. To better explore this issue, we created four distinct data groups: without and with distribution shift, using affine or squared and cubed transformations as domain-conditioned transformations, and their cross combinations, as sketched below. More description can be found in Supplementary 10.
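A minimal sketch of how such domain-conditioned synthetic data could be generated, following the description above and Table 11; the dimensionalities, shift magnitudes, and transformation coefficients below are arbitrary illustrative choices.

```python
import numpy as np

def make_domain(n=1000, domain_id=0, shift=True, transform="affine", seed=0):
    """Generate one synthetic domain with linearly related latent X-Y features."""
    rng = np.random.default_rng(seed + domain_id)
    z = rng.normal(size=(n, 1))                      # shared latent feature
    if shift:
        z = z + 0.5 * domain_id                      # per-domain distribution shift
    x_lat, y_lat = z, 2.0 * z + 1.0                  # linear X-Y relationship
    if transform == "affine":                        # domain-conditioned transform
        extra = (domain_id + 1) * x_lat + domain_id
    elif transform == "poly":                        # squared / cubed transforms
        extra = x_lat ** (2 + domain_id % 2)
    else:                                            # pure-noise second group
        extra = rng.normal(size=x_lat.shape)
    X = np.concatenate([x_lat, extra], axis=1)       # raw observations: two groups
    Y = np.concatenate([y_lat, (domain_id + 1) * y_lat], axis=1)  # domain-dependent targets
    return X, Y

domains = [make_domain(domain_id=i) for i in range(3)]  # two for training, one unseen
```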
Experimental setup. We use two of the three constructed domains for training and validation and the last one for testing. Validation and test losses are calculated by MSE. To maintain fairness, all experiments adopt the same network, selected by the best validation results. Learning aims to find invariant hidden features of X while preserving the ability to predict Y from unseen X.
Results. Toy experiment results are reported in Table 3 and visualized in Figure 4. Across all settings, employing the invariance objective together with ψ yields superior results, outperforming ERM and the invariance objective without ψ, validating the enhanced generalization brought by ψ whenever Y varies per domain and supporting Eq. 17. Supplementary Figure 4 shows that ψ does learn invariant representations for Y, relaxing the previous Y-invariance assumption. Specifically, learning the invariance of Y with ψ results in superior invariant representations, as the learned latent representations of Y are primarily linear, aligning with the linear relationship between X and Y used during data construction. The bottom-left figures reveal that, although ERM has learned the most invariant features of X, it suffers the worst test loss, indicating that a well-learned invariant representation of X is not sufficient when Y also has domain-dependent traits. The results also suggest that, when Y varies across domains, learning invariance without ψ may not yield superior results.
| TD | Method | SILog | Abs Rel | RMS | Sq Rel | RMS log | δ1 | δ2 |
|---|---|---|---|---|---|---|---|---|
| S | Backbone (Swin-L) | 11.1473 | 10.98 | 56.11 | 8.86 | 14.32 | 87.47 | 98.05 |
| S | VA-DepthNet (GAim2) | 10.9357 | 11.15 | 56.36 | 9.02 | 14.41 | 87.73 | 98.02 |
| S | GReg1+GAim2 | 10.6548 | 10.49 | 52.63 | 8.04 | 13.72 | 89.75 | 98.12 |
| S | GAim1+GReg1+GAim2 | 10.1924 | 10.39 | 50.52 | 7.68 | 13.39 | 89.86 | 98.17 |
| S | GAim1+GReg1+GAim2+GReg2 (GMDG) | 10.1402 | 10.27 | 50.59 | 7.72 | 13.22 | 90.53 | 97.98 |
| Co | Backbone (Swin-L) | 14.2078 | 16.20 | 81.22 | 16.30 | 20.59 | 72.44 | 96.35 |
| Co | VA-DepthNet (GAim2) | 14.7080 | 16.76 | 83.17 | 17.07 | 21.44 | 71.46 | 95.11 |
| Co | GReg1+GAim2 | 14.1600 | 16.41 | 80.78 | 16.56 | 21.02 | 72.06 | 95.38 |
| Co | GAim1+GReg1+GAim2 | 13.9978 | 15.90 | 77.97 | 15.70 | 20.25 | 73.37 | 95.86 |
| Co | GAim1+GReg1+GAim2+GReg2 (GMDG) | 14.2803 | 15.57 | 77.45 | 15.27 | 19.94 | 74.40 | 95.47 |
| O | Backbone (Swin-L) | 11.6132 | 12.87 | 44.51 | 8.01 | 15.58 | 84.57 | 97.87 |
| O | VA-DepthNet (GAim2) | 11.5080 | 12.50 | 43.98 | 7.67 | 15.37 | 84.78 | 97.87 |
| O | GReg1+GAim2 | 10.4061 | 11.71 | 39.02 | 6.87 | 13.83 | 88.27 | 98.17 |
| O | GAim1+GReg1+GAim2 | 10.4907 | 11.53 | 38.43 | 6.69 | 13.68 | 88.46 | 98.17 |
| O | GAim1+GReg1+GAim2+GReg2 (GMDG) | 10.4438 | 11.33 | 38.95 | 6.66 | 13.67 | 88.86 | 98.16 |
| H | Backbone (Swin-L) | 14.7350 | 18.36 | 52.31 | 13.05 | 20.06 | 74.74 | 93.94 |
| H | VA-DepthNet (GAim2) | 15.0300 | 17.99 | 56.54 | 13.20 | 20.64 | 72.40 | 94.38 |
| H | GReg1+GAim2 | 14.7377 | 17.02 | 55.39 | 12.06 | 19.86 | 74.05 | 95.17 |
| H | GAim1+GReg1+GAim2 | 14.5018 | 17.14 | 52.10 | 12.01 | 19.37 | 76.13 | 94.95 |
| H | GAim1+GReg1+GAim2+GReg2 (GMDG) | 14.1414 | 15.90 | 52.22 | 10.72 | 18.95 | 76.27 | 96.10 |
| Avg. | Backbone (Swin-L) | 12.9258 | 14.60 | 58.54 | 11.56 | 17.64 | 79.81 | 96.55 |
| Avg. | VA-DepthNet (GAim2) | 13.0454 | 14.60 | 60.01 | 11.74 | 17.97 | 79.09 | 96.35 |
| Avg. | GReg1+GAim2 | 12.4897 | 13.91 | 56.96 | 10.88 | 17.11 | 81.03 | 96.71 |
| Avg. | GAim1+GReg1+GAim2 | 12.2957 | 13.74 | 54.76 | 10.52 | 16.67 | 81.96 | 96.79 |
| Avg. | GAim1+GReg1+GAim2+GReg2 (GMDG) | 12.2514 | 13.27 | 54.80 | 10.09 | 16.45 | 82.52 | 96.93 |
5.2 Regression on benchmark datasets: Monocular depth estimation
We conduct the Monocular Depth Estimation task as the real-world regression task to further verify GMDG.
Experimental setup. We employ VA-DepthNet [32] with a Swin-L [33] backbone as the baseline and follow their hyperparameter settings. Experiments are conducted on NYU Depth V2 [46]. To construct multiple domains, we split the dataset into four scene categories: 'School' (S), 'Office' (O), 'Home' (H), and 'Commercial' (Co). We conduct standard leave-one-out cross-validation as the evaluation method and use the best checkpoint on the seen domains for evaluation. Note that all models are trained on the newly constructed dataset. Popular evaluation metrics are used: the square root of the Scale Invariant Logarithmic error (SILog), Relative Squared error (Sq Rel), Relative Absolute error (Abs Rel), Root Mean Squared error (RMS), and threshold accuracies (δ). See more experimental details in Supplementary 10.
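The leave-one-out protocol used here simply holds out one scene category per run; a short sketch, where the per-category sample lists are placeholders:

```python
# Leave-one-out cross-validation over the four constructed NYU Depth V2 domains.
domains = {"School": [], "Office": [], "Home": [], "Commercial": []}  # sample lists (placeholders)

for unseen in domains:
    train_samples = [s for name, split in domains.items() if name != unseen for s in split]
    test_samples = domains[unseen]
    # Train on the three seen domains; evaluate the best seen-domain checkpoint on `unseen`.
```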
Results. The monocular depth estimation results are exhibited in Table 5. Using the terms proposed in GMDG leads to better generalization on unseen domains, and using the full GMDG leads to the best results in most metrics. The improvements suggest the feasibility of our GMDG in real-world regression tasks. Specifically, using GReg2 with the other terms significantly improves the results when Home is the unseen domain, which is the most difficult domain for VA-DepthNet to generalize to, and barely compromises performance when the other domains serve as the unseen domain. This reveals that suppressing invalid causality can improve the generalization of the model (refer to Figure 2 (b) for visual results). Note that, except for SILog, all metric results are scaled by 100 for readability; due to the lack of an objective targeting the mDG problem, VA-DepthNet performs worse than its baseline. See more visualizations in Supplementary 11.
5.3 Segmentation on benchmark datasets
Experimental setup. We follow the experimental setup of RobustNet [10] for mDG segmentation experiments, using DeepLabV3+ [8] as the semantic segmentation architecture with a ResNet-50 backbone and an SGD optimizer. As shown in Table 1, RobustNet's objective is equivalent to using GAim2 and GReg2. Consistent with previous methods, mean Intersection over Union (mIoU) serves as our evaluation metric. Datasets comprise real-world datasets (Cityscapes [11] (Ci), BDD-100K [53] (B), Mapillary [35] (M)) and synthetic datasets (GTAV [41], SYNTHIA [42]). Specifically, we train a model on GTAV and Cityscapes and test on the other datasets. We compare our results to DeepLabv3+ [8], IBN-Net [36], and RobustNet [10]. See Supplementary 10 for more experimental details.
TD | Ci | B | M | Avg. |
---|---|---|---|---|
DeepLabv3+ | 35.46 | 25.09 | 31.94 | 30.83 |
IBN-Net | 35.55 | 32.18 | 38.09 | 35.27 |
RobustNet (GAim2, GReg2) | 37.69 | 34.09 | 38.49 | 36.76 |
GAim1+GAim2+GReg2 | 38.58 | 34.72 | 39.11 | 37.47 |
GReg1+GAim2+GReg2 | 38.13 | 35.02 | 39.29 | 37.48 |
GAim1+GReg1+GAim2+GReg2 (GMDG) | 38.62 | 34.71 | 39.63 | 37.65 |
Results. Table 6 shows the efficacy of our proposed objective in segmentation tasks upon introducing ψ. The ablation results highlight that using ψ alongside GAim1 enhances the baseline performance, experimentally substantiating that introducing ψ to relax the assumptions boosts generalization. Using GReg1 alone also improves the average mIoU. Importantly, the largest enhancement in average mIoU is observed when GReg1 and GAim1 are used together, which finds validation in the PUB derivation in Eq. 7. See more results and visualizations in Supplementary 11.
5.4 Classification on benchmark datasets
Experimental setup. We operate on the DomainBed suite [14] and leverage standard leave-one-out cross-validation as the evaluation method. We experiment on real-world benchmark datasets, including PACS [25], VLCS [12], OfficeHome [50], TerraIncognita [2], and DomainNet [38]. The results are averages over three trials of each experiment. Following MIRO, two backbones are used for training: ResNet-50 [15] pre-trained on ImageNet and RegNetY-16GF with SWAG pre-training [47]. The backbones are trained with our proposed objective alone and additionally with SWAD [6], respectively. See Supplementary 10 for more experimental details.
Non-ensemble methods | |||||||
TD | PACS | VLCS | OfficeHome | TerraInc | DomainNet | Avg. | |
MMD [27] | 84.7±0.5 | 77.5±0.9 | 66.3±0.1 | 42.2±1.6 | 23.4±9.5 | 58.8 | |
Mixstyle [57] | 85.2±0.3 | 77.9±0.5 | 60.4±0.3 | 44.0±0.7 | 34.0±0.1 | 60.3 | |
GroupDRO [43] | 84.4±0.8 | 76.7±0.6 | 66.0±0.7 | 43.2±1.1 | 33.3±0.2 | 60.7 | |
IRM [1] | 83.5±0.8 | 78.5±0.5 | 64.3±2.2 | 47.6±0.8 | 33.9±2.8 | 61.6 | |
ARM [56] | 85.1±0.4 | 77.6±0.3 | 64.8±0.3 | 45.5±0.3 | 35.5±0.2 | 61.7 | |
VREx [23] | 84.9±0.6 | 78.3±0.2 | 66.4±0.6 | 46.4±0.6 | 33.6±2.9 | 61.9 | |
CDANN [29] | 82.6±0.9 | 77.5±0.1 | 65.8±1.3 | 45.8±1.6 | 38.3±0.3 | 62.0 | |
DANN [13] | 83.6±0.4 | 78.6±0.4 | 65.9±0.6 | 46.7±0.5 | 38.3±0.1 | 62.6 | |
RSC [17] | 85.2±0.9 | 77.1±0.5 | 65.5±0.9 | 46.6±1.0 | 38.9±0.5 | 62.7 | |
MTL [4] | 84.6±0.5 | 77.2±0.4 | 66.4±0.5 | 45.6±1.2 | 40.6±0.1 | 62.9 | |
MLDG [26] | 84.9±1.0 | 77.2±0.4 | 66.8±0.6 | 47.7±0.9 | 41.2±0.1 | 63.6 | |
Fish [44] | 85.5±0.3 | 77.8±0.3 | 68.6±0.4 | 45.1±1.3 | 42.7±0.2 | 63.9 | |
ERM [49] | 84.2±0.1 | 77.3±0.1 | 67.6±0.2 | 47.8±0.6 | 44.0±0.1 | 64.2 | |
SagNet [34] | 86.3±0.2 | 77.8±0.5 | 68.1±0.1 | 48.6±1.0 | 40.3±0.1 | 64.2 | |
SelfReg [22] | 85.6±0.4 | 77.8±0.9 | 67.9±0.7 | 47.0±0.3 | 42.8±0.0 | 64.2 | |
CORAL [48] | 86.2±0.3 | 78.8±0.6 | 68.7±0.3 | 47.6±1.0 | 41.5±0.1 | 64.5 | |
mDSDI [5] | 86.2±0.2 | 79.0±0.3 | 69.2±0.4 | 48.1±1.4 | 42.8±0.1 | 65.1 | |
Use ResNet-50 [15] as oracle model. | |||||||
Style Neophile [20] | 89.11 | - | 65.89 | - | 44.60 | - | |
MIRO [19] (GReg1) | 85.4±0.4 | 79.0±0.3 | 70.5±0.4 | 50.4±1.1 | 44.3±0.2 | 65.9 | |
GMDG | 85.6±0.3 | 79.2±0.3 | 70.7±0.2 | 51.1±0.9 | 44.6±0.1 | 66.3 | |
Use RegNetY-16GF [47] as oracle model. | |||||||
MIRO | 97.4±0.2 | 79.9±0.6 | 80.4±0.2 | 58.9±1.3 | 53.8±0.1 | 74.1 | |
GMDG | 97.3±0.1 | 82.4±0.6 | 80.8±0.6 | 60.7±1.8 | 54.6±0.1 | 75.1 | |
Ensemble methods | |||||||
PACS | VLCS | OfficeHome | TerraInc | DomainNet | Avg. | ||
Use multiple oracle models. | |||||||
SIMPLE [30] | 88.6±0.4 | 79.9±0.5 | 84.6±0.5 | 57.6±0.8 | 49.2±1.1 | 72.0 | |
SIMPLE++ [30] | 99.0±0.1 | 82.7±0.4 | 87.7±0.4 | 59.0±0.6 | 61.9±0.5 | 78.1 | |
Use ResNet-50 [15] as oracle model. | |||||||
MIRO + SWAD | 88.4±0.1 | 79.6±0.2 | 72.4±0.1 | 52.9±0.2 | 47.0±0.0 | 68.1 | |
GMDG + SWAD | 88.4±0.1 | 79.6±0.1 | 72.5±0.2 | 53.0±0.7 | 47.3±0.1 | 68.2 |
Use RegNetY-16GF [47] as oracle model. | |||||||
MIRO + SWAD | 96.8±0.2 | 81.7±0.1 | 83.3±0.1 | 64.3±0.3 | 60.7±0.0 | 77.3 | |
GMDG + SWAD | 97.9±0.3 | 82.2±0.3 | 84.7±0.2 | 65.0±0.2 | 61.3±0.2 | 78.2 |
Results. Table 7 displays the results of non-ensemble algorithms and of ensemble algorithms that employ pre-trained models as oracle models. Our proposed objective demonstrates more substantial improvements when a higher-quality pre-trained oracle model is applied. When employing the ResNet-50 oracle, our approach yields average improvements of approximately 0.4% and 0.1% over MIRO without and with SWAD, respectively. In contrast, when RegNetY-16GF serves as the oracle, GMDG brings significant average improvements of 1.0% and 0.9% without and with SWAD, respectively. Remarkably, our approach also outperforms the SOTA method SIMPLE++, which relies on an ensemble of 283 pre-trained models as oracle models, whereas ours engages only a single pre-trained model. Overall, these results strongly support GMDG's effectiveness in classification tasks. See more results in Supplementary 11.
5.5 Ablation studies
To better compare our objective with previous objectives, we conduct a systematic ablation study on the classification task, since most previous objectives are only available for classification due to the lack of a mapping ψ. Experimental setup. In the ablation studies, we test varied term combinations (see Table 8) on the OfficeHome dataset using SWAG pre-training [47] and SWAD [6]. Every experiment is repeated over three trials, sharing the same hyper-parameter settings for evaluation. See Supplementary 10 for more details.
Results. Table 8 presents the ablation study results. The first column denotes the previous methods equivalent to each term combination. The main findings are as follows; see Supplementary 11.1 for further findings.
Used objectives | Art | Clipart | Product | Real | Avg. | Imp. |
---|---|---|---|---|---|---|
Without oracle prior (GReg1) | | | | | |
GAim2 (ERM) | 78.4±0.7 | 68.3±0.5 | 85.8±0.4 | 85.8±0.3 | 79.6±0.2 | 0.0 |
GAim2 + iAim1 (DANN) | 79.1±1.0 | 68.6±0.0 | 85.6±0.8 | 86.1±0.5 | 79.8±0.2 | +0.2 |
GAim2 + GAim1 (CDANN, CIDG) | 79.1±0.7 | 69.1±0.1 | 85.7±0.5 | 86.3±0.6 | 79.9±0.4 | +0.3 |
GAim2 +iReg2 (CORAL+) | 79.1±0.1 | 69.9±0.4 | 86.0±0.1 | 86.3±0.4 | 80.3±0.2 | +0.7 |
GAim2 + GReg2 | 79.2±0.1 | 69.9±1.4 | 86.1±0.5 | 86.1±0.1 | 80.3±0.3 | +0.7 |
GAim2 + GAim1 + GReg2 (MDA+) | 79.5±1.1 | 69.2±1.2 | 86.2±0.2 | 86.5±0.2 | 80.3±0.0 | +0.7 |
With oracle prior (GReg1) | | | | | |
GAim2 + GReg1 (MIRO, SIMPLE) | 83.2±0.6 | 72.6±1.1 | 89.9±0.5 | 90.2±0.1 | 84.0±0.2 | 0.0 |
GAim2 + GReg1 +iAim1 | 83.4±0.5 | 73.1±0.8 | 89.7±0.4 | 90.1±0.3 | 84.1±0.2 | +0.1 |
GAim2 + GReg1 + GAim1 | 83.7±0.3 | 74.0±0.6 | 90.1±0.3 | 90.3±0.2 | 84.5±0.2 | +0.4 |
GAim2 + GReg1 + iReg2 | 82.9±0.5 | 72.5±0.3 | 90.3±0.3 | 90.0±0.3 | 83.9±0.1 | -0.1 |
GAim2 + GReg1 + GReg2 | 83.4±0.2 | 72.3±0.2 | 90.1±0.3 | 90.1±0.3 | 84.0±0.2 | +0.0 |
GAim2 + GReg1 + GAim1 + GReg2 (GMDG) | 84.1±0.2 | 74.3±0.9 | 89.9±0.4 | 90.6±0.1 | 84.7±0.2 | +0.7 |
1). Previous methods that partially utilize our proposed objectives often yield suboptimal results. Note that iAim1 is the unconditional version of GAim1. By eliminating other factors, it can be seen that employing our proposed full objectives offers the most significant improvements, while previous objectives may lead to inferior results.
2). The effectiveness of using conditions. By conducting uniform implementation and testing, it can be observed that the use of conditions yields superior results compared to the unconditional approach. This observation aligns with Eq. 19, suggesting that minimizing the gap between conditional features across domains leads to improved generalization. The disparity in performance between CDANN and DANN might be attributed to differences in their implementation details.
3). Learning invariance is crucial, regardless of whether prior knowledge is integrated. Evidently, learning invariance facilitates improvement whether the prior is applied or not, as validated by the PUB derivation in Eq. 7. This contradicts MIRO's argument that achieving representations similar to a prior can replace the need for learning invariance.
4). Impacts of using the prior. The significant improvement owes to the use of a pre-trained oracle model preserving correlations between observations and targets, a concept validated by MIRO and SIMPLE. However, utilizing our full set of objectives further enhances this improvement by an additional 0.7%. Notably, suppressing invalid causality may not take effect when prior knowledge is used but cross-domain invariance is not enforced. We hypothesize that such invalid causality is inherently eliminated within a 'good' feature space obtained from the oracle, but may be reintroduced when we minimize the domain gap. Thus, using the full objective synergistically produces optimal results.
5). Constraining only the covariance shifts of features across domains (iReg2) does not guarantee better results when prior knowledge is available. We find that using the objectives of CORAL performs better than DANN, CDANN, and CIDG. The results suggest that considering the covariance shifts of features does lead to improvements, which we hypothesize are primarily driven by the covariance constraint itself. However, when a large pre-trained oracle model is provided, the performance actually degrades. This implies that using the oracle implicitly minimizes the covariance shifts of features across domains; under this scenario, the unexpected side effects of iReg2 hinder improvement, while its benefits are diminished by the prior knowledge. In contrast, GReg2 continues to yield improvements. This suggests that GMDG is more versatile and suitable for various situations.
6 Conclusion
In this paper, we propose a general objective, namely GMDG, by relaxing the static distribution assumption on the targets through a learnable mapping ψ. GMDG is applicable to diverse mDG tasks, including regression, segmentation, and classification. Empirically, we design a suite of losses to achieve the overall GMDG, adaptable across various frameworks. Extensive experiments validate the viability of our objective across applications where previous objectives may yield suboptimal results compared to ours. Both theoretical analyses and empirical results demonstrate the synergistic effect of the distinct terms in the proposed objective. For simplicity, we assume equal domain weights while minimizing GJSD, leaving imbalanced situations that require unequal domain weights as future scope.
Acknowledgments
The work was partially supported by the following: National Natural Science Foundation of China under No. 92370119, No. 62376113, and No. 62206225; Jiangsu Science and Technology Program (Natural Science Foundation of Jiangsu Province) under No. BE2020006-4; Natural Science Foundation of the Jiangsu Higher Education Institutions of China under No. 22KJB520039.
References
- Arjovsky et al. [2019] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
- Beery et al. [2018] Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In Proceedings of the European conference on computer vision (ECCV), pages 456–473, 2018.
- Blanchard et al. [2011] Gilles Blanchard, Gyemin Lee, and Clayton Scott. Generalizing from several related classification tasks to a new unlabeled sample. Advances in neural information processing systems, 24, 2011.
- Blanchard et al. [2021] Gilles Blanchard, Aniket Anand Deshmukh, Ürun Dogan, Gyemin Lee, and Clayton Scott. Domain generalization by marginal transfer learning. The Journal of Machine Learning Research, 22(1):46–100, 2021.
- Bui et al. [2021] Manh-Ha Bui, Toan Tran, Anh Tran, and Dinh Phung. Exploiting domain-specific features to enhance domain generalization. Advances in Neural Information Processing Systems, 34:21189–21201, 2021.
- Cha et al. [2021] Junbum Cha, Sanghyuk Chun, Kyungjae Lee, Han-Cheol Cho, Seunghyun Park, Yunsung Lee, and Sungrae Park. Swad: Domain generalization by seeking flat minima. Advances in Neural Information Processing Systems, 34:22405–22418, 2021.
- Chattopadhyay et al. [2020] Prithvijit Chattopadhyay, Yogesh Balaji, and Judy Hoffman. Learning to balance specificity and invariance for in and out of domain generalization. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IX 16, pages 301–318. Springer, 2020.
- Chen et al. [2018] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV), pages 801–818, 2018.
- Cho et al. [2022] Wonwoong Cho, Ziyu Gong, and David I Inouye. Cooperative distribution alignment via jsd upper bound. Advances in Neural Information Processing Systems, 35:21101–21112, 2022.
- Choi et al. [2021] Sungha Choi, Sanghun Jung, Huiwon Yun, Joanne T Kim, Seungryong Kim, and Jaegul Choo. Robustnet: Improving domain generalization in urban-scene segmentation via instance selective whitening. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11580–11590, 2021.
- Cordts et al. [2016] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3213–3223, 2016.
- Fang et al. [2013] Chen Fang, Ye Xu, and Daniel N Rockmore. Unbiased metric learning: On the utilization of multiple datasets and web images for softening bias. In Proceedings of the IEEE International Conference on Computer Vision, pages 1657–1664, 2013.
- Ganin et al. [2016] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096–2030, 2016.
- Gulrajani and Lopez-Paz [2020] Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. arXiv preprint arXiv:2007.01434, 2020.
- He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
- Hu et al. [2020] Shoubo Hu, Kun Zhang, Zhitang Chen, and Laiwan Chan. Domain generalization via multidomain discriminant analysis. In Uncertainty in Artificial Intelligence, pages 292–302. PMLR, 2020.
- Huang et al. [2020] Zeyi Huang, Haohan Wang, Eric P Xing, and Dong Huang. Self-challenging improves cross-domain generalization. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16, pages 124–140. Springer, 2020.
- Huang et al. [2022] Zhuo Huang, Xiaobo Xia, Li Shen, Bo Han, Mingming Gong, Chen Gong, and Tongliang Liu. Harnessing out-of-distribution examples via augmenting content and style. arXiv preprint arXiv:2207.03162, 2022.
- Junbum et al. [2022] Cha Junbum, Lee Kyungjae, Park Sungrae, and Chun Sanghyuk. Domain generalization by mutual-information regularization with pre-trained models. European Conference on Computer Vision (ECCV), 2022.
- Kang et al. [2022] Juwon Kang, Sohyun Lee, Namyup Kim, and Suha Kwak. Style neophile: Constantly seeking novel styles for domain generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7130–7140, 2022.
- Kay [1993] Steven M Kay. Fundamentals of statistical signal processing: estimation theory. Prentice-Hall, Inc., 1993.
- Kim et al. [2021] Daehee Kim, Youngjun Yoo, Seunghyun Park, Jinkyu Kim, and Jaekoo Lee. Selfreg: Self-supervised contrastive regularization for domain generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9619–9628, 2021.
- Krueger et al. [2021] David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (rex). In International Conference on Machine Learning, pages 5815–5826. PMLR, 2021.
- Lee et al. [2022] Suhyeon Lee, Hongje Seong, Seongwon Lee, and Euntai Kim. Wildnet: Learning domain generalized semantic segmentation from the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9936–9946, 2022.
- Li et al. [2017] Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In Proceedings of the IEEE international conference on computer vision, pages 5542–5550, 2017.
- Li et al. [2018a] Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy Hospedales. Learning to generalize: Meta-learning for domain generalization. In Proceedings of the AAAI conference on artificial intelligence, 2018a.
- Li et al. [2018b] Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C Kot. Domain generalization with adversarial feature learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5400–5409, 2018b.
- Li et al. [2018c] Ya Li, Mingming Gong, Xinmei Tian, Tongliang Liu, and Dacheng Tao. Domain generalization via conditional invariant representations. In Proceedings of the AAAI conference on artificial intelligence, 2018c.
- Li et al. [2018d] Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, and Dacheng Tao. Deep domain generalization via conditional invariant adversarial networks. In Proceedings of the European conference on computer vision (ECCV), pages 624–639, 2018d.
- Li et al. [2022] Ziyue Li, Kan Ren, Xinyang Jiang, Yifei Shen, Haipeng Zhang, and Dongsheng Li. Simple: Specialized model-sample matching for domain generalization. In The Eleventh International Conference on Learning Representations, 2022.
- Lin [1991] Jianhua Lin. Divergence measures based on the shannon entropy. IEEE Transactions on Information theory, 37(1):145–151, 1991.
- Liu et al. [2023] Ce Liu, Suryansh Kumar, Shuhang Gu, Radu Timofte, and Luc Van Gool. Va-depthnet: A variational approach to single image depth prediction. arXiv preprint arXiv:2302.06556, 2023.
- Liu et al. [2021] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 10012–10022, 2021.
- Nam et al. [2021] Hyeonseob Nam, HyunJae Lee, Jongchan Park, Wonjun Yoon, and Donggeun Yoo. Reducing domain gap by reducing style bias. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8690–8699, 2021.
- Neuhold et al. [2017] Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulo, and Peter Kontschieder. The mapillary vistas dataset for semantic understanding of street scenes. In Proceedings of the IEEE international conference on computer vision, pages 4990–4999, 2017.
- Pan et al. [2018] Xingang Pan, Ping Luo, Jianping Shi, and Xiaoou Tang. Two at once: Enhancing learning and generalization capacities via ibn-net. In Proceedings of the European Conference on Computer Vision (ECCV), pages 464–479, 2018.
- Peng et al. [2022] Duo Peng, Yinjie Lei, Munawar Hayat, Yulan Guo, and Wen Li. Semantic-aware domain generalized segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2594–2605, 2022.
- Peng et al. [2019] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1406–1415, 2019.
- Perlaza et al. [2022] Samir M Perlaza, Gaetan Bisson, Iñaki Esnaola, Alain Jean-Marie, and Stefano Rini. Empirical risk minimization with relative entropy regularization: Optimality and sensitivity analysis. In 2022 IEEE International Symposium on Information Theory (ISIT), pages 684–689. IEEE, 2022.
- Rame et al. [2022] Alexandre Rame, Matthieu Kirchmeyer, Thibaud Rahier, Alain Rakotomamonjy, Patrick Gallinari, and Matthieu Cord. Diverse weight averaging for out-of-distribution generalization. Advances in Neural Information Processing Systems, 35:10821–10836, 2022.
- Richter et al. [2016] Stephan R Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. Playing for data: Ground truth from computer games. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, pages 102–118. Springer, 2016.
- Ros et al. [2016] German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, and Antonio M Lopez. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3234–3243, 2016.
- Sagawa et al. [2019] Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint arXiv:1911.08731, 2019.
- Shi et al. [2021] Yuge Shi, Jeffrey Seely, Philip HS Torr, N Siddharth, Awni Hannun, Nicolas Usunier, and Gabriel Synnaeve. Gradient matching for domain generalization. arXiv preprint arXiv:2104.09937, 2021.
- Shimodaira [2000] Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 90(2):227–244, 2000.
- Silberman et al. [2012] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. In Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part V 12, pages 746–760. Springer, 2012.
- Singh et al. [2022] Mannat Singh, Laura Gustafson, Aaron Adcock, Vinicius de Freitas Reis, Bugra Gedik, Raj Prateek Kosaraju, Dhruv Mahajan, Ross Girshick, Piotr Dollár, and Laurens Van Der Maaten. Revisiting weakly supervised pre-training of visual perception models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 804–814, 2022.
- Sun and Saenko [2016] Baochen Sun and Kate Saenko. Deep coral: Correlation alignment for deep domain adaptation. In Computer Vision–ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part III 14, pages 443–450. Springer, 2016.
- Vapnik [1998] Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, 1998.
- Venkateswara et al. [2017] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5018–5027, 2017.
- Wang et al. [2022] Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, and Philip Yu. Generalizing to unseen domains: A survey on domain generalization. IEEE Transactions on Knowledge and Data Engineering, 2022.
- Yang et al. [2020] Mengyue Yang, Furui Liu, Zhitang Chen, Xinwei Shen, Jianye Hao, and Jun Wang. Causalvae: Structured causal disentanglement in variational autoencoder. arXiv preprint arXiv:2004.08697, 2020.
- Yu et al. [2020] Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2636–2645, 2020.
- Zhang et al. [2013] Kun Zhang, Bernhard Schölkopf, Krikamol Muandet, and Zhikun Wang. Domain adaptation under target and conditional shift. In International conference on machine learning, pages 819–827. PMLR, 2013.
- Zhang et al. [2015] Kun Zhang, Mingming Gong, and Bernhard Schölkopf. Multi-source domain adaptation: A causal view. In Proceedings of the AAAI Conference on Artificial Intelligence, 2015.
- Zhang et al. [2021] Marvin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, and Chelsea Finn. Adaptive risk minimization: Learning to adapt to domain shift. Advances in Neural Information Processing Systems, 34:23664–23678, 2021.
- Zhou et al. [2021] Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang. Domain generalization with mixstyle. arXiv preprint arXiv:2104.02008, 2021.
- Zhou et al. [2022] Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
Supplementary Material
7 More mathematical details of our method
7.1 Derivation details of PUB
Details for Eq. 5.
We denote the weighted mixture of the per-domain distributions. Therefore, for GJSD, we have:
(20) |
Therefore, the minimization of GJSD can be written as follows:
|
(21) |
Details for Eq. 6. Taking the prior distribution into account, similar to [9], we have the upper bound for GJSD as:
(22) |
For the standard situation where the weights are evenly distributed, we further have:
(23) |
Details for Eq. 7. The above bound can be further reformulated as:
(24) |
Derivation details of Eq. 8.
(25) |
Derivation details of Eq. 9. Following Eq. 8, we want to maintain the predictive dependence while suppressing the invalid reverse dependence, which can be simplified as:
(26) |
where the first term is already in GAim2, thus GReg2 should deal with the second term, which is:
(27) |
Also, since this effect is alleviated through the mappings, the above equation can be approximated as
(28) |
which is GReg2.
Derivation details of Eq. 10. For a Gaussian distribution of dimension k, its entropy is:
H(N(μ, Σ)) = (k/2) ln(2πe) + (1/2) ln det(Σ)    (29)
Then Eq. 10 equals:
(30) |
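A quick numerical sanity check of the Gaussian entropy expression in Eq. 29 against SciPy; the dimension and covariance below are arbitrary.

```python
import numpy as np
from scipy.stats import multivariate_normal

k = 3
rng = np.random.default_rng(0)
A = rng.normal(size=(k, k))
sigma = A @ A.T + np.eye(k)                  # a random SPD covariance matrix

closed_form = 0.5 * k * np.log(2 * np.pi * np.e) + 0.5 * np.log(np.linalg.det(sigma))
print(closed_form)
print(multivariate_normal(mean=np.zeros(k), cov=sigma).entropy())
# The two values agree: H = (k/2) ln(2*pi*e) + (1/2) ln|Sigma|.
```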
Empirical risk. The empirical risk introduced by the whole model is determined by a convex loss function. Following [39], the empirical risk is:
(31) | ||||
Proof of using ψ vs. not using ψ. Using Jensen's inequality, and since ψ(Y) contains the same amount of useful information as Y, we have:
(32) |
Therefore, we have
(33) |
Therefore, for the risk of the model using ψ:
(34) |
and for the risk of the model without ψ:
(35) |
where we have:
(36) |
Proof that incorporating conditions leads to lower generalization risk when learning invariant representations. For the risk of the model trained with conditions, we have:
(37) |
For the model trained without using conditions, we have:
(38) | ||||
Due to the inequality:
(39) | ||||
(40) | ||||
(41) |
we have
(42) |
8 Objective derivation details of many previous methods.
This section shows how we uniformly simplify the objectives of previous methods.
ERM [14]: The basic method. The basic method does not focus on minimizing GJSD; therefore, there are no terms for Aim 1. For Aim 2, it directly minimizes the empirical prediction risk.
DANN [13]: Minimize feature divergences of source domains. DANN [13] minimizes feature divergences of source domains adversarially without considering conditions. Therefore, its empirical objective for Aim 1 is
(43) |
For Aim 2, it directly minimizes the empirical prediction risk.
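For reference, DANN's adversarial feature alignment is commonly implemented with a gradient-reversal layer in front of a domain classifier; a minimal PyTorch sketch follows, with illustrative layer sizes and reversal strength.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

# A domain classifier applied to (gradient-reversed) features: the feature
# extractor is trained to fool it, pushing per-domain feature distributions together.
domain_classifier = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
features = torch.randn(8, 128, requires_grad=True)       # placeholder features
domain_labels = torch.randint(0, 3, (8,))
loss = nn.functional.cross_entropy(domain_classifier(grad_reverse(features)), domain_labels)
loss.backward()
```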
CORAL [48]: Minimize the distance between the second-order statistics of source domains. Since CORAL [48] only minimizes the second-order distance between source feature distributions, its objective can be summarized as:
(44) |
By grouping it, CORAL [48] has the second-order alignment term for Aim 1 and the empirical prediction risk for Aim 2.
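The second-order alignment above is commonly implemented as the Deep CORAL loss, i.e., the squared Frobenius distance between per-domain feature covariances with a 1/(4d^2) scaling; a short PyTorch sketch for two domains:

```python
import torch

def coral_loss(fa, fb):
    """Deep CORAL: squared Frobenius distance between feature covariances.

    fa, fb: (n_a, d) and (n_b, d) feature batches from two source domains.
    """
    d = fa.shape[1]

    def cov(f):
        f = f - f.mean(dim=0, keepdim=True)
        return f.T @ f / (f.shape[0] - 1)

    return ((cov(fa) - cov(fb)) ** 2).sum() / (4 * d * d)
```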
CIDG [28]: Minimizing the conditioned domain gap. CIDG [28] tries to learn conditional domain invariant features:
(45) |
For Aim 2, it directly minimizes the empirical prediction risk.
MDA [16]: Minimizing the domain gap relative to the decision gap. Some previous works, such as MDA [16], follow the hypothesis that generalization is guaranteed when the decision gap is larger than the domain gap. Therefore, instead of directly minimizing the domain gap, MDA minimizes the ratio between the domain gap and the decision gap. The overall objective of MDA can be summarized as:
(46) |
Since the entropy is non-negative and the constants can be omitted, Eq. 46 is equivalent to:
(47) |
By grouping Eq. 47, we have that for Aim 1 it minimizes the conditional domain gap, and for Aim 2 it minimizes the empirical prediction risk.
MIRO [19], SIMPLE [30]: Using pre-trained models as the oracle. One feasible way to obtain a prior is to adopt pre-trained oracle models, as done in MIRO [19] and SIMPLE [30]. Note that the pre-trained models are exposed to additional data besides those provided. Therefore, for Aim 1, they have:
(48) |
Differently, MIRO uses only one pre-trained model as its oracle; meanwhile, SIMPLE combines multiple pre-trained models as the oracle through a weighted ensemble. For Aim 2, they directly minimize the empirical prediction risk.
RobustNet [10]. RobustNet employs the instance selective whitening loss, which disentangles domain-specific and domain-invariant properties from the higher-order statistics of the feature representation and selectively suppresses the domain-specific ones. Therefore, it implicitly whitens domain-specific features in the representation. Thus, its objective can be simplified as:
(49) |
9 Aligning notations between paper and supplementary materials
9.1 More details about Table 1
To better understand, we simplify some notations in Table 1. We present the simplified notations and their corresponding origins in Table 9.
Learning domain invariant representations | ||
Aim1: Learning domain invariance | Reg1: Integrating prior | |
DANN | None | |
CDANN, CIDG, MDA | None | |
Ours | ||
Maximizing A Posterior between representations and targets | ||
Aim2: Maximizing A Posterior (MAP) | Reg2: Suppressing invalid causality | |
CORAL | ||
9.2 More details about Table 4
For simplification, we uniformly simplify the formulations of the terms from their derivations. The simplified forms in Table 4 and their original forms can be seen in Table 10. Note that iAim1 is from CDANN [29], CIDG [28], and MDA [16], and iReg2 is from CORAL [48].
GAim2 | ||
---|---|---|
GReg1 | ||
iAim1 | ||
GAim1 | ||
iReg2 | ||
GReg2 | ||
GAim2 | ||
GReg1 | ||
iAim1 | ||
GAim1 | ||
iReg2 | ||
GReg2 |
10 Experimental details and parameters
We have conducted extensive experiments, including toy regression experiments on synthetic data (multiple training-objective settings across several domain settings), regression experiments on monocular depth estimation, segmentation experiments (trained under several domain settings and verified on others), classification experiments on five multi-domain datasets, and ablation studies. We believe that the consistent improvements yielded by GMDG across these experiments validate its superiority.
Experimental details can be found in the following subsections. Note that the same setting is used for all experiments.
10.1 Toy experiments: Synthetic regression experimental details.
We explore the efficacy of the proposed mapping through toy regression experiments on synthetic data.
Datasets. The latent features of all three domains are perturbed with distributional shifts and used as the first group of raw features (with a subscript indicating the domain they belong to). Then, either domain-conditioned transformations of the shifted features or pure random noise is used as the second group of raw features. The constructed raw features therefore contain components that depend on the domain. Details of each synthetic dataset are exhibited in Table 11. We generate samples for the training set and separate samples for the validation and testing sets.
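One possible way to synthesize such multi-domain toy regression data is sketched below; the exact shifts, transformations, and target function are our assumptions and are not taken from Table 11.

```python
# Sketch of multi-domain synthetic regression data: targets depend only on the
# latent variable, while raw features mix a shifted latent copy with a
# domain-conditioned transformation (or noise).
import numpy as np

rng = np.random.default_rng(0)


def make_domain(n, shift, transform):
    """Return raw features [x1, x2] and targets y for one synthetic domain."""
    latent = rng.normal(size=(n, 1))
    y = 2.0 * latent[:, 0] + 0.1 * rng.normal(size=n)  # target depends on latent only
    x1 = latent + shift                                  # first group: shifted latent
    x2 = transform(latent)                               # second group: domain-conditioned
    return np.concatenate([x1, x2], axis=1), y


# Example: three training domains with different shifts / transformations.
domains = [
    make_domain(1000, shift=0.0, transform=lambda z: 1.5 * z + 0.3),
    make_domain(1000, shift=0.5, transform=lambda z: -0.7 * z + 1.0),
    make_domain(1000, shift=1.0, transform=lambda z: rng.normal(size=z.shape)),
]
```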
Parameter settings. All experiments are conducted with the same hyper-parameter setting.
Experimental settings. For the model, we use a three-layer MLP followed by one linear layer for regression prediction, with Mean Squared Error (MSE) as the loss. We use the model that performs best on the validation set for testing.
Metric. We use the MSE between the predictions and the ground-truth targets of the testing set as the evaluation metric.
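A minimal sketch of the toy regression model described above (a three-layer MLP backbone with a linear prediction head, trained with MSE); the hidden sizes, optimizer, and learning rate are assumptions.

```python
# Toy regression model sketch: 3-layer MLP + linear head, MSE loss.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),   # three-layer MLP backbone
    nn.Linear(64, 1),               # linear regression head
)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)


def train_step(x, y):
    """One gradient step on a batch (x: (n, 2), y: (n,))."""
    optimizer.zero_grad()
    loss = criterion(model(x).squeeze(-1), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```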
Dataset | First feature group (shifted latent features) | Second feature group (transformed features or noise)
---|---|---
Data 1 | Without distribution shift | With affine transformations
Data 2 | With distribution shift | With affine transformations
Data 3 | Without distribution shift | With squared, cubed transformations or noises
Data 4 | With distribution shift | With squared, cubed transformations or noises
Use ResNet-50 without SWAD | lr mult | lr | dropout | WD | TR | CF | |||
---|---|---|---|---|---|---|---|---|---|
TerraIncognita | 0.1 | 0.1 | 0.2 | 12.5 | - | - | - | - | - |
OfficeHome | 0.1 | 0.001 | 0.1 | 20.0 | 3e-5 | 0.1 | 1e-6 | - | - |
VLCS | 0.01 | 0.001 | 0.1 | 10.0 | 1e-5 | - | 1e-6 | 0.2 | 50 |
PACS | 0.01 | 0.01 | 0.01 | 25.0 | - | - | - | - | - |
DomainNet | 0.1 | 0.1 | 0.1 | 7.5 | - | - | - | - | 500 |
Use ResNet-50 with SWAD | lr mult | CF | |||
---|---|---|---|---|---|
TerraIncognita | 0.1 | 0.001 | 0.01 | 10.0 | - |
OfficeHome | 0.1 | 0.1 | 0.3 | 10.0 | - |
VLCS | 0.01 | 0.001 | 0.1 | 10.0 | 50 |
PACS | 0.01 | 0.001 | 0.1 | 20.0 | - |
DomainNet | 0.1 | 0.1 | 0.1 | 7.5 | 500 |
Use RegNetY-16GF with and without SWAD | lr mult | CF | |||
---|---|---|---|---|---|
TerraIncognita | 0.01 | 0.01 | 0.01 | 2.5 | - |
OfficeHome | 0.01 | 0.1 | 0.1 | 0.1 | - |
VLCS | 0.01 | 0.01 | 0.1 | 2.0 | 50 |
PACS | 0.01 | 0.1 | 0.1 | 0.1 | - |
DomainNet | 0.1 | 0.1 | 0.1 | 7.5 | 500 |
Ablation studies on OfficeHome | lr mult | use iAim1 | use iReg2 | |||
---|---|---|---|---|---|---|
Base (ERM) | 0.0 | 0.0 | 0.0 | 0.1 | False | False |
Base +iAim1 (DANN) | 0.0 | 0.0 | 0.1 | 0.1 | True | False |
Base + GAim1 (CDANN, CIDG) | 0.0 | 0.0 | 0.1 | 0.1 | False | False |
Base +iReg2 (CORAL+) | 0.0 | 0.1 | 0.0 | 0.1 | False | True |
Base + GReg2 | 0.0 | 0.1 | 0.0 | 0.1 | False | False |
Base + GAim1 + GReg2 (MDA+) | 0.0 | 0.1 | 0.1 | 0.1 | False | False |
Base + GReg1 (MIRO, SIMPLE) | 0.01 | 0.0 | 0.0 | 0.1 | False | False |
Base + GReg1 +iAim1 | 0.01 | 0.0 | 0.1 | 0.1 | True | False
Base + GReg1 + GAim1 | 0.01 | 0.0 | 0.1 | 0.1 | False | False |
Base + GReg1 +iReg2 | 0.01 | 0.1 | 0.0 | 0.1 | False | True
Base + GReg1 + GReg2 | 0.01 | 0.1 | 0.0 | 0.1 | False | False |
Base + GReg1 + GAim1 + GReg2 (Ours) | 0.01 | 0.1 | 0.1 | 0.1 | False | False |
10.2 Regression experiments: Monocular depth estimation details.
We explore the efficacy of GMDG on regression by conducting monocular depth estimation experiments with the NYU Depth V2 dataset [46].
Datasets. NYU Depth V2 contains indoor images at a resolution of 640×480, with depth values ranging from 0 to 10 meters. We adopt the densely labeled pairs for training and testing.
Multi-domain construction. To construct multiple domains that fit the problem settings, we split the NYU Depth V2 dataset into four categories as four domains:
• School: study room, study, student lounge, printer room, computer lab, classroom.
• Office: reception room, office kitchen, office, nyu office, conference room.
• Home: playroom, living room, laundry room, kitchen, indoor balcony, home storage, home office, foyer, dining room, dinette, bookstore, bedroom, bathroom, basement.
• Commercial: furniture store, exercise room, cafe.
After filtering out data samples that cannot be used, each of the four domains retains its own set of data pairs for training.
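A sketch of how samples could be routed into the four constructed domains based on their NYU Depth V2 scene names; the grouping follows the lists above, while the helper name and the assumed `scene_index` suffix format are ours.

```python
# Map NYU Depth V2 scene names (e.g. "kitchen_0003") to the four constructed domains.
SCENE_TO_DOMAIN = {
    "School": ["study_room", "study", "student_lounge", "printer_room",
               "computer_lab", "classroom"],
    "Office": ["reception_room", "office_kitchen", "office", "nyu_office",
               "conference_room"],
    "Home": ["playroom", "living_room", "laundry_room", "kitchen",
             "indoor_balcony", "home_storage", "home_office", "foyer",
             "dining_room", "dinette", "bookstore", "bedroom", "bathroom",
             "basement"],
    "Commercial": ["furniture_store", "exercise_room", "cafe"],
}


def domain_of(scene_name: str) -> str:
    """Return the constructed domain for one scene directory name."""
    base = scene_name.rsplit("_", 1)[0]          # strip the trailing scene index
    for domain, scenes in SCENE_TO_DOMAIN.items():
        if base in scenes:
            return domain
    raise KeyError(f"Unassigned scene: {scene_name}")
```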
Parameter settings. We follow all hyper-parameter settings of VA-DepthNet. Note that the backbone is trained as in VA-DepthNet but without the Variational Loss it proposes.
Experimental settings. We use the final saved checkpoint for the leave-one-out cross-validation.
Metrics. Please see metric details in VA-DepthNet [32].
10.3 Segmentation experimental details.
We follow the experimental settings of RobustNet for segmentation experiments.
Datasets. There are two groups of datasets: synthetic and real-world. (1) Synthetic datasets: GTAV [41] is a large-scale dataset containing 24,966 driving-scene images generated from the Grand Theft Auto V game engine. SYNTHIA [42], composed of photo-realistic synthetic images, has 9,400 samples with a resolution of 960×720. (2) Real-world datasets: Cityscapes [11] is a large-scale dataset containing high-resolution urban scene images; it provides 3,450 finely annotated images and 20,000 coarsely annotated images collected from 50 different cities, primarily in Germany. Only the finely annotated set is adopted for our training and validation. BDD-100K [53] is another real-world dataset with a resolution of 1280×720; it provides diverse urban driving-scene images collected from various locations in the US. We use its 7,000 training and 1,000 validation images from the semantic segmentation task. Mapillary is also a real-world dataset that contains worldwide street-view scenes with 25,000 high-resolution images.
Parameter settings. Specifically, we use all of RobustNet's hyper-parameters.
Metric. We employ mean Intersection over Union (mIoU) as the measurement for the segmentation task.
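For reference, a minimal sketch of an mIoU computation (our own illustrative implementation, assuming integer label maps and an ignore index for unlabeled pixels):

```python
# Mean Intersection-over-Union over all classes present in either map.
import numpy as np


def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int,
             ignore_index: int = 255) -> float:
    valid = gt != ignore_index
    pred, gt = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                     # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```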
10.4 Classification experimental details.
Datasets. We use PACS (4 domains, 9,991 samples, 7 classes) [25], VLCS (4 domains, 10,729 samples, 5 classes) [12], OfficeHome (4 domains, 15,588 samples, 65 classes) [50], TerraIncognita (TerraInc, 4 domains, 24,788 samples, 10 classes) [2], and DomainNet (6 domains, 586,575 samples, 345 classes) [38].
Parameter settings. We list the hyper-parameters in Table 12 to reproduce our results.
Metric. We report the classification accuracy on each unseen domain and the average over all domains.
10.5 Ablation studies experimental details.
Parameter settings. We run each experiment in three trials with different random seeds. Full settings are reported in Table 13.
Experimental settings. We use SWAD for all ablation studies to alleviate the effect of hyper-parameter choices. All ablation studies share the same hyper-parameters but add different combinations of terms. CORAL's [48] original objective minimizes the discrepancy between the learned feature covariances of the source and target domains, which requires access to target data and considers only second-order statistics. For a fair comparison, we adapt it to minimize the feature covariance discrepancies across the seen domains, as sketched below.
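A sketch of this adapted covariance-alignment term: the second-order discrepancy is averaged over all pairs of seen domains rather than a single source-target pair (equal pair weighting is our assumption).

```python
# Covariance alignment averaged over all pairs of seen domains.
from itertools import combinations
import torch


def covariance(x: torch.Tensor) -> torch.Tensor:
    x = x - x.mean(dim=0, keepdim=True)
    return x.t() @ x / max(x.size(0) - 1, 1)


def seen_domain_coral(features_per_domain):
    """features_per_domain: list of (n_i, d) feature tensors, one per seen domain."""
    covs = [covariance(f) for f in features_per_domain]
    pairs = list(combinations(range(len(covs)), 2))
    d = features_per_domain[0].size(1)
    total = sum(((covs[i] - covs[j]) ** 2).sum() for i, j in pairs)
    return total / (4 * d * d * max(len(pairs), 1))
```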
11 More results
Visualization of toy experiments: Please see the visualizations of the toy experiments in Figure 4.
Regression results: Monocular depth estimation.
Visualizations of the regression results for each unseen domain are displayed in Figures 6 and 7.
Visualizations of the regression results on unseen domains for models trained with different objective settings are exhibited in Figures 8 and 9.
Segmentation results. The segmentation results for unseen samples are displayed in Figure 10.
Classification results. We show the per-domain results of the classification experiments in Tables 15, 16, 17, 18, and 19.
11.1 Other findings and Analysis
What makes a better oracle. As demonstrated in our proposed upper bound (PUB), the oracle model plays a crucial role by anchoring a space in which the relationship between features and targets is preserved. Ideally, an oracle that provides general representations for all seen and unseen domains leads to the best results, a finding supported by MIRO and SIMPLE. However, even though SIMPLE++ combines 283 pre-trained models, the 'perfect' oracle remains unattained. Therefore, this paper primarily discusses how our proposed objectives improve model performance when a fixed oracle model is provided.
Comparison with MDA: Minimizing the domain gap relative to the decision gap. MDA [16], guided by the hypothesis of "guaranteed generalization only when the decision gap exceeds the domain gap", aims to minimize the ratio between the domain gap and the decision gap. This approach facilitates learning domain-independent conditional features, enhancing class separability across domains. As Table 1 illustrates, MDA's Reg2 objective can also be interpreted as suppressing invalid causality, aligning with our approach. However, MDA's implementation requires manually selecting same-class samples within each domain, without using the proposed target mapping or GReg2. Our method further relaxes MDA's assumption, extending the objective and making it applicable to tasks beyond classification, such as segmentation.
Cutting off the valid causal direction may lead to model collapse. We tried to reversely suppress the valid causal direction instead of the invalid one for monocular depth estimation with VA-DepthNet, and the model collapsed.
Suppressing invalid causality: Why this design: Our GMDG introduces a mapping for the targets to relax the static-distribution assumption, corroborating more general and practical scenarios. Our empirical findings, as shown in Fig. 5, reveal that introducing this mapping without any constraints may not guarantee a clear decision margin for classification. Upon examining our objective in Eq. 8 of the main manuscript, we hypothesize that this effect results from the mapped targets 'causing' the learned features, which we term 'invalid causality'. Thus, we design a term to suppress such invalid causality. This term is derived from the prediction perspective: the targets should be predicted only from the features, and hence the features should not be caused by the targets. Consider the scenario in which the targets and their mapped representations are absent during prediction - the hypothesized causality from targets to features would disrupt the causal chain, resulting in an 'incomplete' feature representation and thus prediction degradation. Hence, it is critical to suppress such causality, which may occur during joint training. Notably, the suppression is not symmetric and promotes the causality from features to targets. Intuition: Intuitively, GReg2 further 'erases' the redundant information in the features that may be caused by the targets, which refines the latent space and yields better invariant latent features for prediction (i.e., a larger decision margin for the unseen domain, as highlighted in Fig. 5).
GMDG's efficiency: We analyze the efficiency of GMDG in Table 14. Though theoretically superior in generality, GMDG may increase computational costs during training due to the additional loss functions and VAE encoders. However, these auxiliary components are discarded at inference time, so inference efficiency remains unaffected. We believe the model is amenable to efficiency enhancements through careful design, and such pursuits remain a promising avenue for future work.
Metric | Model | Classification (Training) | Classification (Inference) | Depth estimation (Training) | Depth estimation (Inference) | Segmentation (Training) | Segmentation (Inference)
---|---|---|---|---|---|---|---
FLOPs (G) | Baseline | 99.16 | 49.58 | 584.44 | 573.78 | 209.97 | 167.34
FLOPs (G) | With GMDG | 124.00 | 49.58 | 1543.90 | 573.78 | 449.83 | 167.34
Parameters (M) | Baseline | 2.94 | 1.47 | 64.42 | 64.27 | 23.13 | 22.54
Parameters (M) | With GMDG | 4.93 | 1.47 | 123.76 | 64.27 | 82.48 | 22.54
Applicability, constraints, and limitations: GMDG is specifically designed for mDG with accessible targets, in which the model is trained on multiple seen domains and tested on one unseen domain. This task essentially requires learning the invariance across multiple domains for prediction. When used in single-domain generalization or in cases involving novel classes in the unseen domain, GMDG may not be directly applicable and requires further investigation. Meanwhile, as a general objective, GMDG involves additional modules and losses that may incur extra computational costs during training. We have discussed these aspects in the main paper and leave them as future work.
TerraIncognita | Location 100 | Location 38 | Location 43 | Location 46 | Avg. |
---|---|---|---|---|---|
ERM [14] | 54.3 | 42.5 | 55.6 | 38.8 | 47.8 |
MIRO [19] (use ResNet-50) | - | - | - | - | 50.4 |
GMDG (use ResNet-50) | 60.9±5.34 | 47.3±3.42 | 55.2±1.06 | 41.0±2.93 | 51.1±0.91 |
ERM + SWAD [6] | 55.4 | 44.9 | 59.7 | 39.9 | 50.0 |
DIWA [40] | 57.2 | 50.1 | 60.3 | 39.8 | 51.9 |
MIRO [19] + SWAD [6] (use ResNet-50) | - | - | - | - | 52.9 |
GMDG + SWAD (use ResNet-50) | 61.2±1.4 | 48.4±1.6 | 60.0±0.4 | 42.5±1.1 | 53.0±0.7 |
MIRO [19] (use RegNetY-16GF) | - | - | - | - | 58.9 |
GMDG (use RegNetY-16GF) | 73.3±3.3 | 54.7±1.4 | 67.1±0.3 | 48.6±6.5 | 60.7±1.8 |
MIRO [19] + SWAD [6] (use RegNetY-16GF) | - | - | - | - | 64.3 |
GMDG + SWAD (use RegNetY-16GF) | 74.3±1.5 | 59.2±1.2 | 70.6±1.1 | 56.0±0.8 | 65.0±0.2 |
OfficeHome | art | clipart | product | real | Avg. |
---|---|---|---|---|---|
ERM [14] | 63.1 | 51.9 | 77.2 | 78.1 | 67.6 |
MIRO [19] (use ResNet-50) | - | - | - | - | 70.5±0.4 |
GMDG (use ResNet-50) | 68.9±0.3 | 56.2±1.7 | 79.9±0.6 | 82.0±0.4 | 70.7±0.2 |
ERM + SWAD [6] | 66.1 | 57.7 | 78.4 | 80.2 | 70.6 |
DIWA [40] | 69.2 | 59 | 81.7 | 82.2 | 72.8 |
MIRO [19] + SWAD [6] (use ResNet-50) | - | - | - | - | 72.4±0.1 |
GMDG + SWAD (use ResNet-50) | 68.9±0.6 | 58.2±0.6 | 80.4±0.3 | 82.6±0.4 | 72.5±0.2 |
MIRO [19] (use RegNetY-16GF) | - | - | - | - | 80.4±0.2 |
GMDG (use RegNetY-16GF) | 79.7±1.6 | 67.7±1.8 | 87.8±0.8 | 87.9±0.7 | 80.8±0.6 |
MIRO [19] + SWAD [6] (use RegNetY-16GF) | - | - | - | - | 83.3±0.1 |
GMDG + SWAD (use RegNetY-16GF) | 84.1±0.2 | 74.3±0.9 | 89.9±0.4 | 90.6±0.1 | 84.7±0.2 |
VLCS | caltech101 | labelme | sun09 | voc2007 | Avg. |
---|---|---|---|---|---|
ERM [14] | 97.7 | 64.3 | 73.4 | 74.6 | 77.3 |
MIRO [19] (use ResNet-50) | - | - | - | - | 79.0±0.0 |
GMDG (use ResNet-50) | 98.3±0.4 | 65.9±1 | 73.4±0.8 | 79.3±1.3 | 79.2±0.3 |
ERM + SWAD [6] | 98.8 | 63.3 | 75.3 | 79.2 | 79.1 |
DIWA [40] | 98.9 | 62.4 | 73.9 | 78.9 | 78.6 |
MIRO [19] + SWAD [6](use ResNet-50) | - | - | - | - | 79.6±0.2 |
GMDG + SWAD (use ResNet-50) | 98.9±0.4 | 63.6±0.2 | 76.4±0.5 | 79.5±0.6 | 79.6±0.1 |
MIRO [19] (use RegNetY-16GF) | - | - | - | - | 79.9±0.6 |
GMDG (use RegNetY-16GF) | 97.9±1.3 | 66.8±2.1 | 80.8±1 | 83.9±1.8 | 82.4±0.6 |
MIRO [19] + SWAD [6] (use RegNetY-16GF) | - | - | - | - | 81.7±0.1 |
GMDG + SWAD (use RegNetY-16GF) | 98.4±0.1 | 65.5±1.4 | 79.9±0.4 | 84.9±0.9 | 82.2±0.3 |
PACS | art_painting | cartoon | photo | sketch | Avg. |
---|---|---|---|---|---|
ERM [14] | 84.7 | 80.8 | 97.2 | 79.3 | 84.2 |
MIRO [19] (use ResNet-50) | - | - | - | - | 85.4±0.4 |
GMDG (use ResNet-50) | 84.7±1.0 | 81.7±2.4 | 97.5±0.4 | 80.5±1.8 | 85.6±0.3 |
ERM + SWAD [6] | 89.3 | 83.4 | 97.3 | 82.5 | 88.1 |
DIWA [40] | 90.6 | 83.4 | 98.2 | 83.8 | 89.0 |
MIRO [19] + SWAD [6] (use ResNet-50) | - | - | - | - | 88.4±0.1 |
GMDG + SWAD (use ResNet-50) | 90.1±0.6 | 83.9±0.2 | 97.6±0.5 | 82.3±0.7 | 88.4±0.1 |
MIRO [19] (use RegNetY-16GF) | - | - | - | - | 97.4±0.2 |
GMDG (use RegNetY-16GF) | 97.5±1.0 | 97.0±0.2 | 99.4±0.2 | 95.2±0.4 | 97.3±0.1 |
MIRO [19] + SWAD [6] (use RegNetY-16GF) | - | - | - | - | 96.8±0.2 |
GMDG + SWAD (use RegNetY-16GF) | 98.3±0.3 | 98.0±0.1 | 99.5±0.3 | 95.3±0.8 | 97.9±0.0 |
DomainNet | clipart | info | painting | quickdraw | real | sketch | Avg. |
---|---|---|---|---|---|---|---|
ERM [14] | 63.0 | 21.2 | 50.1 | 13.9 | 63.7 | 52.9 | 44.0
MIRO [19] (use ResNet-50) | - | - | - | - | - | - | 44.3±0.2 |
GMDG (use ResNet-50) | 63.4±0.3 | 22.4±0.4 | 51.4±0.4 | 13.4±0.8 | 64.4±0.3 | 52.4±0.4 | 44.6±0.1 |
ERM + SWAD [6] | 66.0 | 22.4 | 53.5 | 16.1 | 65.8 | 55.5 | 46.5
DIWA [40] | 66.2 | 23.3 | 55.4 | 16.5 | 68.7 | 56.0 | 47.7
MIRO [19] + SWAD [6] (use ResNet-50) | - | - | - | - | - | - | 47.0±0.0 |
GMDG + SWAD (use ResNet-50) | 66.4±0.3 | 23.8±0.1 | 54.5±0.3 | 15.8±0.1 | 67.5±0.1 | 55.8±0.0 | 47.3±0.1 |
MIRO [19] (use RegNetY-16GF) | - | - | - | - | - | - | 53.8±0.1 |
GMDG (use RegNetY-16GF) | 74.0±0.3 | 39.5±1.5 | 61.5±0.3 | 16.3±1.2 | 73.9±1.5 | 62.8±2.4 | 54.6±0.1 |
MIRO [19] + SWAD [6] (use RegNetY-16GF) | - | - | - | - | - | - | 60.7±0.0 |
GMDG + SWAD (use RegNetY-16GF) | 79.0±0.0 | 46.9±0.4 | 69.9±0.4 | 20.7±0.6 | 81.1±0.3 | 70.3±0.4 | 61.3±0.2 |