Label-Efficient Domain Generalization via Collaborative Exploration and Generalization
Abstract.
Considerable progress has been made in domain generalization (DG) which aims to learn a model from multiple well-annotated source domains that generalizes to unknown target domains. However, it can be prohibitively expensive to obtain sufficient annotation for source datasets in many real scenarios. To escape from the dilemma between domain generalization and annotation costs, in this paper, we introduce a novel task named label-efficient domain generalization (LEDG) to enable model generalization with label-limited source domains. To address this challenging task, we propose a novel framework called Collaborative Exploration and Generalization (CEG) which jointly optimizes active exploration and semi-supervised generalization. Specifically, in active exploration, to explore class and domain discriminability while avoiding information divergence and redundancy, we query the labels of the samples with the highest overall ranking of class uncertainty, domain representativeness, and information diversity. In semi-supervised generalization, we design MixUp-based intra- and inter-domain knowledge augmentation to expand domain knowledge and generalize domain invariance. We unify active exploration and semi-supervised generalization in a collaborative way and promote mutual enhancement between them, boosting model generalization with limited annotation. Extensive experiments show that CEG yields superior generalization performance. In particular, CEG can even use only a 5% annotation budget to achieve results competitive with previous DG methods trained on fully labeled data on the PACS dataset.
1. Introduction
Despite the remarkable success achieved by modern machine learning algorithms in visual recognition (He et al., 2016; Voulodimos et al., 2018; Srinivas et al., 2021), it heavily relies on the i.i.d. assumption (Vapnik, 1992) that training and test datasets should have a consistent statistical pattern. Since machine learning systems are usually deployed in a wide range of scenarios where the test data are unknown in advance, it may inevitably result in serious model performance degradation when there exists a distinct distribution/domain shift (Quionero-Candela et al., 2009) between the training and test data.

With awareness of this problem, domain generalization (DG) (Blanchard et al., 2011) is introduced to extract domain invariance from multiple well-annotated source datasets/domains and train a model that generalizes to unknown target domains. Many effective DG algorithms (Shankar et al., 2018; Carlucci et al., 2019; Zhou et al., 2020; Xu et al., 2021; Zhou et al., 2021c; Pandey et al., 2021; Dubey et al., 2021) have been proposed recently; however, these methods may need to be fed with a large amount of labeled multi-source data for identifying domain invariance and improving model generalization. This can impede the deployment of DG approaches in many real-world applications where labeling massive data is expensive or even infeasible. For example, a highly accurate and robust system for detecting lung lesions in images of COVID-19 patients may demand a large number of labeled medical images from different hospitals as the source data for training (Ettinger et al., 2021), but it could be impractical to require numerous experienced clinicians to complete the annotation. Therefore, a dilemma is encountered: the requirement of massive labeled source data for training a generalizable model may be hard to meet in realistic scenarios due to the limited annotation budget. Meanwhile, without sufficient labeled data to provide adequate information about the multi-source distribution, improving model generalization by identifying and learning domain invariance is at serious risk of being misled.
To escape from the dilemma, we introduce a more practical task named label-efficient domain generalization (LEDG) to enable model generalization with label-limited source domains, as shown in Figure 1. Instead of requiring fully labeled data, the LEDG task unleashes the potential of budget-limited annotation by querying the labels of a small quota of informative data, and leverages both the labeled and unlabeled data to improve domain generalization. LEDG permits the learning of generalizable models in real scenarios, but it can be much more challenging. The first challenge comes from the distinct domain gaps that may exist among the multi-source data, which pose enormous obstacles to selecting the most informative samples and learning adequate information about the multi-source distribution. The second challenge comes from the discrepant distributions that the labeled and unlabeled data may follow, which makes utilizing them simultaneously to extract domain invariance and promote model generalization extremely difficult.
Active learning (AL) (Wang and Shang, 2014; Sener and Savarese, 2018; Ash et al., 2020; Huang et al., 2021b; Kim et al., 2021; Joshi et al., 2009) and semi-supervised learning (SSL) (Tarvainen and Valpola, 2017; Berthelot et al., 2019, 2020; Sohn et al., 2020) provide possible solutions to the introduced LEDG task. AL aims to query the labels of high-quality samples, and SSL leverages the unlabeled data to improve performance with limited labeled data. However, the existing AL and SSL methods mostly depend on the i.i.d. assumption, and hence may not extend favorably to generalization scenarios with distinct domain shifts. Semi-supervised domain generalization (SSDG) (Zhou et al., 2021a; Wang et al., 2021b; Yuan et al., 2021b; Liao et al., 2020; Sharifi-Noghabi et al., 2020) tackles domain shift under the SSL setting. But some of the data directly assumed to be labeled in this task might not be helpful for improving generalization while still increasing the annotation costs. Thus, it is imperative to find a solution to the challenging LEDG task to get rid of the raised dilemma between domain generalization and annotation costs, realizing more practical training of generalizable models in real-world scenarios.
To address the LEDG task, in this paper, we propose a novel framework called Collaborative Exploration and Generalization (CEG) which jointly optimizes active exploration and semi-supervised generalization. In active exploration, to unleash the power of the limited annotation, we query the labels of the samples with the highest overall ranking of class uncertainty, domain representativeness, and information diversity, exploring class and domain discriminability while avoiding information divergence and redundancy. In semi-supervised generalization, we augment intra- and inter-domain knowledge with MixUp (Zhang et al., 2018) to expand domain knowledge and generalize domain invariance. An augmentation consistency constraint for unlabeled data and a prediction supervision for labeled data are further included to improve performance. We unify active exploration and semi-supervised generalization in a collaborative way by repeating them alternately, promoting closed-loop mutual enhancement between them for effective learning of domain invariance and label-efficient training of generalizable models.
Our contributions are listed in the following. (1) We introduce a more practical task named label-efficient domain generalization to permit generalization learning in real-world scenarios by tackling the dilemma between domain generalization and annotation costs. (2) To solve this challenging task, we propose a semi-supervised active learning-based framework CEG to unify active query-based distribution exploration and semi-supervised training-based model generalization in a collaborative way, achieving closed-loop mutual enhancement between them. (3) Extensive experiments show the superior generalization performance of CEG, which can even achieve results competitive with previous DG methods with full annotation on the PACS dataset using only a 5% annotation budget.
2. Related Work
Domain Generalization (DG). Different from domain adaptation (DA) (Wang et al., 2021c; Deng et al., 2021; Lv et al., 2021; Yan et al., 2021; Huang et al., 2021c; Ye et al., 2021; Li et al., 2021b; Chen et al., 2021b; Ma et al., 2022; Chen et al., 2022; Chen and Wang, 2021; Chen et al., 2021a) which adapts models from the source domain to the target, DG (Blanchard et al., 2011) assumes that the target domain is unknown during training and aims to train a generalizable model from the source domains. A growing number of DG methods (Shankar et al., 2018; Pandey et al., 2021; Dubey et al., 2021; Volpi et al., 2021; Mahajan et al., 2021; Huang et al., 2020; Zhou et al., 2021b; Yuan et al., 2021a, c; Kuang et al., 2018, 2022, 2021, 2020; Shen et al., 2020) have been proposed recently; they explore various strategies via invariant representation learning (Zhao et al., 2020; Dou et al., 2019; Li et al., 2021a, 2018a; Qiao et al., 2020), meta-learning (Shu et al., 2021; Balaji et al., 2018; Li et al., 2018b; Dou et al., 2019; Li et al., 2019), data augmentation (Carlucci et al., 2019; Zhou et al., 2020, 2021c; Xu et al., 2021; Zhang et al., 2021; Huang et al., 2021a; Jeon et al., 2021), and others (Du et al., 2021; Wang et al., 2021a; Liu et al., 2021). But they mostly need fully labeled data to learn generalization.
Semi-Supervised Domain Generalization (SSDG) (Zhou et al., 2021a; Wang et al., 2021b; Yuan et al., 2021b; Liao et al., 2020; Sharifi-Noghabi et al., 2020) aims to reduce the reliance of DG on annotation via pseudo-labeling (Wang et al., 2021b), consistency learning (Zhou et al., 2021a), or bias filtering (Yuan et al., 2021b). For example, StyleMatch (Zhou et al., 2021a) combines consistency learning, model uncertainty learning, and style augmentation to utilize the annotation for improving model robustness. However, some of the samples assumed to be labeled in the SSDG task may not be informative for boosting model generalization yet still increase the annotation costs.
Semi-Supervised Learning (SSL). SSL (Tarvainen and Valpola, 2017; Berthelot et al., 2019, 2020; Sohn et al., 2020; Jiang et al., 2022) is a practicable way to use both labeled and unlabeled data. For example, MeanTeacher (Tarvainen and Valpola, 2017) achieves significant performance by using the labeled data to optimize a student model, the prediction of which is constrained to be consistent with the prediction of a teacher model. But most of the SSL methods rely on the i.i.d. assumption, which can impair their generalization performance under domain shift.

Active Learning (AL). AL (Ash et al., 2020; Wang and Shang, 2014; Joshi et al., 2009; Sener and Savarese, 2018; Kim et al., 2021; Huang et al., 2021b) aims to select high-quality data for querying the labels. Pool-based AL (Ash et al., 2020; Sener and Savarese, 2018; Huang et al., 2021b; Kim et al., 2021) is the most popular variant, which chooses samples from an unlabeled pool and hands them to an oracle for labeling. The labeled samples are then added to a labeled pool as newly acquired knowledge. Some successful uncertainty-based (Ash et al., 2020; Wang and Shang, 2014; Joshi et al., 2009) and diversity-based (Ash et al., 2020; Sener and Savarese, 2018) methods select uncertain and diverse samples for learning the task boundary and comprehensive information, respectively. However, the AL algorithms are mainly designed for single-domain data, and thus may not be directly extended to generalization scenarios.
3. Method
3.1. Label-Efficient Domain Generalization
We begin with the task setting of the introduced Label-Efficient Domain Generalization (LEDG). In the LEDG task, we have $M$ unlabeled source datasets $\{\mathcal{D}_i\}_{i=1}^{M}$ sampled from different data distributions $\{P_i\}_{i=1}^{M}$, respectively. There are $n_i$ unlabeled data points sampled for each dataset $\mathcal{D}_i$, i.e., $\mathcal{D}_i = \{x_j^{(i)}\}_{j=1}^{n_i}$, for $i = 1, \dots, M$. We further have an annotation budget $B$, i.e., the maximum number of samples whose class labels we are allowed to query. Each sample pair $(x, y)$ is defined on the image and label joint space $\mathcal{X} \times \mathcal{Y}$. Besides, the domain label $d_x$ of each sample $x$ is given in our task. We consider a classification model $F$ composed of a feature extractor $G$ and a classifier head $C$, i.e., $F = C \circ G$. The goal of LEDG is to train the model by utilizing the unlabeled multi-source data as well as the limited annotation budget for improving the generalization performance of the model on target domains with unknown distributions. For convenience of stating our method, we denote the dataset consisting of all the labeled (queried) samples as $\mathcal{D}_L$ and the dataset of all the unlabeled (not queried) samples as $\mathcal{D}_U$, where $N_L$ and $N_U$ are the sizes of the labeled and unlabeled datasets, respectively. Obviously, the whole data size is $N = \sum_{i=1}^{M} n_i = N_L + N_U$.
Our insight for this challenging task is to consider the labeled and unlabeled samples as the “known” and “unknown” regions of the multi-source distribution, respectively. In view of this, the core idea of our solution is to: (1) explore the key knowledge hidden in the unknown regions via active query for adequate multi-source distribution learning, (2) extract and generalize the domain invariance contained in the obtained knowledge in both the known and unknown regions via semi-supervised training, and (3) make the active query-based exploration and semi-supervised training-based generalization complement and promote each other to train a generalizable model. An overview of our framework, i.e., Collaborative Exploration and Generalization (CEG), is shown in Figure 2.

3.2. Active Exploration
There might be distinct domain divergence among the data distributions of the source domains. Meanwhile, each source domain contains the discriminative information of the class boundary, which is essential for the prediction task. Thus, we take class and domain discriminability as the key knowledge for learning the multi-source distribution in active exploration. In light of this, we propose to select the samples with high class uncertainty and domain representativeness. To avoid information redundancy, we further take information diversity into consideration. Figure 3 illustrates this query strategy for active exploration.
To capture the key knowledge of class discriminability, we select the samples with high class uncertainty. Specifically, we adopt the margin of the top-two model predictions to choose class-ambiguous samples to query. Let $F_k(x)$ be the $k$-th dimension of the class prediction of the model $F$ (after the softmax operation); then the class uncertainty score for each unlabeled sample $x \in \mathcal{D}_U$ is defined as

(1)  $s_{unc}(x) = 1 - \big( F_{k_1}(x) - F_{k_2}(x) \big),$

where $k_1$ and $k_2$ index the largest and second-largest class predictions for $x$, respectively.
We tend to query the samples with high uncertainty scores since they are class-ambiguous. Labeling these samples provides the key knowledge of class discriminability, which helps the model figure out the class boundary and boosts its class-prediction performance.
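As an illustration, the margin-based uncertainty score can be sketched as follows (a hypothetical numpy helper, not the authors' implementation; it assumes the model's softmax outputs are given as a matrix):

```python
import numpy as np

def class_uncertainty(probs):
    """Margin-based class uncertainty: a small gap between the top-two
    softmax probabilities marks a class-ambiguous sample (cf. Eq. (1))."""
    top2 = np.sort(probs, axis=1)[:, -2:]   # second-largest, largest per row
    margin = top2[:, 1] - top2[:, 0]        # best-versus-second-best gap
    return 1.0 - margin                     # high score = high uncertainty

# toy predictions for two samples over three classes
probs = np.array([[0.50, 0.45, 0.05],    # ambiguous (margin 0.05)
                  [0.90, 0.05, 0.05]])   # confident (margin 0.85)
scores = class_uncertainty(probs)
```

Here the ambiguous first sample receives the higher score and would be queried first.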
Different from the single-domain scenario considered in the AL methods (Wang and Shang, 2014; Joshi et al., 2009), multiple source domains may lead to an information divergence problem here, i.e., the selected high-uncertainty samples are scattered at the domain boundary (see Figure 3 (a)). To sufficiently explore and grasp information of the multi-source distribution, the selected samples are required to represent the characteristics of each source domain. Therefore, given the domain label $d_x$ of each unlabeled sample $x$, we first train a domain discriminator model $F^d$ with a domain discriminability loss:

(2)  $\mathcal{L}_{dom} = \dfrac{1}{N_U} \sum_{x \in \mathcal{D}_U} \ell_{ce}\big(F^d(x), d_x\big),$
where $\ell_{ce}$ is the cross-entropy loss. Let $F^d_k(x)$ be the $k$-th dimension of the domain prediction of the model $F^d$; we then define the domain representativeness score for each unlabeled sample $x$ as:

(3)  $s_{rep}(x) = F^d_{d_x}(x).$
Note that, different from the class discriminability learning with class-ambiguous data, here we select the samples with a high representativeness score, i.e., high domain confidence. It prevents the model from learning class discriminability in the remote areas of each source domain and hence losing domain characteristics.
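A minimal sketch of this score, assuming the domain discriminator's softmax outputs and the domain labels are given as arrays (hypothetical helper name):

```python
import numpy as np

def domain_representativeness(dom_probs, dom_labels):
    """Score = discriminator confidence on the sample's own domain (cf. Eq. (3)):
    high values mean the sample is typical of its source domain."""
    return dom_probs[np.arange(len(dom_labels)), dom_labels]

# toy domain-discriminator outputs for three samples over two domains
dom_probs = np.array([[0.95, 0.05],   # very typical of domain 0
                      [0.55, 0.45],   # near the domain boundary
                      [0.10, 0.90]])  # very typical of domain 1
dom_labels = np.array([0, 0, 1])
rep = domain_representativeness(dom_probs, dom_labels)
```

The boundary sample gets a low score, so querying favors domain-typical samples.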
Then an information redundancy problem arises, i.e., the selected samples with high class uncertainty and domain representativeness may gather together (see Figure 3 (b)), which wastes the limited annotation budget. To disperse the information, we choose the samples that are far away from the known domain-class knowledge of the labeled data. We let a knowledge dataset $\mathcal{D}_L^{i,k}$ be composed of the labeled data belonging to domain $i$ and class $k$ (if there is no such sample in $\mathcal{D}_L$, then $\mathcal{D}_L^{i,k} = \emptyset$). Let $|\mathcal{D}_L^{i,k}|$ be the number of samples in the knowledge dataset and $G$ be the feature extractor. We generate knowledge centroids for the known regions in the semantic feature space:

(4)  $c_{i,k} = \dfrac{1}{|\mathcal{D}_L^{i,k}|} \sum_{(x, y) \in \mathcal{D}_L^{i,k}} G(x).$
We let a set $\mathcal{C}$ be composed of all the knowledge centroids $c_{i,k}$ with $\mathcal{D}_L^{i,k} \neq \emptyset$. Then, we define the information diversity score as

(5)  $s_{div}(x) = \min_{c \in \mathcal{C}} dist\big(G(x), c\big),$
where $dist(\cdot, \cdot)$ is a distance metric, instantiated as the cosine distance in our experiments. We tend to choose samples with a high diversity score, i.e., far away from the closest centroid, facilitating the exploration of unknown regions for comprehensive learning of the multi-source distribution.
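Centroid construction and the cosine-distance diversity score can be sketched as follows (hypothetical helpers; feature extraction is abstracted into precomputed feature vectors):

```python
import numpy as np

def knowledge_centroids(feats, domains, classes):
    """Mean feature per (domain, class) pair present in the labeled set (cf. Eq. (4))."""
    cents = {}
    for d, k in {(d, k) for d, k in zip(domains, classes)}:
        mask = (np.asarray(domains) == d) & (np.asarray(classes) == k)
        cents[(d, k)] = feats[mask].mean(axis=0)
    return cents

def diversity_scores(feats, centroids):
    """Cosine distance to the closest known centroid (cf. Eq. (5))."""
    C = np.stack(list(centroids.values()))
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    c = C / np.linalg.norm(C, axis=1, keepdims=True)
    return (1.0 - f @ c.T).min(axis=1)

# toy labeled features: one (domain, class) group near the x-axis
lab_feats = np.array([[1.0, 0.0], [0.9, 0.1]])
cents = knowledge_centroids(lab_feats, domains=[0, 0], classes=[0, 0])
# unlabeled features: one near the centroid, one orthogonal to it
unl_feats = np.array([[1.0, 0.05], [0.0, 1.0]])
div = diversity_scores(unl_feats, cents)
```

The orthogonal sample, far from all known centroids, gets the higher diversity score.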
To avoid numerical issues, we integrate the uncertainty score $s_{unc}$, representativeness score $s_{rep}$, and diversity score $s_{div}$ by adopting their rankings, which we denote as $r_{unc}$, $r_{rep}$, and $r_{div}$, respectively. Finally, we have an overall query ranking for each unlabeled sample $x$:

(6)  $r(x) = r_{unc}(x) + \alpha\, r_{rep}(x) + \beta\, r_{div}(x),$
where $\alpha$ and $\beta$ are trade-off hyper-parameters. Note that we rank each score, i.e., $s_{unc}$, $s_{rep}$, and $s_{div}$, from high to low, so that the samples with the best overall ranking are the most informative about the multi-source data distribution.
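The rank aggregation can be sketched as follows (hypothetical helper; under the convention here, rank 0 denotes the highest score, so the samples with the smallest combined value are queried first):

```python
import numpy as np

def overall_ranking(s_unc, s_rep, s_div, alpha=1.0, beta=1.0):
    """Combine the three criteria by rank (rank 0 = highest score) to avoid
    numerical-scale issues (cf. Eq. (6))."""
    rank = lambda s: np.argsort(np.argsort(-np.asarray(s, dtype=float)))
    return rank(s_unc) + alpha * rank(s_rep) + beta * rank(s_div)

# toy scores for four unlabeled samples
s_unc = [0.9, 0.2, 0.8, 0.1]
s_rep = [0.7, 0.3, 0.9, 0.2]
s_div = [0.6, 0.1, 0.8, 0.3]
r = overall_ranking(s_unc, s_rep, s_div)
budget = 2
query_idx = np.argsort(r)[:budget]   # samples with the best overall ranking
```

Working on ranks rather than raw scores makes the combination insensitive to the different numerical scales of the three criteria.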
3.3. Semi-Supervised Generalization
With active query-based exploration, we have a small quota of labeled data and massive unlabeled data, i.e., small range of known regions and large range of unknown regions of the data distribution. In semi-supervised generalization, we aim to expand domain knowledge and learn domain invariance via MixUp-based intra- and inter-domain knowledge augmentation as shown in Figure 4.
We start by defining the unlabeled samples that are close to the knowledge centroids as "reliable samples", and construct a reliable dataset $\mathcal{D}_R$ with an expansion threshold $\tau$ to tune the reliable range:

(7)  $\mathcal{D}_R = \big\{ (x, \hat{y}_x) \mid x \in \mathcal{D}_U,\ s_{div}(x) \le \tau \big\},$
where the pseudo label $\hat{y}_x$ of each unlabeled sample $x$ is assigned by the nearest knowledge centroid, that is,

(8)  $\hat{y}_x = k^*, \quad (i^*, k^*) = \arg\min_{(i,k):\, c_{i,k} \in \mathcal{C}} dist\big(G(x), c_{i,k}\big).$
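The nearest-centroid pseudo-labeling can be sketched as follows (hypothetical helper; the centroids are stored in a dict keyed by (domain, class), and cosine distance is used as in the diversity score):

```python
import numpy as np

def pseudo_label(feat, centroids):
    """Assign the class of the nearest (domain, class) centroid under cosine
    distance (cf. Eq. (8)); `centroids` maps (domain, class) -> feature."""
    def cos_dist(a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return min(centroids, key=lambda dk: cos_dist(feat, centroids[dk]))[1]

cents = {(0, 0): np.array([1.0, 0.0]),   # domain 0, class 0
         (1, 1): np.array([0.0, 1.0])}   # domain 1, class 1
label = pseudo_label(np.array([0.9, 0.2]), cents)
```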
A low threshold value leads to few reliable samples but highly dependable pseudo labels, and vice versa. To arrange learning tasks in order of difficulty, helping the model gain sufficient basic and easy knowledge before handling more complex data, we let $\tau$ increase with the epochs to dynamically tune the learning difficulty:

(9)  $\tau_e = \tau_0 + (\tau_E - \tau_0)\, \dfrac{e}{E},$
where $\tau_0$ and $\tau_E$ are the initial and final threshold values, and $E$ and $e$ are the total and current epochs, respectively. This lets the model expand knowledge stably with highly dependable samples at the beginning, and break through the hard samples gradually.
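A minimal sketch of the threshold schedule (assuming, as a reading of Eq. (9), linear interpolation between the initial and final values):

```python
def expansion_threshold(e, E, tau0, tauE):
    """Linearly grow the reliable-range threshold from tau0 to tauE over
    E epochs (cf. Eq. (9)), admitting harder samples as training proceeds."""
    return tau0 + (tauE - tau0) * e / E

# the threshold increases monotonically with the epoch
taus = [expansion_threshold(e, 10, 0.1, 0.5) for e in range(11)]
```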

We expand domain-class knowledge within each domain and across domains with the reliable dataset $\mathcal{D}_R$, and construct MixUp-based intra- and inter-domain knowledge augmentation datasets, i.e., $\mathcal{D}_{intra}$ and $\mathcal{D}_{inter}$, respectively. That is,

(10)  $\mathcal{D}_{intra} = \big\{ \big(\lambda x_1 + (1-\lambda) x_2,\ \lambda y_1 + (1-\lambda) y_2\big) \mid (x_1, y_1), (x_2, y_2) \in \mathcal{D}_L \cup \mathcal{D}_R,\ d_{x_1} = d_{x_2} \big\},$

(11)  $\mathcal{D}_{inter} = \big\{ \big(\lambda x_1 + (1-\lambda) x_2,\ \lambda y_1 + (1-\lambda) y_2\big) \mid (x_1, y_1), (x_2, y_2) \in \mathcal{D}_L \cup \mathcal{D}_R,\ d_{x_1} \neq d_{x_2} \big\},$

where $\lambda \sim \mathrm{Beta}(\alpha', \alpha')$ as in (Zhang et al., 2018). $\mathcal{D}_{intra}$ and $\mathcal{D}_{inter}$ open up the association among known regions within and across domains, respectively. To broaden the known regions and learn domain-invariant representations for improving out-of-domain generalization ability, we train the model on the union of the augmented datasets by optimizing an expansion and generalization loss $\mathcal{L}_{eg}$:
(12)  $\mathcal{L}_{eg} = \dfrac{1}{|\mathcal{D}_{intra} \cup \mathcal{D}_{inter}|} \sum_{(x, y) \in \mathcal{D}_{intra} \cup \mathcal{D}_{inter}} \ell_{ce}\big(F(x), y\big).$
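The MixUp construction can be sketched per pair as follows (hypothetical helper; drawing both samples from the same domain yields an intra-domain sample, drawing them from different domains an inter-domain one):

```python
import numpy as np

def mixup_pair(x1, y1, x2, y2, alpha=0.2, rng=None):
    """MixUp a single pair (Zhang et al., 2018): lambda ~ Beta(alpha, alpha)
    interpolates both inputs and (one-hot) labels (cf. Eqs. (10)-(11))."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# toy inputs (as vectors) and one-hot labels from two samples
x1, y1 = np.array([1.0, 0.0]), np.array([1.0, 0.0])
x2, y2 = np.array([0.0, 1.0]), np.array([0.0, 1.0])
xm, ym = mixup_pair(x1, y1, x2, y2)
```

Since the mixed label is a convex combination, the cross-entropy in Eq. (12) is computed against a soft target.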
We further utilize the unknown and known regions by adopting an augmentation consistency constraint (Sohn et al., 2020) for the unlabeled data and prediction supervision for the labeled data, respectively. Let $\mathcal{A}_w$ and $\mathcal{A}_s$ be weak (flip-and-shift) and strong (Cubuk et al., 2019) augmentation functions, respectively. Pseudo labels can be assigned by the weakly augmented view via $\hat{y}_x = \arg\max_k F_k(\mathcal{A}_w(x))$. The augmentation consistency loss makes the model prediction on the strongly augmented data, i.e., $F(\mathcal{A}_s(x))$, consistent with the pseudo label of the weakly augmented data, i.e., $\hat{y}_x$:

(13)  $\mathcal{L}_{con} = \dfrac{1}{N_U} \sum_{x \in \mathcal{D}_U} \mathbb{1}\big[\max_k F_k(\mathcal{A}_w(x)) \ge \sigma\big]\, \ell_{ce}\big(F(\mathcal{A}_s(x)), \hat{y}_x\big),$
where the indicator $\mathbb{1}\big[\max_k F_k(\mathcal{A}_w(x)) \ge \sigma\big]$ ($\sigma$ is set to 0.95 as in (Sohn et al., 2020)) selects highly dependable data. This constraint helps the model capture structural knowledge in the unknown regions via unsupervised learning. For prediction supervision, we adopt a cross-entropy classification loss for the labeled data:

(14)  $\mathcal{L}_{sup} = \dfrac{1}{N_L} \sum_{(x, y) \in \mathcal{D}_L} \ell_{ce}\big(F(x), y\big).$
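The confidence-masked consistency term can be sketched as follows (hypothetical helper operating on precomputed weak- and strong-view softmax outputs, following the FixMatch-style thresholding):

```python
import numpy as np

def consistency_loss(weak_probs, strong_probs, sigma=0.95):
    """Cross-entropy between the strong-view prediction and the weak-view
    pseudo label, kept only where the weak view is confident (cf. Eq. (13))."""
    pseudo = weak_probs.argmax(axis=1)                 # weak-view pseudo labels
    mask = weak_probs.max(axis=1) >= sigma             # indicator with threshold sigma
    ce = -np.log(strong_probs[np.arange(len(pseudo)), pseudo] + 1e-12)
    return (mask * ce).mean()

weak = np.array([[0.97, 0.03],    # confident: contributes to the loss
                 [0.60, 0.40]])   # unconfident: masked out
strong = np.array([[0.80, 0.20],
                   [0.10, 0.90]])
loss = consistency_loss(weak, strong)
```

Averaging over all unlabeled samples (masked entries contributing zero) matches the $1/N_U$ normalization in Eq. (13).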
A semi-supervised training loss $\mathcal{L}$ is then derived as:

(15)  $\mathcal{L} = \mathcal{L}_{sup} + \mathcal{L}_{con} + \gamma\, \mathcal{L}_{eg},$

where $\gamma$ is a trade-off hyper-parameter for knowledge expansion and generalization. We set the weights of $\mathcal{L}_{sup}$ and $\mathcal{L}_{con}$ to 1 as in (Sohn et al., 2020).
Our framework CEG explores informative unlabeled samples for learning the key knowledge of the multi-source distribution with a limited annotation budget, promoting the expansion and generalization of this knowledge in semi-supervised training. The trained model is then continuously utilized to select more effective samples in the next round of query. Active exploration and semi-supervised generalization are unified in a collaborative way by being repeated alternately. They complement and promote each other to enable label-efficient domain generalization. The learning process of CEG is stated in Algorithm 1. Note that, based on our empirical experience, we use an initial budget taken from the annotation budget to initialize the labeled dataset via uniform sample selection, and pretrain the model before active query to solve a cold-start problem.
4. Experiments
Table 1. Leave-one-domain-out accuracy (%) on the PACS dataset with a 5% annotation budget; values in parentheses are the results of the DG methods with fully labeled data.

| Methods | Art | Cartoon | Photo | Sketch | Average |
|---|---|---|---|---|---|
| DeepAll | 55.79±2.92 (73.59±2.89) | 61.84±1.98 (70.63±2.33) | 80.21±1.14 (89.36±1.21) | 63.00±1.60 (80.06±0.95) | 65.21±0.46 (78.41±0.44) |
| JiGen (Carlucci et al., 2019) | 53.03±5.19 (77.05±1.83) | 55.09±1.85 (76.16±2.02) | 78.62±5.83 (93.81±1.51) | 22.28±4.05 (70.93±0.72) | 52.26±0.25 (79.49±0.67) |
| FACT (Xu et al., 2021) | 74.21±0.30 (84.76±0.77) | 65.82±2.03 (77.52±0.70) | 90.48±1.17 (95.29±0.31) | 55.24±3.62 (78.97±0.32) | 71.44±1.02 (84.13±0.24) |
| DDAIG (Zhou et al., 2020) | 62.87±3.13 (77.80±1.09) | 57.64±6.59 (75.35±3.21) | 81.48±3.82 (89.66±1.74) | 36.61±2.23 (73.70±2.99) | 59.65±2.22 (79.13±0.91) |
| RSC (Huang et al., 2020) | 59.57±3.37 (77.88±0.66) | 59.61±3.22 (73.90±2.12) | 84.43±3.59 (93.85±0.80) | 57.38±6.61 (80.66±0.81) | 65.25±2.95 (81.57±0.71) |
| CrossGrad (Shankar et al., 2018) | 56.06±7.04 (75.69±2.25) | 52.73±4.17 (76.51±3.24) | 80.51±1.97 (91.33±0.50) | 41.25±5.21 (70.50±0.97) | 57.64±2.13 (78.51±1.40) |
| DAEL (Zhou et al., 2021b) | 66.24±1.86 (83.51±0.83) | 61.72±1.89 (72.31±2.67) | 89.98±0.37 (95.74±0.08) | 32.50±2.20 (78.87±0.59) | 62.61±0.46 (82.61±0.98) |
| CEG (ours) | 80.12±0.37 | 71.11±0.96 | 92.32±1.68 | 73.13±2.87 | 79.17±0.83 |
Table 2. Leave-one-domain-out accuracy (%) on the Office-Home dataset with a 5% annotation budget; values in parentheses are the results of the DG methods with fully labeled data.

| Methods | Art | Clipart | Product | Real-World | Average |
|---|---|---|---|---|---|
| DeepAll | 34.73±1.13 (47.06±1.35) | 34.46±2.11 (47.50±0.91) | 46.20±1.51 (64.89±0.65) | 48.89±0.87 (65.16±0.62) | 41.07±0.93 (56.15±0.59) |
| JiGen (Carlucci et al., 2019) | 29.62±2.25 (52.67±0.95) | 25.52±2.12 (50.40±0.97) | 37.91±1.33 (71.21±0.12) | 39.84±0.61 (72.24±0.15) | 33.22±0.93 (61.63±0.25) |
| FACT (Xu et al., 2021) | 40.71±0.08 (58.98±0.29) | 32.12±0.17 (53.53±0.35) | 48.05±0.14 (74.47±0.56) | 49.16±0.17 (75.63±0.67) | 42.51±0.09 (65.65±0.41) |
| DDAIG (Zhou et al., 2020) | 35.20±1.06 (55.05±0.69) | 29.75±0.50 (52.37±0.58) | 42.42±0.58 (72.00±0.58) | 43.07±0.12 (73.54±0.19) | 37.61±0.16 (63.24±0.35) |
| RSC (Huang et al., 2020) | 31.95±1.24 (56.06±0.71) | 28.62±1.53 (52.95±0.31) | 40.88±1.87 (72.61±0.39) | 42.43±0.69 (73.42±0.38) | 35.97±0.61 (63.76±0.25) |
| CrossGrad (Shankar et al., 2018) | 35.05±0.37 (54.42±0.55) | 30.86±1.74 (52.63±0.77) | 45.10±1.76 (73.00±0.47) | 44.41±2.08 (73.42±0.74) | 38.86±0.27 (63.37±0.24) |
| DAEL (Zhou et al., 2021b) | 35.93±0.57 (59.20±0.56) | 30.71±0.86 (50.97±2.63) | 42.79±0.99 (73.53±0.52) | 43.95±0.86 (76.56±0.45) | 38.35±0.12 (65.06±0.55) |
| CEG (ours) | 47.60±1.32 | 42.01±1.19 | 56.20±1.79 | 57.69±1.18 | 50.87±0.99 |
Table 3. Comparison with AL, SSL, and SSDG methods under a 5% annotation budget; the first five result columns are on the PACS dataset (%), the last five on the Office-Home dataset (%).

| Methods | Art | Cartoon | Photo | Sketch | Average | Art | Clipart | Product | Real-World | Average |
|---|---|---|---|---|---|---|---|---|---|---|
| Uniform | 55.56±2.92 | 61.61±1.98 | 79.98±1.14 | 62.77±1.60 | 64.98±0.46 | 34.27±1.13 | 34.00±2.11 | 45.74±1.51 | 48.43±0.87 | 40.61±0.93 |
| Entropy (Wang and Shang, 2014) | 58.79±2.36 | 63.49±1.78 | 82.47±0.99 | 61.67±0.82 | 66.61±0.58 | 34.06±1.59 | 32.02±1.84 | 46.72±0.99 | 47.03±0.96 | 39.96±0.90 |
| BvSB (Joshi et al., 2009) | 62.85±1.83 | 63.17±1.02 | 79.57±3.46 | 63.61±5.97 | 67.30±0.97 | 35.58±1.44 | 35.32±2.29 | 47.66±1.42 | 50.39±0.54 | 42.24±0.82 |
| Confidence (Wang and Shang, 2014) | 58.02±2.14 | 59.48±1.61 | 81.75±3.40 | 61.04±2.86 | 65.07±1.60 | 36.35±1.23 | 36.21±1.24 | 47.88±0.75 | 50.46±2.20 | 42.73±0.83 |
| CoreSet (Sener and Savarese, 2018) | 61.48±4.57 | 58.74±2.66 | 79.03±3.71 | 60.61±2.25 | 64.96±1.06 | 37.54±0.76 | 35.75±2.55 | 49.44±1.21 | 51.06±1.75 | 43.45±0.66 |
| BADGE (Ash et al., 2020) | 54.49±1.67 | 63.10±1.60 | 80.84±1.19 | 65.57±5.99 | 66.50±2.11 | 37.81±1.07 | 36.86±2.62 | 49.90±2.00 | 51.26±2.77 | 43.96±0.82 |
| MeanTeacher (Tarvainen and Valpola, 2017) | 53.84±6.41 | 54.86±4.14 | 78.86±4.63 | 35.52±4.57 | 55.77±1.43 | 32.70±1.56 | 27.25±2.92 | 43.01±2.04 | 42.41±4.03 | 36.35±1.25 |
| MixMatch (Berthelot et al., 2019) | 63.92±1.77 | 61.37±2.31 | 81.14±4.12 | 55.46±0.61 | 65.47±1.70 | 25.65±0.66 | 22.90±2.24 | 33.80±0.93 | 28.35±2.24 | 27.68±0.80 |
| FixMatch (Sohn et al., 2020) | 78.60±1.47 | 71.14±2.49 | 92.17±1.02 | 69.16±0.94 | 77.77±0.91 | 36.76±1.84 | 31.09±2.53 | 44.79±4.20 | 45.07±5.18 | 39.43±2.21 |
| StyleMatch (Zhou et al., 2021a) | 72.67±1.08 | 73.07±0.81 | 89.61±0.74 | 76.46±0.93 | 77.95±0.56 | 42.01±0.68 | 40.95±0.97 | 47.65±1.70 | 51.93±0.26 | 45.63±0.29 |
| CEG (ours) | 80.12±0.37 | 71.11±0.96 | 92.32±1.68 | 73.13±2.87 | 79.17±0.83 | 47.60±1.32 | 42.01±1.19 | 56.20±1.79 | 57.69±1.18 | 50.87±0.99 |


Table 4. Ablation studies of CEG; the first five result columns are on the PACS dataset (%), the last five on the Office-Home dataset (%).

| Strategies | Cases | Art | Cartoon | Photo | Sketch | Average | Art | Clipart | Product | Real-World | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Active Exploration | w/ Uniform | 76.42±1.27 | 69.92±3.01 | 87.09±2.03 | 71.92±4.97 | 76.34±2.05 | 45.25±1.29 | 40.48±2.18 | 53.48±2.00 | 55.95±1.08 | 48.79±0.96 |
| | w/o $s_{unc}$ | 80.00±2.08 | 67.56±2.34 | 89.43±0.71 | 71.11±5.64 | 77.03±1.28 | 46.12±2.04 | 40.84±0.98 | 52.56±0.75 | 55.83±1.47 | 48.84±0.84 |
| | w/o $s_{rep}$ | 76.77±2.74 | 67.91±4.38 | 90.90±1.19 | 72.46±1.44 | 77.01±0.74 | 46.14±1.05 | 39.59±1.25 | 55.99±0.82 | 57.02±2.06 | 49.68±0.71 |
| | w/o $s_{div}$ | 78.21±0.97 | 68.95±2.54 | 91.03±1.50 | 72.58±3.24 | 77.69±0.61 | 45.78±2.34 | 40.79±1.78 | 54.39±1.46 | 58.24±1.83 | 49.84±0.82 |
| Semi-Supervised Generalization | w/o $\mathcal{L}_{eg}$ w/o $\mathcal{L}_{con}$ | 62.61±2.81 | 68.51±2.34 | 82.84±3.85 | 48.65±4.32 | 65.65±1.35 | 38.51±1.06 | 33.62±1.50 | 48.60±2.59 | 49.89±5.07 | 42.66±1.44 |
| | w/o $\mathcal{L}_{con}$ | 76.16±2.11 | 67.05±3.37 | 88.07±1.80 | 60.37±5.36 | 72.91±1.31 | 44.65±1.69 | 37.99±1.25 | 55.41±2.16 | 56.42±2.09 | 48.62±0.59 |
| | w/o $\mathcal{L}_{eg}$ | 72.13±1.43 | 68.09±3.37 | 86.03±4.02 | 75.12±2.31 | 75.34±0.84 | 40.38±2.15 | 34.68±1.29 | 46.10±2.20 | 47.94±1.66 | 42.27±1.18 |
| | w/o $\mathcal{D}_{intra}$ | 76.52±2.01 | 67.69±3.50 | 89.89±1.72 | 74.72±2.88 | 77.20±1.53 | 45.85±2.72 | 38.50±1.47 | 54.12±2.65 | 55.80±1.66 | 48.57±1.36 |
| | w/o $\mathcal{D}_{inter}$ | 75.36±2.59 | 68.20±2.66 | 91.18±1.05 | 72.49±2.28 | 76.81±1.32 | 47.47±3.43 | 38.97±2.26 | 52.99±2.77 | 55.10±1.48 | 48.63±1.32 |
| | w/ static $\tau$ | 76.86±2.37 | 69.61±1.63 | 90.79±1.13 | 73.96±2.06 | 77.81±1.09 | 45.90±1.76 | 40.46±2.33 | 55.90±1.50 | 56.21±2.30 | 49.62±1.02 |
| CEG | | 80.12±0.37 | 71.11±0.96 | 92.32±1.68 | 73.13±2.87 | 79.17±0.83 | 47.60±1.32 | 42.01±1.19 | 56.20±1.79 | 57.69±1.18 | 50.87±0.99 |


In this section, we first evaluate our framework CEG in label-limited scenarios, and then provide a sensitivity analysis of the hyper-parameters, ablation studies of the components, and in-depth empirical analysis.
Datasets. We adopt two popular public datasets, PACS (Li et al., 2017) and Office-Home (Venkateswara et al., 2017). PACS contains 7 categories within 4 domains, i.e., Art, Cartoon, Sketch, and Photo. Office-Home has 65 classes in 4 domains, i.e., Art, Clipart, Product, and Real-World.
Baseline methods. We implement four types of baselines. (1) Domain generalization (DG): DeepAll (training with mixed multi-source data), JiGen (Carlucci et al., 2019), CrossGrad (Shankar et al., 2018), DDAIG (Zhou et al., 2020), DAEL (Zhou et al., 2021b), RSC (Huang et al., 2020), and FACT (Xu et al., 2021). (2) Active learning (AL): Uniform (uniform selection), Entropy (Wang and Shang, 2014), BvSB (Joshi et al., 2009), Confidence (Wang and Shang, 2014), CoreSet (Sener and Savarese, 2018), and BADGE (Ash et al., 2020). (3) Semi-supervised learning (SSL): MeanTeacher (Tarvainen and Valpola, 2017), MixMatch (Berthelot et al., 2019), and FixMatch (Sohn et al., 2020). (4) Semi-supervised domain generalization (SSDG): StyleMatch (Zhou et al., 2021a). See Section 2 for details.
Implementation details. Following (Carlucci et al., 2019; Huang et al., 2020; Xu et al., 2021), we use a pre-trained ResNet-18 (He et al., 2016) as the backbone and conduct leave-one-domain-out experiments by choosing one domain to hold out as the target domain. For fair comparisons, we implement all the methods with the same settings: an SGD optimizer with learning rate 0.003 for the feature extractor and 0.01 for the classifier; 30/30 pre-training/learning epochs on PACS and 15/15 on Office-Home; a batch size of 16; etc. In experiments, we specify the expansion threshold in Equation (9) as a percentage of the unlabeled samples instead of a distance value, for simplicity. The trade-off hyper-parameters are set separately for the PACS and Office-Home datasets. Half of the annotation budget is used as the initial budget to initialize the labeled dataset. We report results averaged over five runs.
4.1. Main Results of CEG
CEG vs DG methods. Tables 1 and 2 report the results with a 5% annotation budget on the PACS and Office-Home datasets, respectively. We observe that the accuracy of DG methods drops rapidly when only 5% labeled data is given. In comparison, our method CEG can select the informative data to label and utilize both the labeled and unlabeled data to boost generalization performance in this challenging label-limited scenario. Most notably, CEG can even achieve competitive results with only a 5% annotation budget compared to the DG methods with full annotation on the PACS dataset. It reveals that CEG generally realizes label-efficient domain generalization by exploiting only a small quota of labeled data and massive unlabeled data. We attribute this success to the effective collaboration mechanism between active exploration and semi-supervised generalization, which unleashes the latent power of the limited annotation budget. Since the DG methods may not be good at tackling the label-limited task as they can only use the labeled data, we further compare our CEG method with AL, SSL, and SSDG methods.
CEG vs AL, SSL, SSDG methods. Table 3 reports the results with a 5% annotation budget on the PACS and Office-Home datasets. CEG outperforms other methods on half of the tasks and yields the best average accuracy on the PACS dataset. This is probably because the AL and SSL methods rely on the i.i.d. assumption, and the SSDG method cannot choose which source data to label and exploit. In contrast, CEG selects the most informative samples for query via active exploration, and hence captures the multi-source distribution and boosts generalization ability more effectively. Besides, the performance of CEG is significantly better than other methods on the Office-Home dataset. We attribute it to the construction of domain-class knowledge centroids, which greatly helps CEG to precisely explore unknown regions during active exploration, and to effectively expand knowledge and generalize domain invariance during semi-supervised generalization on the Office-Home dataset (because Office-Home has 65 classes while PACS only has 7).
4.2. Sensitivity Analysis
As shown in Figure 5, CEG is generally robust to the hyper-parameters and outperforms other methods even with their default settings (78.79% on PACS and 46.34% on Office-Home; 78.09% and 50.49%; 78.12% and 50.37%, for the three hyper-parameters respectively), indicating that exhaustive hyper-parameter fine-tuning is not necessary for CEG to achieve excellent performance in label-efficient generalization learning.
4.3. Results with Increasing Annotation Budget
Figure 6 shows the results with an increasing annotation budget on the Office-Home dataset. CEG consistently outperforms other methods by sharp margins on the average accuracy and on three of the four tasks. The significant performance achieved by CEG under a low budget is probably due to the query-based active exploration, but this advantage could weaken when a higher budget is given.
4.4. Why does CEG Work?
Ablation studies are reported in Table 4. The three criteria of active exploration, i.e., uncertainty, representativeness, and diversity, are all important for learning the multi-source distribution, and their integration further makes full use of the limited annotation compared with uniform selection. For semi-supervised generalization, both the knowledge expansion and generalization objective and the augmentation consistency objective are necessary to yield remarkable results. The intra- and inter-domain knowledge augmentation datasets both play vital roles in improving generalization performance. It is noteworthy that the proposed knowledge augmentation significantly improves average accuracy from 42.27% to 50.87% on Office-Home. Besides, the devised dynamic threshold shows the effectiveness of learning with increasing difficulty compared to a static one. The above results illustrate that each component is indispensable, and that exploration and generalization complement and promote each other to achieve excellent performance.
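The overall-ranking query rule that this ablation dissects can be sketched as follows. This is a minimal illustration assuming simple per-sample scores combined by summing per-criterion ranks; the function and variable names are hypothetical, and the paper's exact scoring functions are not reproduced here.

```python
import numpy as np

def select_queries(uncertainty, representativeness, diversity, budget):
    """Rank-based combination of the three query criteria (a hypothetical
    sketch). Each argument is a per-sample score where HIGHER means more
    informative; we sum the per-criterion ranks and query the samples
    with the best overall ranking."""
    def ranks(scores):
        # argsort of argsort: each sample's rank under one criterion
        # (0 = least informative, n-1 = most informative)
        return np.argsort(np.argsort(scores))

    overall = ranks(uncertainty) + ranks(representativeness) + ranks(diversity)
    # indices of the `budget` samples with the highest combined rank
    return np.argsort(-overall)[:budget]

# Toy usage: 6 unlabeled samples, annotation budget of 2.
u = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.3])  # class uncertainty
r = np.array([0.5, 0.2, 0.9, 0.1, 0.6, 0.4])  # domain representativeness
d = np.array([0.7, 0.3, 0.6, 0.2, 0.8, 0.1])  # information diversity
picked = select_queries(u, r, d, budget=2)
```

The rank-sum combination makes the three criteria commensurable without tuning per-criterion weights, which is one plausible way to realize an "overall ranking" of heterogeneous scores.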
t-SNE visualization is shown in Figure 7. The left figure shows that class-ambiguous samples, i.e., samples distributed on the class boundaries, are selected for learning class discriminability. The right figure shows that, in general, the selected samples are distributed uniformly and representatively in each domain, illustrating the effectiveness of the domain representativeness and information diversity criteria. These three criteria help CEG select the most informative samples for learning the multi-source distribution, which facilitates generalizable model training in semi-supervised generalization.
The accuracy curve on the Office-Home dataset is shown in Figure 8. Compared with uniform sample selection, active exploration selects the most important samples and grasps the key knowledge of the multi-source distribution, effectively improving performance on each domain. Semi-supervised generalization further boosts performance markedly by expanding the obtained knowledge and generalizing domain invariance. The two modules promote each other to achieve remarkable generalization performance on the target domain.
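The knowledge expansion step that drives this boost can be sketched with standard MixUp (Zhang et al., 2018) applied within and across source domains. The pairing strategy and all names below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def mixup(x_a, x_b, y_a, y_b, alpha=0.2):
    """Standard MixUp (Zhang et al., 2018): a convex combination of two
    samples and their one-hot labels, with the mixing ratio drawn from
    a Beta(alpha, alpha) distribution."""
    lam = np.random.beta(alpha, alpha)
    return lam * x_a + (1 - lam) * x_b, lam * y_a + (1 - lam) * y_b

def augment(x_by_domain, y_by_domain, alpha=0.2):
    """Hypothetical sketch of intra-/inter-domain knowledge augmentation:
    intra-domain pairs mix samples within one source domain (expanding
    knowledge inside each domain), while inter-domain pairs mix samples
    across two domains (encouraging domain-invariant representations).
    Assumes equal batch sizes per domain; cyclic domain pairing is an
    illustrative choice."""
    intra, inter = [], []
    domains = list(x_by_domain)
    for dom in domains:
        x, y = x_by_domain[dom], y_by_domain[dom]
        perm = np.random.permutation(len(x))  # random pairing within the domain
        intra.append(mixup(x, x[perm], y, y[perm], alpha))
    for d_a, d_b in zip(domains, domains[1:] + domains[:1]):
        inter.append(mixup(x_by_domain[d_a], x_by_domain[d_b],
                           y_by_domain[d_a], y_by_domain[d_b], alpha))
    return intra, inter

# Toy usage: two source domains, 4 samples each, 8-d features, 7 classes.
rng = np.random.default_rng(0)
x = {dom: rng.normal(size=(4, 8)) for dom in ["photo", "sketch"]}
y = {dom: np.eye(7)[rng.integers(0, 7, size=4)] for dom in ["photo", "sketch"]}
intra, inter = augment(x, y)
```

The mixed labels remain valid distributions (each row still sums to one), so the augmented pairs can be trained on with the same cross-entropy-style objective as the labeled data.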
5. Conclusion
We introduce a practical task named label-efficient domain generalization and propose a novel method called CEG that tackles it via active exploration and semi-supervised generalization. The two modules promote each other to improve model generalization under a limited annotation budget. In future work, we may extend our method to the more challenging setting in which domain labels are unknown.
Acknowledgements.
This work was supported in part by National Key Research and Development Program of China (2021YFC3340300), Young Elite Scientists Sponsorship Program by CAST (2021QNRC001), National Natural Science Foundation of China (No. 62006207, No. 62037001), Project by Shanghai AI Laboratory (P22KS00111), the Starry Night Science Fund of Zhejiang University Shanghai Institute for Advanced Study (SN-ZJU-SIAS-0010), Natural Science Foundation of Zhejiang Province (LZ22F020012), and the Fundamental Research Funds for the Central Universities (226-2022-00142).
References
- Ash et al. (2020) Jordan T Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2020. Deep batch active learning by diverse, uncertain gradient lower bounds. In International Conference on Learning Representations.
- Balaji et al. (2018) Yogesh Balaji, Swami Sankaranarayanan, and Rama Chellappa. 2018. Metareg: Towards domain generalization using meta-regularization. Advances in Neural Information Processing Systems 31 (2018), 998–1008.
- Berthelot et al. (2020) David Berthelot, Nicholas Carlini, Ekin D Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. 2020. Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring. International Conference on Learning Representation (2020).
- Berthelot et al. (2019) David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. 2019. Mixmatch: A holistic approach to semi-supervised learning. Advances in Neural Information Processing Systems (2019).
- Blanchard et al. (2011) Gilles Blanchard, Gyemin Lee, and Clayton Scott. 2011. Generalizing from several related classification tasks to a new unlabeled sample. Advances in Neural Information Processing Systems 24 (2011), 2178–2186.
- Carlucci et al. (2019) Fabio Maria Carlucci, Antonio D’Innocente, Silvia Bucci, Barbara Caputo, and Tatiana Tommasi. 2019. Domain Generalization by Solving Jigsaw Puzzles. IEEE Conference on Computer Vision and Pattern Recognition (2019), 2224–2233.
- Chen et al. (2021b) Yang Chen, Yingwei Pan, Yu Wang, Ting Yao, Xinmei Tian, and Tao Mei. 2021b. Transferrable Contrastive Learning for Visual Domain Adaptation. In Proceedings of the 29th ACM International Conference on Multimedia. 3399–3408.
- Chen et al. (2021a) Zhengyu Chen, Jixie Ge, Heshen Zhan, Siteng Huang, and Donglin Wang. 2021a. Pareto Self-Supervised Training for Few-Shot Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 13663–13672.
- Chen and Wang (2021) Zhengyu Chen and Donglin Wang. 2021. Multi-Initialization Meta-Learning with Domain Adaptation. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 1390–1394.
- Chen et al. (2022) Zhengyu Chen, Teng Xiao, and Kun Kuang. 2022. BA-GNN: On Learning Bias-Aware Graph Neural Network. In 2022 IEEE 38th International Conference on Data Engineering (ICDE). IEEE.
- Cubuk et al. (2019) Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. 2019. Autoaugment: Learning augmentation strategies from data. In IEEE/CVF Conference on Computer Vision and Pattern Recognition. 113–123.
- Deng et al. (2021) Wanxia Deng, Yawen Cui, Zhen Liu, Gangyao Kuang, Dewen Hu, Matti Pietikäinen, and Li Liu. 2021. Informative Class-Conditioned Feature Alignment for Unsupervised Domain Adaptation. In Proceedings of the 29th ACM International Conference on Multimedia. 1303–1312.
- Dou et al. (2019) Qi Dou, Daniel Coelho de Castro, Konstantinos Kamnitsas, and Ben Glocker. 2019. Domain generalization via model-agnostic learning of semantic features. Advances in Neural Information Processing Systems 32 (2019), 6450–6461.
- Du et al. (2021) Zhekai Du, Jingjing Li, Ke Lu, Lei Zhu, and Zi Huang. 2021. Learning Transferrable and Interpretable Representations for Domain Generalization. In Proceedings of the 29th ACM International Conference on Multimedia. 3340–3349.
- Dubey et al. (2021) Abhimanyu Dubey, Vignesh Ramanathan, Alex Pentland, and Dhruv Mahajan. 2021. Adaptive Methods for Real-World Domain Generalization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition. 14340–14349.
- Ettinger et al. (2021) Scott Ettinger, Shuyang Cheng, Benjamin Caine, Chenxi Liu, Hang Zhao, Sabeek Pradhan, Yuning Chai, Ben Sapp, Charles R Qi, Yin Zhou, et al. 2021. Large scale interactive motion forecasting for autonomous driving: The waymo open motion dataset. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 9710–9719.
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition. 770–778.
- Huang et al. (2021a) Jiaxing Huang, Dayan Guan, Aoran Xiao, and Shijian Lu. 2021a. Fsdr: Frequency space domain randomization for domain generalization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition. 6891–6902.
- Huang et al. (2021b) Siyu Huang, Tianyang Wang, Haoyi Xiong, Jun Huan, and Dejing Dou. 2021b. Semi-supervised active learning with temporal output discrepancy. In IEEE/CVF International Conference on Computer Vision. 3447–3456.
- Huang et al. (2021c) Shengqi Huang, Wanqi Yang, Lei Wang, Luping Zhou, and Ming Yang. 2021c. Few-shot Unsupervised Domain Adaptation with Image-to-class Sparse Similarity Encoding. In Proceedings of the 29th ACM International Conference on Multimedia. 677–685.
- Huang et al. (2020) Zeyi Huang, Haohan Wang, Eric P. Xing, and Dong Huang. 2020. Self-challenging Improves Cross-Domain Generalization. In European Conference on Computer Vision. 124–140.
- Jeon et al. (2021) Seogkyu Jeon, Kibeom Hong, Pilhyeon Lee, Jewook Lee, and Hyeran Byun. 2021. Feature stylization and domain-aware contrastive learning for domain generalization. In Proceedings of the 29th ACM International Conference on Multimedia. 22–31.
- Jiang et al. (2022) Ziqi Jiang, Shengyu Zhang, Siyuan Yao, Wenqiao Zhang, Sihan Zhang, Juncheng Li, Zhou Zhao, and Fei Wu. 2022. Weakly-supervised Disentanglement Network for Video Fingerspelling Detection. In ACM MM.
- Joshi et al. (2009) Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. 2009. Multi-class active learning for image classification. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2372–2379.
- Kim et al. (2021) Kwanyoung Kim, Dongwon Park, Kwang In Kim, and Se Young Chun. 2021. Task-aware variational adversarial active learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition. 8166–8175.
- Kuang et al. (2018) Kun Kuang, Peng Cui, Susan Athey, Ruoxuan Xiong, and Bo Li. 2018. Stable prediction across unknown environments. In proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. 1617–1626.
- Kuang et al. (2022) Kun Kuang, Haotian Wang, Yue Liu, Ruoxuan Xiong, Runze Wu, Weiming Lu, Yueting Zhuang, Fei Wu, Peng Cui, and Bo Li. 2022. Stable Prediction with Leveraging Seed Variable. IEEE Transactions on Knowledge and Data Engineering (2022).
- Kuang et al. (2020) Kun Kuang, Ruoxuan Xiong, Peng Cui, Susan Athey, and Bo Li. 2020. Stable prediction with model misspecification and agnostic distribution shift. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 4485–4492.
- Kuang et al. (2021) Kun Kuang, Hengtao Zhang, Runze Wu, Fei Wu, Yueting Zhuang, and Aijun Zhang. 2021. Balance-Subsampled stable prediction across unknown test data. ACM Transactions on Knowledge Discovery from Data (TKDD) 16, 3 (2021), 1–21.
- Li et al. (2017) Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. 2017. Deeper, broader and artier domain generalization. In Proceedings of the IEEE International Conference on Computer Vision. 5542–5550.
- Li et al. (2018b) Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. 2018b. Learning to generalize: Meta-learning for domain generalization. In AAAI Conference on Artificial Intelligence.
- Li et al. (2018a) Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C Kot. 2018a. Domain generalization with adversarial feature learning. In IEEE Conference on Computer Vision and Pattern Recognition. 5400–5409.
- Li et al. (2021a) Lei Li, Ke Gao, Juan Cao, Ziyao Huang, Yepeng Weng, Xiaoyue Mi, Zhengze Yu, Xiaoya Li, and Boyang Xia. 2021a. Progressive Domain Expansion Network for Single Domain Generalization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition. 224–233.
- Li et al. (2021b) Xinhao Li, Jingjing Li, Lei Zhu, Guoqing Wang, and Zi Huang. 2021b. Imbalanced Source-free Domain Adaptation. In Proceedings of the 29th ACM International Conference on Multimedia. 3330–3339.
- Li et al. (2019) Yiying Li, Yongxin Yang, Wei Zhou, and Timothy Hospedales. 2019. Feature-critic networks for heterogeneous domain generalization. In International Conference on Machine Learning. PMLR, 3915–3924.
- Liao et al. (2020) Yixiao Liao, Ruyi Huang, Jipu Li, Zhuyun Chen, and Weihua Li. 2020. Deep semisupervised domain generalization network for rotary machinery fault diagnosis under variable speed. IEEE Transactions on Instrumentation and Measurement 69, 10 (2020), 8064–8075.
- Liu et al. (2021) Chang Liu, Lichen Wang, Kai Li, and Yun Fu. 2021. Domain Generalization via Feature Variation Decorrelation. In Proceedings of the 29th ACM International Conference on Multimedia. 1683–1691.
- Lv et al. (2021) Jianming Lv, Kaijie Liu, and Shengfeng He. 2021. Differentiated Learning for Multi-Modal Domain Adaptation. In Proceedings of the 29th ACM International Conference on Multimedia. 1322–1330.
- Ma et al. (2022) Xu Ma, Junkun Yuan, Yen-wei Chen, Ruofeng Tong, and Lanfen Lin. 2022. Attention-based cross-layer domain alignment for unsupervised domain adaptation. Neurocomputing 499 (2022), 1–10.
- Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey E. Hinton. 2008. Visualizing Data using t-SNE. Journal of Machine Learning Research 9 (2008), 2579–2605.
- Mahajan et al. (2021) Divyat Mahajan, Shruti Tople, and Amit Sharma. 2021. Domain generalization using causal matching. In International Conference on Machine Learning. PMLR, 7313–7324.
- Pandey et al. (2021) Prashant Pandey, Mrigank Raman, Sumanth Varambally, and Prathosh AP. 2021. Generalization on Unseen Domains via Inference-Time Label-Preserving Target Projections. In IEEE/CVF Conference on Computer Vision and Pattern Recognition. 12924–12933.
- Qiao et al. (2020) Fengchun Qiao, Long Zhao, and Xi Peng. 2020. Learning to learn single domain generalization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition. 12556–12565.
- Quionero-Candela et al. (2009) Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. 2009. Dataset shift in machine learning. The MIT Press.
- Sener and Savarese (2018) Ozan Sener and Silvio Savarese. 2018. Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations.
- Shankar et al. (2018) Shiv Shankar, Vihari Piratla, Soumen Chakrabarti, Siddhartha Chaudhuri, Preethi Jyothi, and Sunita Sarawagi. 2018. Generalizing across domains via cross-gradient training. International Conference on Learning Representation (2018).
- Sharifi-Noghabi et al. (2020) Hossein Sharifi-Noghabi, Hossein Asghari, Nazanin Mehrasa, and Martin Ester. 2020. Domain generalization via semi-supervised meta learning. arXiv preprint arXiv:2009.12658 (2020).
- Shen et al. (2020) Zheyan Shen, Peng Cui, Tong Zhang, and Kun Kuang. 2020. Stable learning via sample reweighting. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 5692–5699.
- Shu et al. (2021) Yang Shu, Zhangjie Cao, Chenyu Wang, Jianmin Wang, and Mingsheng Long. 2021. Open Domain Generalization with Domain-Augmented Meta-Learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition. 9624–9633.
- Sohn et al. (2020) Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin Dogus Cubuk, Alexey Kurakin, Han Zhang, and Colin Raffel. 2020. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence. Advances in Neural Information Processing Systems 33 (2020).
- Srinivas et al. (2021) Aravind Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, and Ashish Vaswani. 2021. Bottleneck transformers for visual recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition. 16519–16529.
- Tarvainen and Valpola (2017) Antti Tarvainen and Harri Valpola. 2017. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in Neural Information Processing Systems (2017).
- Vapnik (1992) Vladimir Vapnik. 1992. Principles of risk minimization for learning theory. In Advances in neural information processing systems. 831–838.
- Venkateswara et al. (2017) Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. 2017. Deep hashing network for unsupervised domain adaptation. In IEEE conference on computer vision and pattern recognition. 5018–5027.
- Volpi et al. (2021) Riccardo Volpi, Diane Larlus, and Grégory Rogez. 2021. Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition. 4443–4453.
- Voulodimos et al. (2018) Athanasios Voulodimos, Nikolaos Doulamis, Anastasios Doulamis, and Eftychios Protopapadakis. 2018. Deep learning for computer vision: A brief review. Computational intelligence and neuroscience 2018 (2018).
- Wang and Shang (2014) Dan Wang and Yi Shang. 2014. A new active labeling method for deep learning. In International joint conference on neural networks. IEEE, 112–119.
- Wang et al. (2021c) Mengzhu Wang, Wei Wang, Baopu Li, Xiang Zhang, Long Lan, Huibin Tan, Tianyi Liang, Wei Yu, and Zhigang Luo. 2021c. InterBN: Channel Fusion for Adversarial Unsupervised Domain Adaptation. In Proceedings of the 29th ACM International Conference on Multimedia. 3691–3700.
- Wang et al. (2021b) Ruiqi Wang, Lei Qi, Yinghuan Shi, and Yang Gao. 2021b. Better Pseudo-label: Joint Domain-aware Label and Dual-classifier for Semi-supervised Domain Generalization. arXiv preprint arXiv:2110.04820 (2021).
- Wang et al. (2021a) Yufei Wang, Haoliang Li, Lap-pui Chau, and Alex C Kot. 2021a. Embracing the Dark Knowledge: Domain Generalization Using Regularized Knowledge Distillation. In Proceedings of the 29th ACM International Conference on Multimedia. 2595–2604.
- Xu et al. (2021) Qinwei Xu, Ruipeng Zhang, Ya Zhang, Yanfeng Wang, and Qi Tian. 2021. A Fourier-based Framework for Domain Generalization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition. 14383–14392.
- Yan et al. (2021) Zizheng Yan, Xianggang Yu, Yipeng Qin, Yushuang Wu, Xiaoguang Han, and Shuguang Cui. 2021. Pixel-level intra-domain adaptation for semantic segmentation. In Proceedings of the 29th ACM International Conference on Multimedia. 404–413.
- Ye et al. (2021) Mucong Ye, Jing Zhang, Jinpeng Ouyang, and Ding Yuan. 2021. Source Data-free Unsupervised Domain Adaptation for Semantic Segmentation. In Proceedings of the 29th ACM International Conference on Multimedia. 2233–2242.
- Yuan et al. (2021a) Junkun Yuan, Xu Ma, Defang Chen, Kun Kuang, Fei Wu, and Lanfen Lin. 2021a. Collaborative Semantic Aggregation and Calibration for Separated Domain Generalization. arXiv preprint (2021).
- Yuan et al. (2021b) Junkun Yuan, Xu Ma, Defang Chen, Kun Kuang, Fei Wu, and Lanfen Lin. 2021b. Domain-Specific Bias Filtering for Single Labeled Domain Generalization. arXiv preprint arXiv:2110.00726 (2021).
- Yuan et al. (2021c) Junkun Yuan, Xu Ma, Kun Kuang, Ruoxuan Xiong, Mingming Gong, and Lanfen Lin. 2021c. Learning domain-invariant relationship with instrumental variable for domain generalization. arXiv preprint arXiv:2110.01438 (2021).
- Zhang et al. (2018) Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empirical risk minimization. International Conference on Learning Representation (2018).
- Zhang et al. (2021) Xingxuan Zhang, Peng Cui, Renzhe Xu, Linjun Zhou, Yue He, and Zheyan Shen. 2021. Deep Stable Learning for Out-Of-Distribution Generalization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition. 5372–5382.
- Zhao et al. (2020) Shanshan Zhao, Mingming Gong, Tongliang Liu, Huan Fu, and Dacheng Tao. 2020. Domain generalization via entropy regularization. Advances in Neural Information Processing Systems 33 (2020).
- Zhou et al. (2021a) Kaiyang Zhou, Chen Change Loy, and Ziwei Liu. 2021a. Semi-Supervised Domain Generalization with Stochastic StyleMatch. arXiv preprint arXiv:2106.00592 (2021).
- Zhou et al. (2020) Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, and Tao Xiang. 2020. Deep domain-adversarial image generation for domain generalisation. In AAAI Conference on Artificial Intelligence.
- Zhou et al. (2021b) Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang. 2021b. Domain adaptive ensemble learning. IEEE Transactions on Image Processing 30 (2021), 8008–8018.
- Zhou et al. (2021c) Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang. 2021c. Domain Generalization with Mixstyle. In International Conference on Learning Representation.