Source-Free Unsupervised Domain Adaptation with Hypothesis Consolidation of Prediction Rationale
Abstract
Source-Free Unsupervised Domain Adaptation (SFUDA) is a challenging task where a model needs to be adapted to a new domain without access to target domain labels or source domain data. The primary difficulty in this task is that the model’s predictions may be inaccurate, and using these inaccurate predictions for model adaptation can lead to misleading results. To address this issue, this paper proposes a novel approach that considers multiple prediction hypotheses for each sample and investigates the rationale behind each hypothesis. By consolidating these hypothesis rationales, we identify the most likely correct hypotheses, which we then use as a pseudo-labeled set to support a semi-supervised learning procedure for model adaptation. To achieve optimal performance, we propose a three-step adaptation process: model pre-adaptation, hypothesis consolidation, and semi-supervised learning. Extensive experimental results demonstrate that our approach achieves state-of-the-art performance in the SFUDA task and can be easily integrated into existing approaches to improve their performance. The code is available at https://github.com/GANPerf/HCPR.
Index Terms:
Source-free unsupervised domain adaptation, hypothesis consolidation, and prediction rationale.
I Introduction
The success of deep learning models in visual tasks is largely dependent on whether the training and testing data share similar distributions [3, 50]. However, when the distribution of the testing data differs significantly from that of the training data, also known as domain shift, the performance of these models can decrease substantially [37, 29]. To mitigate the effects of domain shift and reduce the need for data annotations, Unsupervised Domain Adaptation (UDA) techniques have been developed to transfer knowledge from annotated source domains to new but related target domains without requiring annotations in the target domain [36, 22, 39, 43, 44]. However, most UDA-based methods rely on access to labeled source domain data during adaptation, and such access may not always be feasible due to privacy concerns. As a result, Source-Free Unsupervised Domain Adaptation (SFUDA) [2, 5, 6, 10, 51, 11, 12], which requires only a pre-trained model from the source domain and unlabeled data from the target domain, has gained much attention recently.
The main challenge in SFUDA research is how to generate supervision solely from unlabeled data. The current approaches in SFUDA research primarily focus on either generating pseudo-labels [2, 5, 6, 13] or conducting unsupervised feature learning [9, 10, 11, 12, 13] to address this issue. To generate reliable pseudo-labels, existing methods [2, 5, 6] often utilize the distribution of the target domain data to refine the initial predictions from the source domain, i.e., via clustering [2] or using the predictions of neighboring samples [6, 13]. On the other hand, unsupervised feature learning, such as contrastive learning, is often employed as an auxiliary task to encourage the features to adapt to the target domain [9, 10, 11, 12, 13].
In our study, we propose a novel approach to tackle the challenge of SFUDA. Our strategy involves deferring the utilization of label predictions to update the model in the early stages and carefully selecting the most reliable predictions to construct a pseudo-labeled set. The key innovation of our approach lies in considering multiple prediction hypotheses for each sample, accommodating the possibility of multiple potential labels for each data point. We treat each label assignment as a hypothesis and delve into the rationale and supporting evidence behind each prediction. We utilize a representation derived from GradCAM [1] to encode the rationale for assigning an instance to a hypothetical label. Our methodology is inspired by the belief that the correctness of a prediction can be assessed more reliably by analyzing the reasoning behind it than by relying solely on prediction probabilities. Subsequently, we develop a consolidation method to determine the most trustworthy hypotheses and utilize them as the labeled dataset in a semi-supervised learning framework. By employing this technique, we effectively transform the SFUDA problem into a conventional semi-supervised learning problem.
Concretely, our approach consists of three key steps: model pre-adaptation, hypothesis consolidation, and semi-supervised learning. We have empirically observed that pre-adapting the model can enhance the effectiveness of the second step. To accomplish this, we introduce a straightforward objective that encourages prediction smoothness from the network. In the final step, we leverage the widely-used FixMatch [34] algorithm as our chosen semi-supervised learning method. Through extensive experimentation, we demonstrate the clear advantages of our approach over existing methods in the SFUDA domain and show that the proposed method can be easily integrated into existing approaches to bring further improvement.
II Related Work
II-A UDA
Unsupervised domain adaptation aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain. Various approaches have been proposed to address this task, including discrepancy minimization [42, 40, 41], adversarial learning [36, 22, 37, 38], and contrastive learning [39, 28]. Recently, self-training using labeled source data and pseudo-labeled target data has emerged as a prominent approach in unsupervised domain adaptation (UDA) research [43, 44, 45, 46, 47]. However, these methods typically rely on access to the source data, making them inapplicable when source data is unavailable.
II-B SFUDA
Source-free unsupervised domain adaptation involves adapting a pre-trained model from a source domain to a target domain without access to the source data and labels or to target labels. Existing SFUDA methods can be broadly categorized into two classes. i) Label Refinement: methods such as SHOT [2], G-SFDA [5], NRC [6], and GPL [13] focus on refining pseudo-labels. SHOT generates pseudo-labels using centroids obtained in an unsupervised manner; G-SFDA, NRC, and GPL refine pseudo-labels through consistent predictions and nearest-neighbor knowledge aggregation from local neighboring samples. ii) Contrastive Feature Learning: approaches such as HCL [9], C-SFDA [12], AdaContrast [10], GPL [13], and DaC [11] leverage contrastive objectives. HCL and C-SFDA use a contrastive loss similar to MoCo [35], where positive pairs consist of augmented query samples and negatives are other samples. AdaContrast and GPL exclude same-class negative pairs based on pseudo-labels. DaC divides the target data into source-like and target-specific samples, computes source-like class centroids, and generates negative pairs using these centroids. These methods tackle SFUDA by refining pseudo-labels or leveraging contrastive feature learning, demonstrating the potential of different strategies for adapting models without access to labeled source data or target labels.
III Method

In the source-free unsupervised domain adaptation (SFUDA) setting, only a pre-trained source model and unlabeled data in the target domain are given. The task is to adapt the model to the target domain using the unlabeled target data only. Our approach sequentially applies three steps, described in Sec. III-A, Sec. III-B, and Sec. III-C.

III-A Model Pre-adaptation via Encouraging Smooth Prediction
The first step of our approach is to make an initial adaptation that reduces the domain gap. We empirically find such a step to be beneficial for the following steps. We develop a pre-adaptation strategy that encourages a smooth prediction on the data manifold. (Other pre-adaptation approaches may also work, such as the method in [2]; please refer to Sec. IV-F for more experimental evidence.) Specifically, we create a memory bank $\mathcal{M}$ to store randomly sampled image embeddings and update it after each training batch. Then, for each target sample $x_i$, we find its $K$ nearest neighbors $\mathcal{N}_i$ and the $K$ samples $\mathcal{F}_i$ that are furthest from $x_i$, based on the Euclidean distance between the image embedding of $x_i$ and the embeddings in $\mathcal{M}$ ($K=3$ in our implementation). We then optimize the following objective:
$$\mathcal{L}_{\text{pre}}=\frac{1}{B}\sum_{i=1}^{B}\Bigg[\sum_{j\in\mathcal{N}_i}\mathrm{KL}\big(p_i\,\|\,p_j\big)+\sum_{j\in\mathcal{F}_i}p_i^{\top}p_j\Bigg] \qquad (1)$$
where KL($\cdot\|\cdot$) represents the Kullback–Leibler divergence, $p_i$ denotes the posterior probability predicted by the model (initialized from the source model) for sample $x_i$, and $B$ is the number of samples within a mini-batch. The first term is used to ensure that similar samples have similar predictions. However, using the first term alone may lead to a trivial solution that assigns an identical prediction to every instance. Thus we use the second term to counteract it: it ensures that the least similar samples have divergent posterior probabilities, i.e., the inner product between their posteriors should be close to zero.
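To make this step concrete, a minimal PyTorch sketch of the objective in Eq. (1) is given below; it assumes the memory bank stores both embeddings and the corresponding posteriors, and the variable names are illustrative rather than taken from our released code:

```python
import torch

def pre_adaptation_loss(probs, embeddings, memory_embeds, memory_probs, k=3):
    """probs: (B, C) batch posteriors, embeddings: (B, D) batch image embeddings,
    memory_embeds: (M, D) stored embeddings, memory_probs: (M, C) stored posteriors."""
    # Euclidean distances between batch embeddings and memory embeddings: (B, M)
    dists = torch.cdist(embeddings, memory_embeds)

    # Indices of the K nearest and K furthest memory samples for each instance
    near_idx = dists.topk(k, dim=1, largest=False).indices   # (B, K)
    far_idx = dists.topk(k, dim=1, largest=True).indices     # (B, K)

    near_probs = memory_probs[near_idx]                      # (B, K, C)
    far_probs = memory_probs[far_idx]                        # (B, K, C)

    # First term: KL divergence between each sample's posterior and those of its
    # nearest neighbours, encouraging similar samples to have similar predictions.
    p = probs.unsqueeze(1).clamp_min(1e-8)                   # (B, 1, C)
    kl = (p * (p.log() - near_probs.clamp_min(1e-8).log())).sum(dim=-1)
    loss_near = kl.mean()

    # Second term: inner product between the posteriors of each sample and its
    # furthest neighbours, pushed towards zero to avoid the trivial solution of
    # identical predictions for every instance.
    loss_far = (probs.unsqueeze(1) * far_probs).sum(dim=-1).mean()

    return loss_near + loss_far
```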
III-B Hypothesis Consolidation from Prediction Rationale
After pre-adaptation, the model generally exhibits improved adaptation to the target domain. However, there may still be instances where the model produces incorrect predictions, making it challenging to rectify misclassifications solely based on predicted posterior probabilities. Therefore, in the second step, we explore a more robust methodology for analyzing predictions.
We begin by considering multiple prediction hypotheses for each individual instance. Specifically, for each instance $x_i$, we consider the $m$ classes with the highest posterior probabilities as potential prediction hypotheses, denoted as $h_i^1,\dots,h_i^m$. In other words, we acknowledge that the correct class label could exist within one of these top-$m$ classes, even though we do not know which one.
To further analyze each hypothesis $h_i^k$, we calculate GradCAM [1] to identify the regions that support predicting $x_i$ as the hypothetical class $h_i^k$, resulting in a representation called the rationale representation $r_i^k$. This rationale representation encodes the evidence supporting the corresponding hypothesis. Drawing inspiration from prior work [48, 49], we formally calculate $r_i^k$ using the following equation:
$$r_i^k=\frac{\sum_{u}M_u^{k}\,A_u}{\sum_{u}M_u^{k}},\qquad M_u^{k}=\mathrm{ReLU}\Big(\sum_{d=1}^{D}\alpha_d^{k}A_{u,d}\Big),\qquad \alpha_d^{k}=\frac{1}{HW}\sum_{u}\frac{\partial s_{h_i^k}}{\partial A_{u,d}} \qquad (2)$$
where $A\in\mathbb{R}^{H\times W\times D}$ is the feature map of the last convolutional layer of the network with height $H$, width $W$, and $D$ channels, $A_u\in\mathbb{R}^{D}$ is the feature vector located at the $u$-th grid, and $s_{h_i^k}$ is the logit for class $h_i^k$. $M_u^{k}$ is equivalent to the GradCAM value at the $u$-th grid. Essentially, the calculation of $r_i^k$ performs weighted average pooling over $A$ according to the GradCAM. Figure 1 shows the GradCAM calculated from different hypotheses for the same image. Upon observation, we notice that even if the ground-truth class is not ranked as the top prediction by the model, its associated rationale remains reasonable and similar to the common rationale pattern for that class. This inspires us to leverage this observation to analyze the model’s current predictions. For example, if an instance has a prediction hypothesis whose rationale is similar to the corresponding class’s common rationale but is not ranked as the top prediction, then the top prediction may not be correct.
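As an illustration, one possible PyTorch implementation of the rationale representation is sketched below, assuming the last-layer feature map has been retained in the autograd graph; the function name and arguments are our own:

```python
import torch
import torch.nn.functional as F

def rationale_representation(feature_map, logits, class_idx):
    """feature_map: (D, H, W) last-conv features of one image (part of the autograd
    graph that produced `logits`), logits: (C,) class logits, class_idx: hypothetical label."""
    d = feature_map.shape[0]
    # Gradient of the class logit w.r.t. the feature map: (D, H, W)
    grads = torch.autograd.grad(logits[class_idx], feature_map, retain_graph=True)[0]

    # GradCAM channel weights: spatial average of the gradients, (D,)
    alpha = grads.mean(dim=(1, 2))

    # GradCAM value at each grid location: ReLU of the weighted channel sum, (H, W)
    cam = F.relu((alpha.view(d, 1, 1) * feature_map).sum(dim=0))

    # Rationale representation: weighted average pooling of the feature map by the CAM
    weights = cam / cam.sum().clamp_min(1e-8)
    return (feature_map * weights.unsqueeze(0)).sum(dim=(1, 2))   # (D,)
```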
Formally, we calculate the class-wise rationale centroid as the average rationale representation from each hypothetical class, representing the common rationale for each class:
$$\bar{r}^{\,c}=\frac{\sum_{i}\sum_{k=1}^{m}\mathbb{1}\big[h_i^k=c\big]\,r_i^k}{\sum_{i}\sum_{k=1}^{m}\mathbb{1}\big[h_i^k=c\big]} \qquad (3)$$
where $c$ represents a class and $\mathbb{1}[h_i^k=c]=1$ if the $k$-th hypothesis of instance $x_i$ is class $c$ and $0$ otherwise. The idea of using multiple hypotheses with the rationale representation is illustrated in Figure 2.
Next, we generate a ranking index for each prediction hypothesis by ranking the Euclidean distance between $r_i^k$ and its corresponding rationale centroid $\bar{r}^{\,h_i^k}$, i.e., the centroid of class $h_i^k$, in ascending order. For each instance $x_i$, we obtain $m$ ranking indices $o_i^1,\dots,o_i^m$, one for each hypothesis. A hypothesis $h_i^k$ is then considered reliable if it satisfies the following two conditions: (1) $o_i^k\le\tau_1$, indicating the rationale for $h_i^k$ is typical, as its rationale representation is close to the rationale centroid; (2) $o_i^{k'}>\tau_2$ for all $k'\ne k$, where $\tau_1$ and $\tau_2$ are two predefined ranking thresholds. The second condition ensures that there are no conflicting hypotheses, i.e., no other hypothesis is likely to be true for the same instance, as their rationales appear unusual.
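For concreteness, the consolidation step can be sketched as follows; this is a simplified illustration under our notational assumptions, and it expresses the two thresholds as absolute ranks rather than the percentages used in Sec. IV-B:

```python
import torch

def select_reliable_hypotheses(rationales, hyp_labels, num_classes, tau1, tau2):
    """rationales: (N, m, D) rationale representations for the m hypotheses of each
    instance, hyp_labels: (N, m) hypothetical class labels, tau1 < tau2: ranking
    thresholds given here as absolute ranks."""
    n, m, d = rationales.shape
    flat_r = rationales.reshape(-1, d)
    flat_y = hyp_labels.reshape(-1)

    # Class-wise rationale centroids (Eq. 3)
    centroids = torch.zeros(num_classes, d)
    for c in range(num_classes):
        mask = flat_y == c
        if mask.any():
            centroids[c] = flat_r[mask].mean(dim=0)

    # Euclidean distance of every hypothesis to the centroid of its hypothetical class
    dist = (flat_r - centroids[flat_y]).norm(dim=1)

    # Ranking index within each class (0 = closest to the centroid)
    rank = torch.zeros_like(flat_y)
    for c in range(num_classes):
        idx = (flat_y == c).nonzero(as_tuple=True)[0]
        rank[idx[dist[idx].argsort()]] = torch.arange(len(idx))
    rank = rank.reshape(n, m)

    # Condition (1): the rationale of the hypothesis is typical (rank below tau1).
    cond1 = rank <= tau1
    # Condition (2): every *other* hypothesis of the same instance ranks above tau2.
    above = rank > tau2
    cond2 = torch.stack(
        [above[:, [j for j in range(m) if j != k]].all(dim=1) for k in range(m)], dim=1)

    return cond1 & cond2    # (N, m) boolean mask of reliable hypotheses
```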

With these criteria, we can collect a set of reliable hypotheses as pseudo-labeled samples with their corresponding hypothetical labels. Representative examples of this procedure are depicted in Figure 3. It is important to note that in the second step, we aim to select the most reliable hypotheses rather than correct them. This is because we believe the task of correcting predictions or hypotheses is better accomplished through semi-supervised learning, which allows for the gradual propagation of pseudo-labels.
By focusing on identifying the most reliable hypothesis based on the proximity of the rationale representation to the rationale centroid and the absence of conflicting rationales, we can create a high-quality set of pseudo-labeled samples (see Section IV-E). These pseudo-labels can then be used in a semi-supervised learning framework to refine the model’s predictions and gradually improve its performance.
III-C Semi-Supervised Learning
After completing the second step of hypothesis consolidation, we obtain a reliable pseudo-label set $\mathcal{D}_l$, while the remaining samples are treated as the unlabeled set $\mathcal{D}_u$. At this stage, we are ready to apply a semi-supervised algorithm to perform the final step of adaptation. For this purpose, we utilize one of the state-of-the-art semi-supervised methods, FixMatch [34], which combines consistency regularization and pseudo-labeling to address this task.
Specifically, we start by sampling a labeled mini-batch $\mathcal{B}_l$ from the reliable pseudo-label set $\mathcal{D}_l$ and an unlabeled batch $\mathcal{B}_u$ from the unlabeled set $\mathcal{D}_u$. We then optimize the following objective function using these batches:
$$\mathcal{L}_{\text{ssl}}=\frac{1}{|\mathcal{B}_l|}\sum_{(x,\hat{y})\in\mathcal{B}_l}H\big(\hat{y},\,p(y\mid\alpha(x))\big)+\frac{1}{|\mathcal{B}_u|}\sum_{x\in\mathcal{B}_u}\mathbb{1}\big[\max_{c}q_x(c)\ge\tau\big]\,H\big(\hat{q}_x,\,p(y\mid\mathcal{A}(x))\big) \qquad (4)$$
where $q_x=p(y\mid\alpha(x))$ denotes the prediction on the weakly-augmented view and $\hat{q}_x=\arg\max_{c}q_x(c)$ is the corresponding one-hot pseudo-label. $\alpha(\cdot)$ and $\mathcal{A}(\cdot)$ are the weakly-augmented and strongly-augmented operations, respectively, $\tau$ is the threshold defined in FixMatch to identify reliable pseudo-labels (we set it to 0.95, the same as FixMatch), and $H(\cdot,\cdot)$ is the cross-entropy between two probability distributions.
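A condensed PyTorch sketch of this objective is given below; `model`, `weak_aug`, and `strong_aug` are placeholders for the adapted network and the two augmentation operations, and the threshold follows FixMatch:

```python
import torch
import torch.nn.functional as F

def fixmatch_loss(model, labeled_x, labeled_y, unlabeled_x, weak_aug, strong_aug, tau=0.95):
    # Supervised term on the reliable pseudo-labeled batch (weak augmentation)
    sup_logits = model(weak_aug(labeled_x))
    loss_sup = F.cross_entropy(sup_logits, labeled_y)

    # Pseudo-labels from weakly augmented unlabeled samples (no gradient)
    with torch.no_grad():
        probs = F.softmax(model(weak_aug(unlabeled_x)), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = (conf >= tau).float()          # keep only confident predictions

    # Consistency term: strongly augmented views must match the pseudo-labels
    unsup_logits = model(strong_aug(unlabeled_x))
    loss_unsup = (F.cross_entropy(unsup_logits, pseudo_y, reduction="none") * mask).mean()

    return loss_sup + loss_unsup
```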
We present the overall training process of our proposed SFUDA method in Algorithm 1.
IV Experiments
IV-A Datasets
Office-Home [31] consists of 15,500 images categorized into 65 classes. It includes four distinct domains: Real-world (Rw), Clipart (Cl), Art (Ar), and Product (Pr). To evaluate the proposed method, we perform 12 transfer tasks on this dataset, adapting models across the four domains, and report the Top-1 accuracy for each domain shift as well as the average Top-1 accuracy. The original DomainNet dataset [29] contains over 500,000 images spanning six domains and 345 classes. For our evaluation, we follow the protocol described in [21] and focus on four domains: Real World (Rw), Sketch (Sk), Clipart (Cl), and Painting (Pt), assessing our method on seven domain shifts within these four domains. VisDA-C [30] contains 152,000 synthetic images in the source domain and 55,000 real object images in the target domain. It consists of 12 object classes, with a significant synthetic-to-real domain gap between the two domains. We report per-class Top-1 accuracies, as well as the average Top-1 accuracy, on this dataset.
IV-B Implementation Details
To ensure fair comparisons with previous work [2, 10, 12], we employ ResNet-50 [3] as the network backbone for the Office-Home and DomainNet datasets, and ResNet-101 for the VisDA-C dataset. The network architecture follows the same configuration as SHOT [2]. Specifically, we replace the original fully connected (FC) layer in ResNet-50/101 with a bottleneck layer of 256 dimensions and apply batch normalization [32]. This modified setup serves as the feature extractor + projector head, producing feature representations and embeddings of dimensions 2048 and 256, respectively. Additionally, we include an extra fully connected layer with weight normalization [33] as a task-specific classifier.
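A minimal sketch of this architecture is given below; the class and argument names are our own, and the source-pretrained weights are assumed to be loaded into the model before adaptation:

```python
import torch.nn as nn
from torch.nn.utils import weight_norm
from torchvision import models

class SFUDANet(nn.Module):
    """ResNet backbone + 256-d bottleneck with BatchNorm + weight-normalized classifier."""
    def __init__(self, num_classes, bottleneck_dim=256, arch="resnet50"):
        super().__init__()
        backbone = getattr(models, arch)(weights=None)  # source-pretrained weights loaded separately
        feat_dim = backbone.fc.in_features              # 2048 for ResNet-50/101
        backbone.fc = nn.Identity()                     # drop the original FC layer
        self.backbone = backbone
        # Bottleneck projector: 2048-d features -> 256-d embedding with batch normalization
        self.bottleneck = nn.Sequential(nn.Linear(feat_dim, bottleneck_dim),
                                        nn.BatchNorm1d(bottleneck_dim))
        # Task-specific classifier with weight normalization
        self.classifier = weight_norm(nn.Linear(bottleneck_dim, num_classes), name="weight")

    def forward(self, x):
        features = self.backbone(x)             # (B, 2048) feature representation
        embedding = self.bottleneck(features)   # (B, 256) embedding
        logits = self.classifier(embedding)     # (B, num_classes) classifier output
        return features, embedding, logits
```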
In the first step of model pre-adaptation, we use a batch size of 64; one coefficient of the objective in Eq. (1) is annealed according to a training progress variable ranging from 0 to 1, calculated as the ratio of the current iteration to the total number of training iterations. The number of nearest/furthest neighbors per instance in the first step is set to $K=3$. In the second step of hypothesis consolidation, we set the number of prediction hypotheses per instance to $m=4$. The ranking thresholds $\tau_1$ and $\tau_2$ are determined as a percentage of the total number of samples on the three datasets, specifically set at 0.8% and 1.6%. In the third step of semi-supervised learning, we set the sizes of $\mathcal{B}_l$ and $\mathcal{B}_u$ to 64.
We use the SGD optimizer with a momentum of 0.9 and weight decay for all datasets. The same base learning rate is used for all datasets, except for the bottleneck layer and the additional fully connected layer, which use their own learning rate. We train for 40 epochs on the Office-Home and DomainNet datasets, of which 9 epochs are dedicated to model pre-adaptation. For the VisDA-C dataset, we train for 15 epochs, with 7 epochs allocated to model pre-adaptation. All images from the datasets undergo both weak and strong augmentation. Weak augmentation involves a standard flip-and-shift augmentation strategy, while strong augmentation is similar to the approach used in [34].
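One possible realization of the two augmentation branches with torchvision is sketched below; the resize/crop sizes and RandAugment parameters are illustrative assumptions rather than our exact configuration, and normalization is omitted:

```python
from torchvision import transforms

# Weak branch: standard flip-and-shift; strong branch: RandAugment-style policy as in FixMatch [34].
weak_aug = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(224),
    transforms.ToTensor(),
])

strong_aug = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(224),
    transforms.RandAugment(num_ops=2, magnitude=10),   # assumed strong policy
    transforms.ToTensor(),
])
```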
# | PA | HCPR | FM | O-H | DN | VisDA-C
---|---|---|---|---|---|---
0 |  |  |  | 60.2 | 55.6 | 46.6
1 |  |  | ✓ | 64.2 | 60.6 | 62.3
2 |  | ✓ | ✓ | 68.6 | 70.6 | 85.2
3 | ✓ |  |  | 72.1 | 67.4 | 86.2
4 | ✓ | ✓ |  | 72.7 | 69.6 | 87.5
5 | ✓ |  | ✓ | 72.2 | 67.5 | 86.2
6 | ✓ | ✓ | ✓ | 73.6 | 72.5 | 88.6
Method | SF | Ar→Cl | Ar→Pr | Ar→Rw | Cl→Ar | Cl→Pr | Cl→Rw | Pr→Ar | Pr→Cl | Pr→Rw | Rw→Ar | Rw→Cl | Rw→Pr | Avg.
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
ResNet-50 [3] | – | 34.9 | 50.0 | 58.0 | 37.4 | 41.9 | 46.2 | 38.5 | 31.2 | 60.4 | 53.9 | 41.2 | 59.9 | 46.1
GSDA [17] | ✗ | 61.3 | 76.1 | 79.4 | 65.4 | 73.3 | 74.3 | 65.0 | 53.0 | 80.0 | 72.2 | 60.6 | 83.1 | 70.3
RSDA [18] | ✗ | 53.3 | 77.7 | 81.3 | 66.4 | 74.0 | 76.5 | 67.9 | 53.0 | 82.0 | 75.8 | 57.8 | 85.4 | 70.9
SRDC [19] | ✗ | 52.3 | 76.3 | 81.0 | 69.5 | 76.2 | 78.0 | 68.7 | 53.8 | 81.7 | 76.3 | 57.1 | 85.0 | 71.3
FixBi [20] | ✗ | 58.1 | 77.3 | 80.4 | 67.7 | 79.5 | 78.1 | 65.8 | 57.9 | 81.7 | 76.4 | 62.9 | 86.7 | 72.7
G-SFDA [5] | ✓ | 57.9 | 78.6 | 81.0 | 66.7 | 77.2 | 77.2 | 65.6 | 56.0 | 82.2 | 72.0 | 57.8 | 83.4 | 71.3
SHOT [2] | ✓ | 56.9 | 78.1 | 81.0 | 67.9 | 78.4 | 78.1 | 67.0 | 54.6 | 81.8 | 73.4 | 58.1 | 84.5 | 71.6
SHOT++ [54] | ✓ | 57.9 | 79.7 | 82.5 | 68.5 | 79.6 | 79.3 | 68.5 | 57.0 | 83.0 | 73.7 | 60.7 | 84.9 | 73.0
NRC [6] | ✓ | 57.7 | 80.3 | 82.0 | 68.1 | 79.8 | 78.6 | 65.3 | 56.4 | 83.0 | 71.0 | 58.6 | 85.6 | 72.2
CoWA [8] | ✓ | 56.9 | 78.4 | 81.0 | 69.1 | 80.0 | 79.9 | 67.7 | 57.2 | 82.4 | 72.8 | 60.5 | 84.5 | 72.5
HCL [9] | ✓ | 64.0 | 78.6 | 82.4 | 64.5 | 73.1 | 80.1 | 64.8 | 59.8 | 75.3 | 78.1 | 69.3 | 81.5 | 72.6
AaD [51] | ✓ | 59.3 | 79.3 | 82.1 | 68.9 | 79.8 | 79.5 | 67.2 | 57.4 | 83.1 | 72.1 | 58.5 | 85.4 | 72.7
DaC [11] | ✓ | 59.1 | 79.5 | 81.2 | 69.3 | 78.9 | 79.2 | 67.4 | 56.4 | 82.4 | 74.0 | 61.4 | 84.4 | 72.8
VMP [52] | ✓ | 57.9 | 77.6 | 82.5 | 68.6 | 79.4 | 80.6 | 68.4 | 55.6 | 83.1 | 75.2 | 59.6 | 84.7 | 72.8
SFDA-DE [7] | ✓ | 59.7 | 79.5 | 82.4 | 69.7 | 78.6 | 79.2 | 66.1 | 57.2 | 82.6 | 73.9 | 60.8 | 85.2 | 72.9
C-SFDA [12] | ✓ | 60.3 | 80.2 | 82.9 | 69.3 | 80.1 | 78.8 | 67.3 | 58.1 | 83.4 | 73.6 | 61.3 | 86.3 | 73.5
Ours | ✓ | 59.9 | 79.6 | 82.7 | 70.3 | 81.8 | 80.4 | 68.5 | 57.8 | 83.5 | 72.5 | 59.8 | 86.0 | 73.6
IV-C Comparison with State-of-the-arts
Method | SF | Rw→Cl | Rw→Pt | Pt→Cl | Cl→Sk | Sk→Pt | Rw→Sk | Pt→Rw | Avg.
---|---|---|---|---|---|---|---|---|---
ResNet-50 [3] | – | 58.8 | 62.2 | 57.7 | 50.3 | 52.6 | 47.3 | 73.2 | 57.4
MCC [23] | ✗ | 44.8 | 65.7 | 41.9 | 34.9 | 47.3 | 35.3 | 72.4 | 48.9
CDAN [22] | ✗ | 65.0 | 64.9 | 63.7 | 53.1 | 63.4 | 54.5 | 73.2 | 62.5
GVB [24] | ✗ | 68.2 | 69.0 | 63.2 | 56.6 | 63.1 | 62.2 | 78.3 | 65.2
MME [21] | ✗ | 70.0 | 67.7 | 69.0 | 56.3 | 64.8 | 61.0 | 76.0 | 66.4
TENT [15] | ✓ | 58.5 | 65.7 | 57.9 | 48.5 | 52.4 | 54.0 | 67.0 | 57.7
G-SFDA [5] | ✓ | 63.4 | 67.5 | 62.5 | 55.3 | 60.8 | 58.3 | 75.2 | 63.3
NRC [6] | ✓ | 67.5 | 68.0 | 67.8 | 57.6 | 59.3 | 58.7 | 74.3 | 64.7
SHOT [2] | ✓ | 67.7 | 68.4 | 66.9 | 60.1 | 66.1 | 59.9 | 80.8 | 67.1
AdaContrast [10] | ✓ | 70.6 | 69.8 | 69.3 | 58.5 | 66.2 | 60.2 | 80.2 | 67.8
AaD [51] | ✓ | 70.2 | 69.8 | 68.6 | 58.0 | 65.9 | 61.5 | 80.5 | 67.8
DaC [11]* | ✓ | 70.0 | 68.8 | 70.9 | 62.4 | 66.8 | 60.3 | 78.6 | 68.3
C-SFDA [12] | ✓ | 70.8 | 71.1 | 68.5 | 62.1 | 67.4 | 62.7 | 80.4 | 69.0
GPL [13] | ✓ | 74.2 | 70.4 | 68.8 | 64.0 | 67.5 | 65.7 | 76.5 | 69.6
Ours | ✓ | 76.9 | 71.8 | 75.4 | 65.5 | 69.9 | 64.6 | 83.2 | 72.5
* This work uses ResNet-34 as the backbone.
Method | SF | plane | bcycl | bus | car | horse | knife | mcycl | person | plant | sktbrd | train | truck | Avg.
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
ResNet-101 [3] | – | 55.1 | 53.3 | 61.9 | 59.1 | 80.6 | 17.9 | 79.7 | 31.2 | 81.0 | 26.5 | 73.5 | 8.5 | 52.4
MCC [23] | ✗ | 88.7 | 80.3 | 80.5 | 71.5 | 90.1 | 93.2 | 85.0 | 71.6 | 89.4 | 73.8 | 85.0 | 36.9 | 78.8
STAR [26] | ✗ | 95.0 | 84.0 | 84.6 | 73.0 | 91.6 | 91.8 | 85.9 | 78.4 | 94.4 | 84.7 | 87.0 | 42.2 | 82.7
RWOT [27] | ✗ | 95.1 | 87.4 | 85.2 | 58.6 | 96.2 | 95.7 | 90.6 | 80.0 | 94.8 | 90.8 | 88.4 | 47.9 | 84.3
CAN [28] | ✗ | 97.0 | 87.2 | 82.5 | 74.3 | 97.8 | 96.2 | 90.8 | 80.7 | 96.6 | 96.3 | 87.5 | 59.9 | 87.2
SHOT [2] | ✓ | 94.3 | 88.5 | 80.1 | 57.3 | 93.1 | 94.9 | 80.7 | 80.3 | 90.5 | 89.1 | 86.3 | 58.2 | 82.9
DIPE [4] | ✓ | 95.2 | 87.6 | 78.8 | 55.9 | 93.9 | 95.0 | 84.1 | 81.7 | 92.1 | 88.9 | 85.4 | 58.0 | 83.1
HCL [9] | ✓ | 93.3 | 85.4 | 80.7 | 68.5 | 91.0 | 88.1 | 86.0 | 78.6 | 86.6 | 88.8 | 80.0 | 74.7 | 83.5
A²Net [25] | ✓ | 94.0 | 87.8 | 85.6 | 66.8 | 93.7 | 95.1 | 85.8 | 81.2 | 91.6 | 88.2 | 86.5 | 56.0 | 84.3
G-SFDA [5] | ✓ | 96.1 | 88.3 | 85.5 | 74.1 | 97.1 | 95.4 | 89.5 | 79.4 | 95.4 | 92.9 | 89.1 | 42.6 | 85.4
NRC [6] | ✓ | 96.8 | 91.3 | 82.4 | 62.4 | 96.2 | 95.9 | 86.1 | 80.6 | 94.8 | 94.1 | 90.4 | 59.7 | 85.9
SFDA-DE [7] | ✓ | 95.3 | 91.2 | 77.5 | 72.1 | 95.7 | 97.8 | 85.5 | 86.1 | 95.5 | 93.0 | 86.3 | 61.6 | 86.5
AdaContrast [10] | ✓ | 97.0 | 84.7 | 84.0 | 77.3 | 96.7 | 93.8 | 91.9 | 84.8 | 94.3 | 93.1 | 94.1 | 49.7 | 86.8
CoWA [8] | ✓ | 96.2 | 89.7 | 83.9 | 73.8 | 96.4 | 97.4 | 89.3 | 86.8 | 94.6 | 92.1 | 88.7 | 53.8 | 86.9
DaC [11] | ✓ | 96.6 | 86.8 | 86.4 | 78.4 | 96.4 | 96.2 | 93.6 | 83.8 | 96.8 | 95.1 | 89.6 | 50.0 | 87.3
BDT [53] | ✓ | - | - | - | - | - | - | - | - | - | - | - | - | 87.8
C-SFDA [12] | ✓ | 97.6 | 88.8 | 86.1 | 72.2 | 97.2 | 94.4 | 92.1 | 84.7 | 93.0 | 90.7 | 93.1 | 63.5 | 87.8
Ours | ✓ | 98.0 | 88.0 | 86.4 | 82.3 | 97.8 | 96.2 | 92.1 | 85.0 | 95.5 | 91.7 | 93.8 | 56.2 | 88.6
IV-C1 Quantitative Results
We compare our proposed method against popular source-present and source-free methods on three benchmark datasets: Office-Home, DomainNet, and VisDA-C. We report the Top-1 accuracy, and the results are presented in Table II to Table V. On the Office-Home dataset, as shown in Table II, our proposed method achieves the best Top-1 average accuracy, which is comparable to the most recent source-free method C-SFDA. Additionally, our method achieves the highest accuracy in three transfer tasks (see Table II), compared with only one for C-SFDA. For the DomainNet dataset, as demonstrated in Table IV, our proposed method exhibits significant improvements over all baselines. With an average Top-1 accuracy of 72.5%, our method outperforms the best source-free baseline by nearly 3% and surpasses the best source-present baseline by 6.1%. Moreover, our method achieves the best performance on almost all domain shifts. On the VisDA-C dataset, presented in Table V, our proposed method outperforms the state-of-the-art method C-SFDA [12] by 0.8%. Furthermore, our method achieves the best performance on specific classes such as “plane”, “bus”, “car”, and “horse”. These results clearly demonstrate the superiority of our proposed method across the evaluated datasets, showcasing its effectiveness in source-free domain adaptation scenarios.
IV-C2 Effectiveness Analysis
We analyzed and compared the memory usage and running time of our method with recent works, including AdaContrast [10], C-SFDA [12], and GPL [13]. Interestingly, our method requires only standard memory, whereas the other methods consume more than 32GB of memory. Despite using standard memory, our approach achieves higher accuracy. Additionally, the running time of our method is considerably shorter than that of GPL.
IV-D Ablation Studies
$m$ | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---
Accuracy | 73.7 | 74.2 | 75.4 | 75.3 | 74.8
IV-D1 Component-wise Analysis
In this section, we conduct ablation studies to analyze the contribution of each component of our method on three benchmark datasets: Office-Home, DomainNet, and VisDA-C. The results are summarized in Table I, in which the HCPR (Hypothesis Consolidation from Prediction Rationale) component contributes the most to the accuracy gains. Specifically, compared to only using FixMatch, combining FixMatch and HCPR significantly improves accuracy by 4.4%, 10.0%, and 22.9% on the respective datasets. Additionally, when combining PA (Pre-Adaptation) and HCPR, we execute PA again following HCPR to integrate the consolidation outcomes from HCPR. This yields a substantial enhancement in accuracy, with improvements of 0.6%, 2.2%, and 1.3% on the respective datasets compared to solely employing PA. Finally, removing HCPR from the full method leads to a performance drop of 1.4%, 5%, and 2% on Office-Home, DomainNet, and VisDA-C, respectively.

IV-D2 Impact of Model Pre-adaptation
To assess the impact of model pre-adaptation on per-class accuracy (e.g., “truck”) and average accuracy, we perform experiments using four different settings on the VisDA-C dataset: model pre-adaptation with the second term of Eq. 1 removed, referred to as “step 1 w/o FAR”; the proposed method without model pre-adaptation, referred to as “w/o step 1”; using SHOT’s loss for model pre-adaptation in place of Eq. 1, referred to as “SHOT as step 1”; and the proposed method with model pre-adaptation using Eq. 1, referred to as “Ours”. The experimental results are shown in Figure 4, from which we make the following observations. First, compared to “SHOT as step 1”, the proposed method encouraging smooth predictions achieves a better average accuracy (88.60% vs. 86.80%), which demonstrates the superiority of encouraging smooth predictions on the data manifold over the one-hot predictions in [2]. Second, when step 1 is removed (“w/o step 1”), the average accuracy drops by 3.4%. This indicates that Eq. 1 is helpful for model pre-adaptation and improves the model’s ability to distinguish image classes in the target domain. Third, when the second term of Eq. 1 is removed in step 1 (“step 1 w/o FAR”), the average performance drops dramatically from 88.60% to 77.46% and the classification accuracy for the class “truck” drops to 0%. This demonstrates that the second term plays a vital role in keeping the classes balanced and avoiding missed classes.
IV-D3 Impact of $m$—the Number of Prediction Hypotheses Per Instance
In our method, we choose the labels with the top-$m$ highest posterior probabilities as the prediction hypotheses. In this section, we investigate the impact of the value of $m$. Table VI shows the accuracy achieved with different values of $m$. From the results, we can see that using 2 hypotheses already leads to good performance, and choosing 3-6 hypotheses leads to optimal performance.

Method | Ar→Cl | Ar→Pr | Ar→Rw | Cl→Ar | Cl→Pr | Cl→Rw | Pr→Ar | Pr→Cl | Pr→Rw | Rw→Ar | Rw→Cl | Rw→Pr | Avg.
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
SHOT [2] | 56.9 | 78.1 | 81.0 | 67.9 | 78.4 | 78.1 | 67.0 | 54.6 | 81.8 | 73.4 | 58.1 | 84.5 | 71.6 |
SHOT+Ours | 58.7 | 79.5 | 82.1 | 69.6 | 80.7 | 80.0 | 69.1 | 56.9 | 82.3 | 74.5 | 59.2 | 85.3 | 73.2 |
AaD [51] | 59.3 | 79.3 | 82.1 | 68.9 | 79.8 | 79.5 | 67.2 | 57.4 | 83.1 | 72.1 | 58.5 | 85.4 | 72.7 |
AaD+Ours | 59.8 | 79.4 | 82.7 | 70.0 | 81.6 | 80.0 | 68.5 | 57.6 | 83.2 | 72.7 | 59.4 | 86.1 | 73.4 |
Method | plane | bcycl | bus | car | horse | knife | mcycl | person | plant | sktbrd | train | truck | Avg.
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
SHOT [2] | 94.3 | 88.5 | 80.1 | 57.3 | 93.1 | 94.9 | 80.7 | 80.3 | 90.5 | 89.1 | 86.3 | 58.2 | 82.9 |
SHOT+Ours | 97.5 | 84.6 | 83.0 | 74.2 | 96.5 | 93.7 | 92.8 | 86.7 | 93.5 | 92.6 | 89.7 | 56.9 | 86.8 |
AaD [51] | 97.4 | 90.5 | 80.8 | 76.2 | 97.3 | 96.1 | 89.8 | 82.9 | 95.5 | 93.0 | 92.0 | 64.0 | 88.0 |
AaD+Ours | 97.8 | 87.6 | 86.7 | 83.4 | 97.7 | 95.4 | 94.2 | 83.8 | 94.6 | 91.2 | 92.8 | 55.6 | 88.4 |

$K$ | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---
Accuracy | 73.2 | 74.7 | 75.4 | 74.7 | 74.3
IV-D4 Impact of $K$—the Number of Nearest and Furthest Neighbors
In the initial step of our model pre-adaptation, we select the $K$-nearest and $K$-furthest neighbors for each target sample. In this analysis, we examine the influence of the value of $K$. Figure 6 and Table VIII show the performance throughout the training process and the Top-1 classification accuracy on the DomainNet (Pt→Cl) task for different values of $K$. The results indicate that even with just one nearest and one furthest neighbor, we achieve favorable classification accuracy, and selecting 2-5 nearest and furthest neighbors yields optimal performance. Moreover, as observed in Figure 6, even after step 1 (epochs 0-9) achieves and maintains its best results, HCPR plays a pivotal role in further enhancing the performance of the model.
IV-D5 Impact of the Two Ranking Thresholds $\tau_1$ and $\tau_2$
To assess the influence of the ranking thresholds in our method, we examined the values of $\tau_1$ and $\tau_2$ expressed as percentages of the total number of samples. Specifically, we analyzed their impact on the Top-1 average accuracy on the VisDA-C dataset. Our analysis, depicted in Figure 5, reveals that the proposed method is robust to the specific values of $\tau_1$ and $\tau_2$.
Method | Office-Home | DomainNet |
---|---|---|
near-centroid selection | 72.6 | 69.6 |
Ours | 73.6 | 72.5 |

IV-D6 The Benefit of Using Rationale Representations
To further understand the benefit of using the rationale representation from multiple hypotheses, we explore an alternative method that replaces the proposed second step by using feature centroids rather than rationale centroids. Since the feature is invariant to the prediction hypothesis, only the top predicted class is considered. More specifically, we first generate a pseudo-label for each instance and calculate the feature centroid of each class, similar to our approach. Then we rank instances based on the Euclidean distances between their features and the corresponding class centroid. The samples whose features are closest to their class centroid are assigned reliable pseudo-labels, while the remaining samples are left for step 3. We refer to this method as “near-centroid selection”. Table IX presents the comparison results on the Office-Home and DomainNet datasets. As seen, while such an approach still leads to an improvement over using step 1 and step 3 alone (by cross-referencing Table I), it is still inferior to using HCPR. This clearly demonstrates the benefits of the latter.
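For reference, a minimal sketch of this baseline is given below; the `keep_ratio` argument is a hypothetical stand-in for the selection budget, which the main text does not pin down:

```python
import torch

def near_centroid_selection(features, probs, keep_ratio=0.2):
    """features: (N, D) image features, probs: (N, C) predicted posteriors."""
    pseudo = probs.argmax(dim=1)                       # top-prediction pseudo-labels, (N,)
    num_classes = probs.shape[1]

    # Feature centroid of each pseudo-labeled class
    centroids = torch.stack([
        features[pseudo == c].mean(dim=0) if (pseudo == c).any()
        else torch.zeros(features.shape[1])
        for c in range(num_classes)])

    # Distance of every sample to the centroid of its pseudo-class
    dist = (features - centroids[pseudo]).norm(dim=1)

    # Keep the samples closest to their centroid as the reliable pseudo-labeled set
    k = int(keep_ratio * len(features))
    selected = dist.argsort()[:k]
    return selected, pseudo[selected]
```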
IV-D7 Investigation of Recursively Applying HCPR
One may wonder whether recursively applying HCPR leads to additional improvement. To this end, we create a variant of our method that alternately applies step 2 and step 3, in the hope that they may mutually enhance each other. We conducted experiments on the Office-Home (Cl→Pr) task. The results are depicted in Figure 7, where the red curve represents our method using the second step only once, i.e., the hypothesis consolidation occurs between model pre-adaptation (epochs 0-9) and semi-supervised learning (epochs 10-40). The blue curve represents our method with the second step re-applied at the 15th, 20th, and 25th epochs. From the results, we observe that recursively applying HCPR does not lead to the improvement one might expect.

We also conduct experiments with HCPR applied recursively to only model pre-adaptation or FixMatch. Specifically, we conduct experiments on the Office-Home (Cl→Pr) task and configure the following scenarios:
•	Combining Step 1 and Step 2, with Step 2 calculated at the 9th, 15th, and 20th epochs (green curve in Figure 8).
•	Combining Step 3 and Step 2, with Step 2 calculated at the 7th, 15th, and 20th epochs (yellow curve in Figure 8).
•	Combining Step 2 and Step 3, with Step 2 calculated at the 0th, 15th, and 20th epochs (purple curve in Figure 8).
•	Our method, represented by the red curve.
Our observations indicate that utilizing Step 2 only once is sufficient, and the recursive HCPR application does not yield improvements. However, we do note that HCPR plays a crucial role in enhancing FixMatch, particularly in improving the quality of pseudo-labels.
IV-E Pseudo-label Quantity and Quality
DomainNet (Rw→Cl) | Quantity (%) | Quality (%)
---|---|---
source model only (conf. > 0.95) | 3.95 | 95.80
SHOT (conf. > 0.95) | 61.83 | 80.38
PA only (conf. > 0.95) | 79.13 | 80.76
HCPR only | 21.35 | 84.02 |
PA+HCPR | 24.65 | 90.76 |





In this section, we assess both the quality and quantity of pseudo-labels generated by each component of our method, comparing them with the source model alone and SHOT. Pseudo-label quantity is measured by the ratio of selected samples to the total samples, while pseudo-label quality is defined as the precision of the selected samples. The results are shown in Table X. As seen, using the original source model generates good pseudo-label quality within the selected group, but only a small number of samples satisfy the high confidence condition. On the other hand, SHOT and PA select a large number of samples but with a relatively poor quality of approximately 80%. In comparison, PA+HCPR achieves both good pseudo-label quality (90.76%), and a substantial quantity of pseudo-labels (24.65%). When comparing HCPR only and PA only, we observed that PA generates nearly four times as many pseudo-labels as HCPR but with lower quality. This suggests the presence of significant noise in the pseudo-labels generated by PA.
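For clarity, the two metrics can be computed as in the following small sketch, where `selected_mask` marks the samples selected by a given method (names are illustrative):

```python
import torch

def pseudo_label_stats(selected_mask, pseudo_labels, true_labels):
    # Quantity: fraction of all samples that are selected
    quantity = selected_mask.float().mean().item()
    # Quality: precision of the pseudo-labels within the selected set
    correct = (pseudo_labels[selected_mask] == true_labels[selected_mask]).float()
    quality = correct.mean().item() if selected_mask.any() else 0.0
    return quantity, quality
```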
The evolution of both the quantity and quality of pseudo-labels over training is shown in Figure 9. Our findings reveal that in the initial step with PA (epochs 0-9), there is a significant increase in the quantity of pseudo-labels, albeit accompanied by a gradual decrease in their quality. However, with the assistance of HCPR (applied after the 9th epoch and before the 10th epoch), the quality of the pseudo-labels increases significantly while a substantial quantity is maintained. In the subsequent third step involving FM (epochs 10-40), the quality of the pseudo-labels improves gradually and then stabilizes at a consistent level.
IV-F Incorporating the Proposed Method into Existing Approaches
The proposed method can be seamlessly integrated into existing approaches, such as SHOT [2] and AaD [51]. Specifically, we replace the pre-adaptation phase in our first step with SHOT and AaD, resulting in the combined approaches referred to as “SHOT+Ours” and “AaD+Ours”. The integration process can be summarized as follows: first, pseudo-labels are generated using SHOT’s unsupervised nearest class centroid approach or AaD’s feature clustering and cluster assignment approach. Then, to refine these pseudo-labels and address potential noise and inaccuracies, we apply the hypothesis consolidation of prediction rationale. The refined pseudo-label set is used as the labeled dataset, while the remaining samples are treated as unlabeled; consequently, the SFUDA problem is transformed into a semi-supervised learning problem. The experimental results, shown in Table VII, demonstrate the benefit of integrating the proposed method into the SHOT and AaD objectives. Across the Office-Home (average gains of 1.6% and 0.7%), VisDA-C (average gains of 3.9% and 0.4%), and DomainNet-126 (average gain of 2.8% for SHOT+Ours) datasets, the integrated approach consistently outperforms the SHOT and AaD baselines. This indicates that our method complements existing SFUDA baselines and consistently improves their performance when incorporated as a replacement for the model pre-adaptation phase.
IV-G Visualization
In the t-SNE visualization in Figure 10, we compare the adapted results with the state before adaptation by examining three approaches: the source model only, AaD [51], and our method. The source model alone exhibits clear shortcomings, producing false predictions within each class and struggling to establish clear inter-class boundaries. While AaD generally achieves accurate predictions within each class, it falls short of generating clear inter-class boundaries. In contrast, our method achieves accurate predictions within each class and successfully generates distinct inter-class boundaries, showcasing its ability to enhance prediction accuracy and produce well-separated classes.
V Limitation and Future Work
The current approach relies on having access to the entire target training set to perform crucial steps like pre-adaptation and identifying the reliable pseudo-labeled set. However, in real-world applications, online adaptation is often more desirable as it doesn’t require holding a large number of target examples. As part of our future work, we aim to extend the key idea of this research to the online streaming setting. By doing so, we can develop a methodology that adapts in real-time to incoming data, allowing for more efficient and effective adaptation in dynamic environments. This extension will enhance the applicability and practicality of the proposed approach in various domains.
VI Conclusion
In conclusion, this paper introduces a novel approach for Source-Free Unsupervised Domain Adaptation (SFUDA), where a model needs to adapt to a new domain without access to target domain labels or source domain data. By considering multiple prediction hypotheses and analyzing their rationales, the proposed method identifies the most likely correct hypotheses, which are then used as pseudo-labeled data for a semi-supervised learning procedure. The three-step adaptation process, including model pre-adaptation, hypothesis consolidation, and semi-supervised learning, ensures optimal performance. Experimental results demonstrate that the proposed approach achieves state-of-the-art performance in the SFUDA task and can be seamlessly integrated into existing methods to enhance their performance.
Acknowledgments
We would like to thank the anonymous reviewers for their helpful comments. This work is supported by the Centre for Augmented Reasoning.
References
- [1] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-cam: Visual explanations from deep networks via gradient-based localization,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 618–626.
- [2] J. Liang, D. Hu, and J. Feng, “Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation,” in International Conference on Machine Learning. PMLR, 2020, pp. 6028–6039.
- [3] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
- [4] F. Wang, Z. Han, Y. Gong, and Y. Yin, “Exploring domain-invariant parameters for source free domain adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 7151–7160.
- [5] S. Yang, Y. Wang, J. Van De Weijer, L. Herranz, and S. Jui, “Generalized source-free domain adaptation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 8978–8987.
- [6] S. Yang, J. van de Weijer, L. Herranz, S. Jui et al., “Exploiting the intrinsic neighborhood structure for source-free domain adaptation,” Advances in neural information processing systems, vol. 34, pp. 29393–29405, 2021.
- [7] N. Ding, Y. Xu, Y. Tang, C. Xu, Y. Wang, and D. Tao, “Source-free domain adaptation via distribution estimation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 7212–7222.
- [8] J. Lee, D. Jung, J. Yim, and S. Yoon, “Confidence score for source-free unsupervised domain adaptation,” in International Conference on Machine Learning. PMLR, 2022, pp. 12365–12377.
- [9] J. Huang, D. Guan, A. Xiao, and S. Lu, “Model adaptation: Historical contrastive learning for unsupervised domain adaptation without source data,” Advances in Neural Information Processing Systems, vol. 34, pp. 3635–3649, 2021.
- [10] D. Chen, D. Wang, T. Darrell, and S. Ebrahimi, “Contrastive test-time adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 295–305.
- [11] Z. Zhang, W. Chen, H. Cheng, Z. Li, S. Li, L. Lin, and G. Li, “Divide and contrast: Source-free domain adaptation via adaptive contrastive learning,” Advances in Neural Information Processing Systems, vol. 35, pp. 5137–5149, 2022.
- [12] N. Karim, N. C. Mithun, A. Rajvanshi, H.-p. Chiu, S. Samarasekera, and N. Rahnavard, “C-sfda: A curriculum learning aided self-training framework for efficient source free domain adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 24120–24131.
- [13] M. Litrico, A. Del Bue, and P. Morerio, “Guiding pseudo-labels with uncertainty estimation for source-free unsupervised domain adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 7640–7650.
- [14] Z. Qiu, Y. Zhang, H. Lin, S. Niu, Y. Liu, Q. Du, and M. Tan, “Source-free domain adaptation via avatar prototype generation and adaptation,” arXiv preprint arXiv:2106.15326, 2021.
- [15] D. Wang, E. Shelhamer, S. Liu, B. Olshausen, and T. Darrell, “Tent: Fully test-time adaptation by entropy minimization,” arXiv preprint arXiv:2006.10726, 2020.
- [16] S. Yang, Y. Wang, J. van de Weijer, L. Herranz, and S. Jui, “Casting a bait for offline and online source-free domain adaptation,” arXiv preprint arXiv:2010.12427, 2020.
- [17] L. Hu, M. Kan, S. Shan, and X. Chen, “Unsupervised domain adaptation with hierarchical gradient synchronization,” in Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, 2020, pp. 4043–4052.
- [18] X. Gu, J. Sun, and Z. Xu, “Spherical space domain adaptation with robust pseudo-label loss,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 9101–9110.
- [19] H. Tang, K. Chen, and K. Jia, “Unsupervised domain adaptation via structurally regularized deep clustering,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 8725–8735.
- [20] J. Na, H. Jung, H. J. Chang, and W. Hwang, “Fixbi: Bridging domain spaces for unsupervised domain adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 1094–1103.
- [21] K. Saito, D. Kim, S. Sclaroff, T. Darrell, and K. Saenko, “Semi-supervised domain adaptation via minimax entropy,” in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 8050–8058.
- [22] M. Long, Z. Cao, J. Wang, and M. I. Jordan, “Conditional adversarial domain adaptation,” Advances in neural information processing systems, vol. 31, 2018.
- [23] Y. Jin, X. Wang, M. Long, and J. Wang, “Minimum class confusion for versatile domain adaptation,” in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXI 16. Springer, 2020, pp. 464–480.
- [24] S. Cui, S. Wang, J. Zhuo, C. Su, Q. Huang, and Q. Tian, “Gradually vanishing bridge for adversarial domain adaptation,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 12455–12464.
- [25] H. Xia, H. Zhao, and Z. Ding, “Adaptive adversarial network for source-free domain adaptation,” in Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 9010–9019.
- [26] Z. Lu, Y. Yang, X. Zhu, C. Liu, Y.-Z. Song, and T. Xiang, “Stochastic classifiers for unsupervised domain adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 9111–9120.
- [27] R. Xu, P. Liu, L. Wang, C. Chen, and J. Wang, “Reliable weighted optimal transport for unsupervised domain adaptation,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 4394–4403.
- [28] G. Kang, L. Jiang, Y. Yang, and A. G. Hauptmann, “Contrastive adaptation network for unsupervised domain adaptation,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 4893–4902.
- [29] X. Peng, Q. Bai, X. Xia, Z. Huang, K. Saenko, and B. Wang, “Moment matching for multi-source domain adaptation,” in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 1406–1415.
- [30] X. Peng, B. Usman, N. Kaushik, J. Hoffman, D. Wang, and K. Saenko, “Visda: The visual domain adaptation challenge,” arXiv preprint arXiv:1710.06924, 2017.
- [31] H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan, “Deep hashing network for unsupervised domain adaptation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 5018–5027.
- [32] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International conference on machine learning. PMLR, 2015, pp. 448–456.
- [33] T. Salimans and D. P. Kingma, “Weight normalization: A simple reparameterization to accelerate training of deep neural networks,” Advances in neural information processing systems, vol. 29, 2016.
- [34] K. Sohn, D. Berthelot, N. Carlini, Z. Zhang, H. Zhang, C. A. Raffel, E. D. Cubuk, A. Kurakin, and C.-L. Li, “Fixmatch: Simplifying semi-supervised learning with consistency and confidence,” Advances in neural information processing systems, vol. 33, pp. 596–608, 2020.
- [35] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum contrast for unsupervised visual representation learning,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 9729–9738.
- [36] J. Hoffman, E. Tzeng, T. Park, J.-Y. Zhu, P. Isola, K. Saenko, A. Efros, and T. Darrell, “Cycada: Cycle-consistent adversarial domain adaptation,” in International conference on machine learning. PMLR, 2018, pp. 1989–1998.
- [37] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell, “Adversarial discriminative domain adaptation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7167–7176.
- [38] T.-H. Vu, H. Jain, M. Bucher, M. Cord, and P. Pérez, “Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 2517–2526.
- [39] S. Dai, Y. Cheng, Y. Zhang, Z. Gan, J. Liu, and L. Carin, “Contrastively smoothed class alignment for unsupervised domain adaptation,” in Proceedings of the Asian Conference on Computer Vision, 2020.
- [40] Y. Ganin and V. Lempitsky, “Unsupervised domain adaptation by backpropagation,” in International conference on machine learning. PMLR, 2015, pp. 1180–1189.
- [41] M. Long, Y. Cao, J. Wang, and M. Jordan, “Learning transferable features with deep adaptation networks,” in International conference on machine learning. PMLR, 2015, pp. 97–105.
- [42] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell, “Deep domain confusion: Maximizing for domain invariance,” arXiv preprint arXiv:1412.3474, 2014.
- [43] H. Feng, M. Chen, J. Hu, D. Shen, H. Liu, and D. Cai, “Complementary pseudo labels for unsupervised domain adaptation on person re-identification,” IEEE Transactions on Image Processing, vol. 30, pp. 2898–2907, 2021.
- [44] K. Mei, C. Zhu, J. Zou, and S. Zhang, “Instance adaptive self-training for unsupervised domain adaptation,” in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVI 16. Springer, 2020, pp. 415–430.
- [45] Q. Xie, M.-T. Luong, E. Hovy, and Q. V. Le, “Self-training with noisy student improves imagenet classification,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 10687–10698.
- [46] F. Yu, M. Zhang, H. Dong, S. Hu, B. Dong, and L. Zhang, “Dast: Unsupervised domain adaptation in semantic segmentation based on discriminator attention and self-training,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 12, 2021, pp. 10754–10762.
- [47] Y. Zou, Z. Yu, B. Kumar, and J. Wang, “Unsupervised domain adaptation for semantic segmentation via class-balanced self-training,” in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 289–305.
- [48] Y. Shu, B. Yu, H. Xu, and L. Liu, “Improving fine-grained visual recognition in low data regimes via self-boosting attention mechanism,” in Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXV. Springer, 2022, pp. 449–465.
- [49] Y. Shu, A. van den Hengel, and L. Liu, “Learning common rationale to improve self-supervised representation for fine-grained visual recognition problems,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 11392–11401.
- [50] J. Liang, N. Homayounfar, W.-C. Ma, Y. Xiong, R. Hu, and R. Urtasun, “Polytransform: Deep polygon transformer for instance segmentation,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 9131–9140.
- [51] S. Yang, Y. Wang, K. Wang, S. Jui et al., “Attracting and dispersing: A simple approach for source-free domain adaptation,” in Advances in Neural Information Processing Systems, 2022.
- [52] M. Jing, X. Zhen, J. Li, and C. Snoek, “Variational model perturbation for source-free domain adaptation,” Advances in Neural Information Processing Systems, vol. 35, pp. 17173–17187, 2022.
- [53] J. N. Kundu, A. R. Kulkarni, S. Bhambri, D. Mehta, S. A. Kulkarni, V. Jampani, and V. B. Radhakrishnan, “Balancing discriminability and transferability for source-free domain adaptation,” in International Conference on Machine Learning. PMLR, 2022, pp. 11710–11728.
- [54] J. Liang, D. Hu, Y. Wang, R. He, and J. Feng, “Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 11, pp. 8602–8617, 2021.