Domain-Invariant Feature Alignment Using Variational Inference For Partial Domain Adaptation
Abstract
The standard closed-set domain adaptation approaches seek to mitigate distribution discrepancies between two domains under the constraint that both share an identical label set. However, in realistic scenarios, finding an optimal source domain with an identical label space is a challenging task. Partial domain adaptation alleviates this problem of procuring a labeled dataset with identical label-space assumptions and addresses a more practical scenario where the source label set subsumes the target label set. This, however, presents a few additional obstacles during adaptation: samples with categories private to the source domain thwart relevant knowledge transfer and degrade model performance. In this work, we address these issues by coupling variational information and adversarial learning with a pseudo-labeling technique to enforce class distribution alignment and minimize the transfer of superfluous information from the source samples. Experimental findings on numerous cross-domain classification tasks demonstrate that the proposed technique delivers accuracy that is superior or comparable to existing methods.
I Introduction
A broad spectrum of frameworks addressing complex machine learning problems has demonstrated notable performance improvements, attributable to deep neural networks [26, 27, 28, 29, 30]. For such models to generalize, large amounts of labeled data must be readily available for supervision. Procuring such heavily annotated data is challenging in real-world scenarios where data gathering and subsequent annotation incur significant expense. A domain adaptation strategy [24] can reduce this annotation requirement by transferring relevant information from a previously labeled, large-scale dataset belonging to a related domain.

The standard closed-set unsupervised domain adaptation frameworks [9, 24], which learn a classifier for the unlabeled target domain using a labeled source domain, have gained massive traction in the machine learning community. However, most existing works assume that the source and target domains have the same label set. Finding an optimal source domain with an identical label space is challenging in practical scenarios. A more feasible approach is to operate on a relatively small-scale target domain while accessing a large-scale source domain. Partial domain adaptation (pda) [3, 14, 4, 6] addresses such a scenario, where the target label set is contained in the source label set. The following paragraphs discuss the current challenges in a pda problem and our recommendations for mitigating them.
Prior works on pda [3, 14, 4, 5, 6] have attempted to find shared latent representations of the source and target samples with class-discriminative properties. Among these, domain adversarial training is widely utilized for extracting domain-invariant latent features from the source and target samples, owing to its performance and extensibility. The process is performed with a feature extractor, a domain discriminator, and a label classifier; the latter two process the feature extractor output to predict domain and class labels, respectively. Attaining domain invariance, however, is a necessary but not a sufficient condition; ensuring an improvement in target classification performance requires mitigating the conditional distribution mismatch across the two domains. Therefore, the latent space should be sufficiently “well-organized” and “regular”, so that samples with the same class label are clustered to their respective distribution, while samples with different class labels are assigned to distinct class distributions, regardless of their domains (see figure 1). Furthermore, it is vital to ensure that the information captured in the latent features remains faithful to the target data, so that sufficient supervision can be exercised when adapting to unlabeled target samples. In other words, two neighboring points in the latent space representing target data should not yield radically different class-specific contents.
We have incorporated domain adversarial training to address these critical issues while enforcing explicit regularization of encoded sample data through variational information. The domain invariant features in the latent space are modeled as a mixture of Gaussian distributions, each representing the latent feature distribution of a predicted class. In addition, the model approximates a posterior feature distribution, where the latent features of a sample follow a Gaussian distribution. The model aims to align these posterior embeddings with the reference latent features during training. Enforcing this regularization assists the adapted model in minimizing inter-class entanglement and promotes class-wise distribution alignment in the latent space while capturing class-semantic information.

Removing the constraint of identical label-set assumptions between the two domains introduces the risk of negative transfer (the propagation of unwanted information from samples belonging to classes private to the source domain) into the model [3, 14, 4, 5], which consequently degrades classification performance. As the model is not initially privy to the shared label set between the two domains in a pda setup, it is essential to incorporate a mechanism into our network that estimates the common categories between the two domains. Citing the necessity of eliminating negative transfer, we design a technique that quantifies the transferability of the source samples and regulates their class-wise contribution to the learning of the classifier, domain discriminator, and feature decoder. This class-weighting scheme is further refined by filtering out confident task-relevant target samples for effective cross-domain alignment.
II Related Work
Several studies [20, 21, 22] in recent years have thoroughly explored the efficacy of deep neural networks for reducing domain discrepancy and effectively transferring relevant knowledge between domains in transfer learning tasks. One line of work [23] proposes a strategy for aligning the distributions across domains and reducing domain discrepancy by applying high-order statistical features (primarily centered on the maximum mean discrepancy). The authors of [24, 25] use adversarial learning to develop a mini-max game that extracts domain-invariant features by utilizing samples from the common and private categories of the source dataset. These approaches are, unfortunately, inefficient in a partial domain adaptation environment and only effective in the limited, closed-set domain adaptation scenario.
By leveraging multiple adversarial networks to down-weight private source-category samples, the Selective Adversarial Network (SAN) [3] handles partial domain adaptation tasks and ensures efficient knowledge transfer. Expanding on this idea, the authors of [4] provide a framework for class-importance weight estimation by aggregating target sample prediction scores. Similar ideas are put forth by Zhang et al. [14] in their work on Importance Weighted Adversarial Nets (IWAN), which makes use of an auxiliary domain discriminator to gauge how closely related a source sample is to the target domain. A soft indicator for distinguishing the common categories from the private source classes is proposed by the Example Transfer Network (ETN) [5], which employs discriminative information to assess the transferability of source domain samples.
Despite outperforming closed-set domain adaptation strategies, these models may exhibit considerable limitations when identifying the private source categories, owing to poor classification performance during the early training stages. With this work, we attempt to address the limitations mentioned above.
III Proposed Approach
III-A Problem Definition
An unsupervised domain adaptation scenario assumes that samples representing the source and target domains are drawn from different probability distributions [1]. As witnessed in a standard uda environment, we are furnished with a source dataset $\mathcal{D}_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$ of $n_s$ labeled points, sampled from a distribution $p$, and an unlabeled target dataset $\mathcal{D}_t = \{x_j^t\}_{j=1}^{n_t}$ of $n_t$ samples, drawn from a distribution $q$ ($p \neq q$). Since target class-label information is unavailable during adaptation, the closed-set variant assumes that the samples in $\mathcal{D}_s$ and $\mathcal{D}_t$ are categorized into classes from known label-sets $\mathcal{C}_s$ and $\mathcal{C}_t$, respectively, where $\mathcal{C}_s = \mathcal{C}_t$. Partial domain adaptation generalizes this characterization and addresses a realistic scenario by alleviating the constraint of identical label-space assumptions between the two domains (i.e., $\mathcal{C}_t \subset \mathcal{C}_s$).
With the objective of designing a classifier hypothesis that minimizes the target classification risk under a pda setup, we aim to leverage source domain supervision to capture class-semantic information, while minimizing misalignment due to negative transfer from samples in the outlier label-space $\bar{\mathcal{C}}_s = \mathcal{C}_s \setminus \mathcal{C}_t$.
III-B Partial Domain Adaptation Model
With the objective outlined above, in this section, we present an overview of the proposed architecture. The learning process can be categorized into four major components, namely:
• Attaining domain-invariance in the latent space.
• Establishing class-wise distribution alignment.
• Ensuring supervision of target samples through pseudo-label generation.
• Minimizing negative knowledge transfer from source samples in $\bar{\mathcal{C}}_s$ by regulating sample-wise contribution to the classification, domain discrimination, and input reconstruction tasks.
The proposed model (fig. 2) accepts an input sample $x$ from the source/target domain ($\mathcal{D}_s$/$\mathcal{D}_t$) and encodes it to a lower-dimensional latent representation through the encoder $E$. The output of the encoder is accepted by the domain discriminator $D$, which determines its domain membership. Concurrently, the encoder output is processed by networks $F_\mu$ and $F_\sigma$ to obtain the feature means and feature variances, respectively. $F_\mu(E(x))$ and $F_\sigma(E(x))$ parameterize a Gaussian distribution from which a latent feature sample $z$ of $x$ is obtained (the sampling process is conducted using the re-parameterization trick). $z$ is subsequently passed through a decoder $G$ and a classifier $C$ for data reconstruction ($\hat{x}$) and label prediction ($\hat{y}$), respectively. In addition, we utilize a pseudo-labeling strategy with a non-parametric classifier for supervision of target sample classification and computation of class-importance weights (required for reducing the effect of samples from the outlier classes $\bar{\mathcal{C}}_s$). The following sections discuss the network architecture from the standpoint of mitigating the issues mentioned earlier.
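To make the component layout concrete, the following is a minimal PyTorch sketch of the described modules, assuming the layer sizes reported in section IV-B. The module names (PDANet, fc_mu, etc.) and the simplified fully-connected decoder are illustrative choices, not the exact implementation.

```python
import torch
import torch.nn as nn
from torchvision import models


class PDANet(nn.Module):
    def __init__(self, num_classes, latent_dim=256):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # E: ResNet-50 trunk
        feat_dim = backbone.fc.in_features                             # 2048
        self.bottleneck = nn.Linear(feat_dim, latent_dim)              # 256-d bottleneck
        self.fc_mu = nn.Linear(latent_dim, latent_dim)                 # F_mu: feature means
        self.fc_logvar = nn.Linear(latent_dim, latent_dim)             # F_sigma: feature (log-)variances
        self.classifier = nn.Linear(latent_dim, num_classes)          # C: label predictor
        self.discriminator = nn.Sequential(                            # D: domain discriminator
            nn.Linear(latent_dim, 1024), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(1024, 1024), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(1024, 1), nn.Sigmoid())
        self.decoder = nn.Sequential(                                  # G: a simple MLP stands in for the
            nn.Linear(latent_dim, 512), nn.ReLU(),                     # transposed-convolution decoder
            nn.Linear(512, 3 * 224 * 224), nn.Sigmoid())               # flattened image reconstruction

    def forward(self, x):
        f = self.bottleneck(self.encoder(x).flatten(1))
        mu, logvar = self.fc_mu(f), self.fc_logvar(f)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)        # re-parameterization trick
        return f, mu, logvar, z, self.classifier(z), self.decoder(z)
```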
III-B1 Domain-Invariant Feature Extraction with Adversarial Learning
The adversarial approach to domain adaptation usually centers around matching the source and target feature distributions through a two-player minimax game. The idea has resulted in a series of DANNs (domain adversarial neural networks) [19], which achieve high performance in a typical domain adaptation setup with a shared label space across domains. In the proposed setup, the first player is modeled as the domain discriminator $D$, trained to separate the source latent features $E(x^s)$ from the target features $E(x^t)$. The encoder $E$ poses as the second player, trained simultaneously to confuse the domain discriminator by generating domain-invariant features. The encoder weights are learned by maximizing the loss of $D$, whereas the discriminator weights are learned by minimizing the loss of $D$, so as to extract domain-transferable features.
The overall objective of the Domain Adversarial Neural Network is realized by minimizing the following term:
$\mathcal{L}_{adv} = -\dfrac{1}{n_s}\sum_{i=1}^{n_s} w_{y_i^s}\, \log D\!\left(E(x_i^s)\right) \;-\; \dfrac{1}{n_t}\sum_{j=1}^{n_t} \log\!\left(1 - D\!\left(E(x_j^t)\right)\right) \qquad (1)
With the objective of eliminating negative transfer, we down-weight the contributions of all outlier source samples from the source label space $\bar{\mathcal{C}}_s$. This is achieved by multiplying $w_{y_i^s}$ with the log value of the domain discriminator output over the source domain data, where $y_i^s$ is the ground-truth label of source sample $x_i^s$ and $w_{y_i^s}$ represents the corresponding class weight in the class-importance weight vector $w$. The detailed process of estimating $w$ is presented in section III-B3.
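A hedged sketch of how this class-weighted adversarial term could be implemented with a gradient reversal layer is shown below. The function names and the use of binary cross-entropy are assumptions consistent with standard DANN-style training, not the authors' exact code.

```python
import torch
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass, -lambda * grad in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def adversarial_loss(discriminator, z_src, z_tgt, y_src, class_weights, lambd=1.0):
    # Gradients flowing back into the encoder are reversed, so D is minimized
    # while E is implicitly trained to maximize it (eq. 1).
    d_src = discriminator(GradReverse.apply(z_src, lambd)).squeeze(1)
    d_tgt = discriminator(GradReverse.apply(z_tgt, lambd)).squeeze(1)
    w = class_weights[y_src]                      # per-sample weight w_{y^s}
    loss_src = F.binary_cross_entropy(d_src, torch.ones_like(d_src), weight=w)
    loss_tgt = F.binary_cross_entropy(d_tgt, torch.zeros_like(d_tgt))
    return loss_src + loss_tgt
```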
III-B2 Latent Feature Alignment using Variational Information
As highlighted earlier, improving class conditional distribution alignment forms a salient task in pda besides attaining domain invariance. The underlying classification objective is better realized if samples with the same class labels are mapped to the same reference distribution while samples with different class labels are assigned to different distributions. We propose to address this by regularizing the latent space. In this work, the latent features are modeled as a mixture of Gaussian distributions
$p(z) = \sum_{c \in \mathcal{C}_s} p(c)\, \mathcal{N}\!\left(z \mid \mu_c,\, \sigma_c^2 I\right) \qquad (2)$
In the equation above, $I$, $\mu_c$, and $\sigma_c^2$ signify the identity matrix, mean, and variance parameters, respectively. Here, each $\mathcal{N}(\mu_c, \sigma_c^2 I)$ represents a reference feature distribution (prior) for a predicted class $c \in \mathcal{C}_s$.
The latent representations sampled from these distributions are subsequently processed for classification and data reconstruction to preserve the class-discriminative and structural information in them. The reconstruction process is modeled with a Gaussian distribution $p(x \mid z)$. The mean of this distribution is represented by the output of a deterministic function on $z$, while the covariance matrix is defined as $\eta^2$ times the identity matrix ($\eta$ is a positive constant). We approximate this distribution using a decoder neural network $G$, where:
$p(x \mid z) = \mathcal{N}\!\left(x \mid G(z),\, \eta^2 I\right) \qquad (3)$
With the assumption that the latent features are samples drawn from a mixture of Gaussian distributions, we aim to estimate the posterior distribution $p(c, z \mid x)$. Citing the potential of variational inference for learning latent representations [16], we utilize it in our work to approximate $p(c, z \mid x)$ with $q(c, z \mid x)$. Assuming $c$ and $x$ are conditionally independent given $z$, we can expand the approximated distribution as $q(c, z \mid x) = q(c \mid z)\, q(z \mid x)$. The product terms in the expansion signify classification and encoding, respectively. To learn smooth latent features, $q(z \mid x)$ is formulated as a sample-wise Gaussian distribution:
$q(z \mid x) = \mathcal{N}\!\left(z \mid F_\mu(E(x)),\, F_\sigma(E(x))\, I\right) \qquad (4)$
where $E$, $F_\mu$, and $F_\sigma$ are neural networks. The classifier objective is realized by $q(c \mid z)$, as:
$q(c \mid z) = \delta\!\left(C(z)\right) \qquad (5)$
Here, $C$ represents the classifier neural network, followed by a softmax operation $\delta$. A pseudo-labeling strategy using a non-parametric classifier is employed to obtain labels for samples in $\mathcal{D}_t$. A subset of target samples with above-average confidence predictions is filtered out for training $C$ (illustrated further in section III-B3). With the class information obtained for supervision, the adapted predictions $q(c \mid z)$ are matched to the class labels for capturing class-semantic information, using the cross-entropy loss $\mathcal{L}_{cls}$:
$\mathcal{L}_{cls} = -\,\mathbb{E}_{(x,\, y)}\!\left[\log q(y \mid z)\right] \qquad (6)$
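The following sketch illustrates one way eqs. 4–6 could be realized: sampling the latent feature with the re-parameterization trick and computing a cross-entropy over source labels and confident target pseudo-labels. The per-sample class weighting of the source term is our assumption based on the down-weighting described in section III-B3.

```python
import torch
import torch.nn.functional as F


def sample_latent(mu, logvar):
    """Draw z ~ N(mu, sigma^2 I) with the re-parameterization trick (eq. 4)."""
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)


def classification_loss(logits_src, y_src, class_weights,
                        logits_tgt=None, pseudo_y_tgt=None):
    """Cross-entropy of eq. 6: source samples weighted by their class importance
    (an assumption based on section III-B3), plus supervision of confident target
    samples through pseudo-labels when they are available."""
    w = class_weights[y_src]
    loss = (w * F.cross_entropy(logits_src, y_src, reduction="none")).mean()
    if logits_tgt is not None and pseudo_y_tgt is not None and len(pseudo_y_tgt) > 0:
        loss = loss + F.cross_entropy(logits_tgt, pseudo_y_tgt)
    return loss
```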
After establishing the reference (prior) and posterior distributions, we follow a variant of the distribution alignment strategy [15] inferred from maximizing the evidence lower bound, and employ variational inference by aligning the encoded latent feature distributions $q(z \mid x)$ with the mixture of Gaussians $p(z)$. In addition, the adapted reconstructions are matched with the input sample using a reconstruction loss $\mathcal{L}_{rec}$ to preserve the target information in the encoded representations. Using a strategy similar to the one utilized during adversarial domain alignment, we down-weight the contributions of all outlier source samples from $\bar{\mathcal{C}}_s$ using $w$ (presented in section III-B3). The combined objective encompassing the class distribution alignment ($\mathcal{L}_{align}$) and data reconstruction ($\mathcal{L}_{rec}$), captured in $\mathcal{L}_{vi}$, is presented as:
$\mathcal{L}_{vi} = \mathcal{L}_{align} + \mathcal{L}_{rec} \qquad (7)$

$\mathcal{L}_{align} = \mathbb{E}_{x}\!\left[ w_x\, D_{KL}\!\left(q(c, z \mid x)\,\middle\|\,p(c, z)\right)\right] \qquad (8)$

$\mathcal{L}_{rec} = -\,\mathbb{E}_{x}\!\left[ w_x\, \mathbb{E}_{q(z \mid x)}\!\left[\log p(x \mid z)\right]\right] \qquad (9)$

Here, $w_x$ denotes the class-importance weight $w_{y_i^s}$ when $x$ is a source sample and is set to 1 for target samples.
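As one possible reading of eqs. 7–9, the sketch below aligns the per-sample Gaussian posterior with the Gaussian prior of the sample's (pseudo-)class via a closed-form KL term and adds a squared-error reconstruction term; the exact functional forms are assumptions inferred from the surrounding description, not the authors' exact objective.

```python
import torch


def kl_gaussian(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over latent dims."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=1)


def variational_loss(mu, logvar, labels, prior_mu, prior_logvar, x, x_recon, sample_weights):
    """One reading of eqs. 7-9: align q(z|x) with the Gaussian prior of the sample's
    (pseudo-)class and penalize reconstruction error, both down-weighted per sample."""
    align = kl_gaussian(mu, logvar, prior_mu[labels], prior_logvar[labels])
    recon = ((x_recon - x.flatten(1)) ** 2).sum(dim=1)  # Gaussian log-likelihood up to constants
    return (sample_weights * (align + recon)).mean()
```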
III-B3 Target Supervision and Estimation of Class-Importance Weights through Pseudo-Labels
The classification performance in a pda setup is contingent upon the model's capability to limit negative transfer from the private-category samples in $\bar{\mathcal{C}}_s$. The extraneous information contained in these samples may confuse the classifier, resulting in an increase in classification error. Therefore, a filtration mechanism is necessary to limit their contribution to the learning process. Most existing solutions to the problem [3, 14, 4, 5] attempt to address the negative transfer issue by re-weighting samples with their predicted classification probabilities or by performing a class-wise aggregation over all target samples to estimate the shared classes. These strategies are, however, not satisfactory and may induce severe classification errors, thereby misleading the optimization process. The effect is especially drastic during the initial stages of training, when the classifier is shallow-trained and does not generate high-confidence predictions.
This work utilizes a subset of target samples in the class-importance weight computation process by leveraging high-confidence target predictions. Inspired by the domain adaptation approaches presented in [18, 17, 31], we enable target domain supervision by incorporating pseudo-labels generated from a non-parametric classifier. The adopted pseudo-labeling strategy proceeds as follows (a code sketch is provided after the listed steps):
• Step 1: For each input sample $x_i^s$ in $\mathcal{D}_s$ and $x_j^t$ in $\mathcal{D}_t$, we obtain the encoded latent representations $z_i^s$ and $z_j^t$, respectively, using eq. 4.
• Step 2: Using the encoded representations of the source samples and their corresponding category information $y_i^s$, we compute the cluster centers $\mu_c$ for each class $c \in \mathcal{C}_s$, where:
$\mu_c = \dfrac{1}{|\{i : y_i^s = c\}|}\sum_{i\,:\,y_i^s = c} z_i^s \qquad (10)$
• Step 3: A similarity function $s_j$ (returning a vector of size $|\mathcal{C}_s|$) is computed for each $z_j^t$, quantifying its closeness to the representative center of each source class; it is represented as:
$s_j[c] = 1 - \mathrm{JSD}\!\left(z_j^t \,\middle\|\, \mu_c\right) \qquad (11)$
We formulate the similarity measure using the Jensen-Shannon divergence metric (JSD) to measure the closeness between the latent-target vector and the cluster centers of the latent-source representations. The final value of each entry is normalized to the range [0, 1], with a higher value signifying greater similarity.
• Step 4: Probability predictions $\hat{p}_j$ are assigned for $x_j^t$ by computing a softmax, $\delta$, over the similarity values in $s_j$, i.e.:
$\hat{p}_j = \delta(s_j) \qquad (12)$
$\hat{y}_j^t = \arg\max_{c \in \mathcal{C}_s} \hat{p}_j[c] \qquad (13)$
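The listed steps can be summarized in the following sketch. Since the precise distributional form fed to the JS divergence in eq. 11 is not fully recoverable here, a softmax over the latent vectors is assumed purely to obtain valid probability vectors; all function names are illustrative.

```python
import torch
import torch.nn.functional as F


def js_divergence(p, q, eps=1e-8):
    """Jensen-Shannon divergence between probability vectors along the last dimension."""
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * ((a + eps).log() - (b + eps).log())).sum(dim=-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


def pseudo_label(z_src, y_src, z_tgt, num_classes):
    # Step 2 (eq. 10): class-wise cluster centers of the source latent features.
    # Assumes every class in C_s occurs at least once in z_src.
    centers = torch.stack([z_src[y_src == c].mean(dim=0) for c in range(num_classes)])
    # Step 3 (eq. 11): closeness of each target feature to every center via JSD,
    # mapped to [0, 1] (natural-log JSD is bounded by log 2).
    p_tgt = F.softmax(z_tgt, dim=1).unsqueeze(1)      # (n_t, 1, d); assumed distributional form
    p_ctr = F.softmax(centers, dim=1).unsqueeze(0)    # (1, |C_s|, d)
    sim = 1.0 - js_divergence(p_tgt, p_ctr) / torch.log(torch.tensor(2.0))
    # Step 4 (eqs. 12-13): softmax over similarities gives probabilities and pseudo-labels.
    probs = F.softmax(sim, dim=1)
    return probs, probs.argmax(dim=1)
```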
TABLE I: Classification accuracy (%) for partial domain adaptation tasks on the Office-Home dataset.

| Method | Ar→Cl | Ar→Pr | Ar→Rw | Cl→Ar | Cl→Pr | Cl→Rw | Pr→Ar | Pr→Cl | Pr→Rw | Rw→Ar | Rw→Cl | Rw→Pr | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Resnet-50 [7] | 46.33 | 67.51 | 75.87 | 59.14 | 59.94 | 62.73 | 58.22 | 41.79 | 74.88 | 67.40 | 48.18 | 74.17 | 61.35 |
| DANN [24] | 43.76 | 67.90 | 77.47 | 63.73 | 58.99 | 67.59 | 56.84 | 37.07 | 76.37 | 69.15 | 44.30 | 77.48 | 61.72 |
| ADDA [12] | 45.23 | 68.79 | 79.21 | 64.56 | 60.01 | 68.29 | 57.56 | 38.89 | 77.45 | 70.28 | 45.23 | 78.32 | 62.82 |
| PADA [4] | 51.95 | 67.00 | 78.74 | 52.16 | 53.78 | 59.03 | 52.61 | 43.22 | 78.79 | 73.73 | 56.60 | 77.09 | 62.06 |
| SSPDA [5] | 52.02 | 63.64 | 77.95 | 65.66 | 59.31 | 73.48 | 70.49 | 51.54 | 84.89 | 76.25 | 60.74 | 80.86 | 68.07 |
| RTN [10] | 49.31 | 57.70 | 80.07 | 63.54 | 63.47 | 73.38 | 65.11 | 41.73 | 75.32 | 63.18 | 43.57 | 80.50 | 63.07 |
| IWAN [14] | 53.94 | 54.45 | 78.12 | 61.31 | 47.95 | 63.32 | 54.17 | 52.02 | 81.28 | 76.46 | 56.75 | 82.90 | 63.56 |
| SAN [3] | 44.42 | 68.68 | 74.60 | 67.49 | 64.99 | 77.80 | 59.78 | 44.72 | 80.07 | 72.18 | 50.21 | 78.66 | 65.30 |
| Proposed model | 54.18 | 69.22 | 81.44 | 65.91 | 64.73 | 73.81 | 71.26 | 52.31 | 83.93 | 76.48 | 60.92 | 81.04 | 69.60 |
| w/o ast | 52.87 | 68.78 | 79.16 | 63.92 | 63.88 | 70.01 | 69.17 | 50.21 | 79.86 | 74.38 | 58.16 | 79.62 | 67.50 |
| w/o adv | 48.53 | 69.74 | 77.38 | 60.97 | 62.39 | 64.92 | 62.13 | 45.23 | 75.87 | 69.19 | 52.63 | 77.51 | 63.80 |
| w/o cdl | 50.76 | 68.34 | 78.94 | 62.03 | 63.79 | 68.17 | 66.19 | 48.66 | 78.11 | 70.39 | 54.93 | 79.08 | 65.69 |
TABLE II: Classification accuracy (%) for partial domain adaptation tasks on the Office-31 dataset.

| Method | A→W | A→D | W→A | W→D | D→A | D→W | Avg. |
|---|---|---|---|---|---|---|---|
| Resnet-50 [7] | 75.59 | 83.44 | 84.97 | 98.09 | 83.92 | 96.27 | 87.05 |
| DAN [9] | 59.32 | 61.78 | 67.64 | 90.45 | 74.95 | 73.90 | 71.34 |
| DANN [24] | 73.56 | 81.53 | 86.12 | 98.73 | 82.78 | 96.27 | 86.50 |
| ADDA [12] | 75.67 | 83.41 | 84.25 | 99.85 | 83.62 | 95.38 | 87.03 |
| PADA [4] | 86.54 | 82.17 | 95.41 | 100.00 | 92.69 | 99.32 | 92.69 |
| SSPDA [5] | 91.52 | 90.87 | 94.36 | 98.94 | 90.61 | 92.88 | 93.20 |
| RTN [10] | 78.98 | 77.07 | 89.46 | 85.35 | 89.25 | 93.22 | 85.56 |
| IWAN [14] | 89.15 | 90.45 | 94.26 | 99.36 | 95.62 | 99.32 | 94.69 |
| SAN [3] | 90.90 | 94.27 | 88.73 | 99.36 | 94.15 | 99.32 | 94.96 |
| Proposed model | 92.17 | 93.98 | 96.32 | 100.00 | 94.26 | 98.43 | 95.86 |
| w/o ast | 91.14 | 94.51 | 93.39 | 98.90 | 91.93 | 95.47 | 94.22 |
| w/o adv | 83.39 | 81.19 | 89.04 | 98.63 | 87.98 | 97.31 | 89.59 |
| w/o cdl | 85.71 | 84.03 | 90.12 | 98.26 | 88.97 | 96.13 | 90.53 |
The probability vector $\hat{p}_j$ is utilized for the estimation of class importance through the confidence probability $\max_c \hat{p}_j[c]$. It gives us an idea of the degree of confidence with which the target sample $x_j^t$ is mapped to its closest cluster center. A low value indicates that the model is still confused about mapping $x_j^t$ to a category in $\mathcal{C}_s$. Using unreliable target samples with low confidence values for class-importance weight estimation might thwart classification by misleading the model optimization task. Citing this, we devise a voting strategy to compute the class-importance weight vector $w$ (of size $|\mathcal{C}_s|$), in which only a subset of the target samples (the ones with high-confidence predictions) is allowed to participate. This is mathematically represented as:
$\tilde{w}[c] = \sum_{j\,:\,\max_k \hat{p}_j[k]\, \geq\, \tau} \mathbb{1}\!\left[\hat{y}_j^t = c\right] \qquad (14)$

$w = \tilde{w} \,/\, \max_{c}\, \tilde{w}[c] \qquad (15)$
For a source sample $x_i^s$ with label $y_i^s$, $w_{y_i^s}$ denotes the corresponding class weight in the class-importance weight vector $w$.
The threshold parameter $\tau$ is computed over the predicted outputs of the non-parametric classifier ($\delta$ represents the softmax function) and measures the average probability of the source domain samples belonging to their ground-truth classes, i.e.:
$\tau = \dfrac{1}{n_s}\sum_{i=1}^{n_s} \delta(s_i)\!\left[y_i^s\right] \qquad (16)$

where $s_i$ denotes the similarity vector of eq. 11 computed for the source sample $x_i^s$.
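A compact sketch of the voting strategy of eqs. 14–16 is given below; the normalization by the maximum vote count is an assumption about how the weights are scaled to [0, 1], and the function name is illustrative.

```python
import torch


def class_importance_weights(probs_tgt, probs_src, y_src, num_classes):
    """Voting strategy of eqs. 14-16 (sketch). probs_tgt / probs_src are the
    pseudo-label probability vectors of target / source samples (eq. 12)."""
    # Eq. 16: tau = average probability assigned to the ground-truth class of source samples.
    tau = probs_src[torch.arange(len(y_src)), y_src].mean()
    # Eq. 14: only confident target samples vote for their predicted class.
    conf, pred = probs_tgt.max(dim=1)
    votes = torch.bincount(pred[conf >= tau], minlength=num_classes).float()
    # Eq. 15: normalize so the largest weight equals 1 (assumed scaling).
    w = votes / votes.max().clamp(min=1.0)
    return w, tau
```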
III-B4 Entropy Minimization of Target Samples
In the early phases of a classification process, several side effects arise when adapting from one domain to another, ranging from difficulties in knowledge transfer caused by significant domain shift to increased uncertainty in the classifier. Entropy minimization on the predicted target samples is a promising candidate for eliminating such adverse effects. In this work, we use an entropy minimization loss, described as:
$\mathcal{L}_{ent} = -\dfrac{1}{n_t}\sum_{j=1}^{n_t}\sum_{c \in \mathcal{C}_s} p_{j,c}\, \log p_{j,c} \qquad (17)$
where $p_{j,c}$ represents the probability of the target sample $x_j^t$ belonging to class $c$, as predicted by the classifier $C$.
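For completeness, the entropy term of eq. 17 fits in a few lines (a minimal sketch; `entropy_loss` is an illustrative name):

```python
import torch.nn.functional as F


def entropy_loss(logits_tgt, eps=1e-8):
    """Eq. 17: average prediction entropy over a batch of target samples."""
    p = F.softmax(logits_tgt, dim=1)
    return -(p * (p + eps).log()).sum(dim=1).mean()
```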
III-B5 Overall Objective
To sum up, the overall objective function is modeled as follows:
$\mathcal{L} = \mathcal{L}_{cls} + \lambda\, \mathcal{L}_{adv} + \alpha\, \mathcal{L}_{vi} + \beta\, \mathcal{L}_{ent} \qquad (18)$
with $\alpha$ and $\beta$ as the trade-off hyper-parameters, and $\lambda$ as the adversarial coefficient introduced through the gradient reversal layer (see section IV-B).
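The individual terms can then be combined into a single training step as sketched below, reusing the helper functions from the earlier sketches. Applying the gradient-reversal coefficient inside the adversarial helper, rather than as an explicit multiplier, is an implementation choice on our part.

```python
def training_step(model, x_src, y_src, x_tgt, class_weights,
                  prior_mu, prior_logvar, alpha, beta, lambd):
    """One optimization step combining the terms of eq. 18 (sketch). `class_weights` is the
    class-importance vector w, refreshed periodically with the voting sketch above;
    `prior_mu` / `prior_logvar` hold the per-class Gaussian prior parameters."""
    _, mu_s, logvar_s, z_s, logits_s, recon_s = model(x_src)
    _, _, _, z_t, logits_t, _ = model(x_tgt)

    loss = classification_loss(logits_s, y_src, class_weights)
    # lambd is applied inside the gradient reversal layer rather than as an explicit multiplier.
    loss = loss + adversarial_loss(model.discriminator, z_s, z_t, y_src, class_weights, lambd)
    loss = loss + alpha * variational_loss(mu_s, logvar_s, y_src, prior_mu, prior_logvar,
                                           x_src, recon_s, class_weights[y_src])
    loss = loss + beta * entropy_loss(logits_t)
    return loss
```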
IV Experiments
In this section, we perform experiments on two benchmark datasets (Office-Home [13] and Office-31 [11]) to evaluate the efficacy of the proposed framework. The experiments are conducted in a pda setup over multiple tasks for each dataset. The following sections report the evaluation results, followed by an ablation analysis.
IV-A Datasets
For the proposed technique’s overall performance assessment, we use the standard datasets for domain adaptation, specifically Office-Home and Office-31.
The Office-31 [11] dataset is relatively small and comprises 4,652 images. These are grouped into 31 distinct categories, representing three domains: Amazon (A), DSLR (D), and Webcam (W). For evaluation purposes, we replicate the setup proposed by Cao et al. [4], where the target domain dataset contains images from 10 distinct classes. The assessment is conducted on 6 different permutations of source-target combinations, namely A→W, A→D, W→A, W→D, D→A, and D→W.
The larger Office-Home dataset [13], utilized in this evaluation, is significantly more challenging. It contains a collection of 15,500 images grouped into 4 distinct domains: Artistic (Ar), Clip Art (Cl), Product (Pr), and Real-world (Rw). Following the PADA setup [4], we construct the source and target datasets with images from 65 and 25 different classes, respectively. 12 different permutations of source-target combinations are used for evaluation purposes, namely Ar→Cl, Ar→Pr, Ar→Rw, Cl→Ar, Cl→Pr, Cl→Rw, Pr→Ar, Pr→Cl, Pr→Rw, Rw→Ar, Rw→Cl, and Rw→Pr.
IV-B Implementation
The models in the evaluation are implemented using PyTorch on an Nvidia 3090-Ti GPU with 24 GB of memory. We utilize Resnet-50 [7], pre-trained on the ImageNet dataset [32] and fine-tuned on the source data, as the backbone network for feature encoding. Before the fully-connected classification layers, we introduce a bottleneck layer of size 256. The discriminator consists of two fully-connected hidden layers of size 1024, each with ReLU activation and a dropout probability of 0.5, followed by a final layer of size 1 with sigmoid activation. The decoder comprises a fully-connected layer followed by a series of transposed-convolution layers with batch normalization and leaky-ReLU activations, and a sigmoid activation in the output layer. The network parameters are optimized using mini-batch SGD with a batch size of 36 and a momentum of 0.9 for 5000 epochs. The learning rates of the bottleneck, classification, decoding, and domain discriminator layers are 10 times that of the backbone, which is set to 1e-3 initially and adjusted as done in PADA [4]. We employ the same approach as DANN [24] to introduce a gradient reversal layer for adversarial training, increasing its coefficient $\lambda$ from 0 to 1 as the iterations progress. The trade-off parameters $\alpha$ and $\beta$ are set to 0.8 and 0.1 for Office-31, and to 1 and 0.1 for Office-Home, respectively. We report the trainable classifier network outputs for target classification during model evaluation.
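The sketch below shows one way the optimizer and schedules described above could be configured; the annealing constants and the weight-decay value are assumptions taken from common DANN/PADA-style implementations rather than from the text.

```python
import math
import torch


def make_optimizer(model, base_lr=1e-3):
    """New layers (bottleneck, mu/sigma heads, classifier, decoder, discriminator)
    are trained at 10x the backbone learning rate."""
    params = [
        {"params": model.encoder.parameters(), "lr": base_lr},
        {"params": [p for n, p in model.named_parameters()
                    if not n.startswith("encoder")], "lr": base_lr * 10},
    ]
    return torch.optim.SGD(params, momentum=0.9, weight_decay=5e-4)  # weight decay is an assumption


def schedule(step, total_steps, alpha=10.0, beta=0.75, gamma=10.0):
    """DANN/PADA-style annealing (constants are assumptions): returns the learning-rate
    decay factor and the gradient-reversal coefficient lambda, which grows from 0 to 1."""
    p = step / total_steps
    lr_scale = (1.0 + alpha * p) ** (-beta)
    lambd = 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0
    return lr_scale, lambd
```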
IV-C Comparison Models
The model is evaluated against state-of-the-art domain adaptation models suitable for unsupervised closed-set and partial domain adaptation tasks, namely Resnet-50 [7], Deep Adaptation Network (DAN) [9], Domain Adversarial Neural Network (DANN) [24], Adversarial Discriminative Domain Adaptation (ADDA) [12], Residual Transfer Networks (RTN) [10], Importance Weighted Adversarial Nets (IWAN) [14], Selective Adversarial Network (SAN) [3], Partial Adversarial Domain Adaptation (PADA) [4], and Class Subset Selection for Partial Domain Adaptation (SSPDA) [5].
V Results and Analysis
From the results summarized in Tables I and II, it is observed that the proposed method achieves accuracy comparable or superior to the state-of-the-art models addressing closed-set and partial domain adaptation on the presented tasks, achieving the highest accuracy in 8 out of 12 tasks on Office-Home and in 3 out of 6 tasks on Office-31. It also outperforms the compared models in overall average accuracy on both datasets.
Furthermore, we have also conducted an ablation analysis on the proposed framework by suppressing its three main components, one at a time:
• Proposed model without adaptive selection of target samples (w/o ast): To evaluate its effectiveness, we limit the utilization of pseudo-labels for selecting highly confident target samples and computing the class-importance weights. Instead, we follow the strategy proposed by Cao et al. [3], aggregating the classifier prediction probabilities over all target samples to estimate $w$.
• Proposed model without the adversarial loss (w/o adv): To gauge the effectiveness of domain distribution alignment in the latent space, we restrict the learning of domain-invariant latent representations by suppressing the adversarial objective (setting $\lambda$ to 0 in eq. 18).
• Proposed model without class-distribution alignment (w/o cdl): The proposed network utilizes the objective $\mathcal{L}_{vi}$ to regularize the latent space and achieve class distribution alignment. In this variant, we restrict the model from exploiting variational information for class distribution alignment by setting the value of $\alpha$ to 0 (see eq. 18).
VI Conclusion
Improving class-conditional distribution alignment forms a salient task in a pda setup, besides attaining domain invariance. Citing this, we couple an adversarial objective for domain alignment with a class-distribution alignment strategy that uses variational information to regularize the latent space. Furthermore, we develop a robust technique for eliminating negative transfer and ensuring effective target supervision by adaptively selecting a subset of highly confident target samples. The proposed model is tested on a range of tasks against state-of-the-art models addressing closed-set and partial domain adaptation problems for a comprehensive assessment. In addition, we performed an ablation analysis to verify the importance of the highlighted modules and establish their contribution to the suggested framework. The experimental findings demonstrate the suggested model's effectiveness over the compared models on the challenging tasks designed over two benchmark datasets.
References
- [1] Sugiyama, Masashi, Matthias Krauledat, and Klaus-Robert Müller. “Covariate shift adaptation by importance weighted cross validation.” Journal of Machine Learning Research 8, no. 5 (2007).
- [2] K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan. Domain separation networks. In Advances in neural information processing systems, pages 343–351, 2016.
- [3] Z. Cao, M. Long, J. Wang, and M. I. Jordan. Partial transfer learning with selective adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2724–2732, 2018.
- [4] Z. Cao, L. Ma, M. Long, and J. Wang. Partial adversarial domain adaptation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 135–150, 2018.
- [5] Z. Cao, K. You, M. Long, J. Wang, and Q. Yang. Learning to transfer examples for partial domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2985–2994, 2019.
- [6] Choudhuri, Sandipan, Riti Paul, Arunabha Sen, Baoxin Li, and Hemanth Venkateswara. ”Partial Domain Adaptation Using Selective Representation Learning For Class-Weight Computation.” In 2020 54th Asilomar Conference on Signals, Systems, and Computers, pp. 289-293. IEEE, 2020.
- [7] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
- [8] J. Hu, C. Wang, L. Qiao, H. Zhong, and Z. Jing. Multi-weight partial domain adaptation. 2019.
- [9] M. Long, Y. Cao, J. Wang, and M. Jordan. Learning transferable features with deep adaptation networks. In International conference on machine learning, pages 97–105. PMLR, 2015.
- [10] M. Long, H. Zhu, J. Wang, and M. I. Jordan. Unsupervised domain adaptation with residual transfer networks. In Advances in neural information processing systems, pages 136–144, 2016.
- [11] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In European conference on computer vision, pages 213–226. Springer, 2010.
- [12] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell. Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7167–7176, 2017.
- [13] H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5018–5027, 2017.
- [14] J. Zhang, Z. Ding, W. Li, and P. Ogunbona. Importance weighted adversarial nets for partial domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8156–8164, 2018.
- [15] Yeh, Hao-Wei, Baoyao Yang, Pong C. Yuen, and Tatsuya Harada. ”Sofa: Source-data-free feature alignment for unsupervised domain adaptation.” In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 474-483. 2021.
- [16] Kingma, Diederik P., and Max Welling. ”Auto-encoding variational bayes.” arXiv preprint arXiv:1312.6114 (2013).
- [17] Long, Mingsheng, Yue Cao, Jianmin Wang, and Michael Jordan. ”Learning transferable features with deep adaptation networks.” In International conference on machine learning, pp. 97-105. PMLR, 2015.
- [18] Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. 2016. Unsupervised domain adaptation with residual transfer networks. In Advances in neural information processing systems. 136–144.
- [19] Ganin, Yaroslav, and Victor Lempitsky. ”Unsupervised domain adaptation by backpropagation.” In International conference on machine learning, pp. 1180-1189. PMLR, 2015.
- [20] Hoffman, Judy, Sergio Guadarrama, Eric S. Tzeng, Ronghang Hu, Jeff Donahue, Ross Girshick, Trevor Darrell, and Kate Saenko. ”LSDA: Large scale detection through adaptation.” Advances in neural information processing systems 27 (2014).
- [21] Oquab, Maxime, Leon Bottou, Ivan Laptev, and Josef Sivic. ”Learning and transferring mid-level image representations using convolutional neural networks.” In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1717-1724. 2014.
- [22] Yosinski, Jason, Jeff Clune, Yoshua Bengio, and Hod Lipson. ”How transferable are features in deep neural networks?.” Advances in neural information processing systems 27 (2014).
- [23] Zhang, Lei, Peng Wang, Wei Wei, Hao Lu, Chunhua Shen, Anton van den Hengel, and Yanning Zhang. ”Unsupervised domain adaptation using robust class-wise matching.” IEEE Transactions on Circuits and Systems for Video Technology 29, no. 5 (2018): 1339-1349.
- [24] Ganin, Yaroslav, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. ”Domain-adversarial training of neural networks.” The journal of machine learning research 17, no. 1 (2016): 2096-2030.
- [25] Li, Shuang, Chi Harold Liu, Binhui Xie, Limin Su, Zhengming Ding, and Gao Huang. ”Joint adversarial domain adaptation.” In Proceedings of the 27th ACM International Conference on Multimedia, pp. 729-737. 2019.
- [26] Liu, Xiangbin, Liping Song, Shuai Liu, and Yudong Zhang. ”A review of deep-learning-based medical image segmentation methods.” Sustainability 13, no. 3 (2021): 1224.
- [27] Wang, Wei, Yujing Yang, Xin Wang, Weizheng Wang, and Ji Li. ”Development of convolutional neural network and its application in image classification: a survey.” Optical Engineering 58, no. 4 (2019): 040901.
- [28] Dang, Qi, Jianqin Yin, Bin Wang, and Wenqing Zheng. ”Deep learning based 2d human pose estimation: A survey.” Tsinghua Science and Technology 24, no. 6 (2019): 663-676.
- [29] Choudhuri, Sandipan, Nibaran Das, Ritesh Sarkhel, and Mita Nasipuri. ”Object localization on natural scenes: A survey.” International Journal of Pattern Recognition and Artificial Intelligence 32, no. 02 (2018): 1855001.
- [30] Guo, Zhiyang, Yingping Huang, Xing Hu, Hongjian Wei, and Baigan Zhao. ”A survey on deep learning based approaches for scene understanding in autonomous driving.” Electronics 10, no. 4 (2021): 471.
- [31] Choudhuri, Sandipan, Hemanth Venkateswara, and Arunabha Sen. ”Coupling Adversarial Learningwith Selective Voting Strategy for Distribution Alignment in Partial Domain Adaptation.” Journal of Computational and Cognitive Engineering 1, no. 4 (2022): 181-186.
- [32] Deng, Jia, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ”Imagenet: A large-scale hierarchical image database.” In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. Ieee, 2009.