DSDRNet: Disentangling Representation and Reconstruct Network for Domain Generalization
Abstract
Domain generalization faces challenges due to the distribution shift between training and testing sets and the presence of unseen target domains. Common solutions include domain alignment, meta-learning, data augmentation, and ensemble learning, most of which rely on domain labels or domain-adversarial techniques. In this paper, we propose a dual-stream separation and reconstruction network, dubbed DSDRNet. It is a disentanglement-reconstruction approach that fuses inter-instance and intra-instance features through a dual-stream design. The method introduces novel supervisory signals by combining inter-instance semantic distance with intra-instance similarity. Incorporating Adaptive Instance Normalization (AdaIN) into a two-stage cyclic reconstruction process strengthens the self-disentanglement and reconstruction signals and facilitates model convergence. Extensive experiments on four benchmark datasets demonstrate that DSDRNet outperforms other popular methods in terms of domain generalization capability.
Index Terms:
Domain generalization, Disentanglement, Reconstruction, AdaIN
I Introduction
Currently, many deep learning tasks assume that the training and testing sets share the same data distribution. Real-world scenarios often violate this assumption, for example, training a model on color images and testing it on black-and-white images. Traditional training methods perform poorly when faced with such significant distribution differences. Common solutions [1, 2, 3, 4, 5, 6, 7, 8] leverage data from multiple source domains so that a trained model can generalize to different, unseen domains. Another challenge arises when data are scarce: collecting a large amount of data may not be feasible in certain scenarios, such as fall detection for the elderly or person re-identification [9, 10, 11, 12]. These challenges have drawn increasing attention from researchers, making the development of models with strong generalization capabilities a focal point of the research community.

Most existing domain generalization (DG) methods draw inspiration from domain adaptation (DA) approaches. The common practice is to perform feature-level alignment across the source domains, leveraging the powerful nonlinear fitting capability of deep networks to learn source-domain invariances. However, this does not necessarily mean the model truly understands the content of images. Consider two images drawn from different distributions: a good feature extractor should produce accurate semantics for each of them. How, then, can we determine whether the model truly understands an image? Disentanglement and reconstruction is a common way to probe this: requiring the model to disentangle an image, reconstruct it, and retain stronger representation capabilities than before places far more stringent demands on the network.
Following this idea, this paper introduces a new disentanglement-reconstruction strategy, as illustrated in Figure 1. Figure 1(a) depicts feature-based unsupervised cross-domain adaptation methods [13, 14], which map data from different domains into a common feature space. Enforcing this alignment discards some features, leading to a significant drop in performance when dealing with unknown categories. Figure 1(b) disentangles images into a latent space using a linear disentanglement structure [16], or untangles images into a latent space and merges style factors with content factors to form new images [15]. Although such models understand image content through a unified encoder and disentanglement, they lack generalization and perform poorly on new categories.
Traditional one-stage disentangle-reconstruct methods treat AdaIN [17] only as a style feature extractor, without incorporating it into the reconstruction and disentanglement processes. This inspired us to adopt a two-stage cycle reconstruction approach, as shown in Figure 1(c). This paper employs a dual-stream image-blending reconstruction and disentanglement method that provides strong supervision for AdaIN [17] to reconstruct style, encouraging the model to learn style features with better disentangle-reconstruct capabilities.
We believe that a good disentanglement model should not only support two-stage cycle reconstruction but also keep the semantic distances between reconstructed images almost unchanged. Based on this principle, we propose a dual-stream image-blending reconstruction and disentanglement method. During training, we exploit the fact that the blended reconstructed images and the original images exhibit approximately equal semantic-space distances. In addition to intra-instance similarity supervision, we introduce inter-instance semantic distance supervision, allowing the reconstruction to maintain semantic-space consistency and to locate and encode ground-truth semantics accurately. Ultimately, this enables the model to learn generalizable feature representations from different source domains, addressing the performance degradation caused by domain distribution differences.
We highlight the contributions of this work as follows:
• This paper proposes a novel domain generalization method that explores image content and semantics from both intra-instance and inter-instance perspectives. By leveraging the relationship between reconstructed images and original images, the proposed method enhances the model's generalization capability.
• We incorporate AdaIN into a two-stage cyclic reconstruction loop, providing stronger supervisory signals for image self-reconstruction.
• By combining reconstruction, cross-cycle consistency, and semantics-adversarial objectives, DSDRNet ensures semantic consistency across different sources and achieves state-of-the-art performance over prevailing baselines on four benchmarks.
II Related Work
II-A Domain Generalization
DG requires learning a model from a number of source domains that makes good predictions on an unseen data distribution. MixStyle [18] mixed the feature statistics of instances from different source domains with a certain probability to synthesize new styles and enhance the model's generalization capability. UFDN [19] proposed a unified feature disentangler for multi-domain image translation. MTAE [20] provided a general framework for domain generalization based on multi-task autoencoders. MMD-AAE [21] combined maximum mean discrepancy with an adversarial autoencoder for domain generalization. DIVA [22] factorized the data into domain information, category information, and residual information, and used a VAE to reconstruct the data and improve generalization. CCSA [23] proposed a unified siamese architecture to address generalization in the visual domain. CrossGrad [24] used domain-guided data augmentation, perturbing samples along domain gradients to improve generalization. DDAIG [25] divided the model into a label classifier, a DoTNet, and a domain classifier, using DoTNet to map source-domain data toward unseen domains. L2A-OT [3] modeled the distribution gap between a synthesized pseudo-novel domain and the source domains and maximized their divergence using optimal transport. D-SAM [26] proposed a generic architecture that improves generalization while keeping domain-specific information separate and exploiting shared information. DRANet [15] proposed a content-adaptive domain transfer network that preserves scene structure while transferring style. JiGen [27] improved generalization by jointly solving a jigsaw-puzzle self-supervised task. DAEL [4] trained an ensemble of domain-specific experts collaboratively, with non-expert classifiers learning to mimic each domain's expert. Epi-FCR [28] decomposed the network into feature-extractor and classifier components and trained each component episodically with partners from other domains.
II-B Disentangling Representations
Disentangled representation learning analyzes the latent generative factors inside data from the perspective of data generation by studying the structure within the data. Bengio et al. [29] introduced the problem of disentangling factors of variation. VAE [30] modeled real data from a maximum-likelihood perspective, computing the KL divergence between the true posterior and the variational posterior to achieve disentanglement. β-VAE [31] improved disentanglement performance by increasing the value of β, making the learned posterior distribution statistically closer to the prior. AAE [32] used adversarial networks to measure the similarity between the posterior and prior distributions, enhancing the representational capacity of latent variables. DRAW [33] introduced a deep recurrent architecture with sequential attention for image generation. CDD [34] leveraged a pair of generative adversarial networks to build a cross-domain bidirectional image translation model. MLVAE [35] separated latent representations of grouped data-related factors by exchanging and sharing the bottom-level representations of samples. MixNMatch [36] achieved disentangled representations of pose, texture, and background for single-object scenes through data factorization and layered generation.

III Method
III-A Problem Definition
Assuming there are M source domains $\mathcal{D}_s = \{\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_M\}$, where each source domain contains N labeled samples $\{(x_j, y_j)\}_{j=1}^{N}$, the goal of the model is to learn a powerful and generalizable predictive function $h$ from the data of the M source domains. The objective is to minimize the error on an unseen target domain $\mathcal{D}_t$: $\min_h \mathbb{E}_{(x, y) \in \mathcal{D}_t}\big[\ell(h(x), y)\big]$.
III-B Proposed DSDRNet
The main pipeline of DSDRNet is depicted in Figure 2. It includes an encoder $E$, a disentangler $A$ (implemented in this paper with AdaIN [17]), a generator $G$, a discriminator $D$, and a classifier $C$. DSDRNet aims to disentangle the semantics and attributes of samples into different latent spaces through a dual-stream network, training a highly robust and generalizable classifier. It uses semantic-distance supervision to constrain the similarity of images and to reconstruct the semantics and attributes of samples. Through adversarial training and consistency principles, it enables flexible manipulation and augmentation of the training data. Ultimately, this equips the model with excellent generalization capabilities for unseen samples.
Disentangling Representation. The model's objective is to reduce semantic differences among generated samples belonging to the same class. Unlike traditional approaches that use a single unified encoder-decoder for feature extraction, this paper employs two separate networks, the disentangler $A$ and the encoder $E$. They independently perform the disentangling operation on samples $x_1$ and $x_2$ at the pixel level, resulting in $a_1$, $a_2$, $s_1$, $s_2$, $f_1$, and $f_2$, as shown below:

$a_1 = A(x_1), \quad a_2 = A(x_2), \quad (s_1, f_1) = E(x_1), \quad (s_2, f_2) = E(x_2)$  (1)

where $a_1$ and $a_2$ are the attribute codes of samples $x_1$ and $x_2$ extracted by the disentangler $A$, $s_1$ and $f_1$ denote the semantic and feature information of $x_1$ obtained through the encoder $E$, and $s_2$ and $f_2$ denote the semantic and feature information of $x_2$.
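For concreteness, the sketch below shows one common way AdaIN-style statistics can serve as the attribute branch: the channel-wise mean and standard deviation of a feature map act as the attribute code, and AdaIN re-normalizes content features to match target statistics. The function names and tensor shapes are illustrative assumptions rather than the exact architecture of the disentangler $A$.

```python
import torch

def adain_stats(feat: torch.Tensor, eps: float = 1e-5):
    """Channel-wise mean/std of a feature map (N, C, H, W), usable as an attribute code."""
    n, c = feat.shape[:2]
    flat = feat.reshape(n, c, -1)
    mean = flat.mean(dim=2).reshape(n, c, 1, 1)
    std = (flat.var(dim=2) + eps).sqrt().reshape(n, c, 1, 1)
    return mean, std

def adain(content_feat: torch.Tensor, style_mean: torch.Tensor, style_std: torch.Tensor):
    """Re-normalize content features to the target (attribute) statistics, as in AdaIN [17]."""
    c_mean, c_std = adain_stats(content_feat)
    return (content_feat - c_mean) / c_std * style_std + style_mean
```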
Reconstruction. After the disentanglement operations of the previous stage, we have $a_1$, $a_2$, $s_1$, $s_2$, $f_1$, and $f_2$. We use the generator $G$ to produce the reconstructed image $\hat{x}_1$ from the semantics and attributes of sample $x_1$, and $\hat{x}_2$ from the semantics and attributes of sample $x_2$:

$\hat{x}_1 = G(s_1, a_1), \qquad \hat{x}_2 = G(s_2, a_2)$  (2)

At the same time, we use the generator $G$ to produce the reconstructed image $u_1$ from the semantics of sample $x_1$ and the attributes of sample $x_2$, as well as $u_2$ from the semantics of sample $x_2$ and the attributes of sample $x_1$:

$u_1 = G(s_1, a_2), \qquad u_2 = G(s_2, a_1)$  (3)

where $u_1$ and $u_2$ refer to the newly generated, attribute-swapped images.
We generate the images $u_1$ and $u_2$ through the first reconstruction operation. We then apply the disentangler $A$ to $u_1$ to obtain its attribute code $a_1^{u}$, and the encoder $E$ to obtain its semantic value $s_1^{u}$ and feature value $f_1^{u}$. Likewise, we obtain the attribute code $a_2^{u}$, semantic value $s_2^{u}$, and feature value $f_2^{u}$ of $u_2$. Using these semantics and attributes, we perform a second reconstruction to obtain the images $\tilde{x}_1$ and $\tilde{x}_2$. The specific process is as follows:

$a_1^{u} = A(u_1), \; (s_1^{u}, f_1^{u}) = E(u_1), \quad a_2^{u} = A(u_2), \; (s_2^{u}, f_2^{u}) = E(u_2), \qquad \tilde{x}_1 = G(s_1^{u}, a_2^{u}), \; \tilde{x}_2 = G(s_2^{u}, a_1^{u})$  (4)

where $s_1^{u}$ and $a_1^{u}$ represent the semantics and attributes of $u_1$, $s_2^{u}$ and $a_2^{u}$ represent those of $u_2$, and $\tilde{x}_1$ and $\tilde{x}_2$ denote the images generated after the second disentangling and reconstruction stage.
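The two-stage loop above can be summarized by the following minimal sketch. It assumes the hypothetical interfaces $E(x) \rightarrow (s, f)$, $A(x) \rightarrow a$, and $G(s, a) \rightarrow$ image used in Eqs. (1)-(4); the concrete network architectures are not specified here.

```python
def two_stage_cycle(E, A, G, x1, x2):
    """Dual-stream, two-stage disentangle-and-reconstruct loop (module interfaces are assumed)."""
    # Stage 1: disentangle both samples and swap their attribute codes.
    (s1, f1), (s2, f2) = E(x1), E(x2)
    a1, a2 = A(x1), A(x2)
    x1_self, x2_self = G(s1, a1), G(s2, a2)    # self-reconstructions (Eq. 2)
    u1, u2 = G(s1, a2), G(s2, a1)              # attribute-swapped images (Eq. 3)

    # Stage 2: disentangle the swapped images and swap their attributes back.
    (s1u, f1u), (s2u, f2u) = E(u1), E(u2)
    a1u, a2u = A(u1), A(u2)
    x1_cyc, x2_cyc = G(s1u, a2u), G(s2u, a1u)  # should approximate x1 and x2 (Eq. 4)

    return {"self": (x1_self, x2_self), "swapped": (u1, u2),
            "cycle": (x1_cyc, x2_cyc), "feats": ((f1, f2), (f1u, f2u))}
```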
III-C Training Loss
The overall loss of our framework consists of the intra-instance reconstruction loss $\mathcal{L}_{intra}$, the inter-instance reconstruction loss $\mathcal{L}_{inter}$, the self-reconstruction loss $\mathcal{L}_{self}$, the cross-cycle consistency loss $\mathcal{L}_{cyc}$, the semantics adversarial loss $\mathcal{L}_{adv}$, the classification loss $\mathcal{L}_{cls}$, and the Kullback-Leibler divergence loss $\mathcal{L}_{kl}$, combined with balancing terms $\lambda$:

$\mathcal{L} = \lambda_{intra}\mathcal{L}_{intra} + \lambda_{inter}\mathcal{L}_{inter} + \lambda_{self}\mathcal{L}_{self} + \lambda_{cyc}\mathcal{L}_{cyc} + \lambda_{adv}\mathcal{L}_{adv} + \lambda_{cls}\mathcal{L}_{cls} + \lambda_{kl}\mathcal{L}_{kl}$  (5)
Intra-instance Reconstruction Loss. To keep the features of the original image consistent with the recombined features in the feature space, we measure the angle between the two feature vectors $f_1$ and $f_1^{u}$ to quantify their similarity. Cosine similarity is well suited to such calculations: its values range from -1 to 1, with values closer to 1 indicating higher similarity. The intra-instance reconstruction term for $x_1$ can therefore be expressed as:

$\mathcal{L}_{cos}(f_1, f_1^{u}) = 1 - \dfrac{f_1 \cdot f_1^{u}}{\|f_1\| \, \|f_1^{u}\|}$  (6)
This loss function quantifies the similarity between the original and recombined features by measuring the cosine of the angle between the two feature vectors, intending to make them as similar as possible.
Similarly, we obtain the cosine similarity loss between $f_2$ and $f_2^{u}$ as $\mathcal{L}_{cos}(f_2, f_2^{u})$. The overall intra-instance reconstruction loss can be represented as:

$\mathcal{L}_{intra} = \mathcal{L}_{cos}(f_1, f_1^{u}) + \mathcal{L}_{cos}(f_2, f_2^{u})$  (7)
The cosine similarity loss allows us to assess the similarity between the two feature vectors by quantifying their angular difference. By minimizing this loss, we aim to ensure that the reconstructed images remain similar to the original images in terms of feature representations.
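A minimal sketch of this intra-instance term, assuming the features are batches of vectors of shape (N, D); `cosine_loss` is an illustrative helper rather than a function defined in the paper.

```python
import torch
import torch.nn.functional as F

def cosine_loss(f_a: torch.Tensor, f_b: torch.Tensor) -> torch.Tensor:
    """1 - cos(angle) between two batches of feature vectors; 0 when perfectly aligned."""
    return (1.0 - F.cosine_similarity(f_a, f_b, dim=1)).mean()

def intra_instance_loss(f1, f1_rec, f2, f2_rec):
    """Keep each instance's reconstructed features aligned with its original features (Eq. 7)."""
    return cosine_loss(f1, f1_rec) + cosine_loss(f2, f2_rec)
```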
Inter-instance Reconstruction Loss. Inter-instance reconstruction refers to the measurement of the dissimilarity between the reconstructed images of different instances. It captures the discrepancy between the features and attributes of different instances in the reconstructed space. The inter-instance reconstruction loss can be calculated using various metrics, such as Euclidean distance, L1 distance, or Structural Similarity Index Measure (SSIM). The purpose of the inter-instance reconstruction loss is to encourage the generated images to have distinct attributes and avoid blending or mixing the attributes of different instances in the reconstruction process.
To further constrain the similarity of image reconstruction, we use the L1 norm to align the cosine similarities. Given the feature information of the two original images $x_1$ and $x_2$ as $f_1$ and $f_2$, and the feature information of the two reconstructed images $u_1$ and $u_2$ as $f_1^{u}$ and $f_2^{u}$, the inter-instance reconstruction loss can be expressed as:

$\mathcal{L}_{inter} = \big\| \mathcal{L}_{cos}(f_1, f_2) - \mathcal{L}_{cos}(f_1^{u}, f_2^{u}) \big\|_1$  (8)

where $\mathcal{L}_{cos}(f_1, f_2)$ denotes the cosine similarity loss between $f_1$ and $f_2$, and $\mathcal{L}_{cos}(f_1^{u}, f_2^{u})$ denotes the cosine similarity loss between $f_1^{u}$ and $f_2^{u}$.
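Under this reading of Eq. (8), the inter-instance term penalizes, with an L1 norm, any change in the pairwise cosine similarity between the two instances after reconstruction. A hedged sketch:

```python
import torch
import torch.nn.functional as F

def inter_instance_loss(f1, f2, f1_rec, f2_rec):
    """Preserve the semantic distance between two instances: the cosine similarity of the
    reconstructed pair should match that of the original pair (L1 between the two values)."""
    sim_orig = F.cosine_similarity(f1, f2, dim=1)
    sim_rec = F.cosine_similarity(f1_rec, f2_rec, dim=1)
    return (sim_orig - sim_rec).abs().mean()
```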
Self-reconstruction Loss. We apply an L1 loss to train $E$, $G$, and $A$, reducing the disparity between the input images $x_1$, $x_2$ and their corresponding reconstructed images $\hat{x}_1$, $\hat{x}_2$. The self-reconstruction loss is defined as:

$\mathcal{L}_{self} = \mathbb{E}\big[ \|\hat{x}_1 - x_1\|_1 + \|\hat{x}_2 - x_2\|_1 \big]$  (9)

where $\hat{x}_1 = G(s_1, a_1)$ and $\hat{x}_2 = G(s_2, a_2)$, respectively.
Cross-cycle Consistency Loss. Consistency refers to keeping representations invariant between perturbed images and features in order to learn robust representations. In this paper, the consistency loss attempts to keep the composition of semantics and attributes invariant when a domain-transferred image is re-projected into the representation space. After the two translation stages, the reconstructed image should be consistent with the original image in both semantics and attributes. To enforce this constraint, we define the cross-cycle consistency loss as follows:

$\mathcal{L}_{cyc} = \mathbb{E}\big[ \|\tilde{x}_1 - x_1\|_1 + \|\tilde{x}_2 - x_2\|_1 \big]$  (10)

where $\tilde{x}_1 = G(s_1^{u}, a_2^{u})$ and $\tilde{x}_2 = G(s_2^{u}, a_1^{u})$. This loss explicitly encourages consistency in semantics and in the appearance of attributes, which is conducive to improving the generalization of the model.
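Both pixel-level terms reduce to simple L1 penalties. A minimal sketch, assuming the reconstructions produced by the two-stage loop sketched earlier:

```python
import torch.nn.functional as F

def self_reconstruction_loss(x1, x2, x1_self, x2_self):
    """L1 between each input and its own reconstruction G(s_i, a_i) (Eq. 9)."""
    return F.l1_loss(x1_self, x1) + F.l1_loss(x2_self, x2)

def cross_cycle_loss(x1, x2, x1_cyc, x2_cyc):
    """L1 between each input and its swap-and-swap-back reconstruction (Eq. 10)."""
    return F.l1_loss(x1_cyc, x1) + F.l1_loss(x2_cyc, x2)
```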
Semantics Adversarial Loss. To ensure semantic consistency between the original image $x_1$ and the synthesized image $u_1$, we introduce a content discriminator $D$ to distinguish real images from synthesized ones, while the generator $G$ attempts to produce images that fool it. The same scheme supervises semantic consistency between the original image $x_2$ and the synthesized image $u_2$. The semantics-based adversarial loss can be expressed as:

$\mathcal{L}_{adv} = \mathbb{E}\big[\log D(x_1) + \log(1 - D(u_1))\big] + \mathbb{E}\big[\log D(x_2) + \log(1 - D(u_2))\big]$  (11)
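The exact adversarial formulation is not fully specified above, so the sketch below uses a standard binary GAN objective as one plausible instantiation; the discriminator `D` is assumed to return raw logits.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, real, fake):
    """D should score original images as real and attribute-swapped reconstructions as fake."""
    real_logits, fake_logits = D(real), D(fake.detach())
    return (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
            + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))

def generator_adversarial_loss(D, fake):
    """The generator tries to make the swapped reconstruction indistinguishable from real images."""
    fake_logits = D(fake)
    return F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
```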
Kullback-Leibler Divergence Loss. To further ensure that the model keeps semantics invariant during reconstruction, we also apply a distribution-fitting loss between the classification predictions of the original image $x_1$, denoted $p_1$, and the classification predictions of the reconstructed image $u_1$, denoted $q_1$:

$\mathcal{L}_{kl}(x_1, u_1) = \mathrm{KL}(p_1 \,\|\, q_1) = \sum_{k} p_1(k) \log \dfrac{p_1(k)}{q_1(k)}$  (12)

where $p_1$ represents the predicted probability distribution of $x_1$ and $q_1$ represents that of $u_1$. Similarly, the Kullback-Leibler divergence can be computed for $x_2$ and $u_2$, and the final result can be expressed as:

$\mathcal{L}_{kl} = \mathrm{KL}(p_1 \,\|\, q_1) + \mathrm{KL}(p_2 \,\|\, q_2)$  (13)
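A minimal PyTorch sketch of this prediction-matching term, assuming `logits_orig` and `logits_rec` are the classifier outputs for an image and for its reconstruction:

```python
import torch.nn.functional as F

def prediction_kl(logits_orig, logits_rec):
    """KL divergence between the class distributions predicted for an image and its reconstruction."""
    p = F.softmax(logits_orig, dim=1)         # reference distribution from the original image
    log_q = F.log_softmax(logits_rec, dim=1)  # distribution to be matched
    return F.kl_div(log_q, p, reduction="batchmean")
```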
Classification Loss. To improve classification accuracy, we employ multiple class identifiers, denoted collectively as $C$. Given samples $x_1$ and $x_2$, encoded by the encoder $E$ to obtain $s_1$ and $s_2$, the classifier $C$ produces the predictions $\hat{y}_1$ and $\hat{y}_2$. The K-way class identifier is supervised by the cross-entropy loss for accurate label prediction:

$\mathcal{L}_{ce}(x_1) = -\sum_{k=1}^{K} y_{1,k} \log \hat{y}_{1,k}$  (14)

Similarly, the same procedure is applied to $x_2$ and to the reconstructed images $u_1$ and $u_2$, and the final overall cross-entropy loss can be expressed as:

$\mathcal{L}_{cls} = \mathcal{L}_{ce}(x_1) + \mathcal{L}_{ce}(x_2) + \mathcal{L}_{ce}(u_1) + \mathcal{L}_{ce}(u_2)$  (15)
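Putting the pieces together, the overall objective of Eq. (5) is a weighted sum of the individual terms. The sketch below assembles it with placeholder weights; the dictionary keys and values are illustrative and do not reproduce the paper's exact settings.

```python
def total_loss(losses: dict, weights: dict):
    """Weighted sum of the DSDRNet objectives; unspecified weights default to 1."""
    return sum(weights.get(name, 1.0) * value for name, value in losses.items())

# Example assembly (names are illustrative):
# losses = {"cls": l_cls, "self": l_self, "cyc": l_cyc, "adv": l_adv,
#           "intra": l_intra, "inter": l_inter, "kl": l_kl}
# loss = total_loss(losses, weights={})  # all weights default to 1 in this sketch
```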
IV Experiment
For a fair comparison with prior work, we select one domain as the test domain and use the remaining domains as source domains. To ensure reliable results, we conduct three runs with different random seeds and report the mean top-1 classification accuracy. Appendix -A provides detailed information about the datasets, Appendix -B introduces the comparison methods, and Appendix -C covers the experimental details.
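The leave-one-domain-out protocol used throughout the experiments can be summarized as follows; `train_and_eval` is a hypothetical callable that trains on the given source domains and returns top-1 accuracy on the held-out target domain.

```python
import statistics

def leave_one_domain_out(domains, train_and_eval, seeds=(0, 1, 2)):
    """For each held-out target domain, train on the remaining domains and
    average top-1 accuracy over several random seeds."""
    results = {}
    for target in domains:
        sources = [d for d in domains if d != target]
        accs = [train_and_eval(sources, target, seed=s) for s in seeds]
        results[target] = statistics.mean(accs)
    return results
```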
IV-A Experiments on Digits-DG
Method | MNIST | MNISTM | SVHN | SYN | Avg |
---|---|---|---|---|---|
ERM | 95.8 | 58.8 | 61.7 | 78.6 | 73.7 |
DANN [13] | 97.8 | 55.6 | 61.9 | 89.4 | 76.2 |
CORAL [37] | 97.6 | 57.7 | 57.8 | 90.1 | 75.8 |
Mixup [38] | 97.5 | 58.0 | 54.8 | 89.8 | 75.0 |
JiGen [27] | 96.5 | 61.4 | 63.7 | 74.0 | 73.9 |
GroupDRO [39] | 97.5 | 53.5 | 55.6 | 92.2 | 74.7 |
RSC [40] | 97.8 | 56.3 | 62.4 | 89.3 | 76.4 |
MixStyle [18] | 96.5 | 63.5 | 64.7 | 81.2 | 76.5 |
ANDMask [41] | 96.9 | 56.0 | 59.5 | 88.2 | 75.1 |
CCSA [23] | 95.2 | 58.2 | 65.5 | 79.1 | 74.5 |
CrossGrad [24] | 96.7 | 61.1 | 65.3 | 80.2 | 75.8 |
DDAIG [25] | 96.6 | 64.1 | 68.6 | 81.0 | 77.6 |
MMD-AAE [21] | 96.5 | 58.4 | 65.0 | 78.4 | 74.6 |
L2A-OT [3] | 96.7 | 63.9 | 68.6 | 83.2 | 78.1 |
DSDRNet | 98.4 | 65.2 | 70.1 | 89.8 | 80.9 |
As shown in Table I, DSDRNet achieves an accuracy of 98.4% on the MNIST dataset, surpassing the ERM baseline by +2.6% and the best prior results, DANN [13] and RSC [40], by +0.6%. On the MNISTM dataset, the model achieves 65.2%, surpassing ERM by +6.4% and the best prior method, DDAIG [25], by +1.1%. On the SVHN dataset, the model exceeds ERM by +8.4% and the previous best, DDAIG [25] and L2A-OT [3], by +1.5%. On the SYN dataset, the model outperforms ERM by +11.2% but scores 2.4% lower than the best method, GroupDRO [39]; this gap can be attributed to the randomly inserted backgrounds in SYN, which cause more pronounced oscillations during training. Overall, DSDRNet achieves an average accuracy of 80.9% across the four datasets, surpassing ERM by +7.2% and the best prior method, L2A-OT [3], by +2.8%.
Method | Art | Cartoon | Photo | Sketch | Avg |
---|---|---|---|---|---|
ERM | 77.0 | 75.9 | 96.0 | 69.2 | 79.5 |
DANN [13] | 78.7 | 75.3 | 94.0 | 77.8 | 81.4 |
CORAL [37] | 77.8 | 77.1 | 92.6 | 80.6 | 82.0 |
Mixup [38] | 79.1 | 73.5 | 94.5 | 76.7 | 80.9 |
GroupDRO [39] | 76.0 | 76.1 | 91.2 | 79.1 | 80.6 |
RSC [40] | 79.7 | 76.1 | 95.6 | 76.6 | 82.0 |
MMLD [42] | 81.3 | 77.2 | 96.1 | 72.3 | 81.7 |
ANDMask [41] | 76.2 | 73.8 | 91.6 | 78.1 | 79.9 |
MixStyle [18] | 84.1 | 78.8 | 96.1 | 75.9 | 83.7 |
SagNet [43] | 83.6 | 77.7 | 95.5 | 76.3 | 83.3 |
CCSA [23] | 80.5 | 76.9 | 93.6 | 66.8 | 79.5 |
CrossGrad [24] | 79.8 | 76.8 | 96.0 | 70.2 | 80.7 |
DDAIG [25] | 84.2 | 78.1 | 95.3 | 74.7 | 83.1 |
MMD-AAE [21] | 75.2 | 72.7 | 96.0 | 64.2 | 77.0 |
L2A-OT [3] | 83.3 | 78.2 | 96.2 | 73.6 | 82.8 |
D-SAM [26] | 77.3 | 72.4 | 95.3 | 77.8 | 80.7 |
JiGen [27] | 79.4 | 75.3 | 96.0 | 71.6 | 80.6 |
Epi-FCR [28] | 82.1 | 77.0 | 93.9 | 73.0 | 81.5 |
DAEL [4] | 84.6 | 74.4 | 95.6 | 78.9 | 83.4 |
MetaReg [44] | 83.7 | 77.2 | 95.5 | 70.3 | 81.7 |
DSDRNet | 85.2 | 78.4 | 96.8 | 81.2 | 85.4 |
IV-B Experiments on PACS
Based on Table II, in the Art domain, DSDRNet achieves an accuracy of 85.2%, surpassing ERM by +8.2% and outperforming the best-performing method, DAEL [4], by +0.6%. In the Cartoon domain, the model achieves 78.4%, outperforming ERM by +2.5% while falling 0.4% short of the best method, MixStyle [18]. In the Photo domain, DSDRNet outperforms ERM by +0.8% and the current best, L2A-OT [3], by +0.6%. In the Sketch domain, the model achieves 81.2%, surpassing ERM by +12.0% and the current SOTA, CORAL [37], by +0.6%. On the PACS dataset as a whole, DSDRNet reaches an accuracy of 85.4%, significantly outperforming comparable methods, exceeding ERM by +5.9% and MixStyle by +1.7%.
IV-C Experiments on OfficeHome
As shown in Table III, DSDRNet achieves an accuracy of 67.3% on the OfficeHome dataset, surpassing ERM by +2.6%, and outperforming the current best method, DAEL [4], by +1.2%. Specifically, in the Art domain, the model achieves an accuracy of 61.7%, surpassing ERM by +2.8%, and outperforming L2A-OT [3] by +1.1%. In the Clipart domain, DSDRNet reaches 54.6%, surpassing ERM by +5.2%, and slightly below DAEL [4] by 0.5%. In the Product domain, the model outperforms ERM by +0.9% and surpasses L2A-OT [3] by +0.4%. In the RealWorld domain, the model outperforms ERM by +1.6% and surpasses L2A-OT [3] by +0.8%. These results showcase the efficacy of DSDRNet in enhancing domain generalization performance on the OfficeHome dataset.
Method | Art | Clipart | Product | RealWorld | Avg |
---|---|---|---|---|---|
ERM | 58.9 | 49.4 | 74.3 | 76.2 | 64.7 |
DANN [13] | 57.7 | 44.4 | 72.0 | 72.5 | 61.7 |
CORAL [37] | 58.8 | 48.8 | 72.3 | 73.6 | 63.4 |
Mixup [38] | 55.8 | 47.9 | 72.0 | 72.8 | 62.1 |
GroupDRO [39] | 57.6 | 48.8 | 71.5 | 73.2 | 62.8 |
RSC [40] | 59.0 | 49.2 | 72.5 | 74.2 | 63.7 |
ANDMask [41] | 56.7 | 45.9 | 70.7 | 73.2 | 61.6 |
SagNet [43] | 60.2 | 45.4 | 70.4 | 73.4 | 62.3 |
CCSA [23] | 59.9 | 49.9 | 74.1 | 75.7 | 64.9 |
CrossGrad [24] | 58.4 | 49.4 | 73.9 | 75.8 | 64.4 |
DDAIG [25] | 59.2 | 52.3 | 74.6 | 76.0 | 65.5 |
MMD-AAE [21] | 56.5 | 47.3 | 72.1 | 74.8 | 62.7 |
L2A-OT [3] | 60.6 | 50.1 | 74.8 | 77.0 | 65.6 |
D-SAM [26] | 58.0 | 44.4 | 69.2 | 71.5 | 60.8 |
JiGen [27] | 53.0 | 47.5 | 71.5 | 72.8 | 61.2 |
DAEL [4] | 59.4 | 55.1 | 74.0 | 75.7 | 66.1 |
DSDRNet | 61.7 | 54.6 | 75.2 | 77.8 | 67.3 |
IV-D Experiments on DomainNet
As shown in Table IV, on the DomainNet dataset, ERM achieves an average accuracy of 41.6% across the 6 domains. In contrast, DSDRNet achieves an accuracy of 43.3%, surpassing ERM by +1.7% and outperforming MLDG [45] by +0.8%. Specifically, in the Clipart domain, the model achieves an accuracy of 60.7%, surpassing ERM by +2.3% and outperforming MLDG [45] by +1.4%. In the Infograph domain, DSDRNet reaches an accuracy of 21.5%, surpassing ERM by +1.7% and outperforming CORAL by +0.7%. The model achieved an accuracy of 47.9% in the Painting domain, which is slightly higher than ERM by +0.6% and slightly lower than MLDG [45] by 0.9%. The model is +2.4% higher than ERM and +1.8% higher than MLDG [45] on the Quickdraw domain. In the Real domain, the model performed +1.5% better than ERM and +1.0% better than MLDG [45]. The model is +2.0% higher than ERM and +0.7% higher than MLDG [45] on the Sketch domain. These results indicate that DSDRNet maintains a relatively high level of generalization when encountering datasets with a large number of samples and complex variations.
Method | Clipart | Infograph | Painting | Quickdraw | Real | Sketch | Avg |
---|---|---|---|---|---|---|---|
ERM | 58.4 | 19.8 | 47.3 | 13.4 | 60.7 | 49.9 | 41.6 |
IRM [46] | 51.0 | 16.7 | 38.8 | 11.8 | 53.2 | 44.7 | 36.0 |
DRO [39] | 47.8 | 17.2 | 36.3 | 9.0 | 52.8 | 40.7 | 34.0 |
Mixup [38] | 55.8 | 19.2 | 46.2 | 12.8 | 58.7 | 49.2 | 40.3 |
MLDG [45] | 59.3 | 20.3 | 48.8 | 14.0 | 61.2 | 51.2 | 42.5 |
CORAL [37] | 58.8 | 20.8 | 47.5 | 13.6 | 61.0 | 50.8 | 42.1 |
MMD [21] | 54.6 | 19.6 | 44.9 | 12.6 | 59.7 | 47.5 | 39.8 |
DANN [13] | 53.8 | 17.5 | 43.5 | 11.8 | 56.4 | 46.7 | 38.3 |
C-DANN [47] | 53.4 | 18.4 | 44.7 | 12.9 | 57.5 | 46.5 | 38.9 |
DSDRNet | 60.7 | 21.5 | 47.9 | 15.8 | 62.2 | 51.9 | 43.3 |
V Ablation Study
V-A Impact of Different Loss
The model involves seven different loss terms. To evaluate their effects on the Digits-DG dataset, we experiment with various combinations of losses. The selection strategy is as follows: first, the classification loss $\mathcal{L}_{cls}$, which is essential for the task, is used throughout the entire experiment. Next, we evaluate the impact of the self-reconstruction loss $\mathcal{L}_{self}$. Subsequently, we test the effects of the cycle loss $\mathcal{L}_{cyc}$ and the adversarial loss $\mathcal{L}_{adv}$ on the model's performance. We then separately test the intra-instance loss $\mathcal{L}_{intra}$ and the inter-instance loss $\mathcal{L}_{inter}$, and finally combine $\mathcal{L}_{intra}$ and $\mathcal{L}_{inter}$ to assess the model's performance. The results are presented in Table V.
When the model is trained with only the classification loss $\mathcal{L}_{cls}$, its performance matches ERM at 73.7%. Adding the $\mathcal{L}_{self}$ constraint improves accuracy by +0.5%, an effect similar to adding only $\mathcal{L}_{cyc}$. Combining $\mathcal{L}_{self}$ and $\mathcal{L}_{cyc}$ reaches 75.4%, a +1.7% improvement over ERM, and adding $\mathcal{L}_{adv}$ on top raises accuracy to 76.2%. The individual effects of $\mathcal{L}_{intra}$ and $\mathcal{L}_{inter}$ are similar (77.8% and 77.9%), but when the intra-instance and inter-instance constraints are combined, the effect becomes more pronounced, boosting accuracy to 78.8%. Combining all of the losses yields the final performance of 80.9%. This indicates that the losses contribute to varying degrees, with the intra-instance, inter-instance, and adversarial constraints exerting the strongest effects on the model.
$\mathcal{L}_{cls}$ | $\mathcal{L}_{self}$ | $\mathcal{L}_{cyc}$ | $\mathcal{L}_{adv}$ | $\mathcal{L}_{intra}$ | $\mathcal{L}_{inter}$ | $\mathcal{L}_{kl}$ | Avg |
---|---|---|---|---|---|---|---|
✓ |  |  |  |  |  |  | 73.7 |
✓ | ✓ |  |  |  |  |  | 74.2 |
✓ |  | ✓ |  |  |  |  | 74.4 |
✓ | ✓ | ✓ |  |  |  |  | 75.4 |
✓ | ✓ | ✓ | ✓ |  |  |  | 76.2 |
✓ | ✓ | ✓ |  | ✓ |  |  | 77.8 |
✓ | ✓ | ✓ |  |  | ✓ |  | 77.9 |
✓ | ✓ | ✓ |  | ✓ | ✓ |  | 78.8 |
✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 80.9 |
V-B Analyzing Loss Value Fluctuations
The main losses of DSDRNet are plotted in Figure 3; the details are as follows. One loss decreased from an initial value of 2.42 to roughly 0.7 after 200 iterations and then kept oscillating between 0.5 and 2. A second loss oscillated with a relatively large amplitude: it fluctuated violently over the first 100 iterations and stabilized at about 5 after 400 iterations. Another loss oscillated in the early stage because the relatively small number of training samples led to poor-quality generated images. A further loss fell from a peak value of 11.12, largely stabilized after 600 iterations, and settled at around 1.5, while another declined quickly at the beginning of training, owing to the relatively small amount of data, began to stabilize after 200 iterations, and finally oscillated around 0.5. The classification loss fell from an initial value of 26 to 5.5 after 400 iterations and finally oscillated around 5.0. The early oscillation occurs because the model has not yet learned common features from the relatively few samples, whereas the later oscillation is caused by randomly reading data from two domains whose semantics may differ considerably. As more training samples are seen, the classification loss decreases because the network learns more useful features, gradually improving the recognition rate of the samples.



V-C Feature Visualizations
To further validate the performance of DSDRNet, we utilize t-SNE to visualize the OfficeHome dataset before and after processing with the model. The resulting visualization is shown in Figure 4. In the left image, the distribution of data from the RealWorld domain and the Product domain is mixed together, making it difficult to distinguish between different categories. However, in the right image, after processing with DSDRNet, the data from different domains are aggregated together based on their respective categories.
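A typical way to produce such a visualization with scikit-learn is sketched below; the feature matrix is assumed to come from the encoder, and the t-SNE hyper-parameters shown are common defaults rather than the exact settings used for Figure 4.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(features: np.ndarray, labels: np.ndarray, out_path: str):
    """Project encoder features to 2-D with t-SNE and color the points by class label."""
    emb = TSNE(n_components=2, init="pca", perplexity=30, random_state=0).fit_transform(features)
    plt.figure(figsize=(5, 5))
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=5, cmap="tab10")
    plt.axis("off")
    plt.savefig(out_path, dpi=200, bbox_inches="tight")
```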
VI Conclusion
In this paper, we proposed a dual-stream disentanglement and reconstruction network that achieves favorable results on four generalization tasks. However, the disentanglement and reconstruction techniques require a substantial number of loss constraints and tuning tricks to ensure the model converges to a good state. The interpretability of the model's behavior remains limited, and the disentangled representation often collapses into a single dimension that encompasses all of the attribute information; such limited representations hinder the method's generalization. Therefore, applying causal reasoning to disentangled representations to enhance interpretability, and analyzing inter-dimensional correlations in multi-dimensional representations for improved disentanglement, are promising directions for future work.
References
- [1] K. Zhou, Z. Liu, Y. Qiao, T. Xiang, and C. C. Loy, “Domain generalization: A survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
- [2] J. Wang, C. Lan, C. Liu, Y. Ouyang, T. Qin, W. Lu, Y. Chen, W. Zeng, and P. Yu, “Generalizing to unseen domains: A survey on domain generalization,” IEEE Transactions on Knowledge and Data Engineering, 2022.
- [3] K. Zhou, Y. Yang, T. Hospedales, and T. Xiang, “Learning to generate novel domains for domain generalization,” in European conference on computer vision. Springer, 2020, pp. 561–578.
- [4] K. Zhou, Y. Yang, Y. Qiao, and T. Xiang, “Domain adaptive ensemble learning,” IEEE Transactions on Image Processing, vol. 30, pp. 8008–8018, 2021.
- [5] R. Wang, M. Yi, Z. Chen, and S. Zhu, “Out-of-distribution generalization with causal invariant transformations,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 375–385.
- [6] F.-E. Yang, Y.-C. Cheng, Z.-Y. Shiau, and Y.-C. F. Wang, “Adversarial teacher-student representation learning for domain generalization,” Advances in Neural Information Processing Systems, vol. 34, pp. 19 448–19 460, 2021.
- [7] J. Guo, L. Qi, and Y. Shi, “Domaindrop: Suppressing domain-sensitive channels for domain generalization,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 19 114–19 124.
- [8] Y. Jiang, Y. Wang, R. Zhang, Q. Xu, Y. Zhang, X. Chen, and Q. Tian, “Domain-conditioned normalization for test-time domain generalization,” in European Conference on Computer Vision. Springer, 2022, pp. 291–307.
- [9] K. Zhou, Y. Yang, A. Cavallaro, and T. Xiang, “Learning generalisable omni-scale representations for person re-identification,” IEEE transactions on pattern analysis and machine intelligence, vol. 44, no. 9, pp. 5056–5069, 2021.
- [10] P. Zhang, H. Dou, Y. Yu, and X. Li, “Adaptive cross-domain learning for generalizable person re-identification,” in European Conference on Computer Vision. Springer, 2022, pp. 215–232.
- [11] B. Jiao, L. Liu, L. Gao, G. Lin, L. Yang, S. Zhang, P. Wang, and Y. Zhang, “Dynamically transformed instance normalization network for generalizable person re-identification,” in European Conference on Computer Vision. Springer, 2022, pp. 285–301.
- [12] L. Qi, J. Shen, J. Liu, Y. Shi, and X. Geng, “Label distribution learning for generalizable multi-source person re-identification,” IEEE Transactions on Information Forensics and Security, vol. 17, pp. 3139–3150, 2022.
- [13] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-adversarial training of neural networks,” The Journal of Machine Learning Research, vol. 17, no. 1, pp. 2096–2030, 2016.
- [14] J. Hoffman, E. Tzeng, T. Park, J.-Y. Zhu, P. Isola, K. Saenko, A. Efros, and T. Darrell, “Cycada: Cycle-consistent adversarial domain adaptation,” in International conference on machine learning. Pmlr, 2018, pp. 1989–1998.
- [15] S. Lee, S. Cho, and S. Im, “Dranet: Disentangling representation and adaptation networks for unsupervised cross-domain adaptation,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021, pp. 15 252–15 261.
- [16] R. Zhang, S. Tang, Y. Li, J. Guo, Y. Zhang, J. Li, and S. Yan, “Style separation and synthesis via generative adversarial networks,” in Proceedings of the 26th ACM international conference on Multimedia, 2018, pp. 183–191.
- [17] X. Huang and S. Belongie, “Arbitrary style transfer in real-time with adaptive instance normalization,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 1501–1510.
- [18] K. Zhou, Y. Yang, Y. Qiao, and T. Xiang, “Domain generalization with mixstyle,” in International Conference on Learning Representations, 2020.
- [19] A. H. Liu, Y.-C. Liu, Y.-Y. Yeh, and Y.-C. F. Wang, “A unified feature disentangler for multi-domain image translation and manipulation,” Advances in neural information processing systems, vol. 31, 2018.
- [20] M. Ghifary, W. B. Kleijn, M. Zhang, and D. Balduzzi, “Domain generalization for object recognition with multi-task autoencoders,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 2551–2559.
- [21] H. Li, S. J. Pan, S. Wang, and A. C. Kot, “Domain generalization with adversarial feature learning,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 5400–5409.
- [22] M. Ilse, J. M. Tomczak, C. Louizos, and M. Welling, “Diva: Domain invariant variational autoencoders,” in Medical Imaging with Deep Learning. PMLR, 2020, pp. 322–348.
- [23] S. Motiian, M. Piccirilli, D. A. Adjeroh, and G. Doretto, “Unified deep supervised domain adaptation and generalization,” in 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, 2017, pp. 5716–5726.
- [24] S. Shankar, V. Piratla, S. Chakrabarti, S. Chaudhuri, P. Jyothi, and S. Sarawagi, “Generalizing across domains via cross-gradient training,” in International Conference on Learning Representations, 2018.
- [25] K. Zhou, Y. Yang, T. Hospedales, and T. Xiang, “Deep domain-adversarial image generation for domain generalisation,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 07, 2020, pp. 13 025–13 032.
- [26] A. D’Innocente and B. Caputo, “Domain generalization with domain-specific aggregation modules,” in German Conference on Pattern Recognition. Springer, 2018, pp. 187–198.
- [27] F. M. Carlucci, A. D’Innocente, S. Bucci, B. Caputo, and T. Tommasi, “Domain generalization by solving jigsaw puzzles,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 2229–2238.
- [28] D. Li, J. Zhang, Y. Yang, C. Liu, Y.-Z. Song, and T. M. Hospedales, “Episodic training for domain generalization,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 1446–1455.
- [29] Y. Bengio et al., “Learning deep architectures for ai,” Foundations and trends® in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.
- [30] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” arXiv preprint arXiv:1312.6114, 2013.
- [31] I. Higgins, L. Matthey, A. Pal, C. P. Burgess, X. Glorot, M. M. Botvinick, S. Mohamed, and A. Lerchner, “beta-vae: Learning basic visual concepts with a constrained variational framework.” ICLR (Poster), vol. 3, 2017.
- [32] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey, “Adversarial autoencoders,” arXiv preprint arXiv:1511.05644, 2015.
- [33] K. Gregor, I. Danihelka, A. Graves, D. Rezende, and D. Wierstra, “Draw: A recurrent neural network for image generation,” in International conference on machine learning. PMLR, 2015, pp. 1462–1471.
- [34] A. Gonzalez-Garcia, J. Van De Weijer, and Y. Bengio, “Image-to-image translation for cross-domain disentanglement,” Advances in neural information processing systems, vol. 31, 2018.
- [35] D. Bouchacourt, R. Tomioka, and S. Nowozin, “Multi-level variational autoencoder: Learning disentangled representations from grouped observations,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.
- [36] Y. Li, K. K. Singh, U. Ojha, and Y. J. Lee, “Mixnmatch: Multifactor disentanglement and encoding for conditional image generation,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 8039–8048.
- [37] B. Sun and K. Saenko, “Deep coral: Correlation alignment for deep domain adaptation,” in Computer Vision–ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part III 14. Springer, 2016, pp. 443–450.
- [38] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, “mixup: Beyond empirical risk minimization,” in International Conference on Learning Representations, 2018.
- [39] S. Sagawa, P. W. Koh, T. B. Hashimoto, and P. Liang, “Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization,” in International Conference on Learning Representations (ICLR), 2020.
- [40] Z. Huang, H. Wang, E. P. Xing, and D. Huang, “Self-challenging improves cross-domain generalization,” in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16. Springer, 2020, pp. 124–140.
- [41] G. Parascandolo, A. Neitz, A. Orvieto, L. Gresele, and B. Schölkopf, “Learning explanations that are hard to vary,” in ICLR, 2021.
- [42] T. Matsuura and T. Harada, “Domain generalization using a mixture of multiple latent domains,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 07, 2020, pp. 11 749–11 756.
- [43] H. Nam, H. Lee, J. Park, W. Yoon, and D. Yoo, “Reducing domain gap by reducing style bias,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 8690–8699.
- [44] Y. Balaji, S. Sankaranarayanan, and R. Chellappa, “Metareg: Towards domain generalization using meta-regularization,” Advances in neural information processing systems, vol. 31, 2018.
- [45] D. Li, Y. Yang, Y.-Z. Song, and T. Hospedales, “Learning to generalize: Meta-learning for domain generalization,” in Proceedings of the AAAI conference on artificial intelligence, vol. 32, no. 1, 2018.
- [46] M. Arjovsky, L. Bottou, I. Gulrajani, and D. Lopez-Paz, “Invariant risk minimization,” arXiv preprint arXiv:1907.02893, 2019.
- [47] Y. Li, X. Tian, M. Gong, Y. Liu, T. Liu, K. Zhang, and D. Tao, “Deep domain generalization via conditional invariant adversarial networks,” in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 624–639.
- [48] D. Li, Y. Yang, Y.-Z. Song, and T. M. Hospedales, “Deeper, broader and artier domain generalization,” in ICCV, 2017.
- [49] H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan, “Deep hashing network for unsupervised domain adaptation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 5018–5027.
- [50] X. Peng, Q. Bai, X. Xia, Z. Huang, K. Saenko, and B. Wang, “Moment matching for multi-source domain adaptation,” in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 1406–1415.
- [51] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
- [52] Y. Ganin and V. Lempitsky, “Unsupervised domain adaptation by backpropagation,” in ICML. PMLR, 2015, pp. 1180–1189.
- [53] N. Yuval, “Reading digits in natural images with unsupervised feature learning,” in Proceedings of the NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
- [54] S. Zagoruyko and N. Komodakis, “Wide residual networks,” arXiv preprint arXiv:1605.07146, 2016.
- [55] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
- [56] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., “Imagenet large scale visual recognition challenge,” International journal of computer vision, vol. 115, no. 3, pp. 211–252, 2015.
- [57] I. Gulrajani and D. Lopez-Paz, “In search of lost domain generalization,” in 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. [Online]. Available: https://openreview.net/forum?id=lQdXeXDoWtI
-A Datasets
We compare the DSDRNet model with state-of-the-art DG algorithms on digit classification (Digits-DG) and image recognition (PACS [48], OfficeHome [49], and DomainNet [50]).
Digits-DG. Digits-DG is a collection of four digit-classification benchmarks, namely MNIST [51], MNISTM [52], Synthetic Digits [52], and SVHN [53]; the main differences among these datasets lie in image quality, background, and font style. MNIST is a classic dataset of black-and-white handwritten digits, with 28×28-pixel images comprising 55,000 training samples and 10,000 testing samples. MNISTM is a dataset of color digit images of size 32×32 pixels, with 55,000 training samples and 10,000 testing samples. SVHN (Street View House Numbers) is a dataset of color house-number images of size 32×32 pixels, with 73,257 training samples and 26,032 testing samples. Synthetic Digits consists of synthetic images of English digits embedded in random backgrounds and includes 25,000 training samples.
PACS. PACS [48] is a frequently employed DG dataset consisting of 4 domains: Art Painting (2,048 images), Cartoon (2,344 images), Photo (1,670 images), and Sketch (3,929 images). This dataset encompasses seven object categories: dog, elephant, giraffe, guitar, horse, house, and person.
OfficeHome. OfficeHome [49], originally introduced for domain adaptation, has become popular in the DG community. It comprises approximately 15,500 images spanning 65 categories, primarily featuring office and home objects. Similar to PACS, it is divided into four domains: Artistic, Clipart, Product, and Real World, each covering the same 65 object categories, such as TV, Telephone, Refrigerator, Radio, Printer, Pen, Pencil, Notebook, Mouse, Flowers, and Chairs.
DomainNet. DomainNet [50] is a commonly used dataset for DG tasks, consisting of 6 domains, 345 classes, and a total of 586,575 samples distributed as follows: Clipart, 48,129 samples; Infograph, 51,605 samples; Painting, 72,266 samples; Quickdraw, 172,500 samples; Real, 172,947 samples; Sketch, 69,128 samples. This dataset provides a diverse and challenging environment for DG experiments.
-B Methods
We compare DSDRNet with SOTA methods: DANN [13], CORAL [37], Mixup [38], GroupDRO [39], RSC [40], MMLD [42], ANDMask [41], MixStyle [18], SagNet [43], CCSA [23], CrossGrad [24], DDAIG [25], MMD-AAE [21], L2A-OT [3], D-SAM [26], JiGen [27], Epi-FCR [28], DAEL [4], IRM [46], DRO [39], MLDG [45], MMD [21], C-DANN [47], and MetaReg [44]. ERM serves as the baseline: it directly feeds all source-domain data into the model for training without using any DG tricks. The authors' original experimental setups and reported results are preserved in our experiments.
-C Experimental Details
We implement all methods in PyTorch and run the experiments on two NVIDIA GeForce RTX 3090 GPUs. We use Wide_ResNet_28 [54] as the backbone network on the Digits-DG dataset. On the OfficeHome and PACS datasets, we use a ResNet-18 [55] backbone pre-trained on ImageNet [56], while for the DomainNet dataset we use a ResNet-50 [55] backbone.
Following the DomainBed [57] settings, the batch size is 64 for the Digits-DG dataset and 32 for the other datasets. The encoder $E$, discriminator $D$, generator $G$, and disentangler $A$ are optimized with Adam with weight decay, while the classifier $C$ is optimized with SGD with a momentum of 0.9 and weight decay. In all experiments, the loss-balancing hyper-parameters are kept fixed: five of the weights are set to 1, one to 8, and one to 0.5.
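A minimal sketch of the optimizer setup described above; the learning-rate and weight-decay values are placeholders, since the exact values are not reproduced here.

```python
import itertools
import torch

def build_optimizers(encoder, discriminator, generator, disentangler, classifier,
                     adam_lr=1e-4, adam_wd=1e-4, sgd_lr=1e-3, sgd_wd=1e-4):
    """Adam for the encoder/discriminator/generator/disentangler and SGD (momentum 0.9)
    for the classifier; all rates and decays here are placeholder values."""
    adam_params = itertools.chain(encoder.parameters(), discriminator.parameters(),
                                  generator.parameters(), disentangler.parameters())
    opt_adam = torch.optim.Adam(adam_params, lr=adam_lr, weight_decay=adam_wd)
    opt_sgd = torch.optim.SGD(classifier.parameters(), lr=sgd_lr,
                              momentum=0.9, weight_decay=sgd_wd)
    return opt_adam, opt_sgd
```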