Wasserstein Geodesic Generator for Conditional Distributions
Abstract
Generating samples given a specific label requires estimating conditional distributions. We derive a tractable upper bound of the Wasserstein distance between conditional distributions to lay the theoretical groundwork to learn conditional distributions. Based on this result, we propose a novel conditional generation algorithm where conditional distributions are fully characterized by a metric space defined by a statistical distance. We employ optimal transport theory to propose the Wasserstein geodesic generator, a new conditional generator that learns the Wasserstein geodesic. The proposed method learns both conditional distributions for observed domains and optimal transport maps between them. The conditional distributions given unobserved intermediate domains are on the Wasserstein geodesic between conditional distributions given two observed domain labels. The proposed method generates the Wasserstein geodesic under some conditions. Experiments on face images with light conditions as domain labels demonstrate the efficacy of the proposed method.
Keywords: Generative model, Optimal transport, Conditional generation, Wasserstein geodesic, Wasserstein barycenter
1 Introduction
Conditional generation is the task of constructing synthetic samples following target distributions given by specific domain labels such as age, emotion, and gender. Important applications include class-conditional image generation (Odena et al., 2017; Bao et al., 2017), age progression (Antipov et al., 2017; Wang et al., 2018), text-to-image synthesis (Reed et al., 2016; Zhang et al., 2017a), and data augmentation (Frid-Adar et al., 2018; Shao et al., 2019).
Most conditional generation methods are extended from outstanding image generative models such as variational autoencoders (VAEs) (Kingma and Welling, 2014), generative adversarial networks (GANs) (Goodfellow et al., 2014), and adversarial autoencoders (AAEs) (Makhzani et al., 2015). They model the distribution of images by transforming latent variables with deep neural networks. State-of-the-art conditional generative methods include conditional VAE (cVAE) (Sohn et al., 2015), conditional GAN (cGAN) (Mirza and Osindero, 2014), and conditional AAE (cAAE) (Makhzani et al., 2015). The main extension from image generative models is to concatenate domain labels into the latent variable so that the generator is a function of the latent variable as well as the domain label. A detailed review of current methods is provided in Section 2.
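To make this extension concrete, the following minimal sketch illustrates how a domain label can be concatenated to the latent variable before decoding. The layer sizes and the network itself are illustrative placeholders, not the architectures used by the cited methods.

```python
import torch
import torch.nn as nn

# Hypothetical sizes, chosen only for illustration.
latent_dim, label_dim, data_dim = 64, 3, 784

# A minimal conditional generator: the domain label is concatenated to the
# latent variable, so the output depends on both.
generator = nn.Sequential(
    nn.Linear(latent_dim + label_dim, 256),
    nn.ReLU(),
    nn.Linear(256, data_dim),
)

z = torch.randn(16, latent_dim)               # latent variables from the prior
y = torch.rand(16, label_dim)                 # domain labels (e.g., light conditions)
x_fake = generator(torch.cat([z, y], dim=1))  # samples conditioned on y
```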
Various conditional generative models have demonstrated realistic results for observed domains and have often been applied to generate samples for unobserved intermediate domains. For example, in the age progression literature, models trained with images of people in their 20s and 50s can be applied to generate synthetic samples in unobserved intermediate domains such as 30s and 40s. Existing methods pass intermediate domain values through deep neural networks and presume that the generated data arise from the conditional distribution given the intermediate domain value. We anticipate that conditional distributions change smoothly over domain values. However, existing methods do not guarantee that conditional distributions change smoothly over the in-between regions where data are unobserved. Also, a theoretical framework describing paths on the space of conditional distributions populated by domain label values has not been provided.
The Wasserstein geodesic is the shortest path between two distributions in terms of the Wasserstein distance. We propose a novel conditional generator that learns the Wasserstein geodesic, named the Wasserstein geodesic generator. The proposed method is composed of two elements of the Wasserstein geodesic, the conditional distributions given observed domains and the optimal transport map between them, so that the conditional distributions reside in the Wasserstein space, a metric space defined by the Wasserstein distance. The two elements are the vertices and edges of the Wasserstein geodesic in the space of the conditional distributions, respectively. For vertices, we propose a novel notion of conditional sub-coupling for conditional generation, and adopt it to derive a tractable upper bound of the expected Wasserstein distance between the target and model conditional distributions. For edges, our proposed method learns the optimal transport map with respect to (w.r.t.) the metric on the feature space specified by encoder networks. We prove that the conditional distributions given unobserved intermediate domain labels constitute the constant-speed Wasserstein geodesic between the observed domains. Our work is the first to propose conditional distributions given both observed and unobserved domains that are fully characterized by a metric space w.r.t. a statistical distance.
Our contributions are summarized as follows:
• We propose a novel conditional generator that learns the Wasserstein geodesic, named the Wasserstein geodesic generator. Our work is the first that can generate samples whose conditional distributions are fully characterized by the Wasserstein space.
• We lay a theoretical groundwork for learning conditional distributions with the Wasserstein distance by deriving a tractable upper bound of the Wasserstein distance between conditional distributions.
• We employ optimal transport maps between conditional distributions given two observed domains to construct the Wasserstein geodesic between the observed points in the space of conditional distributions.
• We derive that the proposed distribution approximates the Wasserstein barycenter in multiple observed distribution scenarios. It becomes the Wasserstein barycenter when distributions of representations are identical across observed domains.
• Experiments on face images with light conditions as domain labels demonstrate the efficacy of the proposed method.
The remainder of the paper is organized as follows. In Section 2, we review related works including conditional generative models. Section 3 presents theoretical results to derive a tractable upper bound of the expected Wasserstein distance between conditional distributions and Section 4 introduces the proposed method. Section 5 presents experimental results on a face image dataset with light conditions as domain labels. All proofs of theoretical results are provided in Appendix A.
2 Related Works
This section reviews related works on conditional generation, Data-to-Data Translation, and Wasserstein geometry. Our approach utilizes conditional generative models to learn observed conditional distributions, which serve as the vertices in the distribution space. We leverage Data-to-Data Translation techniques to learn intermediate paths between these observed conditional distributions, effectively establishing edges between the vertices in the distribution space. Additionally, we employ the properties of the Wasserstein space to comprehensively characterize this entire process.
The term conditional generation encompasses various meanings, sometimes leading to confusion. In the context of our work, conditional generation refers to a specific process of transforming latent variables and domain labels to generate samples following conditional distributions given domain label values. We make a clear distinction between Data-to-Data Translation and conditional generative models. This distinction is made to clarify the differences between conditioning on representations and domain labels and conditioning on other observed data. It further emphasizes their orthogonal roles: conditional generative models focus on learning vertices (the distributions associated with specific domain labels), while Data-to-Data Translation concentrates on learning edges (the transitions between these distributions) in the Wasserstein space. In some works in the disentangled representation learning literature (Chen et al., 2016; Higgins et al., 2017; Makhzani et al., 2015), domain labels are not available and pseudo-labels are introduced to imitate domain labels.
2.1 Conditional Generative Model
Most conditional generative models are extended from image generation methods. We first review three eminent image generative models: VAEs, GANs, and AAEs. To synthesize realistic data, all three methods aim to learn a generator that transforms latent factors, which follow a user-specified prior distribution. VAEs consist of encoder and decoder networks. They model the joint likelihood of latent factors and their transformations, feed-forwarded by decoder networks, and seek the maximum likelihood estimator for the marginal distribution of observations. Due to the intractability of the likelihood with nonlinear decoder networks, VAEs employ variational inference (Bishop, 2006) to maximize the evidence lower bound, using encoder networks to approximate the distribution of latent variables given observations. In contrast, GANs are composed of discriminator and generator networks. The generator networks in GANs serve the same function as decoders in VAEs: transforming latent factors to generate data. However, GANs introduce discriminator networks to form an adversarial loss for generator training. AAEs similarly employ discriminator and generator networks, but also encoders. Unlike GANs, the discriminator of AAEs aims to distinguish encoded results from latent variables drawn from the prior distribution. Training AAEs can be interpreted as minimizing the -Wasserstein distance between distributions of real data and generation results, a special case of Wasserstein autoencoders (WAEs) (Tolstikhin et al., 2018).
Statistical distances employed in forming training objectives in generative models are pivotal components for the quality of generation results. Both VAEs and GANs utilize the f-divergence (Csiszár, 1964) to learn distributions of images. As an alternative statistical distance, Arjovsky et al. (2017) showed that the Wasserstein distance has advantages over the f-divergence when the supports of data distributions are on a low-dimensional manifold, as in image data. The Wasserstein distance yields differentiable losses, while f-divergences may fail to define losses well or yield non-differentiable losses. Possibly due to these advantages, Wasserstein distance-based approaches, including AAEs, WAEs, and Wasserstein GANs (Arjovsky et al., 2017), have often outperformed f-divergence-based approaches.
The main extension from image generative models to conditional models is to concatenate domain labels into the latent variable. In cVAE, cGAN, and cAAE, domain labels are incorporated into the encoder networks of VAE, the discriminator and generators of GAN, and the decoder of AAE, respectively. To enhance the visual quality and diversity of generation results, Kameoka et al. (2018), Odena et al. (2017), and Zhao et al. (2018) introduce auxiliary classifiers to match the observed and predicted domain labels for conditional generation results. Conditional generative models can be used to generate data for unobserved domains. This is accomplished by inserting unobserved domain label values into trained models, a technique demonstrated in zero-shot learning approaches (Xian et al., 2018; Chao et al., 2016) aimed at boosting classification performance for unseen classes. However, this approach faces limitations. First, it assumes that latent variables for unobserved domains follow similar patterns to those for observed domains. Second, the generated data distribution for unobserved domains has not been justified in terms of a metric space on distributions. Intuitively, the unobserved intermediate distribution serves as the centroid of observed distributions, but this property has not been discussed. In contrast, our approach constructs geodesics in the Wasserstein space for unobserved domains, and the proposed distribution is the Wasserstein barycenter when distributions of representations are identical across observed domains. This result is achieved without assuming the homogeneity of representations across both observed and unobserved domains, distinguishing our approach from existing works.
We employ conditional generative models to learn data distributions for observed domain labels, the observed vertices in the Wasserstein space. In Section 3, we present theoretical results to provide a tractable objective for minimizing the formulated Wasserstein distances between conditional distributions.
2.2 Data-to-Data Translation
Conditional generative models find a transformation from latent variables and domain labels to data, effectively learning conditional distributions. In contrast, another line of work, which we refer to as Data-to-Data Translation (Kim et al., 2017; Choi et al., 2018; Zhu et al., 2017), operates under a different paradigm. In Data-to-Data Translation, the source and target domains are predefined, and the goal is to find a transport map from the source data to the target data. Typical examples include unpaired translations, such as converting daytime scenes to nighttime scenes, where corresponding pairs of daytime and nighttime scenes for the same location are not required during training. Prominent methods within this field include multi-modal translations across different data domains (Xu et al., 2018; Isola et al., 2017), such as Text-to-Image Translation, as demonstrated by DALL-E (Ramesh et al., 2021), and multi-domain Image-to-Image Translation, as exemplified by StarGAN (Choi et al., 2018).
CycleGAN (Zhu et al., 2017) is a pioneering work in unpaired Image-to-Image Translation. It minimizes the adversarial loss between target data and translated source data, with the cycle consistency loss encouraging an inverse relation between the translation maps of the two domains. In this case, the transformation is encouraged to match the distributions of the target and converted source data, but it does not learn conditional distributions given domain labels since real data from the source domain are required to construct the target data. Liu et al. (2017) introduce a latent variable model to learn conditional distributions. However, the properties of the conditional distributions given unobserved intermediate domains have not been discussed in the Data-to-Data Translation literature.
Our proposed method leverages Data-to-Data Translation techniques to find optimal transport maps between observed conditional distributions, thereby defining edges between observed vertices in the Wasserstein space. In Section 4, we propose to generate intermediate data from edges, Wasserstein geodesics, and extend it to approximate the centroid, which is the Wasserstein barycenter of observed distributions.
2.3 Wasserstein Space
The Wasserstein space, a metric space of distributions endowed with the Wasserstein distance, has found widespread applications in various fields of generative models. We review related works in image processing, domain adaptation, and data augmentation. Other important applications of Wasserstein space include density matching (Cisneros-Velarde and Bullo, 2020), distribution alignment (Zhou et al., 2022), online learning (Korotin et al., 2021), and Bayesian inference (Srivastava et al., 2018).
In image processing, most applications focus on transporting point clouds (Cuturi, 2013) or specific features, such as texture (Rabin et al., 2012), colors (Rabin et al., 2014), and shapes (Solomon et al., 2015), from a source image to a target image. While these approaches have shown remarkable results, they typically require training models or solving an optimization problem for each pair of source and target images. Taking a different route, Mroueh (2020) introduces a universal style transfer method that employs autoencoders. The method exploits the Wasserstein geodesic of Gaussian measures, using features extracted by encoder networks. However, in Mroueh's framework, distributions of generated samples for unobserved intermediate domains cannot be characterized by Wasserstein spaces. On a related note, Korotin et al. (2019) put forth a generative model to solve the dual form of the -Wasserstein distance between two distributions of images, which they apply to image style transfer tasks.
In domain adaptation, Xie et al. (2019) propose latent variable models that use single representations to generate multiple images from each domain while minimizing the transportation costs between them. However, this method requires modality-specific generators and cannot generate intermediate distributions. The Wasserstein Barycenter Transport (WBT) (Montesuma and Mboula, 2021) is closely related to our work. The WBT targets the Wasserstein barycenter of multiple observed source distributions to generate unobserved intermediate domains, but it requires pairs of observations from all the source domains and solves optimization problems for every generation.
In data augmentation, several recent works utilize the Wasserstein space. Bespalov et al. (2022) propose to augment landmark coordinates of facial images with the Wasserstein barycenter, but their method requires computing Wasserstein distances between all pairs of images to oversample landmark data. Zhu et al. (2023) augment data from the Wasserstein barycenter of distributions of images to learn robust classifiers, but this method assumes the Gaussianity of conditional distributions. The work by Fan and Alvarez-Melis (2023) is closely related to this work. Their approach synthesizes data for unobserved domains by applying linear combinations of optimal transport maps between datasets, essentially generating data from the generalized Wasserstein geodesic of observed data distributions. Despite the merits of generalized geodesics, such as convexity and impressive performance in transfer learning tasks, the method employs the optimal transport dataset distance (Alvarez-Melis and Fusi, 2020), dependent on classification labels from each domain. Additionally, the generalized geodesic differs from the Wasserstein barycenter, and the method uses an alternate transportation cost, the -transport metric (Craig, 2016) in optimization.
Existing methods typically rely on strong assumptions, such as the data following Gaussian distributions and the need to solve optimization problems each time data is generated. Other assumptions include the existence of the Wasserstein distance, optimal transport map, Wasserstein geodesics, and Wasserstein barycenters with Euclidean distances on the data space, implying that the distribution of high-dimensional data is continuous. In contrast, our work fills this gap by generating and justifying intermediate, unobserved distributions without the aforementioned assumptions.
3 Theoretical Results on Wasserstein Distance between Conditional Distributions
3.1 Basic Notations
We provide basic notations as follows. Random variables, their realizations, and their supports are denoted by capital, small, and calligraphy capital letters, respectively. The real data, generated samples, and domain labels are denoted by , , and , respectively. We denote the set of distributions defined on a given support by and the conditional distribution of given by .
For any metric on and probability measures and in , the -Wasserstein distance between and w.r.t. is denoted by where and is the set of all couplings of and . For brevity, we omit , the metric on , in the Wasserstein distance if there is no confusion. We assume compactness and convexity of to ensure that the Wasserstein space is a geodesic space where every two points can be connected by the constant-speed geodesic (Santambrogio, 2015).
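As a concrete illustration of the p-Wasserstein distance (for intuition only, not part of the proposed method), the sketch below estimates it between two one-dimensional empirical distributions, where the optimal coupling simply matches sorted samples; the function name and the Gaussian toy data are illustrative assumptions.

```python
import numpy as np

def wasserstein_1d(x, y, p=2):
    """p-Wasserstein distance between two 1-D empirical distributions with the
    same number of equally weighted points: in one dimension the optimal
    coupling matches sorted samples."""
    x, y = np.sort(x), np.sort(y)
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
mu = rng.normal(0.0, 1.0, size=5000)
nu = rng.normal(3.0, 2.0, size=5000)
print(wasserstein_1d(mu, nu, p=2))  # close to sqrt(3**2 + (2 - 1)**2) ~ 3.16
```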
3.2 Distances between Conditional Distributions
Generating samples given a specific label requires learning conditional distributions given domain labels. In this section, we formulate distances between target and model conditional distributions and derive a tractable upper bound of the Wasserstein distance between conditional distributions.
Denoting the latent variable independent of domain labels by , the distribution of the observed domain labels by , and the conditional generator by , the model conditional distribution can be expressed as where . To learn , we formulate a class of distances between conditional distributions as
(1)
where is a measure between distributions. Various statistical distances can be considered for , and when we choose the Kullback-Leibler divergence, Equation (1) can be minimized by maximizing the expectation of the variational lower bound of the conditional log-likelihood over the distribution of domain labels. For the case of the Jensen-Shannon divergence, adversarial learning with discriminator and generator incorporating domain labels minimizes Equation (1).
To retain these advantages over f-divergences, we focus on Equation (1) equipped with Wasserstein distances to learn the Wasserstein geodesic. Since there is no previous work formulating or deriving properties of the Wasserstein distance between conditional distributions given domain labels, in the next section we derive an upper bound of Equation (1) that has a tractable representation.
3.3 A Tractable Upper Bound of Wasserstein Distance between Conditional Distributions
In this section, we lay a theoretical groundwork by deriving a tractable upper bound of the expected Wasserstein distance between conditional distributions.
We first propose a new set of couplings for conditional generation, conditional sub-coupling.
Definition 1
For any and , we define the conditional sub-coupling as the set of all probability measures expressed as for some where . The conditional sub-coupling is denoted by .
The conditional sub-coupling is the set of all probability measures induced by couplings of conditional distributions. It is nonempty and equal to if for all . The following example provides cases where the conditional sub-coupling is a proper subset of . Let denote the bivariate Gaussian distribution with mean and covariance .
Example 1
Let be and be . Then, includes if and only if .
Further discussions and proofs about the conditional sub-coupling are provided in Appendix A. With the conditional sub-coupling, we derive an upper bound of the expected -Wasserstein distance in the following theorem.
Theorem 2
Let and be distributions in . For any metric on and defining ,
(2)
That is, the minimum transport cost over the conditional sub-coupling is an upper bound of the expected Wasserstein distance between conditional distributions.
We show a tractable representation of the upper bound in the following theorem. We denote the set of all satisfying by ; the RHS can be considered as an aggregate posterior (Makhzani et al., 2015).
Theorem 3
Let and be distributions in and , respectively. For any metric space , , and generator ,
That is, the upper bound of the Wasserstein distance between conditional distributions, the RHS of Equation (2), can be expressed as the infimum of the reconstruction error over encoders . Note that the integrand in the LHS of Equation (2) depends on the conditioning data and requires evaluating the Wasserstein distance for every realization , which is infeasible. In contrast, the derived representation can be computed by solving a stochastic optimization problem. When the terms related to the domain label are removed, Theorem 3 reduces to the representation of the Wasserstein distance between marginal distributions provided by Tolstikhin et al. (2018).
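A minimal sketch of how the derived representation can be estimated stochastically is given below. The function and network names (encoder, generator, cost) are placeholders, and the constraint on the encoder from Theorem 3 must be enforced separately, e.g., adversarially as in Section 4.4.

```python
import torch

def conditional_recon_objective(encoder, generator, cost, x, y):
    """Monte Carlo estimate of the reconstruction-error representation in
    Theorem 3: average cost between data and their conditional
    reconstructions. The constraint on the encoder (its aggregate output
    should match the prior) has to be enforced separately."""
    z = encoder(x, y)
    x_recon = generator(z, y)
    return cost(x, x_recon).mean()

# Toy usage with hypothetical linear maps, for illustration only.
enc = lambda x, y: torch.cat([x, y], dim=1) @ torch.randn(12, 4)
gen = lambda z, y: torch.cat([z, y], dim=1) @ torch.randn(6, 10)
x, y = torch.randn(8, 10), torch.rand(8, 2)
sq_cost = lambda a, b: ((a - b) ** 2).sum(dim=1)
print(conditional_recon_objective(enc, gen, sq_cost, x, y))
```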
4 Proposed method
4.1 Motivation
For a motivating example, suppose data come from one of two observed domains whose label values are and . Existing methods in the conditional generation literature have considered as intermediate samples (Zhang et al., 2017b). However, without a strong assumption such as a linear structure, the interplay between , , and is difficult to formalize.
A desirable property of generated samples for unobserved intermediate domains would be that their conditional distributions change smoothly from one observed domain to another. The next section proposes a new conditional generator that constructs samples from distributions on the constant-speed geodesic in the Wasserstein space.
Definition 4
(Constant-speed geodesic) (Santambrogio, 2015) For any and on a Wasserstein space , a parameterized curve is called the constant-speed geodesic from to in if , , and for any .
That is, a constant-speed geodesic in a Wasserstein space is a parameterized curve whose speed equals the Wasserstein distance between its endpoints. Our method yields the conditional distribution given an unobserved intermediate domain label as an interpolation point between the conditional distributions given observed domain labels in the Wasserstein space. Unlike existing methods, the generated distributions are fully characterized by the Wasserstein space .
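For intuition about Definition 4 (and not as part of the proposed generator), the sketch below constructs the constant-speed W2 geodesic between two one-dimensional empirical distributions, where the optimal transport map pairs sorted samples and the geodesic displaces each pair linearly; the function name and Gaussian toy data are assumptions for illustration.

```python
import numpy as np

def mccann_interpolant_1d(x0, x1, t):
    """Samples from the constant-speed W2 geodesic between two 1-D empirical
    distributions with equal sample sizes: the optimal map pairs sorted
    samples, and the geodesic displaces each pair linearly."""
    return (1.0 - t) * np.sort(x0) + t * np.sort(x1)

rng = np.random.default_rng(0)
x0 = rng.normal(-2.0, 0.5, size=5000)
x1 = rng.normal(3.0, 1.5, size=5000)
mid = mccann_interpolant_1d(x0, x1, 0.5)   # intermediate distribution at t = 0.5
print(mid.mean(), mid.std())               # roughly (0.5, 1.0) for these Gaussians
```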
4.2 Conditional Generator for Learning Wasserstein Geodesics
This section proposes the Wasserstein geodesic generator, a novel conditional generator for learning Wasserstein geodesics. Our method learns the conditional distributions given observed domains and the optimal transport maps between them to construct the Wasserstein geodesic. The proposed method consists of three networks: encoder , generator , and transport map .
We first define the optimal transport map, and then provide the proposed method.
Definition 5
(Optimal transport map) (Santambrogio, 2015) A map is an optimal transport map from to w.r.t. if is a solution of the Monge-Kantorovich transportation problem,
subject to
The optimal transport map refers to the map yielding the minimum transportation cost. The optimal transport map uniquely exists if the cost function is the -th power of the -distance, denoted by where , and the measures are absolutely continuous on compact domains. The minimum transportation cost attained by the optimal transport map is known as .
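The following toy example illustrates an optimal transport map in the simplest discrete setting, two equally weighted one-dimensional empirical measures of the same size, where the map sends the i-th smallest source point to the i-th smallest target point. It is included only as intuition for Definition 5; the function name and data are illustrative assumptions.

```python
import numpy as np

def monge_map_1d(source, target):
    """Optimal transport map between equal-size, equally weighted 1-D empirical
    measures for costs |x - y|**p with p >= 1: the i-th smallest source point
    is sent to the i-th smallest target point."""
    order = np.argsort(source)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(source))
    return np.sort(target)[ranks]            # T(source[i]) for each i

src = np.array([0.3, -1.2, 2.0, 0.7])
tgt = np.array([5.0, 4.0, 6.5, 4.2])
print(monge_map_1d(src, tgt))                # [4.2 4.  6.5 5. ]
```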
We now present the Wasserstein geodesic generator. The proposed method postulates the encoder, generator, and transport map satisfying the following conditions and to generate intermediate samples. Here, is a metric defined on .
(A1) (One-to-one mapping between and ) For any and , and .
(A2) (Absolutely continuous representations) For any , is absolutely continuous, is defined on a compact set, and has finite second moments.
(A3) (Optimal transportation) For any observed domain labels and , is the optimal transport map from to w.r.t. .
Note that condition is about inverse relations between the encoder and generator for fixed domain labels, is to guarantee the existence and uniqueness of optimal transport maps between observed conditional distributions w.r.t. , and is to build paths between observed conditional distributions with optimal transport maps.
Lemma 6
Suppose the encoder and generator satisfy the conditions and . Then, for any and ,
The encoder and generator can define a distance-preserving mapping (called isometric mapping) between two Wasserstein spaces, and we can connect their geometric structures, including geodesics and barycenters. With Lemma 6 and optimal transport maps, we can first generate geodesics on , and then project them to geodesics on to generate intermediate unobserved conditional distributions.
Theorem 7
Suppose the encoder, generator, and transport map satisfy conditions through . For any two observed domain labels and , their convex combination , and , the latent interpolation result of and its transported result can be expressed as . Then, the curve of distributions of latent interpolation results is the constant-speed geodesic from to in .
The conditional distributions of the samples generated by the proposed method constitute the Wasserstein geodesic, yielding the minimum transportation cost quantified by between the conditional distributions of observed domains.
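The generation procedure implied by Theorem 7 can be sketched as follows. The function signatures (enc, gen, and transport taking the domain labels as explicit arguments) and the convex combination of labels are assumptions made for illustration, not the exact implementation.

```python
import torch

def generate_intermediate(enc, gen, transport, x0, y0, y1, t):
    """Sketch of intermediate-domain generation under (A1)-(A3): transport a
    sample from domain y0 to domain y1, encode both endpoints, and decode the
    convex combination of the codes at the interpolated label."""
    with torch.no_grad():
        x1 = transport(x0, y0, y1)       # push x0 to the other observed domain
        z0 = enc(x0, y0)                 # code of the original sample
        z1 = enc(x1, y1)                 # code of its transported counterpart
        zt = (1.0 - t) * z0 + t * z1     # latent interpolation
        yt = (1.0 - t) * y0 + t * y1     # interpolated domain label
        return gen(zt, yt)               # sample from the geodesic at time t
```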
4.3 Generation from the Wasserstein Barycenter with Wasserstein Geodesic Generator
This section extends our Wasserstein geodesic generator to accommodate scenarios involving multiple observed conditional distributions. We explain how the proposed distribution approximates the centroid of observed conditional distributions with an interpretable upper bound of the approximation error. Furthermore, we derive that the proposed distribution is the Wasserstein barycenter under some conditions.
We first define the Wasserstein barycenter, the centroid of distributions within the Wasserstein space.
Definition 8
(Wasserstein barycenter) (Agueh and Carlier, 2011) For any distributions defined on a Wasserstein space and non-negative real numbers , the Wasserstein barycenter of with weights is the unique solution of
Specifically, when , the constant-speed geodesic in Definition 4 serves as the Wasserstein barycenter of two distributions with weights . In this context of defining a centroid between two distributions, the Wasserstein geodesic is referred to as McCann's interpolant (McCann, 1995). The Wasserstein barycenter has been acknowledged as an effective solution for aggregating high-dimensional distributions (Korotin et al., 2022), across various applications including data augmentation (Huguet et al., 2022; Bespalov et al., 2021; Zhu et al., 2023) and domain adaptation (Montesuma and Mboula, 2021).
The Wasserstein barycenter has several advantages in generating unobserved conditional distributions. First, it provides smooth and stable transitions between observed distributions, which is essential for synthesizing data for new, unobserved domains. This allows the generated data to inherit characteristics from observed conditional distributions without abrupt changes. Second, it minimizes the average optimal transportation costs to observed distributions, thus minimally altering observed data to synthesize unobserved data. Last, the Wasserstein barycenter can be employed to infer the characteristics of unobserved domains, e.g., estimating the conditional average treatment effect in clinical trials (Huguet et al., 2022). Despite these advantages, the computational complexity of estimating Wasserstein barycenter remains a significant bottleneck (Cuturi, 2013; Lin et al., 2020).
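As a point of reference for Definition 8, the sketch below computes the W2 barycenter of one-dimensional empirical distributions, where the barycenter's quantile function is the weighted average of the input quantile functions; the high-dimensional case addressed by the proposed method has no such closed form, and the names and toy data here are illustrative.

```python
import numpy as np

def barycenter_1d(samples, weights):
    """W2 barycenter of 1-D empirical distributions with equal sample sizes:
    in one dimension its quantile function is the weighted average of the
    input quantile functions, i.e., a weighted average of sorted samples."""
    sorted_samples = np.stack([np.sort(s) for s in samples])   # shape (K, n)
    w = np.asarray(weights, dtype=float)[:, None]
    return (w * sorted_samples).sum(axis=0)

rng = np.random.default_rng(0)
groups = [rng.normal(m, 1.0, size=4000) for m in (-3.0, 0.0, 4.0)]
bary = barycenter_1d(groups, weights=[0.2, 0.3, 0.5])
print(bary.mean())   # roughly 0.2 * (-3) + 0.3 * 0 + 0.5 * 4 = 1.4
```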
We extend the Wasserstein geodesic generator to generate unobserved intermediate distributions with multiple observed distributions. In Theorem 10, we establish an interpretable bound on the error incurred when approximating the Wasserstein barycenter using our proposed distribution. We further demonstrate that, under condition relating to the homogeneity of representations across observed domains, the proposed distribution is the Wasserstein barycenter. We begin by introducing a lemma that elucidates how this homogeneity affects the average squared Wasserstein distances.
Lemma 9
Suppose the encoder and generator satisfy the conditions and . Then, for any observed domain labels and their convex combination , is equal to
(3)
holds.
In Equation (3), the first term represents the Wasserstein variance (Martinet et al., 2022) of , while the second term denotes the variance of w.r.t. weights . When multiple convex combinations are possible, we select the optimal combination that minimizes the variance, . In the special case where and is univariate, this approach is equivalent to using the two nearest observed distributions to generate intermediate distributions. The Wasserstein variance quantifies the homogeneity of representations across observed conditional distributions and is zero if and only if the distributions of representations from all observed domains are identical, which is to say that the following condition holds.
(A4) For any two observed domain labels and , .
For any observed domain labels , their convex combination , and , the latent interpolation result of and its transported results can be expressed as . In the subsequent theorem, we derive an upper bound for the difference between the average squared Wasserstein distances with our proposed distribution and with the Wasserstein barycenter. Additionally, when we further suppose the homogeneity of representations, condition , the proposed method generates the Wasserstein barycenter of observed conditional distributions.
Theorem 10
Suppose the encoder, generator, and transport map satisfy conditions through . Then,
(4)
holds. When we further suppose the condition , the upper bound is zero and is the Wasserstein barycenter of w.r.t. weights .
To the best of the authors' knowledge, this is the first result that justifies latent interpolation from an optimal transport point of view, without resorting to Gaussian or univariate assumptions. The upper bound represents half of the average squared distances between and . They are identical in the univariate scenario, but not necessarily in other cases, which makes the upper bound non-negative. Although we have detailed results for the -Wasserstein distance, where both the existence and uniqueness of barycenters are extensively examined, similar results hold for general -Wasserstein distances. If through hold and has solutions (called the Fréchet mean w.r.t. ), then the proposed distribution is a solution. Note that condition is weaker than the following condition.
(A5) is constant w.r.t. , equivalently, .
Most existing conditional generative models, including cVAE, cGAN, and cAAE, assume . When the unobserved domains possess patterns distinctly different from the observed ones, this condition may not be satisfied. For example, in face frontalization (Huang et al., 2017), if we have facial images captured from the right and left angles and aim to synthesize frontal facial images, the frontal images might exhibit unique features, such as symmetrical facial structures, clear visibility of both eyes, and a full view of facial landmarks like the nose bridge and forehead. In the following theorem, under , we derive that the generation result follows the true conditional distribution.
Theorem 11
Suppose the encoder, generator, and transport map satisfy conditions through . Then, .
In summary, when through hold, we can construct distributions that change smoothly in between observed conditional distributions by generating geodesics. When we further suppose , we can generate data from the Wasserstein barycenter without observations as described in Algorithm 1. Furthermore, the proposed distribution is the true conditional distribution when holds.
4.4 Implementation
The training of the proposed method is to learn networks satisfying conditions through and consists of two steps. The first step is to learn the encoder and generator pair and the second step is to learn the transport map with the learned encoder. These two steps learn the vertices and edges of the geodesic, respectively.
For the first step, motivated by Theorem 3 and condition , we minimize the reconstruction error with two penalty terms
(5)
where the first term is the reconstruction error, the second term is to enforce the constraint on the encoder network in the derived representation of the upper bound in Theorem 3, and the last term is to enforce condition . We substitute the deterministic encoder Enc with and with , and set to learn the Wasserstein geodesic on which the information independent of domain labels is minimally changed. In implementation, we apply GAN for the second term and interpolation results of encoded values for the last term.
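A hedged sketch of the first training step is given below. It keeps only the reconstruction error and an AAE/WAE-style adversarial penalty on the codes; the exact penalties and coefficients of Equation (5), in particular the term built from interpolated encodings, are not reproduced here, and all names are placeholders rather than the paper's implementation.

```python
import torch

def first_step_loss(enc, gen, disc, cost, x, y, lam):
    """Sketch of the first training step: the reconstruction error of
    Theorem 3 plus an adversarial penalty pushing the aggregate posterior of
    codes toward the prior. The third penalty of Equation (5), built from
    interpolated encodings, is omitted here."""
    z = enc(x, y)                               # codes of real data
    recon = cost(x, gen(z, y)).mean()           # reconstruction error
    adv = -torch.log(disc(z) + 1e-8).mean()     # non-saturating generator-side loss
    return recon + lam * adv
```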
For the second step, with the learned encoder from the first step, we solve Monge–Kantorovich transportation problems w.r.t. ,
subject to
for all observed domain label values and . The objective is
(6)
Here, is the distribution of where and are independent samples from and is sampled from . The first term is the transportation cost measured by and the second term is to enforce the constraint on conditional distributions of transported data. Note that the second term is zero if and only if for all sampled from . The last term is the cycle consistency loss that encourages the inverse relation of the transport map from one domain to another and vice versa. The cycle consistency loss has been proposed in the Data-to-Data Translation literature (Zhu et al., 2017; Choi et al., 2018) from a heuristic point of view to avoid mode collapse, but in our method, it enforces the transport map to satisfy the inverse relation of optimal transport maps between the two domains. In the following theorem, we derive that the minimizer of the objective in Equation (6) is unique and is the optimal transport map between observed domains.
Theorem 12
Let be the optimal transport map from the conditional distribution given domain label to that given w.r.t. . Then, with probability w.r.t. for all is the unique minimizer of objective in Equation (6).
In implementation, we apply WGAN with gradient penalty (Gulrajani et al., 2017) for the second term, employ an auxiliary regressor to enhance the visual quality and diversity of generated samples, and add a reconstruction error, , for regularization.
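A simplified sketch of the second training step is shown below. It keeps the transportation cost in the encoder's feature space, an adversarial term on transported samples, and the cycle consistency loss, while the WGAN gradient penalty, auxiliary regressor, and reconstruction regularizer mentioned above are omitted. All names are placeholders, not the exact implementation.

```python
import torch

def transport_step_loss(enc, transport, critic, x0, y0, y1, lam_adv, lam_cyc):
    """Simplified sketch of the second training step: transportation cost in
    the encoder's feature space, an adversarial term matching the conditional
    distribution of transported samples, and a cycle consistency loss."""
    x1 = transport(x0, y0, y1)                             # push x0 toward domain y1
    feat_cost = ((enc(x0, y0) - enc(x1, y1)) ** 2).sum(dim=1).mean()
    adv = -critic(x1, y1).mean()                           # WGAN-style generator loss
    cyc = (transport(x1, y1, y0) - x0).abs().mean()        # cycle consistency
    return feat_cost + lam_adv * adv + lam_cyc * cyc
```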
With the learned encoder, generator, and transport map, we can generate samples whose conditional distributions are on the Wasserstein geodesic on which the information independent of domain labels is minimally changed.
4.5 Relationship with Other Methods
If either of the two steps of our training algorithm is dropped and an appropriate modification is made, our algorithm reduces to either cAAE or CycleGAN.
Suppose the second step is dropped. The first step alone learns the encoder and generator pair by minimizing the objective in Equation (5). If we remove in the second term and drop the third term, the objective in Equation (5) reduces to that of cAAE. Thus, it is the in the second term that enables the proposed algorithm to learn the features independent of domain labels.
Now suppose the first step is removed. Since the second step optimizes the objective in Equation (6), which includes , it inherently depends on the first step. However, if we redefine the distance so that it does not depend on the encoder, the second step can operate independently. By not inheriting the , one no longer learns a transport map that minimally changes contents independent of the conditioning variables. If we replace in the first term with the norm and in the second term with the Jensen-Shannon divergence, and if there are only two observed domain labels and , the objective in Equation (6) reduces to that of CycleGAN. Inheriting the results from Theorem 12, we can establish the following corollary.
Corollary 13
Let be the optimal transport map from the conditional distribution given domain label to that given w.r.t. , where or . Then, with probability w.r.t. for is the unique minimizer of the objective of CycleGAN.
Lu et al. (2019) specifically point out that, theoretically, there is no claim on the detailed properties of the mapping established by CycleGAN. On the other hand, Korotin et al. (2019) used the cycle consistency term to promote inverse relations between optimal transport maps, albeit with an objective based on the dual form of the -Wasserstein distance, which distinguishes it from CycleGAN's objective. Because CycleGAN can be viewed as a special case of our proposed method, it can now be interpreted in terms of optimal transport theory, which had not been established before.
5 Experiments
5.1 Experimental Setting
We conduct experiments on the Extended Yale Face Database B (Extended Yale-B) (Georghiades et al., 2001) dataset. The Extended Yale-B dataset consists of face images from subjects, poses, and light conditions. We consider light conditions as domain labels. For light conditions, the azimuth and elevation of the light source are provided. The total number of images is about .
We split the Extended Yale-B dataset by subjects to construct training, validation, and test sets. The number of subjects is , , and for training, validation, and test, respectively. For data pre-processing, we apply a face detection algorithm proposed by Viola and Jones (2001) to crop the face part (we use the official code in OpenCV (Bradski, 2000)). The range of images is scaled to and a horizontal flip with probability is applied during training.
For baselines, we consider cAAE, CycleGAN, and StarGAN (Choi et al., 2018). As described in Section 4.5, our encoder and generator pair can reduce to cAAE and our transport map can reduce to CycleGAN. StarGAN is a state-of-the-art Data-to-Data Translation method, and we add it as a baseline for the transport map. The algorithms of CycleGAN and StarGAN in their published works are inapplicable to continuous domain labels, so we add source and target light conditions as an input of CycleGAN and change the auxiliary classifier of StarGAN to a regressor for our experiments.
In all methods, architectures of the encoder and generator networks are adopted from DCGAN (Radford et al., 2015), and the architecture of the transport map is adopted from StarGAN. Architectures are modified to concatenate light conditions to latent variables, and we control the size of the networks for a fair comparison. For both the proposed method and baselines, we train the encoder and generator pair for iterations with batch size of , and train the transport map for iterations with batch size of . We use the Adam (Kingma and Ba, 2014) optimizer and set the initial learning rate to and to linearly decrease to for the encoder and generator pair and to for the transport map. Implementation details including architectures are provided in Appendix B.
The implementation code is provided at the following link: http://github.com/kyg0910/Wasserstein-Geodesic-Generator-for-Conditional-Distributions.
5.2 Results
We compare the proposed method and baselines on three tasks: (i) conditional generation for unobserved intermediate domains, (ii) Data-to-Data Translation, and (iii) latent interpolation with the real data and their translation results. Figures 1, 2, and 3 present results for tasks (i), (ii), and (iii), respectively. Tasks (ii) and (iii) evaluate transport maps when an image is given. We reiterate that CycleGAN and StarGAN cannot generate samples, while the proposed method can.
Figure 1 presents the conditional generation results for unobserved intermediate domains. We compare the proposed method and cAAE. The proposed method produces face images with clearer eyes, noses, and mouths than the baselines. For each method, the leftmost and rightmost columns show generation results for observed domains whose values of (azimuth, elevation) are and , respectively, and the intermediate columns show results for unobserved intermediate domains. For the proposed method, as described in Section 4.2, we first generate samples in the leftmost column, then transport the samples to the domain of the rightmost column, and finally apply latent interpolation where increases from to at equal intervals. For cAAE, we fix the latent variable for each row and interpolate the domain label with equal spacing.
Figure 2 compares the transportation results. We compare the proposed method, CycleGAN, and StarGAN. The leftmost panel visualizes the transportation results. The proposed method gradually casts a shadow reflecting the three-dimensional structure of the nose and mouth in the face, which makes the outcomes visually sharper and more plausible than the baselines. The bottom row shows face images from various azimuths for a fixed subject, pose, and elevation. The elevation is and azimuth values are shown at the bottom. For the first three rows, the middle column shows the real data in the fourth row and the other columns show transportation results by the various methods for the observed domains corresponding to each column. Each method translates the real data in the middle column to other observed domains. The middle panel shows the box plots of FID scores from the various transport maps. Lower values are better, and the proposed method outperforms the baselines. The means of the FID scores of CycleGAN, StarGAN, and the proposed method with standard deviations are 174.3 (28.9), 109.7 (9.7), and 74.2 (10.9), respectively. To calculate the FID scores, we transport real face images from (azimuth, elevation) of to other observed domains and evaluate FID between transportation results and real images for every domain. The absolute values of azimuth and elevation are less than or equal to and , respectively, to remove outliers from extreme light conditions. The rightmost panel presents the surfaces of the FID scores from the various transport maps as functions of azimuth and elevation. In all combinations of light conditions, the proposed method outperforms the baselines. We estimate scores for unobserved domains by interpolating scores from adjacent observed domains to draw the plot.
Figure 3 presents the results of latent interpolation on unobserved intermediate domains with a real image and its translation result. We compare the proposed method, CycleGAN, and StarGAN. We fix the encoder and generator pair for all methods, and only change the transport map. The bottom row shows the ground-truth images of a fixed subject and pose from two observed domains with (azimuth, elevation) of , for the leftmost, and , for the rightmost. In the top three rows, the leftmost column shows the ground truth, the rightmost column shows the transportation results of the ground truth by the various transport maps, and the intermediate columns show latent interpolation results for unobserved intermediate domains. As in Figure 2, the outputs of the proposed method are visually sharper and more plausible than the baselines.
6 Conclusion
We established a theoretical framework for the space of conditional distributions given domain labels and proposed the Wasserstein geodesic generator, a novel conditional generator that learns the Wasserstein geodesic. We derived a tractable upper bound of the Wasserstein distance between conditional distributions to learn those given observed domains and applied optimal transport maps between them to generate a constant-speed Wasserstein geodesic of the conditional distributions given unobserved intermediate domains. Our work is the first to generate samples whose conditional distributions are fully characterized by a metric space w.r.t. a statistical distance. Experiments on face images with light conditions as domain labels demonstrate the efficacy of the proposed method, with visually more plausible results and better FID scores than the baselines.
Acknowledgments and Disclosure of Funding
The work of Young-geun Kim was supported by the National Research Foundation (NRF) of Korea under Grant NRF-2020R1A2C1A01011950 and by the National Institute of Mental Health of U.S. under Grant R01MH124106. The work of Kyungbok Lee and Myunghee Cho Paik was supported by the NRF under Grant NRF-2020R1A2C1A01011950.
Appendix A Theoretical Results with Proofs
A.1 Further Discussions and Proofs about the Conditional Sub-coupling
In this section, we discuss properties of the conditional sub-coupling, , and provide proofs. The following proposition describes properties of the conditional sub-coupling and corresponding properties of the upper bound of the expected Wasserstein distance between conditional distributions in Theorem 2.
Proposition 14
For any and , is nonempty, included in , and equal to if . These imply that the RHS in Theorem 2,
(7)
is finite, greater than or equal to , and equal to if .
Proof
First, the conditional sub-coupling contains , so it is nonempty. Next, for any , there is such that and . This implies and . Thus, and it implies that Equation (7) is greater than or equal to . Finally, if , we have and for all and . Thus, and it concludes the proof.
The following proposition provides a detailed discussion about the relation between and , including non-Gaussian cases, by deriving a necessary condition for a coupling from to be in the conditional sub-coupling.
Proposition 15
For given two distributions and and a distribution , we denote the covariance matrix of by . Then,
(8)
if . Furthermore, when , , and are multivariate Gaussian distributions, Equation (8) holds if and only if .
Proof
By the definition of , for any , there is such that and for all .
This implies the existence of random variables such that for all and .
Now, we have , , and , so the covariance matrix of is and it should be positive semi-definite.
For the final statement in the proposition, when , , and are multivariate Gaussian distributions and Equation (8) holds, we can define following a multivariate Gaussian distribution denoted by where , , and are the means of , , and , respectively. Since the distribution of is in for all and , the proof is concluded.
That is, all the probability measures that cannot be utilized to define the distribution of whose marginals on , , and are , , and , respectively, are excluded in the conditional sub-coupling. Now, we restate Example 1 and provide the corresponding proof.
Example 1. Let be and be . Then, includes if and only if .
Proof By Proposition 15, it is sufficient to solve
It is a quadratic inequality with respect to , and the solution is or .
A.2 Proofs of Theoretical Results
A.2.1 Proof of Theorem 2.
For any , there is such that and . By the definition of Wasserstein distance, . This implies that . Now, taking infimum over all concludes the proof.
A.2.2 Proof of Theorem 3.
We first provide a lemma to prove Theorem 3.
Lemma 16
For any two distributions and and , let be the set of all probability measures such that there exists satisfying , , , and . Then, .
Proof First, we prove . By definition, for any , there exists satisfying , , , and . Since and , the distribution of is an element of . Since , it is shown that .
Next, we prove . For any , there exists such that , , , and . Since , there exists such that and . As satisfies , , , and , it is shown that , which concludes the proof.
Now, we prove Theorem 3. Showing that is equal to is sufficient by Lemma 6 where denotes the set of all satisfying .
First, we show that LHS RHS. For any , there exists satisfying , , , and . This implies that
We denote the distribution of by . Since and , for all , , and . Thus, . This and imply . It concludes LHS RHS.
Next, we show LHS RHS. For any , there exists satisfying , , and . We denote and the distribution of by . Then,
Here, because satisfies , , , and . Thus, we have , which concludes the proof.
A.2.3 Proof of Lemma 6.
For any s.t. and , by the definition of the Wasserstein distance,
This implies that is greater than or equal to . Similarly, for any s.t. and ,
holds by and the definition of the Wasserstein distance, which implies .
A.2.4 Proof of Theorem 7.
By Lemma 6, it is sufficient to show that the distribution of , denoted by , is the constant-speed Wasserstein geodesic from to in . First, by , . By , , which implies . This and imply that . Next, for any , by the definition of the Wasserstein distance, , and Lemma 6, we have
Here, the last term is and it implies
Thus, equalities hold and it concludes the proof.
A.2.5 Proof of Lemma 9.
By Lemma 6, showing that is equal to is sufficient. By Proposition 4.2 in Agueh and Carlier (2011), for any measures vanishing on small sets (a measure defined on -dimensional spaces is said to vanish on small sets if it vanishes on -rectifiable sets (Gangbo and Święch, 1998; Agueh and Carlier, 2011)),
(9)
and hold where is the set of joint distributions whose marginals are , is the unique solution of the LHS, and is the unique solution of the RHS. By Equation (9),
A.2.6 Proof of Theorem 10.
We first derive a lemma for Theorem 10.
Lemma 17
Suppose the encoder, generator, and transport map satisfy conditions through . Then,
(10)
holds. When we further suppose the condition , is the Wasserstein barycenter of w.r.t. weights and all the terms in Equation (4) are equal to .
Proof First, we derive Equation (10). The second inequality is trivial by the definition of the infimum. For the first inequality, by Lemma 6, the definition of the Wasserstein distance, and Equation (9),
where is the unique solution of . Since
(11)
holds for any sequence , we have that is equal to . The first term in the RHS equals . This and Lemmas 6 and 9 imply
which concludes the first inequality by Lemma 6. For the last inequality, by the definition of the Wasserstein distance, . This, Equation (11), and Lemma 6 conclude the proof for the last inequality.
Next, we derive that the distribution of the latent interpolation result is the Wasserstein barycenter when holds. Since the Wasserstein variance of is zero, by Lemmas 6 and 9, it is sufficient to show that
(12)
By , the optimal transportation cost from to is zero, which implies that almost surely for all . Thus, for all . This implies that the LHS in Equation (12) equals to , which concludes the proof. For general -Wasserstein distances, holds.
Last, we derive that all the terms in Equation (10) are equal to when holds. Since for all , derived in the above paragraph, implies that , the lower bound becomes . Similarly, the for all implies that the upper bound becomes . Now, Equation (11) concludes the proof.
Now, we show Theorem 10. By Lemma 17, we have
By (A3), is equal to , which concludes the proof of Equation (4). The upper bound is zero and the distribution of latent interpolation result is the Wasserstein barycenter by Lemma 17.
A.2.7 Proof of Theorem 11.
By , the data generation structure can be expressed as where independent of . The Wasserstein barycenter of w.r.t. is the same as that of w.r.t. , which implies that is the Wasserstein barycenter. Now, Theorem 10 concludes the proof.
A.2.8 Proof of Theorem 12.
We denote the first, second, and third terms in objective (6) as follows.
•
•
•
By definition of the optimal transport map, and for all . Thus, is a minimizer. Let be a minimizer of objective (6). Then, and . By definition of the optimal transport map, it implies with probability w.r.t. for all . Thus, and it concludes that is the unique minimizer.
Appendix B Details on Experiments
B.1 Implementation Details
Encoder | Generator |
---|---|
Conv. with kernel 11x11, filter size 128, stride 1, padding 5 | ConvTran. with kernel 4x4, filter size 1024, stride 1, padding 0 |
BatchNormalization | BatchNormalization |
LeakyReLU with slope 0.2 | LeakyReLU with slope 0.2 |
Conv. with kernel 6x6, filter size 128, stride 2, padding 2 | ConvTran. with kernel 4x4, filter size 1024, stride 2, padding 1 |
BatchNormalization | BatchNormalization |
LeakyReLU with slope 0.2 | LeakyReLU with slope 0.2 |
Conv. with kernel 4x4, filter size 256, stride 2, padding 1 | ConvTran. with kernel 4x4, filter size 512, stride 2, padding 1 |
BatchNormalization | BatchNormalization |
LeakyReLU with slope 0.2 | LeakyReLU with slope 0.2 |
Conv. with kernel 4x4, filter size 512, stride 2, padding 1 | ConvTran. with kernel 4x4, filter size 256, stride 2, padding 1 |
BatchNormalization | BatchNormalization |
LeakyReLU with slope 0.2 | LeakyReLU with slope 0.2 |
Conv. with kernel 4x4, filter size 1024, stride 2, padding 1 | ConvTran. with kernel 4x4, filter size 128, stride 2, padding 1 |
BatchNormalization | BatchNormalization |
LeakyReLU with slope 0.2 | LeakyReLU with slope 0.2 |
Conv. with kernel 4x4, filter size 1024, stride 2, padding 1 | ConvTran. with kernel 6x6, filter size 128, stride 2, padding 2 |
BatchNormalization | BatchNormalization |
LeakyReLU with slope 0.2 | LeakyReLU with slope 0.2 |
Conv. with kernel 4x4, filter size 128, stride 1, padding 0 | ConvTran. with kernel 11x11, filter size 1, stride 1, padding 5 |
Sigmoid |
Transport map | Residual block |
---|---|
Conv. with kernel 11x11, filter size 64, stride 1, padding 5 | Conv. with kernel 3x3, filter size 256, stride 1, padding 1 |
BatchNormalization | BatchNormalization |
ReLU | ReLU |
Conv. with kernel 4x4, filter size 128, stride 2, padding 1 | Conv. with kernel 3x3, filter size 256, stride 1, padding 1 |
BatchNormalization | BatchNormalization |
ReLU | |
Conv. with kernel 4x4, filter size 256, stride 2, padding 1 | |
BatchNormalization | |
ReLU | |
Residual block 1 | |
Residual block 2 | |
Residual block 3 | |
Residual block 4 | |
Residual block 5 | |
Residual block 6 | |
ConvTrans. with kernel 4x4, filter size 128, stride 2, padding 1 | |
BatchNormalization | |
ReLU | |
ConvTrans. with kernel 4x4, filter size 64, stride 2, padding 1 | |
BatchNormalization | |
ReLU | |
ConvTrans. with kernel 11x11, filter size 1, stride 1, padding 5 | |
Sigmoid |
Discriminator for generator | Discriminator for transport map | Auxiliary regressor |
---|---|---|
Linear with filter size 512 | Conv. with kernel 4x4, filter size 64, stride 2, padding 1 | Conv. with kernel 4x4, filter size 64, stride 2, padding 1 |
ReLU | LeakyReLU with slope 0.01 | LeakyReLU with slope 0.01 |
Linear with filter size 512 | Conv. with kernel 4x4, filter size 128, stride 2, padding 1 | Conv. with kernel 4x4, filter size 128, stride 2, padding 1 |
ReLU | LeakyReLU with slope 0.01 | LeakyReLU with slope 0.01 |
Linear with filter size 512 | Conv. with kernel 4x4, filter size 256, stride 2, padding 1 | Conv. with kernel 4x4, filter size 256, stride 2, padding 1 |
ReLU | LeakyReLU with slope 0.01 | LeakyReLU with slope 0.01 |
Linear with filter size 512 | Conv. with kernel 4x4, filter size 512, stride 2, padding 1 | Conv. with kernel 4x4, filter size 512, stride 2, padding 1 |
ReLU | LeakyReLU with slope 0.01 | LeakyReLU with slope 0.01 |
Linear with filter size 1 | Conv. with kernel 4x4, filter size 1024, stride 2, padding 1 | Conv. with kernel 4x4, filter size 1024, stride 2, padding 1 |
Sigmoid | LeakyReLU with slope 0.01 | LeakyReLU with slope 0.01 |
Conv. with kernel 4x4, filter size 2048, stride 2, padding 1 | Conv. with kernel 4x4, filter size 2048, stride 2, padding 1 | |
LeakyReLU with slope 0.01 | LeakyReLU with slope 0.01 | |
Conv. with kernel 3x3, filter size 1, stride 1, padding 1 | Conv. with kernel 3x3, filter size 2, stride 1, padding 1 | |
Average pooling with kernel 2x2, stride 2 | Average pooling with kernel 2x2, stride 2 |
Our method consists of three main networks, the encoder, the generator, and the transport map, and three auxiliary networks, the discriminator for the generator, the discriminator for the transport map, and the auxiliary regressor. The architectures of the encoder and generator networks are adopted from DCGAN (Radford et al., 2015), and the architecture of the transport map is adopted from StarGAN (Choi et al., 2018). The architectures are modified to concatenate light conditions to the latent variables. Table 1 shows the architectures of the encoder and generator networks, where Conv and ConvTran denote a convolutional layer and a convolutional transpose layer, respectively. Table 2 shows the architecture of the transport map network. We apply skip connections that feed features from intermediate convolutional layers into the corresponding convolutional transpose layers. Table 3 shows the architectures of the discriminator for the generator, the discriminator for the transport map, and the auxiliary regressor.
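To make the table entries concrete, the sketch below shows, in PyTorch, a residual block matching the layers listed in Table 2 and the concatenation of the domain label to the latent variable before it enters the generator. This is a minimal illustration; the tensor dimensions, variable names, and the unconditional residual block are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Residual block matching Table 2: two 3x3 convolutions with batch normalization."""

    def __init__(self, channels: int = 256):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip connection: add the block input to its output.
        return x + self.block(x)


# Conditioning by concatenation: the generator input is the latent variable
# with the light-condition label appended (all dimensions here are illustrative).
latent_dim, label_dim, batch_size = 128, 2, 4
z = torch.randn(batch_size, latent_dim)         # latent variable
c = torch.rand(batch_size, label_dim)           # domain label, e.g., (azimuth, elevation)
generator_input = torch.cat([z, c], dim=1)      # shape: (batch_size, latent_dim + label_dim)

# Example pass through a residual block with 256-channel feature maps.
features = torch.randn(batch_size, 256, 16, 16)
out = ResidualBlock(256)(features)              # same shape as the input
```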
We control the size of the networks of the baselines for a fair comparison. For conditional AAE (cAAE, Makhzani et al., 2015), the architectures are the same as ours, except that the encoder and discriminator take only the latent variable as input. For CycleGAN (Zhu et al., 2017), the architectures are the same as ours. For StarGAN, the architectures are the same as ours, except that the translation map takes only the source data and target domain labels as input. The dimension of the latent variable is . For both the proposed method and the baselines, we train the encoder and generator pair for iterations with a batch size of , and train the transport map for iterations with a batch size of . We use the Adam optimizer (Kingma and Ba, 2014), set the initial learning rate to , and decrease it linearly to for the encoder and generator pair and to for the transport map. In the first step of training the encoder and generator pair, we update the encoder and generator once every iterations while updating the discriminator for the generator every iteration. In the second step of training the transport map, we update the transport map and the auxiliary regressor once every iterations while updating the discriminator for the transport map every iteration. For data pre-processing, we apply the face detection algorithm proposed by Viola and Jones (2001) to crop the face region. Images are resized to , pixel values are scaled to , and a horizontal flip with probability is applied during training.
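For reference, a minimal pre-processing sketch is given below, assuming OpenCV's Haar-cascade implementation of the Viola-Jones detector (Bradski, 2000); the cascade file, output resolution, grayscale assumption, and flip probability are illustrative choices, not necessarily the exact settings used in the experiments.

```python
from typing import Optional

import cv2
import numpy as np


def preprocess_face(image_path: str, out_size: int = 64) -> Optional[np.ndarray]:
    """Crop the largest face found by a Viola-Jones (Haar cascade) detector,
    resize it, and scale pixel values to [0, 1]."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = detector.detectMultiScale(image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
    face = cv2.resize(image[y:y + h, x:x + w], (out_size, out_size))
    return face.astype(np.float32) / 255.0


def random_horizontal_flip(image: np.ndarray, p: float = 0.5) -> np.ndarray:
    """Horizontal flip applied with probability p during training."""
    return np.ascontiguousarray(image[:, ::-1]) if np.random.rand() < p else image
```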
We denote the losses as follows and provide the values of their coefficients.
The coefficients of , , , , and are , , , , and , respectively. For , we consider and choose the model yielding the best validation FID score. The coefficients of the gradient penalty loss, the regression loss, and the reconstruction error in the second step, , are , , and , respectively. For CycleGAN, the coefficient of the identity mapping loss is . We extend the definition of by introducing a hyperparameter , where the extended formula is , to balance distances on and on .
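We assume the gradient penalty loss referenced above takes the standard form of Gulrajani et al. (2017); the sketch below shows that form with illustrative function names and tensor shapes, and the coefficient is applied to its output as described above.

```python
import torch


def gradient_penalty(discriminator, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    """Gradient penalty of Gulrajani et al. (2017): penalize deviations of the
    discriminator's gradient norm from 1 on random interpolates between real
    and generated samples (generic sketch, image-shaped inputs assumed)."""
    batch_size = real.size(0)
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interpolates = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = discriminator(interpolates)
    gradients, = torch.autograd.grad(
        outputs=scores.sum(), inputs=interpolates, create_graph=True)
    gradients = gradients.view(batch_size, -1)
    return ((gradients.norm(2, dim=1) - 1.0) ** 2).mean()
```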
B.2 Further Results
Figure 4 presents further conditional generation results for unobserved intermediate domains. As in Figure 1, the proposed method produces face images with clearer eyes, noses, and mouths than the baselines.
Figure 5 presents further latent interpolation results on unobserved intermediate domains, obtained with a real image and its translation result. The bottom row shows the ground-truth images of a fixed subject and pose from the two observed domains, with (azimuth, elevation) of , for the leftmost, and , for the rightmost. As in Figure 3 of the manuscript, the outputs of the proposed method are visually sharper and more plausible than those of the baselines.
B.3 Computing Infrastructure
We use about one hundred CPU cores and ten GPUs (five GeForce GTX 1080, two TITAN X, and three TITAN V) for experiments. A full training of the proposed method requires about GPU hours for the encoder and generator pair and GPU hours for the transport map.
References
- Agueh and Carlier (2011) M. Agueh and G. Carlier. Barycenters in the wasserstein space. SIAM Journal on Mathematical Analysis, 43(2):904–924, 2011.
- Alvarez-Melis and Fusi (2020) D. Alvarez-Melis and N. Fusi. Geometric dataset distances via optimal transport. Advances in Neural Information Processing Systems, 33:21428–21439, 2020.
- Antipov et al. (2017) G. Antipov, M. Baccouche, and J.-L. Dugelay. Face aging with conditional generative adversarial networks. In 2017 IEEE international conference on image processing (ICIP), pages 2089–2093. IEEE, 2017.
- Arjovsky et al. (2017) M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein generative adversarial networks. In Proceedings of the International Conference on Machine Learning (ICML), 2017.
- Bao et al. (2017) J. Bao, D. Chen, F. Wen, H. Li, and G. Hua. Cvae-gan: fine-grained image generation through asymmetric training. In Proceedings of the IEEE international conference on computer vision, pages 2745–2754, 2017.
- Bespalov et al. (2021) I. Bespalov, N. Buzun, O. Kachan, and D. V. Dylov. Data augmentation with manifold barycenters. arXiv preprint arXiv:2104.00925, 2021.
- Bespalov et al. (2022) I. Bespalov, N. Buzun, O. Kachan, and D. V. Dylov. Lambo: Landmarks augmentation with manifold-barycentric oversampling. IEEE Access, 10:117757–117769, 2022.
- Bishop (2006) C. M. Bishop. Pattern recognition and machine learning. springer, 2006.
- Bradski (2000) G. Bradski. The opencv library. Dr Dobb’s J. Software Tools, 25:120–125, 2000.
- Chao et al. (2016) W.-L. Chao, S. Changpinyo, B. Gong, and F. Sha. An empirical study and analysis of generalized zero-shot learning for object recognition in the wild. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, pages 52–68. Springer, 2016.
- Chen et al. (2016) X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in neural information processing systems, pages 2172–2180, 2016.
- Choi et al. (2018) Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8789–8797, 2018.
- Cisneros-Velarde and Bullo (2020) P. Cisneros-Velarde and F. Bullo. Distributed wasserstein barycenters via displacement interpolation. arXiv preprint arXiv:2012.08610, 2020.
- Craig (2016) K. Craig. The exponential formula for the wasserstein metric. ESAIM: Control, Optimisation and Calculus of Variations, 22(1):169–187, 2016.
- Csiszár (1964) I. Csiszár. Eine informationstheoretische ungleichung und ihre anwendung auf den beweis der ergodizität von markoffschen ketten. Magyar Tud. Akad. Mat. Kutato Int. Koezl., 8:85–108, 1964.
- Cuturi (2013) M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in neural information processing systems, 26:2292–2300, 2013.
- Fan and Alvarez-Melis (2023) J. Fan and D. Alvarez-Melis. Generating synthetic datasets by interpolating along generalized geodesics. arXiv preprint arXiv:2306.06866, 2023.
- Frid-Adar et al. (2018) M. Frid-Adar, E. Klang, M. Amitai, J. Goldberger, and H. Greenspan. Synthetic data augmentation using gan for improved liver lesion classification. In 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018), pages 289–293. IEEE, 2018.
- Gangbo and Święch (1998) W. Gangbo and A. Święch. Optimal maps for the multidimensional monge-kantorovich problem. Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences, 51(1):23–45, 1998.
- Georghiades et al. (2001) A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE transactions on pattern analysis and machine intelligence, 23(6):643–660, 2001.
- Goodfellow et al. (2014) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems (NeurIPS), 2014.
- Gulrajani et al. (2017) I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville. Improved training of wasserstein gans. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 5769–5779, 2017.
- Heusel et al. (2017) M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 6626–6637, 2017.
- Higgins et al. (2017) I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
- Huang et al. (2017) R. Huang, S. Zhang, T. Li, and R. He. Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis. In Proceedings of the IEEE international conference on computer vision, pages 2439–2448, 2017.
- Huguet et al. (2022) G. Huguet, A. Tong, M. R. Zapatero, G. Wolf, and S. Krishnaswamy. Geodesic sinkhorn: optimal transport for high-dimensional datasets. arXiv preprint arXiv:2211.00805, 2022.
- Isola et al. (2017) P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1125–1134, 2017.
- Kameoka et al. (2018) H. Kameoka, T. Kaneko, K. Tanaka, and N. Hojo. Acvae-vc: Non-parallel many-to-many voice conversion with auxiliary classifier variational autoencoder. arXiv preprint arXiv:1808.05092, 2018.
- Kim et al. (2017) T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim. Learning to discover cross-domain relations with generative adversarial networks. arXiv preprint arXiv:1703.05192, 2017.
- Kingma and Ba (2014) D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Kingma and Welling (2014) D. P. Kingma and M. Welling. Auto-encoding variational bayes. In Proceedings of the International Conference on Learning Representations (ICLR), 2014.
- Korotin et al. (2019) A. Korotin, V. Egiazarian, A. Asadulaev, A. Safin, and E. Burnaev. Wasserstein-2 generative networks. arXiv preprint arXiv:1909.13082, 2019.
- Korotin et al. (2021) A. Korotin, V. V’yugin, and E. Burnaev. Mixability of integral losses: A key to efficient online aggregation of functional and probabilistic forecasts. Pattern Recognition, 120:108175, 2021.
- Korotin et al. (2022) A. Korotin, V. Egiazarian, L. Li, and E. Burnaev. Wasserstein iterative networks for barycenter estimation. Advances in Neural Information Processing Systems, 35:15672–15686, 2022.
- Lin et al. (2020) T. Lin, N. Ho, X. Chen, M. Cuturi, and M. Jordan. Fixed-support wasserstein barycenters: Computational hardness and fast algorithm. Advances in neural information processing systems, 33:5368–5380, 2020.
- Liu et al. (2017) M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. In Advances in neural information processing systems, pages 700–708, 2017.
- Lu et al. (2019) G. Lu, Z. Zhou, Y. Song, K. Ren, and Y. Yu. Guiding the one-to-one mapping in cyclegan via optimal transport. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4432–4439, 2019.
- Makhzani et al. (2015) A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey. Adversarial autoencoders. In Proceedings of the International Conference on Learning Representations (ICLR), 2015.
- Martinet et al. (2022) G. G. Martinet, A. Strzalkowski, and B. Engelhardt. Variance minimization in the wasserstein space for invariant causal prediction. In International Conference on Artificial Intelligence and Statistics, pages 8803–8851. PMLR, 2022.
- McCann (1995) R. J. McCann. Existence and uniqueness of monotone measure-preserving maps. 1995.
- Mirza and Osindero (2014) M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
- Montesuma and Mboula (2021) E. F. Montesuma and F. M. N. Mboula. Wasserstein barycenter for multi-source domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 16785–16793, 2021.
- Mroueh (2020) Y. Mroueh. Wasserstein style transfer. In International conference on artificial intelligence and statistics, pages 842–852. PMLR, 2020.
- Odena et al. (2017) A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier gans. In International conference on machine learning, pages 2642–2651, 2017.
- Rabin et al. (2012) J. Rabin, G. Peyré, J. Delon, and M. Bernot. Wasserstein barycenter and its application to texture mixing. In Scale Space and Variational Methods in Computer Vision: Third International Conference, SSVM 2011, Ein-Gedi, Israel, May 29–June 2, 2011, Revised Selected Papers 3, pages 435–446. Springer, 2012.
- Rabin et al. (2014) J. Rabin, S. Ferradans, and N. Papadakis. Adaptive color transfer with relaxed optimal transport. In 2014 IEEE international conference on image processing (ICIP), pages 4852–4856. IEEE, 2014.
- Radford et al. (2015) A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
- Ramesh et al. (2021) A. Ramesh, M. Pavlov, G. Goh, S. Gray, C. Voss, A. Radford, M. Chen, and I. Sutskever. Zero-shot text-to-image generation. In International Conference on Machine Learning, pages 8821–8831. PMLR, 2021.
- Reed et al. (2016) S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In International Conference on Machine Learning, pages 1060–1069. PMLR, 2016.
- Santambrogio (2015) F. Santambrogio. Optimal transport for applied mathematicians. Birkhäuser, NY, 55(58-63):94, 2015.
- Shao et al. (2019) S. Shao, P. Wang, and R. Yan. Generative adversarial networks for data augmentation in machine fault diagnosis. Computers in Industry, 106:85–93, 2019.
- Sohn et al. (2015) K. Sohn, H. Lee, and X. Yan. Learning structured output representation using deep conditional generative models. In Advances in neural information processing systems, pages 3483–3491, 2015.
- Solomon et al. (2015) J. Solomon, F. De Goes, G. Peyré, M. Cuturi, A. Butscher, A. Nguyen, T. Du, and L. Guibas. Convolutional wasserstein distances: Efficient optimal transportation on geometric domains. ACM Transactions on Graphics (ToG), 34(4):1–11, 2015.
- Srivastava et al. (2018) S. Srivastava, C. Li, and D. B. Dunson. Scalable bayes via barycenter in wasserstein space. The Journal of Machine Learning Research, 19(1):312–346, 2018.
- Szegedy et al. (2016) C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826, 2016.
- Tolstikhin et al. (2018) I. Tolstikhin, O. Bousquet, S. Gelly, and B. Schölkopf. Wasserstein auto-encoders. In Proceedings of the International Conference on Learning Representations (ICLR), 2018.
- Viola and Jones (2001) P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition. CVPR 2001, volume 1, pages I–I. IEEE, 2001.
- Wang et al. (2018) Z. Wang, X. Tang, W. Luo, and S. Gao. Face aging with identity-preserved conditional generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7939–7947, 2018.
- Xian et al. (2018) Y. Xian, T. Lorenz, B. Schiele, and Z. Akata. Feature generating networks for zero-shot learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5542–5551, 2018.
- Xie et al. (2019) Y. Xie, M. Chen, H. Jiang, T. Zhao, and H. Zha. On scalable and efficient computation of large scale optimal transport. In International Conference on Machine Learning, pages 6882–6892. PMLR, 2019.
- Xu et al. (2018) T. Xu, P. Zhang, Q. Huang, H. Zhang, Z. Gan, X. Huang, and X. He. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1316–1324, 2018.
- Zhang et al. (2017a) H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE international conference on computer vision, pages 5907–5915, 2017a.
- Zhang et al. (2017b) Z. Zhang, Y. Song, and H. Qi. Age progression/regression by conditional adversarial autoencoder. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5810–5818, 2017b.
- Zhao et al. (2018) C. Zhao, C. Chen, Z. He, and Z. Wu. Application of auxiliary classifier wasserstein generative adversarial networks in wireless signal classification of illegal unmanned aerial vehicles. Applied Sciences, 8(12):2664, 2018.
- Zhou et al. (2022) Z. Zhou, Z. Gong, P. Ravikumar, and D. I. Inouye. Iterative alignment flows. In International Conference on Artificial Intelligence and Statistics, pages 6409–6444. PMLR, 2022.
- Zhu et al. (2023) J. Zhu, J. Qiu, A. Guha, Z. Yang, X. Nguyen, B. Li, and D. Zhao. Interpolation for robust learning: Data augmentation on geodesics. arXiv preprint arXiv:2302.02092, 2023.
- Zhu et al. (2017) J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pages 2223–2232, 2017.