
Uniform Interpolation Constrained Geodesic Learning on Data Manifold

Cong Geng, Jia Wang, Li Chen, Wenbo Bao, Chu Chu, Zhiyong Gao
Institute of Image Communication and Network Engineering
Shanghai Jiao Tong University, Shanghai, China
{gengcong,jiawang,hilichen,baowenbo,chu_chu,zhiyong.gao}@sjtu.edu.cn
Abstract

In this paper, we propose a method to learn a minimizing geodesic within a data manifold. Along the learned geodesic, our method is able to generate high-quality interpolations between two given data samples. Specifically, we use an autoencoder network to map data samples into the latent space and perform interpolation via an interpolation network. We add prior geometric information to regularize our autoencoder for the convexity of representations, so that for any given interpolation approach the generated interpolations remain within the distribution of the data manifold. The Riemannian metric on the data manifold is induced by the canonical metric in the Euclidean space in which the data manifold is isometrically immersed. Based on this Riemannian metric, we introduce a constant-speed loss and a minimizing geodesic loss to regularize the interpolation network to generate uniform interpolations along the learned geodesic on the manifold. We provide a theoretical analysis of our model and use image interpolation as an example to demonstrate the effectiveness of our method.

1 Introduction

With the success of deep learning in approximating complex non-linear functions, representing data manifolds with neural networks has received considerable interest and has been studied from multiple perspectives. Autoencoders (AE) [14], generative adversarial networks (GAN) [13] and their combinations [16, 22] are notable generative models for learning the geometric structure or the distribution of data on the manifold. However, they usually fail to generate a convex latent representation when confronted with even simple curved manifolds such as the swiss-roll and S-curve. For geodesic learning, Arvanitidis et al. [3, 24] and Chen et al. [10, 9] both use a magnification factor [7] to help find, as the learned geodesic, the shortest path that follows the regions of high data density in the latent space. However, these methods explore geodesics in the latent space rather than in the data space and lack a strict mathematical guarantee that the resulting curves are geodesics.

In this paper, we propose a method to capture the geometric structure of a data manifold and to find smooth geodesics between data samples on the manifold. First, motivated by DIMAL [17], we introduce a framework combining an autoencoder with traditional manifold learning methods to explore the geometric structure of a data manifold. In contrast to existing combinations of autoencoders and GANs, we resort to classical manifold learning algorithms to obtain an approximation of the geodesic distance, and the encoder and decoder are then optimized with this approximation as a constraint. In this way, we alleviate the problem that the latent distribution is non-convex on curved manifolds, which means our encoder can unfold the data manifold into flattened latent representations.

Second, we propose a geodesic learning method to establish smooth geodesics between data samples on a data manifold. The Riemannian metric on the data manifold can be induced by the canonical metric in the Euclidean space in which the manifold is isometrically immersed. Existing methods cannot guarantee a uniform interpolation along the geodesic. We parameterize the generated curve by a parameter $t$ and propose a constant-speed loss to achieve a uniform interpolation by making $t\in[0,1]$ an arc-length parameter of the interpolation curve. Following Riemannian geometry [8], we propose a geodesic loss to force the interpolation network to generate points along a geodesic. Another key ingredient is a minimizing loss that makes the geodesic the minimizing one. Our contributions are summarized as follows:

  • We propose a framework in which an autoencoder and an interpolation network are introduced to explore the manifold structure. The autoencoder is regularized with prior geometric information to promote convex latent representations, so that the generated samples remain within the distribution of the data manifold.

  • We propose a constant-speed loss and a minimizing geodesic loss for the interpolation network to generate the minimizing geodesic on the manifold given two endpoints. We parameterize each geodesic by an arc-length parameter $t$, so that we can generate points moving along the minimizing geodesic with constant speed and thus fulfill a uniform interpolation.

2 Related Work

Traditionally, manifold learning methods aim to infer latent representations that can capture the intrinsic geometric structure of data. They are generally applied in dimensionality reduction and data visualization. Classical nonlinear manifold learning algorithms such as Isomap [21], locally linear embedding [18] and local tangent space alignment (LTSA) [25] preserve local structures in small neighborhoods and derive the intrinsic features of data manifolds.

Interpolation between data samples has attracted wide attention because it achieves a smooth transition from one sample to another, for example in intermediate face generation and path planning. Several works focus on interpolation on data manifolds. Agustsson et al. [2] propose to use distribution matching transport maps to obtain a latent space interpolation that preserves the prior distribution of the latent space while minimally changing the original operation. This generates interpolated samples of higher quality, but it lacks an encoder to map the given data samples to the latent space. Several other works [5, 19, 12] that combine autoencoders with GANs have been proposed to generate interpolated points on a data manifold, but, limited by the ability of GANs, they usually fail to generate a convex latent representation when confronted with curved manifolds such as the swiss-roll and S-curve. To avoid this problem, Arvanitidis et al. [3, 24] and Chen et al. [10] both show that the magnification factor [7] can be seen as a measure of the local distortion of the latent space, which helps the generated geodesics follow the regions of high data density in the latent space. Chen et al. [9] instead find the shortest path in a finite graph of samples as the geodesic on the data manifold; this finite graph is built in the latent space using a binary tree data structure with edge weights based on Riemannian distances.

3 Preliminaries

We briefly present the essentials of Riemannian geometry; for a thorough presentation, see [8]. Suppose $M$ is a differentiable manifold and $\alpha:(-\epsilon,\epsilon)\to M$ is a differentiable curve in $M$ with $\alpha(0)=p\in M$. The tangent vector to the curve $\alpha$ at $t=0$ is the function $\alpha^{\prime}(0)$ acting on an arbitrary differentiable function $f$ on $M$ by $\alpha^{\prime}(0)f=\frac{d(f\circ\alpha)}{dt}\big|_{t=0}$. The set of all tangent vectors to $M$ at $p$ is the tangent space $T_{p}M$. Let $(U,x)$ be a parametrization of $M$ at $p$ and, for every $q\in U$, let $x^{-1}(q)=(z^{1}(q),z^{2}(q),\cdots,z^{n}(q))$ be the local coordinates of $q$. The tangent vector at $p$ of the coordinate curve $z^{i}$ is then defined as $\frac{\partial}{\partial z^{i}}\big|_{p}(f)=\frac{\partial(f\circ x)}{\partial z^{i}}\big|_{p}$. The set $\{\frac{\partial}{\partial z^{i}}\big|_{p},1\leqslant i\leqslant n\}$ spans the tangent space $T_{p}M$ at $p$; if these tangent vectors are mutually orthonormal, they form an orthonormal basis of the tangent space.

Let $M^{m}$ and $N^{n}$ be two differentiable manifolds. Given a differentiable mapping $\varphi:M\to N$, the differential of $\varphi$ at $p$ is the linear mapping $d\varphi_{p}:T_{p}M\to T_{\varphi(p)}N$ given by $d\varphi_{p}(v)(f)=v(f\circ\varphi)$ for every $f\in C_{\varphi(p)}^{\infty}$. If $d\varphi_{p}:T_{p}M\to T_{\varphi(p)}N$ is injective for all $p\in M$, then the mapping $\varphi$ is said to be an immersion.

A Riemannian manifold is a differentiable manifold equipped with a Riemannian metric, which assigns to each point $p$ of $M$ an inner product $\langle\cdot,\cdot\rangle_{p}$ (i.e., a symmetric, bilinear and positive-definite form) on the tangent space $T_{p}M$. Furthermore, an isometric immersion $\varphi:M\to N$ is an immersion satisfying $\langle u,v\rangle_{p}=\langle d\varphi_{p}(u),d\varphi_{p}(v)\rangle_{\varphi(p)}$ for every $p\in M$ and all $u,v\in T_{p}M$.

A geodesic is a parametrized curve $\gamma:I\to M$ satisfying $\frac{D}{dt}\big(\frac{d\gamma}{dt}\big)=0$ for all $t\in I$, where $\frac{D}{dt}$ denotes the covariant derivative of a vector field along $\gamma$ [8] and $\frac{d\gamma}{dt}$ is the tangent vector at $t$.

4 Manifold Reconstruction

The original AE and some combinations of AE and GANs can generate high-quality samples from specific latent distributions on high-dimensional data manifolds, but they fail to obtain convex embeddings on curved surfaces with changing curvature such as the swiss-roll or S-curve. Some other works [5, 19, 12] were proposed to generate interpolations within the distribution of real data by distinguishing interpolations from real samples, but they generate unsatisfying samples on the above-mentioned low-dimensional manifolds. The encoding results can be seen in Fig. 1. A likely reason is the insufficient capacity of GANs and of the AE decoder: the discriminator of a GAN can only measure the similarity between the distributions of generated and real samples, and autoencoders are prone to keep the curvature information of the data manifold in the latent representations to keep the network as simple as possible. The latent embeddings therefore also exhibit changing curvature in order to preserve this curvature information, which may result in non-convex representations.

In our method, we add prior geometric information obtained by traditional manifold learning approaches to encourage a convex latent representation. Traditional nonlinear manifold learning approaches such as Isomap [21], LLE [18] and LTSA [25] are classical algorithms that obtain a convex embedding by unfolding curved surfaces. We apply them to our method as a regularization of the autoencoder. The loss function of the autoencoder can be written as:

LAE=D(E(x))x+1M2i,j(E(xi)E(xj)2DKSij)2,L_{AE}=\big{\|}D\big{(}E(x)\big{)}-x\big{\|}+\frac{1}{M^{2}}\sum_{i,j}\Big{(}\big{\|}E(x_{i})-E(x_{j})\big{\|}_{2}-DKS_{ij}\Big{)}^{2}, (1)

where $x$ is the input sampled on the data manifold, $M$ denotes the number of inputs, and $DKS_{ij}$ represents the approximated geodesic distance between $E(x_{i})$ and $E(x_{j})$. The encoder $E$ and decoder $D$ of the autoencoder are trained to minimize the above loss $L_{AE}$. $DKS$ can be obtained by building a graph from the data with k-nearest neighbors for each node and computing the corresponding pairwise geodesic distances using traditional manifold learning algorithms; for more details we refer the interested reader to [21, 18, 25]. With $DKS_{ij}$ as the expected approximation, the encoder is forced towards a convex latent representation while the decoder is forced to learn the curvature information removed from the latent embeddings. The behavior induced by the $L_{AE}$ loss can be observed in Fig. 1: the swiss-roll is flattened onto the 2-dimensional latent space. For our experiments, we choose LTSA to compute the approximated geodesic distance for regularizing the autoencoder, because it learns the local geometry by treating the neighborhood of a data point as a tangent space when flattening the manifold.
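To make Eq. (1) concrete, a minimal PyTorch-style sketch is given below. The k-nearest-neighbor graph and graph shortest paths stand in for the geodesic-distance approximation $DKS$ (in our experiments LTSA is used for this step, as noted above), and the helper names `approx_geodesic_distances` and `ae_loss` are illustrative rather than part of the original implementation.

```python
import torch
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def approx_geodesic_distances(X_np, k=10):
    # Isomap-style approximation of DKS: pairwise shortest-path distances
    # on a k-nearest-neighbor graph built from the data points.
    graph = kneighbors_graph(X_np, n_neighbors=k, mode="distance")
    return torch.as_tensor(shortest_path(graph, directed=False), dtype=torch.float32)

def ae_loss(encoder, decoder, x, dks):
    # Reconstruction term plus the pairwise distance regularizer of Eq. (1).
    z = encoder(x)                                # (M, m) latent codes E(x_i)
    recon = (decoder(z) - x).norm(dim=1).mean()   # ||D(E(x)) - x||
    latent_dist = torch.cdist(z, z)               # ||E(x_i) - E(x_j)||_2
    reg = ((latent_dist - dks) ** 2).mean()       # averaged over all M^2 pairs
    return recon + reg
```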

Figure 1: The encoding results of different methods for the swiss-roll: (a) swiss-roll, (b) AAE [16], (c) GAIA [19], (d) ACAI [5], (e) Chen [12], (f) ours.

5 Geodesic Learning

In our model, we denote the data manifold by $\mathcal{X}$. The Riemannian metric on $\mathcal{X}$ is induced from the canonical metric on the Euclidean space $\mathbb{R}^{N}$, which guarantees that the immersion is an isometric immersion. Thus, to obtain a geodesic on the manifold $\mathcal{X}$, we can use the Riemannian geometry of $\mathbb{R}^{N}$ and the properties of isometric immersions.

5.1 Interpolation Network

We produce geodesics on the manifold $\mathcal{X}$ by interpolating in the latent space and decoding the result into the data space. The interpolation in the latent space can take different forms; the simplest is linear interpolation, $z=(1-t)\cdot z_{1}+t\cdot z_{2}$. For geodesic learning, however, linear interpolation is not applicable in most situations. Yang et al. [24] propose to use a restricted class of quadratic functions, and Chen et al. [10] apply a neural network to parameterize the geodesic curves. For our interpolation network, we employ polynomial functions similar to Yang's approach. The difference is that we employ cubic functions to parameterize the interpolants, considering the diversity of latent representations, i.e., $c(t)=at^{3}+bt^{2}+ct+d$. Therefore, a curve generated by our interpolation network has four $m$-dimensional free parameter vectors $a,b,c$ and $d$, where $m$ is the dimension of the latent coordinates. In practice, we train a geodesic curve $c(t)$ that connects two pre-specified points $z_{0}$ and $z_{1}$, so the function is constrained to satisfy $c(0)=z_{0}$ and $c(1)=z_{1}$. We initialize our interpolation network by setting $a=0$ and $b=0$, so that the initial interpolation is a linear interpolation, and perform the optimization using gradient descent. More details can be found in Yang's paper [24].
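For illustration, the cubic parameterization and endpoint constraints could be implemented as in the sketch below; the class name `CubicInterpolant` and the choice to derive $c$ and $d$ from the endpoint constraints (so that only $a$ and $b$ are free during training) are our own assumptions, not necessarily the original implementation.

```python
import torch

class CubicInterpolant(torch.nn.Module):
    # Cubic latent curve c(t) = a t^3 + b t^2 + c t + d with c(0) = z0 and c(1) = z1.
    # a and b are free m-dimensional parameters; c and d are derived from the
    # endpoint constraints so that the constraints hold throughout training.
    def __init__(self, z0, z1):
        super().__init__()
        self.z0, self.z1 = z0, z1
        self.a = torch.nn.Parameter(torch.zeros_like(z0))  # a = 0 at initialization
        self.b = torch.nn.Parameter(torch.zeros_like(z0))  # b = 0 -> linear curve

    def forward(self, t):
        # t: (n,) sampling times in [0, 1]; returns (n, m) latent points c(t_i).
        d = self.z0                                  # enforces c(0) = z0
        c = self.z1 - self.z0 - self.a - self.b      # enforces c(1) = z1
        t = t.view(-1, 1)
        return self.a * t**3 + self.b * t**2 + c * t + d
```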

5.2 Constant-Speed Loss

We can produce interpolations along a curve on the manifold $\mathcal{X}$ by decoding the output $c(t)$ of the interpolation network as $t$ goes from 0 to 1. We expect the parameter $t$ to be an arc-length parameter, which means that $t$ is proportional to the arc length of the curve $\gamma(t)$. The following theorem provides the theoretical support to realize this.

Theorem 5.2.1.

Suppose $\mathcal{X}\subset\mathbb{R}^{N}$ is a Riemannian differentiable manifold whose Riemannian metric is induced from the canonical metric on $\mathbb{R}^{N}$. If $\gamma:I\to\mathcal{X}$ is a geodesic on $\mathcal{X}$ and $[x_{1}(t),x_{2}(t),\cdots,x_{N}(t)]$ is the Cartesian coordinate of $\gamma(t)$ in $\mathbb{R}^{N}$, then $\sqrt{\sum_{i}\big(\frac{dx_{i}(t)}{dt}\big)^{2}}$ is a constant for all $t\in I$.

As stated in Theorem 5.2.1, the length of the tangent vector along a geodesic is constant. Letting $G(t)=D(c(t))$ denote the output of the decoder applied to the interpolation curve, we design the following constant-speed loss:

Lconspeed=1ni=1n(G(ti)2meant(G(t)2)1)2,L_{conspeed}=\frac{1}{n}\sum_{i=1}^{n}\Big{(}\frac{\|G^{\prime}(t_{i})\|_{2}}{mean_{t}\big{(}\|G^{\prime}(t)\|_{2}\big{)}}-1\Big{)}^{2}, (2)

where $n$ denotes the number of sampling points and $mean_{t}\big(\|G^{\prime}(t)\|_{2}\big)=\frac{1}{n}\sum_{i=1}^{n}\|G^{\prime}(t_{i})\|_{2}$. $G^{\prime}(t)$ denotes the derivative of the output $G(t)$ with respect to $t$; it can be viewed as the velocity at $t$ along $G(t)$, and $\|G^{\prime}(t)\|_{2}$ represents the magnitude of this velocity, corresponding to $\sqrt{\sum_{i}\big(\frac{dx_{i}(t)}{dt}\big)^{2}}$ in Theorem 5.2.1. The constant-speed loss therefore encourages $G(t)$ to move with constant speed as $t$ goes from 0 to 1.
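A direct sketch of Eq. (2) is given below, assuming the velocities $G^{\prime}(t_{i})$ have already been estimated, e.g. with the finite differences of Eq. (6); the function name is illustrative.

```python
import torch

def constant_speed_loss(G_dot):
    # G_dot: (n, N) estimates of G'(t_i) in data space.
    speed = G_dot.norm(dim=1)                          # ||G'(t_i)||_2
    return ((speed / speed.mean() - 1.0) ** 2).mean()  # Eq. (2)
```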

5.3 Minimizing Geodesic Loss

After guaranteeing that the output curve $G(t)$ of our decoder has a constant speed, we need the curve $G(t)$ to be a geodesic. The following theorem gives a criterion for a curve to be a geodesic.

Theorem 5.3.1.

Suppose $\mathcal{X}\subset\mathbb{R}^{N}$ is a Riemannian differentiable manifold whose Riemannian metric is induced from the canonical metric on $\mathbb{R}^{N}$. Let $\gamma:I\to\mathcal{X}$ be a curve on $\mathcal{X}$ and let $(U,h)$ be a system of coordinates of $\mathcal{X}$ with $\gamma(t)\subset h(U)$. If $[z^{1},z^{2},\cdots,z^{m}]$ is the local coordinate of $h(U)$ and $[x_{1}(t),x_{2}(t),\cdots,x_{N}(t)]$ is the Cartesian coordinate of $\gamma(t)$ in $\mathbb{R}^{N}$, then $\gamma(t)$ is a geodesic on $\mathcal{X}$ if and only if:

[x_{1}^{\prime\prime}(t),x_{2}^{\prime\prime}(t),\cdots,x_{N}^{\prime\prime}(t)]\cdot\Big[\frac{\partial h}{\partial z^{1}}\Big|_{\gamma(t)},\frac{\partial h}{\partial z^{2}}\Big|_{\gamma(t)},\cdots,\frac{\partial h}{\partial z^{m}}\Big|_{\gamma(t)}\Big]=\bm{0},\quad\text{for all }t\in I. (3)

Theorem 5.3.1 states that a curve is a geodesic if and only if its second derivative with respect to the parameter $t$ is orthogonal to the tangent space. In practice, we can assume our encoder maps a point on the manifold $\mathcal{X}$ to its local coordinates. Based on Theorem 5.3.1, we therefore optimize the following geodesic loss:

Lgeo=1ni=1nG′′(ti)D(c(ti))2,L_{geo}=\frac{1}{n}\sum_{i=1}^{n}\big{\|}G^{\prime\prime}(t_{i})\cdot D^{\prime}\big{(}c(t_{i})\big{)}\big{\|}_{2}, (4)

where $D$ is the decoder function and $D^{\prime}\big(c(t_{i})\big)$ denotes the derivative of $D$ at $c(t_{i})$. $G^{\prime\prime}(t)$ is an $N$-dimensional vector corresponding to $[x_{1}^{\prime\prime}(t),x_{2}^{\prime\prime}(t),\cdots,x_{N}^{\prime\prime}(t)]$, and $D^{\prime}\big(c(t)\big)$ is an $N\times m$ matrix corresponding to $\big[\frac{\partial h}{\partial z^{1}}\big|_{\gamma(t)},\frac{\partial h}{\partial z^{2}}\big|_{\gamma(t)},\cdots,\frac{\partial h}{\partial z^{m}}\big|_{\gamma(t)}\big]$ in Theorem 5.3.1. The geodesic loss and the constant-speed loss jointly force the curve $G(t)$ to have zero acceleration; that is, $G(t)$ is a geodesic on the data manifold. However, the geodesic connecting two points may not be unique, for example on a sphere. According to Theorem 1 in the supplementary materials, the minimizing geodesic is the curve of minimal length connecting two points. Our model therefore also minimizes the curve length to ensure that $G(t)$ is a minimizing geodesic. We approximate the curve length by the summation of the velocities at $t_{i}$. The minimizing loss is proposed to minimize the curve length as:

Lmin=i=1nG(ti)2.L_{min}=\sum_{i=1}^{n}\big{\|}G^{\prime}(t_{i})\big{\|}_{2}. (5)
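As a sketch, Eqs. (4) and (5) might be implemented as follows. This assumes the decoder maps a single $m$-dimensional latent vector to an $N$-dimensional data point, uses PyTorch's `torch.autograd.functional.jacobian` for $D^{\prime}(c(t_{i}))$, and treats that Jacobian as a constant so that gradients flow through $G^{\prime\prime}$ and $G^{\prime}$ only; the function name is illustrative.

```python
import torch

def geodesic_and_min_losses(decoder, c_t, G_ddot, G_dot):
    # c_t: (n, m) latent points c(t_i); G_dot, G_ddot: (n, N) derivative estimates.
    L_min = G_dot.norm(dim=1).sum()            # Eq. (5): approximate curve length
    L_geo = 0.0
    for i in range(c_t.shape[0]):
        # Jacobian of the decoder at c(t_i): the N x m matrix D'(c(t_i)).
        J = torch.autograd.functional.jacobian(decoder, c_t[i])
        L_geo = L_geo + (G_ddot[i] @ J).norm() # Eq. (4): ||G''(t_i) · D'(c(t_i))||_2
    return L_geo / c_t.shape[0], L_min
```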

For implementation, we use the following difference approximation to reduce the computational burden:

G^{\prime}(t)\approx\frac{G(t+\Delta t)-G(t-\Delta t)}{2\Delta t},\quad G^{\prime\prime}(t)\approx\frac{G(t+\Delta t)+G(t-\Delta t)-2G(t)}{\Delta t^{2}}. (6)
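For example, Eq. (6) can be wrapped in a small helper such as the one below, where `G` is assumed to be the composition of the decoder and the interpolation curve and the step size $\Delta t$ is an arbitrary choice.

```python
import torch

def finite_differences(G, t, dt=1e-2):
    # Central differences of Eq. (6); G maps a batch of times t to data-space points.
    G_plus, G_minus, G_0 = G(t + dt), G(t - dt), G(t)
    G_dot = (G_plus - G_minus) / (2 * dt)
    G_ddot = (G_plus + G_minus - 2 * G_0) / dt**2
    return G_dot, G_ddot
```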

For $D^{\prime}\big(c(t)\big)$, we can either use the Jacobian of the decoder as computed by PyTorch or a difference approximation similar to that of $G^{\prime}(t)$. To summarize this part, the overall loss function of our interpolation network is:

Ltotal=λ1Lconspeed+λ2Lgeo+λ3Lmin,L_{total}=\lambda_{1}L_{conspeed}+\lambda_{2}L_{geo}+\lambda_{3}L_{min}, (7)

where $\lambda_{1}$, $\lambda_{2}$ and $\lambda_{3}$ are the weights balancing the three losses. Their default setting is $\lambda_{1}=1$, $\lambda_{2}=0.001$ and $\lambda_{3}=10$, which ensures robust performance. Under this loss constraint, we can generate interpolations moving along the minimizing geodesic with constant speed and thus fulfill a uniform interpolation. The overall geodesic learning procedure is given in Algorithm 1.

Algorithm 1 Geodesic Learning
Input: two random points $x_{0}$, $x_{1}$ from the dataset; $N$ sampling times $t_{i}=\frac{i-1}{N-1}$ $(1\leq i\leq N)$; the number of iterations of the interpolation network $n_{iter}$; the weight parameters $\lambda_{1},\lambda_{2},\lambda_{3}$; the time step $\Delta t$.
1: Train an autoencoder by optimizing Eq. (1);
2: Obtain latent embeddings with the encoder $E$ of the trained autoencoder: $z_{0}=E(x_{0})$, $z_{1}=E(x_{1})$;
3: Initialize the interpolation network by setting $a=0,b=0,c=z_{1}-z_{0},d=z_{0}$;
4: for $j=1$ to $n_{iter}$ do
5:     Obtain the latent representations of the interpolations $c(t_{i})$ using the interpolation network.
6:     Decode $c(t_{i})$ using the decoder $D$ to get $G(t_{i})=D(c(t_{i}))$ $(1\leq i\leq N)$.
7:     Based on Eq. (6), compute $G^{\prime}(t_{i})$ and $G^{\prime\prime}(t_{i})$ with $\Delta t$.
8:     Compute the overall loss $L_{total}$ using Eq. (7).
9:     Update the interpolation network's parameters $a,b,c,d$ with the RMSProp optimizer.
10: end for
11: return interpolations $G(t_{i})$
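Putting the pieces together, Algorithm 1 could be realized roughly as in the sketch below, which reuses the illustrative helpers from the previous sections (`CubicInterpolant`, `finite_differences`, `constant_speed_loss`, `geodesic_and_min_losses`); the hyper-parameter values and helper names are assumptions rather than the exact original implementation.

```python
import torch

def learn_geodesic(encoder, decoder, x0, x1, n_iter=3000, n_pts=16,
                   lam1=1.0, lam2=1e-3, lam3=10.0, dt=1e-2, lr=1e-3):
    # Step 1 (training the autoencoder with Eq. (1)) is assumed to be done already.
    z0, z1 = encoder(x0), encoder(x1)                   # latent endpoints
    curve = CubicInterpolant(z0.detach(), z1.detach())  # a = b = 0 -> linear init
    opt = torch.optim.RMSprop(curve.parameters(), lr=lr)
    t = torch.linspace(0.0, 1.0, n_pts)
    G = lambda s: decoder(curve(s))                     # G(t) = D(c(t))
    for _ in range(n_iter):
        G_dot, G_ddot = finite_differences(G, t, dt)    # Eq. (6)
        L_geo, L_min = geodesic_and_min_losses(decoder, curve(t), G_ddot, G_dot)
        loss = lam1 * constant_speed_loss(G_dot) + lam2 * L_geo + lam3 * L_min
        opt.zero_grad(); loss.backward(); opt.step()
    return G(t).detach()                                # uniform interpolations G(t_i)
```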

6 Experimental Results

In this section, we present experiments on the geodesic generation and image interpolation to demonstrate the effectiveness of our method.

6.1 Geodesic Generation

First, we do experiments on 3-dimensional datasets since their geodesics can be better visualized. We choose the semi-sphere and swiss-roll as our data manifolds.

Figure 2: Results of different interpolation methods on the semi-sphere dataset: (a) AAE [16], (b) A&H [4], (c) Chen's method [10], (d) ours.

Figure 3: Results of different interpolation methods on the swiss-roll dataset: (a) AAE [16], (b) ACAI [5], (c) GAIA [19], (d) Chen's method [12], (e) ours.

Semi-sphere dataset. We randomly sample 4,956 points subject to the uniform distribution on the semi-sphere. In Fig. 2, we compare our approach with other interpolation methods, i.e., AAE [16], A&H [4] and Chen's method [10]. From Riemannian geometry, we know that under our defined Riemannian metric the geodesic on a sphere is an arc of a great circle, whose center is the center of the sphere. AAE cannot guarantee that the curve connecting two points is a geodesic. A&H can find the shortest path connecting two corresponding reconstructed endpoints, but the endpoints are inconsistent with the original inputs due to the uncertainty of their VAE network and their stochastic Riemannian metric. Chen's method can generate interpolations along a geodesic, but it cannot fulfill a uniform interpolation. In Fig. 2, we observe that our method generates a uniform interpolation along a fairly accurate geodesic on the semi-sphere under the defined Riemannian metric.

Table 1: The estimated distance of the interpolation curve trained with different losses proposed in our method.
Loss | Distance
Linear | 1.3110
$L_{conspeed}$ | 1.4710
$L_{min}$ | 1.1571
$L_{conspeed}+L_{min}$ | 1.1277
$L_{conspeed}+L_{geo}$ | 6.0460
$L_{conspeed}+L_{geo}+L_{min}$ | 1.1028
Real geodesic | 1.0776

Swiss-roll dataset. We choose the swiss-roll to demonstrate the effectiveness of our method on manifolds with large curvature variations. We randomly sample 5,000 points subject to the uniform distribution on the swiss-roll manifold. We compare our approach with AAE [16], ACAI [5], GAIA [19] and the method of [12] that applies a GAN in feature space. Fig. 3 shows the experimental results. We observe that, except for our approach, the other methods fail to generate interpolations within the data manifold because they cannot encode latent embeddings that are suitable for linear interpolation.

We conduct ablation studies on the semi-sphere dataset to investigate the effect of the different losses proposed in our method: the constant-speed loss, the geodesic loss, and the minimizing loss. Fig. 4 presents the results obtained with different combinations of these losses. We observe that the constant-speed loss promotes a uniform interpolation. Compared with linear interpolation, our network generates a shorter path that is close to the real geodesic even without the geodesic loss; although the network quickly converges to a fairly satisfactory result, the accuracy of the geodesic is not optimal. When the geodesic loss is incorporated, the interpolated points are fine-tuned to move along an accurate geodesic. The estimated curve lengths in Table 1 support this observation: the curve trained with all three losses yields the best result, and its estimated length of 1.1028 is the closest to the length of the real geodesic, namely 1.0776.

Figure 4: Ablation study with different combinations of the losses proposed by our method: (a) linear interpolation, (b) $L_{conspeed}$, (c) $L_{min}$, (d) $L_{conspeed}+L_{min}$, (e) $L_{conspeed}+L_{geo}$, (f) $L_{total}$.
Figure 5: Top: decoded images along linear (bottom row) and geodesic (top row) interpolants. Bottom: the corresponding linear (red) and geodesic (black) interpolant trajectories in the latent space.
Figure 6: Top: decoded images along linear (bottom row) and geodesic (top row) interpolants for (a) VAE and (b) AAE. Bottom: the corresponding velocity curves of the interpolants.
Figure 7: Boxplots of the interpolant trajectory lengths for linear and geodesic interpolation with both VAE and AAE on (a) MNIST and (b) Fashion-MNIST.

6.2 Image interpolation

To further demonstrate our model's effectiveness in image interpolation, we choose the MNIST [15] and Fashion-MNIST [23] datasets. Both consist of a training set of 60,000 examples and a test set of 10,000 examples, each associated with a label from 10 classes. We do not employ our manifold reconstruction method here because the concept of distance-based nearest neighbors is no longer meaningful when the dimension is sufficiently high [1, 6]. Shao et al. [20] argue that the generated high-dimensional manifold has some curvature, but it is close to zero. Therefore, we directly employ a variational autoencoder (VAE) [14] and an adversarial autoencoder (AAE) [16] to obtain latent embeddings. For each dataset, we randomly select two images as the start point and endpoint on the data manifold and employ different interpolation methods to interpolate between them. For the MNIST dataset, Fig. 5 shows the generated geodesic and linear interpolant trajectories of an AAE whose imposed latent distribution is a 2-D donut. We observe that our approach finds a trajectory in the latent space that is longer than a simple linear interpolation, but the intermediate points look more realistic and provide a semantically meaningful morphing between the endpoints because the trajectory does not cross regions without data. Our interpolation method thus ensures that the generated curves follow the data manifold. From the fourth column in Fig. 5, we can see that although the geodesic and linear trajectories are both on the data manifold, the interpolation results can still be very different. We provide more results on the MNIST dataset in the supplementary materials to verify the invariance of geodesic interpolation across different autoencoders.

For the Fashion-MNIST dataset, we use the same methods to interpolate between two selected images and show the results in Fig. 6. We use the velocity defined in Section 5.2 to evaluate the smoothness of the different interpolation methods. Our geodesic interpolation always yields recognizable images with a smoother speed profile, whereas linear interpolation may generate ghosting artifacts containing two garments when transitioning between classes. This demonstrates that our geodesic learning method can fulfill a uniform interpolation along a geodesic.

We further verify that our interpolation method produces the shortest paths characteristic of geodesics. In Fig. 7, we compare the interpolant trajectory lengths of different interpolation approaches for both VAE and AAE. We randomly choose 250 pairs of endpoints on the data manifold for each evaluation and approximate each trajectory's length by the summation of the velocities at $t_{i}$, as described in Section 5.3. Fig. 7 shows that our geodesic interpolation has a smaller average length and variance on both the MNIST and Fashion-MNIST datasets. This demonstrates that our interpolation method makes the interpolation curve possess more characteristics of a geodesic than linear interpolation does.

7 Conclusion

We explore the geometric structure of the data manifold by proposing a uniform interpolation constrained geodesic learning algorithm. We add prior geometric information to regularize our autoencoder so that it generates interpolants within the data manifold, and we propose a constant-speed loss and a minimizing geodesic loss to generate a geodesic on the underlying manifold given two endpoints. Different from existing methods, in which a geodesic is defined as the shortest path on a graph connecting data points, our model defines geodesics consistently with the definition of a geodesic in Riemannian geometry. Experiments demonstrate that our model can fulfill a uniform interpolation along minimizing geodesics both on 3-D curved manifolds and in high-dimensional image space.

References

  • [1] Charu C Aggarwal, Alexander Hinneburg, and Daniel A Keim. On the surprising behavior of distance metrics in high dimensional space. In International conference on database theory, pages 420–434. Springer, 2001.
  • [2] Eirikur Agustsson, Alexander Sage, Radu Timofte, and Luc Van Gool. Optimal transport maps for distribution preserving operations on latent spaces of generative models. In Proceedings of the 7th International Conference on Learning Representations (ICLR), 2019.
  • [3] Georgios Arvanitidis, Lars Kai Hansen, and Søren Hauberg. Latent Space Oddity: On the Curvature of Deep Generative Models. In Proceedings of the 6th International Conference on Learning Representations (ICLR), 2018.
  • [4] Georgios Arvanitidis, Søren Hauberg, Philipp Hennig, and Michael Schober. Fast and Robust Shortest Paths on Manifolds Learned from Data. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019.
  • [5] David Berthelot, Colin Raffel, Aurko Roy, and Ian Goodfellow. Understanding and improving interpolation in autoencoders via an adversarial regularizer. In Proceedings of the 7th International Conference on Learning Representations (ICLR), 2019.
  • [6] Kevin Beyer, Jonathan Goldstein, Raghu Ramakrishnan, and Uri Shaft. When is “nearest neighbor” meaningful? In International conference on database theory, pages 217–235. Springer, 1999.
  • [7] Christopher M Bishop, Markus Svensén, and Christopher KI Williams. Magnification factors for the SOM and GTM algorithms. In Proceedings 1997 Workshop on Self-Organizing Maps, 1997.
  • [8] Manfredo Perdigao do Carmo. Riemannian geometry. Birkhäuser, 1992.
  • [9] Nutan Chen, Francesco Ferroni, Alexej Klushyn, Alexandros Paraschos, Justin Bayer, and Patrick van der Smagt. Fast approximate geodesics for deep generative models. In International Conference on Artificial Neural Networks, 2019.
  • [10] Nutan Chen, Alexej Klushyn, Richard Kurle, Xueyan Jiang, Justin Bayer, and Patrick van der Smagt. Metrics for Deep Generative Models. In Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 2018.
  • [11] Weihuan Chen and Xingxiao Li. Introduction to Riemann Geometry: Volume I. Peking University Press, 2004.
  • [12] Ying-Cong Chen, Xiaogang Xu, Zhuotao Tian, and Jiaya Jia. Homomorphic latent space interpolation for unpaired image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2408–2416, 2019.
  • [13] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
  • [14] Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2014.
  • [15] Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
  • [16] Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
  • [17] Gautam Pai, Ronen Talmon, Alex Bronstein, and Ron Kimmel. DIMAL: Deep Isometric Manifold Learning Using Sparse Geodesic Sampling. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), 2019.
  • [18] Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. science, 290(5500):2323–2326, 2000.
  • [19] Tim Sainburg, Marvin Thielk, Brad Theilman, Benjamin Migliori, and Timothy Gentner. Generative adversarial interpolative autoencoding: adversarial training on latent space interpolations encourage convex latent distributions. arXiv preprint arXiv:1807.06650, 2018.
  • [20] H. Shao, A. Kumar, and P. T. Fletcher. The riemannian geometry of deep generative models. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 428–4288, 2018.
  • [21] Joshua B Tenenbaum, Vin De Silva, and John C Langford. A global geometric framework for nonlinear dimensionality reduction. science, 290(5500):2319–2323, 2000.
  • [22] Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. Wasserstein Auto-Encoders. In Proceedings of the 6th International Conference on Learning Representations (ICLR), 2018.
  • [23] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
  • [24] Tao Yang, Georgios Arvanitidis, Dongmei Fu, Xiaogang Li, and Søren Hauberg. Geodesic Clustering in Deep Generative Models. arXiv:1809.04747 [cs, stat], September 2018.
  • [25] Zhenyue Zhang and Hongyuan Zha. Nonlinear dimension reduction via local tangent space alignment. In International Conference on Intelligent Data Engineering and Automated Learning, pages 477–481. Springer, 2003.
Supplementary Materials: Uniform Interpolation Constrained Geodesic Learning on Data Manifold

Appendix A Theoretical part

Lemma 1.

If $\gamma:I\to M$ is a geodesic on a Riemannian manifold $M$, then the length of the tangent vector $\frac{d\gamma}{dt}$ is constant.

Proof.

Suppose $D$ is the Riemannian connection of $M$; then we have

ddtdγdt,dγdt=2Ddtdγdt,dγdt=0,\frac{d}{dt}\Big{\langle}\frac{d\gamma}{dt},\frac{d\gamma}{dt}\Big{\rangle}=2\Big{\langle}\frac{D}{dt}\frac{d\gamma}{dt},\frac{d\gamma}{dt}\Big{\rangle}=0, (8)

Thus,

|γ(t)|=C(constant)|\gamma^{\prime}(t)|=C~{}(constant) (9)

Proof of Theorem 5.2.1.

Proof.

Suppose the Riemannian metric on $\mathcal{X}$ is induced from $\mathbb{R}^{N}$ by the identity mapping $i$. $[x_{1},x_{2},\cdots,x_{N}]$ is the Cartesian coordinate system of $\mathbb{R}^{N}$ and $[x_{1}(t),x_{2}(t),\cdots,x_{N}(t)]$ is the Cartesian coordinate of $\gamma(t)$. Based on the properties of $di$, we obtain the corresponding tangent vector of $\gamma^{\prime}(t)$ in $T_{\gamma(t)}\mathbb{R}^{N}$:

diγ(t)(γ(t))=dxi(t)dtxi,di_{\gamma(t)}\big{(}\gamma^{\prime}(t)\big{)}=\frac{dx_{i}(t)}{dt}\frac{\partial}{\partial x^{i}},\quad (10)

According to the canonical metric on N\mathbb{R}^{N}, we have

|diγ(t)(γ(t))|=dxi(t)dtxi,dxi(t)dtxi=i=1N(dxi(t)dt)2.\Big{|}di_{\gamma(t)}\big{(}\gamma^{\prime}(t)\big{)}\Big{|}=\sqrt{\Big{\langle}\frac{dx_{i}(t)}{dt}\frac{\partial}{\partial x^{i}},\frac{dx_{i}(t)}{dt}\frac{\partial}{\partial x^{i}}\Big{\rangle}}=\sqrt{\sum_{i=1}^{N}\Big{(}\frac{dx_{i}(t)}{dt}\Big{)}^{2}}. (11)

Because $\gamma(t)$ is a geodesic on $\mathcal{X}$, we know from Lemma 1 that $|\gamma^{\prime}(t)|$ is a constant. Considering that the identity mapping $i$ is an isometric immersion, we have

|diγ(t)(γ(t))|=|γ(t)|.\Big{|}di_{\gamma(t)}\big{(}\gamma^{\prime}(t)\big{)}\Big{|}=|\gamma^{\prime}(t)|. (12)

Thus, $\sqrt{\sum_{i}\big(\frac{dx_{i}(t)}{dt}\big)^{2}}$ is a constant for all $t\in I$.

Proof of Theorem 5.3.1.

Proof.

Suppose the Riemannian metric on $\mathcal{X}$ is induced from $\mathbb{R}^{N}$ by the identity mapping $i$, and let $[x_{1},x_{2},\cdots,x_{N}]$ be the Cartesian coordinate system of $\mathbb{R}^{N}$. From the definition of $h$, we have $h(z^{1},z^{2},\cdots,z^{m})=(x_{1},x_{2},\cdots,x_{N})$. Let $\frac{\partial h^{j}}{\partial z^{i}}$ denote the $j$-th component of $\frac{\partial h}{\partial z^{i}}$. According to the properties of $di$, the tangent vector on $T_{\gamma(t)}\mathbb{R}^{N}$ corresponding to $\frac{\partial}{\partial z^{i}}$ is:

diγ(t)(zi)=hjzixj|γ(t),di_{\gamma(t)}\Big{(}\frac{\partial}{\partial z^{i}}\Big{)}=\frac{\partial h^{j}}{\partial z^{i}}\frac{\partial}{\partial x^{j}}\Big{|}_{\gamma(t)}, (13)

$\{di_{\gamma(t)}\big(\frac{\partial}{\partial z^{i}}\big),1\leqslant i\leqslant m\}$ spans the tangent space $T_{\gamma(t)}i(\mathcal{X})$ at $\gamma(t)$. Suppose $D$ is the Riemannian connection on $\mathcal{X}$. According to the definition of the covariant derivative along a curve (refer to Chapter 2, Eq. (3.1), in [11]), we have

dD(diγ(t)(γ(t)))dt=Ddiγ(t)(γ(t))diγ(t)(γ(t))=(d(diγ(t)(γ(t)))dt).\frac{dD\big{(}di_{\gamma(t)}\big{(}\gamma^{\prime}(t)\big{)}\big{)}}{dt}=D_{di_{\gamma(t)}(\gamma^{\prime}(t))}di_{\gamma(t)}(\gamma^{\prime}(t))=\Big{(}\frac{d\big{(}di_{\gamma(t)}(\gamma^{\prime}(t))\big{)}}{dt}\Big{)}^{\top}. (14)

The first equality holds by Remark 3.2 of Chapter 2 in [11]; $\top$ denotes the orthogonal projection from $\mathbb{R}^{N}$ onto the tangent space of $i(\mathcal{X})$. Because the Riemannian connection is invariant under an isometry (see Theorem 4.6 of Chapter 2 in [11]), we have

diγ(t)(Dγ(t)γ(t))=Ddiγ(t)(γ(t))diγ(t)(γ(t))=(d(diγ(t)(γ(t)))dt).di_{\gamma(t)}(D_{\gamma^{\prime}(t)}\gamma^{\prime}(t))=D_{di_{\gamma(t)}(\gamma^{\prime}(t))}di_{\gamma(t)}(\gamma^{\prime}(t))=\Big{(}\frac{d\big{(}di_{\gamma(t)}(\gamma^{\prime}(t))\big{)}}{dt}\Big{)}^{\top}. (15)

Combined with Eq. (10), we can deduce

(d(diγ(t)(γ(t)))dt)=(d2xi(t)dt2xi).\Big{(}\frac{d\big{(}di_{\gamma(t)}(\gamma^{\prime}(t))\big{)}}{dt}\Big{)}^{\top}=\Big{(}\frac{d^{2}x_{i}(t)}{dt^{2}}\frac{\partial}{\partial x^{i}}\Big{)}^{\top}. (16)

Therefore, based on the definition of a geodesic, $\gamma:I\to\mathcal{X}$ is a geodesic on $\mathcal{X}$ if and only if:

(d2xi(t)dt2xi)=0.\Big{(}\frac{d^{2}x_{i}(t)}{dt^{2}}\frac{\partial}{\partial x^{i}}\Big{)}^{\top}=0. (17)

This means that $\frac{d^{2}x_{i}(t)}{dt^{2}}\frac{\partial}{\partial x^{i}}$ is orthogonal to the tangent space $T_{\gamma(t)}i(\mathcal{X})$. That is,

d2xi(t)dt2xi,diγ(t)(zi)=0,1im\Big{\langle}\frac{d^{2}x_{i}(t)}{dt^{2}}\frac{\partial}{\partial x^{i}},di_{\gamma(t)}\Big{(}\frac{\partial}{\partial z^{i}}\Big{)}\Big{\rangle}=0,\quad 1\leqslant i\leqslant m (18)

Eq. (18) is equivalent to

j=1Nd2xj(t)dt2hjzi=0,1im\sum_{j=1}^{N}\frac{d^{2}x_{j}(t)}{dt^{2}}\cdot\frac{\partial h^{j}}{\partial z^{i}}=0,\quad 1\leqslant i\leqslant m (19)

We write it in the form of matrix multiplication,

[x1′′(t),x2′′(t),,xN′′(t)][hz1|γ(t),hz2|γ(t),,hzm|γ(t)]=𝟎.[x_{1}^{\prime\prime}(t),x_{2}^{\prime\prime}(t),\cdots,x_{N}^{\prime\prime}(t)]\cdot\Big{[}\frac{\partial h}{\partial z^{1}}\Big{|}_{\gamma(t)},\frac{\partial h}{\partial z^{2}}\Big{|}_{\gamma(t)},\cdots,\frac{\partial h}{\partial z^{m}}\Big{|}_{\gamma(t)}\Big{]}=\bm{0}. (20)

Theorem 1.

If a piecewise differentiable curve $\gamma:[a,b]\to M$ [8], with parameter proportional to arc length, has length less than or equal to the length of any other piecewise differentiable curve joining $\gamma(a)$ to $\gamma(b)$, then $\gamma$ is a geodesic.

The proof of Theorem 1 can be found in Corollary 3.9 of Chapter 3 in [8].

Appendix B Experimental part

B.1 Model Architecture

For the 3-dimensional datasets, we trained our interpolation network for 3,000 iterations. Table 2 describes the autoencoder architecture.

For the MNIST and Fashion-MNIST datasets, we trained our interpolation network for 30,000 iterations. Table 3 and Table 4 describe the network architectures of the VAE and the AAE, respectively.

Table 2: The network architecture trained for 3-D datasets
Encoder
Operation | Input | Output
FC | 3 | 64
BN, PReLU, FC | 64 | 64
BN, PReLU, FC | 64 | 64
BN, PReLU, FC | 64 | 64
BN, PReLU, FC | 64 | 3
FC | 3 | 2
Decoder
BN, FC | 2 | 64
BN, ReLU, FC | 64 | 64
BN, ReLU, FC | 64 | 64
BN, ReLU, FC | 64 | 64
BN, ReLU, FC | 64 | 64
BN, ReLU, FC | 64 | 3
Table 3: The network architecture of VAE trained for MNIST and Fashion-MNIST datasets
Encoder (μ branch)
Operation | Kernel | Strides | Feature maps
Conv2D, LReLU | 4 | 2 | 64
Conv2D, BN, LReLU | 4 | 2 | 128
Conv2D, BN, LReLU | 4 | 2 | 256
Conv2D, BN, LReLU | 4 | 2 | 512
Reshape, Linear | – | – | 64
BN, ReLU | – | – | 64
Reshape, Linear | – | – | 2
Encoder (σ branch)
Conv2D, LReLU | 4 | 2 | 64
Conv2D, BN, LReLU | 4 | 2 | 128
Conv2D, BN, LReLU | 4 | 2 | 256
Conv2D, BN, LReLU | 4 | 2 | 512
Reshape, Linear | – | – | 64
BN, ReLU | – | – | 64
Reshape, Linear | – | – | 2
Decoder
LReLU, ConvTranspose2D, BN | 4 | 2 | 512
ReLU, ConvTranspose2D, BN | 4 | 2 | 256
ReLU, ConvTranspose2D, BN | 4 | 2 | 128
ReLU, ConvTranspose2D, BN | 4 | 2 | 64
ReLU, ConvTranspose2D, Tanh | 4 | 2 | 1
Table 4: The network architecture of AAE trained for MNIST and Fashion-MNIST datasets
Encoder
Operation | Kernel | Strides | Feature maps
Conv2D, LReLU | 4 | 2 | 64
Conv2D, BN, LReLU | 4 | 2 | 128
Conv2D, BN, LReLU | 4 | 2 | 256
Conv2D, BN, LReLU | 4 | 2 | 512
Conv2D, BN | 4 | 2 | 2
Decoder
LReLU, ConvTranspose2D, BN | 4 | 2 | 512
ReLU, ConvTranspose2D, BN | 4 | 2 | 256
ReLU, ConvTranspose2D, BN | 4 | 2 | 128
ReLU, ConvTranspose2D, BN | 4 | 2 | 64
ReLU, ConvTranspose2D, Tanh | 4 | 2 | 1

B.2 More experimental results on MNIST dataset

In this section, we provide more results for image interpolation on the MNIST dataset to verify the invariance of geodesic interpolation. Fig. 8 illustrates the visual interpolation results when the VAE and the AAE are used as the autoencoder, respectively. It is worth mentioning that geodesics do not depend on the distribution of the latent space, because they only depend on the geometric structure of the data manifold. Thus, if the interpolations follow geodesics, we should theoretically obtain the same interpolation results for different autoencoders. From Fig. 8, we observe that for linear interpolation, different autoencoders produce very different results. For geodesic interpolation, although there is a slight difference between the results of different autoencoders, the general transition tendency is consistent. Specifically, when we transition from digit '2' to '4' as shown in Fig. 8, the linear interpolation with the AAE passes from '2' to '1' and then to '4', which differs from the VAE, which transitions from '2' to '4' directly. In contrast, geodesic interpolation with both the VAE and the AAE transitions from '2' to '4' directly and exhibits the semantic transfer at similar sampling times. Other digit transitions behave similarly. These experimental results support the invariance of geodesic interpolation across different autoencoders.

Figure 8: Visual interpolation results with different interpolation methods for (a) VAE and (b) AAE: interpolated images along linear (bottom) and geodesic (top) interpolants.