
Efficient Riemannian Optimization on the Stiefel Manifold via the Cayley Transform

Jun Li, Li Fuxin, Sinisa Todorovic
School of EECS
Oregon State University
Corvallis, OR 97331
{liju2,lif,sinisa}@oregonstate.edu
Abstract

Strictly enforcing orthonormality constraints on parameter matrices has been shown advantageous in deep learning. This amounts to Riemannian optimization on the Stiefel manifold, which, however, is computationally expensive. To address this challenge, we present two main contributions: (1) A new efficient retraction map based on an iterative Cayley transform for optimization updates, and (2) An implicit vector transport mechanism based on the combination of a projection of the momentum and the Cayley transform on the Stiefel manifold. We specify two new optimization algorithms: Cayley SGD with momentum, and Cayley ADAM on the Stiefel manifold. Convergence of Cayley SGD is theoretically analyzed. Our experiments for CNN training demonstrate that both algorithms: (a) Use less running time per iteration relative to existing approaches that enforce orthonormality of CNN parameters; and (b) Achieve faster convergence rates than the baseline SGD and ADAM algorithms without compromising the performance of the CNN. Cayley SGD and Cayley ADAM are also shown to reduce the training time for optimizing the unitary transition matrices in RNNs.

1 Introduction

Orthonormality has recently gained much interest, as enforcing orthonormality on the parameter matrices of deep neural networks brings significant advantages. For CNNs, Bansal et al. (2018) show that orthonormality constraints improve accuracy and yield a faster empirical convergence rate, Huang et al. (2018a) show that orthonormality stabilizes the distribution of neural activations during training, and Cogswell et al. (2015) show that orthonormality reduces overfitting and improves generalization. For RNNs, Arjovsky et al. (2016) and Zhou et al. (2006) show that an orthogonal hidden-to-hidden matrix alleviates the vanishing- and exploding-gradient problems.

Riemannian optimization on the Stiefel manifold, which represents the set of all orthonormal matrices of the same size, is an elegant framework for optimization under orthonormality constraints. But its high computational cost limits its applications, including those in deep learning. Recent efficient approaches incorporate orthogonality in deep learning only for square parameter matrices (Arjovsky et al., 2016; Dorobantu et al., 2016), or indirectly through regularization (Bansal et al., 2018), which, however, does not guarantee exact orthonormality of the parameter matrices.

To address the aforementioned limitations, our first main contribution is an efficient estimation of the retraction mapping based on the Cayley transform for updating large non-square matrices of parameters on the Stiefel manifold. We specify an efficient iterative algorithm for estimating the Cayley transform that consists of only a few matrix multiplications, while the closed-form Cayley transform requires costly matrix inverse operations (Vorontsov et al., 2017; Wisdom et al., 2016). The efficiency of the retraction mapping is both theoretically proved and empirically verified in the paper.

Our second main contribution is aimed at improving convergence rates of training by taking into account the momentum in our optimization on the Stiefel manifold. We derive a new approach to move the tangent vector between tangent spaces of the manifold, instead of using the standard parallel transport. Specifically, we regard the Stiefel manifold as a submanifold of a Euclidean space. This allows for representing the vector transport (Absil et al., 2009) as a projection onto the tangent space. As we show, since the Cayley transform implicitly projects gradients onto the tangent space, the momentum updates result in an implicit vector transport. Thus, we first compute a linear combination of the momentum and the gradient in the Euclidean space, and then update the network parameters using the Cayley transform, without explicitly performing the vector transport.

We apply the above two contributions to generalize the standard SGD with momentum and ADAM (Kingma & Ba, 2014) to the Stiefel manifold, resulting in our two new optimization algorithms called Cayley SGD with momentum and Cayley ADAM. A theoretical analysis of the convergence of Cayley SGD is presented. Similar analysis for Cayley ADAM is omitted, since it is very similar to the analysis presented in (Becigneul & Ganea, 2019).

Cayley SGD and Cayley ADAM are empirically evaluated on image classification using VGG and Wide ResNet on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky & Hinton, 2009). The experiments show that Cayley SGD and Cayley ADAM achieve better classification performance and faster convergence rates than the baseline SGD with momentum and ADAM. While the baselines take less time per epoch, since they do not enforce the orthonormality constraint, they take more epochs to converge than Cayley SGD and Cayley ADAM. In comparison with existing optimization methods that also account for orthonormality – e.g., Polar decomposition, QR decomposition, or the closed-form Cayley transform – our Cayley SGD and Cayley ADAM run much faster and yield equally good or better performance in image classification.

Finally, we apply the aforementioned two contributions to the training of unitary RNNs. Wisdom et al. (2016) propose the full-capacity unitary RNN, which updates the hidden-to-hidden transition matrix with the closed-form Cayley transform. In contrast, for our RNN training, we use the more efficient Cayley SGD with momentum and Cayley ADAM. The results show that our RNN training takes less running time per iteration without compromising performance.

2 Related Work

There is a host of literature on using orthonormality constraints in neural-network training. This section reviews closely related work, which can be broadly divided into two groups: orthonormality regularization and Riemannian optimization.

Regularization approaches can be divided into hard ones, which strictly enforce orthonormality, and soft ones. Hard regularizations are computationally expensive. For example, Huang et al. (2018b) extend Batch Normalization (Ioffe & Szegedy, 2015) with ZCA whitening, and hence require a costly eigenvalue decomposition. Huang et al. (2018a) derive a closed-form solution that also requires eigenvalue decomposition. Bansal et al. (2018); Cisse et al. (2017); Xiong et al. (2016) introduce mutual coherence regularization and spectral restricted isometry regularization; however, these regularizations are soft in that they cannot guarantee orthonormality.

Riemannian optimization guarantees that the solution respects orthonormality constraints. For example, Cho & Lee (2017) replace the Batch Normalization layers in a CNN with Riemannian optimization on the Grassmann manifold $G(n,1)$, where the parameters are normalized but not orthonormalized. Also, Vorontsov et al. (2017); Wisdom et al. (2016); Lezcano-Casado & Martínez-Rubio (2019); Helfrich et al. (2017) perform Riemannian optimization on the group of unitary matrices to stabilize RNN training, but their techniques cannot be applied to non-square parameter matrices. Becigneul & Ganea (2019) introduce a more general Riemannian optimization, but do not show how to perform this optimization efficiently on the Stiefel manifold.

The key challenge of Riemannian optimization is that the exponential map, the standard step for estimating the next update point, is computationally expensive on the Stiefel manifold. Some methods use an efficient pseudo-geodesic retraction instead of the exponential map. For example, Absil & Malick (2012); Gawlik & Leok (2018); Manton (2002) use projection-based methods that map the gradient back to the Stiefel manifold but rely on a computationally expensive SVD. Other approaches are based on the closed-form Cayley transform (Fiori et al., 2012; Jiang & Dai, 2015; Nishimori & Akaho, 2005; Zhu, 2017), but require a costly matrix inversion. Also, Wen & Yin (2013) reduce the cost of the Cayley transform under the restrictive assumption that the matrix size $n\times p$ satisfies $2p\ll n$; their algorithm is not efficient when $2p\geq n$.

3 Preliminary

This section briefly reviews some well-known properties of the Riemannian and Stiefel manifolds. The interested reader is referred to Boothby (1986); Edelman et al. (1998) and references therein.

3.1 Riemannian Manifold

Definition 1.

Riemannian Manifold: A Riemannian manifold $(\mathcal{M},\rho)$ is a smooth manifold $\mathcal{M}$ equipped with a Riemannian metric $\rho$, defined as the inner product on the tangent space $T_{x}\mathcal{M}$ at each point $x$, $\rho_{x}(\cdot,\cdot):T_{x}\mathcal{M}\times T_{x}\mathcal{M}\to\mathbb{R}$.

Definition 2.

Geodesic, Exponential map and Retraction map: A geodesic is a locally shortest curve on a manifold. An exponential map, $\text{Exp}_{x}(\cdot)$, maps a tangent vector $v\in T_{x}\mathcal{M}$ to the manifold $\mathcal{M}$. $\text{Exp}_{x}(tv)$ represents a geodesic $\gamma(t):t\in[0,1]$ on the manifold, s.t. $\gamma(0)=x,\ \dot{\gamma}(0)=v$. A retraction map is defined as a smooth mapping $R_{x}:T_{x}\mathcal{M}\to\mathcal{M}$ iff $R_{x}(0)=x$ and $DR_{x}(0)=\text{id}_{T_{x}\mathcal{M}}$, where $DR_{x}$ denotes the derivative of $R_{x}$ and $\text{id}_{T_{x}\mathcal{M}}$ denotes the identity map on $T_{x}\mathcal{M}$. It is easy to show that any exponential map is a retraction map. As computing an exponential map is usually expensive, a retraction map is often used as an efficient alternative.

Definition 3.

Parallel transport and vector transport: Parallel transport is a method to translate a vector along a geodesic on a manifold while preserving its norm. A vector transport $\tau$ is a smooth map defined on a retraction $R$ of a manifold $\mathcal{M}$, $\tau:T_{x}\mathcal{M}\times T_{x}\mathcal{M}\to T_{R(\eta_{x})}\mathcal{M},\ (\eta_{x},\xi_{x})\mapsto\tau_{\eta_{x}}(\xi_{x})$. $\tau$ satisfies the following properties: (1) Underlying retraction: $\tau_{\eta_{x}}(\xi_{x})\in T_{R(\eta_{x})}\mathcal{M}$; (2) Consistency: $\tau_{0_{x}}\xi_{x}=\xi_{x},\ \forall\xi_{x}\in T_{x}\mathcal{M}$; (3) Linearity: $\tau_{\eta_{x}}(a\xi_{x}+b\zeta_{x})=a\tau_{\eta_{x}}(\xi_{x})+b\tau_{\eta_{x}}(\zeta_{x})$. Usually, a vector transport is a computationally efficient alternative to parallel transport.

3.2 Stiefel Manifold

The Stiefel manifold $\text{St}(n,p)$, $n\geq p$, is a Riemannian manifold composed of all $n\times p$ orthonormal matrices $\{X\in\mathbb{R}^{n\times p}:X^{\top}X=I\}$. In the rest of the paper, we use the notation $\mathcal{M}=\text{St}(n,p)$ to denote the Stiefel manifold. We regard $\mathcal{M}$ as an embedded submanifold of a Euclidean space. Hence, the Riemannian metric $\rho$ is the Euclidean metric $\rho_{X}(Z_{1},Z_{2})=\mathrm{tr}(Z_{1}^{\top}Z_{2})$, where $Z_{1},Z_{2}$ are tangent vectors in $T_{X}\mathcal{M}$. The tangent space of $\mathcal{M}$ at $X$ is defined as

T_{X}\mathcal{M}=\{Z:Z^{\top}X+X^{\top}Z=0\}   (1)

The projection of a matrix $Z\in\mathbb{R}^{n\times p}$ onto $T_{X}\mathcal{M}$ can be computed as

\pi_{T_{X}}(Z)=WX,\quad\text{where }W=\hat{W}-\hat{W}^{\top},\quad\hat{W}=ZX^{\top}-\frac{1}{2}X(X^{\top}ZX^{\top}).   (2)
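For concreteness, the projection in Eq.(2) amounts to a handful of matrix products. Below is a minimal PyTorch sketch (the function name project_tangent is ours, not part of the paper):

```python
import torch

def project_tangent(Z, X):
    """Project an arbitrary n x p matrix Z onto the tangent space T_X M of the
    Stiefel manifold at X, following Eq.(2):
    pi_{T_X}(Z) = W X, with W = W_hat - W_hat^T and
    W_hat = Z X^T - 0.5 * X (X^T Z X^T)."""
    W_hat = Z @ X.t() - 0.5 * X @ (X.t() @ Z @ X.t())
    W = W_hat - W_hat.t()
    return W @ X
```

As a quick sanity check, the output $Z_t$ of project_tangent$(Z,X)$ satisfies the tangent-space condition of Eq.(1), i.e., $Z_t^{\top}X+X^{\top}Z_t=0$ up to numerical error, since $W$ is skew-symmetric.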

Given the derivative $\nabla f(X)$ of the objective function at $X$ in the Euclidean space, we can compute the Riemannian gradient $\nabla_{\mathcal{M}}f(X)$ on the Stiefel manifold as the projection onto $T_{X}\mathcal{M}$, i.e., $\pi_{T_{X}}(\nabla f(X))$ given by Eq.(2). It follows that optimization of $f$ on the Riemannian manifold proceeds as follows. First, compute $\nabla_{\mathcal{M}}f(X_{t})$ in $T_{X_{t}}\mathcal{M}$. Second, transport the momentum $M_{t}$ to the current tangent space $T_{X_{t}}\mathcal{M}$ and combine it linearly with the current Riemannian gradient $\nabla_{\mathcal{M}}f(X_{t})$ to obtain the new momentum $M_{t+1}$. Third, update the parameter to $X_{t+1}$ along a curve on the manifold whose initial direction is $M_{t+1}$.

While the exponential map and parallel transport can be used to update the parameters and momenta in Riemannian optimization, they are computationally infeasible on the Stiefel manifold. In the following sections, we specify our computationally efficient alternatives.

3.2.1 Parameter Updates by Iterative Cayley Transform

The Cayley transform computes a parametric curve on the Stiefel manifold using a skew-symmetric matrix (Nishimori & Akaho, 2005). The closed form of the Cayley transform is given by:

Y(\alpha)=\Big(I-\frac{\alpha}{2}W\Big)^{-1}\Big(I+\frac{\alpha}{2}W\Big)X,   (3)

where $W$ is a skew-symmetric matrix, i.e., $W^{\top}=-W$, $X$ is on the Stiefel manifold, and $\alpha\geq 0$ is a parameter that represents the length on the curve. It is straightforward to verify that

Y(0)=X\quad\text{and}\quad Y^{\prime}(0)=WX.   (4)

From Definition 2 and the definition of the tangent space of the Stiefel manifold in Eq.(1), the Cayley transform is a valid retraction map on the Stiefel manifold. By choosing $W=\hat{W}-\hat{W}^{\top}$ with $\hat{W}=\nabla f(X)X^{\top}-\frac{1}{2}X(X^{\top}\nabla f(X)X^{\top})$, as in Eq.(2), the Cayley transform implicitly projects the gradient onto the tangent space and uses it as the initial direction of the curve. Therefore, the Cayley transform can serve as an update rule for parameter matrices on the Stiefel manifold.
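For completeness, the two properties in Eq.(4) follow directly by differentiating the closed form in Eq.(3); a short worked derivation (not part of the original text):

```latex
% Verification of Eq.(4) from Eq.(3).
\begin{align*}
Y(0)  &= (I - 0)^{-1}(I + 0)\,X = X,\\
Y'(\alpha) &= \Big(I-\tfrac{\alpha}{2}W\Big)^{-1}\tfrac{W}{2}
              \Big(I-\tfrac{\alpha}{2}W\Big)^{-1}\Big(I+\tfrac{\alpha}{2}W\Big)X
              + \Big(I-\tfrac{\alpha}{2}W\Big)^{-1}\tfrac{W}{2}\,X,\\
Y'(0) &= \tfrac{W}{2}X + \tfrac{W}{2}X = WX,
\end{align*}
% where the second line uses d(A^{-1})/d\alpha = -A^{-1}(dA/d\alpha)A^{-1}
% with A(\alpha) = I - (\alpha/2)W, so that (A^{-1})' = A^{-1}(W/2)A^{-1}.
```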

However, the closed-form Cayley transform in Eq.(3) involves an expensive matrix inversion, which cannot be performed efficiently in large deep neural networks.

Our first contribution is an iterative estimation of the Cayley transform that uses only matrix multiplications, and is thus more efficient than the closed form in Eq.(3). We represent the Cayley transform with the following fixed-point iteration:

Y(\alpha)=X+\frac{\alpha}{2}W\left(X+Y(\alpha)\right),   (5)

which follows from Eq.(3) by left-multiplying both sides by $(I-\frac{\alpha}{2}W)$ and rearranging. The expression in Eq.(5) yields an efficient approximation of Eq.(3). In Sec. 5, we analyze its rate of convergence to the closed form in Eq.(3).
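As a concrete illustration, the fixed point of Eq.(5) can be reached with a few matrix multiplications. Below is a minimal PyTorch sketch (function names are ours); the closed form of Eq.(3) is included only for comparison:

```python
import torch

def cayley_closed_form(W, X, alpha):
    """Closed-form Cayley transform, Eq.(3); requires solving an n x n linear system."""
    I = torch.eye(W.shape[0], dtype=W.dtype, device=W.device)
    return torch.linalg.solve(I - 0.5 * alpha * W, (I + 0.5 * alpha * W) @ X)

def cayley_iterative(W, X, alpha, s=2):
    """Fixed-point iteration of Eq.(5): Y <- X + (alpha/2) W (X + Y), run s times."""
    Y = X + alpha * (W @ X)            # initialization Y^0 = X + alpha * W X, as in Alg. 1
    for _ in range(s):
        Y = X + 0.5 * alpha * W @ (X + Y)
    return Y
```

With a contraction-safe step size ($\alpha\|W\|/2<1$), the two functions agree up to the $o(\alpha^{2+s})$ error bound of Theorem 1.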

3.2.2 Momentum Updates by the Implicit Vector Transport

Our second contribution is an efficient way to perform momentum updates on the Stiefel manifold. We specify an implicit vector transport by combining the Cayley transform and the momentum update without explicitly computing the vector transport. As the Stiefel manifold can be viewed as a submanifold of the Euclidean space $\mathbb{R}^{n\times p}$, we have the natural inclusion of the tangent space $T_{X}\mathcal{M}\subset\mathbb{R}^{n\times p}$. Consequently, the vector transport on the Stiefel manifold is the projection onto the tangent space. Formally, for two tangent vectors $\xi_{X},\eta_{X}\in T_{X}\mathcal{M}$, the vector transport of $\xi_{X}$ along a retraction map $r(\eta_{X})$, denoted as $\tau_{\eta_{X}}(\xi_{X})$, can be computed as:

\tau_{\eta_{X}}(\xi_{X})=\pi_{T_{r(\eta_{X})}}(\xi_{X}).   (6)

We specify the retraction map $r(\cdot)$ as the Cayley transform $Y$ in Eq.(3). At optimization step $k$, we choose $\xi_{X}=\eta_{X}=M_{k}$ in Eq.(6), where $M_{k}$ is the momentum from step $k-1$. Then, the vector transport of $M_{k}$ is computed as $\tau_{M_{k}}(M_{k})=\pi_{T_{X_{k}}}(M_{k})$. As the projection onto a tangent space is a linear map, we have $\alpha\tau_{M_{k}}(M_{k})+\beta\nabla_{\mathcal{M}}f(X_{k})=\alpha\pi_{T_{X_{k}}}(M_{k})+\beta\pi_{T_{X_{k}}}(\nabla f(X_{k}))=\pi_{T_{X_{k}}}(\alpha M_{k}+\beta\nabla f(X_{k}))$. Thus we first compute a linear combination of the Euclidean gradient $\nabla f(X)$ and the momentum $M_{k}$, as in the Euclidean space, and then use the iterative Cayley transform to update the parameters, without explicitly estimating the vector transport, since the Cayley transform implicitly projects this vector onto the tangent space.
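The linearity argument above is easy to check numerically; a small sketch (names ours), assuming a random point on $\text{St}(n,p)$ obtained from a QR decomposition:

```python
import torch

def project_tangent(Z, X):
    # Projection of Eq.(2): pi_{T_X}(Z) = (W_hat - W_hat^T) X
    W_hat = Z @ X.t() - 0.5 * X @ (X.t() @ Z @ X.t())
    return (W_hat - W_hat.t()) @ X

n, p = 8, 3
X, _ = torch.linalg.qr(torch.randn(n, p))        # a point on St(n, p)
M, G = torch.randn(n, p), torch.randn(n, p)      # momentum and Euclidean gradient
a, b = 0.9, 1.0
lhs = project_tangent(a * M + b * G, X)                         # project the combination
rhs = a * project_tangent(M, X) + b * project_tangent(G, X)     # combine the projections
print(torch.allclose(lhs, rhs, atol=1e-5))                      # True up to numerical error
```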

4 Algorithms

This section specifies our Cayley SGD and Cayley ADAM algorithms. Both represent our efficient Riemannian optimization on the Stiefel manifold that consists of two main steps. As the Cayley transform implicitly projects gradient and momentum vectors onto the tangent space, we first linearly combine the momentum of the previous iteration with the stochastic gradient of the objective function $f$ at the current point $X$, denoted as $\mathcal{G}(X)$. Then, we use the iterative Cayley transform to estimate the next optimization point based on the updated momentum. This is used to generalize the conventional SGD with momentum and ADAM algorithms to our two new Riemannian optimizations on the Stiefel manifold, as described in Section 4.1 and Section 4.2.

4.1 Cayley SGD with Momentum

We generalize the heavy ball (HB) momentum update (Ghadimi et al., 2015; Zavriev & Kostyuk, 1993) in the $k$th optimization step to the Stiefel manifold. (Note that we use the index $i$ in superscripts to indicate the iterative steps in the estimation of the Cayley transform, and the index $k$ in subscripts to indicate optimization steps.) Theoretically, it can be represented as:

M_{k+1}=\beta\,\pi_{\mathcal{T}_{X_{k}}}(M_{k})-\mathcal{G}_{\mathcal{M}}(X_{k}),\qquad X_{k+1}=Y(\alpha,X_{k},W_{k})   (7)

where $Y(\alpha,X_{k},W_{k})$ is a curve on the Stiefel manifold that starts at $X_{k}$ and has length $\alpha$, specified by the Cayley transform in Eq.(3). In practice, we perform the updates in Eq.(7) efficiently via the proposed iterative Cayley transform and implicit vector transport on the Stiefel manifold. Specifically, we first update the momentum as if it were in the Euclidean space. Then, we update the parameters by the iterative Cayley transform. Finally, we correct the momentum $M_{k+1}$ by projecting it onto $T_{X_{k}}\mathcal{M}$. The details of our Cayley SGD algorithm are given in Alg. 1.

Algorithm 1 Cayley SGD with Momentum
1: Input: learning rate $l$, momentum coefficient $\beta$, $\epsilon=10^{-8}$, $q=0.5$, $s=2$.
2: Initialize $X_{1}$ as an orthonormal matrix; $M_{1}=0$
3: for $k=1$ to $T$ do
4:     $M_{k+1}\leftarrow\beta M_{k}-\mathcal{G}(X_{k})$ \triangleright Update the momentum
5:     $\hat{W}_{k}\leftarrow M_{k+1}X_{k}^{\top}-\frac{1}{2}X_{k}(X_{k}^{\top}M_{k+1}X_{k}^{\top})$ \triangleright Compute the auxiliary matrix
6:     $W_{k}\leftarrow\hat{W}_{k}-\hat{W}_{k}^{\top}$
7:     $M_{k+1}\leftarrow W_{k}X_{k}$ \triangleright Project the momentum onto the tangent space
8:     $\alpha\leftarrow\min\{l,\,2q/(\|W_{k}\|+\epsilon)\}$ \triangleright Select adaptive learning rate for contraction mapping
9:     Initialize $Y^{0}\leftarrow X_{k}+\alpha M_{k+1}$ \triangleright Iterative estimation of the Cayley transform
10:     for $i=1$ to $s$ do
11:         $Y^{i}\leftarrow X_{k}+\frac{\alpha}{2}W_{k}(X_{k}+Y^{i-1})$
12:     Update $X_{k+1}\leftarrow Y^{s}$
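A minimal PyTorch sketch of one Cayley SGD update following Alg. 1 is given below (variable names are ours; the stochastic gradient G is assumed to be computed elsewhere, and the Frobenius norm is used for $\|W_{k}\|$):

```python
import torch

def cayley_sgd_step(X, M, G, lr, beta=0.9, q=0.5, s=2, eps=1e-8):
    """One Cayley SGD with momentum step (Alg. 1).
    X: current point on St(n, p); M: momentum; G: Euclidean stochastic gradient."""
    M = beta * M - G                                         # Euclidean momentum update
    W_hat = M @ X.t() - 0.5 * X @ (X.t() @ M @ X.t())        # auxiliary matrix
    W = W_hat - W_hat.t()                                    # skew-symmetric part
    M = W @ X                                                # project momentum onto T_X M
    alpha = min(lr, 2.0 * q / (torch.linalg.norm(W).item() + eps))  # contraction-safe step
    Y = X + alpha * M                                        # Y^0
    for _ in range(s):                                       # iterative Cayley transform
        Y = X + 0.5 * alpha * W @ (X + Y)
    return Y, M                                              # X_{k+1} and the new momentum
```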

4.2 ADAM on the Stiefel Manifold

Algorithm 2 Cayley ADAM
1: Input: learning rate $l$, momentum coefficients $\beta_{1}$ and $\beta_{2}$, $\epsilon=10^{-8}$, $q=0.5$, $s=2$.
2: Initialize $X_{1}$ as an orthonormal matrix; $M_{1}=0$, $v_{1}=1$
3: for $k=1$ to $T$ do
4:     $M_{k+1}\leftarrow\beta_{1}M_{k}+(1-\beta_{1})\mathcal{G}(X_{k})$ \triangleright Estimate biased momentum
5:     $v_{k+1}\leftarrow\beta_{2}v_{k}+(1-\beta_{2})\|\mathcal{G}(X_{k})\|^{2}$ \triangleright Update biased second raw moment estimate
6:     $\hat{v}_{k+1}\leftarrow v_{k+1}/(1-\beta_{2}^{k})$ \triangleright Correct the bias of the second raw moment
7:     $r\leftarrow(1-\beta_{1}^{k})\sqrt{\hat{v}_{k+1}+\epsilon}$ \triangleright Estimate the bias-corrected ratio
8:     $\hat{W}_{k}\leftarrow M_{k+1}X_{k}^{\top}-\frac{1}{2}X_{k}(X_{k}^{\top}M_{k+1}X_{k}^{\top})$ \triangleright Compute the auxiliary skew-symmetric matrix
9:     $W_{k}\leftarrow(\hat{W}_{k}-\hat{W}_{k}^{\top})/r$
10:     $M_{k+1}\leftarrow rW_{k}X_{k}$ \triangleright Project the momentum onto the tangent space
11:     $\alpha\leftarrow\min\{l,\,2q/(\|W_{k}\|+\epsilon)\}$ \triangleright Select adaptive learning rate for contraction mapping
12:     Initialize $Y^{0}\leftarrow X_{k}-\alpha M_{k+1}$ \triangleright Iterative estimation of the Cayley transform
13:     for $i=1$ to $s$ do
14:         $Y^{i}\leftarrow X_{k}-\frac{\alpha}{2}W_{k}(X_{k}+Y^{i-1})$
15:     Update $X_{k+1}\leftarrow Y^{s}$

ADAM is a recent first-order optimization method for stochastic objective functions. It estimates adaptive lower-order moments and uses an adaptive learning rate. The algorithm is designed to combine the advantages of AdaGrad, which performs well in sparse-gradient settings, and RMSProp, which performs well in non-stationary settings.

We generalize ADAM to the Stiefel manifold by making three modifications to vanilla ADAM. First, we replace the standard gradient and momentum with the corresponding ones on the Stiefel manifold, as described in Section 4.1. Second, we use a manifold-wise adaptive learning rate that assigns the same learning rate to all entries of a parameter matrix, as in (Absil et al., 2009). Third, we use the Cayley transform to update the parameters. Cayley ADAM is summarized in Alg. 2.
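A corresponding PyTorch sketch of one Cayley ADAM step, transcribing Alg. 2 as stated (variable names are ours; k is the 1-based step index, v is the scalar second-moment estimate, and the Frobenius norm is assumed for $\|\cdot\|$):

```python
import torch

def cayley_adam_step(X, M, v, G, k, lr, beta1=0.9, beta2=0.999, q=0.5, s=2, eps=1e-8):
    """One Cayley ADAM step (Alg. 2). X: point on St(n, p); M: momentum;
    v: second raw moment; G: Euclidean stochastic gradient; k: step index (>= 1)."""
    M = beta1 * M + (1.0 - beta1) * G                          # biased first moment
    v = beta2 * v + (1.0 - beta2) * torch.linalg.norm(G) ** 2  # biased second raw moment
    v_hat = v / (1.0 - beta2 ** k)
    r = (1.0 - beta1 ** k) * torch.sqrt(v_hat + eps)           # bias-corrected ratio
    W_hat = M @ X.t() - 0.5 * X @ (X.t() @ M @ X.t())
    W = (W_hat - W_hat.t()) / r                                # skew-symmetric, rescaled by r
    M = r * (W @ X)                                            # project momentum onto T_X M
    alpha = min(lr, 2.0 * q / (torch.linalg.norm(W).item() + eps))
    Y = X - alpha * M                                          # Y^0, as written in Alg. 2
    for _ in range(s):                                         # iterative Cayley transform
        Y = X - 0.5 * alpha * W @ (X + Y)
    return Y, M, v                                             # X_{k+1}, momentum, moment
```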

5 Convergence Analysis

In this section, we analyze convergence of the iterative Cayley transform and Cayley SGD with momentum. To facilitate our analysis, we make the following common assumption.

Assumption 1.

The gradient $\nabla f$ of the objective function $f$ is Lipschitz continuous:

\|\nabla f(X)-\nabla f(Y)\|\leq L\|X-Y\|,\quad\forall X,Y,\ \text{where }L>0\text{ is a constant}.   (8)

Lipschitz continuity is widely applicable to deep learning architectures. For models using ReLU, the derivative of ReLU is Lipschitz continuous, with an appropriate constant $L$ in Assumption 1, everywhere except in a small neighbourhood around 0 whose measure tends to 0. Such cases affect neither the theoretical analysis nor training in practice.

The above assumption allows proving the following contraction mapping theorem.

Theorem 1.

For $\alpha\in(0,\min\{1,\frac{2}{\|W\|}\})$, the iteration $Y^{i+1}=X+\frac{\alpha}{2}W\left(X+Y^{i}\right)$ is a contraction mapping and converges to the closed-form Cayley transform $Y(\alpha)$ given by Eq.(3). Specifically, at iteration $i$, $\|Y^{i}-Y(\alpha)\|=o(\alpha^{2+i})$.

Theorem 1 shows that the iterative Cayley transform converges. In particular, it converges faster than other approximations, such as the Newton iteration and the Neumann series, which achieve an error bound of $o(\alpha^{i})$ at the $i$th iteration. We further prove the following result on convergence:

Theorem 2.

Given an objective function $f(X)$ that satisfies Assumption 1, let Cayley SGD with momentum run for $t$ iterations with $\mathcal{G}(X_{k})$. For $\alpha=\min\{\frac{1-\beta}{L},\frac{A}{\sqrt{t+1}}\}$, where $A$ is a positive constant, we have: $\min_{k=0,\cdots,t}E[\|\nabla_{\mathcal{M}}f(X_{k})\|^{2}]=o(\frac{1}{\sqrt{t+1}})\to 0$, as $t\to\infty$.

The proofs of Theorem 1 and Theorem 2 are presented in the appendix.

6 Experiments

6.1 Orthonormality in CNNs

In CNNs, for a convolutional layer with kernel $\hat{K}\in\mathbb{R}^{c_{out}\times c_{in}\times h\times w}$, we first reshape $\hat{K}$ into a matrix $K$ of size $p\times n$, where $p=c_{out}$ and $n=c_{in}\times h\times w$. In most cases, we have $p\leq n$. Then, we constrain the matrix $K$ to the Stiefel manifold using Cayley SGD or Cayley ADAM, while the other parameters are optimized with SGD or ADAM.
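For reference, the reshaping step can be done directly on the standard PyTorch kernel layout $c_{out}\times c_{in}\times h\times w$; a small sketch with hypothetical layer sizes:

```python
import torch.nn as nn

conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, bias=False)
c_out, c_in, h, w = conv.weight.shape
K = conv.weight.reshape(c_out, c_in * h * w)   # K has size p x n, p = c_out, n = c_in*h*w
assert K.shape[0] <= K.shape[1]                # p <= n, so K can have orthonormal rows,
                                               # i.e., K^T lies on St(n, p)
```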

Datasets: We evaluate Cayley SGD and Cayley ADAM on image classification on the CIFAR10 and CIFAR100 datasets (Krizhevsky & Hinton, 2009). CIFAR10 and CIFAR100 each consist of 50,000 training images and 10,000 test images, with 10 and 100 mutually exclusive classes, respectively.

Models: We use two networks, VGG (Simonyan & Zisserman, 2014) and Wide ResNet (Zagoruyko & Komodakis, 2016), that obtain state-of-the-art performance on CIFAR10 and CIFAR100. For VGG, every convolutional layer is followed by a batch normalization layer and a ReLU. For Wide ResNet, we use basic blocks, where each of the two consecutive 3-by-3 convolutional layers is followed by batch normalization and a ReLU activation.

Training Strategies: We use different learning rates $l_{e}$ and $l_{st}$ for weights in the Euclidean space and on the Stiefel manifold, respectively. We set the weight decay to 0.0005, the momentum to 0.9, and the minibatch size to 128. The initial learning rates are $l_{e}=0.01$ and $l_{st}=0.2$ for Cayley SGD, and $l_{e}=0.01$ and $l_{st}=0.4$ for Cayley ADAM. During training, we reduce the learning rates by a factor of $0.2$ at epochs 60, 120, and 160. The total number of training epochs is 200. The data samples are normalized using the mean and variance of the training set, and augmented by randomly flipping training images.

Our baselines include SGD with momentum and ADAM. We follow the same training strategies as above, except that the initial learning rates are set to 0.1 and 0.001, respectively.

Table 1: Classification errors(%) on CIFAR10.
Method VGG-13 VGG-16 VGG-19 WRN 52-1 WRN 16-4 WRN 28-10
SGD 5.88 6.32 6.49 6.23 4.96 3.89
ADAM 6.43 6.61 6.92 6.77 5.32 3.86
Cayley SGD 5.90 5.77 5.85 6.35 5.15 3.66
Cayley ADAM 5.93 5.88 6.03 6.44 5.22 3.57
Table 2: Classification errors(%) on CIFAR100.
Method VGG-13 VGG-16 VGG-19 WRN 52-1 WRN 16-4 WRN 28-10
SGD 26.17 26.84 27.62 27.44 23.41 18.66
ADAM 26.58 27.10 27.88 27.89 24.45 18.45
Cayley SGD 24.86 25.48 25.68 27.64 23.71 18.26
Cayley ADAM 25.10 25.61 25.70 27.91 24.18 18.10

Performance: Table 1 and Table 2 show classification errors on CIFAR10 and CIFAR100, respectively, for the different optimization algorithms. As the tables show, the proposed two algorithms achieve competitive performance and, for certain deep architectures, the best performance. Specifically, WRN-28-10 trained with Cayley ADAM achieves the best error rates of $3.57\%$ on CIFAR10 and $18.10\%$ on CIFAR100. Fig. 1 compares training curves of our algorithms and the baselines in terms of epochs, and shows that both Cayley SGD and Cayley ADAM converge faster than the baselines. In particular, the training curves of the baselines tend to get stuck in a plateau before the learning rate drops, which is not the case with our algorithms. This might be because the baselines do not enforce orthonormality of the network parameters: with orthonormal weight vectors, the backpropagated updates of different filters generally do not interfere with one another, and thus training has a greater chance to explore new parameter regions. Fig. 2 compares the training loss curves in terms of wall-clock time. Our Cayley SGD and Cayley ADAM converge the fastest among the methods that also enforce orthonormality. Although the baseline SGD and ADAM converge faster at the beginning due to their cheaper iterations, our Cayley SGD and Cayley ADAM catch up with the baselines after about 12,000 seconds, which corresponds to the 120th epoch of SGD and ADAM and the 60th epoch of Cayley SGD and Cayley ADAM.

Figure 1: Training loss curves of different optimization algorithms for WRN-28-10 for epochs 40-200. (a) Results on CIFAR10. (b) Results on CIFAR100. Both figures show that our Cayley SGD and Cayley ADAM achieve the top two fastest convergence rates.

Figure 2: Training loss curves of different optimization algorithms for WRN-28-10 in terms of seconds. (a) Results on CIFAR10. (b) Results on CIFAR100.
Table 3: Error rate and training time per epoch comparison to baselines with WRN-28-10 on CIFAR10 and CIFAR100. All experiments are performed on one TITAN Xp GPU.
Method CIFAR10 error (%) CIFAR100 error (%) Training time (s)
Baselines SGD 3.89 18.66 102.5
ADAM 3.85 18.52 115.2
Soft orthonormality SO (Bansal et al., 2018) 3.76 18.56 297.3
DSO (Bansal et al., 2018) 3.86 18.21 311.0
SRIP (Bansal et al., 2018) 3.60 18.19 321.8
Hard orthonormality OMDSM (Huang et al., 2018a) 3.73 18.61 943.6
DBN (Huang et al., 2018b) 3.79 18.36 889.4
Polar (Absil et al., 2009) 3.75 18.50 976.5
QR (Absil et al., 2009) 3.75 18.65 469.3
Wen&Yin (Wen & Yin, 2013) 3.82 18.70 305.8
Cayley closed form w/o momentum 3.80 18.68 1071.5
Cayley SGD (Ours) 3.66 18.26 218.7
Cayley ADAM (Ours) 3.57 18.10 224.4

Comparison with State of the Art. We compare the proposed algorithms with two sets of state-of-the-art methods. The first set consists of soft orthonormality regularization approaches (Bansal et al., 2018). Specifically, for a weight matrix $K\in\mathbb{R}^{n\times p}$, SO penalizes $\|KK^{\top}-I\|^{2}_{F}$, DSO penalizes $(\|KK^{\top}-I\|^{2}_{F}+\|K^{\top}K-I\|^{2}_{F})$, and SRIP penalizes the spectral norm of $(KK^{\top}-I)$. The second set consists of the following hard orthonormality methods: Polar decomposition (Absil et al., 2009), QR decomposition (Absil et al., 2009), the closed-form Cayley transform, Wen&Yin (Wen & Yin, 2013), OMDSM (Huang et al., 2018a), and DBN (Huang et al., 2018b). Note that we do not include momentum in Polar decomposition and QR decomposition, as previous work does not specify the momentum. Also, we use the closed-form Cayley transform without momentum as an ablation of the momentum effect. All experiments are evaluated on the benchmark network WRN-28-10. Table 3 shows that our algorithms achieve error rates comparable to the state of the art. All algorithms are run on one TITAN Xp GPU, and their average training times per epoch are compared. Table 3 shows that our algorithms run much faster than the existing algorithms, except for the baseline SGD and ADAM, which do not impose orthonormality constraints.
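For clarity, the three soft penalties referenced above can be written compactly; a PyTorch sketch (the SRIP spectral norm is computed exactly here via torch.linalg.matrix_norm for brevity):

```python
import torch

def so_penalty(K):
    """SO: ||K K^T - I||_F^2."""
    I = torch.eye(K.shape[0], dtype=K.dtype, device=K.device)
    return torch.linalg.norm(K @ K.t() - I) ** 2

def dso_penalty(K):
    """DSO: ||K K^T - I||_F^2 + ||K^T K - I||_F^2."""
    I_r = torch.eye(K.shape[0], dtype=K.dtype, device=K.device)
    I_c = torch.eye(K.shape[1], dtype=K.dtype, device=K.device)
    return (torch.linalg.norm(K @ K.t() - I_r) ** 2
            + torch.linalg.norm(K.t() @ K - I_c) ** 2)

def srip_penalty(K):
    """SRIP: spectral norm of (K K^T - I)."""
    I = torch.eye(K.shape[0], dtype=K.dtype, device=K.device)
    return torch.linalg.matrix_norm(K @ K.t() - I, ord=2)
```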

6.2 Orthonormality in RNNs

In RNNs, the hidden-to-hidden transition matrix $K$ can be modeled as a unitary matrix (Arjovsky et al., 2016). Wisdom et al. (2016) model the hidden-to-hidden transition matrix as a full-capacity unitary matrix on the complex Stiefel manifold: $\text{St}(\mathbb{C}^{N})=\{K\in\mathbb{C}^{N\times N}:K^{H}K=I\}$.

Pixel-by-pixel MNIST: We evaluate the proposed algorithms on the challenging pixel-by-pixel MNIST task for long-term memory. The task was used to test the capacity of uRNNs (Arjovsky et al., 2016; Wisdom et al., 2016). Following the same setting as in Wisdom et al. (2016), we reshape MNIST images of $28\times 28$ pixels into pixel-by-pixel sequences of length $T=784$, and select 5,000 of the 60,000 training examples for early-stopping validation.

Training: Wisdom et al. (2016) restrict the unitary transition matrix to the Stiefel manifold via the closed-form Cayley transform. In contrast, we use Cayley SGD with momentum and Cayley ADAM to reduce the training time. Table 4 shows that the proposed algorithms reduce the training time by about 35% for all settings of the network, while maintaining the same level of accuracy. All experiments are performed on one TITAN Xp GPU.

Checking Unitariness: To show that the proposed algorithms are valid optimization algorithms on the Stiefel manifold, we check the unitariness of the hidden-to-hidden matrix $K$ by computing the error term $\|K^{H}K-I\|_{F}$ during training. Table 5 compares the average errors for varying numbers of iterations $s$. As can be seen, the iterative Cayley transform approximates a unitary matrix well already for $s=2$. It even performs better than the closed-form Cayley transform, which might be an effect of the rounding error of the matrix inversion in Eq.(3).
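The unitariness error reported in Table 5 is simply the Frobenius norm of $K^{H}K-I$; a small PyTorch sketch of the check with a complex-valued $K$ (hypothetical size):

```python
import torch

def unitariness_error(K):
    """Frobenius norm of K^H K - I for a (complex) hidden-to-hidden matrix K."""
    I = torch.eye(K.shape[0], dtype=K.dtype, device=K.device)
    return torch.linalg.norm(K.conj().t() @ K - I).item()

K, _ = torch.linalg.qr(torch.randn(116, 116, dtype=torch.complex64))  # a unitary example
print(unitariness_error(K))   # close to 0 for an exactly unitary matrix
```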

Table 4: Pixel-by-pixel MNIST accuracy and training time per iteration of the closed-form Cayley Transform, Cayley SGD, and Cayley ADAM for Full-uRNNs (Wisdom et al., 2016). All experiments are performed on one TITAN Xp GPU.
Closed-Form Cayley SGD Cayley ADAM
Model Hidden Size Acc(%) Time(s) Acc(%) Time(s) Acc(%) Time(s)
Full-uRNN 116 92.8 2.10 92.6 1.42 92.7 1.50
Full-uRNN 512 96.9 2.44 96.7 1.67 96.9 1.74
Table 5: Checking unitariness by computing the error $\|K^{H}K-I\|_{F}$ for varying numbers of iterations in the iterative Cayley transform and for the closed-form Cayley transform.
Hidden Size s=0 s=1 s=2 s=3 s=4 Closed-form
n=116 3.231e-3 2.852e-4 7.384e-6 7.353e-6 7.338e-6 8.273e-5
n=512 6.787e-3 5.557e-4 2.562e-5 2.547e-5 2.544e-5 3.845e-5

7 Conclusion

We proposed an efficient way to enforce exact orthonormality constraints on parameters by optimization on the Stiefel manifold. We applied the iterative Cayley transform to the conventional SGD and ADAM to specify two new algorithms, Cayley SGD with momentum and Cayley ADAM, and presented a theoretical convergence analysis of the former. Experiments show that both algorithms achieve comparable performance and faster convergence than the baseline SGD and ADAM in training the standard VGG and Wide ResNet on CIFAR10 and CIFAR100, as well as RNNs on the pixel-by-pixel MNIST task. Both Cayley SGD with momentum and Cayley ADAM take less runtime per iteration than all existing hard and soft orthonormality methods, and can be applied to non-square parameter matrices.

Acknowledgements

This work was supported in part by NSF grant IIS-1911232, DARPA XAI Award N66001-17-2-4029 and AFRL STTR AF18B-T002.

References

  • Absil & Malick (2012) P-A Absil and Jérôme Malick. Projection-like retractions on matrix manifolds. SIAM Journal on Optimization, 22(1):135–158, 2012.
  • Absil et al. (2009) P-A Absil, Robert Mahony, and Rodolphe Sepulchre. Optimization algorithms on matrix manifolds. Princeton University Press, 2009.
  • Arjovsky et al. (2016) Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. In International Conference on Machine Learning, pp. 1120–1128, 2016.
  • Bansal et al. (2018) Nitin Bansal, Xiaohan Chen, and Zhangyang Wang. Can we gain more from orthogonality regularizations in training deep cnns? arXiv preprint arXiv:1810.09102, 2018.
  • Becigneul & Ganea (2019) Gary Becigneul and Octavian-Eugen Ganea. Riemannian adaptive optimization methods. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=r1eiqi09K7.
  • Boothby (1986) William M Boothby. An introduction to differentiable manifolds and Riemannian geometry, volume 120. Academic press, 1986.
  • Cho & Lee (2017) Minhyung Cho and Jaehyung Lee. Riemannian approach to batch normalization. In Advances in Neural Information Processing Systems, pp. 5225–5235, 2017.
  • Cisse et al. (2017) Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin, and Nicolas Usunier. Parseval networks: Improving robustness to adversarial examples. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp.  854–863. JMLR. org, 2017.
  • Cogswell et al. (2015) Michael Cogswell, Faruk Ahmed, Ross Girshick, Larry Zitnick, and Dhruv Batra. Reducing overfitting in deep networks by decorrelating representations. arXiv preprint arXiv:1511.06068, 2015.
  • Dorobantu et al. (2016) Victor Dorobantu, Per Andre Stromhaug, and Jess Renteria. Dizzyrnn: Reparameterizing recurrent neural networks for norm-preserving backpropagation. arXiv preprint arXiv:1612.04035, 2016.
  • Edelman et al. (1998) Alan Edelman, Tomás A Arias, and Steven T Smith. The geometry of algorithms with orthogonality constraints. SIAM journal on Matrix Analysis and Applications, 20(2):303–353, 1998.
  • Fiori et al. (2012) Simone Fiori, Tetsuya Kaneko, and Toshihisa Tanaka. Learning on the compact stiefel manifold by a cayley-transform-based pseudo-retraction map. In The 2012 International Joint Conference on Neural Networks (IJCNN), pp.  1–8. IEEE, 2012.
  • Gawlik & Leok (2018) Evan S Gawlik and Melvin Leok. High-order retractions on matrix manifolds using projected polynomials. SIAM Journal on Matrix Analysis and Applications, 39(2):801–828, 2018.
  • Ghadimi et al. (2015) Euhanna Ghadimi, Hamid Reza Feyzmahdavian, and Mikael Johansson. Global convergence of the heavy-ball method for convex optimization. In Control Conference (ECC), 2015 European, pp.  310–315. IEEE, 2015.
  • Helfrich et al. (2017) Kyle Helfrich, Devin Willmott, and Qiang Ye. Orthogonal recurrent neural networks with scaled cayley transform. arXiv preprint arXiv:1707.09520, 2017.
  • Huang et al. (2018a) Lei Huang, Xianglong Liu, Bo Lang, Adams Wei Yu, Yongliang Wang, and Bo Li. Orthogonal weight normalization: Solution to optimization over multiple dependent Stiefel manifolds in deep neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018a.
  • Huang et al. (2018b) Lei Huang, Dawei Yang, Bo Lang, and Jia Deng. Decorrelated batch normalization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018b.
  • Ioffe & Szegedy (2015) Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp.  448–456, Lille, France, 07–09 Jul 2015. PMLR. URL http://proceedings.mlr.press/v37/ioffe15.html.
  • Jiang & Dai (2015) Bo Jiang and Yu-Hong Dai. A framework of constraint preserving update schemes for optimization on stiefel manifold. Mathematical Programming, 153(2):535–575, 2015.
  • Kingma & Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Krizhevsky & Hinton (2009) Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
  • Lezcano-Casado & Martínez-Rubio (2019) Mario Lezcano-Casado and David Martínez-Rubio. Cheap orthogonal constraints in neural networks: A simple parametrization of the orthogonal and unitary group. arXiv preprint arXiv:1901.08428, 2019.
  • Manton (2002) Jonathan H Manton. Optimization algorithms exploiting unitary constraints. IEEE Transactions on Signal Processing, 50(3):635–650, 2002.
  • Nishimori & Akaho (2005) Yasunori Nishimori and Shotaro Akaho. Learning algorithms utilizing quasi-geodesic flows on the stiefel manifold. Neurocomputing, 67:106–135, 2005.
  • Simonyan & Zisserman (2014) Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • Vorontsov et al. (2017) Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. On orthogonality and learning recurrent networks with long term dependencies. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp.  3570–3578. JMLR. org, 2017.
  • Wen & Yin (2013) Zaiwen Wen and Wotao Yin. A feasible method for optimization with orthogonality constraints. Mathematical Programming, 142(1-2):397–434, 2013.
  • Wisdom et al. (2016) Scott Wisdom, Thomas Powers, John Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 4880–4888, 2016.
  • Xiong et al. (2016) Wei Xiong, Bo Du, Lefei Zhang, Ruimin Hu, and Dacheng Tao. Regularizing deep convolutional neural networks with a structured decorrelation constraint. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pp.  519–528. IEEE, 2016.
  • Zagoruyko & Komodakis (2016) Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
  • Zavriev & Kostyuk (1993) SK Zavriev and FV Kostyuk. Heavy-ball method in nonconvex optimization problems. Computational Mathematics and Modeling, 4(4):336–341, 1993.
  • Zhou et al. (2006) Jianping Zhou, Minh N. Do, and Jelena Kovacevic. Special paraunitary matrices, Cayley transform, and multidimensional orthogonal filter banks. IEEE Trans. Image Processing, 15(2):511–519, 2006.
  • Zhu (2017) Xiaojing Zhu. A riemannian conjugate gradient method for optimization on the stiefel manifold. Computational Optimization and Applications, 67(1):73–110, 2017.

Appendix A Preliminary

In this section, we derive some properties of the Stiefel manifold to facilitate the proofs of Theorem 1 and Theorem 2 in the main paper.

Since the Stiefel manifold $\mathcal{M}$ is a bounded set and the gradient satisfies the Lipschitz Assumption 1 in Sec. 5, one straightforward conclusion is that both $\nabla f(X)$ and its Stiefel manifold gradient $\nabla_{\mathcal{M}}f(X)$, which is a projection onto the tangent space, are bounded. Formally, there exists a positive constant $G$ such that

\|\nabla_{\mathcal{M}}f(X)\|\leq\|\nabla f(X)\|\leq G,\quad\forall X\in\mathcal{M}   (9)

As the stochastic gradient $\mathcal{G}(X)=\mathcal{G}(X;\xi)$ is the gradient over a sub-dataset, where $\xi$ is a random variable over data samples, and we work with a finite dataset, it is straightforward to show that $\mathcal{G}(X)$ and its Riemannian stochastic gradient $\mathcal{G}_{\mathcal{M}}(X)$ are also bounded. For brevity, we use the same upper bound $G$, such that:

\|\mathcal{G}_{\mathcal{M}}(X)\|\leq\|\mathcal{G}(X)\|\leq G,\quad\forall X\in\mathcal{M}   (10)

Recalling the recursion in Eq.(7), we show that the momentum is also bounded:

\|M_{k+1}\|\leq\beta\|M_{k}\|+\|\mathcal{G}_{\mathcal{M}}(X_{k})\|\leq\sum_{i=0}^{k}\beta^{k-i}\|\mathcal{G}_{\mathcal{M}}(X_{i})\|\leq\frac{1}{1-\beta}G   (11)

Therefore, we know that $W_{k}$ in Eq.(7) is bounded.

Appendix B Proof of Theorem 1

Proof.

By subtracting the $i$th iterate $Y^{i+1}=X+\frac{\alpha}{2}W\left(X+Y^{i}\right)$ from the fixed-point relationship in Eq.(5), we have:

\|Y^{i+1}-Y(\alpha)\|\leq\frac{\alpha\|W\|}{2}\|Y^{i}-Y(\alpha)\|   (12)

Therefore, since $W$ is bounded, for $\alpha<\frac{2}{\|W\|}$, so that $\frac{\alpha\|W\|}{2}<1$, the iteration in Eq.(5) is a contraction mapping, and it converges to the closed-form solution $Y(\alpha)$.

By differentiating Eq.(5), we have:

\frac{dY(\alpha)}{d\alpha}=W\Big(\frac{X+Y(\alpha)}{2}\Big)+\frac{\alpha}{2}W\frac{dY(\alpha)}{d\alpha},\qquad\frac{d^{2}Y(\alpha)}{d\alpha^{2}}=\Big(I-\frac{\alpha}{2}W\Big)^{-1}W\frac{dY(\alpha)}{d\alpha}   (13)

Therefore, $\frac{dY(\alpha)}{d\alpha}$ and $\frac{d^{2}Y(\alpha)}{d\alpha^{2}}$ are bounded, i.e., there exists a positive constant $C$ such that:

\Big\|\frac{d^{2}Y(\alpha)}{d\alpha^{2}}\Big\|\leq C   (14)

Using the Taylor expansion of $Y$ in Eq.(3), we have:

Y(\alpha)=X_{k}+\alpha M_{k+1}+\frac{1}{2}\alpha^{2}\frac{d^{2}Y(\gamma_{k}\alpha)}{d\alpha^{2}}   (15)

where $\gamma_{k}\in(0,1)$. Given $Y^{0}=X_{k}+\alpha M_{k+1}$, we have:

\|Y^{0}-Y(\alpha)\|=o(\alpha^{2})   (16)

Since $W$ is bounded and $\frac{\alpha\|W\|}{2}<1$, then

\|Y^{i}-Y(\alpha)\|\leq\Big(\frac{\alpha\|W\|}{2}\Big)^{i}\|Y^{0}-Y(\alpha)\|=o(\alpha^{2+i})   (17)

Appendix C Proof of Theorem 2

Proof.

Using the Taylor expansion of $Y(\alpha)$, the Cayley SGD with momentum updates in Eq.(7) can be written as:

M_{k+1}=\pi_{\mathcal{T}_{X_{k}}}(\beta M_{k})-\mathcal{G}_{\mathcal{M}}(X_{k}),\qquad X_{k+1}=X_{k}+\alpha M_{k+1}+\frac{1}{2}\alpha^{2}\frac{d^{2}Y(\gamma_{k}\alpha)}{d\alpha^{2}}   (18)

where $\gamma_{k}\in(0,1)$.

Using the fact that

M=\pi_{\mathcal{T}_{X}}(M)+\pi_{\mathcal{N}_{X}}(M)   (19)

where $\pi_{\mathcal{N}_{X}}(M)$ is the projection onto the normal space, and

\pi_{\mathcal{N}_{X}}(M)=X\frac{X^{\top}M+M^{\top}X}{2}   (20)

Then, the projection of momentum can be represented as:

\pi_{\mathcal{T}_{X_{k}}}(M_{k})=M_{k}-\pi_{\mathcal{N}_{X_{k}}}(M_{k})
=M_{k}-X_{k}\frac{X_{k}^{\top}M_{k}+M_{k}^{\top}X_{k}}{2}
=M_{k}-\frac{1}{2}X_{k}\Big\{\big[X_{k-1}+\alpha M_{k}+\tfrac{1}{2}\alpha^{2}\tfrac{d^{2}Y(\gamma_{k-1}\alpha)}{d\alpha^{2}}\big]^{\top}M_{k}+M_{k}^{\top}\big[X_{k-1}+\alpha M_{k}+\tfrac{1}{2}\alpha^{2}\tfrac{d^{2}Y(\gamma_{k-1}\alpha)}{d\alpha^{2}}\big]\Big\}
=M_{k}-\alpha X_{k}M_{k}^{\top}M_{k}-\frac{1}{4}\alpha^{2}X_{k}\Big[\tfrac{d^{2}Y(\gamma_{k-1}\alpha)}{d\alpha^{2}}^{\top}M_{k}+M_{k}^{\top}\tfrac{d^{2}Y(\gamma_{k-1}\alpha)}{d\alpha^{2}}\Big]   (21)

Then the momentum update in Eq.(7) is equivalent to:

M_{k+1}=\beta M_{k}-\alpha\beta X_{k}M_{k}^{\top}M_{k}-\mathcal{G}_{\mathcal{M}}(X_{k})-\frac{1}{4}\alpha^{2}\beta X_{k}\Big[\tfrac{d^{2}Y(\gamma_{k-1}\alpha)}{d\alpha^{2}}^{\top}M_{k}+M_{k}^{\top}\tfrac{d^{2}Y(\gamma_{k-1}\alpha)}{d\alpha^{2}}\Big]   (22)

Therefore, the parameter update in Eq.(7) can be represented as:

X_{k+1}=X_{k}+\alpha\beta M_{k}-\alpha\mathcal{G}_{\mathcal{M}}(X_{k})-\frac{1}{4}\alpha^{3}\beta X_{k}\Big[\tfrac{d^{2}Y(\gamma_{k-1}\alpha)}{d\alpha^{2}}^{\top}M_{k}+M_{k}^{\top}\tfrac{d^{2}Y(\gamma_{k-1}\alpha)}{d\alpha^{2}}\Big]-\alpha^{2}\beta X_{k}M_{k}^{\top}M_{k}+\frac{1}{2}\alpha^{2}\frac{d^{2}Y(\gamma_{k}\alpha)}{d\alpha^{2}}
=X_{k}+\beta(X_{k}-X_{k-1})-\alpha\mathcal{G}_{\mathcal{M}}(X_{k})+\alpha^{2}U   (23)

where

U=-\frac{1}{4}\alpha\beta X_{k}\Big[\tfrac{d^{2}Y(\gamma_{k-1}\alpha)}{d\alpha^{2}}^{\top}M_{k}+M_{k}^{\top}\tfrac{d^{2}Y(\gamma_{k-1}\alpha)}{d\alpha^{2}}\Big]-\beta X_{k}M_{k}^{\top}M_{k}+\frac{1}{2}\frac{d^{2}Y(\gamma_{k}\alpha)}{d\alpha^{2}}-\frac{\beta}{2}\frac{d^{2}Y(\gamma_{k-1}\alpha)}{d\alpha^{2}}   (24)

Since $\|M\|$, $\|X\|$, and $\|\frac{d^{2}Y}{d\alpha^{2}}\|$ are bounded, there is a positive constant $D$ such that

\|U\|\leq D   (25)

To facilitate the analysis of Cayley SGD with momentum, we introduce auxiliary variables $\{Z_{k}\}$ and $\{P_{k}\}$, such that:

Z_{k+1}=Z_{k}-\frac{\alpha}{1-\beta}\mathcal{G}_{\mathcal{M}}(X_{k})+\frac{\alpha^{2}}{1-\beta}U   (26)

where

Z_{k}=X_{k}+P_{k}   (27)

and

P_{k}=\begin{cases}\frac{\beta}{1-\beta}(X_{k}-X_{k-1}), & k\geq 1\\ 0, & k=0\end{cases}

Since $f(X)$ is a smooth function according to Assumption 1, we have:

f(Y)-f(X)-tr(\nabla f(X)^{\top}(Y-X))
=\int_{0}^{1}tr\big(\nabla f(X+t(Y-X))^{\top}(Y-X)\big)dt-tr(\nabla f(X)^{\top}(Y-X))
\leq\Big\|\int_{0}^{1}\big(\nabla f(X+t(Y-X))-\nabla f(X)\big)dt\Big\|\times\|Y-X\|
\leq\int_{0}^{1}L\|t(Y-X)\|dt\times\|Y-X\|
\leq\frac{L}{2}\|Y-X\|^{2}   (28)

Then, we have

f(Z_{k+1})\leq f(Z_{k})+tr(\nabla f(Z_{k})^{\top}(Z_{k+1}-Z_{k}))+\frac{L}{2}\|Z_{k+1}-Z_{k}\|^{2}
=f(Z_{k})+tr(\nabla f(Z_{k})^{\top}(Z_{k+1}-Z_{k}))+\frac{L}{2}\Big\|\frac{\alpha}{1-\beta}\mathcal{G}_{\mathcal{M}}(X_{k})-\frac{\alpha^{2}}{1-\beta}U\Big\|^{2}
\leq f(Z_{k})+tr(\nabla f(Z_{k})^{\top}(Z_{k+1}-Z_{k}))+\frac{L\alpha^{2}}{(1-\beta)^{2}}\|\mathcal{G}_{\mathcal{M}}(X_{k})\|^{2}+\frac{L\alpha^{4}}{(1-\beta)^{2}}\|U\|^{2}
\leq f(Z_{k})-\frac{\alpha}{1-\beta}tr(\mathcal{G}_{\mathcal{M}}(X_{k})^{\top}\nabla f(Z_{k}))+\frac{\alpha^{2}}{1-\beta}tr(U^{\top}\nabla f(Z_{k}))+\frac{L\alpha^{2}}{(1-\beta)^{2}}G^{2}+\frac{L\alpha^{4}}{(1-\beta)^{2}}D^{2}
\leq f(Z_{k})-\frac{\alpha}{1-\beta}tr(\mathcal{G}_{\mathcal{M}}(X_{k})^{\top}\nabla f(Z_{k}))+\frac{\alpha^{2}}{2(1-\beta)}(\|U\|^{2}+\|\nabla f(Z_{k})\|^{2})+\frac{L\alpha^{2}}{(1-\beta)^{2}}G^{2}+\frac{L\alpha^{4}}{(1-\beta)^{2}}D^{2}
\leq f(Z_{k})-\frac{\alpha}{1-\beta}tr(\mathcal{G}_{\mathcal{M}}(X_{k})^{\top}\nabla f(Z_{k}))+\frac{\alpha^{2}}{2(1-\beta)}(D^{2}+G^{2})+\frac{L\alpha^{2}}{(1-\beta)^{2}}G^{2}+\frac{L\alpha^{4}}{(1-\beta)^{2}}D^{2}   (29)

By taking expectation on both sides, we have:

\mathbb{E}[f(Z_{k+1})-f(Z_{k})]
\leq\mathbb{E}\Big[-\frac{\alpha}{1-\beta}tr(\nabla f(Z_{k})^{\top}\nabla_{\mathcal{M}}f(X_{k}))\Big]+\frac{\alpha^{2}}{2(1-\beta)}(D^{2}+G^{2})+\frac{L\alpha^{2}}{(1-\beta)^{2}}G^{2}+\frac{L\alpha^{4}}{(1-\beta)^{2}}D^{2}
\leq\mathbb{E}\Big[-\frac{\alpha}{1-\beta}tr\big((\nabla f(Z_{k})-\nabla f(X_{k}))^{\top}\nabla_{\mathcal{M}}f(X_{k})\big)-\frac{\alpha}{1-\beta}tr(\nabla f(X_{k})^{\top}\nabla_{\mathcal{M}}f(X_{k}))\Big]+\frac{\alpha^{2}}{2(1-\beta)}(D^{2}+G^{2})+\frac{L\alpha^{2}}{(1-\beta)^{2}}G^{2}+\frac{L\alpha^{4}}{(1-\beta)^{2}}D^{2}
=\mathbb{E}\Big[-\frac{\alpha}{1-\beta}tr\big((\nabla f(Z_{k})-\nabla f(X_{k}))^{\top}\nabla_{\mathcal{M}}f(X_{k})\big)-\frac{\alpha}{1-\beta}\|\nabla_{\mathcal{M}}f(X_{k})\|^{2}\Big]+\frac{\alpha^{2}}{2(1-\beta)}(D^{2}+G^{2})+\frac{L\alpha^{2}}{(1-\beta)^{2}}G^{2}+\frac{L\alpha^{4}}{(1-\beta)^{2}}D^{2}   (30)

By noticing that:

-\frac{\alpha}{1-\beta}tr\big((\nabla f(Z_{k})-\nabla f(X_{k}))^{\top}\nabla_{\mathcal{M}}f(X_{k})\big)\leq\frac{1}{2L}\|\nabla f(Z_{k})-\nabla f(X_{k})\|^{2}+\frac{L\alpha^{2}}{2(1-\beta)^{2}}\|\nabla_{\mathcal{M}}f(X_{k})\|^{2}   (31)

Then

\mathbb{E}[f(Z_{k+1})-f(Z_{k})]\leq\frac{1}{2L}\mathbb{E}\|\nabla f(Z_{k})-\nabla f(X_{k})\|^{2}+\Big(\frac{L\alpha^{2}}{2(1-\beta)^{2}}-\frac{\alpha}{1-\beta}\Big)\mathbb{E}\|\nabla_{\mathcal{M}}f(X_{k})\|^{2}+\frac{\alpha^{2}}{2(1-\beta)}(D^{2}+G^{2})+\frac{L\alpha^{2}}{(1-\beta)^{2}}G^{2}+\frac{L\alpha^{4}}{(1-\beta)^{2}}D^{2}   (32)

According to the Lipschitz continuity in Assumption 1, we have:

\|\nabla f(Z_{k})-\nabla f(X_{k})\|^{2}\leq L^{2}\|Z_{k}-X_{k}\|^{2}
=\frac{L^{2}\beta^{2}}{(1-\beta)^{2}}\|X_{k}-X_{k-1}\|^{2}
=\frac{L^{2}\beta^{2}}{(1-\beta)^{2}}\Big\|\alpha M_{k}+\frac{1}{2}\alpha^{2}\frac{d^{2}Y(\gamma_{k}\alpha)}{d\alpha^{2}}\Big\|^{2}
\leq\frac{2L^{2}\beta^{2}}{(1-\beta)^{2}}\Big(\|\alpha M_{k}\|^{2}+\Big\|\frac{1}{2}\alpha^{2}\frac{d^{2}Y(\gamma_{k}\alpha)}{d\alpha^{2}}\Big\|^{2}\Big)
\leq\frac{2L^{2}\alpha^{2}\beta^{2}}{(1-\beta)^{2}}\Big(\frac{G^{2}}{(1-\beta)^{2}}+\frac{\alpha^{2}C^{2}}{4}\Big)   (33)

Therefore,

\mathbb{E}[f(Z_{k+1})-f(Z_{k})]\leq-B\,\mathbb{E}\|\nabla_{\mathcal{M}}f(X_{k})\|^{2}+\alpha^{2}B^{\prime}   (34)

where

B=\frac{\alpha}{1-\beta}-\frac{L\alpha^{2}}{2(1-\beta)^{2}}   (35)
B^{\prime}=\frac{L\beta^{2}}{(1-\beta)^{2}}\Big(\frac{G^{2}}{(1-\beta)^{2}}+\frac{\alpha^{2}C^{2}}{4}\Big)+\frac{D^{2}+G^{2}}{2(1-\beta)}+\frac{LG^{2}}{(1-\beta)^{2}}+\frac{L\alpha^{2}}{(1-\beta)^{2}}D^{2}   (36)

Since $\alpha\leq\frac{1-\beta}{L}$, then

B=\frac{\alpha}{1-\beta}-\frac{L\alpha^{2}}{2(1-\beta)^{2}}=\frac{\alpha}{1-\beta}\Big(1-\frac{\alpha L}{2(1-\beta)}\Big)\geq\frac{\alpha}{2(1-\beta)}>0   (37)

By summing Eq.(34) over all $k$, we have

B\sum_{k=0}^{t}\mathbb{E}\|\nabla_{\mathcal{M}}f(X_{k})\|^{2}\leq\mathbb{E}[f(Z_{0})-f(Z_{t+1})]+(t+1)\alpha^{2}B^{\prime}\leq f(Z_{0})-f_{*}+(t+1)\alpha^{2}B^{\prime}   (38)

Then

\min_{k=0,\cdots,t}\mathbb{E}[\|\nabla_{\mathcal{M}}f(X_{k})\|^{2}]\leq\frac{f(Z_{0})-f_{*}}{(t+1)B}+\alpha^{2}\frac{B^{\prime}}{B}   (39)
\min_{k=0,\cdots,t}\mathbb{E}[\|\nabla_{\mathcal{M}}f(X_{k})\|^{2}]\leq\frac{2(f(Z_{0})-f_{*})(1-\beta)}{(t+1)\alpha}+2\alpha B^{\prime}(1-\beta)   (40)

Using the fact that $\alpha=\min\{\frac{1-\beta}{L},\frac{A}{\sqrt{t+1}}\}$, and noticing that $Z_{0}=X_{0}$, we obtain

\min_{k=0,\cdots,t}\mathbb{E}[\|\nabla_{\mathcal{M}}f(X_{k})\|^{2}]
\leq\frac{2(f(X_{0})-f_{*})(1-\beta)}{t+1}\max\Big\{\frac{L}{1-\beta},\frac{\sqrt{t+1}}{A}\Big\}+\frac{2AB^{\prime}(1-\beta)}{\sqrt{t+1}}
\leq\frac{2(f(X_{0})-f_{*})(1-\beta)}{t+1}\max\Big\{\frac{L}{1-\beta},\frac{\sqrt{t+1}}{A}\Big\}+\frac{2A(1-\beta)}{\sqrt{t+1}}\Big[\frac{L\beta^{2}}{(1-\beta)^{2}}\Big(\frac{G^{2}}{(1-\beta)^{2}}+\frac{A^{2}C^{2}}{4(t+1)}\Big)+\frac{D^{2}+G^{2}}{2(1-\beta)}+\frac{LG^{2}}{(1-\beta)^{2}}+\frac{LA^{2}}{(t+1)(1-\beta)^{2}}D^{2}\Big]   (41)

Therefore, $\min_{k=0,\cdots,t}\mathbb{E}[\|\nabla_{\mathcal{M}}f(X_{k})\|^{2}]=o(\frac{1}{\sqrt{t+1}})\to 0$, as $t\to\infty$.