
Multi-step Estimation for
Gradient-based Meta-learning

Jin-Hwa Kim
SK Telecom
[email protected]
Junyoung Park
SK Telecom
[email protected]

Yongseok Choi
SK Telecom
[email protected]
Abstract

Gradient-based meta-learning approaches have been successful in few-shot learning, transfer learning, and a wide range of other domains. Despite their efficacy and simplicity, the burden of calculating the Hessian matrix with a large memory footprint is a critical challenge in large-scale applications. To tackle this issue, we propose a simple yet effective method to reduce this cost by reusing the same gradient within a window of inner steps. We describe the dynamics of the multi-step estimation in the Lagrangian formalism and discuss how to reduce the number of second-order derivative evaluations while estimating the dynamics. To validate our method, we experiment on meta-transfer learning and few-shot learning tasks in multiple settings. The experiment on meta-transfer learning emphasizes the applicability of our method to training meta-networks, where other approximations are limited. For few-shot learning, we evaluate time and memory complexities compared with popular baselines. We show that our method significantly reduces training time and memory usage while maintaining competitive accuracies, or even outperforming the baselines in some cases.

1 Introduction

Meta-learning is a paradigm that learns knowledge from a collection of tasks to enable fast adaptation to a novel task [1, 2, 3, 4]. Meta-learning methods include model-based [5, 3, 6], metric-based [7, 8, 9, 10], and optimization-based [11, 12] approaches. Gradient-based methods, a family of optimization-based methods, exploit bi-level learning [13, 12], where inner-level tasks are solved by gradient-based optimization given knowledge (meta-parameters), while the outer level learns the common knowledge across multiple tasks (meta-objective). Error back-propagation [14] is then performed through the inner optimization path [15, 13] for the meta-objective. This has been successfully applied to hyper-parameter optimization [13, 12], few-shot learning [11], transfer learning [16], etc.

However, gradient-based methods face the challenge of evaluating second-order derivatives, whose computational cost is proportional to the number of inner optimization steps. This challenge becomes critical when the number of task-parameters or of meta-parameters forming networks (meta-networks) [16, 17] increases significantly in high-dimensional problems [18, 19, 20, 13, 12]. Due to these computational issues, meta-learning over long inner optimization horizons is an active research area [21].

In this paper, we provide a simple yet effective approximation to gradient-based meta-learning. In Figure 1, we observe that the difference between consecutive task-gradients asymptotically converges to zero during training. Based on this, we take the first task-gradient in each non-overlapping window of inner steps and reuse it throughout the inner optimization of that window, skipping all Hessian-vector products except one per window while estimating the dynamics of the inner optimization. In Proposition 1, we show that our multi-step estimation cannot be reduced to merely decreasing the number of inner steps, and we empirically validate that the approximated dynamics retain performance on multiple meta-transfer learning and few-shot learning tasks.

As related work, MAML approximations [11, 22, 23], implicit gradient methods [24, 25, 17], and Hessian-free methods for meta-reinforcement learning (Meta-RL) [26, 27] can be considered. The MAML approximations, First-order MAML [11] and Reptile [22], as well as iMAML [25], find a parameter initialization for fast adaptation whose parameter space is shared with the meta-parameters', which prohibits generalization to other meta-learning approaches, e.g., learning meta-networks. The implicit gradient methods decouple the inner optimization trajectory from the calculation of the meta-gradient. However, they are not suitable for large-scale applications since they require the convergence of the inner optimization (or an approximation thereof) and incur the computational cost of matrix inversion. The policy gradient in Meta-RL suffers from high variance due to the sampling of trajectories with sparse rewards, so Gaussian smoothing for Hessian-free methods is favored in reinforcement learning [26, 27]. However, since Hessian-vector products can be performed efficiently using the reverse mode of automatic differentiation in practice [28, 25], the sampling-based approaches are less appealing in other cases where the variance is not too high.

We summarize the contributions of this study as follows:

  • We approximate multi-step meta-learning by reusing the task-gradient over consecutive inner steps to avoid ‘full’ Hessian-vector products, supported by empirical evidence that the normalized difference between consecutive task-gradients converges to zero.

  • We reduce the training time for meta-transfer learning by 35% and for few-shot learning by up to 50%, while significantly reducing memory footprints, in a setting where first-order and implicit-gradient methods cannot be effectively applied to learn meta-networks.

  • We validate our argument on assorted benchmarks: meta-transfer learning tasks with six different transfer learning settings and few-shot learning tasks on the Omniglot and miniImageNet datasets, showing competitive accuracies or even outperforming baselines.

2 Preliminaries

2.1 Back-propagation through optimizing steps

Consider the following bilevel optimization problem:

\min_{\theta}~\inf\{\mathcal{L}(s_{\theta},\theta)\mid s_{\theta}\in\operatorname*{arg\,min}_{s}E_{\theta}(s)\}   (1)

where \mathcal{L} and E_{\theta} are called the outer-objective and inner-objective, respectively. We view meta-learning as an instantiation of this framework [12], and with this view \mathcal{L} is also called a meta-objective. In a gradient-based approach [18, 19, 13, 12], the inner problem is solved by a dynamical system describing gradient-based optimization. Let the state of optimization (e.g., parameters \phi and velocities \mathrm{v} for SGD with momentum) and the meta-parameter be \mathrm{s}_{t} and \theta, respectively. Then the dynamics \Phi_{t} for a given \theta are defined as follows [13]:

\mathrm{s}_{t+1}=\Phi_{t}(\mathrm{s}_{t};\theta)   (2)

for every t\in\{1,\ldots,T\}. It describes a one-step optimization of the inner-objective \mathcal{L}_{t}(\phi_{t};\theta) at step t. Notice that \mathrm{s}_{1} is related to \theta for the meta-objective (e.g., \phi_{1}=\theta for MAML [11] or via the inner-objective [16]). Then, using the dynamics, the derivative of the meta-objective function \mathcal{L} with respect to the meta-parameter is:

\frac{\partial\mathcal{L}}{\partial\theta}=\sum_{t=1}^{T}\Lambda_{t}^{\intercal}\frac{\partial\Phi_{t}(\mathrm{s}_{t};\theta)}{\partial\theta}   (3)

where the Lagrangian multipliers \Lambda_{t} (see [15, 13] for this interpretation), or back-propagated errors along the steps, are updated by

\Lambda_{t-1}^{\intercal}=\Lambda_{t}^{\intercal}\frac{\partial\Phi_{t}(\mathrm{s}_{t};\theta)}{\partial\mathrm{s}_{t}},\qquad\Lambda_{T}=\frac{\partial\mathcal{L}}{\partial\mathrm{s}_{T+1}}.   (4)

Refer to [13] for the derivation of this result. For example, but not restricted to this case, stochastic gradient descent (SGD) with momentum is defined as:

\mathrm{v}_{t+1}\leftarrow\mu\mathrm{v}_{t}+\mathrm{g}_{t}+\omega\phi_{t},\qquad\phi_{t+1}\leftarrow\phi_{t}-\eta\mathrm{v}_{t+1}   (5)

where \mu, \mathrm{g}_{t}, \omega, and \eta are the momentum, \nabla_{\phi}\mathcal{L}_{t}(\phi_{t};\theta), weight decay, and learning rate, respectively. Here, \mathrm{s}_{t}=[\phi_{t},\mathrm{v}_{t}]^{\intercal}. The SGD instantiates the dynamics as follows:

\frac{\partial\mathcal{L}}{\partial\theta}=\sum_{t=1}^{T}\Lambda_{t}^{\intercal}\begin{bmatrix}\frac{\partial\Phi_{t}(\mathrm{s}_{t},\theta)_{\phi}}{\partial\theta}\\ \frac{\partial\Phi_{t}(\mathrm{s}_{t},\theta)_{\mathrm{v}}}{\partial\theta}\end{bmatrix}=\sum_{t=1}^{T}\Lambda_{t}^{\intercal}\begin{bmatrix}-\eta\\ 1\end{bmatrix}\frac{\partial\mathrm{g}_{t}}{\partial\theta}   (6)

and the corresponding \Lambda_{t} is updated by

\Lambda_{t-1}^{\intercal}=\Lambda_{t}^{\intercal}\begin{bmatrix}\frac{\partial\Phi_{t}(\mathrm{s}_{t};\theta)_{\phi}}{\partial\phi_{t}}&\frac{\partial\Phi_{t}(\mathrm{s}_{t};\theta)_{\phi}}{\partial\mathrm{v}_{t}}\\ \frac{\partial\Phi_{t}(\mathrm{s}_{t};\theta)_{\mathrm{v}}}{\partial\phi_{t}}&\frac{\partial\Phi_{t}(\mathrm{s}_{t};\theta)_{\mathrm{v}}}{\partial\mathrm{v}_{t}}\end{bmatrix}=\Lambda_{t}^{\intercal}\begin{bmatrix}1-\eta\omega-\eta\frac{\partial\mathrm{g}_{t}}{\partial\phi_{t}}&-\eta\mu\\ \omega+\frac{\partial\mathrm{g}_{t}}{\partial\phi_{t}}&\mu\end{bmatrix}.   (7)

Here, we remark that the calculation of error back-propagation through optimizing steps is straightforward, as previously discussed [15], since the optimizing steps are also differentiable. Notice that \partial\mathrm{g}/\partial\phi is a Hessian matrix and \partial\mathrm{g}/\partial\theta is a second-order derivative with respect to two variables, \phi and \theta. The computational bottleneck is obtaining these second-order derivatives; however, we can take advantage of Hessian-vector products using automatic differentiation [28], which typically cost about five times the computation and no more than twice the memory of evaluating \nabla\mathcal{L}_{t}(\phi_{t}) [25]. For this reason, the sampling-based approximations of the Hessian [26, 27] are prone to be costly in practice.
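To illustrate this point, the following minimal PyTorch sketch (our own toy example, not code from this work) computes a Hessian-vector product (\partial\mathrm{g}/\partial\phi)\,v with a single extra backward pass, without ever materializing the Hessian:

```python
import torch

# Toy inner-objective L(phi) = sum(phi**4), for illustration only.
phi = torch.randn(5, requires_grad=True)
loss = (phi ** 4).sum()

# g = dL/dphi, with create_graph=True so g can be differentiated again.
g = torch.autograd.grad(loss, phi, create_graph=True)[0]

# Hessian-vector product: d(g . v)/dphi = H v, for an arbitrary vector v.
v = torch.randn(5)
hv = torch.autograd.grad(g @ v, phi)[0]

# Check against the explicit Hessian, which is 12 * diag(phi**2) here.
H = torch.diag(12.0 * phi.detach() ** 2)
assert torch.allclose(hv, H @ v, atol=1e-5)
```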

2.2 Applications

Learning meta-networks. The meta-parameters of meta-networks usually do not share a parameter space with the task-parameters. However, the two groups of parameters can be inter-dependent through the loss functions in the inner loop, i.e., \mathrm{g}_{t}=\nabla_{\phi}\mathcal{L}_{t}(\phi_{t},\theta_{k}). The meta-update by the meta-gradient is defined, using the previous equations, as follows:

\theta_{k+1}=\theta_{k}-\eta_{\text{meta}}\frac{\partial\mathcal{L}(\phi_{T+1})}{\partial\theta}.   (8)

Few-shot learning. The MAML algorithm [11] sets the initial task-parameters to the current meta-parameters as \phi_{1}\leftarrow\theta_{k}, and the meta-parameters are updated by the meta-gradient as follows:

\theta_{k+1}=\theta_{k}-\eta_{\text{meta}}\frac{1}{M}\sum_{i=1}^{M}\frac{\partial\mathcal{L}^{(i)}(\phi_{T+1}^{(i)})}{\partial\theta}   (9)

where \phi_{T+1}^{(i)} is the task-parameters after the T steps of the dynamics for the i-th task, and M is the number of few-shot learning tasks. Since MAML sets \phi_{1}=\theta and the loss function of the inner loop is a function of \phi, we can rewrite Equation 3 as follows:

\frac{\partial\mathcal{L}}{\partial\theta}=\Lambda_{1}^{\intercal}\frac{\partial\Phi_{1}(\mathrm{s}_{1})}{\partial\mathrm{s}_{1}}\frac{\partial\mathrm{s}_{1}}{\partial\theta}.   (10)
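To make Equation 10 concrete, a toy sketch of our own (with hypothetical scalar quadratic losses) computes the MAML meta-gradient through one inner SGD step and verifies it analytically:

```python
import torch

theta = torch.tensor(1.5, requires_grad=True)
eta = 0.1
task_loss = lambda p: (p - 2.0) ** 2  # inner-objective L_1
meta_loss = lambda p: (p - 3.0) ** 2  # meta-objective L

phi1 = theta  # MAML initialization: phi_1 = theta
g1 = torch.autograd.grad(task_loss(phi1), phi1, create_graph=True)[0]
phi2 = phi1 - eta * g1  # one inner step of the dynamics

# Meta-gradient through the inner step (Equation 10).
meta_grad = torch.autograd.grad(meta_loss(phi2), theta)[0]

# Analytic check: dL/dtheta = 2 (phi_2 - 3) (1 - 2 eta) for this quadratic.
phi2_val = 1.5 - eta * 2.0 * (1.5 - 2.0)
assert abs(meta_grad.item() - 2.0 * (phi2_val - 3.0) * (1.0 - 2.0 * eta)) < 1e-6
```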

3 Multi-step estimation

Figure 1: Gradient difference in inner-step.

3.1 Motivation

Since the computational cost is proportional to the number of evaluations of \nabla^{2}_{\phi}\mathcal{L}_{t}(\phi_{t}) and \nabla_{\theta}\nabla_{\phi}\mathcal{L}_{t}(\phi_{t}) in the inner loop, we propose to approximate \nabla\mathcal{L}_{t^{\prime}}(\phi_{t^{\prime}})\approx\nabla\mathcal{L}_{t}(\phi_{t}) for all t^{\prime}\in\Delta_{t}^{n}=\{t,t+1,\cdots,t+n-1\} over non-overlapping intervals, which means the \mathcal{L}_{t^{\prime}\in\Delta_{t}^{n}} are the same. In other words, we reuse \nabla_{\phi}\mathcal{L}_{t}(\phi_{t}) (i.e., \mathrm{g}_{t}) in the dynamics. The error of the approximation is minimized when \|\nabla\mathcal{L}_{t^{\prime}}(\phi_{t^{\prime}})-\nabla\mathcal{L}_{t^{\prime\prime}}(\phi_{t^{\prime\prime}})\| is close to zero, where t^{\prime} and t^{\prime\prime} are in \Delta_{t}^{n}. This assumption is validated in our experiment as shown in Figure 1, where we measure the normalized norm \|\nabla\mathcal{L}_{t}(\phi_{t+1})-\nabla\mathcal{L}_{t}(\phi_{t})\|/\|\nabla\mathcal{L}_{t}(\phi_{t+1})\| during training; the difference diminishes to zero quickly in the early stage. The experimental details can be found in Appendix A. In the following section, we show how this estimation shapes the dynamics.
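A minimal sketch of this measurement (our own PyTorch illustration; the loss and parameters are hypothetical stand-ins):

```python
import torch

def normalized_grad_diff(loss_fn, phi_t, phi_t1):
    # ||g(phi_{t+1}) - g(phi_t)|| / ||g(phi_{t+1})|| for a shared loss.
    g_t = torch.autograd.grad(loss_fn(phi_t), phi_t)[0]
    g_t1 = torch.autograd.grad(loss_fn(phi_t1), phi_t1)[0]
    return ((g_t1 - g_t).norm() / g_t1.norm()).item()

# Toy usage: phi_{t+1} is one SGD step away from phi_t.
loss_fn = lambda p: (p ** 2).sum()
phi_t = torch.randn(10, requires_grad=True)
g = torch.autograd.grad(loss_fn(phi_t), phi_t)[0]
phi_t1 = (phi_t - 0.1 * g).detach().requires_grad_()
print(normalized_grad_diff(loss_fn, phi_t, phi_t1))
```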

3.2 Multi-step estimated dynamical system

For the estimated system, we re-define the previous dynamical system as follows:

\mathrm{s}_{t+n}=\hat{\Phi}_{t}^{n}(\mathrm{s}_{t};\theta)   (11)

where \hat{\Phi}_{t}^{n} implicitly moves n times with the fixed \nabla_{\phi}\mathcal{L}_{t}(\phi_{t}). Using the dynamics of the estimated n-step optimization, the meta-gradient in Equation 3 and the Lagrangian multipliers are as follows:

\frac{\partial\mathcal{L}}{\partial\theta}\approx\sum_{k=1}^{K}\Lambda_{kn}^{\intercal}\frac{\partial\hat{\Phi}_{(k-1)n+1}^{n}(\mathrm{s}_{(k-1)n+1};\theta)}{\partial\theta},\qquad\Lambda_{(k-1)n}^{\intercal}=\Lambda_{kn}^{\intercal}\frac{\partial\hat{\Phi}_{(k-1)n+1}^{n}(\mathrm{s}_{(k-1)n+1};\theta)}{\partial\mathrm{s}_{(k-1)n+1}}   (12)

for T=Kn where K\in\mathbb{N}. It is noteworthy that the transformed dynamics effectively decrease the number of second-order derivative evaluations from Kn to K. For the SGD with momentum,

\frac{\partial\mathcal{L}}{\partial\theta}\approx\sum_{k=1}^{K}\Lambda_{kn}^{\intercal}\bigg(\sum_{i=0}^{n-1}\begin{bmatrix}1-\eta\omega&-\eta\mu\\ \omega&\mu\end{bmatrix}^{i}\begin{bmatrix}-\eta\\ 1\end{bmatrix}\frac{\partial\mathrm{g}_{(k-1)n+1}}{\partial\theta}\bigg).   (13)

For the proof, please see Appendix B. Letting \omega=\mu=0, for the naïve SGD, we have that

\frac{\partial\mathcal{L}}{\partial\theta}\approx\sum_{k=1}^{K}\lambda_{kn}^{\intercal}\bigg(-n\eta\frac{\partial\mathrm{g}_{(k-1)n+1}}{\partial\theta}\bigg)   (14)

where \Lambda is reduced to a vector \lambda since \mu=0, as follows:

\lambda_{(k-1)n}^{\intercal}=\lambda_{kn}^{\intercal}\big(1-n\eta\frac{\partial\mathrm{g}_{(k-1)n+1}}{\partial\phi_{(k-1)n+1}}\big),\qquad\lambda_{Kn}^{\intercal}=\frac{\partial\mathcal{L}}{\partial\phi_{Kn+1}}   (15)

Therefore, for the naïve SGD, the multi-step estimated dynamical system is equivalent to the single-step system with two changed hyper-parameters, \eta^{\prime}\leftarrow n\eta and T^{\prime}\leftarrow K=T/n. However, be advised that, in general, the multi-step dynamics cannot be substituted by a change of hyper-parameters due to the interaction of states, as stated in Proposition 1; the proof can be found in Appendix C. An empirical validation in a meta-transfer learning task can be found in Appendix H.
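The naïve-SGD equivalence is easy to check numerically; a small sketch of our own (toy quadratic loss, hypothetical names):

```python
eta, n, T = 0.1, 2, 4
grad = lambda phi: 2.0 * phi  # gradient of L(phi) = phi**2

# Multi-step estimation: reuse the window's first gradient n times.
phi = 1.0
for _ in range(T // n):
    g = grad(phi)
    for _ in range(n):
        phi -= eta * g

# Equivalent single-step system: eta' = n * eta, T' = T / n.
psi = 1.0
for _ in range(T // n):
    psi -= (n * eta) * grad(psi)

assert abs(phi - psi) < 1e-12  # the two trajectories coincide for naive SGD
```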

Proposition 1.

Multi-step estimation using SGD with momentum cannot be realized by one-step SGD with momentum consisting of different optimization hyper-parameters.

A similar method [29] was proposed for the classification task with large mini-batches in a distributed environment. In that scenario, when the mini-batch size is multiplied by n for n computing nodes and, as a consequence, the number of iterations per epoch is decreased by a factor of n, they multiply the learning rate by n to approximate the previous learning procedure. Note that we generalize to any differentiable optimization in a dynamical system, which can be of independent interest. The implementation of multi-step estimation is straightforward using the automatic differentiation of \hat{\Phi}_{t}^{n} (ref. Algorithm 1).

4 Related work

In this section, we compare the approximation methods of meta-learning in terms of computational and memory complexities. This comparison reveals the shortcomings of each method when applied to given meta-learning algorithms.

First-order MAML [11]. It assumes that the task-parameters \phi are simply independent of the given meta-parameters \theta in calculating the meta-gradient for each task in Equation 3, as follows:

\frac{\partial\mathcal{L}}{\partial\theta}\approx\frac{\partial\mathcal{L}(\phi_{T+1})}{\partial\phi}   (16)

where \phi_{T+1} is the output of the task-parameters via the T-step dynamics. Notice that the meta-parameters could lie in a parameter space different from the task-parameters' when learning meta-networks; the approximation then inevitably deteriorates performance and prevents learning meta-networks.

Reptile [22]. Similarly to first-order MAML, the task-parameters are assumed to be independent of the meta-parameters. It ignores the gradients in the inner steps; rather, it makes the meta-parameters slowly move toward the task-parameters as follows:

\frac{\partial\mathcal{L}}{\partial\theta}\approx\theta-\phi_{T+1}.   (17)

Note that Reptile and first-order MAML are identical under the proximal regularization [25].

iMAML [25]. The implicit MAML uses an implicit Jacobian exploiting a stationary inner solution \phi_{\star} with respect to \mathcal{L}_{t} as follows:

\frac{\partial\mathcal{L}}{\partial\theta}=\frac{\partial\phi_{\star}}{\partial\theta}\nabla_{\phi}\mathcal{L}(\phi_{\star})=\big(\mathbb{I}+\frac{1}{\lambda}\nabla^{2}_{\phi}\mathcal{L}_{t}(\phi_{\star})\big)^{-1}\nabla_{\phi}\mathcal{L}(\phi_{\star})   (18)

where \lambda is a hyper-parameter related to a proximal regularization keeping \phi_{\star} close to \theta, and \mathcal{L}_{t} is a single loss function in the inner loop. Since the Jacobian \frac{\partial\phi_{\star}}{\partial\theta} depends only on \nabla\mathcal{L}_{t}(\phi_{\star}), this method decouples the meta-gradient computation from the inner optimization trajectory. Its limitations are the additional computations to find an inner-level solution and to approximate a matrix inversion, which are unfavorable for large-scale applications.
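For concreteness, the matrix inversion in Equation 18 is typically approximated with conjugate gradient using only Hessian-vector products; a minimal sketch of our own (not iMAML's actual implementation):

```python
import torch

def implicit_meta_grad(inner_loss, phi_star, lam, v, iters=10):
    # Solve (I + H/lam) x = v by conjugate gradient, where H is the
    # Hessian of inner_loss at phi_star, accessed via HVPs only.
    g = torch.autograd.grad(inner_loss(phi_star), phi_star, create_graph=True)[0]

    def matvec(x):  # (I + H/lam) x with one Hessian-vector product
        hx = torch.autograd.grad(g @ x, phi_star, retain_graph=True)[0]
        return x + hx / lam

    x = torch.zeros_like(v)
    r = v - matvec(x)
    p = r.clone()
    for _ in range(iters):
        Ap = matvec(p)
        rr = r @ r
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if r.norm() < 1e-8:
            break
        p = r + ((r @ r) / rr) * p
    return x
```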

Other Hessian-free methods. HF-MAML [30] exploits a first-order approximation of the Hessian-vector product [31] for one-step MAML. More recently, ES-MAML [26] uses a zeroth-order smoothing method for the Hessian approximation, although it has a large estimation error with a small number of samples. GGS-MAML [27] proposes a gradient-based Gaussian smoothing (GGS) method as a variant. The MAML-based approaches, including iMAML, cannot approximate the second-order derivatives with respect to the two variables, meta- and task-parameters, \nabla_{\theta}\nabla_{\phi}\mathcal{L}(\phi), which are used to learn meta-networks. And, as mentioned before, since automatic differentiation efficiently computes Hessian-vector products in practice [25], these Hessian-free methods are less beneficial in our view.

5 Experiments

5.1 Meta-transfer learning

Algorithm 1 Multi-step estimated meta-transfer learning (cf. Algorithm 1 of the original work [16])
1:  Input: Training examples \mathcal{D}_{\tt tr}=\{(x_{i},y_{i})\}
2:  Hyper-parameters: Mini-batch size B, optimizer state v, inner steps T=KN
3:  repeat
4:     Sample a batch \mathcal{B}\subset\mathcal{D}_{\tt tr} with |\mathcal{B}|=B
5:     Update \phi to minimize \frac{1}{B}\sum_{(x,y)\in\mathcal{B}}\mathcal{L}_{\tt total}(\phi|x,y,\theta)
6:     Initialize s_{0}\leftarrow(\phi,v) to begin inner-steps
7:     for k=0 to K-1 do
8:        g_{Nk}\leftarrow\nabla_{\phi}\mathcal{L}_{\tt tfr}(\phi_{Nk};\theta,x\in\mathcal{B})
9:        for n=0 to N-1 do
10:           s_{Nk+n+1}\leftarrow\Phi_{Nk+n}(s_{Nk+n};\theta,g_{Nk})   // Lines 8-12 describe \hat{\Phi}_{Nk}^{N} in Eqn. 11, which estimates s_{N(k+1)}.
11:        end for
12:     end for
13:     s_{T+1}\leftarrow\Phi_{T}(s_{T};\theta,\nabla_{\phi}\mathcal{L}_{\tt acc}(\phi_{T};(x,y)\in\mathcal{B}))
14:     Meta-update \theta using \nabla_{\theta}\frac{1}{B}\sum_{(x,y)\in\mathcal{B}}\mathcal{L}_{\tt acc}(\phi_{T+1}|x,y) through \phi_{t}
15:  until done
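A minimal PyTorch-style sketch of the inner loop above (our own illustration; the transfer loss and its arguments are hypothetical stand-ins for \mathcal{L}_{\tt tfr}):

```python
import torch

def multi_step_inner_loop(phi, theta, tfr_loss, K, N, eta=0.1):
    # Run T = K*N inner steps, computing the transfer-loss gradient
    # only once per window of N steps (Lines 7-12 of Algorithm 1).
    for _ in range(K):
        # One gradient (and one graph for second-order terms) per window.
        g = torch.autograd.grad(tfr_loss(phi, theta), phi, create_graph=True)[0]
        for _ in range(N):
            phi = phi - eta * g  # reuse g for N consecutive updates
    return phi

# Toy usage: a quadratic transfer loss coupling phi and theta.
theta = torch.randn(4, requires_grad=True)
phi = torch.randn(4, requires_grad=True)
tfr_loss = lambda p, t: ((p - t) ** 2).sum()
phi_T = multi_step_inner_loop(phi, theta, tfr_loss, K=1, N=2)
meta_grad = torch.autograd.grad(phi_T.sum(), theta)[0]  # backprop through dynamics
```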

Task. To validate the efficacy of our method, we apply it to a recently proposed meta-learning approach that learns meta-networks \theta controlling the transferability to task-networks \phi in transfer learning [16]. It transfers the knowledge acquired by a source model trained on a large source dataset to another task with a possibly different target model architecture and a relatively smaller target dataset, using the following compound loss with a weight \beta (we set \beta=0.5):

\mathcal{L}_{\tt total}(\phi|\mathcal{D}_{\tt tr},\theta)=\mathcal{L}_{\tt acc}(\phi|\mathcal{D}_{\tt tr})+\beta\mathcal{L}_{\tt tfr}(\phi|\mathcal{D}_{\tt tr},\theta)   (19)

where \mathcal{L}_{\tt acc} and \mathcal{L}_{\tt tfr} denote a classification loss and a transfer loss, respectively. \mathcal{D}_{\tt tr} denotes the examples from the training split of each corresponding dataset.

Meta-networks. The meta-networks are involved in \mathcal{L}_{\tt tfr} through \theta. For each pair of intermediate representations of the source and target models for the same target input, \mathcal{L}_{\tt tfr} measures a transfer loss, weighted by two nested groups of learnable parameters in the meta-networks for pair-wise (where) and channel-wise (what) transferability. Note that since knowledge is transferred across heterogeneous architectures, small networks are used to match the dimensions of the representations of the source and target models. For ResNets, we take the outputs of each stage (three stages of ResNet-32 and four of ResNet-34), while, for VGG-9, the inputs of the five down-scaling layers. Please refer to their work [16] and the code (https://github.com/alinlab/L2T-ww). Unless stated otherwise, we follow their experimental setting, including how validation splits (i.e., 10% of training data) are set aside for model selection.

Meta-algorithm. Algorithm 1 includes T=KN consecutive transfer losses, which is the part we estimate. Notice that, in the original paper [16], the authors formally validate the efficacy of meta-networks and the proposed bi-level scheme with two separate inner-objective functions (Lines 10 and 13 in Algorithm 1) in their Appendices C.1 and C.2, showing significant improvements on multiple tasks. They empirically find that T=2 is optimal (we also observe this in Appendix H), and we estimate it with K=1. We emphasize that the transfer losses involve the meta-networks controlling transferability, the feature-matching networks, and the target models, all of which are used to calculate the meta-gradient.

Table 1: Elapsed time for one epoch (seconds) using a Titan Xp. ± denotes the standard deviation over three models for Bird [32]. Overall, our method saves 7+ hours (35%) over 200 epochs with competitive accuracy.
Method Time/epoch (s) Total Ratio
Baseline [16] 379.7±30.9 21h 1.00
Multi-step 248.7±14.2 14h 0.65

Domains and models. Following the previous work [16], we validate on six datasets, where two datasets have small-sized inputs of 32x32 and the others have large-sized inputs of 224x224. For the small-sized inputs, TinyImageNet [33] is used as the source domain, while CIFAR-100 [34] and STL-10 [35] are target domains; ResNet-32 [36, 16] and VGG-9 [37, 38] are the source and target models, respectively. For the second part using the 224x224 inputs, ImageNet [39] is used as the source domain, while Caltech-UCSD Bird 200 (Bird) [32], MIT Indoor Scene Recognition (Indoor) [40], Stanford 40 Actions (Action) [41], and Stanford Dogs (Dog) [42] are target domains; ResNet-34 and ResNet-18 [36] are the source and target models, respectively. We resize and crop images as necessary (Appendix D).

Time complexity. Table 1 shows that our method significantly reduces training time, by over 7 hours (35%), and the same tendency can be found across multiple benchmarks (Appendix G). Although meta-learning greatly boosts the performance of transfer learning [16], the increased training time has been one of the drawbacks of meta-transfer learning. From this result, we argue that our method is a valuable approach in practice. For the results with T=1,3, please refer to Figure 4 in Appendix H.

Accuracy. To assess our multi-step estimation, along with momentum SGD in Table 2, we use Adam [43] for its adaptive gradient dynamics in Table 3 (see Appendix E for hyper-parameters). Multi-step estimation (ours) gives competitive results compared with the Baseline [16] with both optimizers across extensive benchmarks. We speculate that, empirically, our assumption in Section 3.1 on the convergence of the gradient difference does not cause any significant performance drop, and that our multi-step dynamical system works well with optimizers other than SGD, i.e., Adam.

Table 2: Classification accuracy (%) of the transferred models using SGD with momentum (inner-level). Scratch does not perform transfer learning. Baseline [16] is L2T-ww (all-to-all) in their paper. ± denotes the standard deviation over three models.
Source TinyImageNet ImageNet
Target CIFAR-100 STL-10 Bird Indoor Action Dog
Scratch 67.69±0.22 65.18±0.91 42.15±0.75 48.91±0.53 36.93±0.68 58.08±0.26
Baseline [16] 70.96±0.61 78.31±0.21 65.05±1.19 64.85±2.75 63.08±0.88 78.08±0.96
Multi-step 70.97±0.38 77.83±0.74 64.89±0.98 66.67±0.56 61.93±3.32 78.15±0.22
Table 3: Classification accuracy (%) of the transferred models using Adam [43] (inner-level).
Source TinyImageNet ImageNet
Target CIFAR-100 STL-10 Bird Indoor Action Dog
Scratch 67.69±0.22 65.18±0.91 42.15±0.75 48.91±0.53 36.93±0.68 58.08±0.26
Baseline (ours) 69.35±0.09 80.61±0.29 66.26±0.51 67.74±0.64 62.05±2.15 74.38±0.45
Multi-step 70.12±0.23 79.58±0.54 66.51±1.08 67.14±1.08 65.43±0.23 77.65±0.11

Other approximations. First-order MAML and Reptile cannot be applied to Algorithm 1 because they assume the parameter-initialization approach. Still, one can consider an implementation of first-order MAML where the meta-objective is replaced with \mathcal{L}_{\tt total} instead of \mathcal{L}_{\tt acc} so that \nabla_{\theta}\mathcal{L}_{\tt total}(\phi_{T+1})\neq 0 (note that \nabla_{\theta}\mathcal{L}_{\tt acc}(\phi_{T+1})=0). Notice that the gradient is not computed through \phi_{t<T+1} for the first-order approximation. This method severely underperforms (41.46±2.34) compared with multi-step estimation (64.89±0.98) on the Bird experiment. iMAML cannot be applied due to its assumption of costly inner-convergence, and sampling-based methods for the Hessian are less beneficial because of the trade-off between the number of samples, which relates to accuracy, and computational cost.

5.2 Few-shot learning

Table 4: Omniglot few-shot classification accuracy (%). The second section uses our implementation for a fair comparison. Multi-step (N) uses the same second-order derivatives for N consecutive inner updates. † denotes that the meta-gradient is computed using a Hessian-free method and proximal regularization [25]; the others do not apply this regularization. ± denotes the 95% confidence interval.
Method 5-way 1-shot 5-way 5-shot 20-way 1-shot 20-way 5-shot
MAML [11] 98.7±0.4 99.9±0.1 95.8±0.3 98.9±0.2
FO-MAML [11] 98.3±0.5 99.2±0.2 89.4±0.5 97.9±0.1
Reptile [22] 97.68±0.04 99.48±0.06 89.43±0.14 97.12±0.32
iMAML† [25] 99.50±0.26 99.74±0.11 96.18±0.36 99.14±0.10
MAML (ours) 99.10±0.77 99.66±0.44 95.80±0.81 98.73±0.37
FO-MAML (ours) 98.40±1.11 99.46±0.58 90.53±1.13 96.73±0.55
Multi-step (2) 99.10±0.81 99.66±0.43 95.60±0.78 98.70±0.38
Multi-step (4) 98.97±1.02 99.66±0.47 95.75±0.77 98.60±1.03
Multi-step (8) 98.70±1.03 99.50±0.52 94.24±0.95 97.70±0.51

Task. To further validate our method, we study the few-shot learning task, following the standard N-way K-shot protocol [11]. We use the Omniglot [44] and miniImageNet [5] datasets, which are popular in the literature. To focus on the analysis of computational complexity (time and memory), we compare with MAML [11], FO-MAML (first-order MAML) [11], Reptile [22], and iMAML [25], since these MAML variants are well-studied for complexity comparisons.

Experimental details. We follow the previous experimental setup [11] in terms of neural network architecture and data preprocessing. We first tried reproducing MAML [11] following all the settings described in the paper. However, we could not reproduce the reported numbers, especially for the 20-way 1-shot task on Omniglot. Notice that this issue is also reported in [45, 46]. Instead, we use SGD with momentum for the inner updates in lieu of vanilla SGD [11] to match the scores in the paper. We denote by Multi-step (N) our multi-step estimation method reusing \nabla_{\phi}\mathcal{L}_{t}(\phi_{t}) N times for the inner updates. To apply our method, we take eight inner updates and compare Multi-step (2), Multi-step (4), and Multi-step (8). The average accuracy and 95% confidence interval (CI) are computed by averaging over 600 test episodes. Please refer to Appendix F for further experimental details.

Omniglot performance. Table 4 shows the performance of the baselines and our methods on the Omniglot dataset. Ours significantly outperforms first-order methods like FO-MAML and Reptile, especially on 20-way 1-shot (more than a 5% accuracy gap), and is competitive with MAML and iMAML in all cases. The performance of iMAML is slightly better than ours, but that may be due to longer inner updates (iMAML uses 16 and 25 update steps for the 5-way and 20-way experiments, respectively), the usage of proximal regularization [25], or both. Note that the performance of Multi-step (N) decreases slightly as N increases, as expected, but it retains accuracy compared with MAML (ours). In most cases, the performance margin is under 0.4%, except for Multi-step (8) on the 20-way tasks, where it is at most 1.56%.

miniImageNet performance. Table 5 shows the results on miniImageNet. Similarly, Multi-step (N) retains performance when N\leq 4, but not for N=8 (a 2.59% mean accuracy drop on the 5-way 5-shot task), which shows a limit of our method depending on the number of estimation steps. Reptile is the best among all methods on miniImageNet but, interestingly, this approximation method already surpasses MAML. The gain may be attributed to the change of the inner optimizer to Adam [43] and other hyper-parameters. Finally, we emphasize that our method smoothly controls the amount of approximation using N, outperforming or remaining competitive with the other baselines.

Figure 2: Memory usage and training time of the 5-way 1-shot task for Omniglot. Best viewed in color. For a fair comparison, our code is based on the MAML implementation from https://github.com/dragen1860/MAML-Pytorch.
Table 5: Few-shot classification accuracy (%) for miniImageNet. The second section uses our implementations for a fair comparison. † uses a Hessian-free method and proximal regularization [25]. ± denotes the 95% confidence interval.
Method 5-way 1-shot 5-way 5-shot
MAML [11] 48.70±1.84 63.11±0.92
FO-MAML [11] 48.07±1.75 63.15±0.91
Reptile [22] 49.97±0.32 65.99±0.58
iMAML† [25] 49.30±1.88 -
MAML (ours) 48.49±0.83 63.43±0.71
FO-MAML (ours) 44.90±0.75 61.57±0.68
Multi-step (2) 48.68±0.82 63.70±0.73
Multi-step (4) 48.90±0.82 63.67±0.75
Multi-step (8) 46.85±0.79 60.84±0.71

Time and memory. We measure the time and memory usage of various algorithms for 5-way 1-shot on Omniglot, as shown in Figure 2 (for more results, see Figure 5 in the Appendix). Note that Multi-step (N) significantly reduces memory and time consumption as N increases. In particular, compared with MAML with eight update steps, Multi-step (4) is two times faster and uses significantly less memory with just a 0.1% accuracy drop. FO-MAML is an efficient method in both time and space complexities since it does not need to save the intermediate states of the inner steps or to calculate second-order derivatives through the update trajectories. However, it is likely to underperform compared with the others, as shown in Table 4. For the same reason, iMAML uses as little memory as the first-order methods; however, it suffers increased time in our experiments with fewer than eight inner steps due to the iterative computations for inverting a matrix to estimate the meta-gradient for each task.

6 Conclusions

We propose a simple yet robust method to estimate multi-step meta-learning, lazily updating the gradients of inner steps to significantly reduce time and memory costs. Compared with the other approximations, our method is more general since it does not assume shared meta- and task-parameter spaces, as parameter-initialization approaches do, and it does not rely on iterative methods to approximate meta-gradients or the Hessian matrix, which have a computational downside. For the challenging meta-transfer learning tasks on several datasets, our method significantly reduces meta-training time while maintaining competitive accuracies or even outperforming the baseline in some configurations. For the few-shot learning tasks, we analyze time and memory complexities in depth to highlight the gradual approximation controlled by the hyper-parameter N, reinforcing our results on meta-transfer learning. However, notice that our method may not work well in circumstances where the task-parameters are re-initialized irrespective of the meta-parameters for each task optimization [12], since, in that case, the assumption observed in Figure 1 may not hold.

Broader Impact

This work proposes an efficient meta-learning method that retains competitive performance, potentially saving computing resources for our environment. We believe the benefits or disadvantages of this research are not particularly limited to a certain group. Our method does not leverage biases in the data, as validated on various meta-transfer learning settings and the few-shot learning experiments on Omniglot and miniImageNet.

Acknowledgments

The authors would like to thank Dong-Yeon Cho for helpful feedback on polishing the Abstract and Introduction.

Author Contributions

J.-H.K. initiated the work as the corresponding author, proposed the methods, performed the experiments, and wrote and edited the draft of the paper. J.P. contributed to Lemma 1, Proposition 1, and Section 5.2, performed the experiments, and helped edit the paper. Y.C. performed the experiments and contributed to editing the paper.

References

  • Schmidhuber [1987] Jürgen Schmidhuber. Evolutionary principles in self-referential learning. PhD thesis, Technische Universität München, 1987.
  • Hinton and Plaut [1987] Geoffrey E Hinton and David C Plaut. Using fast weights to deblur old memories. In Proceedings of the ninth annual conference of the Cognitive Science Society, pages 177–186, 1987.
  • Hochreiter et al. [2001] Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87–94, 2001.
  • Andrychowicz et al. [2016] Marcin Andrychowicz, Misha Denil, Sergio Gómez Colmenarejo, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems 29, pages 3988–3996, 2016.
  • Ravi and Larochelle [2017] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In 5th International Conference on Learning Representations, 2017.
  • Mishra et al. [2018] Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. In International Conference on Learning Representations, 2018.
  • Vinyals et al. [2016] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in neural information processing systems, pages 3630–3638, 2016.
  • Snell et al. [2017] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in neural information processing systems, pages 4077–4087, 2017.
  • Sung et al. [2018] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Relation network for few-shot learning. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1199–1208, 2018.
  • Garcia and Bruna [2018] Victor Garcia and Joan Bruna. Few-shot learning with graph neural networks. In International Conference on Learning Representations, 2018.
  • Finn et al. [2017] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In 34th International Conference on Machine Learning, pages 1126–1135, 2017.
  • Franceschi et al. [2018] Luca Franceschi, Paolo Frasconi, Saverio Salzo, Riccardo Grazzi, and Massimiliano Pontil. Bilevel Programming for Hyperparameter Optimization and Meta-Learning. In 35th International Conference on Machine Learning, pages 2537–2548, 2018.
  • Franceschi et al. [2017] Luca Franceschi, Michele Donini, Paolo Frasconi, and Massimiliano Pontil. Forward and reverse gradient-based hyperparameter optimization. In 34th International Conference on Machine Learning, pages 1903–1913, 2017.
  • Rumelhart et al. [1986] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning Representations by Back-propagating Errors. Nature, 323(6088):533–536, 1986.
  • LeCun [1988] Yann LeCun. A theoretical framework for back-propagation. In Proceedings of the 1988 connectionist models summer school, pages 21–28, 1988.
  • Jang et al. [2019] Yunhun Jang, Hankook Lee, Sung Ju Hwang, and Jinwoo Shin. Learning What and Where to Transfer. In International Conference on Machine Learning, 2019.
  • Lorraine et al. [2020] Jonathan Lorraine, Paul Vicol, and David Duvenaud. Optimizing Millions of Hyperparameters by Implicit Differentiation. In The 23rd International Conference on Artificial Intelligence and Statistics, 2020.
  • Domke [2012] Justin Domke. Generic methods for optimization-based modeling. In Artificial Intelligence and Statistics, pages 318–326, 2012.
  • Maclaurin et al. [2015] Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-based hyperparameter optimization through reversible learning. In International Conference on Machine Learning, pages 2113–2122, 2015.
  • Pedregosa [2016] Fabian Pedregosa. Hyperparameter optimization with approximate gradient. In International Conference on Machine Learning, 2016.
  • Hospedales et al. [2020] Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. Meta Learning in Neural Networks : A Survey. arXiv preprint arXiv:2004.05439, 2020.
  • Nichol et al. [2018] Alex Nichol, Joshua Achiam, and John Schulman. On First-Order Meta-Learning Algorithms. arXiv preprint arXiv:1803.02999, 2018.
  • Raghu et al. [2020] Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol Vinyals. Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML. In International Conference on Learning Representations, 2020.
  • Bengio [2000] Yoshua Bengio. Gradient-Based Optimization of Hyper-parameters. Neural Computation, 12(8):1889–1900, 2000.
  • Rajeswaran et al. [2019] Aravind Rajeswaran, Chelsea Finn, Sham Kakade, and Sergey Levine. Meta-Learning with Implicit Gradients. In Advances in Neural Information Processing Systems 33, 2019.
  • Song et al. [2020] Xingyou Song, Wenbo Gao, Yuxiang Yang, Krzysztof Choromanski, Aldo Pacchiano, and Yunhao Tang. ES-MAML: Simple Hessian-Free Meta Learning. In International Conference on Learning Representations, 2020.
  • Ji et al. [2020] Kaiyi Ji, Junjie Yang, and Yingbin Liang. Multi-Step Model-Agnostic Meta-Learning: Convergence and Improved Algorithms. arXiv preprint arXiv:2002.07836, 2020.
  • Griewank [1993] Andreas Griewank. Some bounds on the complexity of gradients, jacobians, and hessians. In Complexity in numerical optimization, pages 128–162. World Scientific, 1993.
  • Goyal et al. [2017] Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour. arXiv preprint arXiv:1706.02677, 2017.
  • Fallah et al. [2019] Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms. arXiv preprint arXiv:1908.10400, 2019.
  • Martens [2010] James Martens. Deep learning via Hessian-free optimization. In 27th International Conference on Machine Learning, pages 735–742, 2010.
  • Wah et al. [2011] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. Technical report, California Institute of Technology, 2011.
  • CS231N [2019] Stanford CS231N. Tiny imagenet visual recognition challenge. https://tiny-imagenet.herokuapp.com, 2019.
  • Krizhevsky [2009] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
  • Coates et al. [2011] Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 215–223, 2011.
  • He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • Simonyan and Zisserman [2015] Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In International Conference on Learning Representations, 2015.
  • Srinivas and Fleuret [2018] Suraj Srinivas and François Fleuret. Knowledge transfer with jacobian matching. In 35th International Conference on Machine Learning, pages 7515–7523, 2018.
  • Deng et al. [2009] J. Deng, W. Dong, R. Socher, L. Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255, 2009.
  • Quattoni and Torralba [2009] Ariadna Quattoni and Antonio Torralba. Recognizing indoor scenes. In IEEE Conference on Computer Vision and Pattern Recognition, pages 413–420, 2009.
  • Yao et al. [2011] Bangpeng Yao, Xiaoye Jiang, Aditya Khosla, Andy Lai Lin, Leonidas Guibas, and Li Fei-Fei. Human action recognition by learning bases of action attributes and parts. In IEEE International Conference on Computer Vision, pages 1331–1338, 2011.
  • Khosla et al. [2011] Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In 1st Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, June 2011.
  • Kingma and Ba [2015] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations, 2015.
  • Lake et al. [2011] Brenden M. Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. One shot learning of simple visual concepts. In CogSci, 2011.
  • Antoniou et al. [2019] Antreas Antoniou, Harrison Edwards, and Amos Storkey. How to train your MAML. In International Conference on Learning Representations, 2019.
  • Jamal et al. [2018] Muhammad Abdullah Jamal, Guo-Jun Qi, and Mubarak Shah. Task-agnostic meta-learning for few-shot learning. CoRR, abs/1805.07722, 2018.
  • Russakovsky et al. [2014] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, and Others. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, pages 1–42, 2014.
  • Loshchilov and Hutter [2017] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. In 5th International Conference on Learning Representations, 2017.

A The gradient difference in the inner-step

For the measurement of the gradient difference between two steps in the inner loop, we perform transfer learning using meta-learning [16]. The task is to transfer the knowledge of a ResNet-34 pretrained on ImageNet [47] to a ResNet-18 for the CUB200 dataset [32]. We follow the experimental protocol in the paper. We measure the two-norm of the gradient difference of the matching loss in the inner step.

B Multi-step estimation of SGD with momentum

In Section 3.2, the dynamics of the estimated n-step optimization define the meta-gradient as follows:

\frac{\partial\mathcal{L}}{\partial\theta}\approx\sum_{k=1}^{K}\Lambda_{kn}^{\intercal}\frac{\partial\hat{\Phi}_{(k-1)n+1}^{n}}{\partial\theta},\qquad\Lambda_{(k-1)n}^{\intercal}=\Lambda_{kn}^{\intercal}\frac{\partial\hat{\Phi}_{(k-1)n+1}^{n}}{\partial\mathrm{s}_{(k-1)n+1}}   (20)

where \hat{\Phi}_{(k-1)n+1}^{n}:=\hat{\Phi}_{(k-1)n+1}^{n}(\mathrm{s}_{(k-1)n+1};\theta). For the SGD with momentum,

\frac{\partial\mathcal{L}}{\partial\theta}\approx\sum_{k=1}^{K}\Lambda_{kn}^{\intercal}\bigg(\begin{bmatrix}1-\eta\omega&-\eta\mu\\ \omega&\mu\end{bmatrix}\frac{\partial\hat{\Phi}_{(k-1)n+1}^{n-1}}{\partial\theta}+\begin{bmatrix}-\eta\\ 1\end{bmatrix}\frac{\partial\mathrm{g}_{(k-1)n+1}}{\partial\theta}\bigg)   (21)
=\sum_{k=1}^{K}\Lambda_{kn}^{\intercal}\bigg(\begin{bmatrix}1-\eta\omega&-\eta\mu\\ \omega&\mu\end{bmatrix}^{n-1}\frac{\partial\Phi_{(k-1)n+1}}{\partial\theta}+\sum_{i=0}^{n-2}\begin{bmatrix}1-\eta\omega&-\eta\mu\\ \omega&\mu\end{bmatrix}^{i}\begin{bmatrix}-\eta\\ 1\end{bmatrix}\frac{\partial\mathrm{g}_{(k-1)n+1}}{\partial\theta}\bigg)   (22)
=\sum_{k=1}^{K}\Lambda_{kn}^{\intercal}\bigg(\sum_{i=0}^{n-1}\begin{bmatrix}1-\eta\omega&-\eta\mu\\ \omega&\mu\end{bmatrix}^{i}\begin{bmatrix}-\eta\\ 1\end{bmatrix}\frac{\partial\mathrm{g}_{(k-1)n+1}}{\partial\theta}\bigg)   (23)

where the first equation uses the chain rule for the one estimated step, while the last terms in the parentheses account for the reused \mathrm{g}_{(k-1)n+1} in the n-1 steps. For the naïve SGD, i.e., \omega=\mu=0,

=\sum_{k=1}^{K}\Lambda_{kn}^{\intercal}\bigg(\begin{bmatrix}1&0\\ 0&0\end{bmatrix}^{n-1}\frac{\partial\Phi_{(k-1)n+1}}{\partial\theta}+\sum_{i=0}^{n-2}\begin{bmatrix}1&0\\ 0&0\end{bmatrix}^{i}\begin{bmatrix}-\eta\\ 1\end{bmatrix}\frac{\partial\mathrm{g}_{(k-1)n+1}}{\partial\theta}\bigg)   (24)
=\sum_{k=1}^{K}\Lambda_{kn}^{\intercal}\bigg(\begin{bmatrix}-\eta\\ 0\end{bmatrix}\frac{\partial\mathrm{g}_{(k-1)n+1}}{\partial\theta}+\begin{bmatrix}-(n-1)\eta\\ 1\end{bmatrix}\frac{\partial\mathrm{g}_{(k-1)n+1}}{\partial\theta}\bigg)   (25)
=\sum_{k=1}^{K}\lambda_{kn}^{\intercal}\bigg(-n\eta\frac{\partial\mathrm{g}_{(k-1)n+1}}{\partial\theta}\bigg)   (26)

where the last equation comes from the definition of \Lambda_{T}, and \lambda is defined as follows:

\lambda_{(k-1)n}^{\intercal}=\lambda_{kn}^{\intercal}\big(1-n\eta\frac{\partial\mathrm{g}_{(k-1)n+1}}{\partial\phi_{(k-1)n+1}}\big),\qquad\lambda_{Kn}^{\intercal}=\frac{\partial\mathcal{L}}{\partial\phi_{Kn+1}}.   (27)

C Proof that multi-step estimation is not one-step with different hyper-parameters

Recall the formulas defining SGD with momentum:

\mathrm{v}_{t+1}=\mu\mathrm{v}_{t}+\mathrm{g}_{t}+\omega\phi_{t}   (28)
\phi_{t+1}=\phi_{t}-\eta\mathrm{v}_{t+1}   (29)

with \mathrm{v}_{1}=0, \phi_{1}=\phi given, and \mathrm{g}_{t}=\nabla_{\phi_{t}}\mathcal{L}_{t}(\phi_{t}).

Lemma 1.

Assuming \mathrm{g}_{t}=\mathrm{g} for all t\geq 1, \mathrm{v}_{t} and \phi_{t} can be represented as linear combinations of \mathrm{g} and \phi. In other words, we may write

\mathrm{v}_{t}=b_{t}^{\mathrm{v}}\mathrm{g}+c_{t}^{\mathrm{v}}\phi   (30)
\phi_{t}=b_{t}^{\phi}\mathrm{g}+c_{t}^{\phi}\phi   (31)

where b_{t}^{\mathrm{v}}, c_{t}^{\mathrm{v}}, b_{t}^{\phi}, and c_{t}^{\phi} can be written in terms of \mu, \omega, and \eta. Furthermore, we have the following recurrence relations:

b_{t+1}^{\mathrm{v}}=\mu b_{t}^{\mathrm{v}}+1+\omega b_{t}^{\phi}   (32)
c_{t+1}^{\mathrm{v}}=\mu c_{t}^{\mathrm{v}}+\omega c_{t}^{\phi}   (33)
b_{t+1}^{\phi}=-\eta\mu b_{t}^{\mathrm{v}}-\eta+(1-\eta\omega)b_{t}^{\phi}   (34)
c_{t+1}^{\phi}=-\eta\mu c_{t}^{\mathrm{v}}+(1-\eta\omega)c_{t}^{\phi}   (35)

for t\geq 1.

Proof.

By induction using (28) and (29), the first assertion is clear. Note that

b_{t+1}^{\mathrm{v}}\mathrm{g}+c_{t+1}^{\mathrm{v}}\phi=\mathrm{v}_{t+1}   (36)
=\mu\mathrm{v}_{t}+\mathrm{g}+\omega\phi_{t}   (37)
=\mu(b_{t}^{\mathrm{v}}\mathrm{g}+c_{t}^{\mathrm{v}}\phi)+\mathrm{g}+\omega(b_{t}^{\phi}\mathrm{g}+c_{t}^{\phi}\phi)   (38)
=(\mu b_{t}^{\mathrm{v}}+1+\omega b_{t}^{\phi})\mathrm{g}+(\mu c_{t}^{\mathrm{v}}+\omega c_{t}^{\phi})\phi.   (39)

Since this should hold for any \mathrm{g} and \phi, their coefficients should coincide. Similarly,

b_{t+1}^{\phi}\mathrm{g}+c_{t+1}^{\phi}\phi=\phi_{t+1}   (40)
=\phi_{t}-\eta\mathrm{v}_{t+1}   (41)
=b_{t}^{\phi}\mathrm{g}+c_{t}^{\phi}\phi-\eta\{(\mu b_{t}^{\mathrm{v}}+1+\omega b_{t}^{\phi})\mathrm{g}+(\mu c_{t}^{\mathrm{v}}+\omega c_{t}^{\phi})\phi\}   (42)
=\{-\eta\mu b_{t}^{\mathrm{v}}-\eta+(1-\eta\omega)b_{t}^{\phi}\}\mathrm{g}+\{-\eta\mu c_{t}^{\mathrm{v}}+(1-\eta\omega)c_{t}^{\phi}\}\phi.   (43)

For the next proposition, we record the first few coefficients:

b_{2}^{\phi}=-\eta   (44)
c_{2}^{\phi}=1-\eta\omega   (45)
b_{3}^{\phi}=-\eta(\mu+2-\eta\omega)   (46)
c_{3}^{\phi}=-\eta\mu\omega+(1-\eta\omega)^{2}.   (47)

Proposition 1, restated. Multi-step estimation using SGD with momentum cannot be realized by one-step SGD with momentum consisting of different optimization hyper-parameters.

Proof.

Suppose not. We try to construct a 2-step estimation with one step by choosing different optimization hyper-parameters, and we claim that this is not possible.

Let us take the 2-step estimation, i.e., we set \mathrm{g}_{2k+2}=\mathrm{g}_{2k+1}=\nabla_{\phi_{2k+1}}\mathcal{L}_{2k+1}(\phi_{2k+1}) for k=0,1,\cdots. By judiciously choosing optimization hyper-parameters, we try to realize this with an equivalent one-step SGD with momentum. To be specific, we attempt to find \tilde{\mu},\tilde{\omega},\tilde{\eta} such that the sequence defined by

\tilde{\mathrm{v}}_{t+1}\leftarrow\tilde{\mu}\tilde{\mathrm{v}}_{t}+\tilde{\mathrm{g}}_{t}+\tilde{\omega}\tilde{\phi}_{t}   (48)
\tilde{\phi}_{t+1}\leftarrow\tilde{\phi}_{t}-\tilde{\eta}\tilde{\mathrm{v}}_{t+1}   (49)

with \tilde{\mathrm{v}}_{1}=0, \tilde{\phi}_{1}=\phi, and \tilde{\mathrm{g}}_{t}=\mathrm{g}_{2t-1} for t=1,2,\cdots, satisfies \tilde{\phi}_{t}=\phi_{2t-1} for all t\geq 1.

For simplicity, we consider the case \mathrm{g}_{t}=\mathrm{g}_{1}=\mathrm{g} for all t (for instance, we may have \mathcal{L}_{t}(\phi_{t})=c\phi_{t} for any constant c). Then we can apply Lemma 1 to both \mathrm{v}_{t},\phi_{t} and \tilde{\mathrm{v}}_{t},\tilde{\phi}_{t}, and we also borrow the notations b_{t}^{\mathrm{v}},c_{t}^{\mathrm{v}},b_{t}^{\phi},c_{t}^{\phi} and \tilde{b}_{t}^{\mathrm{v}},\tilde{c}_{t}^{\mathrm{v}},\tilde{b}_{t}^{\phi},\tilde{c}_{t}^{\phi}.

Note that from \tilde{b}_{2}^{\phi}\mathrm{g}+\tilde{c}_{2}^{\phi}\phi=\tilde{\phi}_{2}=\phi_{3}=b_{3}^{\phi}\mathrm{g}+c_{3}^{\phi}\phi, it follows that \tilde{b}_{2}^{\phi}=b_{3}^{\phi} and \tilde{c}_{2}^{\phi}=c_{3}^{\phi}. This implies that

-\tilde{\eta}=\tilde{b}_{2}^{\phi}=b_{3}^{\phi}=-\eta(\mu+2-\eta\omega)   (50)
1-\tilde{\eta}\tilde{\omega}=\tilde{c}_{2}^{\phi}=c_{3}^{\phi}=-\eta\mu\omega+(1-\eta\omega)^{2}.   (51)

Therefore we should have

\tilde{\eta}=\eta(\mu+2-\eta\omega)   (52)
\tilde{\omega}=\frac{-\eta\mu\omega+(1-\eta\omega)^{2}-1}{-\tilde{\eta}}=\frac{\mu\omega+2\omega-\eta\omega^{2}}{\mu+2-\eta\omega}.   (53)

Similarly, -\tilde{\eta}(\tilde{\mu}+2-\tilde{\eta}\tilde{\omega})=\tilde{b}_{3}^{\phi}=b_{5}^{\phi} implies that

\tilde{\mu}=-\frac{b_{5}^{\phi}}{\tilde{\eta}}+\tilde{\eta}\tilde{\omega}-2=-\frac{b_{5}^{\phi}}{\eta(\mu+2-\eta\omega)}+\eta\mu\omega-(1-\eta\omega)^{2}-1.   (54)

We have found all the necessary conditions for \tilde{\mu},\tilde{\omega},\tilde{\eta}, and from them we can calculate all \tilde{b}_{t}^{\mathrm{v}},\tilde{c}_{t}^{\mathrm{v}},\tilde{b}_{t}^{\phi},\tilde{c}_{t}^{\phi} given \mu,\omega,\eta. Assume \mu=0.9,\omega=0.0001,\eta=0.1. By the above arguments, we should have \tilde{b}_{4}^{\phi}=b_{7}^{\phi}, but we can check that \tilde{b}_{4}^{\phi}\approx-1.88 and b_{7}^{\phi}\approx-1.78, which is a contradiction. ∎
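A short script of our own reproduces this numerical check directly from the Lemma 1 recurrences:

```python
def phi_coeffs(mu, omega, eta, t_max):
    # b_t^phi from the recurrences (32)-(35), for t = 1..t_max.
    bv, cv, bp, cp = 0.0, 0.0, 0.0, 1.0  # t = 1: v_1 = 0, phi_1 = phi
    out = {1: bp}
    for t in range(1, t_max):
        bv, cv, bp, cp = (mu * bv + 1 + omega * bp,
                          mu * cv + omega * cp,
                          -eta * mu * bv - eta + (1 - eta * omega) * bp,
                          -eta * mu * cv + (1 - eta * omega) * cp)
        out[t + 1] = bp
    return out

mu, omega, eta = 0.9, 0.0001, 0.1
b = phi_coeffs(mu, omega, eta, 7)

# Tilde hyper-parameters from Equations (52)-(54).
eta_t = eta * (mu + 2 - eta * omega)
omega_t = (mu * omega + 2 * omega - eta * omega ** 2) / (mu + 2 - eta * omega)
mu_t = -b[5] / eta_t + eta_t * omega_t - 2
bt = phi_coeffs(mu_t, omega_t, eta_t, 4)

print(bt[4], b[7])  # ~ -1.88 vs ~ -1.78, hence tilde-b_4 != b_7
```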

D Pre-processing and augmentation in the transfer learning

The pre-processing and augmentation are consistent with the previous works [36, 16]. For the small-sized inputs (32x32), the images of TinyImageNet [33] and STL-10 [35] are resized to 32x32 (CIFAR-100 [34] is already 32x32). The augmentation transforms for TinyImageNet, CIFAR-100, and STL-10 are padding of 4, random cropping of 32x32, and random horizontal flipping. For testing, we crop the center 32x32. For the large-sized inputs (224x224), we resize the image to 256x256, randomly crop 224x224, and randomly flip horizontally. For testing, we crop the center 224x224. We normalize the RGB channels with per-channel means and standard deviations of (0.485, 0.456, 0.406) and (0.229, 0.224, 0.225), respectively.

Table 6: The number of examples in each split of the datasets. If there is no validation split, we randomly take a subset from the training split.
Dataset Training Validation Test Total
CIFAR-100 [34] 45,000 5,000 10,000 60,000
STL-10 [35]   4,500     500   8,000 13,000
Bird [32]   4,994 1,000 5,794 11,788
Indoor [40]   4,360 1,000 1,340   6,700
Action [41]   3,600     400 5,532   9,532
Dog [42] 10,800 1,200 8,580 20,580

E Hyper-parameters in the transfer learning

The meta-optimizer is Adam [43] with a fixed learning rate of 1e-3 (for the 32x32 experiments) or 1e-4 (for the 224x224 experiments) and a weight decay of 1e-4. The inner-level optimizer is SGD (or Adam), with a momentum of 0.9 (betas of (0.9, 0.999)) and a weight decay of 1e-4. The initial learning rate \eta_{0} is 1e-1 (1e-3) with cosine annealing [48] as follows:

\eta_{t}=\frac{\eta_{0}}{2}\Big(1+\cos\frac{t}{T}\pi\Big)   (55)

where the number of training epochs T is 200. For the 32x32 experiments, the mini-batch size is 128; for the 224x224 experiments, it is 64. Notice that, while splitting meta-train and meta-test sets is popular in few-shot learning, our meta-transfer learning uses the merged meta-data for the sake of efficacy, as in the previous work [16]. The other hyper-parameters are fixed between the two settings and across multiple benchmarks.
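As an illustration of the schedule (a sketch of our own, assuming the \eta_{0}-scaled form of Eqn. 55 above, not the paper's code):

```python
import math

def cosine_annealing(eta_0, t, T):
    # Learning rate at epoch t for a T-epoch cosine schedule (Eqn. 55).
    return 0.5 * eta_0 * (1.0 + math.cos(math.pi * t / T))

print(cosine_annealing(0.1, 0, 200))    # 0.1 at the first epoch
print(cosine_annealing(0.1, 100, 200))  # 0.05 halfway through
print(cosine_annealing(0.1, 200, 200))  # ~0.0 at the end
```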

F Hyper-parameters in the few-shot learning

We use two popular benchmark datasets, Omniglot [44] and miniImageNet [5]. The Omniglot dataset contains 20 images for each of 1623 characters from 50 different alphabets. We follow the same protocol as [11]. The miniImageNet dataset has 100 classes, each with 600 examples. We follow the splits of [5], which consist of 64 training classes, 16 validation classes, and 20 test classes.

In all experiments, we use Adam [43] with a learning rate of 0.001 for the meta-optimizer. For the inner optimizer, we use SGD with a momentum of 0.9 and a weight decay of 1e-4 for MAML and Multi-step, and vanilla SGD for FO-MAML. All models are trained and evaluated with 8 update steps and are trained for 60,000 iterations on a single NVIDIA Tesla V100 GPU. For Omniglot, we set the learning rate of the inner-level optimizer to 0.4 in all cases. The 5-way and 20-way models were trained with meta batch-sizes of 32 and 16 tasks, respectively. Since we do not use a validation split for Omniglot, the results are reported using the models from the final iteration. For miniImageNet, the learning rates of the inner-level optimizer were chosen by grid search on the validation split among {0.1, 0.04, 0.01, 0.004}. For MAML and Multi-step, the chosen learning rates were 0.01 for the 1-shot model and 0.04 for the 5-shot model. For FO-MAML, they were 0.1 and 0.04 for the 1-shot and 5-shot models, respectively. Both the 1-shot and 5-shot models were trained with a meta batch-size of 4 tasks.

G Training time for the transfer learning

In Figure 3, three runs show that our method significantly reduces training time across multiple datasets. Here, meta-transfer learning boosts the performance of transfer learning [16]; however, the increased training time is one of its major drawbacks. For this reason, we argue that our method is a valuable approach for large-scale applications. In these experiments, Adam [43] is used for inner optimization; in the main paper, the training time for Bird [32] is slightly different since it comes from a different set of experiments using momentum SGD as the inner optimizer.

Figure 3: Training time (seconds per epoch) for each dataset using a Titan Xp. The labeled percentage shows the ratio of time saved by our multi-step estimation. ± denotes the standard deviation over three randomly-initialized models.

H Additional experiment for the transfer learning

In this section, we examine 1) the empirical choice of T=2 for the number of inner steps of the transfer loss \mathcal{L}_{\tt tfr} in the previous work [16], 2) the competitive performance of our multi-step estimation over the number of inner steps, and 3) the limitation of changing the learning rate to estimate the multi-step dynamics.

The task is to transfer the knowledge of a ResNet-34 pretrained on ImageNet [47] to a ResNet-18 for the CUB200 dataset [32]. We follow the experimental protocol in the paper, except for using the Adam inner optimizer for better performance. We measure the mean accuracy of three randomly-initialized models and the training time (seconds) per epoch, with their standard deviations.

In Figure 4, we empirically confirm the choice of T=2 for its performance (Figure 4a), considering the trade-off with the training time depending on the number of inner steps (Figure 4b). The training time increases linearly with the number of inner steps, which is critical for high-dimensional tasks. Our multi-step estimation methods, {2,3}-step Est., perform competitively with or even outperform their counterparts, {2,3}-step. When we use a doubled learning rate, 1-step 2*lr, to estimate 2-step with a single step, the performance deteriorates compared with our 2-step Est., since it simply ignores the inner-level dynamics. With T=0, the performance plummets to 48.91±0.83, and the training time per epoch is 121.3±4.97 seconds.

Figure 4: Mean accuracy and training time of the transfer learning for the Bird dataset [32].
Figure 5: Memory usage and training time of the few-shot tasks for Omniglot. For a fair comparison, we use the MAML implementation from https://github.com/dragen1860/MAML-Pytorch. Please refer to Table 4 for the accuracy of iMAML [25].