
Model-based Adversarial
Meta-Reinforcement Learning

Zichuan Lin, Garrett Thomas, Guangwen Yang, Tengyu Ma
Tsinghua University; Stanford University
Abstract

Meta-reinforcement learning (meta-RL) aims to learn from multiple training tasks the ability to adapt efficiently to unseen test tasks. Despite the success, existing meta-RL algorithms are known to be sensitive to the task distribution shift. When the test task distribution is different from the training task distribution, the performance may degrade significantly. To address this issue, this paper proposes Model-based Adversarial Meta-Reinforcement Learning (AdMRL), where we aim to minimize the worst-case sub-optimality gap – the difference between the optimal return and the return that the algorithm achieves after adaptation – across all tasks in a family of tasks, with a model-based approach. We propose a minimax objective and optimize it by alternating between learning the dynamics model on a fixed task and finding the adversarial task for the current model – the task for which the policy induced by the model is maximally suboptimal. Assuming the family of tasks is parameterized, we derive a formula for the gradient of the suboptimality with respect to the task parameters via the implicit function theorem, and show how the gradient estimator can be efficiently implemented by the conjugate gradient method and a novel use of the REINFORCE estimator. We evaluate our approach on several continuous control benchmarks and demonstrate its efficacy in the worst-case performance over all tasks, the generalization power to out-of-distribution tasks, and in training- and test-time sample efficiency, over existing state-of-the-art meta-RL algorithms. Our code is available at https://github.com/LinZichuan/AdMRL.

1 Introduction

Deep reinforcement learning (Deep RL) methods can successfully solve difficult tasks such as Go (Silver et al., 2016), Atari games (Mnih et al., 2013), and robotic control (Levine et al., 2016), but often require a large number of interactions with the environment. Meta-reinforcement learning and multi-task reinforcement learning aim to improve sample efficiency by leveraging the shared structure within a family of tasks. For example, Model Agnostic Meta Learning (MAML) (Finn et al., 2017) learns, at training time, a shared policy initialization across tasks, from which it can adapt quickly to new tasks at test time with a small number of samples. The more recent work PEARL (Rakelly et al., 2019) learns latent representations of the tasks at training time, and then infers the representations of test tasks and adapts to them.

The existing meta-RL formulation and methods are largely distributional. The training tasks and the testing tasks are assumed to be drawn from the same distribution of tasks. Consequently, the existing methods are prone to the distribution shift issue, as shown in (Mehta et al., 2020) — when the tasks at test time are not drawn from the same distribution as in training, the performance degrades significantly. Figure 1 also confirms this issue for PEARL (Rakelly et al., 2019), a recent state-of-the-art meta-RL method, on the Ant2D-velocity tasks. PEARL can adapt to tasks with smaller goal velocities much better than tasks with larger goal velocities, in terms of the relative difference, or the sub-optimality gap, from the optimal policy of the corresponding task. (The same conclusion still holds if we measure the raw performance on the tasks, but that could be misleading because the tasks have varying optimal returns.) To address this issue, Mehta et al. (2020) propose an algorithm that iteratively re-defines the task distribution to focus more on the hard tasks.

Figure 1: The performance of PEARL (Rakelly et al., 2019) on Ant2D-velocity tasks. Each task is represented by the target velocity $(x,y)\in\mathbb{R}^{2}$ with which the ant should run. The training tasks are drawn uniformly from $[-3,3]^{2}$. The color of each cell shows the sub-optimality gap of the corresponding task, namely, the optimal return of that task minus the return of PEARL. Lighter means a smaller sub-optimality gap and is better. High-velocity tasks tend to perform worse, which implies that if the test task distribution shifts toward high-velocity tasks, the performance will degrade.

In this paper, we instead take a non-distributional perspective by formulating the adversarial meta-RL problem. Given a parametrized family of tasks, we aim to minimize the worst sub-optimality gap — the difference between the optimal return and the return the algorithm achieves after adaptation — across all tasks in the family at test time. This can be naturally formulated mathematically as a minimax problem (or a two-player game) where the maximum is over all the tasks and the minimum is over the parameters of the algorithm (e.g., the shared policy initialization or the shared dynamics).

Our approach is model-based. We learn a shared dynamics model across the tasks at training time, and at test time, given a new reward function, we train a policy on the learned dynamics. Model-based methods can significantly outperform model-free methods in sample efficiency even in the standard single-task setting (Luo et al., 2018; Dong et al., 2019; Janner et al., 2019; Wang and Ba, 2019; Chua et al., 2018; Buckman et al., 2018; Nagabandi et al., 2018c; Kurutach et al., 2018; Feinberg et al., 2018; Rajeswaran et al., 2016, 2020; Wang et al., 2019), and are particularly suitable for meta-RL settings where the optimal policies for the tasks are very different, but the underlying dynamics is shared (Landolfi et al., 2019). We apply natural adversarial training (Madry et al., 2017) at the level of tasks — we alternate between minimizing the sub-optimality gap over the parameterized dynamics and maximizing it over the parameterized tasks.

The main technical challenge is to optimize over the task parameters in a sample-efficient way. The sub-optimality gap objective depends on the task parameters in a non-trivial way because the algorithm uses the task parameters iteratively in its adaptation phase at test time. The naive attempt to back-propagate through the sequential updates of the adaptation algorithm is computationally costly, especially because adaptation in the model-based approach is computationally expensive (despite being sample-efficient). Inspired by recent work on learning equilibrium models in supervised learning (Bai et al., 2019), we derive an efficient formula for the gradient w.r.t. the task parameters via the implicit function theorem. The gradient involves an inverse-Hessian-vector product, which can be efficiently computed by conjugate gradients and the REINFORCE estimator (Williams, 1992).

In summary, our contributions are:

  1. We propose a minimax formulation of model-based adversarial meta-reinforcement learning (AdMRL, pronounced like “admiral”) with an adversarial training algorithm to address the distribution shift problem.

  2. We derive an estimator of the gradient with respect to the task parameters, and show how it can be implemented efficiently in both samples and time.

  3. Our approach significantly outperforms the state-of-the-art meta-RL algorithms in the worst-case performance over all tasks, the generalization power to out-of-distribution tasks, and in training- and test-time sample efficiency on a set of continuous control benchmarks.

2 Related Work

The idea of learning to learn was established in a series of previous works (Utgoff, 1986; Schmidhuber, 1987; Thrun, 1996; Thrun and Pratt, 2012). These papers propose to build a base learner for each task and train a meta-learner that learns the shared structure of the base learners and outputs a base learner for a new task. Recent literature mainly instantiates this idea in two directions: (1) learning a meta-learner to predict the base learner (Wang et al., 2016; Snell et al., 2017); (2) learning to update the base learner (Hochreiter et al., 2001; Bengio et al., 1992; Finn et al., 2017). The goal of meta-reinforcement learning is to find a policy that can quickly adapt to new tasks by collecting only a few trajectories. In MAML (Finn et al., 2017), the shared structure learned at training time is a set of policy parameters. Some recent meta-RL algorithms propose to condition the policy on a latent representation of the task (Rakelly et al., 2019; Zintgraf et al., 2019; Wang et al., 2020; Humplik et al., 2019; Lan et al., 2019). Some prior works (Duan et al., 2016; Wang et al., 2016) represent the reinforcement learning algorithm as a recurrent network. GMPS (Mendonca et al., 2019) improves the sample efficiency during meta-training by consolidating the solutions of individual off-policy learners into a single meta-learner. VariBAD (Schulze et al.) meta-learns to perform approximate inference on an unknown task and incorporates task uncertainty directly during action selection. ProMP (Rothfuss et al., 2018) improves the sample efficiency during meta-training by overcoming the issue of poor credit assignment. Some algorithms (Landolfi et al., 2019; Sæmundsson et al., 2018; Nagabandi et al., 2018a, b) also propose to share a dynamical model across tasks during meta-training and perform model-based adaptation on new tasks. These approaches are still distributional and suffer from distribution shift. We adversarially choose training tasks to address the distribution shift issue and show in the experiment section that we outperform the same algorithm trained on randomly-chosen tasks. Unsupervised meta-RL (Gupta et al., 2018) constructs a task proposal mechanism based on a mutual information objective to automatically acquire an environment-specific learning procedure. MetaGenRL (Kirsch et al., 2019) proposes to meta-learn objective functions that generalize to different environments. MQL (Fakoor et al., 2019) proposes ways to reuse data from the meta-training phase during meta-adaptation by employing propensity score estimation. Some recent works also attempt to mitigate the distribution shift issue. Meta-ADR (Mehta et al., 2020) introduces a curriculum for meta-training tasks. MIER (Mendonca et al., 2020) meta-learns a model representation and relabels meta-training experience during adaptation. Different from the methods above, our method addresses the distribution shift issue at the task level by taking a non-distributional perspective and meta-training on adversarial tasks.

Model-based approaches have long been recognized as a promising avenue for reducing the sample complexity of RL algorithms. One popular branch in MBRL is Dyna-style algorithms (Sutton, 1990), which iterate between collecting samples for model updates and improving the policy with virtual data generated by the learned model (Luo et al., 2018; Janner et al., 2019; Wang and Ba, 2019; Chua et al., 2018; Buckman et al., 2018; Kurutach et al., 2018; Feinberg et al., 2018; Rajeswaran et al., 2020). Another branch of MBRL produces policies based on model predictive control (MPC), where at each time step the model is used to perform planning over a short horizon to select actions (Chua et al., 2018; Nagabandi et al., 2018c; Dong et al., 2019; Wang and Ba, 2019).

Our approach is also related to active learning (Atlas et al., 1990; Lewis and Gale, 1994; Silberman, 1996; Settles, 2009), which aims to find the most useful or difficult data points, whereas we operate in task space. Our method is also related to curiosity-driven learning (Pathak et al., 2017; Burda et al., 2018a, b), which defines intrinsic curiosity rewards to encourage the agent to explore in an environment. Instead of exploring in state space, our method “explores” in task space. The work of Jin et al. (2020) aims to compute near-optimal policies for any reward function via sufficient exploration, while we search for the reward function with the worst sub-optimality gap.

3 Preliminaries

Reinforcement Learning. Consider a Markov Decision Process (MDP) with state space $\mathcal{S}$ and action space $\mathcal{A}$. A policy $\pi(\cdot|s)$ specifies the conditional distribution over the action space given a state $s$. The transition dynamics $T(\cdot|s,a)$ specifies the conditional distribution of the next state given the current state $s$ and action $a$. We will use $T^{\star}$ to denote the unknown true transition dynamics in this paper. A reward function $r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$ defines the reward at each step. We also consider a discount factor $\gamma\in[0,1)$ and an initial state distribution $p_{0}$. We define the value function $V^{\pi,T}:\mathcal{S}\rightarrow\mathbb{R}$ at state $s$ for a policy $\pi$ on dynamics $T$ as $V^{\pi,T}(s)=\mathbb{E}_{a_{t},s_{t}\sim\pi,T}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\mid s_{0}=s\right]$. The goal of RL is to seek a policy that maximizes the expected return $\eta(\pi,T):=\mathbb{E}_{s_{0}\sim p_{0}}\left[V^{\pi,T}(s_{0})\right]$.
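As a concrete illustration of these definitions, the following is a minimal Monte-Carlo sketch of estimating the expected return $\eta(\pi,T)$; the gym-style environment interface and the `policy` callable are assumptions for illustration only, not part of the paper's implementation.

```python
import numpy as np

def estimate_return(env, policy, gamma=0.99, n_rollouts=10, horizon=1000):
    """Monte-Carlo estimate of eta(pi, T) = E[sum_t gamma^t r(s_t, a_t)]."""
    returns = []
    for _ in range(n_rollouts):
        s = env.reset()
        total, discount = 0.0, 1.0
        for _ in range(horizon):
            a = policy(s)                     # a ~ pi(.|s)
            s, r, done, _ = env.step(a)       # classic gym-style step
            total += discount * r             # accumulate gamma^t r(s_t, a_t)
            discount *= gamma
            if done:
                break
        returns.append(total)
    return float(np.mean(returns))            # estimate of eta(pi, T)
```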

Meta-Reinforcement Learning

In this paper, we consider a family of tasks parameterized by $\Psi\subseteq\mathbb{R}^{k}$ and a family of policies parameterized by $\Theta\subseteq\mathbb{R}^{p}$. The family of tasks is a family of MDPs $\{(\mathcal{S},\mathcal{A},T,r_{\psi},p_{0},\gamma)\}_{\psi\in\Psi}$ which all share the same dynamics but differ in the reward function. We denote the value function of a policy $\pi$ on a task with reward $r_{\psi}$ and dynamics $T$ by $V^{\pi,T}_{\psi}$, and denote the expected return for each task and dynamics by $\eta(\pi,T,\psi)=\mathbb{E}[V^{\pi,T}_{\psi}(s_{0})]$. For simplicity, we will use the shorthand $\eta(\theta,T,\psi):=\eta(\pi_{\theta},T,\psi)$.

Meta-reinforcement learning leverages a shared structure across tasks. (The precise nature of this structure is algorithm-dependent.) Let $\Phi\subseteq\mathbb{R}^{d}$ denote the set of all such structures. A meta-RL training algorithm seeks to find a shared structure $\phi\in\Phi$, which is subsequently used by an adaptation algorithm $A:\Phi\times\Psi\rightarrow\Theta$ to learn quickly in new tasks. In this paper, the shared structure $\phi$ is the learned dynamics (more below).

Model-based Reinforcement Learning

In model-based reinforcement learning (MBRL), we parameterize the transition dynamics of the model $\widehat{T}_{\phi}$ (as a neural network) and learn the parameters $\phi$ so that it approximates the true transition dynamics $T^{\star}$. In this paper, we use Stochastic Lower Bound Optimization (SLBO) (Luo et al., 2018), an MBRL algorithm with theoretical guarantees of monotonic improvement. SLBO interleaves policy improvement and model fitting.

4 Model-based Adversarial Meta-Reinforcement Learning

4.1 Formulation

We consider a family of tasks whose reward functions $r_{\psi}(s,a)$ are parameterized by some parameters $\psi$, and assume that $r_{\psi}(s,a)$ is differentiable w.r.t. $\psi$ for every $s,a$. We assume the reward function parameterization $r_{\psi}(\cdot,\cdot)$ is known throughout the paper. (It is challenging to formulate the worst-case performance without knowing a reward family, e.g., when we only have access to randomly sampled tasks from a task distribution.) Recall that the total return of policy $\pi_{\theta}$ on dynamics $T$ and task $\psi$ is denoted by $\eta(\theta,T,\psi)=\mathbb{E}_{\tau\sim\pi_{\theta},T}\left[R_{\psi}(\tau)\right]$, where $R_{\psi}(\tau)$ is the return of the trajectory $\tau$ under reward function $r_{\psi}$. As shorthand, we define $\eta^{\star}(\theta,\psi)=\eta(\theta,T^{\star},\psi)$ as the return in the real environment on task $\psi$, and $\hat{\eta}_{\phi}(\theta,\psi)=\eta(\theta,\widehat{T}_{\phi},\psi)$ as the return under the virtual dynamics $\widehat{T}_{\phi}$ on task $\psi$.

Given a learned dynamics $\widehat{T}_{\phi}$ and a test task $\psi$, we can perform zero-shot model-based adaptation by computing the best policy for task $\psi$ under the dynamics $\widehat{T}_{\phi}$, namely, $\arg\max_{\theta}\hat{\eta}_{\phi}(\theta,\psi)$. Let $\mathcal{L}(\phi,\psi)$, formally defined in Eq. (1) below, be the sub-optimality gap of the $\widehat{T}_{\phi}$-optimal policy on task $\psi$, i.e., the difference between the performance of the best policy for task $\psi$ and the performance of the policy that is best for $\psi$ according to the model $\widehat{T}_{\phi}$. Our overall aim is to find the best shared dynamics $\widehat{T}_{\phi}$, such that the worst-case sub-optimality gap $\mathcal{L}(\phi,\psi)$ is minimized. This can be formally written as a minimax problem:

$$\min_{\phi}\max_{\psi}\underbrace{\left[\max_{\theta}\eta^{\star}(\theta,\psi)-\eta^{\star}\big(\arg\max_{\theta}\hat{\eta}_{\phi}(\theta,\psi),\psi\big)\right]}_{\triangleq\,\mathcal{L}(\phi,\psi)}. \tag{1}$$

In the inner step (max over $\psi$), we search for the task $\psi$ that is hardest for our current model $\widehat{T}_{\phi}$, in the sense that the policy which is optimal under dynamics $\widehat{T}_{\phi}$ is most suboptimal in the real MDP. In the outer step (min over $\widehat{T}_{\phi}$), we optimize for a model with low worst-case suboptimality. We remark that, in general, other definitions of the sub-optimality gap, e.g., the ratio between the optimal return and the achieved return, may also be used to formulate the problem.

Algorithmically, by training on the hardest task found in the inner step, we hope to obtain data that is most informative for correcting the model’s inaccuracies.
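To make the objective concrete, the sketch below spells out how the sub-optimality gap $\mathcal{L}(\phi,\psi)$ in Eq. (1) would be evaluated for a single task. All three callables are illustrative assumptions, not the released implementation: they stand in for training a policy under the learned model, training a policy in the real environment, and estimating real returns by rollouts.

```python
def suboptimality_gap(psi, train_on_model, train_on_real, real_return):
    """Evaluate L(phi, psi) from Eq. (1) for one task psi.
    train_on_model(psi): returns argmax_theta eta_hat(theta, psi) (policy under the model)
    train_on_real(psi):  (approximately) returns argmax_theta eta*(theta, psi)
    real_return(theta, psi): Monte-Carlo estimate of eta*(theta, psi)."""
    theta_hat = train_on_model(psi)    # best policy under the learned dynamics
    theta_star = train_on_real(psi)    # (approximately) best policy in the real MDP
    return real_return(theta_star, psi) - real_return(theta_hat, psi)
```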

4.2 Computing Derivatives with respect to Task Parameters

To optimize Eq. (1), we will alternate between the min and max using gradient descent and ascent, respectively. Fixing the task $\psi$, minimizing $\mathcal{L}(\phi,\psi)$ reduces to standard MBRL.

On the other hand, for a fixed model $\widehat{T}_{\phi}$, the inner maximization over the task parameter $\psi$ is non-trivial, and is the focus of this subsection. To perform gradient-based optimization, we need to estimate $\frac{\partial\mathcal{L}}{\partial\psi}$. Let us define $\theta^{\star}=\arg\max_{\theta}\eta^{\star}(\theta,\psi)$ (the optimal policy under the true dynamics and task $\psi$) and $\hat{\theta}=\arg\max_{\theta}\hat{\eta}_{\phi}(\theta,\psi)$ (the optimal policy under the virtual dynamics and task $\psi$). We assume there is a unique $\hat{\theta}$ for each $\psi$. Then,

$$\frac{\partial\mathcal{L}}{\partial\psi}=\left.\frac{\partial\eta^{\star}}{\partial\psi}\right|_{\theta^{\star}}-\left(\frac{\partial\hat{\theta}^{\top}}{\partial\psi}\left.\frac{\partial\eta^{\star}}{\partial\theta}\right|_{\hat{\theta}}+\left.\frac{\partial\eta^{\star}}{\partial\psi}\right|_{\hat{\theta}}\right). \tag{2}$$

Note that the first term comes from the usual (sub)gradient rule for pointwise maxima, and the second term comes from the chain rule. Differentiation w.r.t. $\psi$ commutes with expectation over $\tau$, so

$$\frac{\partial\eta^{\star}}{\partial\psi}=\mathbb{E}_{\tau\sim\pi_{\theta},T^{\star}}\left[\frac{\partial R_{\psi}(\tau)}{\partial\psi}\right]=\mathbb{E}_{\tau\sim\pi_{\theta},T^{\star}}\left[\sum_{t=0}^{\infty}\gamma^{t}\frac{\partial r_{\psi}(s_{t},a_{t})}{\partial\psi}\right]. \tag{3}$$

Thus the first and last terms of the gradient in Eq. (2) can be estimated by simply rolling out $\pi_{\theta^{\star}}$ and $\pi_{\hat{\theta}}$ and differentiating the sampled rewards. Let $A^{\pi_{\hat{\theta}}}(s_{t},a_{t})$ be the advantage function. Then, the term $\left.\frac{\partial\eta^{\star}}{\partial\theta}\right|_{\hat{\theta}}$ in Eq. (2) can be computed by the standard policy gradient

$$\left.\frac{\partial\eta^{\star}}{\partial\theta}\right|_{\hat{\theta}}=\mathbb{E}_{\tau\sim\pi_{\hat{\theta}},T^{\star}}\left[\sum_{t=0}^{\infty}\gamma^{t}\left.\frac{\partial\log\pi_{\theta}(a_{t}|s_{t})}{\partial\theta}\right|_{\hat{\theta}}A^{\pi_{\hat{\theta}}}(s_{t},a_{t})\right]. \tag{4}$$
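As a concrete illustration, the snippet below sketches the Monte-Carlo estimator of Eq. (3): roll out the policy in the real environment, accumulate the discounted reward as a differentiable function of $\psi$, and differentiate. The trajectory format and `reward_fn` interface are assumptions for illustration, not the released implementation.

```python
import torch

def grad_return_wrt_psi(trajectories, reward_fn, psi, gamma=0.99):
    """Monte-Carlo estimate of d eta* / d psi (Eq. (3)).
    trajectories: lists of (state, action) tensors sampled from the real environment.
    reward_fn(s, a, psi): torch-differentiable implementation of r_psi.
    psi: torch tensor with requires_grad=True."""
    total = torch.zeros(())
    for traj in trajectories:
        # discounted return R_psi(tau), differentiable w.r.t. psi
        ret = sum(gamma ** t * reward_fn(s, a, psi) for t, (s, a) in enumerate(traj))
        total = total + ret
    avg_return = total / len(trajectories)          # estimate of eta*(theta, psi)
    return torch.autograd.grad(avg_return, psi)[0]  # d eta* / d psi
```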

The remaining complicated term in Eq. (2) is $\frac{\partial\hat{\theta}^{\top}}{\partial\psi}$. We compute it using the implicit function theorem (Wikipedia contributors, 2020) (see Section A.1 for details):

$$\frac{\partial\hat{\theta}}{\partial\psi^{\top}}=-\left(\left.\frac{\partial^{2}\hat{\eta}_{\phi}}{\partial\theta\,\partial\theta^{\top}}\right|_{\hat{\theta}}\right)^{-1}\left.\frac{\partial^{2}\hat{\eta}_{\phi}}{\partial\theta\,\partial\psi^{\top}}\right|_{\hat{\theta}}. \tag{5}$$

The mixed-derivative term in the equation above can be computed by differentiating the policy gradient:

$$\left.\frac{\partial^{2}\hat{\eta}_{\phi}}{\partial\theta\,\partial\psi^{\top}}\right|_{\hat{\theta}}=\mathbb{E}_{\tau\sim\pi_{\hat{\theta}},\widehat{T}_{\phi}}\left[\sum_{t=0}^{\infty}\gamma^{t}\left.\frac{\partial\log\pi_{\theta}(a_{t}|s_{t})}{\partial\theta}\right|_{\hat{\theta}}\frac{\partial A^{\pi_{\hat{\theta}}}(s_{t},a_{t})}{\partial\psi^{\top}}\right]. \tag{6}$$

An estimator for the Hessian term in Eq. (5) can be derived by the REINFORCE estimator (Sutton et al., 2000), or the log derivative trick (see Section A.2 for a detailed derivation),

$$\frac{\partial^{2}\hat{\eta}_{\phi}}{\partial\theta\,\partial\theta^{\top}}=\mathbb{E}_{\tau\sim\pi_{\theta},\widehat{T}_{\phi}}\left[\left(\frac{\partial\log\pi_{\theta}(\tau)}{\partial\theta}\frac{\partial\log\pi_{\theta}(\tau)}{\partial\theta^{\top}}+\frac{\partial^{2}\log\pi_{\theta}(\tau)}{\partial\theta\,\partial\theta^{\top}}\right)R_{\psi}(\tau)\right]. \tag{7}$$

By computing the gradient estimator via the implicit function theorem, we do not need to back-propagate through the sequential updates of our adaptation algorithm, and we can thus estimate the gradient w.r.t. the task parameters in a sample-efficient and computationally tractable way.
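Putting the pieces together, the sketch below shows how the task gradient of Eq. (2) could be assembled from the estimators in Eqs. (3)-(7), solving Eq. (5) with a conjugate-gradient routine instead of forming or inverting the Hessian. All argument names, as well as `hvp` and `cg_solve`, are illustrative assumptions.

```python
import numpy as np

def task_gradient(d_eta_dpsi_at_star, d_eta_dpsi_at_hat,
                  d_eta_dtheta_at_hat, mixed_hessian, hvp, cg_solve):
    """Assemble dL/dpsi (Eq. (2)).
    mixed_hessian: d^2 eta_hat / (d theta d psi^T), shape [d_theta, d_psi]  (Eq. (6))
    hvp(v): Hessian-vector product (d^2 eta_hat / d theta d theta^T) @ v    (Eq. (7))
    cg_solve(hvp, b): approximately solves H x = b with conjugate gradients."""
    # d theta_hat / d psi^T = -H^{-1} M  (Eq. (5)), applied column by column of M
    d_theta_dpsi = -np.stack(
        [cg_solve(hvp, mixed_hessian[:, j]) for j in range(mixed_hessian.shape[1])],
        axis=1)                                            # shape [d_theta, d_psi]
    # chain-rule term of Eq. (2): (d theta_hat^T / d psi) d eta* / d theta |_{theta_hat}
    chain_term = d_theta_dpsi.T @ d_eta_dtheta_at_hat      # shape [d_psi]
    return d_eta_dpsi_at_star - (chain_term + d_eta_dpsi_at_hat)
```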

4.3 AdMRL: a Practical Implementation

Algorithm 1 gives pseudo-code for our algorithm AdMRL, which alternates between updating the dynamics $\widehat{T}_{\phi}$ and the task parameters $\psi$. Let $\textup{VirtualTraining}(\theta,\phi,\psi,\mathcal{D},n)$ be shorthand for the procedure of learning a dynamics $\phi$ using data $\mathcal{D}$ and then optimizing a policy from initialization $\theta$ on task $\psi$ under dynamics $\phi$ with $n$ virtual steps. Here the parameterized arguments of the procedure are referred to by their parameters (so that the resulting policy and dynamics are written in $\theta$ and $\phi$). For each training task parameterized by $\psi$, we first initialize the policy randomly, and optimize the policy on the learned dynamics until convergence (Line 4), which we refer to as zero-shot adaptation. We then use the obtained policy $\pi_{\hat{\theta}}$ to collect data from the real environment and perform the MBRL algorithm SLBO (Luo et al., 2018) by interleaving collecting samples, updating models and optimizing policies (Line 5). After collecting samples and performing SLBO updates, we obtain a nearly optimal policy $\pi_{\theta^{\star}}$.

Then we update the task parameter by gradient ascent. With the policies $\pi_{\hat{\theta}}$ and $\pi_{\theta^{\star}}$, we compute each gradient component (Lines 9, 10), obtain the gradient w.r.t. the task parameters (Line 11), and perform gradient ascent on the task parameter $\psi$ (Line 12). This completes one outer iteration. Note that for the first training task, we skip the zero-shot adaptation phase and only perform SLBO updates because the dynamical model is untrained. Moreover, because the zero-shot adaptation step is not done, we cannot perform the task update either, because the task derivative depends on $\pi_{\hat{\theta}}$, the result of zero-shot adaptation (Line 8).

Algorithm 1 AdMRL: Model-based Adversarial Meta-Reinforcement Learning
1: Initialize model parameter $\phi$, task parameter $\psi$, and dataset $\mathcal{D}\leftarrow\emptyset$
2: for $n_{tasks}$ iterations do
3:     Initialize policy parameter $\theta$ randomly
4:     If $\mathcal{D}\neq\emptyset$, $\hat{\theta}=\textup{VirtualTraining}(\theta,\phi,\psi,\mathcal{D},n_{zeroshot})$ ▷ Zero-shot adaptation
5:     for $n_{slbo}$ iterations do ▷ SLBO
6:         $\mathcal{D}\leftarrow\mathcal{D}\cup\{n_{collect}$ samples collected on the real environment $T^{\star}$ using $\pi_{\theta}$ with noise$\}$
7:         $\theta^{\star}=\textup{VirtualTraining}(\theta,\phi,\psi,\mathcal{D},n_{inner})$
8:     if first task then randomly re-initialize $\psi$; otherwise
9:         Compute gradients $\frac{\partial\eta^{\star}}{\partial\psi}|_{\theta^{\star}}$ and $\frac{\partial\eta^{\star}}{\partial\psi}|_{\hat{\theta}}$ using Eq. (3); compute $\frac{\partial\eta^{\star}}{\partial\theta}|_{\hat{\theta}}$ using Eq. (4); compute $\frac{\partial^{2}\hat{\eta}_{\phi}}{\partial\theta\partial\psi^{\top}}|_{\hat{\theta}}$ using Eq. (6); compute $\frac{\partial^{2}\hat{\eta}_{\phi}}{\partial\theta\partial\theta^{\top}}$ using Eq. (7)
10:        Efficiently compute $\frac{\partial\hat{\theta}}{\partial\psi^{\top}}$ using the conjugate gradient method (see Section 4.3)
11:        Compute the final gradient $\frac{\partial\mathcal{L}}{\partial\psi}=\frac{\partial\eta^{\star}}{\partial\psi}|_{\theta^{\star}}-\big(\frac{\partial\hat{\theta}^{\top}}{\partial\psi}\frac{\partial\eta^{\star}}{\partial\theta}|_{\hat{\theta}}+\frac{\partial\eta^{\star}}{\partial\psi}|_{\hat{\theta}}\big)$
12:        Perform projected gradient ascent on the task parameters: $\psi\leftarrow\Pi_{\Psi}(\psi+\alpha\frac{\partial\mathcal{L}}{\partial\psi})$

Implementation Details. Computing Eq. (5) for each dimension of $\psi$ involves an inverse-Hessian-vector product. We note that Eq. (5) can be computed by approximately solving the equation $Ax=b$, where $A$ is $\left.\frac{\partial^{2}\hat{\eta}}{\partial\theta\partial\theta^{\top}}\right|_{\hat{\theta}}$ and $b$ is $\left.\frac{\partial^{2}\hat{\eta}}{\partial\theta\partial\psi^{\top}}\right|_{\hat{\theta}}$. However, in large-scale problems (e.g., when $\theta$ has thousands of dimensions), it is costly (in computation and memory) to form the full matrix $A$. Instead, the conjugate gradient method provides a way to approximately solve $Ax=b$ without forming the full matrix $A$, provided we can compute the mapping $x\mapsto Ax$. The corresponding Hessian-vector product can be computed as efficiently as evaluating the loss function (Pearlmutter, 1994), up to a universal multiplicative factor. Please refer to Appendix B for a concrete implementation. In practice, we found that the matrix $A$ is generally not positive-definite, which hinders the convergence of the conjugate gradient method. Therefore, we instead solve the equivalent equation $A^{\top}Ax=A^{\top}b$.
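For concreteness, a minimal textbook conjugate-gradient sketch that works only through the matrix-vector product is shown below; the routine name and defaults are illustrative, not the released implementation.

```python
import numpy as np

def conjugate_gradient(matvec, b, iters=10, tol=1e-10):
    """Approximately solve A x = b given only matvec(v) = A v
    (here A is the policy Hessian of Eq. (7)), never forming A."""
    x = np.zeros_like(b)
    r = b.copy()                      # residual b - A x  (x = 0 initially)
    p = r.copy()
    rs_old = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs_old / (p @ Ap + 1e-12)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# When A is not positive-definite, solve the normal equations A^T A x = A^T b
# with the same routine; since the Hessian A is symmetric, A^T v = A v:
# x = conjugate_gradient(lambda v: matvec(matvec(v)), matvec(b))
```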

In terms of time complexity, computing the gradient w.r.t. the task parameters is quite efficient compared to the other steps. On one hand, in each task iteration, the MBRL algorithm needs to collect samples for dynamical model fitting, and then roll out $m$ virtual samples using the learned dynamical model for policy updates to solve the task, which takes $O(m(d_{\phi}+d_{\theta}))$ time, where $d_{\phi}$ and $d_{\theta}$ denote the dimensionality of $\phi$ and $\theta$. On the other hand, we only need to update the task parameter once per task iteration, which takes $O(d_{\psi}d_{\theta})$ time using the conjugate gradient method, where $d_{\psi}$ denotes the dimensionality of $\psi$. In practice, the MBRL algorithm often needs a large number of virtual samples $m$ (e.g., millions) to solve the tasks, while the task parameter dimension $d_{\psi}$ is a small constant and $d_{\theta}\ll d_{\phi}$. Therefore, in our algorithm, the runtime of computing the gradient w.r.t. the task parameters is negligible.

In terms of sample complexity, although computing the gradient estimator requires samples, in practice we can reuse the samples collected by the MBRL algorithm, so that almost no extra samples are needed to compute the gradient w.r.t. the task parameters.

Relation to Meta-RL. Indeed, our method assumes knowledge of the task parameters and is different from the standard meta-RL setting. However, we believe that our setting (a) is practically relevant and (b) provides new opportunities for more sample-efficient and robust algorithms. Handcrafted families of reward functions are reasonable in practical applications, if not common. Moreover, if we do not even know the family of test tasks, it is challenging, if not impossible, to be robust to task shifts at test time. Our more restricted setting makes it possible to be robust to worst-case task shifts. Some intermediate formulations may also be possible, e.g., it is possible to adapt AdMRL to settings where the task family is known at training time but the task parameters are unknown at test time and must be inferred. We leave these as future work.

5 Experiments

In our experiments, we aim to study the following questions: (1) How does AdMRL perform on standard meta-RL benchmarks compared to prior state-of-the-art approaches? (2) Does AdMRL achieve better worst-case performance than distributional meta-RL methods? (3) How does AdMRL perform in environments where task parameters are high-dimensional? (4) Does AdMRL generalize better than distributional meta-RL on out-of-distribution tasks?

We evaluate our approach on a variety of continuous control tasks based on OpenAI gym (Brockman et al., 2016), which uses the MuJoCo physics simulator (Todorov et al., 2012).

Low-dimensional velocity-control tasks

Following and extending the setup of Finn et al. (2017); Rakelly et al. (2019), we first consider a family of environments and tasks relating to 2-D or 3-D velocity control. We consider three popular MuJoCo environments: Hopper, Walker and Ant. For the 3-D task families, we have three task parameters $\psi=(\psi_{x},\psi_{y},\psi_{z})$ corresponding to the target $x$-velocity, $y$-velocity, and $z$-position. Given the task parameter, the agent's goal is to match the target $x$ and $y$ velocities and $z$ position as closely as possible. The reward is defined as $r_{\psi}(v_{x},v_{y},z)=c_{1}|v_{x}-\psi_{x}|+c_{2}|v_{y}-\psi_{y}|+c_{3}|h_{z}-\psi_{z}|$, where $v_{x}$ and $v_{y}$ denote the $x$ and $y$ velocities, $h_{z}$ denotes the $z$ height, and $c_{1},c_{2},c_{3}$ are handcrafted coefficients ensuring that each reward component contributes similarly. The set of task parameters $\psi$ is a 3-D box $\Psi$, which can depend on the particular environment; e.g., Ant3D has $\Psi=[-3,3]\times[-3,3]\times[0.4,0.6]$, where the range for the $z$-position is chosen so that the target is mostly achievable. For a 2-D task, the setup is similar except that only two of these three values are targeted. We experiment with Hopper2D, Walker2D and Ant2D. Details are given in Appendix C. We note that we extend the 2-D settings in Finn et al. (2017); Rakelly et al. (2019) to 3-D because when the task parameters have more degrees of freedom, task distribution shifts become more prominent.
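A minimal sketch of this velocity-control reward is given below. The coefficient values are placeholders (the handcrafted values are listed in Appendix C); we assume negative coefficients here so that matching the target yields a higher reward.

```python
def velocity_reward(vx, vy, hz, psi, c=(-1.0, -1.0, -1.0)):
    """r_psi(vx, vy, z) = c1|vx - psi_x| + c2|vy - psi_y| + c3|hz - psi_z|.
    psi = (psi_x, psi_y, psi_z); c1, c2, c3 are placeholder coefficients."""
    psi_x, psi_y, psi_z = psi
    c1, c2, c3 = c
    return c1 * abs(vx - psi_x) + c2 * abs(vy - psi_y) + c3 * abs(hz - psi_z)
```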

High-dimensional tasks

We also create a more complex family of high-dimensional tasks to test the strength of our algorithm in dealing with adversarial tasks among a large family of tasks with more degrees of freedom. Specifically, the reward function is linear in the post-transition state $s'$, parameterized by the task parameter $\psi\in\mathbb{R}^{d}$ (where $d$ is the state dimension): $r_{\psi}(s,a,s')=\psi^{\top}s'$. Here the task parameter set is $\Psi=[-1,1]^{d}$. In other words, the agent's goal is to take actions that make $s'$ as linearly correlated with the target vector $\psi$ as possible. We use HalfCheetah, where $d=18$. Note that to ensure that each state coordinate contributes similarly to the total reward, we normalize the states as $\frac{s-\mu}{\sigma}$ before computing the reward function, where $\mu,\sigma\in\mathbb{R}^{d}$ are computed from all states collected by a random policy in the real environment. We call this family the Cheetah-Highdim tasks. Tasks parameterized in this way are surprisingly often semantically meaningful, corresponding to rotations, jumping, etc. Appendix D shows some visualizations of the trajectories.
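A minimal sketch of this linear reward with the state normalization described above; the per-coordinate statistics `mu` and `sigma` are assumed to have been precomputed from states collected by a random policy.

```python
import numpy as np

def linear_state_reward(s_next, psi, mu, sigma):
    """r_psi(s, a, s') = psi^T s', with s' normalized coordinate-wise."""
    s_norm = (s_next - mu) / sigma     # each coordinate contributes similarly
    return float(psi @ s_norm)

# Example: d = 18 for HalfCheetah; psi lies in the box [-1, 1]^d.
d = 18
psi = np.random.uniform(-1.0, 1.0, size=d)
```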

Training

We compare our approach with previous meta-RL methods, including MAML (Finn et al., 2017) and PEARL (Rakelly et al., 2019). The training process for our algorithm is outlined in Algorithm 1. We build our algorithm on the code provided by Luo et al. (2018), and use the publicly available code for our baselines MAML and PEARL. Most hyper-parameters are taken directly from the supplied implementations. We list all hyper-parameters used for all algorithms in Appendix C. We note that we only run our algorithm on $n_{tasks}=10$ or $n_{tasks}=20$ training tasks, whereas we allow MAML and PEARL to visit 150 tasks during meta-training to make the comparison generous to the baselines. The training processes of MAML and PEARL require 80 and 2.5 million samples respectively, while our method AdMRL only requires 0.4 or 0.8 million samples. Besides standard meta-RL methods, we also compare AdMRL with multi-task policy approaches that also leverage the task parameters explicitly. In detail, we experiment with three more baselines that use a multi-task policy $\pi(a|s,\psi)$ taking the task parameters $\psi$ as inputs: (A) MT-joint, which trains the multi-task policy $\pi$ jointly on all training tasks; (B) MAML-MT and (C) PEARL-MT, which replace the policies in MAML and PEARL by a multi-task policy, respectively. We keep the number of training samples and tasks the same.

Evaluation Metric

For low-dimensional tasks, we enumerate tasks in a grid. For each 2-D environment (Hopper2D, Walker2D, Ant2D) we evaluate on a grid of size $6\times 6$. For the 3-D tasks (Ant3D), we evaluate on a box of size $4\times 4\times 3$. For high-dimensional tasks, we randomly sample 20 test tasks uniformly on the boundary. For each task $\psi$, we compare different algorithms on: $A_{0}(\psi)$ (zero-shot adaptation performance with no samples), $A_{n}(\psi)$ (adaptation performance after collecting $n$ samples), $G_{n}(\psi)\triangleq A^{\star}(\psi)-A_{n}(\psi)$ (sub-optimality gap), and $G^{\max}_{n}=\max_{\psi\in\Psi}G_{n}(\psi)$ (worst-case sub-optimality gap). In our experiments, we compare AdMRL with MAML and PEARL in all environments with $n=2000,4000,6000$. We also compare AdMRL with distributional variants (i.e., model-based methods with a uniform or Gaussian task-sampling distribution) on worst-case tasks, high-dimensional tasks and out-of-distribution (OOD) tasks.
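As a small illustration of these metrics, the snippet below enumerates a 2-D grid of test tasks and reports the per-task gap $G_{n}(\psi)$ and the worst-case gap $G^{\max}_{n}$. The evaluation callables are illustrative assumptions, not the released evaluation code.

```python
import itertools
import numpy as np

def worst_case_gap(optimal_return, adapted_return, n, grid=np.linspace(-3, 3, 6)):
    """G_n(psi) = A*(psi) - A_n(psi) over a 6x6 grid of 2-D tasks, and G_n^max.
    optimal_return(psi) and adapted_return(psi, n) are assumed evaluation routines."""
    gaps = {}
    for psi in itertools.product(grid, grid):     # 2-D task grid, e.g. Ant2D
        gaps[psi] = optimal_return(psi) - adapted_return(psi, n)
    return gaps, max(gaps.values())               # per-task gaps and G_n^max
```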

Figure 2: Average returns $A_{n}(\psi)$ over all tasks of adapted policies (with 3 random seeds) from our algorithm, MAML and PEARL. Our approach substantially outperforms the baselines in training- and test-time sample efficiency, and even with zero-shot adaptation.

5.1 Adaptation Performance Compared to Baselines

For the tasks described in Section 5, we compare our algorithm against MAML and PEARL. Figure 2 shows the adaptation results on the test task set. We produce the curves by: (1) running our algorithm and the baseline algorithms, training on adversarially chosen tasks and uniformly sampled random tasks respectively; (2) for each test task, first performing zero-shot adaptation for our algorithm, and then running our algorithm and the baseline algorithms while collecting samples; (3) estimating the average returns of the policies by sampling new roll-outs. The curves show the return averaged across all test tasks with three random seeds at test time. Our approach AdMRL outperforms MAML and PEARL across all test tasks, even though our method visits far fewer tasks (7/8 fewer) and samples (2/3 fewer) than the baselines during meta-training. AdMRL outperforms MAML and PEARL even with zero-shot adaptation, namely, collecting no samples. (Note that zero-shot model-based adaptation takes advantage of additional information, namely the reward function, which MAML and PEARL have no mechanism for using.) We also find that the zero-shot adaptation performance of AdMRL is often very close to the performance after collecting samples; this is a result of minimizing the sub-optimality gap in our method. Our results also show that AdMRL outperforms the multi-task policy baselines consistently, although it is trained on 100X fewer samples than MT-joint and MAML-MT and 3X fewer than PEARL-MT. This implies that a multi-task policy does not necessarily help MAML and PEARL. We conjecture that this is because the optimal policy is a very complex function of the task parameters that cannot necessarily be expressed by neural nets.

5.2 Comparing with Model-based Baselines in Worst-case Sub-optimality Gap

Figure 3: (a) Sub-optimality gap $G_{n}(\psi)$ of adapted policies ($n=6K$) for each test task $\psi$ from AdMRL, MB-Unif, and MB-Gauss. Lighter means smaller, which is better. For tasks on the boundary, AdMRL achieves a much lower $G_{n}(\psi)$ than MB-Gauss and MB-Unif, which indicates that AdMRL generalizes better in the worst case. (b) The worst-case sub-optimality gap $G_{n}^{\max}$ as a function of the number of adaptation samples $n$. AdMRL successfully minimizes the worst-case sub-optimality gap.

In this section, we investigate the worst-case performance of our approach. We compare our adversarial selection method with distributional variants, i.e., using model-based training but sampling tasks from a uniform or Gaussian distribution with variance 1, denoted by MB-Unif and MB-Gauss, respectively. All methods are trained on 20 tasks and then evaluated on a $6\times 6$ grid of test tasks. We plot heatmaps of the sub-optimality gap for each test task in Figure 3. We find that while both MB-Gauss and MB-Unif tend to overfit on the tasks in the center, AdMRL generalizes much better to the tasks on the boundary. Figure 3 also shows adaptation performance on the tasks with the worst sub-optimality gap. We find that AdMRL achieves a lower sub-optimality gap in the worst cases.

Performance on high-dimensional tasks

Figure 4 shows the sub-optimality gap during adaptation on high-dimensional tasks. We highlight that AdMRL performs significantly better than MB-Unif and MB-Gauss when the task parameters are high-dimensional. In the high-dimensional tasks, we find that each task has diverse optimal behavior. Thus, sampling from a given distribution of tasks during meta-training becomes less efficient — it is hard to cover the tasks with the worst sub-optimality gap by randomly sampling from a given distribution. In contrast, our non-distributional adversarial selection searches for the hardest tasks efficiently and trains a model that minimizes the worst-case sub-optimality gap.

Visualization. To understand how our algorithm works, we visualize the task parameters $\psi$ visited during meta-training in the Ant3D environment. We compare our method with MB-Unif and MB-Gauss in Figure 4. We find that our method quickly visits the hard tasks on the boundary, in the sense that it finds the most informative tasks to train our model. In contrast, sampling randomly from a uniform or Gaussian distribution is much less likely to visit the tasks on the boundary.

Figure 4: (a) Visualization of the training tasks visited by MB-Unif, MB-Gauss and AdMRL; AdMRL quickly visits tasks with large sub-optimality gap on the boundary and trains the model to minimize the worst-case sub-optimality gap. (b) The worst-case sub-optimality gap $G_{n}^{\max}$ as a function of the number of adaptation samples $n$ for high-dimensional tasks. AdMRL significantly outperforms the baselines on such tasks.

5.3 Out-of-distribution Performance

We evaluate our algorithm on out-of-distribution tasks in the Ant2D environment. We train agents with tasks drawn from $\Psi=[-3,3]^{2}$ while testing on OOD tasks from $[-5,5]^{2}$. Figure 5 shows the performance of AdMRL in comparison to MB-Unif and MB-Gauss. We find that AdMRL has a much lower sub-optimality gap than MB-Unif and MB-Gauss on OOD tasks, which shows the generalization power of AdMRL.

Figure 5: (a) Sub-optimality gap $G_{n}(\psi)$ ($n=6K$) for each OOD test task $\psi$ of adapted policies from AdMRL, MB-Unif and MB-Gauss. Lighter means smaller, which is better. Training tasks are drawn from $[-3,3]^{2}$ (shown as the red box) while we only test on OOD tasks drawn from $[-5,5]^{2}$ (on the boundary). Our approach AdMRL generalizes much better and achieves a lower $G_{n}(\psi)$ than MB-Unif and MB-Gauss on OOD tasks. (b) The worst-case sub-optimality gap $G_{n}^{\max}$ as a function of the number of adaptation samples $n$.
Figure 6: Model errors.

We also evaluate the quality of the learned models. We first collect samples from the true dynamics on OOD tasks in the Ant2D environment and then evaluate the prediction errors of the learned models using the L2 loss. As shown in Figure 6, the model learned by AdMRL is more accurate than those learned by MB-Unif and MB-Gauss.

6 Conclusion

In this paper, we propose Model-based Adversarial Meta-Reinforcement Learning (AdMRL) to address the distribution shift issue of meta-RL. We formulate the adversarial meta-RL problem and propose a minimax formulation to minimize the worst-case sub-optimality gap. To optimize it efficiently, we derive an estimator of the gradient with respect to the task parameters, and implement the estimator efficiently using the conjugate gradient method. We provide extensive results on standard benchmark environments to show the efficacy of our approach over prior meta-RL algorithms. In the future, several interesting directions lie ahead: (1) apply AdMRL to more difficult settings such as visual domains; (2) replace SLBO with other MBRL algorithms; (3) apply AdMRL to cases where the parameterization of the reward function is unknown.

Acknowledgement

We thank Yuping Luo for helpful discussions about the implementation details of SLBO. Zichuan was supported in part by the Tsinghua Academic Fund Graduate Overseas Studies and in part by the National Key Research & Development Plan of China (grant no. 2016YFA0602200 and 2017YFA0604500). TM acknowledges support of Google Faculty Award and Lam Research. The work is also in part supported by SDSI and SAIL.

References

  • Atlas et al. [1990] L. E. Atlas, D. A. Cohn, and R. E. Ladner. Training connectionist networks with queries and selective sampling. In Advances in neural information processing systems, pages 566–573, 1990.
  • Bai et al. [2019] S. Bai, J. Z. Kolter, and V. Koltun. Deep equilibrium models. In Advances in Neural Information Processing Systems, pages 688–699, 2019.
  • Bengio et al. [1992] S. Bengio, Y. Bengio, J. Cloutier, and J. Gecsei. On the optimization of a synaptic learning rule. In Preprints Conf. Optimality in Artificial and Biological Neural Networks, volume 2. Univ. of Texas, 1992.
  • Brockman et al. [2016] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
  • Buckman et al. [2018] J. Buckman, D. Hafner, G. Tucker, E. Brevdo, and H. Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion. In Advances in Neural Information Processing Systems, pages 8224–8234, 2018.
  • Burda et al. [2018a] Y. Burda, H. Edwards, D. Pathak, A. Storkey, T. Darrell, and A. A. Efros. Large-scale study of curiosity-driven learning. arXiv preprint arXiv:1808.04355, 2018a.
  • Burda et al. [2018b] Y. Burda, H. Edwards, A. Storkey, and O. Klimov. Exploration by random network distillation. arXiv preprint arXiv:1810.12894, 2018b.
  • Chua et al. [2018] K. Chua, R. Calandra, R. McAllister, and S. Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pages 4754–4765, 2018.
  • Dong et al. [2019] K. Dong, Y. Luo, and T. Ma. Bootstrapping the expressivity with model-based planning. arXiv preprint arXiv:1910.05927, 2019.
  • Duan et al. [2016] Y. Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel. Rl2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016.
  • Fakoor et al. [2019] R. Fakoor, P. Chaudhari, S. Soatto, and A. J. Smola. Meta-q-learning. arXiv preprint arXiv:1910.00125, 2019.
  • Feinberg et al. [2018] V. Feinberg, A. Wan, I. Stoica, M. I. Jordan, J. E. Gonzalez, and S. Levine. Model-based value estimation for efficient model-free reinforcement learning. arXiv preprint arXiv:1803.00101, 2018.
  • Finn et al. [2017] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1126–1135. JMLR. org, 2017.
  • Gupta et al. [2018] A. Gupta, B. Eysenbach, C. Finn, and S. Levine. Unsupervised meta-learning for reinforcement learning. arXiv preprint arXiv:1806.04640, 2018.
  • Hochreiter et al. [2001] S. Hochreiter, A. S. Younger, and P. R. Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87–94. Springer, 2001.
  • Humplik et al. [2019] J. Humplik, A. Galashov, L. Hasenclever, P. A. Ortega, Y. W. Teh, and N. Heess. Meta reinforcement learning as task inference. arXiv preprint arXiv:1905.06424, 2019.
  • Janner et al. [2019] M. Janner, J. Fu, M. Zhang, and S. Levine. When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems, pages 12498–12509, 2019.
  • Jin et al. [2020] C. Jin, A. Krishnamurthy, M. Simchowitz, and T. Yu. Reward-free exploration for reinforcement learning. arXiv preprint arXiv:2002.02794, 2020.
  • Kirsch et al. [2019] L. Kirsch, S. van Steenkiste, and J. Schmidhuber. Improving generalization in meta reinforcement learning using learned objectives. arXiv preprint arXiv:1910.04098, 2019.
  • Kurutach et al. [2018] T. Kurutach, I. Clavera, Y. Duan, A. Tamar, and P. Abbeel. Model-ensemble trust-region policy optimization. arXiv preprint arXiv:1802.10592, 2018.
  • Lan et al. [2019] L. Lan, Z. Li, X. Guan, and P. Wang. Meta reinforcement learning with task embedding and shared policy. arXiv preprint arXiv:1905.06527, 2019.
  • Landolfi et al. [2019] N. C. Landolfi, G. Thomas, and T. Ma. A model-based approach for sample-efficient multi-task reinforcement learning. arXiv preprint arXiv:1907.04964, 2019.
  • Levine et al. [2016] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334–1373, 2016.
  • Lewis and Gale [1994] D. D. Lewis and W. A. Gale. A sequential algorithm for training text classifiers. In SIGIR’94, pages 3–12. Springer, 1994.
  • Luo et al. [2018] Y. Luo, H. Xu, Y. Li, Y. Tian, T. Darrell, and T. Ma. Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees. arXiv preprint arXiv:1807.03858, 2018.
  • Madry et al. [2017] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
  • Mehta et al. [2020] B. Mehta, T. Deleu, S. C. Raparthy, C. J. Pal, and L. Paull. Curriculum in gradient-based meta-reinforcement learning. arXiv preprint arXiv:2002.07956, 2020.
  • Mendonca et al. [2019] R. Mendonca, A. Gupta, R. Kralev, P. Abbeel, S. Levine, and C. Finn. Guided meta-policy search. In Advances in Neural Information Processing Systems, pages 9653–9664, 2019.
  • Mendonca et al. [2020] R. Mendonca, X. Geng, C. Finn, and S. Levine. Meta-reinforcement learning robust to distributional shift via model identification and experience relabeling. arXiv preprint arXiv:2006.07178, 2020.
  • Mnih et al. [2013] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
  • Nagabandi et al. [2018a] A. Nagabandi, I. Clavera, S. Liu, R. S. Fearing, P. Abbeel, S. Levine, and C. Finn. Learning to adapt in dynamic, real-world environments through meta-reinforcement learning. arXiv preprint arXiv:1803.11347, 2018a.
  • Nagabandi et al. [2018b] A. Nagabandi, C. Finn, and S. Levine. Deep online learning via meta-learning: Continual adaptation for model-based rl. arXiv preprint arXiv:1812.07671, 2018b.
  • Nagabandi et al. [2018c] A. Nagabandi, G. Kahn, R. S. Fearing, and S. Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 7559–7566. IEEE, 2018c.
  • Pathak et al. [2017] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell. Curiosity-driven exploration by self-supervised prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 16–17, 2017.
  • Pearlmutter [1994] B. A. Pearlmutter. Fast exact multiplication by the hessian. Neural computation, 6(1):147–160, 1994.
  • Rajeswaran et al. [2016] A. Rajeswaran, S. Ghotra, B. Ravindran, and S. Levine. Epopt: Learning robust neural network policies using model ensembles. arXiv preprint arXiv:1610.01283, 2016.
  • Rajeswaran et al. [2020] A. Rajeswaran, I. Mordatch, and V. Kumar. A game theoretic framework for model based reinforcement learning. arXiv preprint arXiv:2004.07804, 2020.
  • Rakelly et al. [2019] K. Rakelly, A. Zhou, D. Quillen, C. Finn, and S. Levine. Efficient off-policy meta-reinforcement learning via probabilistic context variables. arXiv preprint arXiv:1903.08254, 2019.
  • Rothfuss et al. [2018] J. Rothfuss, D. Lee, I. Clavera, T. Asfour, and P. Abbeel. Promp: Proximal meta-policy search. arXiv preprint arXiv:1810.06784, 2018.
  • Sæmundsson et al. [2018] S. Sæmundsson, K. Hofmann, and M. P. Deisenroth. Meta reinforcement learning with latent variable gaussian processes. arXiv preprint arXiv:1803.07551, 2018.
  • Schmidhuber [1987] J. Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-… hook. PhD thesis, Technische Universität München, 1987.
  • [42] S. Schulze, S. Whiteson, L. Zintgraf, M. Igl, Y. Gal, K. Shiarlis, and K. Hofmann. Varibad: a very good method for bayes-adaptive deep rl via meta-learning. International Conference on Learning Representations.
  • Settles [2009] B. Settles. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences, 2009.
  • Silberman [1996] M. Silberman. Active Learning: 101 Strategies To Teach Any Subject. ERIC, 1996.
  • Silver et al. [2016] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484, 2016.
  • Snell et al. [2017] J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. In Advances in neural information processing systems, pages 4077–4087, 2017.
  • Sutton [1990] R. S. Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Machine learning proceedings 1990, pages 216–224. Elsevier, 1990.
  • Sutton et al. [2000] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057–1063, 2000.
  • Thrun [1996] S. Thrun. Is learning the n-th thing any easier than learning the first? In Advances in neural information processing systems, pages 640–646, 1996.
  • Thrun and Pratt [2012] S. Thrun and L. Pratt. Learning to learn. Springer Science & Business Media, 2012.
  • Todorov et al. [2012] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026–5033. IEEE, 2012.
  • Utgoff [1986] P. E. Utgoff. Shift of bias for inductive concept learning. Machine learning: An artificial intelligence approach, 2:107–148, 1986.
  • Wang et al. [2020] H. Wang, J. Zhou, and X. He. Learning context-aware task reasoning for efficient meta-reinforcement learning. arXiv preprint arXiv:2003.01373, 2020.
  • Wang et al. [2016] J. X. Wang, Z. Kurth-Nelson, D. Tirumala, H. Soyer, J. Z. Leibo, R. Munos, C. Blundell, D. Kumaran, and M. Botvinick. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.
  • Wang and Ba [2019] T. Wang and J. Ba. Exploring model-based planning with policy networks. arXiv preprint arXiv:1906.08649, 2019.
  • Wang et al. [2019] T. Wang, X. Bao, I. Clavera, J. Hoang, Y. Wen, E. Langlois, S. Zhang, G. Zhang, P. Abbeel, and J. Ba. Benchmarking model-based reinforcement learning. arXiv preprint arXiv:1907.02057, 2019.
  • Wikipedia contributors [2020] Wikipedia contributors. Implicit function theorem — Wikipedia, the free encyclopedia, 2020. URL https://en.wikipedia.org/w/index.php?title=Implicit_function_theorem&oldid=953711659. [Online; accessed 2-June-2020].
  • Williams [1992] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
  • Zintgraf et al. [2019] L. Zintgraf, M. Igl, K. Shiarlis, A. Mahajan, K. Hofmann, and S. Whiteson. Variational task embeddings for fast adaptation in deep reinforcement learning. In International Conference on Learning Representations Workshop on Structure & Priors in Reinforcement Learning, 2019.

Appendix A Omitted Derivations

A.1 Jacobian of $\hat{\theta}$ with respect to $\psi$

We begin with an observation: first-order optimality conditions for $\hat{\theta}$ necessitate that

$$\left.\frac{\partial\hat{\eta}}{\partial\theta}\right|_{\hat{\theta}}=0 \tag{8}$$

Then, the implicit function theorem tells us that for sufficiently small $\Delta\psi$, there exists $\Delta\theta$ as a function of $\Delta\psi$ such that

$$\left.\frac{\partial\hat{\eta}}{\partial\theta}\right|_{\hat{\theta}+\Delta\theta,\,\psi+\Delta\psi}=0 \tag{9}$$

To first order, we have

$$\left.\frac{\partial\hat{\eta}}{\partial\theta}\right|_{\hat{\theta}+\Delta\theta,\,\psi+\Delta\psi}\approx\underbrace{\left.\frac{\partial\hat{\eta}}{\partial\theta}\right|_{\hat{\theta}}}_{0}+\left.\frac{\partial^{2}\hat{\eta}}{\partial\theta\,\partial\theta^{\top}}\right|_{\hat{\theta}}\Delta\theta+\left.\frac{\partial^{2}\hat{\eta}}{\partial\theta\,\partial\psi^{\top}}\right|_{\hat{\theta}}\Delta\psi \tag{10}$$

Thus, solving for $\Delta\theta$ as a function of $\Delta\psi$ and taking the limit as $\Delta\psi\to 0$, we obtain

$$\frac{\partial\hat{\theta}}{\partial\psi^{\top}}=-\left(\left.\frac{\partial^{2}\hat{\eta}}{\partial\theta\,\partial\theta^{\top}}\right|_{\hat{\theta}}\right)^{-1}\left.\frac{\partial^{2}\hat{\eta}}{\partial\theta\,\partial\psi^{\top}}\right|_{\hat{\theta}} \tag{11}$$
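As a quick numerical sanity check of Eq. (11), the snippet below verifies the formula on a toy quadratic objective whose maximizer is available in closed form; the matrices A and B here are illustrative and unrelated to the A, b of Section 4.3.

```python
import numpy as np

# Toy objective: eta_hat(theta, psi) = -1/2 theta^T A theta + theta^T B psi,
# whose maximizer is theta_hat(psi) = A^{-1} B psi, so d theta_hat / d psi^T = A^{-1} B.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)); A = A @ A.T + 5 * np.eye(5)    # positive-definite
B = rng.normal(size=(5, 3))

hessian_theta_theta = -A       # d^2 eta_hat / (d theta d theta^T)
mixed_theta_psi = B            # d^2 eta_hat / (d theta d psi^T)
implicit_jacobian = -np.linalg.solve(hessian_theta_theta, mixed_theta_psi)  # Eq. (11)

assert np.allclose(implicit_jacobian, np.linalg.solve(A, B))  # closed form A^{-1} B
```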

A.2 Policy Hessian

Fix dynamics $T$, and let $\pi_{\theta}(\tau)$ denote the probability density of trajectory $\tau$ under policy $\pi_{\theta}$. Then we have

$$\frac{\partial\log\pi_{\theta}(\tau)}{\partial\theta}=\frac{\frac{\partial\pi_{\theta}(\tau)}{\partial\theta}}{\pi_{\theta}(\tau)}\qquad\text{i.e.}\qquad\frac{\partial\pi_{\theta}(\tau)}{\partial\theta}=\pi_{\theta}(\tau)\frac{\partial\log\pi_{\theta}(\tau)}{\partial\theta} \tag{12}$$

Thus we get the basic (REINFORCE) policy gradient

\frac{\partial\eta}{\partial\theta}=\frac{\partial}{\partial\theta}\int\pi_{\theta}(\tau)R_{\psi}(\tau)\,\mathrm{d}\tau=\int\frac{\partial\pi_{\theta}(\tau)}{\partial\theta}R_{\psi}(\tau)\,\mathrm{d}\tau=\underset{\tau\sim\pi_{\theta},T}{\mathbb{E}}\left[\frac{\partial\log\pi_{\theta}(\tau)}{\partial\theta}R_{\psi}(\tau)\right]. \qquad (13)

Differentiating our earlier expression for $\frac{\partial\pi_{\theta}(\tau)}{\partial\theta}$ once more, and then reusing that same expression again, we have

\frac{\partial^{2}\pi_{\theta}(\tau)}{\partial\theta\partial\theta^{\top}}=\frac{\partial\pi_{\theta}(\tau)}{\partial\theta}\frac{\partial\log\pi_{\theta}(\tau)}{\partial\theta^{\top}}+\pi_{\theta}(\tau)\frac{\partial^{2}\log\pi_{\theta}(\tau)}{\partial\theta\partial\theta^{\top}} \qquad (14)
=\pi_{\theta}(\tau)\left(\frac{\partial\log\pi_{\theta}(\tau)}{\partial\theta}\frac{\partial\log\pi_{\theta}(\tau)}{\partial\theta^{\top}}+\frac{\partial^{2}\log\pi_{\theta}(\tau)}{\partial\theta\partial\theta^{\top}}\right) \qquad (15)

Thus

\frac{\partial^{2}\eta}{\partial\theta\partial\theta^{\top}}=\frac{\partial^{2}}{\partial\theta\partial\theta^{\top}}\int\pi_{\theta}(\tau)R_{\psi}(\tau)\,\mathrm{d}\tau \qquad (16)
=\int\frac{\partial^{2}\pi_{\theta}(\tau)}{\partial\theta\partial\theta^{\top}}R_{\psi}(\tau)\,\mathrm{d}\tau \qquad (17)
=\int\pi_{\theta}(\tau)\left(\frac{\partial\log\pi_{\theta}(\tau)}{\partial\theta}\frac{\partial\log\pi_{\theta}(\tau)}{\partial\theta^{\top}}+\frac{\partial^{2}\log\pi_{\theta}(\tau)}{\partial\theta\partial\theta^{\top}}\right)R_{\psi}(\tau)\,\mathrm{d}\tau \qquad (18)
=\underset{\tau\sim\pi_{\theta},T}{\mathbb{E}}\left[\left(\frac{\partial\log\pi_{\theta}(\tau)}{\partial\theta}\frac{\partial\log\pi_{\theta}(\tau)}{\partial\theta^{\top}}+\frac{\partial^{2}\log\pi_{\theta}(\tau)}{\partial\theta\partial\theta^{\top}}\right)R_{\psi}(\tau)\right]. \qquad (19)
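To make the estimators in Eq. (13) and Eq. (19) concrete, here is a toy Monte-Carlo check (purely illustrative: a one-step "trajectory" with a unit-variance Gaussian policy over a scalar action, rather than the trajectory distributions used in the paper):

```python
import numpy as np

# Toy check of the score-function estimators in Eq. (13) and Eq. (19).
# One-step "trajectory": a ~ N(theta, 1), reward R(a) = a^2, so
#   eta(theta) = E[a^2] = theta^2 + 1,  d eta/d theta = 2*theta,  d^2 eta/d theta^2 = 2.
rng = np.random.default_rng(0)
theta = 1.5
a = rng.normal(loc=theta, scale=1.0, size=2_000_000)

score = a - theta        # d log pi_theta(a) / d theta for a unit-variance Gaussian
hess_logpi = -1.0        # d^2 log pi_theta(a) / d theta^2
reward = a ** 2

grad_estimate = np.mean(score * reward)                       # Eq. (13): close to 2*theta = 3
hess_estimate = np.mean((score ** 2 + hess_logpi) * reward)   # Eq. (19): close to 2
print(grad_estimate, hess_estimate)
```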

Appendix B Implementation Details

This section discusses how to compute $Ax$ using standard automatic differentiation packages. We first define the following function:

\eta_{h}(\theta_{1},\theta_{2},\theta_{3},\theta,\widehat{T}_{\phi},\psi)=\underset{\pi_{\theta},\widehat{T}_{\phi}}{\mathbb{E}}\left[\left(\log\pi_{\theta_{1}}(a_{t}|s_{t})\log\pi_{\theta_{2}}(a_{t}|s_{t})+\log\pi_{\theta_{3}}(a_{t}|s_{t})\right)R_{\psi}(\tau)\right], \qquad (20)

where $\theta_{1},\theta_{2},\theta_{3}$ are parameter copies of $\theta$. We then use Hessian-vector products to avoid directly computing the second derivatives. Specifically, we compute the two parts of Eq. (7) separately: first by differentiating $\eta_{h}$ with respect to $\theta_{2}^{\top}$ and then $\theta_{1}$,

g_{1}=\frac{\partial}{\partial\theta_{1}}\left(\frac{\partial\eta_{h}}{\partial\theta_{2}^{\top}}\cdot x\right)=\frac{\partial\log\pi_{\theta_{1}}(a_{t}|s_{t})}{\partial\theta_{1}}\left(\frac{\partial\log\pi_{\theta_{2}}(a_{t}|s_{t})}{\partial\theta_{2}^{\top}}\cdot x\right)R_{\psi}(\tau), \qquad (21)

and then by differentiating $\eta_{h}$ twice with respect to $\theta_{3}$,

g_{2}=\frac{\partial}{\partial\theta_{3}}\left(\frac{\partial\eta_{h}}{\partial\theta_{3}^{\top}}\cdot x\right)=\frac{\partial}{\partial\theta_{3}}\left(\frac{\partial\log\pi_{\theta_{3}}(a_{t}|s_{t})}{\partial\theta_{3}^{\top}}\cdot x\right)R_{\psi}(\tau), \qquad (22)

and thus we have $Ax=g_{1}+g_{2}$.
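As a concrete illustration of Eqs. (20)–(22), the sketch below shows how the two terms can be obtained with two calls to an automatic differentiation package. It is a PyTorch-style mock-up on a single state-action pair with a hypothetical unit-variance linear-Gaussian policy, not the exact code in our repository:

```python
import torch

torch.manual_seed(0)

def log_pi(theta, s, a):
    # Log-density (up to a constant) of a unit-variance Gaussian policy
    # with linear mean: pi_theta(a|s) = N(a; theta^T s, 1).
    return -0.5 * (a - s @ theta) ** 2

s = torch.randn(4)             # a single state
a = torch.randn(())            # a single action
R = torch.tensor(1.3)          # stand-in for the return R_psi(tau)
theta = torch.randn(4)
x = torch.randn(4)             # the vector in the Hessian-vector product Ax

# Three copies of theta, as in Eq. (20).
theta1 = theta.clone().requires_grad_(True)
theta2 = theta.clone().requires_grad_(True)
theta3 = theta.clone().requires_grad_(True)

# g1 (Eq. 21): the log pi_theta3 term of eta_h does not depend on theta2, so only
# the product term contributes; differentiate w.r.t. theta2, dot with x, then w.r.t. theta1.
term12 = log_pi(theta1, s, a) * log_pi(theta2, s, a) * R
grad2 = torch.autograd.grad(term12, theta2, create_graph=True)[0]
g1 = torch.autograd.grad(grad2 @ x, theta1)[0]

# g2 (Eq. 22): differentiate the log pi_theta3 term twice w.r.t. theta3.
term3 = log_pi(theta3, s, a) * R
grad3 = torch.autograd.grad(term3, theta3, create_graph=True)[0]
g2 = torch.autograd.grad(grad3 @ x, theta3)[0]

Ax = g1 + g2
print(Ax)
```

In practice the two terms are averaged over sampled virtual trajectories; the single-sample version above only illustrates the parameter-copy trick and the two Hessian-vector products.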

Appendix C Hyper-parameters

We experimented with the following task settings: Hopper-2D with $x$ velocity and $z$ height from $\Psi=[-2,2]\times[1.2,2.0]$, Walker-2D with $x$ velocity and $z$ height from $\Psi=[-2,2]\times[1.0,1.8]$, Ant-2D with $x$ velocity and $y$ velocity from $\Psi=[-3,3]\times[-3,3]$, Ant-3D with $x$ velocity, $y$ velocity and $z$ height from $\Psi=[-3,3]\times[-3,3]\times[0.4,0.6]$, and Cheetah-Highdim with $\Psi=[-1,1]^{18}$. We also list the coefficients of the parameterized reward functions in Table 1.

Table 1: Coefficients in the parameterized reward functions

          Hopper2D   Walker2D   Ant2D   Ant3D
$c_1$     1          1          1       1
$c_2$     0          0          1       1
$c_3$     5          5          0       30

The hyper-parameters of MAML and PEARL are mostly taken directly from the supplied implementations of [Finn et al., 2017] and [Rakelly et al., 2019]. We run MAML for 500 training iterations: in each iteration, MAML uses a meta-batch size of 40 (the number of tasks sampled per iteration) and a batch size of 20 (the number of rollouts used to compute the policy gradient updates). Overall, MAML requires 80 million samples during meta-training. For PEARL, we first collect a batch of 150 training tasks by uniformly sampling from $\Psi$. We run PEARL for 500 training iterations: in each iteration, PEARL randomly samples 5 tasks and collects 1000 samples for each task from both the prior (400) and the posterior (600) of the context variables; for each gradient update, PEARL uses a meta-batch size of 10 and optimizes the parameters of the actor, critic and context encoder with 4000 steps of gradient descent. Overall, PEARL requires 2.5 million samples during meta-training.

For AdMRL, we first do zero-shot adaptation for each task with 40 virtual steps ($n_{zeroshot}=40$). We then perform SLBO [Luo et al., 2018] by interleaving data collection, dynamics model fitting and policy updates, using 3 outer iterations ($n_{slbo}=3$) and 20 inner iterations ($n_{inner}=20$). Algorithm 2 shows the pseudo-code of the virtual training procedure. In each inner iteration, we update the model for 100 steps ($n_{model}=100$) and update the policy for 20 steps ($n_{policy}=20$), each with 10000 virtual samples ($n_{trpo}=10000$). For the first task, we use $n_{slbo}=10$ (for Hopper2D, Walker2D) or $n_{slbo}=20$ (for Ant2D, Ant3D, Cheetah-Highdim). For all tasks, we sweep the learning rate $\alpha$ over {1, 2, 4, 8, 16, 32} and use $\alpha=2$ for Hopper2D, $\alpha=8$ for Walker2D, $\alpha=4$ for Ant2D and Ant3D, and $\alpha=16$ for Cheetah-Highdim. To compute the gradient with respect to the task parameters, we run 200 iterations of the conjugate gradient method.

Algorithm 2 Virtual Training in AdMRL
1: procedure VirtualTraining($\theta$: policy, $\phi$: model, $\psi$: task, $\mathcal{D}$: data, $n$: virtual steps)
2:     for $n$ iterations do
3:         Optimize the virtual dynamics $\widehat{T}_{\phi}$ over $\phi$ with data sampled from $\mathcal{D}$ for $n_{model}$ steps
4:         for $n_{policy}$ iterations do
5:             $\mathcal{D}' \leftarrow$ {collect $n_{trpo}$ samples from the learned dynamics $\widehat{T}_{\phi}$}
6:             Optimize $\pi_{\theta}$ by running TRPO on $\mathcal{D}'$
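The conjugate gradient iterations mentioned above only require matrix-vector products $Av$, which is exactly what the Hessian-vector products $Ax$ from Appendix B provide. A minimal, generic conjugate gradient solver of this form (an illustrative sketch, not taken verbatim from our code) looks as follows:

```python
import numpy as np

def conjugate_gradient(Avp, b, n_iters=200, tol=1e-10):
    """Solve A v = b for symmetric positive-definite A, given only the map v -> A v."""
    x = np.zeros_like(b)
    r = b.copy()            # residual b - A x (x starts at zero)
    p = r.copy()            # search direction
    rs_old = r @ r
    for _ in range(n_iters):
        Ap = Avp(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Usage on a small symmetric positive-definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(lambda v: A @ v, b))   # close to np.linalg.solve(A, b)
```

In our setting, the callable `Avp` would be replaced by the Hessian-vector product routine of Appendix B rather than an explicit matrix.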

Appendix D Examples of high-dimensional tasks

Figure 7 shows some trajectories in the high-dimensional task Cheetah-Highdim.

Figure 7: The high-dimensional tasks are surprisingly often semantically meaningful. Policies learned in these tasks can have diverse behaviors, such as front flip (top row), back flip (middle row), jumping (bottom row), etc.