
Some Insights into Lifelong Reinforcement Learning Systems

Changjian Li
Abstract

A lifelong reinforcement learning system is a learning system that has the ability to learn through trial-and-error interaction with the environment over its lifetime. In this paper, I give some arguments to show that the traditional reinforcement learning paradigm fails to model this type of learning system. Some insights into lifelong reinforcement learning are provided, along with a simplistic prototype lifelong reinforcement learning system.


1 Introduction

An agent is an abstraction of a decision-maker. At each time instance $t$, it receives an observation $o_{t}\in O$, and outputs an action $a_{t}\in A$ to be carried out in the environment it lives in. Here, $O$ is the (finite) set of possible observations the agent can receive, and $A$ is the (finite) set of actions the agent can choose from. An agent's observation $o_{t}$ depends on the current environment state $s_{t}\in S$ through an agent observation function $S\to O$, where $S$ is the set of possible environment states. The observation history $h^{o}_{t}=(o_{1},o_{2},\ldots,o_{t})$ is the sequence of observations the agent has received till time $t$. Let $H^{o}_{t}$ be the set of possible observation histories of length $t$; the policy $\pi_{t}:H^{o}_{t}\to A$ at time $t$ is defined as the mapping from an observation history of length $t$ to the action the agent will take. An agent's behavior can thus be fully specified by its policy across all timesteps, $\pi=(\pi_{1},\pi_{2},\ldots,\pi_{t},\ldots)$. Throughout the paper, it is assumed that an agent has a finite lifespan $T$.
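To make these definitions concrete, the following is a minimal Python sketch of the agent abstraction: a bundle of per-timestep policies applied to the growing observation history. The class and type names are mine, introduced only for illustration; they are not part of the paper's formulation.

```python
from typing import Callable, List

Observation = int                      # an element of the finite set O
Action = int                           # an element of the finite set A
# pi_t maps an observation history of length t to an action
Policy = Callable[[List[Observation]], Action]

class Agent:
    """An agent is fully specified by its policies pi_1, ..., pi_T."""

    def __init__(self, policies: List[Policy]):
        self.policies = policies       # policies[t - 1] is pi_t
        self.history: List[Observation] = []

    def act(self, observation: Observation) -> Action:
        """Receive o_t, append it to the history, and return a_t = pi_t(h^o_t)."""
        self.history.append(observation)
        t = len(self.history)
        return self.policies[t - 1](self.history)
```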

1.1 Scalar Reward Reinforcement Learning System

We are interested in agents that can achieve some goal. In reinforcement learning, a goal is expressed by a scalar signal $r_{t}\in\mathbb{R}$ called the reward. The reward is dependent on the agent's observation history, and is assumed to be available to the agent at each timestep in addition to the observation $o_{t}$. Our aim is to find policies that maximize the expected cumulative reward an agent receives over its lifetime:

$\max_{\pi}\;\mathbb{E}\Big[\sum_{t=1}^{T}r_{t}(h^{o}_{t})\Big]$   (1)

Using the maximization of expected cumulative scalar reward to formulate the general notion of goal is a design choice in reinforcement learning, based on what is commonly known as the reward hypothesis (Sutton & Barto, 2018). In Sutton's own words:

That all of what we mean by goals and purposes can be well thought of as the maximization of the expected value of the cumulative sum of a received scalar signal (called reward).

This design choice, however, is somewhat arbitrary. Among other things, the reward need not be a scalar (e.g. multi-objective reinforcement learning (White, 1982)), nor does it have to be a quantity whose cumulative sum is to be maximized (which we will come to shortly). Leaving aside the question of whether or not all goals can be formulated by Eq. 1, I intend to show in this paper that the problem of lifelong reinforcement learning probably should not be formulated as such.

Note that in Eq. 1, I defined the reward in terms of the observation history, instead of the history of environment states as in most reinforcement learning literature. This reflects the view that reward signals are internal to the agent, as pointed out by Singh et al. (2004) in their work on intrinsic motivation. Since the observations are all that the agent has access to from the external environment, the intrinsic reward should depend on the environment state only through the agent’s observation history.

Although the above reinforcement learning formulation recognizes the reward as a signal intrinsic to an agent, it focuses on learning across different generations of agents (the word 'generation' is used here only to emphasize that learning cannot be achieved within an agent's lifespan, and does not imply that evolutionary algorithms need to be used), as opposed to learning within an agent's lifespan. From an agent's point of view, the cumulative reward is known only when it reaches its end of life, by which time no learning can be done by the 'dying' agent itself. The individual reward received at each timestep does not really matter, since the optimization objective is the cumulative sum of reward. The information gathered by the agent, however, can be used to improve the policy of the next generation. In other words, with the conventional reinforcement learning formulation, learning can only happen at a level higher than the lives of individual agents (Figure 1), with the goal that an optimal agent can eventually be found; the lifetime behavior of a particular agent is not of concern.

Figure 1: Architecture of a traditional reinforcement learning system. At the beginning of an agent's life, it receives a policy $\pi^{i}=(\pi^{i}_{1},\pi^{i}_{2},\ldots,\pi^{i}_{T})$ from the learning algorithm that carries out a mix of exploitation and exploration, where the superscript $i$ indicates that the agent belongs to the $i$th generation. The agent receives an observation $o_{t}$ at each timestep $t$, and acts according to $\pi^{i}_{t}$. At the end of the agent's life, the learning algorithm gathers the observation history $h^{o}_{T}$ and the cumulative reward $\sum_{t=1}^{T}r(h^{o}_{t})$ from the agent, and outputs the next policy $\pi^{i+1}$ to be executed. The learning algorithm does not need to optimize the performance of any particular $\pi^{i}$, as long as it is guaranteed to eventually find the policies that maximize the expected cumulative reward.

1.2 Towards Lifelong Reinforcement Learning

In lifelong reinforcement learning, on the other hand, the focus is on the agent's ability to learn and adapt to the environment throughout its lifetime. Intuitively, this implies that the learning component of the learning system should reside within the agent.

To shed some light on lifelong reinforcement learning, consider the Q-learning (Watkins & Dayan, 1992) algorithm for the standard reinforcement learning problem formulated by Eq. 1. For the purpose of this example only, it is further assumed that:

  • The reward depends only on the current observation, i.e., $r(h^{o}_{t})=r(o_{t})$.

  • Observations are Markov with respect to past observations and actions, i.e., $P(o_{t}\,|\,o_{t-1},a_{t-1},\ldots,o_{1},a_{1})=P(o_{t}\,|\,o_{t-1},a_{t-1})$.

These assumptions are only made so that Q-learning will find the solution to Eq. 1, and are not essential for the general discussion. The (non-lifelong) learning system works as follows:

  1. The agent receives its initial Q estimate from the past generation.

  2. At each timestep $t$, the agent takes an $\epsilon$-greedy action based on the current Q estimate, then does a Bellman update on the Q estimate:

     $Q(o_{t},a_{t}) := Q(o_{t},a_{t}) + \alpha\big(r(o_{t}) + \max_{a}Q(o_{t+1},a) - Q(o_{t},a_{t})\big)$   (2)

  3. When the agent dies, pass the updated Q estimate to the next generation.
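As a concrete illustration only (not code from the paper), the sketch below runs these three steps for one lifetime with a tabular Q estimate over time-augmented observations. The `env` and `reward` objects, the default step size and exploration rate, and all names are assumed interfaces I introduce for the example.

```python
import random
from collections import defaultdict

def lifetime_q_learning(env, reward, n_actions, T, q=None, alpha=0.1, epsilon=0.1):
    """One agent lifetime: epsilon-greedy acting plus Bellman updates on q.

    q is the tabular Q estimate inherited from the past generation, keyed by
    ((raw_observation, t), action); it is returned for the next generation.
    """
    q = defaultdict(float) if q is None else q
    o = env.reset()                        # raw observation o^-_1 (assumed interface)
    for t in range(1, T):
        obs = (o, t)                       # time-augmented observation o_t = (o^-_t, t)
        if random.random() < epsilon:      # epsilon-greedy action selection
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda b: q[(obs, b)])
        o_next = env.step(a)               # assumed to return the next raw observation
        next_obs = (o_next, t + 1)
        target = reward(obs) + max(q[(next_obs, b)] for b in range(n_actions))
        q[(obs, a)] += alpha * (target - q[(obs, a)])   # Eq. (2), undiscounted
        o = o_next
    return q                               # passed on when the agent dies
```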

At first sight, the fact that the Q estimate is updated every timestep seems to contradict my argument that learning only happens across generations. However, for Eq. 2 to be a valid update, the timestep $t$ needs to be part of the observation: the observation $o_{t}$ here is in fact the raw observation $o^{-}_{t}$ augmented by time $t$, i.e., $o_{t}=(o^{-}_{t},t)$. Since the timestep is part of the observation, no observation will be experienced more than once throughout the agent's lifetime, and it makes no difference to the agent whether the Q estimate is updated every timestep, or after its life ends. (The statement does not strictly hold if function approximation is used: an update to $Q_{\theta}(o_{t},a)$ can potentially affect the Q estimate of all other observations. However, this is more a side effect than a desired property.)

It's clear that for an agent to exhibit any sensible behavior, the initial Q estimate it inherits from the past generation is vital. If the agent receives a random initial Q estimate, then its lifelong behavior is bound to be random and meaningless. On the other side of the spectrum, if the agent receives the true Q function, then it will behave optimally. This suggests that if we care about the lifetime behaviour (which includes lifelong learning behavior) of a Q-learning agent, then $Q(o_{t},\cdot)$ is a fundamental signal the agent needs to receive in addition to the scalar reward. In a sense, if the signal represented by the scalar reward is a specification of what the goal is, then the signal represented by the Q estimate is the knowledge past generations have collected about what the goal means for this type of agent. As an analogy, the pain associated with falling to the ground could be the former signal, while the innate fear of heights could be the latter.

From a computational perspective, the separation of these two signals may not be necessary. Both signals can be considered 'annotations' for the observation history that the agent receives along with its observation, and can be incorporated into the concept of reward. The reward signals are no longer restricted to scalars, nor are they necessarily quantities whose cumulative sum is to be maximized; they are just messages in some reward language that 'encode' the knowledge pertaining to an agent's observation history, knowledge that enables the agent to learn continuously throughout its life. Such knowledge may include the goals of the agent, the subgoals that constitute these goals, the heuristics for achieving them, and so on. The reward is then 'decoded' by the learning algorithm, which defines how the agent responds to the reward given the observation history. The learning system should be designed such that by responding to the reward in its intended way, the agent will learn to achieve the goals implied by the reward before its end of life (Figure 2).

To be precise, the reward $r(h^{o}_{t})\in\Sigma$ now belongs to some reward space $\Sigma$. The learning algorithm is a mapping from reward histories to policies. Denoting the set of possible reward histories of length $t$ as $H^{r}_{t}$, and the set of all possible policies at time $t$ as $\Pi_{t}$, the learning algorithm $m$ can be represented by $m=(m_{1},m_{2},\ldots,m_{t},\ldots,m_{T})$, where $m_{t}:H^{r}_{t}\to\Pi_{t}$. The formulation is general, and a learning system formulated as such is not automatically a lifelong learning system. In fact, it subsumes traditional reinforcement learning: the reward space is set to the real numbers ($\Sigma=\mathbb{R}$), and the learning algorithm can be set to any algorithm that converges to a policy that maximizes the expected cumulative reward. Unfortunately, the reward in traditional reinforcement learning does not contain enough information for an agent to learn within its lifetime.
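Under these definitions, a learning system is just a pair of reward language and learning algorithm. The following is a minimal interface sketch of the learning algorithm $m$ as a mapping from reward histories to policies; all names are mine and the concrete response is deliberately left unimplemented.

```python
from typing import Any, Callable, List

Reward = Any                              # an element of the reward space Sigma
Policy = Callable[[List[Any]], Any]       # maps an observation history to an action

class LearningAlgorithm:
    """m = (m_1, m_2, ..., m_T): at each timestep, map the reward history to a policy."""

    def __init__(self):
        self.reward_history: List[Reward] = []

    def respond(self, reward: Reward) -> Policy:
        """m_t: extend the reward history by r_t and output the policy pi_t."""
        self.reward_history.append(reward)
        return self._policy_for(self.reward_history)

    def _policy_for(self, reward_history: List[Reward]) -> Policy:
        raise NotImplementedError         # concrete learning systems define the response
```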

Figure 2: Architecture of a lifelong reinforcement learning system. In contrast to traditional reinforcement learning (Figure 1), the learning algorithm resides inside the agent. The internal environment of the agent can be thought of as a built-in mechanism for the agent-designer to communicate with the agent (through the reward). At each timestep, the learning algorithm receives some message (encoded in the form of a reward $r(h^{o}_{t})$) from the agent's internal environment, and outputs a policy $\pi_{t}$ as a response.

Viewing the reward as a general language, and the learning algorithm as the response to the reward, opens up possibilities for principled ways to embed learning bias such as guidance and intrinsic motivation into the learning system, instead of relying solely on manipulating the scalar reward on an ad-hoc basis. In the rest of the paper, my focus remains on lifelong reinforcement learning, more specifically, on what lifelong reinforcement learning requires of the reward language and the corresponding learning algorithm.

1.3 Reward as Formal Language

Although the term 'language' used above can be understood in its colloquial sense, it can also be understood as the formal term in automata theory. To see this, consider the following deterministic finite automaton $\langle\Sigma,Q,\delta,q_{0},F\rangle$, where:

  • $\Sigma$ is the alphabet of the automaton, and is set to the reward space of the learning system. In other words, the alphabet of this automaton consists of all possible rewards the agent can receive at any single timestep. A string is a sequence of symbols chosen from some alphabet. For this particular automaton, a string is in fact a sequence of rewards, so the notation for reward history $h^{r}_{t}$ is also used to denote a string of length $t$. The set of all strings of length $k$ over $\Sigma$ is denoted as $\Sigma^{k}$, and the set of all strings (of any length) is denoted as $\Sigma^{*}$.

  • $Q$ is the set of states of the automaton. Each state of this automaton is a possible pair of reward history and policies till some timestep $t$. For example, members of $Q$ include:

    $\langle h^{r}_{t=1},\;(\pi_{1})\rangle$
    $\langle h^{r}_{t=2},\;(\pi_{1},\pi_{2})\rangle$
    ...
    $\langle h^{r}_{t=T},\;(\pi_{1},\pi_{2},\ldots,\pi_{T})\rangle$

    for any $\pi_{1}\in\Pi_{1}$, $\pi_{2}\in\Pi_{2}$, ..., $\pi_{T}\in\Pi_{T}$, and $h^{r}_{t=1}\in\Sigma^{1}$, $h^{r}_{t=2}\in\Sigma^{2}$, ..., $h^{r}_{t=T}\in\Sigma^{T}$. In addition, $Q$ has a special 'empty' member $q_{0}$, which corresponds to the initial state before any reward is received.

  • $\delta:(Q\times\Sigma)\to Q$ is the transition function. The transition function corresponds to the learning algorithm of the learning system, so we have $\delta(\langle h^{r}_{t},(\pi_{1},\ldots,\pi_{t})\rangle,\;r_{t+1})=\langle h^{r}_{t+1},\;(\pi_{1},\ldots,\pi_{t},m_{t+1}(h^{r}_{t+1}))\rangle$, where $h^{r}_{t+1}=(h^{r}_{t},r_{t+1})$.

  • $q_{0}$ is the initial state of the automaton, as explained above.

  • $F\subset Q$ is the set of accepting states, which are the desired states of the automaton.

It's not hard to see that this automaton is a model of the learning system described in Section 1.2, with its desired property specified by the accepting states $F$. In this paper, the desired property is that the system be a lifelong learning system, so the accepting states $F$ are the set of $\langle h^{r}_{T},\;(\pi_{1},\pi_{2},\ldots,\pi_{T})\rangle$ pairs that correspond to a lifelong learner. (Recall that an agent's behavior is fully decided by its policy $\pi=(\pi_{1},\pi_{2},\ldots,\pi_{T})$. Therefore, given a reward history $h^{r}_{T}$, the policy is sufficient for us to tell whether the agent is a successful lifelong learner.)

To specify learning objectives, each possible reward $r\in\Sigma$ is assigned some semantics. These semantics implicitly define the set of valid reward sequences $L\subset\Sigma^{*}$. Since $L$ is a subset of $\Sigma^{*}$, it is a language over $\Sigma$. We want to make sure that lifelong learning can be achieved by the learning system abstracted by this automaton for all reward sequences in $L$, or equivalently, that all reward sequences in $L$ lead to accepting states in $F$.
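As a toy rendering of the automaton (purely illustrative, with names of my choosing), the transition function $\delta$ can be written directly from the definition above: a state carries the reward history and the policies produced so far, and each new reward symbol extends both via the learning algorithm.

```python
def make_transition(m):
    """Build delta from a learning algorithm m, where m[t - 1](h_r) returns pi_t.

    A state is a pair (reward_history, policies); the initial state q0 is ((), ()).
    """
    def delta(state, reward):
        h_r, policies = state
        h_r = h_r + (reward,)                    # append r_{t+1} to the reward string
        t = len(h_r)                             # 1-based index of the new reward
        policies = policies + (m[t - 1](h_r),)   # the learning algorithm's response
        return (h_r, policies)
    return delta

# Running delta over a reward string h^r_T reproduces the agent's lifetime sequence
# of policies; whether the final state lies in F is then a property of the pair
# (h^r_T, (pi_1, ..., pi_T)).
```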

2 A Prototype Lifelong Reinforcement Learning System

Designing a lifelong reinforcement learning system involves designing the reward language and the learning algorithm holistically. Intuitively, the reward needs to contain enough information to control the relevant aspects of the learning algorithm, and the learning algorithm in turn needs to ‘interpret’ the reward signal in its intended way. In this section, I aim to provide some insights into the design process with a prototype lifelong reinforcement learning system.

2.1 Reward Language

The main reason lifelong learning is impossible in conventional reinforcement learning is that the learning objective in conventional reinforcement learning is global, in the sense that the goal of the agent is defined in terms of the observation history of its entire life. For a lifelong reinforcement learning agent, the learning objectives should instead be local, meaning that the goals should be defined only for smaller tasks that the agent can encounter multiple times during its lifetime. Once a local goal expires, whether because the goal has been achieved or because the agent has failed to achieve it within a certain time limit, a new local goal (which can potentially be another instantiation of the same goal) ensues. This way, the agent has the opportunity to gather knowledge about each of the goals, and improve upon it, all within one life. Local goals like this are ubiquitous for humans. For example, when a person is hungry, his main concern is probably not the global goal of being happy for the rest of his life; his goal is to have food. After the person is full, he might feel like taking a nap, which is another local goal. In fact, the local goals and the transitions between them seem to embody what we mean by intrinsic motivation.

To be able to specify a series of local goals, the reward in this prototype learning system has two parts: the reward state $r^{s}_{t}\in G$, and the reward value $r^{v}_{t}\in\mathbb{R}$, where $G$ is the set of local goals the agent may have. This form of reward is inspired by the reward machine (Icarte et al., 2018), a Mealy machine for specifying history-dependent reward, but the semantics we assign to the reward will be different. Also note that this Mealy machine bears no relation to the automaton discussed in Section 1.3: the reward machine models the reward, while the automaton in Section 1.3 models the learning system, and takes the reward as input. Each reward state $r^{s}$ corresponds to a local goal. When a local goal (or equivalently, a reward state) expires, the agent receives a numerical reward value $r^{v}$. For all other timesteps (other than the expiration of local goals), the reward value can be considered to take a special NULL value, meaning that no reward value is received. The reward value is an evaluation of the agent's performance in an episode of a reward state, where an episode of a reward state is defined as the time period between the expiration of the previous reward state (exclusive) and the expiration of the reward state itself (inclusive). The reward state can potentially depend on the entire observation history, while the reward value can only depend on the observation history of the episode it is assessing. Overall, the reward is specified by $(r^{s}_{t},r^{v}_{t})=r(h^{o}_{t})$.
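A minimal data structure for this two-part reward might look as follows; the class name is mine, and `None` plays the role of the NULL reward value.

```python
from dataclasses import dataclass
from typing import Hashable, Optional

@dataclass(frozen=True)
class TwoPartReward:
    """Two-part reward: a reward state (local goal) and an optional reward value."""
    state: Hashable                  # r^s_t: the current local goal, an element of G
    value: Optional[float] = None    # r^v_t: set only when a local goal expires; None means NULL

# e.g. TwoPartReward(state=some_goal) while the goal is active; the value field
# carries the numerical r^v at the timestep the goal expires.
```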

The local goals described here are technically similar to subgoals in hierarchical reinforcement learning (Dietterich, 2000; Sutton et al., 1999; Parr & Russell, 1997). However, the term ‘subgoal’ suggests that there is some higher-level goal that the agent needs to achieve, and that the higher-level goal is the true objective the agent needs to optimize. That is not the case here — although it is totally possible that the local goals are designed in such a way that some global goal can be achieved, the agent only needs to optimize the local goals.

The reward language in this prototype system makes two assumptions about the learning algorithm. As long as the two assumptions are met, the learning algorithm is considered to 'interpret' the reward correctly. The first assumption is that the learning algorithm only generates policies that are episode-wise stationary, meaning that $\pi_{t_{1}}=\pi_{t_{2}}$ for any timesteps $t_{1}$ and $t_{2}$ in the same episode of a reward state, and that $\pi_{t_{1}}:O\to A$. This assumption is not particularly restrictive, because in cases where a local goal requires a more complex policy, we can always split the goal into multiple goals (by modifying the reward function) for which the policies are episode-wise stationary. With this assumption, we can use a single policy $\pi_{r^{s}}:O\to A$ to represent the policies at all timesteps within an episode of reward state $r^{s}$. The second assumption is that the learning algorithm keeps a pool of 'elite' policies for each reward state: a policy that led to a high reward value in some episode has the opportunity to enter the pool, and a policy that consistently leads to higher reward values eventually dominates the policy pool. The exact criterion for selection into the pool (e.g., whether to use the expected reward value as the criterion, or the probability of the reward value being higher than a certain threshold, etc.) is not enforced, and is left up to the learning algorithm.

2.2 Learning Algorithm

The learning algorithm in this prototype lifelong learning system is an evolutionary algorithm, adjusted to meet the assumptions made by the reward. The algorithm maintains a policy pool $D_{r^{s}}$ of maximum size $d$ for each reward state $r^{s}\in G$. Each item in the pool is a two-tuple $\langle\pi,r^{v}_{\pi}\rangle$, where $\pi$ is a policy and $r^{v}_{\pi}$ is the reward value of the last episode in which $\pi$ was executed. Conceptually, the algorithm consists of three steps: policy generation, policy execution, and (policy) pool update, which are described below.

Policy Generation

When an episode of reward state $r^{s}$ starts, a policy $\pi_{r^{s}}$ is generated from one of the following methods with probability $p_{1}$, $p_{2}$, $p_{3}$, respectively:

  1. Randomly sample a policy from the policy pool $D_{r^{s}}$, and mutate the policy.

  2. Randomly sample a policy from $D_{r^{s}}$ and keep it as is. Remove the sampled policy from $D_{r^{s}}$. This is to re-evaluate a policy in the pool: since the transition of observations might be stochastic, the same policy does not necessarily always result in the same reward value.

  3. Randomly generate a new policy $\pi_{r^{s}}:O\to A$ from scratch. This is to keep the diversity of the policy pool.

$p_{1}$, $p_{2}$ and $p_{3}$ should sum up to $1$, and are hyper-parameters of the algorithm.

Policy Execution

Execute the generated policy $\pi_{r^{s}}$ until a numerical reward value $r^{v}$ is received.

Pool Update

If the policy pool is not full, insert $\langle\pi_{r^{s}},r^{v}\rangle$ into the pool. Otherwise, compare $r^{v}$ with the minimum reward value in the pool. If $r^{v}$ is greater than or equal to the minimum reward value, replace the policy and reward value pair that has the minimum reward value with $\langle\pi_{r^{s}},r^{v}\rangle$.
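Putting the three steps together, here is a compact sketch of one possible implementation. All names, the default pool size, and the default probabilities are illustrative; the mutation operator and the random-policy generator are passed in as functions, and policy execution is left to an outer loop.

```python
import random

class EvolutionaryPoolLearner:
    def __init__(self, new_policy, mutate, pool_size=10, p1=0.6, p2=0.2, p3=0.2):
        self.new_policy = new_policy      # () -> a fresh random policy pi: O -> A
        self.mutate = mutate              # policy -> mutated policy
        self.pool_size = pool_size        # maximum pool size d
        self.p = (p1, p2, p3)             # must sum to 1
        self.pools = {}                   # reward state r^s -> list of (policy, r^v)

    def generate(self, reward_state):
        """Policy generation at the start of an episode of reward_state."""
        pool = self.pools.setdefault(reward_state, [])
        choice = random.choices((1, 2, 3), weights=self.p)[0]
        if pool and choice == 1:          # 1. mutate a sampled elite policy
            policy, _ = random.choice(pool)
            return self.mutate(policy)
        if pool and choice == 2:          # 2. re-evaluate: sample and remove from the pool
            entry = random.choice(pool)
            pool.remove(entry)
            return entry[0]
        return self.new_policy()          # 3. fresh random policy for diversity

    def update(self, reward_state, policy, reward_value):
        """Pool update once the episode's reward value r^v is received."""
        pool = self.pools.setdefault(reward_state, [])
        if len(pool) < self.pool_size:
            pool.append((policy, reward_value))
            return
        worst = min(range(len(pool)), key=lambda i: pool[i][1])
        if reward_value >= pool[worst][1]:
            pool[worst] = (policy, reward_value)
```

An outer loop would call `generate` at the start of each reward-state episode, execute the returned policy until a reward value arrives (Policy Execution), and then call `update`.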

2.3 Embedding Learning Bias

Learning bias in reinforcement learning systems refers to the explicit or implicit assumptions made by the learning algorithm about the policy. Our assumption that the policy is episode-wise stationary is an example of learning bias. Arguably, a good learning bias is as important as a good learning algorithm; it is therefore important that mechanisms are provided to embed learning bias into the learning system.

A straightforward way to embed learning bias into the above lifelong learning system is through the policy generation process. This includes how existing policies are mutated, and what distribution new policies are sampled from. The learning bias provided this way does not depend on the agent's observation and reward history, and is sometimes implicit (e.g., the learning bias introduced by using a neural network of a particular architecture).

Another type of learning bias common in reinforcement learning is guidance, the essence of which can be illustrated by Figure 3. Suppose that in some reward state, the agent starts from observation $o$ and the goal is to reach observation $o^{\prime}$ (for the sake of terminological convenience, we pretend that the observations here are environment states). Prior knowledge indicates that to reach $o^{\prime}$, visiting $o^{\prime\prime}$ is a good heuristic, but reaching $o^{\prime\prime}$ itself has little or no merit. In other words, we would like to encourage the agent to visit and explore around $o^{\prime\prime}$ more frequently (than other parts of the observation space) until a reliable policy to reach $o^{\prime}$ is found.

Figure 3: A simplistic abstraction of guidance in reinforcement learning.

To provide guidance to the agent in the prototype lifelong learning system, we can utilize the property of the learning algorithm that policies leading to high reward values will enter the policy pool. Once a policy enters the pool, it has the opportunity to be sampled (possibly with mutation) and executed. Therefore, we just need to assign a higher reward value for reaching $o^{\prime\prime}$ (before the expiration of the reward state) than for reaching neither $o^{\prime}$ nor $o^{\prime\prime}$. Also important is the ability to control the extent to which the region around $o^{\prime\prime}$ is explored. To achieve this, recall that the learning algorithm occasionally re-evaluates policies in the policy pool. If we assign a lower reward value for reaching $o^{\prime\prime}$ with some probability, we can prevent the policy pool from being overwhelmed by policies that only lead to $o^{\prime\prime}$. In other words, the reward value for reaching $o^{\prime\prime}$ should have multiple candidates. Letting $r^{v}(O-\{o^{\prime},o^{\prime\prime}\})$ denote the reward value for an episode in which the agent reaches neither $o^{\prime}$ nor $o^{\prime\prime}$, and $r^{v}(o^{\prime})$ denote the reward value for reaching $o^{\prime}$, we can set the reward value $r^{v}(o^{\prime\prime})$ for reaching $o^{\prime\prime}$ as:

$r^{v}(o^{\prime\prime}) := \begin{cases} a, & \text{with probability } p \\ b, & \text{with probability } 1-p \end{cases}$

where $b < r^{v}(O-\{o^{\prime},o^{\prime\prime}\}) < a < r^{v}(o^{\prime})$. The probability $p$ controls how frequently the region around $o^{\prime\prime}$ is explored compared to the other parts of the observation space. (Note that the word 'probability' here should be interpreted as a 'long-run proportion', and therefore the reward value need not be truly stochastic; e.g., we can imagine that the reward has a third component which is the state of a pseudo-random generator.)
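As a small sketch (names are mine), the stochastic candidate values above can be realized with a single random draw, with the ordering constraint on $a$ and $b$ left to the reward designer:

```python
import random

def guidance_reward_value(a, b, p):
    """Reward value for reaching o'': a with probability p, b otherwise.

    The designer should choose b < r^v(neither o' nor o'') < a < r^v(o') so that
    reaching o'' is encouraged, but cannot permanently dominate the policy pool
    once the corresponding policies are re-evaluated.
    """
    return a if random.random() < p else b
```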

3 Experiment

Now we evaluate the behaviour of the prototype lifelong reinforcement learning system. The source code of the experiments can be found at https://gitlab.com/lifelong-rl/lifelongRL_gridworld.

3.1 Environment

Consider a gridworld agent whose life revolves around getting food and taking the food back home for consumption. The agent lives in an 11 by 11 gridworld shown in Figure 4. The shaded areas are barriers that the agent cannot go through. Some potential positions of interest are marked with letters: F is the food source and is assumed to have an infinite supply of food; H is the agent's home. To get to the food source from home, and to carry the food home, the agent must pass through one of the two tunnels; the tunnel on the left is marked with L and the tunnel on the right is marked with R. At each timestep, the agent observes its position in the gridworld as well as a signal indicating whether it is in one of the four positions of interest (and if yes, which), and chooses from one of the four actions: UP, RIGHT, DOWN and LEFT. Each action deterministically takes the agent to the adjacent grid cell in the corresponding direction, unless the destination is a barrier, in which case the agent remains in its original position. The agent starts from home at the beginning of its life, and needs to go to the food source to get food. Once it reaches the food source, it needs to carry the food back home. This process repeats until the agent dies. The lifespan of the agent is assumed to be 100 million timesteps. The agent is supposed to learn to reliably achieve these two local goals within its lifetime.

Figure 4: Gridworld environment.

3.2 Learning System Setup

The reward state in this experiment is represented by a conjunction of Boolean variables. For example, if three Boolean variables $A$, $B$ and $C$ are defined, then the reward state would be of the form $r^{s}=A\land B\land C$ or $r^{s}=A\land\neg B\land C$, etc. At the bare minimum, one Boolean variable $\mathtt{GET\_FOOD}$ needs to be defined for this agent, where $\mathtt{GET\_FOOD}$ being true corresponds to the local goal of going to the food source, and $\neg\mathtt{GET\_FOOD}$ corresponds to the local goal of carrying the food home. The agent receives a reward value of $+1$ if $\mathtt{GET\_FOOD}$ is true and the agent reaches $\mathtt{F}$, in which case the Boolean variable $\mathtt{GET\_FOOD}$ transitions to false. Similarly, the agent receives a reward value of $+1$ if $\neg\mathtt{GET\_FOOD}$ is true and the agent reaches $\mathtt{H}$, in which case $\mathtt{GET\_FOOD}$ transitions to true. On top of $\mathtt{GET\_FOOD}$, we define another Boolean variable $\mathtt{TIMED\_OUT}$, which indicates whether the agent has exceeded a certain time limit for trying to get to the food source, or for trying to carry the food home. If the reward state is $\neg\mathtt{TIMED\_OUT}\land\mathtt{GET\_FOOD}$, and the agent fails to reach $\mathtt{F}$ within the time limit, it receives a reward value of $-1$, and the reward state transitions to $\mathtt{TIMED\_OUT}\land\mathtt{GET\_FOOD}$. From $\mathtt{TIMED\_OUT}\land\mathtt{GET\_FOOD}$, if the agent still fails to get to $\mathtt{F}$ within the time limit, it receives a reward value of $0$. The agent remains in $\mathtt{TIMED\_OUT}\land\mathtt{GET\_FOOD}$ until it reaches $\mathtt{F}$, at which point the reward state transitions to $\neg\mathtt{TIMED\_OUT}\land\neg\mathtt{GET\_FOOD}$ (and the agent receives a $+1$ reward value as already mentioned). For the case when $\mathtt{GET\_FOOD}$ is false, the reward transition is defined similarly. Throughout the experiments, the time limit is set to 24, which is enough for the agent to accomplish either of the local goals. We refer to this reward design as the base case.
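The base case can be summarized as a small transition function over the two Boolean variables. The sketch below is my own rendering of the rules above; the episode step counter, the position signal, and their resetting are assumed to be handled by the caller.

```python
def base_case_reward(get_food, timed_out, position, steps_in_episode, time_limit=24):
    """Return (next_get_food, next_timed_out, reward_value) for one timestep.

    reward_value is None except when a local goal expires (achieved or timed out).
    Positions are assumed to be reported as "F", "H", "L", "R", or None.
    """
    if get_food and position == "F":            # reached the food source: +1, goal flips
        return (False, False, +1.0)
    if not get_food and position == "H":        # brought the food home: +1, goal flips
        return (True, False, +1.0)
    if steps_in_episode >= time_limit:          # local goal expires by timeout
        value = 0.0 if timed_out else -1.0      # -1 on the first timeout, 0 afterwards
        return (get_food, True, value)
    return (get_food, timed_out, None)          # goal still active, NULL reward value
```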

Unfortunately, even for a toy problem like this, learning can be difficult if no proper learning bias is provided. Since there are 4 actions and 74 possible positions, the number of possible episode-wise stationary policies is $4^{74}$ for each reward state. Among those policies, very few can achieve the local goals. If the policy generation and mutation are purely random, it will take a long time for the agent to find a good policy.

Biased Policy

The first learning bias we consider is biased policy, in contrast to the unbiased policy case where policy generation and mutation are purely random. More specifically, we make the policy generation process biased towards policies that take the same action for similar observations. This encourages policies that head consistently in one direction, and discourages those that indefinitely roam around between adjacent positions.
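One simple way to realize such a bias, shown below as an illustrative sketch and not necessarily the scheme used in the experiments, is to sample actions at a coarser granularity than individual grid cells, so that nearby positions share the same action.

```python
import random

def biased_random_policy(positions, actions, block=4):
    """Sample a policy that assigns the same action to positions falling in the
    same coarse block of the grid, encouraging consistent movement directions.

    positions: iterable of (x, y) grid cells; actions: list of available actions.
    The block size is a made-up knob controlling how strongly actions are shared.
    """
    block_action = {}
    policy = {}
    for (x, y) in positions:
        key = (x // block, y // block)          # all cells in a block share one action
        if key not in block_action:
            block_action[key] = random.choice(actions)
        policy[(x, y)] = block_action[key]
    return policy
```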

Progress-Based Guidance

The second learning bias we consider is guidance based on the agent's progress. Different from the base case, where the agent always receives a $0$ (if $\mathtt{TIMED\_OUT}$ is true) or $-1$ (if $\mathtt{TIMED\_OUT}$ is false) reward value when it fails to achieve the local goal within the time limit, the agent now has some probability $p=0.8$ of receiving a reward value proportional to the Manhattan distance $d$ it has traveled since the beginning of the episode. To be precise:

$r^{v} := \begin{cases} 0.01d & \text{with probability } p \\ \text{same as the base case} & \text{with probability } 1-p \end{cases}$

This way, policies leading to more progress (albeit not necessarily towards the local goal) will be encouraged.

Sub-Optimal Guidance

Finally, we consider a case of sub-optimal guidance that encourages the agent to explore a sub-optimal trajectory. As we have mentioned, both reaching the food source from home and carrying the food home require the agent to go through one of the two tunnels. However, if the agent goes through the left tunnel, it has to travel a longer distance. Suppose that we prefer the agent to take the shorter route, but we only know the route that goes through the left tunnel; as a result, we sub-optimally encourage the agent to explore the left tunnel. To guide the agent to take the left tunnel, the Boolean variable $\mathtt{VISITED\_LEFT}$ is introduced as an indicator of whether L has been visited since the last visit to F or H. Now we have $2^{3}=8$ elements in the reward space, corresponding to 8 possible local goals. The reward transition differs from the base case in that if the agent has already visited L when the local goal $\mathtt{GET\_FOOD}\land\neg\mathtt{TIMED\_OUT}\land\neg\mathtt{VISITED\_LEFT}$ or $\neg\mathtt{GET\_FOOD}\land\neg\mathtt{TIMED\_OUT}\land\neg\mathtt{VISITED\_LEFT}$ times out, $\mathtt{VISITED\_LEFT}$ becomes true, and the agent receives a reward value of $+0.6$ with probability $0.8$, and $-0.2$ with probability $0.2$. To express our preference for the shorter route, the agent receives a reward value of $+0.8$ (instead of $+1$) when it reaches F (when $\mathtt{GET\_FOOD}$ is true) or H (when $\mathtt{GET\_FOOD}$ is false) through the left tunnel.

3.3 Results

Figure 5: Learning curves for unbiased/biased policy with progress-based guidance, averaged over 20 runs. (a) unbiased policy, progress-based guidance; (b) biased policy, progress-based guidance.

Figure 5 shows the learning curves for reward state $\mathtt{GET\_FOOD}\land\neg\mathtt{TIMED\_OUT}$ with progress-based guidance. The $x$-axis is the timestep (in millions), and the $y$-axis is the percentage of times the agent transitions into a particular next reward state starting from $\mathtt{GET\_FOOD}\land\neg\mathtt{TIMED\_OUT}$. A next reward state of $\neg\mathtt{GET\_FOOD}\land\neg\mathtt{TIMED\_OUT}$ means that the agent successfully reached F within the time limit, and a next reward state of $\mathtt{GET\_FOOD}\land\mathtt{TIMED\_OUT}$ means that the agent failed to do so. As we can see, with the unbiased policy, it took the agent around 25 million timesteps to achieve a 100% success rate, while with the biased policy, this only took around 8 million timesteps.

Figure 6: Learning curves for unbiased/biased policy with sub-optimal guidance, averaged over 20 runs. (a) unbiased policy, sub-optimal guidance; (b) biased policy, sub-optimal guidance.

Figure 6 shows the learning curves for reward state $\mathtt{GET\_FOOD}\land\neg\mathtt{TIMED\_OUT}\land\neg\mathtt{VISITED\_LEFT}$ with the sub-optimal guidance described in Section 3.2. Similar to Figure 5, the $x$-axis is the timestep (in millions), and the $y$-axis is the percentage of times the agent transitioned into a particular next reward state starting from $\mathtt{GET\_FOOD}\land\neg\mathtt{TIMED\_OUT}\land\neg\mathtt{VISITED\_LEFT}$. A next reward state of $\neg\mathtt{GET\_FOOD}\land\neg\mathtt{TIMED\_OUT}\land\neg\mathtt{VISITED\_LEFT}$ means that the agent successfully reached F within the time limit; a next reward state of $\mathtt{GET\_FOOD}\land\mathtt{TIMED\_OUT}\land\mathtt{VISITED\_LEFT}$ means that the agent failed to reach the food source, but was able to find its way to the left tunnel; and a next reward state of $\mathtt{GET\_FOOD}\land\mathtt{TIMED\_OUT}\land\neg\mathtt{VISITED\_LEFT}$ means that the agent was able to reach neither the left tunnel nor the food source within the time limit. As we can see, for both the unbiased and the biased policy, learning is much slower than with progress-based guidance. This is likely due to the much sparser guidance signal: the agent receives guidance only when it reaches the left tunnel. For the unbiased policy case, a 100% success rate was not achieved within 100 million timesteps, but we can clearly see that exploration around the left tunnel was encouraged as intended. For the biased policy case, the agent was able to reach a 100% success rate after 50 million timesteps. But was the agent able to figure out the optimal route, or did it only learn to take the sub-optimal route as guided? Recall that the agent receives a reward value of $+1$ if it takes the optimal route, and a reward value of $+0.8$ if it takes the sub-optimal route. As shown in Figure 7, although the agent was taking the sub-optimal route at 50 million timesteps, when it had just learned to reach the food source reliably, it was eventually able to figure out the optimal route by 90 million timesteps.

Figure 7: Reward value for $\mathtt{GET\_FOOD}\land\neg\mathtt{TIMED\_OUT}\land\neg\mathtt{VISITED\_LEFT}$ (biased policy with sub-optimal guidance, averaged over 20 runs).

4 Conclusions

Lifelong reinforcement learning is sometimes viewed as a multi-task reinforcement learning problem (Abel et al., 2018), where the agent must learn to solve tasks sampled from some distribution $\mathcal{D}$. The agent is expected to (explicitly or implicitly) discover the relation between tasks, and generalize its policy to unseen tasks from $\mathcal{D}$. The focus is therefore on the transfer learning (Taylor & Stone, 2009) and continual learning (Ring, 1998) aspects of lifelong reinforcement learning.

In this paper, I provided a systems view on lifelong reinforcement learning. In particular, I showed that the reward in a lifelong reinforcement learning system can be a general language, and that the language needs to be designed holistically with the learning algorithm. A prototype lifelong reinforcement learning system was given, with an emphasis on how learning bias can be embedded into the learning system through the synergy of the reward language and the learning algorithm.

Acknowledgements

The author would like to thank Gaurav Sharma (Borealis AI) for his comments on a draft of the paper.

References

  • Abel et al. (2018) Abel, D., Arumugam, D., Lehnert, L., and Littman, M. L. State abstractions for lifelong reinforcement learning. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pp.  10–19, 2018.
  • Dietterich (2000) Dietterich, T. G. Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artif. Intell. Res., 13:227–303, 2000.
  • Icarte et al. (2018) Icarte, R. T., Klassen, T. Q., Valenzano, R. A., and McIlraith, S. A. Using reward machines for high-level task specification and decomposition in reinforcement learning. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pp.  2112–2121, 2018.
  • Parr & Russell (1997) Parr, R. and Russell, S. J. Reinforcement learning with hierarchies of machines. In Advances in Neural Information Processing Systems 10, [NIPS Conference, Denver, Colorado, USA, 1997], pp.  1043–1049, 1997.
  • Ring (1998) Ring, M. B. Child: A first step towards continual learning. In Learning to Learn, pp. 261–292. 1998. doi: 10.1007/978-1-4615-5529-2_11. URL https://doi.org/10.1007/978-1-4615-5529-2_11.
  • Singh et al. (2004) Singh, S. P., Barto, A. G., and Chentanez, N. Intrinsically motivated reinforcement learning. In Advances in Neural Information Processing Systems 17 [Neural Information Processing Systems, NIPS 2004, December 13-18, 2004, Vancouver, British Columbia, Canada], pp.  1281–1288, 2004.
  • Sutton & Barto (2018) Sutton, R. S. and Barto, A. G. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018.
  • Sutton et al. (1999) Sutton, R. S., Precup, D., and Singh, S. P. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artif. Intell., 112(1-2):181–211, 1999.
  • Taylor & Stone (2009) Taylor, M. E. and Stone, P. Transfer learning for reinforcement learning domains: A survey. J. Mach. Learn. Res., 10:1633–1685, 2009.
  • Watkins & Dayan (1992) Watkins, C. J. C. H. and Dayan, P. Technical note: Q-learning. Machine Learning, 8:279–292, 1992.
  • White (1982) White, D. Multi-objective infinite-horizon discounted markov decision processes. Journal of Mathematical Analysis and Applications, 89(2):639 – 647, 1982. ISSN 0022-247X.