Provable Reset-free Reinforcement Learning by No-Regret Reduction
Abstract
Reinforcement learning (RL) has so far seen limited real-world application. One key challenge is that typical RL algorithms heavily rely on a reset mechanism to sample proper initial states; these reset mechanisms, in practice, are expensive to implement due to the need for human intervention or heavily engineered environments. To make learning more practical, we propose a generic no-regret reduction to systematically design reset-free RL algorithms. Our reduction turns the reset-free RL problem into a two-player game. We show that achieving sublinear regret in this two-player game implies learning a policy that has both sublinear performance regret and a sublinear total number of resets in the original RL problem. This means that the agent eventually learns to perform optimally and avoid resets. To demonstrate the effectiveness of this reduction, we design an instantiation for linear Markov decision processes, which is the first provably correct reset-free RL algorithm.
1 Introduction
Reinforcement learning (RL) enables an artificial agent to learn problem-solving skills directly through interactions. However, RL is notorious for its sample inefficiency. Successful stories of RL so far are mostly limited to applications where a fast and accurate simulator of the world is available for collecting large amounts of samples (like in games). Real-world RL, such as robot learning, remains a challenging open question.
One key obstacle to scaling up data collection in real-world RL problems is the need for resetting the agent. The ability to reset the agent to proper initial states plays an important role in typical RL algorithms, as it affects which region the agent can explore and whether the agent can recover from its past mistakes (Kakade & Langford, 2002). In most settings, completely avoiding resets without prior knowledge of the reset states or environment is impossible. In the absence of a reset mechanism, agents may get stuck in absorbing states, such as those where they have damaged themselves or irreparably altered the learning environment. For instance, a robot learning to walk would inevitably fall before perfecting the skill, and timely intervention is needed to prevent damaging the hardware and to return the robot to a walkable configuration. Another example is a robot manipulator learning to stack three blocks on top of each other. Unrecoverable states would include the robot knocking a block off the table, or the robot smashing itself forcefully into the table. A reset would then reconfigure the scene to a meaningful initial state from which the robot can continue learning.
Although resetting is necessary for real-world RL, it is non-trivial to implement. Unlike in simulation, a real-world agent (e.g., a robot) cannot be reset to an arbitrary initial state at the click of a button. Resetting in the real world is expensive, as it usually requires constant human monitoring and intervention. Normally, a person would need to oversee the entire learning process and manually reset the agent (e.g., a robot) to a meaningful starting state before it enters an unrecoverable state where the problem can no longer be solved. Sometimes automatic resetting can be implemented by cleverly engineering the physical learning environment (Gupta et al., 2021), but this is not always feasible.
One approach to making real-world RL more cost-efficient is reset-free RL. The goal of reset-free RL is not to completely remove resets, but to have an agent learn how to perform well while minimizing the number of external resets required. Some examples of problems that have been approached in reset-free RL include agents learning dexterity skills, such as picking up an item or inserting a pipe, and learning how to walk (Ha et al., 2020; Gupta et al., 2021). While there have been numerous works proposing reset-free RL algorithms using approaches such as multi-task learning (Gupta et al., 2021; Ha et al., 2020), learning a reset policy (Eysenbach et al., 2018; Sharma et al., 2022), and skill-space planning (Lu et al., 2020), to our knowledge, there has not been any work with provable guarantees.
In this work, we take the first step by providing a provably correct framework to design reset-free RL algorithms. Our framework is based on the idea of a no-regret reduction. First, we reduce the reset-free RL problem to a sequence of constrained Markov decision processes (CMDPs) with an adaptive initial state sequence. Using the special structure of reset-free RL, we establish the existence of a common saddle-point for these CMDPs. Interestingly, we show such a saddle-point exists without using the typical Slater’s condition for strong duality and despite the fact that CMDPs with different initial states in general do not share a common Markovian optimal policy. Then, we derive our main no-regret reduction, which further turns this sequence into a two-player game between a primal player (updating the Markovian policy) and a dual player (updating the Lagrange multiplier function of the CMDPs) to solve for the common saddle-point. We show that if no regret is achieved in this game, then the regret of the original RL problem and the total number of required resets are both provably sublinear. This means the agent eventually learns to perform optimally and avoid resets. Using this reduction, we design a reset-free RL algorithm instantiation under the linear MDP assumption, using learning with upper confidence bounds as the baseline algorithm for the primal player and projected gradient descent for the dual player. Our algorithm achieves $\tilde{O}(\sqrt{d^3 H^4 K})$ regret and $\tilde{O}(\sqrt{d^3 H^4 K})$ resets with high probability, where $d$ is the feature dimension, $H$ is the length of an episode, and $K$ is the total number of episodes.[1] [1] In the tabular MDP setting, our bounds on the regret and total resets become $\tilde{O}(\sqrt{|\mathcal{S}|^3 |\mathcal{A}|^3 H^4 K})$, where $|\mathcal{S}|$, $|\mathcal{A}|$ denote the size of the state and action spaces, respectively.
2 Related Work
Reset-free RL, despite being a promising avenue to tackle real-world RL, is a relatively new concept in the literature. The work thus far, to our knowledge, has been limited to approaches without theoretical guarantees but with only empirical verification. One such work takes the approach of learning a reset policy (Eysenbach et al., 2018; Sharma et al., 2022). The idea is to learn two policies concurrently: one to maximize reward, and one to bring the agent back to a reset-free initial state if it encounters a reset state (a state which normally requires human intervention). This approach removes the need for manual resets; however, it requires knowledge of the reset states (Eysenbach et al., 2018). Sharma et al. (2022) avoid this assumption but assume access to demonstrations of how to accomplish the goal and a fixed initial state distribution.
Using multi-task learning is another way to perform resets without human intervention. Here, the agent learns to solve multiple tasks instead of just maximizing the reward of one task. The hope is that a combination of the learned tasks can achieve the main goal, and some tasks can perform natural resets for others. This approach breaks down the reset process and (possibly) makes it easier to learn. However, the order in which tasks should be learned needs to be provided manually (Gupta et al., 2021; Ha et al., 2020).
A related problem is infinite-horizon non-episodic RL with provable guarantees (see Wei et al. (2020, 2019); Dong et al. (2019) and the references therein), as this problem is also motivated by avoiding resets. In this setting, there is only one episode that goes on indefinitely. The objective is to maximize cumulative reward, and progress is usually measured in terms of regret with the comparator being an optimal policy. However, compared with the reset-free RL setting we study here, extra assumptions, such as the absence or knowledge of absorbing states, are usually required to achieve sublinear regret. In addition, the objective does not necessarily lead to a minimization of resets, as the agent can leverage reset transitions to maximize reward. In reset-free RL, minimizing/avoiding resets is a priority. Learning in infinite-horizon CMDPs (where one possible constraint could be minimizing resets) has been studied (Zheng & Ratliff, 2020; Jain et al., 2022), but to our knowledge, all such works make strong assumptions such as a fixed initial state distribution or known dynamics. None of these assumptions are feasible for most real-world RL settings. Enforcing a fixed initial state distribution or removing absorbing states in theory oftentimes requires physically resetting a real-world agent to satisfy those desired mathematical conditions. In this paper, we focus on an episodic setting of reset-free RL (see Section 3); a non-episodic formulation of reset-free RL could be an interesting direction for future research.
Another related problem is safe RL, which involves solving the standard RL problem while adhering to some safety constraints. We can think of reset states in reset-free RL as unsafe states in safe RL. There has been a lot of work in safe RL, with approaches such as utilizing a baseline safe (but not optimal) policy (Huang et al., 2022; Garcia Polo & Fernandez Rebollo, 2011), pessimism (Amani & Yang, 2022), and shielding (Alshiekh et al., 2018; Wagener et al., 2021). These works have promising empirical results but usually require extra assumptions such as a given baseline policy or knowledge of unsafe states. There are also provable safe RL algorithms. To our knowledge, all involve framing safe RL as a CMDP. Here, the safety constraints are modeled as a cost, and the overall goal is to maximize performance while keeping the cost below a threshold. Some of the works explicitly study safe RL while others study learning in CMDPs more generally. They commonly have provable guarantees of either sublinear regret and constraint violations, or sublinear regret with zero constraint violation (Wei et al., 2021; HasanzadeZonuzy et al., 2021; Qiu et al., 2020; Wachi & Sui, 2020; Efroni et al., 2020; Ghosh et al., 2022; Ding et al., 2021). However, most works (including all the aforementioned ones) consider the episodic case where the initial state distribution of each episode is fixed. This prevents a natural extension to reset-free learning, as human intervention would be required to reset the environment at the end of each episode. In technical terms, this is the difference between solving a sequence of the same CMDP versus solving a sequence of different CMDPs. Works that allow for arbitrary initial states require fairly strong assumptions, such as knowledge (and the existence) of safe actions from each state (Amani et al., 2021). In our work, we utilize some techniques from provable safe RL for reset-free RL. However, it is important to note that safe RL and reset-free RL are fundamentally different, albeit related, problems. Safe RL aims to ensure the safety of an agent. On the other hand, reset-free RL aims to avoid reset states, which encompass not only unsafe states but all undesirable ones, and to do so for a varying sequence of initial states, since the agent cannot be freely reset.
We weaken the typical assumptions of current approaches in empirical reset-free RL, infinite-horizon RL, and safe RL by dropping any requirement of knowledge of undesirable states or of demonstrations, and by allowing arbitrary initial state sequences that admit reset-free policies. Considering arbitrary initial state sequences, where initial states are potentially correlated with past behaviors, is not only necessary for the reset-free RL setting, but also allows for extensions to both lifelong and multi-task learning. We achieve this important relaxation on the initial state sequence with a key observation that identifies a shared Markovian policy saddle-point across CMDPs whose constraint imposes zero resets. This observation is new to our knowledge, and it is derived from the particular structure of reset-free RL; we note that, in general, CMDPs with different initial states do not admit shared Markovian policy saddle-points. Finally, by the analogy between safe states and reset states, on the technical side, our framework and algorithm can also be viewed as the first provable safe RL algorithm that allows for arbitrary initial state sequences without strong assumptions.
While our main contribution is a generic reduction technique to design reset-free RL algorithms, we also instantiate the framework and achieve regret and constraint violation bounds that are still comparable to the above works when specialized to their setting. Under the linear MDP assumption, our algorithm achieves $\tilde{O}(\sqrt{d^3 H^4 K})$ regret and $\tilde{O}(\sqrt{d^3 H^4 K})$ violation (equivalently, the number of resets in reset-free RL), which is asymptotically equivalent to the bounds of Ghosh et al. (2022) and comparable to those of Ding et al. (2021) for a fixed initial state.
In summary, our contributions are as follows.
1. We create a framework to design provably correct reset-free RL algorithms via a reduction, first to a sequence of CMDPs with an adaptive initial state sequence, and then to a two-player game. We prove that achieving sublinear regret in this two-player game implies learning a policy that achieves sublinear performance regret and a sublinear number of resets in the original problem.
2. On the technical side, we show that such a reduction can be constructed without using the typical Slater’s condition for strong duality, despite the fact that CMDPs with different initial states in general do not share a common Markovian optimal policy.
3. We instantiate the framework under the linear MDP setting as a proof of concept, creating, to our knowledge, the first provably correct reset-free RL algorithm that achieves sublinear regret and resets.
3 Preliminary
We consider episodic reset-free RL: in each episode, the agent aims to optimize for a fixed-horizon return starting from the last state of the previous episode or some state that the agent was reset to if reset occurred.
Problem Setup and Notation
Formally, we can define episodic reset-free RL as a Markov decision process (MDP), $\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, r, H)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $P$ is the transition dynamics, $r$ is the reward function, and $H$ is the task horizon. We assume $P$ and $r$ are unknown. We allow $\mathcal{S}$ to be large or continuous but assume $\mathcal{A}$ is relatively small so that operations such as $\max_{a \in \mathcal{A}}$ can be performed. We designate the set of reset states, i.e., any states where human intervention would normally have been required, as $\mathcal{S}_{\text{reset}} \subseteq \mathcal{S}$; we do not assume that the agent has knowledge of $\mathcal{S}_{\text{reset}}$. We also do not assume that there is a reset-free action at each state, as opposed to Amani et al. (2021). Therefore, the agent needs to plan for the long term to avoid resets. We assume $r: \mathcal{S} \times \mathcal{A} \to [0,1]$, and for simplicity, we assume $r$ is deterministic. However, we note that it would be easy to extend this to the setting where rewards are stochastic.
The agent interacts with the environment for $K$ total episodes. Following the convention of episodic problems, we suppose the state space is layered: a state at time $h$ is factored into two components $s_h = (\bar{s}_h, h)$, where $\bar{s}_h$ denotes the time-invariant part. A reset happens at some time $h$ if the time-invariant part of $s_h$ satisfies $\bar{s}_h \in \mathcal{S}_{\text{reset}}$. The initial state of the next episode will then be $(\bar{s}', 1)$, where $\bar{s}'$ is sampled from an unknown state distribution. In a given episode, if a reset happens, we designate the time step at which it occurs as $t^{\dagger}$. If there is no reset in an episode, the initial state of the next episode is the last state of the current episode, i.e., $s_{k+1,1} = (\bar{s}_{k,H}, 1)$ for episode $k+1$ if $\bar{s}_{k,h} \notin \mathcal{S}_{\text{reset}}$ for all $h$ in episode $k$.[2] [2] This setup covers reset-free multi-task or lifelong RL problems that are modeled as contextual MDPs: we can treat each time-invariant part as containing a context that stays constant within an episode. If no reset happens, the initial state of episode $k+1$ inherits the time-invariant part of the last state of episode $k$, while the new context may depend on the current context. Therefore, the initial state sequence is adaptive. This sequence is necessary to consider, since in reset-free RL we want to avoid resetting, including after each episode.
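To make this interaction protocol concrete, we sketch it below in Python; the environment and agent interfaces (`env.step`, `agent.act`, `sample_reset_state`, etc.) are illustrative stand-ins rather than part of the formal setup, and the membership test `in_reset_set` is of course unknown to the learner.

```python
def run_reset_free_rl(env, agent, K, H, sample_reset_state, in_reset_set):
    """Schematic of the episodic reset-free protocol: episode k+1 starts from the
    last state of episode k, unless a reset occurred, in which case it starts
    from a freshly sampled reset state."""
    s = env.initial_state()              # initial state of the very first episode
    total_resets = 0
    for k in range(K):
        agent.begin_episode(k, s)
        for h in range(H):
            a = agent.act(s, h)
            s_next, r = env.step(s, a)
            agent.observe(s, a, r, s_next, h)
            if in_reset_set(s_next):     # reset mechanism triggers at time t^dagger
                total_resets += 1
                s = sample_reset_state() # next episode starts from the reset state
                break                    # no further reward is collected this episode
            s = s_next                   # otherwise carry the state forward
        # if no reset occurred, s is the last state of episode k and becomes
        # the initial state of episode k + 1
    return total_resets
```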
We denote the set of Markovian policies as $\Pi$ and a policy as $\pi = \{\pi_h\}_{h=1}^{H}$ with $\pi_h: \mathcal{S} \to \Delta(\mathcal{A})$. We define the state value function and the state-action value function under $\pi$ as
$$V^{\pi}_h(s) = \mathbb{E}^{\pi}\Big[\textstyle\sum_{t=h}^{H} r(s_t, a_t)\,\mathbf{1}\{t < t^{\dagger}\} \,\Big|\, s_h = s\Big], \qquad Q^{\pi}_h(s,a) = \mathbb{E}^{\pi}\Big[\textstyle\sum_{t=h}^{H} r(s_t, a_t)\,\mathbf{1}\{t < t^{\dagger}\} \,\Big|\, s_h = s, a_h = a\Big], \tag{1}$$
where the expectation is over trajectories generated by $\pi$, and we recall $t^{\dagger}$ is the time step at which the agent enters $\mathcal{S}_{\text{reset}}$ (if at all).
Objective
The overall goal is for the agent to learn a Markovian policy to maximize its cumulative reward while avoiding resets. Therefore, our performance measures are as follows (we seek to minimize both quantities):
$$\text{Regret}(K) := \max_{\pi \in \Pi^{0}_{[K]}} \sum_{k=1}^{K} \Big( V^{\pi}_1(s_{k,1}) - V^{\pi_k}_1(s_{k,1}) \Big), \tag{2}$$
$$\text{Resets}(K) := \sum_{k=1}^{K} \mathbb{E}\Big[\textstyle\sum_{h=1}^{H} \mathbf{1}\{\bar{s}_{k,h} \in \mathcal{S}_{\text{reset}}\}\Big], \tag{3}$$
where $\Pi^{0}_{[K]}$ denotes the set of Markovian policies that avoid resets from the initial states of all $K$ episodes, $s_{k,1}$ is the initial state of episode $k$, and $\pi_k$ is the policy used by the agent in episode $k$. Note that, by the reset mechanism, at most one reset can occur in each episode.
Notice that the initial states in our regret and reset measures are determined by the learner. Given the motivation behind reset-free RL (see Section 1), we can expect that the initial states here are meaningful for performance comparison by construction. Otherwise, a reset would have occurred to set the learner to a meaningful state. Note that this means that, by the reset mechanism, all “bad” absorbing states are in $\mathcal{S}_{\text{reset}}$, and hence the agent cannot hide in a “bad” absorbing state to achieve small regret. In addition, since the learner does not receive any reward within an episode after being reset, the learner cannot leverage resets to gain an advantage over the optimal policy (which never resets) in order to achieve small or even negative regret.
To make the problem feasible, we assume achieving no resets is possible. We state this assumption formally below.
Assumption 3.1.
For any initial state sequence $\{s_{k,1}\}_{k=1}^{K}$ encountered by the agent, the set $\Pi^{0}_{[K]}$ is not empty. That is, there is a Markovian policy $\pi \in \Pi$ such that $\mathbb{E}^{\pi}\big[\sum_{h=1}^{H} \mathbf{1}\{\bar{s}_{k,h} \in \mathcal{S}_{\text{reset}}\} \,\big|\, s_{k,1}\big] = 0$ for all $k \in [K]$.
This assumption simply states that every episode of learning admits a reset-free policy. As discussed in Section 2, this assumption is weaker than existing assumptions in the literature. We note that an alternate assumption that only $s_{1,1}$ (i.e., the initial state of the first episode) admits a reset-free policy is insufficient to make reset-free RL feasible: since the transition dynamics are unknown to the agent, under this assumption alone, for any algorithm there is a problem[3] such that the number of resets must be linear. [3] Consider a state space that can be separated into a reset-free state $\tilde{s}$, a reset state $\tilde{s}'$, and a subset $\bar{\mathcal{S}}$; the subset contains reset states and reset-free states. For all actions taken at $\tilde{s}$, the agent lands at the reset state $\tilde{s}'$. Given a learning algorithm, let $\tilde{\mathcal{A}}$ be a subset of actions that the learning algorithm has constant probability of taking at $s_{1,1}$ (the very first initial state). Since the learning agent has no knowledge of the MDP, without violating the assumption that $s_{1,1}$ admits a reset-free policy, we can construct an MDP such that taking actions in $\tilde{\mathcal{A}}$ leads to a reset state, together with a reset mechanism that resets the agent to the reset-free state $\tilde{s}$ whenever the agent enters a reset state. As a result, the learning agent has a constant probability of incurring a linear number of resets over the total number of episodes.
We highlight that Assumption 3.1 is a reasonable assumption in practice. It does not require a fixed initial state. In addition, if a reset happens, in practice the agent is usually set to a state from which it can continue to operate without further resets; if the agent is at a state where no such reset-free policy exists, a reset should happen. This assumption is similar to the assumption of the existence of a perfectly safe policy in the safe RL literature, which is a common and relatively weak assumption (Ghosh et al., 2022; Ding et al., 2021). If there were initial states that inevitably lead to a reset, the problem would be infeasible and would not follow the motivation of reset-free RL.
4 A No-Regret Reduction for Reset-Free RL
In this section, we present our main reduction of reset-free RL to regret minimization in a two-player game. In the following, we first show that reset-free RL can be framed as a sequence of CMDPs with an adaptive initial state sequence. Then we design a two-player game based on a primal-dual analysis of this sequence of CMDPs. Finally, we show achieving sublinear regret in this two-player game implies sublinear regret and resets in the original reset-free RL problem in (2), and we discuss how to leverage this framework to systematically design reset-free RL algorithms.
Our reduction differs from the standard reductions in the constrained optimization and online learning literature that bound the Nash gap by the sum of the two players’ regrets. First, we show a reduction for a sequence of saddle-point problems instead of for a single fixed saddle-point problem. There are CMDP methods (e.g., Ghosh et al. (2022)) that implicitly use the two players’ regrets to bound the Nash gap of a CMDP in their analysis. However, those proofs are applicable to only a single CMDP with a fixed initial state distribution. And importantly, they fundamentally rely on Slater’s condition (i.e., requiring a strictly feasible policy), which does not hold for the CMDPs considered here. Additionally, unlike the typical bound for a convex-concave saddle-point problem in the optimization literature (Wang & Li, 2020; Kovalev & Gasnikov, 2022; Boyd & Vandenberghe, 2014), the saddle-point problem of a CMDP is non-concave in the policy (and convex in the dual function). In this paper, we take a different route of analysis to bypass the interplaying complexities due to CMDP sequences, non-concavity, and the lack of Slater’s condition. The complete proofs for this section are in Section A.1.
4.1 Reset-free RL as a Sequence of CMDPs
The first step of our reduction is to cast the reset-free RL problem in Section 3 to a sequence of CMDP problems which share the same rewards, constraints, and dynamics, but have different initial states. Each problem instance in this sequence corresponds to an episode of the reset-free RL problem, and its constraint describes the probability of the agent entering a state that requires reset.
Specifically, in episode $k$, the CMDP problem is defined as
$$\max_{\pi \in \Pi} \; V^{\pi}_1(s_{k,1}) \quad \text{s.t.} \quad V^{\pi}_{c,1}(s_{k,1}) \leq 0, \tag{4}$$
where we define the cost as $c(s, a) := \mathbf{1}\{\bar{s} \in \mathcal{S}_{\text{reset}}\}$, and $V^{\pi}_{c,1}$, defined similarly to (1), is the state value function with respect to the cost $c$. We note that the initial state $s_{k,1}$ depends on the past behaviors of the agent, and that Assumption 3.1 ensures each CMDP in (4) is a feasible problem (i.e., there is a Markovian policy satisfying the constraint). The objective of the agent in each CMDP is to be reward-maximizing while adhering to the constraint, namely that the expected number of resets is zero. Requiring zero constraint violation instead of a small constraint violation will be crucial for our reduction.
Since CMDPs are typically defined without early episode termination, unlike (1), with abuse of notation we extend the definitions of $P$, $r$, $c$, and the value functions as follows, so that the CMDP definition above is consistent with the common literature. We introduce a fictitious absorbing state, denoted $s^{\text{abs}}$, where $r(s^{\text{abs}}, \cdot) = 0$ and $c(s^{\text{abs}}, \cdot) = 0$; once the agent enters $s^{\text{abs}}$, it stays there until the end of the episode. We extend the dynamics such that, after the agent is in a state $s$ with $\bar{s} \in \mathcal{S}_{\text{reset}}$, any action it takes brings it to $s^{\text{abs}}$ in the next time step. In this way, we can write the value function, e.g., for the reward, as $V^{\pi}_h(s) = \mathbb{E}^{\pi}[\sum_{t=h}^{H} r(s_t, a_t) \mid s_h = s]$ in terms of this extended dynamics. We note that these two formulations are mathematically the same for the purpose of learning; when the agent enters $s^{\text{abs}}$, it means that the agent was reset in the episode.
By the construction above, we can write $\text{Resets}(K) = \sum_{k=1}^{K} \mathbb{E}^{\pi_k}\big[\sum_{h=1}^{H} c(s_{k,h}, a_{k,h})\big] = \sum_{k=1}^{K} V^{\pi_k}_{c,1}(s_{k,1})$, which is the same as the total constraint violation across the $K$ CMDPs. Because we do not make any assumption on the agent’s knowledge of the constraint function (e.g., the agent does not know which states belong to $\mathcal{S}_{\text{reset}}$), we allow the agent to reset during learning while minimizing the total number of resets over all episodes.
4.2 Reduction to Two-Player Game
From the problem formulation above, we see that the major difficulty of reset-free RL is the coupling between an adaptive initial state sequence and the constraint on reset probability. If we were to remove either of them, we could use standard algorithms, since the problem would become a single CMDP problem (Altman, 1999) or an episodic RL problem with varying initial states (Jin et al., 2019).
We propose a reduction to systematically design algorithms for this sequence of CMDPs and therefore for reset-free RL. The main idea is to approximately solve the saddle-point problem of each CMDP in (4), i.e.,
$$\max_{\pi \in \Pi} \; \min_{\lambda \geq 0} \; V^{\pi}_1(s_{k,1}) - \lambda \, V^{\pi}_{c,1}(s_{k,1}), \tag{5}$$
where $\lambda \geq 0$ denotes the dual variable (i.e., the Lagrange multiplier). Each CMDP can be framed as a linear program (Hernández-Lerma & Lasserre, 2002) whose primal and dual optimal values match (see also Hazan et al. (2016)). Therefore, for each CMDP, the saddle-point value of (5) coincides with the optimal value of (4).
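For reference, in the tabular case this duality can be seen from the standard occupancy-measure linear program (standard background material, not notation introduced in this paper), written here for the CMDP with initial state $s_{k,1}$:

```latex
\max_{\mu \ge 0} \;\; \sum_{h=1}^{H}\sum_{s,a} \mu_h(s,a)\, r(s,a)
\qquad \text{s.t.} \qquad
\sum_{a} \mu_1(s,a) = \mathbf{1}\{s = s_{k,1}\}, \\
\sum_{a} \mu_{h+1}(s',a) = \sum_{s,a} P(s' \mid s,a)\, \mu_h(s,a) \;\; \forall\, s', h, \qquad
\sum_{h=1}^{H}\sum_{s,a} \mu_h(s,a)\, c(s,a) \le 0 .
```

Dualizing the cost constraint with a multiplier $\lambda \ge 0$ recovers the Lagrangian form in (5).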
While using a primal-dual algorithm to solve for the saddle-point of a single CMDP is known, using this approach for a sequence of CMDPs is not obvious. This is because, in general, each CMDP’s optimal policy (and Lagrange multiplier) can be a function of its initial state distribution (Altman, 1999). An easy way to see this is that an optimal policy satisfying a constraint posed at an initial state $s$ may be infeasible for the constraint posed at another initial state $s'$ if, e.g., running this policy starting from $s$ never reaches $s'$; the policy can therefore have arbitrary behavior at $s'$. This behavior is different from that of unconstrained MDPs, where there is a single Markovian policy that is optimal for all initial states.
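As a minimal illustration of this point (a toy example of ours, not taken from the analysis), consider a one-step problem with two candidate initial states:

```latex
% Toy example: H = 1, states {s, s'}, actions {a_1, a_2},
% r(s', a_1) = 1, r(s', a_2) = 0, c(s', a_1) = 1, c(s', a_2) = 0, r(s, \cdot) = c(s, \cdot) = 0.
\begin{aligned}
&\text{CMDP}(s):\ s' \text{ is unreachable, so the policy } \pi \text{ with } \pi(s') = a_1 \text{ is feasible and reward-optimal;}\\
&\text{CMDP}(s'):\ V^{\pi}_{c,1}(s') = c(s', a_1) = 1 > 0, \text{ so the same } \pi \text{ is infeasible, and } \pi(s') = a_2 \text{ is required.}
\end{aligned}
```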
Therefore, in general, a common saddle-point of Markovian policies and Lagrange multipliers does not necessarily exist for a sequence of CMDPs with varying initial states.[4] As a result, it is unclear if there exists a primal-dual algorithm to solve this sequence, especially given that the initial states here are adaptively chosen. [4] A shared saddle-point with a non-Markovian policy always exists, on the other hand.
Existence of a Shared Saddle-Point
Fortunately, we show that there is a shared saddle-point with a Markovian policy across all the CMDPs considered in (4), due to the special structure of reset-free RL. Unlike similar results in the literature, our proof does not rely on Slater’s condition for strong duality, yet it attains the desired property. Instead, it follows from the fact that we impose a constraint requiring zero violation. In addition, we also use Assumption 3.1 and the fact that the cost $c$ is non-negative. We formalize this below.
Theorem 4.1.
There exists a function $\lambda^*: \mathcal{S} \to \mathbb{R}$ with $\lambda^*(s) \geq 0$ for each $s \in \mathcal{S}$, and a Markovian policy $\pi^* \in \Pi$, such that $(\pi^*, \lambda^*)$ is a saddle-point to the CMDPs
$$\max_{\pi \in \Pi} \; V^{\pi}_1(s_1) \quad \text{s.t.} \quad V^{\pi}_{c,1}(s_1) \leq 0$$
for all initial states $s_1$ such that the CMDP is feasible. That is, for all $\pi \in \Pi$, $\lambda: \mathcal{S} \to \mathbb{R}_{\geq 0}$, and such $s_1$,
$$V^{\pi^*}_1(s_1) - \lambda(s_1)\, V^{\pi^*}_{c,1}(s_1) \;\geq\; V^{\pi^*}_1(s_1) - \lambda^*(s_1)\, V^{\pi^*}_{c,1}(s_1) \;\geq\; V^{\pi}_1(s_1) - \lambda^*(s_1)\, V^{\pi}_{c,1}(s_1). \tag{6}$$
Corollary 4.2.
For $\pi^*$ in Theorem 4.1, it holds that $\text{Regret}(K) = \sum_{k=1}^{K} \big( V^{\pi^*}_1(s_{k,1}) - V^{\pi_k}_1(s_{k,1}) \big)$.
For ease of later constructions, we also prove that the pair $(\pi^*, \lambda^* + 1)$, where $(\lambda^* + 1)(s) := \lambda^*(s) + 1$, is a saddle-point as well.
Corollary 4.3.
The pair $(\pi^*, \lambda^* + 1)$ satisfies the saddle-point condition (6) for all initial states such that the CMDP is feasible.
Therefore, the pair $(\pi^*, \lambda^* + 1)$ in Corollary 4.3 is a saddle-point to all the CMDPs the agent faces. This makes designing a two-player-game reduction possible. We now give the details of our construction.
Two-Player Game
Our two-player game proceeds iteratively in the following manner: in episode $k$, a dual player determines a function over states $\lambda_k: \mathcal{S} \to \mathbb{R}_{\geq 0}$, and then a primal player determines a policy $\pi_k$, which can depend on $\lambda_k$. The primal and dual players then receive losses $-L_k(\pi_k, \lambda_k)$ and $L_k(\pi_k, \lambda_k)$, respectively, where $L_k$ is a Lagrangian function defined as
$$L_k(\pi, \lambda) := V^{\pi}_1(s_{k,1}) - \lambda(s_{k,1})\, V^{\pi}_{c,1}(s_{k,1}). \tag{7}$$
The regrets of these two players are defined as follows.
Definition 4.4.
Let $\pi \in \Pi$ and $\lambda: \mathcal{S} \to \mathbb{R}_{\geq 0}$ be comparators. The regrets of the primal and the dual players are
$$R_p(\pi) := \sum_{k=1}^{K} \big( L_k(\pi, \lambda_k) - L_k(\pi_k, \lambda_k) \big), \qquad R_d(\lambda) := \sum_{k=1}^{K} \big( L_k(\pi_k, \lambda_k) - L_k(\pi_k, \lambda) \big).$$
We present our main reduction theorem below.
Theorem 4.5.
Under Assumption 3.1, for any sequences $\{\pi_k\}_{k=1}^{K}$ and $\{\lambda_k\}_{k=1}^{K}$ with $\lambda_k: \mathcal{S} \to \mathbb{R}_{\geq 0}$, it holds that
$$\text{Regret}(K) \;\leq\; R_p(\pi^*) + R_d(\lambda^* + 1) \qquad \text{and} \qquad \text{Resets}(K) \;\leq\; R_p(\pi^*) + R_d(\lambda^* + 1),$$
where $(\pi^*, \lambda^* + 1)$ is the saddle-point defined in Corollary 4.3.
Designing Reset-free RL Algorithms by Reduction
By Theorem 4.5, if both players have sublinear regret in the two-player game, then the resulting policy sequence will have sublinear performance regret and a sublinear number of resets in the original RL problem. Minimization of regret in a two-player game and solving saddle-point problems have been very well studied (see, e.g., Ho-Nguyen & Kılınç-Karzan (2018); Benzi et al. (2005); Hazan et al. (2016)). Therefore, our reduction gives a constructive and provably correct template for designing reset-free RL algorithms. It should be noted, however, that the resulting two-player game here is slightly different from classical full-information online learning problems. Unlike in those problems, the learning agent needs to actively explore the environment in each episode to collect information about the unknown dynamics, reward, and cost (i.e., which states are reset-free). Specifically, Theorem 4.5 needs the primal (policy) player to solve an online MDP sequence (with the same transition dynamics but varying reward functions due to the changing $\lambda_k$), and the dual player to solve an online linear problem. In the next section, we will give an example algorithm under this reduction for linear MDPs.
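To make the template concrete, the following schematic (our own sketch, not the concrete Algorithm 1 of Section 5) shows how a reset-free RL algorithm can be assembled from any no-regret dual player and any primal player that controls its regret on the induced online MDP sequence; `primal_player`, `dual_player`, and `env` are hypothetical interfaces.

```python
def no_regret_reduction(primal_player, dual_player, env, K, H):
    """Generic reduction: in episode k the dual player proposes lambda_k, the
    primal player responds with a policy pi_k, and both observe the Lagrangian
    L_k(pi_k, lambda_k) = V_r(pi_k; s_k1) - lambda_k(s_k1) * V_c(pi_k; s_k1)."""
    s1 = env.initial_state()
    policies = []
    for k in range(K):
        lam_k = dual_player.propose(s1)            # Lagrange multiplier function for episode k
        pi_k = primal_player.respond(lam_k, s1)    # e.g., optimistic planning for r - lam_k * c
        trajectory = env.rollout(pi_k, s1, H)      # interact for one episode, observe any reset
        primal_player.update(trajectory, lam_k)    # keep the primal regret sublinear
        dual_player.update(trajectory, pi_k)       # e.g., a projected gradient step on lambda
        policies.append(pi_k)
        s1 = trajectory.final_state()              # adaptive initial state of episode k + 1
    return policies
```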
4.3 Proof Sketches
Proof Sketch of Theorem 4.1
Let $V^{*}_{c,h}(s) := \min_{\pi \in \Pi} V^{\pi}_{c,h}(s)$ and $Q^{*}_{c,h}(s,a) := \min_{\pi \in \Pi} Q^{\pi}_{c,h}(s,a)$. We define $\pi^*$ in Theorem 4.1 as the optimal policy of the following MDP: $\bar{\mathcal{M}} = (\mathcal{S}, \bar{\mathcal{A}}, P, r, H)$, where we define a state-dependent action space as
$$\bar{\mathcal{A}}_h(s) := \{ a \in \mathcal{A} : Q^{*}_{c,h}(s,a) = V^{*}_{c,h}(s) \}.$$
By definition, $\bar{\mathcal{A}}_h(s)$ is non-empty for all $s$ and $h$.
We also define a shorthand notation: we write $\pi \in \bar{\Pi}$ if $\pi_h(\cdot \mid s) \in \Delta(\bar{\mathcal{A}}_h(s))$ for all $s$ and $h$. We have the following lemma, which is an application of the performance difference lemma (see Kakade & Langford (2002) and Lemma A.1 in Cheng et al. (2021)).
Lemma 4.6.
For any $\pi$ such that $\pi \in \bar{\Pi}$ and any $(s, h)$, it is true that $V^{\pi}_{c,h}(s) = 0$ if and only if $V^{*}_{c,h}(s) = 0$.
We prove our main claim, (6), below. Because $V^{\pi^*}_{c,1}(s_1) = 0$ for any feasible initial state $s_1$, the first inequality is trivial: $V^{\pi^*}_1(s_1) - \lambda(s_1)\, V^{\pi^*}_{c,1}(s_1) = V^{\pi^*}_1(s_1) = V^{\pi^*}_1(s_1) - \lambda^*(s_1)\, V^{\pi^*}_{c,1}(s_1)$.
Proof Sketch of Theorem 4.5
We first establish the following intermediate result that will help us with our decomposition.
Lemma 4.7.
For any primal-dual sequence $\{(\pi_k, \lambda_k)\}_{k=1}^{K}$, $\sum_{k=1}^{K} \big( L_k(\pi^*, \tilde{\lambda}) - L_k(\pi_k, \tilde{\lambda}) \big) \leq R_p(\pi^*) + R_d(\tilde{\lambda})$, where $(\pi^*, \tilde{\lambda})$ is the saddle-point defined in either Theorem 4.1 or Corollary 4.3.
Then we upper bound $\text{Regret}(K)$ and $\text{Resets}(K)$ by $R_p$ and $R_d$ evaluated at suitable comparators. This decomposition is inspired by the techniques used in Ho-Nguyen & Kılınç-Karzan (2018). We first bound $\text{Resets}(K)$.
Lemma 4.8.
For any primal-dual sequence $\{(\pi_k, \lambda_k)\}_{k=1}^{K}$, $\text{Resets}(K) \leq R_p(\pi^*) + R_d(\lambda^* + 1)$, where $(\pi^*, \lambda^* + 1)$ is the saddle-point defined in Corollary 4.3.
Proof.
Notice that $\text{Resets}(K) = \sum_{k=1}^{K} \big( L_k(\pi_k, \lambda^*) - L_k(\pi_k, \lambda^* + 1) \big)$, where $(\pi^*, \lambda^*)$ is the saddle-point defined in Theorem 4.1. By (6), and using $L_k(\pi^*, \lambda^*) = L_k(\pi^*, \lambda^* + 1)$, we can bound this difference by $\sum_{k=1}^{K} \big( L_k(\pi^*, \lambda^* + 1) - L_k(\pi_k, \lambda^* + 1) \big)$.
Using Lemma 4.7 and Definition 4.4 to upper bound the above, we get the desired result. ∎
Lastly, we bound Regret with the lemma below and Corollary 4.2.
Lemma 4.9.
For any primal-dual sequence $\{(\pi_k, \lambda_k)\}_{k=1}^{K}$, $\sum_{k=1}^{K} \big( V^{\pi^*}_1(s_{k,1}) - V^{\pi_k}_1(s_{k,1}) \big) \leq R_p(\pi^*) + R_d(\lambda^* + 1)$, where $(\pi^*, \lambda^* + 1)$ is the saddle-point defined in Corollary 4.3.
Proof.
Note that $V^{\pi_k}_1(s_{k,1}) \geq L_k(\pi_k, \lambda^* + 1)$, since $(\lambda^*(s) + 1)\, V^{\pi_k}_{c,1}(s) \geq 0$ for all $s$. Since, by definition, $L_k(\pi^*, \lambda) = V^{\pi^*}_1(s_{k,1})$ for any $\lambda$, we have the following:
$$\sum_{k=1}^{K} \big( V^{\pi^*}_1(s_{k,1}) - V^{\pi_k}_1(s_{k,1}) \big) \;\leq\; \sum_{k=1}^{K} \big( L_k(\pi^*, \lambda^* + 1) - L_k(\pi_k, \lambda^* + 1) \big) \;\leq\; R_p(\pi^*) + R_d(\lambda^* + 1),$$
where the last inequality follows from Lemma 4.7 and Definition 4.4. ∎
5 Reset-Free Learning for Linear MDP
To demonstrate the utility of our reduction, we design a provably correct reset-free algorithm instantiation for linear MDPs. This example serves to ground our abstract framework and to illustrate concretely how an algorithm instantiating our framework might operate. We also aim to show that, despite its generality, our framework does not sacrifice optimality in the rates of regret and total number of resets when compared to similar but specialized works.
We consider a linear MDP setting, which is common in the RL theory literature (Jin et al., 2019; Amani et al., 2021; Ghosh et al., 2022).
Assumption 5.1.
We assume the MDP is linear with a known feature map $\phi: \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d$: for any $h \in [H]$, there exist $d$ unknown signed measures $\mu_h = (\mu_h^{(1)}, \ldots, \mu_h^{(d)})$ over $\mathcal{S}$ such that for any $(s, a, s')$, we have
$$P_h(s' \mid s, a) = \langle \phi(s,a), \mu_h(s') \rangle,$$
and there exist unknown vectors $\theta_{r,h}, \theta_{c,h} \in \mathbb{R}^d$ such that for any $(s, a)$,
$$r_h(s,a) = \langle \phi(s,a), \theta_{r,h} \rangle, \qquad c_h(s,a) = \langle \phi(s,a), \theta_{c,h} \rangle.$$
We assume, for all $(s,a)$ and $h$, $\|\phi(s,a)\| \leq 1$, $\|\mu_h(\mathcal{S})\| \leq \sqrt{d}$, and $\max(\|\theta_{r,h}\|, \|\theta_{c,h}\|) \leq \sqrt{d}$.
Note that the above assumption on the cost function does not imply knowledge of the reset states nor some hidden structure of the reset states. At a high level, it merely asks that the feature is expressive enough to separate the reset states and reset-free states in a classification problem.
In addition, we make a linearity assumption on the function $\lambda^*$ defined in Theorem 4.1.
Assumption 5.2.
We assume knowledge of a feature map $\xi: \mathcal{S} \to \mathbb{R}^{d'}$ such that $\lambda^*(s) = \langle w^*, \xi(s) \rangle$ and $\|\xi(s)\| \leq 1$ for all $s$, for some unknown vector $w^*$. In addition, we assume knowledge of a convex set $\mathcal{U}$[5] such that $w^* \in \mathcal{U}$, the norms of elements of $\mathcal{U}$ are bounded by a known constant, and $\langle w, \xi(s) \rangle \geq 0$ for all $w \in \mathcal{U}$ and $s \in \mathcal{S}$.[6] [5] Such a set can be constructed by upper bounding the values using scaling and ensuring non-negativity by, e.g., sums of squares. [6] From the previous section, we can see that the optimal function for the dual player is not necessarily unique. So, we assume bounds on at least one optimal function, which we designate as $\lambda^*$.
For a linear CMDP with a fixed initial state distribution, it is standard to assume that the optimal Lagrange multiplier (i.e., $\lambda^*$) is bounded and non-negative. In this regard, our assumption is a mild and natural linear extension of this boundedness assumption to varying initial states, and it is true when the feature map is expressive enough.
5.1 Algorithm
The basis of our algorithm lies in the interaction between the primal and dual players. We adopt the common no-regret-plus-best-response approach (Wang et al., 2021) to the no-regret two-player game when designing algorithms for these two players. We let the dual player perform (no-regret) projected gradient descent and the primal player update policies based on upper confidence bounds (UCB) with knowledge of the dual player's decision (i.e., the best-response scheme to the dual player).
As discussed above, the environment is unknown to the agent. Therefore, as discussed in Section 4, we need to modify the primal player's updates, compared to the generic saddle-point problem, to instantiate our framework. We do this using UCB, where the agent views less-explored actions/states more positively (“optimistically”)[7] than well-observed ones, in order to drive exploration in unknown environments. [7] Note that the term “optimism” here refers to the use of overestimation in exploration, which is different from its usage in, e.g., optimistic mirror descent (Mertikopoulos et al., 2018).
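For intuition, the kind of least-squares value estimate with an exploration bonus used in optimistic linear-MDP methods (LSVI-UCB style) can be sketched as below; this is a simplified illustration under Assumption 5.1, not the exact update of Algorithm 1, and the function names, constants, and data buffers are our own.

```python
import numpy as np

def optimistic_q(phi_data, targets, phi_query, lam=1.0, beta=1.0, H=1):
    """Ridge regression on past (feature, target) pairs plus a UCB bonus.

    phi_data:  (n, d) array of features phi(s_i, a_i) from observed transitions
    targets:   (n,)   regression targets, e.g. r_i + V_next(s'_i)
    phi_query: (d,)   feature of the (s, a) pair being evaluated
    """
    d = phi_data.shape[1]
    gram = lam * np.eye(d) + phi_data.T @ phi_data        # regularized Gram matrix
    w = np.linalg.solve(gram, phi_data.T @ targets)       # least-squares weights
    bonus = beta * np.sqrt(phi_query @ np.linalg.solve(gram, phi_query))
    return float(np.clip(phi_query @ w + bonus, 0.0, H))  # optimistic, truncated estimate
```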
Specifically, in each episode, upon receiving the initial state, we execute actions according to the policy based on a softmax (lines 5-8). Then, we perform the dual update through projected gradient descent: the dual player plays $\lambda_{k+1}$ for the next round after observing its loss from the primal player's play in the current round, and the projection is onto a ball containing $\mathcal{U}$ (lines 9-11). Finally, we perform the update of the primal player by computing the $Q$-functions for both the reward and the cost, with a bonus added to encourage exploration (lines 12-20).
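The dual update can likewise be sketched as a projected gradient step on the parameter of $\lambda$; the Euclidean-ball projection and step size below are illustrative choices of ours, not the exact constants of Algorithm 1.

```python
import numpy as np

def project_to_ball(w, radius):
    """Euclidean projection onto {w : ||w||_2 <= radius}."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def dual_gradient_step(w, xi_s1, v_cost_estimate, step_size, radius):
    """One projected-gradient-descent step for lambda(s) = <w, xi(s)>.

    The dual loss in episode k is L_k(pi_k, lambda) = V_r - <w, xi(s_{k,1})> * V_c,
    which is linear in w with gradient -xi(s_{k,1}) * V_c; descending this gradient
    raises lambda on initial states whose (estimated) reset value V_c is large.
    """
    grad = -xi_s1 * v_cost_estimate
    return project_to_ball(w - step_size * grad, radius)
```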
This algorithm uses Ghosh et al. (2022) as the baseline for the primal player. However, we emphasize that our algorithm can handle the adaptive initial state sequence that arises in reset-free RL (Theorems 4.1 and 4.5).
5.2 Analysis
We show below (Theorem 5.3) that our algorithm achieves regret and a number of resets that are sublinear in the total number of time steps, $HK$, using Theorem 4.5. This result is asymptotically equivalent to that of Ghosh et al. (2022) and comparable to the bounds of Ding et al. (2021). Therefore, we do not make sacrifices in terms of the regret and total number of resets when specializing our abstract framework.
Proof Sketch of Theorem 5.3
We provide a proof sketch here and defer the complete proof to Section A.2. We first bound the regrets of the dual and primal players, $R_d$ and $R_p$, and then use these to prove the bounds on our algorithm's regret and number of resets with Theorem 4.5.
We first bound the regret of the dual player.
Lemma 5.4.
Consider $\lambda(\cdot) = \langle w, \xi(\cdot) \rangle$ for some $w \in \mathcal{U}$. Then it holds that $R_d(\lambda) = O(\sqrt{K})$, with constants depending on the diameter of $\mathcal{U}$ and the magnitude of the cost value estimates.
Proof.
We first notice the equality $R_d(\lambda) = \sum_{k=1}^{K} \langle w - w_k, \xi(s_{k,1}) \rangle\, V^{\pi_k}_{c,1}(s_{k,1})$.
We observe that this is an online linear problem in $w$ (the parameter of $\lambda$). In episode $k$, $w_k$ is played, and then the loss is revealed. Since the space $\mathcal{U}$ of $w$ is convex, we use standard results (see Hazan et al. (2016)) to show that updating $w_k$ through projected gradient descent results in an upper bound on $R_d$. ∎
We now bound the regret of the primal player.
Lemma 5.5.
Consider any comparator $\pi \in \Pi$. With high probability, $R_p(\pi) \leq \tilde{O}\big(\sqrt{d^3 H^4 K}\big)$.
Proof.
First we expand the regret into two terms.
To bound the first term, we use a lemma from Ghosh et al. (2022) (restated as Lemma A.2 in the appendix), which characterizes the property of the upper confidence bound. ∎
Lastly, we derive a bound on $R_p + R_d$, which directly implies our final upper bounds on $\text{Regret}(K)$ and $\text{Resets}(K)$ in Theorem 5.3 by Theorem 4.5. Combining the upper bounds in Lemma 5.4 and Lemma 5.5, we obtain a high-probability upper bound consisting of the two players' regret bounds plus one remaining term,
where this last term is the overestimation error due to optimism. Note that, for all $k$, the value estimates defined in Algorithm 1 are optimistic estimates of the reward and cost values of $\pi_k$. To bound this term, we use a lemma from Ghosh et al. (2022) (restated as Lemma A.4 in the appendix).
5.3 Other Possible Instantiations
We demonstrated above that our general framework can be instantiated to achieve sublinear regret and a sublinear total number of resets. Importantly, our algorithm serves as an example of how our general framework can be used to systematically design new algorithms for reset-free RL. We can leverage the multitude of existing algorithms that aim to minimize regret in a two-player game. An example of a different strategy is using a no-regret algorithm like optimistic mirror descent (Mertikopoulos et al., 2018) for the dual player. We can also replace UCB for the primal player with an online MDP no-regret algorithm such as a variation of POLITEX (Abbasi-Yadkori et al., 2019). Further studying different combinations of baseline algorithms is an interesting future research direction, which could perhaps even improve existing algorithms for more specialized settings.
6 Conclusion
We propose a generic no-regret reduction for designing provable reset-free RL algorithms. Our reduction casts reset-free RL into the regret minimization problem of a two-player game, for which many existing no-regret algorithms are available. As a result, we can reuse these techniques, and future better techniques, to systematically build new reset-free RL algorithms. In particular, we design a reset-free RL algorithm for linear MDPs using our new reduction techniques, taking the first step towards designing provable reset-free RL algorithms. Extending these techniques to nonlinear function approximators and verifying their effectiveness empirically are important future research directions.
Acknowledgements
Part of this work was done during Hoai-An Nguyen’s internship at Microsoft Research.
References
- Abbasi-Yadkori et al. (2019) Abbasi-Yadkori, Y., Bartlett, P. L., Bhatia, K., Lazić, N., Szepesvári, C., and Weisz, G. POLITEX: regret bounds for policy iteration using expert prediction. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pp. 3692–3702. PMLR, 2019. URL http://proceedings.mlr.press/v97/lazic19a.html.
- Alshiekh et al. (2018) Alshiekh, M., Bloem, R., Ehlers, R., Könighofer, B., Niekum, S., and Topcu, U. Safe reinforcement learning via shielding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
- Altman (1999) Altman, E. Constrained Markov decision processes: stochastic modeling. Routledge, 1999.
- Amani & Yang (2022) Amani, S. and Yang, L. F. Doubly pessimistic algorithms for strictly safe off-policy optimization. In Annual Conference on Information Sciences and Systems, pp. 113–118, 2022. doi: 10.1109/CISS53076.2022.9751158.
- Amani et al. (2021) Amani, S., Thrampoulidis, C., and Yang, L. Safe reinforcement learning with linear function approximation. In International Conference on Machine Learning, pp. 243–253. PMLR, 2021.
- Benzi et al. (2005) Benzi, M., Golub, G. H., and Liesen, J. Numerical solution of saddle point problems. Acta Numerica, 14:1–137, 2005. doi: 10.1017/S0962492904000212.
- Boyd & Vandenberghe (2014) Boyd, S. P. and Vandenberghe, L. Convex Optimization. Cambridge University Press, 2014. ISBN 978-0-521-83378-3. doi: 10.1017/CBO9780511804441. URL https://web.stanford.edu/%7Eboyd/cvxbook/.
- Cheng et al. (2021) Cheng, C.-A., Kolobov, A., and Swaminathan, A. Heuristic-guided reinforcement learning. Advances in Neural Information Processing Systems, 34:13550–13563, 2021.
- Ding et al. (2021) Ding, D., Wei, X., Yang, Z., Wang, Z., and Jovanovic, M. Provably efficient safe exploration via primal-dual policy optimization. In International Conference on Artificial Intelligence and Statistics, pp. 3304–3312. PMLR, 2021.
- Dong et al. (2019) Dong, K., Wang, Y., Chen, X., and Wang, L. Q-learning with UCB exploration is sample efficient for infinite-horizon MDP. CoRR, abs/1901.09311, 2019. URL http://arxiv.org/abs/1901.09311.
- Efroni et al. (2020) Efroni, Y., Mannor, S., and Pirotta, M. Exploration-exploitation in constrained mdps. arXiv preprint arXiv:2003.02189, 2020.
- Eysenbach et al. (2018) Eysenbach, B., Gu, S., Ibarz, J., and Levine, S. Leave no trace: Learning to reset for safe and autonomous reinforcement learning. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=S1vuO-bCW.
- Garcia Polo & Fernandez Rebollo (2011) Garcia Polo, F. J. and Fernandez Rebollo, F. Safe reinforcement learning in high-risk tasks through policy improvement. In IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, pp. 76–83, 2011. doi: 10.1109/ADPRL.2011.5967356.
- Ghosh et al. (2022) Ghosh, A., Zhou, X., and Shroff, N. Provably efficient model-free constrained rl with linear function approximation. arXiv preprint arXiv:2206.11889, 2022.
- Gupta et al. (2021) Gupta, A., Yu, J., Zhao, T. Z., Kumar, V., Rovinsky, A., Xu, K., Devlin, T., and Levine, S. Reset-free reinforcement learning via multi-task learning: Learning dexterous manipulation behaviors without human intervention. In IEEE International Conference on Robotics and Automation, pp. 6664–6671. IEEE, 2021.
- Ha et al. (2020) Ha, S., Xu, P., Tan, Z., Levine, S., and Tan, J. Learning to walk in the real world with minimal human effort. Conference on Robot Learning, 2020.
- HasanzadeZonuzy et al. (2021) HasanzadeZonuzy, A., Bura, A., Kalathil, D., and Shakkottai, S. Learning with safety constraints: Sample complexity of reinforcement learning for constrained mdps. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 7667–7674, 2021.
- Hazan et al. (2016) Hazan, E. et al. Introduction to online convex optimization. Foundations and Trends® in Optimization, 2(3-4):157–325, 2016.
- Hernández-Lerma & Lasserre (2002) Hernández-Lerma, O. and Lasserre, J. B. The Linear Programming Approach, pp. 377–407. Springer US, Boston, MA, 2002. ISBN 978-1-4615-0805-2. doi: 10.1007/978-1-4615-0805-2_12. URL https://doi.org/10.1007/978-1-4615-0805-2_12.
- Ho-Nguyen & Kılınç-Karzan (2018) Ho-Nguyen, N. and Kılınç-Karzan, F. Primal–dual algorithms for convex optimization via regret minimization. IEEE Control Systems Letters, 2(2):284–289, 2018. doi: 10.1109/LCSYS.2018.2831721.
- Huang et al. (2022) Huang, R., Yang, J., and Liang, Y. Safe exploration incurs nearly no additional sample complexity for reward-free rl, 2022. URL https://arxiv.org/abs/2206.14057.
- Jain et al. (2022) Jain, A., Vaswani, S., Babanezhad, R., Szepesvári, C., and Precup, D. Towards painless policy optimization for constrained MDPs. In Cussens, J. and Zhang, K. (eds.), Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, volume 180 of Proceedings of Machine Learning Research, pp. 895–905. PMLR, 01–05 Aug 2022. URL https://proceedings.mlr.press/v180/jain22a.html.
- Jin et al. (2019) Jin, C., Yang, Z., Wang, Z., and Jordan, M. I. Provably efficient reinforcement learning with linear function approximation. CoRR, abs/1907.05388, 2019. URL http://arxiv.org/abs/1907.05388.
- Kakade & Langford (2002) Kakade, S. and Langford, J. Approximately optimal approximate reinforcement learning. In International Conference on Machine Learning. Citeseer, 2002.
- Kovalev & Gasnikov (2022) Kovalev, D. and Gasnikov, A. The first optimal algorithm for smooth and strongly-convex-strongly-concave minimax optimization. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=pD5Pl5hen_g.
- Lu et al. (2020) Lu, K., Grover, A., Abbeel, P., and Mordatch, I. Reset-free lifelong learning with skill-space planning. arXiv preprint arXiv:2012.03548, 2020.
- Mertikopoulos et al. (2018) Mertikopoulos, P., Lecouat, B., Zenati, H., Foo, C.-S., Chandrasekhar, V., and Piliouras, G. Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile. arXiv preprint arXiv:1807.02629, 2018.
- Qiu et al. (2020) Qiu, S., Wei, X., Yang, Z., Ye, J., and Wang, Z. Upper confidence primal-dual optimization: Stochastically constrained markov decision processes with adversarial losses and unknown transitions. arXiv preprint arXiv:2003.00660, 2020.
- Sharma et al. (2022) Sharma, A., Ahmad, R., and Finn, C. A state-distribution matching approach to non-episodic reinforcement learning. arXiv preprint arXiv:2205.05212, 2022.
- Wachi & Sui (2020) Wachi, A. and Sui, Y. Safe reinforcement learning in constrained markov decision processes. In International Conference on Machine Learning, pp. 9797–9806. PMLR, 2020.
- Wagener et al. (2021) Wagener, N. C., Boots, B., and Cheng, C.-A. Safe reinforcement learning using advantage-based intervention. In International Conference on Machine Learning, pp. 10630–10640. PMLR, 2021.
- Wang et al. (2021) Wang, J.-K., Abernethy, J., and Levy, K. Y. No-regret dynamics in the fenchel game: A unified framework for algorithmic convex optimization. arXiv preprint arXiv:2111.11309, 2021.
- Wang & Li (2020) Wang, Y. and Li, J. Improved algorithms for convex-concave minimax optimization, 2020.
- Wei et al. (2019) Wei, C., Jafarnia-Jahromi, M., Luo, H., Sharma, H., and Jain, R. Model-free reinforcement learning in infinite-horizon average-reward markov decision processes. CoRR, abs/1910.07072, 2019. URL http://arxiv.org/abs/1910.07072.
- Wei et al. (2020) Wei, C., Jafarnia-Jahromi, M., Luo, H., and Jain, R. Learning infinite-horizon average-reward mdps with linear function approximation. CoRR, abs/2007.11849, 2020. URL https://arxiv.org/abs/2007.11849.
- Wei et al. (2021) Wei, H., Liu, X., and Ying, L. A provably-efficient model-free algorithm for constrained markov decision processes. arXiv preprint arXiv:2106.01577, 2021.
- Zheng & Ratliff (2020) Zheng, L. and Ratliff, L. Constrained upper confidence reinforcement learning. In Bayen, A. M., Jadbabaie, A., Pappas, G., Parrilo, P. A., Recht, B., Tomlin, C., and Zeilinger, M. (eds.), Proceedings of the 2nd Conference on Learning for Dynamics and Control, volume 120 of Proceedings of Machine Learning Research, pp. 620–629. PMLR, 10–11 Jun 2020. URL https://proceedings.mlr.press/v120/zheng20a.html.
Appendix A Appendix
A.1 Missing Proofs for Section 4
A.1.1 Proof of Theorem 4.1
See 4.1
For the policy $\pi^*$, we define it by the following construction (we omit the time dependency for simplicity): first, we define a cost-based MDP $\mathcal{M}_c = (\mathcal{S}, \mathcal{A}, P, c, H)$. Let $V^{*}_{c,h}$ and $Q^{*}_{c,h}$ be its optimal (minimal) values, where we recall $V^{\pi}_{c,h}$ and $Q^{\pi}_{c,h}$ are the state and state-action values under policy $\pi$ with respect to the cost. Now we construct another reward-based MDP $\bar{\mathcal{M}} = (\mathcal{S}, \bar{\mathcal{A}}, P, r, H)$, where we define the state-dependent action space as
$$\bar{\mathcal{A}}_h(s) := \{ a \in \mathcal{A} : Q^{*}_{c,h}(s,a) = V^{*}_{c,h}(s) \}.$$
By definition, $\bar{\mathcal{A}}_h(s)$ is non-empty for all $s$ and $h$. We define a shorthand notation: we write $\pi \in \bar{\Pi}$ if $\pi_h(\cdot \mid s) \in \Delta(\bar{\mathcal{A}}_h(s))$ for all $s$ and $h$. Then we have the following lemma, which is a straightforward application of the performance difference lemma. See 4.6
Proof.
By the performance difference lemma (Kakade & Langford, 2002), we can write
$$V^{\pi}_{c,h}(s) = V^{*}_{c,h}(s) + \mathbb{E}^{\pi}\Big[ \textstyle\sum_{t=h}^{H} \big( Q^{*}_{c,t}(s_t, a_t) - V^{*}_{c,t}(s_t) \big) \,\Big|\, s_h = s \Big].$$
If $V^{*}_{c,h}(s) = 0$ for some $(s, h)$, then, because $\pi \in \bar{\Pi}$ only takes actions in $\bar{\mathcal{A}}_t(s_t)$, each term $Q^{*}_{c,t}(s_t, a_t) - V^{*}_{c,t}(s_t)$ inside the expectation is zero, which implies $V^{\pi}_{c,h}(s) = V^{*}_{c,h}(s) = 0$. On the other hand, suppose $V^{\pi}_{c,h}(s) = 0$. It implies $V^{*}_{c,h}(s) \geq 0$ since the cost is non-negative. Because, by definition of optimality, $V^{*}_{c,h}(s) \leq V^{\pi}_{c,h}(s) = 0$, this implies $V^{*}_{c,h}(s) = 0$. ∎
We set our candidate policy $\pi^*$ as the optimal policy of this $\bar{\mathcal{M}}$. By Lemma 4.6, we have $V^{\pi^*}_{c,1}(s_1) = 0$ for every feasible initial state $s_1$, so it is also an optimal policy for the corresponding CMDP in (4). We prove our main claim of Theorem 4.1 below:
A.1.2 Proof of Corollary 4.2
See 4.2
Proof.
To prove the claim, it suffices to prove $\max_{\pi \in \Pi^{0}_{[K]}} \sum_{k=1}^{K} V^{\pi}_1(s_{k,1}) = \sum_{k=1}^{K} V^{\pi^*}_1(s_{k,1})$. By Lemma 4.6 and under Assumption 3.1, we notice that $\max_{\pi \in \Pi^{0}_{[K]}} \sum_{k=1}^{K} V^{\pi}_1(s_{k,1}) = \max_{\pi \in \bar{\Pi}} \sum_{k=1}^{K} V^{\pi}_1(s_{k,1})$. This is equal to $\sum_{k=1}^{K} V^{\pi^*}_1(s_{k,1})$ by the definition of $\pi^*$ in the proof of Theorem 4.1. ∎
A.1.3 Proof of Corollary 4.3
See 4.3
Proof.
We prove that Theorem 4.1 holds for $(\pi^*, \lambda^* + 1)$, that is, for all $\pi \in \Pi$, $\lambda: \mathcal{S} \to \mathbb{R}_{\geq 0}$, and feasible $s_1$,
$$V^{\pi^*}_1(s_1) - \lambda(s_1) V^{\pi^*}_{c,1}(s_1) \;\geq\; V^{\pi^*}_1(s_1) - (\lambda^*(s_1)+1) V^{\pi^*}_{c,1}(s_1) \;\geq\; V^{\pi}_1(s_1) - (\lambda^*(s_1)+1) V^{\pi}_{c,1}(s_1).$$
Because $V^{\pi^*}_{c,1}(s_1) = 0$ (for an initial state such that the CMDP is feasible), the first inequality is trivial:
$$V^{\pi^*}_1(s_1) - \lambda(s_1) V^{\pi^*}_{c,1}(s_1) = V^{\pi^*}_1(s_1) = V^{\pi^*}_1(s_1) - (\lambda^*(s_1)+1) V^{\pi^*}_{c,1}(s_1).$$
For the second inequality, we use Theorem 4.1:
$$V^{\pi^*}_1(s_1) - (\lambda^*(s_1)+1) V^{\pi^*}_{c,1}(s_1) = V^{\pi^*}_1(s_1) - \lambda^*(s_1) V^{\pi^*}_{c,1}(s_1) \;\geq\; V^{\pi}_1(s_1) - \lambda^*(s_1) V^{\pi}_{c,1}(s_1) \;\geq\; V^{\pi}_1(s_1) - (\lambda^*(s_1)+1) V^{\pi}_{c,1}(s_1),$$
where the first step is because, by definition, $\pi^*$ is in $\bar{\Pi}$ and $V^{\pi^*}_{c,1}(s_1) = 0$, the second step is by Theorem 4.1, and the last step uses $V^{\pi}_{c,1}(s_1) \geq 0$. ∎
A.1.4 Proof of Theorem 4.5
See 4.5 We first establish the following intermediate result that will help us with our decomposition. See 4.7
Proof.
We derive this lemma from Theorem 4.1 and Corollary 4.3. First notice, by Theorem 4.1 and Corollary 4.3, that for every $k$,
$$L_k(\pi_k, \tilde{\lambda}) \;\leq\; L_k(\pi^*, \tilde{\lambda}) \;\leq\; L_k(\pi^*, \lambda_k).$$
Then we can derive
$$\sum_{k=1}^{K} \big( L_k(\pi^*, \tilde{\lambda}) - L_k(\pi_k, \tilde{\lambda}) \big) \;\leq\; \sum_{k=1}^{K} \big( L_k(\pi^*, \lambda_k) - L_k(\pi_k, \tilde{\lambda}) \big) = R_p(\pi^*) + R_d(\tilde{\lambda}),$$
which finishes the proof. ∎
Then we upper bound $\text{Regret}(K)$ and $\text{Resets}(K)$ by $R_p$ and $R_d$ evaluated at suitable comparators. This decomposition is inspired by the techniques used in Ho-Nguyen & Kılınç-Karzan (2018).
We first bound $\text{Resets}(K)$. See 4.8
Proof.
Notice that $\text{Resets}(K) = \sum_{k=1}^{K} \big( L_k(\pi_k, \lambda^*) - L_k(\pi_k, \lambda^* + 1) \big)$, where $(\pi^*, \lambda^*)$ is the saddle-point defined in Theorem 4.1. This is because, as defined, $L_k(\pi_k, \lambda^*) - L_k(\pi_k, \lambda^* + 1) = V^{\pi_k}_{c,1}(s_{k,1})$. Therefore, we bound the RHS. We have
$$\sum_{k=1}^{K} \big( L_k(\pi_k, \lambda^*) - L_k(\pi_k, \lambda^* + 1) \big) \;\leq\; \sum_{k=1}^{K} \big( L_k(\pi^*, \lambda^* + 1) - L_k(\pi_k, \lambda^* + 1) \big) \;\leq\; R_p(\pi^*) + R_d(\lambda^* + 1),$$
where the first inequality is because $L_k(\pi_k, \lambda^*) \leq L_k(\pi^*, \lambda^*) = L_k(\pi^*, \lambda^* + 1)$ by Theorem 4.1, and the second inequality follows from Lemma 4.7 and Definition 4.4. ∎
Lastly, we bound Regret with the lemma below and Corollary 4.2. See 4.9
Proof.
Note that $V^{\pi_k}_1(s_{k,1}) \geq L_k(\pi_k, \lambda^* + 1)$, since $(\lambda^*(s) + 1)\, V^{\pi_k}_{c,1}(s) \geq 0$ for all $s$. Since, by definition, $L_k(\pi^*, \lambda) = V^{\pi^*}_1(s_{k,1})$ for any $\lambda$, we have the following:
$$\sum_{k=1}^{K} \big( V^{\pi^*}_1(s_{k,1}) - V^{\pi_k}_1(s_{k,1}) \big) \;\leq\; \sum_{k=1}^{K} \big( L_k(\pi^*, \lambda^* + 1) - L_k(\pi_k, \lambda^* + 1) \big) \;\leq\; R_p(\pi^*) + R_d(\lambda^* + 1),$$
where the last inequality follows from Lemma 4.7 and Definition 4.4. ∎
A.2 Missing Proofs for Section 5
A.2.1 Proof of Theorem 5.3
See 5.3
We first bound the regrets of the dual and primal players, $R_d$ and $R_p$, and then use these to prove the bounds on our algorithm's regret and number of resets with Theorem 4.5.
We first bound the regret of the dual player. See 5.4
Proof.
We first notice the equality $R_d(\lambda) = \sum_{k=1}^{K} \langle w - w_k, \xi(s_{k,1}) \rangle\, V^{\pi_k}_{c,1}(s_{k,1})$.
We observe that this is an online linear problem in $w$ (the parameter of $\lambda$). In episode $k$, $w_k$ is played, and then the loss is revealed. Since the space $\mathcal{U}$ of $w$ is convex, we use standard results (Hazan et al., 2016) to show that updating $w_k$ through projected gradient descent results in an upper bound on $R_d$. We restate the lemma here.
Lemma A.1 (Hazan et al. (2016)).
Let $\mathcal{K}$ be a bounded, convex, and closed set in Euclidean space. Denote by $D$ an upper bound on the diameter of $\mathcal{K}$, and by $G$ an upper bound on the norm of the subgradients of the convex cost functions $f_t$ over $\mathcal{K}$. Using online projected gradient descent to generate the sequence $\{x_t\}$ with step sizes $\eta_t = \frac{D}{G\sqrt{t}}$ guarantees, for all $T \geq 1$:
$$\sum_{t=1}^{T} f_t(x_t) - \min_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x) \;\leq\; \frac{3}{2}\, G D \sqrt{T}.$$
We now bound the regret of the primal player. See 5.5
Proof.
First we expand the regret into two terms.
To bound the first term, we use a lemma from Ghosh et al. (2022), which characterizes the property of the upper confidence bound. For completeness, we restate the lemma here.[8] [8] Note that Ghosh et al. (2022) use a utility function rather than a cost function to denote the constraint on the MDP (cost is simply negative utility). Also note that their lemma is proved for an arbitrary initial state sequence and for any comparator policy (which includes $\pi^*$).
Lemma A.2 (Ghosh et al. (2022)).
With high probability, the optimistic value estimates maintained by the algorithm upper bound the corresponding Lagrangian value of any comparator policy at each episode's initial state; hence the first term in the regret expansion is non-positive with high probability.
In our problem setting, we can instantiate the lemma above with the utility set to the negative cost. Therefore, the first term is bounded with high probability. ∎
Lastly, we derive a bound on , which directly implies our final upper bound on Regret() and Resets() in Theorem 5.3 by Theorem 4.5.
Lemma A.3.
For any $\pi \in \Pi$ and $\lambda(\cdot) = \langle w, \xi(\cdot) \rangle$ such that $w \in \mathcal{U}$, we have $R_p(\pi) + R_d(\lambda) \leq \tilde{O}\big(\sqrt{d^3 H^4 K}\big)$ with high probability.
Proof.
Combining the upper bounds in Lemma 5.4 and Lemma 5.5, we have an upper bound consisting of the two players' regret bounds plus one remaining term,
where this last term is the overestimation error due to optimism. To bound this term, we use a lemma from Ghosh et al. (2022). We restate the lemma here.
Lemma A.4 (Ghosh et al. (2022)).
With high probability, the total overestimation error of the optimistic value estimates over the $K$ episodes is bounded by $\tilde{O}\big(\sqrt{d^3 H^4 K}\big)$.
Since we have a bound on each of these terms for all $k$, we obtain the claimed bound on $R_p + R_d$. ∎