Near-optimal Regret Bounds for Stochastic Shortest Path
Abstract
Stochastic shortest path (SSP) is a well-known problem in planning and control, in which an agent has to reach a goal state in minimum total expected cost. In the learning formulation of the problem, the agent is unaware of the environment dynamics (i.e., the transition function) and has to repeatedly play for a given number of episodes while learning the problem's optimal solution. Unlike other well-studied models in reinforcement learning (RL), the length of an episode is not predetermined (or bounded) and is influenced by the agent's actions. Recently, Tarbouriech et al. (2019) studied this problem in the context of regret minimization, and provided an algorithm whose regret bound is inversely proportional to the square root of the minimum instantaneous cost. In this work we remove this dependence on the minimum cost: we give an algorithm that guarantees a regret bound of $\widetilde{O}\big(B_\star |S| \sqrt{|A| K}\big)$, where $B_\star$ is an upper bound on the expected cost of the optimal policy, $S$ is the set of states, $A$ is the set of actions and $K$ is the number of episodes. We additionally show that any learning algorithm must have at least $\Omega\big(B_\star \sqrt{|S| |A| K}\big)$ regret in the worst case.
1 Introduction
Stochastic shortest path (SSP) is one of the most basic models in reinforcement learning (RL). It includes the discounted return model and the finite-horizon model as special cases. In SSP the goal of the agent is to reach a predefined goal state in minimum expected cost. This setting captures a wide variety of realistic scenarios, such as car navigation, game playing and drone flying; i.e., tasks carried out in episodes that eventually terminate.
The focus of this work is on regret minimization in SSP. It builds on extensive literature on theoretical aspects of online RL, and in particular on the copious works about regret minimization in either the average cost model or the finite-horizon model. A major contribution to this literature is the UCRL2 algorithm of Jaksch et al. (2010), which gives a general framework to achieve optimism in the face of uncertainty for these settings. The main methodology is to define a confidence set that includes the true model parameters with high probability. The algorithm periodically computes an optimistic policy that minimizes the overall expected cost simultaneously over all policies and over all parameters within the confidence set, and proceeds to play this policy.
The only regret minimization algorithm specifically designed for SSP is that of Tarbouriech et al. (2019), which assumes that all costs are bounded away from zero (i.e., there is a $c_{\min} > 0$ such that all costs are in the range $[c_{\min}, 1]$). They show a regret bound that scales as $\widetilde{O}\big(D^{3/2} |S| \sqrt{|A| K / c_{\min}}\big)$, where $D$ is the minimum expected time of reaching the goal state from any state, $S$ is the set of states, $A$ is the set of actions and $K$ is the number of episodes. In addition, they show that the algorithm's regret is $\widetilde{O}(K^{2/3})$ when the costs are arbitrary (namely, may be zero).
Here we improve upon the work of Tarbouriech et al. (2019) in several important aspects. First, we remove the dependency on $c_{\min}$ and allow zero costs while maintaining regret of $\widetilde{O}(\sqrt{K})$. Second, we give a much simpler algorithm in which the computation of the optimistic policy has a simple solution. Our main regret term is $\widetilde{O}\big(B_\star |S| \sqrt{|A| K}\big)$, where $B_\star$ is an upper bound on the expected cost of the optimal policy (note that $B_\star \le D$). We show that this is almost optimal by giving a lower bound of $\Omega\big(B_\star \sqrt{|S| |A| K}\big)$.
Our technical contribution is as follows. We start by assuming that the costs are lower bounded by $c_{\min} > 0$ and give an algorithm that is simple to analyze and achieves a regret bound comparable to that of Tarbouriech et al. (2019), yet with an algorithm and analysis that are significantly simpler and more intuitive. We subsequently improve our algorithm by utilizing tighter confidence sets based on the Bernstein concentration inequality (Azar et al., 2017). This algorithm is even simpler than our first one, mainly because picking the parameters of the optimistic model is particularly easy. The analysis, however, is somewhat more delicate. We achieve our final bound by perturbing the instantaneous costs to be at least $\epsilon > 0$. The additional cost due to this perturbation has a small effect, since the dependency of our regret on $1/\epsilon$ is additive and does not multiply any term depending on $K$.
1.1 Related work.
Early work by Bertsekas and Tsitsiklis (1991) studied the problem of planning in SSPs, that is, computing the optimal strategy efficiently in a known SSP instance. They established that, under certain assumptions, the optimal strategy is a deterministic stationary policy (a mapping from states to actions) and can be computed efficiently using standard planning algorithms, e.g., Value Iteration or Policy Iteration.
The extensive literature about regret minimization in RL focuses on the average-cost infinite-horizon model Bartlett and Tewari (2009); Jaksch et al. (2010) and on the finite-horizon model Osband et al. (2016); Azar et al. (2017); Dann et al. (2017); Zanette and Brunskill (2019). These recent works give algorithms with near-optimal regret bounds using Bernstein-type concentration bounds.
2 Preliminaries and Main Results
An instance of the SSP problem is a Markov decision process (MDP) $M = (S, A, P, c, s_{\mathrm{init}})$, where $S$ is the state space and $A$ is the action space. The agent begins at the initial state $s_{\mathrm{init}}$, and ends her interaction with $M$ by arriving at the goal state $g$ (where $g \notin S$). Whenever she plays action $a$ in state $s$, she pays a cost $c(s,a) \in [0,1]$, and the next state $s' \in S$ is chosen with probability $P(s' \mid s, a)$. Note that to simplify the presentation we avoid addressing the goal state explicitly: we assume that the probability of reaching the goal state by playing action $a$ at state $s$ is $1 - \sum_{s' \in S} P(s' \mid s, a)$.
We now review planning in a known SSP instance. Under certain assumptions that we shall briefly discuss, the optimal behaviour of the agent, i.e., the policy that minimizes the expected total cost of reaching the goal state from any state, is a stationary, deterministic and proper policy. A stationary and deterministic policy is a mapping $\pi : S \mapsto A$ that selects action $\pi(s)$ whenever the agent is at state $s$. A proper policy is defined as follows.
Definition 1 (Proper and Improper Policies).
A policy $\pi$ is proper if playing $\pi$ reaches the goal state with probability $1$ when starting from any state. A policy is improper if it is not proper.
Any policy $\pi$ induces a cost-to-go function $J^{\pi} : S \mapsto [0, \infty]$ defined as $J^{\pi}(s) = \lim_{T \to \infty} \mathbb{E}^{\pi}\big[ \sum_{t=1}^{T} c(s_t, \pi(s_t)) \mid s_1 = s \big]$, where the expectation is taken w.r.t. the random sequence of states generated by playing according to $\pi$ when the initial state is $s$. For a proper policy $\pi$, since the number of states is finite, it follows that $J^{\pi}(s)$ is finite for all $s \in S$. However, note that $J^{\pi}(s)$ may be finite even if $\pi$ is improper. We additionally denote by $T^{\pi}(s)$ the expected time it takes $\pi$ to reach $g$ starting at $s$; in particular, if $\pi$ is proper then $T^{\pi}(s)$ is finite for all $s$, and if $\pi$ is improper there must exist some $s$ such that $T^{\pi}(s) = \infty$. In this work we assume the following about the SSP model.
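To make these definitions concrete, the following sketch evaluates a fixed proper policy by solving the linear systems $J^{\pi} = c_{\pi} + P_{\pi} J^{\pi}$ and $T^{\pi} = \mathbf{1} + P_{\pi} T^{\pi}$ (the transition matrix excludes the goal state, so $I - P_{\pi}$ is invertible for a proper policy). This is a minimal illustration under an assumed array layout, not code from the paper.

```python
import numpy as np

def evaluate_proper_policy(P, c, pi):
    """Compute the cost-to-go J^pi and expected hitting time T^pi of a proper policy.

    P  : array of shape (S, A, S), transitions among non-goal states only;
         the missing mass 1 - P[s, a].sum() is the probability of reaching the goal.
    c  : array of shape (S, A), instantaneous costs in [0, 1].
    pi : array of shape (S,), the action chosen in each state.
    """
    S = c.shape[0]
    P_pi = P[np.arange(S), pi, :]              # (S, S) transition matrix under pi
    c_pi = c[np.arange(S), pi]                 # (S,) cost vector under pi
    I = np.eye(S)
    J = np.linalg.solve(I - P_pi, c_pi)        # J^pi = c_pi + P_pi J^pi
    T = np.linalg.solve(I - P_pi, np.ones(S))  # T^pi = 1 + P_pi T^pi
    return J, T
```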
Assumption 1.
There exists at least one proper policy.
Under Assumption 1, we have the following important properties of proper policies. In particular, the first result shows that a policy is proper if and only if its cost-to-go function satisfies the Bellman equations. The second result proves that a policy is optimal if and only if it satisfies the Bellman optimality criterion. Note that both results additionally assume that every improper policy has infinite cost-to-go from some state.
Lemma 2.1 (Bertsekas and Tsitsiklis, 1991, Lemma 1).
Suppose that Assumption 1 holds and that for every improper policy $\pi'$ there exists at least one state $s \in S$ such that $J^{\pi'}(s) = \infty$. Let $\pi$ be any policy. Then:
(i) If there exists some $J : S \mapsto \mathbb{R}$ such that $J(s) \ge c(s, \pi(s)) + \sum_{s' \in S} P(s' \mid s, \pi(s)) J(s')$ for all $s \in S$, then $\pi$ is proper. Moreover, it holds that $J^{\pi}(s) \le J(s)$ for all $s \in S$.
(ii) If $\pi$ is proper then $J^{\pi}$ is the unique solution to the equations $J(s) = c(s, \pi(s)) + \sum_{s' \in S} P(s' \mid s, \pi(s)) J(s')$ for all $s \in S$.
Lemma 2.2 (Bertsekas and Tsitsiklis, 1991, Proposition 2).
Under the conditions of Lemma 2.1, the optimal policy is stationary, deterministic, and proper. Moreover, a policy $\pi$ is optimal if and only if it satisfies the Bellman optimality equations for all $s \in S$:
$$J^{\pi}(s) = \min_{a \in A} \Big\{ c(s, a) + \sum_{s' \in S} P(s' \mid s, a)\, J^{\pi}(s') \Big\}, \qquad \pi(s) \in \arg\min_{a \in A} \Big\{ c(s, a) + \sum_{s' \in S} P(s' \mid s, a)\, J^{\pi}(s') \Big\}. \tag{1}$$
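As a concrete illustration of Eq. (1), the following sketch runs value iteration on a known SSP instance until the Bellman residual is small and extracts a greedy (stationary, deterministic) policy. It is only a minimal example under the data layout assumed earlier, not the planning routine used in the paper.

```python
import numpy as np

def ssp_value_iteration(P, c, tol=1e-8, max_iters=100_000):
    """Approximate the optimal cost-to-go and a greedy policy for a known SSP.

    P : (S, A, S) transitions among non-goal states; leftover mass goes to the goal.
    c : (S, A) instantaneous costs in [0, 1].
    """
    S, A = c.shape
    J = np.zeros(S)                        # the goal's value is 0 and is implicit
    for _ in range(max_iters):
        Q = c + P @ J                      # Q[s, a] = c(s, a) + sum_s' P(s'|s,a) J(s')
        J_new = Q.min(axis=1)              # Bellman optimality backup, Eq. (1)
        if np.max(np.abs(J_new - J)) < tol:
            J = J_new
            break
        J = J_new
    return J, Q.argmin(axis=1)             # greedy policy: pi(s) in argmin_a Q[s, a]
```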
In this work we are not interested in approximating the optimal policy overall, but rather the best proper policy. In this case the second requirement in the lemmas above, that every improper policy has infinite cost-to-go from some state, can be circumvented in the following way (Bertsekas and Yu, 2013). First, note that this requirement is trivially satisfied when all instantaneous costs are strictly positive. Then, one can perturb the instantaneous costs by adding a small positive cost $\epsilon > 0$ to every state-action pair. After this perturbation, all proper policies remain proper, and every improper policy has infinite cost-to-go from some state (as all costs are positive). In the modified MDP, we apply Lemma 2.2 and obtain an optimal policy that is stationary, deterministic and proper. Taking the limit as $\epsilon \to 0$, this policy and its cost-to-go function converge to the optimal proper policy of the original model, which is also stationary and deterministic, and to its cost-to-go function. We use this observation to obtain Corollaries 2.5 and 2.6 below that only require Assumption 1 to hold.
Learning formulation.
We assume that the costs are deterministic and known to the learner, and the transition probabilities are fixed but unknown to the learner. The learner interacts with the model in episodes: each episode starts at the initial state $s_{\mathrm{init}}$, and ends when the learner reaches the goal state (note that she might never reach it). Success is measured by the learner's regret over $K$ such episodes, that is, the difference between her total cost over the $K$ episodes and the total expected cost of the optimal proper policy:
$$R_K = \sum_{k=1}^{K} \sum_{i=1}^{I^k} c\big(s_i^k, a_i^k\big) \;-\; K \cdot \min_{\pi \in \Pi_{\mathrm{proper}}} J^{\pi}(s_{\mathrm{init}}),$$
where $I^k$ is the time it takes the learner to complete episode $k$ (which may be infinite), $\Pi_{\mathrm{proper}}$ is the set of all stationary, deterministic and proper policies (which is nonempty by Assumption 1), and $(s_i^k, a_i^k)$ is the $i$-th state-action pair visited during episode $k$. In the case that $I^k$ is infinite for some $k$, we define $R_K = \infty$.
We denote the optimal proper policy by $\pi^\star$, i.e., $J^{\pi^\star}(s) = \min_{\pi \in \Pi_{\mathrm{proper}}} J^{\pi}(s)$ for all $s \in S$. Moreover, let $B_\star > 0$ be an upper bound on the values of $J^{\pi^\star}$ and let $T_\star > 0$ be an upper bound on the times $T^{\pi^\star}$, i.e., $B_\star \ge \max_{s \in S} J^{\pi^\star}(s)$ and $T_\star \ge \max_{s \in S} T^{\pi^\star}(s)$.
2.1 Summary of our results
In Section 3 we present our Hoeffding-based algorithms (Algorithms 1 and 3) and their analysis. In Section 4 we show our Bernstein-based algorithm (Algorithm 2) for which we prove improved regret bounds. In addition, we give a lower bound on the learner’s regret showing that Algorithm 2 is near-optimal (see Appendix C).
The learner must reach the goal state, otherwise she suffers infinite regret. Therefore, she has to trade off two objectives: reaching the goal state and minimizing the cost. Under the following assumption, the two objectives essentially coincide.
Assumption 2.
All costs are positive, i.e., there exists $c_{\min} > 0$ such that $c(s,a) \ge c_{\min}$ for every $(s,a) \in S \times A$.
This assumption allows us to upper bound the running time of the algorithm by its total cost, up to a factor of $1/c_{\min}$. In particular, it guarantees that any policy that does not reach the goal state has infinite cost, so any algorithm with bounded regret has to reach the goal state. We eventually relax Assumption 2 using a technique similar to that of Bertsekas and Yu (2013): we add a small positive perturbation to the instantaneous costs and run our algorithms on the model with the perturbed costs. This provides a regret bound that scales with the expected running time of the optimal policy.
We now summarize our results. For ease of comparison, we first present our regret bounds for both the Hoeffding- and Bernstein-based algorithms when Assumption 2 holds, and subsequently show the regret bounds of both algorithms for the general case. In order to simplify the presentation of our results, we make mild assumptions on the magnitudes of $|S|$, $|A|$ and $K$ throughout, and introduce a shorthand for the logarithmic factors that appear in our bounds. The complete proofs of all statements are found in the supplementary material.
Positive costs.
The following results hold when Assumption 2 holds (recall that we always assume Assumption 1). In particular, when this assumption holds the optimal policy overall is proper (Lemma 2.2), hence the regret bounds below are with respect to the best overall policy.
Theorem 2.3.
Suppose that Assumption 2 holds. With probability at least $1-\delta$ the regret of Algorithm 3 is bounded as follows:
The main issue with the regret bound in Theorem 2.3 is that it scales polynomially with $1/c_{\min}$, which cannot be avoided regardless of how large $K$ is with respect to the other parameters. This problem is alleviated in Algorithm 2, which uses the tighter Bernstein-based confidence bounds.
Theorem 2.4.
Assume that Assumption 2 holds. With probability at least $1-\delta$ the regret of Algorithm 2 is bounded as follows:
Note that when $K$ is sufficiently large, the regret bound above scales as $\widetilde{O}\big(B_\star |S| \sqrt{|A| K}\big)$, thus obtaining a near-optimal rate.
Arbitrary costs. Recall that in this case we can no longer assume that the optimal policy is proper. Therefore, the regret bounds below are in comparison to the best proper policy. Assumption 2 can be easily alleviated by adding a small fixed cost $\epsilon > 0$ to the cost of all state-action pairs. Following this perturbation of the costs, we obtain the regret bounds of Theorems 2.3 and 2.4 with $c_{\min} = \epsilon$ and with $B_\star$ replaced by $B_\star + \epsilon T_\star$, and the learner also suffers an additional cost of $\epsilon T_\star K$ due to the misspecification of the model caused by the perturbation. Picking $\epsilon$ to balance these terms gives the following corollaries.
Corollary 2.5.
Running Algorithm 3 using costs perturbed by $\epsilon$ (for a suitable choice of $\epsilon$) gives the following regret bound with probability at least $1-\delta$:
Corollary 2.6.
Running Algorithm 2 using costs perturbed by $\epsilon$ (for a suitable choice of $\epsilon$) gives the following regret bound with probability at least $1-\delta$:
Moreover, when the algorithm knows $B_\star$ and $T_\star$, choosing $\epsilon$ accordingly gives a near-optimal regret bound of $\widetilde{O}\big(B_\star |S| \sqrt{|A| K}\big)$.
Lower bound. In Appendix C we show that Corollary 2.6 is nearly tight using the following theorem.
Theorem 2.7.
There exists an SSP problem instance, with all costs equal to $1$, in which the expected regret of any learner after $K$ episodes satisfies $\mathbb{E}[R_K] = \Omega\big(B_\star \sqrt{|S| |A| K}\big)$.
3 Hoeffding-type Confidence Bounds
We start with a simpler case in which $B_\star$ is known to the learner. In Section 3.2 we remove this assumption at the cost of an additional logarithmic factor in the regret bound. For now, we prove the following bound on the learner's regret.
Theorem 3.1.
Suppose that Assumption 2 holds. With probability at least $1-\delta$ the regret of Algorithm 1 is bounded as follows:
Our algorithm follows the known concept of optimism in the face of uncertainty. That is, it maintains confidence sets that contain the true transition function with high probability, and picks an optimistic optimal policy, i.e., a policy that minimizes the expected cost over all policies and all transition functions in the current confidence set. The computation of the optimistic optimal policy can be done efficiently, as shown by Tarbouriech et al. (2019): construct an augmented MDP whose state space is $S$ and whose action set consists of tuples $(a, \tilde{P})$, where $a \in A$ and $\tilde{P}$ is any transition function such that
(2)
where $\bar{P}$ is the empirical estimate of the transition function $P$. It can be shown that the optimistic policy and the optimistic model, i.e., those that minimize the expected total cost over all policies and feasible transition functions, correspond to the optimal policy of the augmented MDP.
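One way to carry out the inner minimization over an $L^1$ confidence ball (for fixed value estimates) is the extended-value-iteration step of UCRL2, adapted to costs: move as much probability mass as the $L^1$ budget allows onto the cheapest successor, which here is the goal state (whose value is zero), removing it from the most expensive successors. The sketch below is an illustration of that step; the function name and array layout are assumptions, not the paper's implementation.

```python
import numpy as np

def optimistic_transition_l1(p_hat, radius, J):
    """Within an L1 ball of size `radius` around the empirical distribution `p_hat`
    over non-goal states, pick the transition vector minimizing the expected value
    of J (the goal state has value 0 and receives any removed mass).
    """
    p = p_hat.copy()
    budget = radius / 2.0                  # moving mass m to the goal uses 2m of L1 radius
    for s_next in np.argsort(-J):          # remove mass from the costliest successors first
        if budget <= 0:
            break
        move = min(budget, p[s_next])
        p[s_next] -= move
        budget -= move
    return p                               # leftover mass 1 - p.sum() flows to the goal
```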
To ensure that the algorithm reaches the goal state in every episode, we define a state-action pair as known if the number of visits to this pair is at least a certain threshold, and as unknown otherwise. We show that, with high probability, the optimistic policy chosen by the algorithm is proper once all state-action pairs are known. However, when some pairs are still unknown, our chosen policies may be improper. This implies that the usual strategy in episodic RL of keeping the policy fixed throughout an episode will fail. Consequently, our algorithm recomputes its policy at the start of every episode and also every time it reaches an unknown state-action pair.
Formally, we split the time into intervals. The first interval begins at the first time step, and every interval ends when the learner reaches the goal state or reaches a state whose chosen action (under the policy currently followed by the learner) forms an unknown state-action pair. Recall that once all state-action pairs are known, the optimistic policy eventually reaches the goal state. Therefore, recomputing the optimistic policy at the end of every interval ensures that the algorithm eventually reaches the goal state with high probability. Note that the total number of intervals is at most the number of visits to unknown state-action pairs plus the number of episodes.
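The interval structure translates into a simple control loop: recompute the optimistic policy whenever an interval ends, i.e., when the goal is reached or an unknown pair is visited. The following skeleton is a hypothetical rendering of that loop; `plan_optimistic`, `is_known`, and the environment interface are placeholders, not the paper's actual code.

```python
def run_episode(env, counts, plan_optimistic, is_known):
    """Play one episode of the Hoeffding-based algorithm (Algorithm 1, schematically).

    counts          : visit counts N(s, a), updated in place.
    plan_optimistic : returns a policy optimal w.r.t. the current confidence set.
    is_known        : tells whether a state-action pair has enough visits to be 'known'.
    """
    policy = plan_optimistic(counts)          # a new interval starts with a fresh policy
    s, total_cost = env.reset(), 0.0
    while s != env.goal:
        a = policy[s]
        was_unknown = not is_known(s, a)
        s_next, cost = env.step(a)
        counts[s, a] += 1
        total_cost += cost
        if was_unknown:                       # the interval ends at an unknown pair:
            policy = plan_optimistic(counts)  # recompute the optimistic policy
        s = s_next
    return total_cost
```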
Observation 3.2.
The total number of intervals, , is
3.1 Analysis
The proof of Theorem 3.1 begins by defining the “good event” in which our confidence sets contain the true transition function and the total cost in every interval is bounded. This in turn implies that all episodes end in finite time. We prove that the good event holds with high probability.
Then, independently, we give a high-probability bound on the regret of the algorithm when the good event holds. To do so, recall that at the beginning of every interval, the learner computes an optimistic policy by minimizing over all policies and over all transition functions within the current confidence set; we refer to the minimizing transition function as the optimistic model. A key observation is that, by the definition of our confidence sets, the optimistic model always assigns some positive probability of transitioning directly to the goal state from every state-action pair. This implies that all policies are proper in the optimistic model, and that the cost-to-go function of the optimistic policy with respect to the optimistic model is finite. By Lemma 2.1, the following Bellman optimality equations hold for all $s \in S$:
(3)
High probability events.
For every interval , we let denote the event that the confidence set for interval contains the true transition function . Formally, let denote the empirical estimate of the transition function at the beginning of interval , let denote the number of visits to state-action pair up to interval (not including), and let be the number of visits to during interval . Then we say that holds if for all , we have ()
(4)
In the following lemma we show that, with high probability, the events hold and that the total cost in each interval is bounded. Combining this with Observation 3.2, we get that all episodes terminate within a finite number of steps with high probability.
Lemma 3.3.
With probability at least , for all intervals simultaneously, we have that holds and that , where denotes the length of interval , is the observed state at time of interval and is the chosen action. This implies that the total number of steps of the algorithm is
Proof sketch..
The events hold with high probability due to standard concentration inequalities, and thus it remains to address the high probability bound on the total cost within each interval.
This proof consists of three parts. In the first, we show that when the event of the interval occurs, the optimistic cost-to-go is at most $B_\star$ at all states, due to the optimistic nature of the computation of the policy. In the second part, we show that had all state-action pairs been known, this would imply that the true cost-to-go of the chosen policy is bounded by a constant multiple of $B_\star$ at all states. That is, when all state-action pairs are known, not only is the chosen policy proper in the true model, but its expected cumulative cost is at most a constant multiple of $B_\star$.
The third part of the proof deals with the general case, when not all state-action pairs are known. Fix some interval. Since the interval ends when we reach an unknown state-action pair, all but the first state-action pair visited during the interval must be known. For this first state-action pair, it follows from the Bellman equations (Eq. 3) and from the bound on the optimistic cost-to-go that the chosen policy never picks an action whose instantaneous cost is larger than that bound. Therefore, the cost of this first, possibly unknown, state-action pair is at most $B_\star$, and we focus on bounding the total cost in the remaining time steps with high probability.
To that end, we define the following modified MDP: every state whose chosen action forms an unknown state-action pair is contracted into the goal state. Consider the cost-to-go of the chosen policy in the modified MDP with respect to the optimistic transitions; it is clearly at most the optimistic cost-to-go in the original model, and hence at most $B_\star$. Moreover, since all states with unknown chosen actions were contracted into the goal state, all remaining state-action pairs in the modified MDP are known. Therefore, by the second part of the proof, the cost-to-go of the chosen policy in the modified MDP with respect to the true transitions is bounded at all states. Note that reaching the goal state in the modified MDP is equivalent to reaching either the goal state or an unknown state-action pair in the true model, hence the latter argument shows that the total expected cost of doing so is bounded as well. We further obtain the high-probability bound by a probabilistic amplification argument using the Markov property of the MDP. ∎
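The contraction used in this proof can be described concretely: any transition that would land in a state whose current action is unknown is redirected to the goal, i.e., its probability mass is simply dropped, since the goal is implicit. A hypothetical sketch under the array layout used earlier:

```python
import numpy as np

def contract_unknown_states(P, policy, is_known):
    """Build the transition tensor of the modified MDP in which every state s with
    (s, policy[s]) unknown is merged into the goal state.

    P        : (S, A, S) transitions among non-goal states (missing mass = goal).
    policy   : (S,) current policy.
    is_known : boolean array of shape (S, A).
    """
    S = P.shape[0]
    contracted = np.array([not is_known[s, policy[s]] for s in range(S)])
    P_mod = P.copy()
    P_mod[:, :, contracted] = 0.0   # mass into contracted states now flows to the goal
    return P_mod
```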
Regret analysis.
In what follows, instead of bounding , we bound , where is the indicator function. Note that according to Lemma 3.3, we have that with high probability.
The definition of allows the analysis to disentangle two dependent probabilistic events. The first is the intersection of the events which is dealt with in Lemma 3.3. The second holds when, for a fixed policy, the costs suffered by the learner do not deviate significantly from their expectation. In the following lemma we bound .
Lemma 3.4.
With probability at least , we have
Here we only explain how to interpret the resulting bound. The term (1) bounds the total cost spent in intervals that ended in unknown state-action pairs (it does not depend on ). The term is at most when Lemma 3.3 holds, and then the dominant term in Lemma 3.4 becomes . Theorem 3.1 is finally obtained by applying a union bound on Lemmas 3.3 and 3.4 and using Lemma 3.3 to bound .
3.2 Unknown Cost Bound
In this section we relax the assumption that $B_\star$ is known to the learner. Instead, we maintain an estimate of $B_\star$ that doubles every time the cost accumulated in the current interval reaches it. By Lemma 3.3, with high probability, the estimate never grows much beyond $B_\star$. We end an interval as before (once the goal state or an unknown state-action pair is reached), but also when the estimate is doubled. The algorithm for this case is presented in Appendix A (Algorithm 3). Since the estimate changes over time, every state-action pair can become known once for every different value of the estimate.
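A minimal sketch of this doubling mechanism (the variable names are illustrative; the initial value of the estimate and the known-threshold are parameters of the analysis and are not specified here):

```python
def update_cost_estimate(B_tilde, interval_cost):
    """Double the running estimate of B_star whenever the cost accumulated in the
    current interval reaches it; return the new estimate and whether the current
    interval must end because of the doubling."""
    if interval_cost >= B_tilde:
        return 2 * B_tilde, True     # estimate doubles, current interval ends
    return B_tilde, False
```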
Observation 3.5.
When is unknown to the learner, the number of times a state-action pair can become known is at most . The number of intervals is
Lemma 3.6.
When is unknown, with probability at least , for all intervals simultaneously, we have that holds and that . This implies that the total number of steps of the algorithm is
The analysis follows that of Algorithm 1. In particular, Lemma 3.4 still holds (with the estimate in place of $B_\star$), and together with Lemma 3.6 it implies Theorem 2.3.
4 Bernstein-type Confidence Bounds
Algorithm 1 has two drawbacks. The first is the use of Hoeffding-style confidence bounds, which we improve to Bernstein-style confidence bounds. The second is the number of times the optimistic optimal policy is computed. In this section we propose to compute it in a way similar to UCRL2, i.e., only when the number of visits to some state-action pair is doubled. Note that this change also eliminates the need to know or to estimate $B_\star$.
The algorithm is presented in Algorithm 2. It consists of epochs. The first epoch starts at the first time step, and each epoch ends once the number of visits to some state-action pair is doubled. An optimistic policy is computed at the end of every epoch using (empirical) Bernstein confidence bounds. In contrast to Algorithm 1, Algorithm 2 defines a confidence range for each state, action, and next state separately, around its empirical estimate (i.e., we use an "$L^\infty$ ball" rather than an "$L^1$ ball" around the empirical estimates). This allows us to disentangle the computation of the optimistic policy from the computation of the optimistic model. Indeed, the computation of the optimistic model becomes very easy: one simply has to maximize the probability of transitioning directly to the goal state at every state-action pair, which means minimizing the probability of transitioning to all other states by setting each to the lowest possible value of its confidence range. This results in the following formula for the optimistic transition function:
(5)
where the subtracted term is the per-entry (empirical Bernstein) confidence width and the remaining probability mass is assigned to the goal state. The optimistic policy is then the optimal policy in the SSP model defined by this optimistic transition function.
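A schematic rendering of this step: shrink every empirical next-state probability by its per-entry confidence width (clipping at zero), so that the leftover mass, which implicitly flows to the goal state, is as large as possible. The exact width used in the paper involves empirical-Bernstein terms whose constants are omitted here; the formula below is an illustrative stand-in, not the paper's Eq. (5).

```python
import numpy as np

def optimistic_transitions_bernstein(p_hat, n_visits, delta):
    """Per-entry optimistic transitions: subtract an (illustrative) empirical-Bernstein
    confidence width from each estimated probability and clip at zero. The removed
    mass implicitly becomes probability of moving straight to the goal state.

    p_hat    : (S,) empirical next-state distribution of a fixed (s, a) over non-goal states.
    n_visits : number of samples of (s, a) used to build p_hat (at least 1).
    delta    : confidence parameter.
    """
    log_term = np.log(max(n_visits, 2) / delta)
    width = np.sqrt(2.0 * p_hat * (1.0 - p_hat) * log_term / n_visits) + log_term / n_visits
    return np.clip(p_hat - width, 0.0, None)   # remaining mass 1 - sum goes to the goal
```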
4.1 Analysis
In this section we prove Theorem 2.4. We start by showing that our new confidence sets contain the true transition function with high probability, which implies that each episode ends in finite time with high probability. Consequently, we are able to bound the regret through summation of our confidence bounds.
We once again distinguish between known and unknown state-action pairs, similarly to Algorithm 1. A state-action pair becomes known at the end of an epoch if the total number of visits to it has passed a certain threshold at some time step during the epoch. Note that at the end of the epoch, the visit count of the pair may be strictly larger than this threshold, but by the definition of our algorithm it is larger by at most a factor of two. Furthermore, we split each epoch into intervals, similarly to what we did in Section 3. The first interval starts at the first time step, and each interval ends once (1) the total cost in the interval accumulates past a certain threshold; (2) an unknown state-action pair is reached; (3) the current episode ends; or (4) the current epoch ends. We have the following observation.
Observation 4.1.
Let denote the cost of the learner after intervals. Observe that the total cost in each interval is at least unless the interval ends in the goal state, in an unknown state-action pair or the epoch ends. Thus the total number of intervals satisfies
and the total time satisfies
Recall that in the analysis of Algorithm 1 we show that once all state-action pairs are known, the optimistic policies generated by the algorithm are proper in the true MDP. The same holds true for Algorithm 2, yet we never prove this directly. Instead, our proof goes as follows (we neglect lower-order terms here). We prove that the cost accumulated by the learner during the first intervals, however many there are, is at most of order $B_\star |S| \sqrt{|A| K}$ with high probability, as long as no more than $K$ episodes have been completed during these intervals. We notice that once all state-action pairs are known, the total cost in each interval is at least $B_\star$ (ignoring intervals that end with the end of an epoch or an episode), which implies that the total number of intervals is bounded in terms of the total cost divided by $B_\star$. This allows us to get a bound on the total cost that is independent of the number of intervals, by solving an inequality in which the total cost appears on both sides. From this, and since the instantaneous costs are strictly positive (by Assumption 2), it must be that the learner eventually completes all $K$ episodes; i.e., there must be a time from which Algorithm 2 generates only proper policies.
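The self-bounding step at the end of this argument is elementary; for completeness, here is the generic calculation (with $x$ standing for the total cost and $a, b \ge 0$ collecting the remaining factors):
\[
x \le a\sqrt{x} + b \;\Longrightarrow\; \sqrt{x} \le \frac{a + \sqrt{a^2 + 4b}}{2} \;\Longrightarrow\; x \le \Big(\frac{a + \sqrt{a^2 + 4b}}{2}\Big)^2 \le a^2 + 2b,
\]
where the last step uses $(u+v)^2 \le 2u^2 + 2v^2$.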
Notation.
The epoch that interval belongs to is denoted by , other notations are as in Section 3.1. Note that since the optimistic policy is computed at the end of an epoch and not at the end of an interval, it follows that and . The trajectory visited in interval is denoted by , where is the action taken in , and is the length of the interval. In addition, the concatenation of the trajectories of the intervals up to and including interval is denoted by , that is .
High probability events.
Throughout the analysis we denote . For every interval we let denote the event that the confidence set for epoch contains the actual transition function . Formally, if holds then for all , we have (denote , )
(6)
In the following lemma we show that the events hold with high probability.
Lemma 4.2.
With probability at least , holds for all intervals simultaneously.
Regret analysis.
In the following section, instead of bounding , we bound for any number of intervals . This implies Theorem 2.4 by the following argument. Lemma 4.2 implies that with high probability for any number of intervals ( is the true regret within the first intervals). In particular, when is the number of intervals in which the first episodes elapse, this implies Theorem 2.4 (we show that the learner indeed completes these episodes).
To bound , we use the next lemma to decompose into two terms which we bound independently.
Lemma 4.3.
It holds that where
The lemma breaks the regret down into two terms. The first term accounts for the number of times the learner changes her policy in the middle of an episode, which is at most the number of epochs. The second term sums the errors between the cost-to-go of the observed next state and its estimated expectation.
Indeed, the first term is related to the total number of epochs, which is bounded via the following lemma.
Lemma 4.4.
It holds that
The next lemma shows that does not deviate from significantly.
Lemma 4.5.
With probability at least ,
The key property of the lemma is that the deviations between this term and its corresponding expectation are of order and do not scale with .
To prove the lemma, we recall that an interval ends, at the latest, at the first time step in which the accumulated cost in the interval surpasses the threshold. We show in our analysis that the optimistic cost-to-go is at most $B_\star$ at all states, due to the optimistic computation of the policy. Therefore, the policy never picks an action whose instantaneous cost is more than $B_\star$. This implies that the total cost within each interval is at most a constant multiple of $B_\star$. Then, we use the Bellman equations to bound the deviation within an interval by the order of the total cost in the interval, and the lemma follows by an application of Azuma's concentration inequality.
Lemma 4.6 below bounds for every interval by a sum of the confidence bounds used in Algorithm 2.
Lemma 4.6.
For every interval ,
(7)
where is the empirical variance defined as and .
The next step is the part of our proof in which our analysis departs from that of Algorithm 1. Note that when the event holds, each term in Eq. 7 can be bounded individually. Using such a per-time-step bound would result in a bound similar to that of Theorem 2.3. However, this bound is loose, due to the following intuitive argument. Suppose that we replace the optimistic cost-to-go with the true cost-to-go function of the chosen policy in the definition of the variance. Note that from the Bellman equations (Eq. 1) the cost-to-go decreases in expectation on consecutive time steps, and hence we surmise that the variance would also decrease in expectation on consecutive time steps. A similar argument holds when, as in reality, we use the optimistic cost-to-go, because all but one of the state-action pairs in the interval are known, and the optimistic cost-to-go is a "close enough" approximation of the true one on known state-action pairs, since these pairs have been sampled sufficiently many times. Indeed, in Lemma 4.7 we use the technique of Azar et al. (2017) to show that $B_\star^2$ (up to a constant) bounds the expected sum of the variances over the time steps of an interval.
Lemma 4.7.
Armed with Lemma 4.7, we apply some algebraic manipulation to Eq. 7 and sum over all intervals, which gives the next lemma.
Lemma 4.8.
With probability at least ,
Theorem 2.4 is obtained by first applying a union bound on Lemmas 4.2, 4.8 and 4.5, plugging the bounds of Lemmas 4.4, 4.5 and 4.8 into Lemma 4.3, and bounding the number of intervals and the total time using Observation 4.1. This results in a quadratic inequality in the learner's total cost, and solving it yields the theorem.
Acknowledgements
HK is supported by the Israeli Science Foundation (ISF) grant 1595/19.
References
- Auer et al. (2002) Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E Schapire. The nonstochastic multiarmed bandit problem. SIAM journal on computing, 32(1):48–77, 2002.
- Azar et al. (2017) Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pages 263–272. JMLR.org, 2017.
- Bartlett and Tewari (2009) Peter L Bartlett and Ambuj Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 35–42. AUAI Press, 2009.
- Bertsekas and Tsitsiklis (1991) Dimitri P Bertsekas and John N Tsitsiklis. An analysis of stochastic shortest path problems. Mathematics of Operations Research, 16(3):580–595, 1991.
- Bertsekas and Yu (2013) Dimitri P Bertsekas and Huizhen Yu. Stochastic shortest path problems under weak conditions. Lab. for Information and Decision Systems Report LIDS-P-2909, MIT, 2013.
- Cesa-Bianchi and Lugosi (2006) Nicolo Cesa-Bianchi and Gábor Lugosi. Prediction, learning, and games. Cambridge university press, 2006.
- Dann et al. (2017) Christoph Dann, Tor Lattimore, and Emma Brunskill. Unifying PAC and regret: Uniform PAC bounds for episodic reinforcement learning. In Advances in Neural Information Processing Systems, pages 5713–5723, 2017.
- Jaksch et al. (2010) Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563–1600, 2010.
- Neu et al. (2010) Gergely Neu, András György, and Csaba Szepesvári. The online loop-free stochastic shortest-path problem. In COLT 2010 - The 23rd Conference on Learning Theory, Haifa, Israel, June 27-29, 2010, pages 231–243, 2010.
- Neu et al. (2012) Gergely Neu, Andras Gyorgy, and Csaba Szepesvári. The adversarial stochastic shortest path problem with unknown transition probabilities. In Artificial Intelligence and Statistics, pages 805–813, 2012.
- Osband et al. (2016) Ian Osband, Benjamin Van Roy, and Zheng Wen. Generalization and exploration via randomized value functions. In International Conference on Machine Learning, pages 2377–2386, 2016.
- Rosenberg and Mansour (2019a) Aviv Rosenberg and Yishay Mansour. Online stochastic shortest path with bandit feedback and unknown transition function. In Advances in Neural Information Processing Systems, pages 2209–2218, 2019a.
- Rosenberg and Mansour (2019b) Aviv Rosenberg and Yishay Mansour. Online convex optimization in adversarial markov decision processes. In International Conference on Machine Learning, pages 5478–5486, 2019b.
- Tarbouriech et al. (2019) Jean Tarbouriech, Evrard Garcelon, Michal Valko, Matteo Pirotta, and Alessandro Lazaric. No-regret exploration in goal-oriented reinforcement learning, 2019.
- Weissman et al. (2003) Tsachy Weissman, Erik Ordentlich, Gadiel Seroussi, Sergio Verdu, and Marcelo J Weinberger. Inequalities for the l1 deviation of the empirical distribution. Hewlett-Packard Labs, Tech. Rep, 2003.
- Zanette and Brunskill (2019) Andrea Zanette and Emma Brunskill. Tighter problem-dependent regret bounds in reinforcement learning without domain knowledge using value function bounds. In International Conference on Machine Learning, pages 7304–7312, 2019.
- Zimin and Neu (2013) Alexander Zimin and Gergely Neu. Online learning in episodic markovian decision processes by relative entropy policy search. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 1583–1591, 2013.
Appendix A Algorithm
Appendix B Proofs
B.1 Proofs for Section 3.1
B.1.1 Proof of Lemma 3.3
Lemma (restatement of Lemma 3.3).
With probability at least , holds and for all intervals simultaneously. This implies that the total number of steps of the algorithm is
Lemma B.1.
The event holds for all intervals simultaneously with probability at least .
Proof.
Fix a state and an action . Consider an infinite sequence of draws from the distribution . By Theorem D.2 we get that for a prefix of length of this sequence (that is )
holds with probability , where is the empirical distribution defined by the draws . We repeat this argument for every prefix of and for every state-action pair, with . Then from the union bound we get that holds for all intervals simultaneously with probability at least . ∎
Lemma B.2.
Let be an interval. If holds then for every .
Proof.
Tarbouriech et al. (2019) show that all the transition functions in the confidence set of Eq. 4 can be combined into a single augmented MDP. The optimal policy of the augmented MDP can be found efficiently, e.g., with Extended Value Iteration. The optimistic policy is the optimal policy in the augmented MDP. It minimizes over all policies and feasible transition functions, for all states simultaneously (following Bertsekas and Tsitsiklis, 1991). Since the event holds, the real transition function is in the confidence set and is therefore also considered in the minimization. Thus for all . Finally, by the definition of . ∎
Lemma B.3.
Let be an interval and be a known state-action pair. If holds then
Proof.
By the definition of the confidence set
where the last inequality follows because is decreasing, and since is known. Similarly, since holds we also have that
and the lemma follows by the triangle inequality. ∎
Lemma B.4.
Let be a policy and be a transition function. Denote the cost-to-go of with respect to by . Assume that for every , and that
Then, is proper (with respect to ), and it holds that for every .
Proof.
Consider the Bellman equations of with respect to transition function at some state (see Lemma 2.1), defined as
(8) |
Notice that by our assumptions and using Hölder's inequality,
Plugging this into Eq. 8, we obtain
Therefore, defining , then for all . The statement now follows by Lemma 2.1. ∎
Lemma B.5.
Let be a proper policy such that for some , for every . Then, the probability that the cost of to reach the goal state from any state is more than , is at most for all . Note that a cost of at most implies that the number of steps is at most .
Proof.
By Markov's inequality, the probability that accumulates cost of more than before reaching the goal state is at most . Iterating this argument, we get that the probability that accumulates cost of more than before reaching the goal state is at most for every integer . In general, for any , the probability that suffers a cost of more than is at most . ∎
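For concreteness, here is the amplification argument in symbols, with $B$ denoting the uniform bound on the cost-to-go assumed in the lemma (a symbol introduced only for this illustration): by Markov's inequality the cost to reach the goal exceeds $2B$ with probability at most $1/2$, and by the Markov property this can be iterated from whatever state is reached once the cost exceeds $2B$, so that
\[
\Pr\big[\text{cost of } \pi \text{ to reach the goal exceeds } 2mB\big] \;\le\; 2^{-m} \quad \text{for every integer } m \ge 0,
\]
and hence for any $\delta \in (0,1)$ the cost exceeds $2B \log_2(1/\delta)$ with probability at most $\delta$.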
For the next lemma we will need the following definitions. The trajectory visited in interval is denoted by where is the action taken in , and is the length of the interval. In addition, the concatenation of the trajectories in the intervals up to and including interval is denoted by .
Lemma B.6.
Let be an interval. For all , we have that
Proof.
Note that is determined given , and suppose that holds otherwise is . Also assume that or else the statement holds trivially.
Define the MDP in which every state such that is unknown is contracted into the goal state. Let be the transition function induced in by , and let be the cost-to-go of in with respect to . Similarly, define as the transition function induced in by , and as the cost-to-go of in with respect to . It is clear that for every , so by Lemma B.2, . Moreover, since all the states for which is unknown were contracted to the goal state, we can use Lemma B.3 to obtain for all :
(9) |
We can apply Lemma B.4 in and obtain that for every . Notice that reaching the goal state in is equivalent to reaching the goal state or an unknown state-action pair in , and also recall that all state-action pairs in the interval are known except for the first one. Thus, from Lemma B.5,
Since , our algorithm will never select an action whose instantaneous cost is larger than . Since the first state-action in the interval might not be known, its cost is at most , and therefore
Proof of Lemma 3.3.
From Lemma B.6, with probability at least , , and by the union bound this holds for all intervals simultaneously with probability at least . By Lemma B.1, with probability , holds for all intervals . Combining these two facts again by a union bound, we get that both holds and the cost of interval is at most simultaneously to all intervals with probability at least .
If the cost of all intervals is bounded (and therefore so is the length of each interval), we can use the bound on the number of intervals in Observation 3.2 to conclude that
B.1.2 Proof of Lemma 3.4
Lemma (restatement of Lemma 3.4).
With probability at least , we have
To analyze , we begin by plugging in the Bellman optimality equation of with respect to into . This allows us to decompose into three terms as follows.
(10)
(11)
(12)
Eq. 10 is a bound on the cost suffered from switching policies each time we visit an unknown state-action pair and is bounded by the following lemma.
Lemma B.7.
Proof.
Note that per interval is a telescopic sum which equals . Furthermore, for every two consecutive intervals one of the following occurs:
(i) If interval ended in the goal state then and Thus, using Lemma B.2 for the last inequality,
This happens at most times.
(ii) If interval ended in an unknown state then
This happens at most times. ∎
Lemma B.8.
It holds that
Proof.
Lemma B.9.
With probability at least ,
Proof.
Consider the infinite sequence of random variables
where is the interval containing time , and is the index of time step within interval . Notice that this is a martingale difference sequence, and by Lemma B.2. Now, we apply anytime Azuma’s inequality (Theorem D.1) to any prefix of the sequence . Thus, with probability at least , for every :
B.1.3 Proof of Theorem 3.1
Theorem (restatement of Theorem 3.1).
Suppose that 2 holds. With probability at least the regret of Algorithm 1 is bounded as follows:
Lemma B.10.
Assume that the number of steps in every interval is at most . Then for every and ,
Proof.
We claim that, by the assumption of the lemma, for every interval we have that . Indeed, if is unknown then and since the claim follows. If is known then and by our assumption the length of the interval, and in particular , is at most . Our statement then follows by Jaksch et al. (2010, Lemma 19). ∎
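The cited result is the standard pigeonhole-type bound on sums of inverse square-root visit counts. In the notation of Jaksch et al. (2010, Lemma 19): for any sequence $z_1, \dots, z_n \ge 0$ with $0 \le z_k \le Z_{k-1} := \max\{1, \sum_{i=1}^{k-1} z_i\}$,
\[
\sum_{k=1}^{n} \frac{z_k}{\sqrt{Z_{k-1}}} \;\le\; \big(\sqrt{2} + 1\big)\, \sqrt{Z_n}.
\]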
Proof of Theorem 3.1.
With probability at least , both Lemmas 3.3 and B.9 hold. Lemma 3.3 states that the length of every interval is at most , and Lemma B.10 obtains
(13)
where the last inequality follows from Jensen’s inequality and the fact that . Next, we sum the bounds of Lemmas B.7, B.8 and B.9 and use Eq. 13 to obtain
To finish the proof use Lemma 3.3 to bound . ∎
B.2 Proofs for Section 4.1
B.2.1 Proof of Lemma 4.2
Lemma (restatement of Lemma 4.2).
With probability at least , holds for all intervals simultaneously.
Proof.
Fix a triplet . Consider an infinite sequence of draws from the distribution and let . We apply Eq. 25 of Theorem D.3 with to a prefix of length of the sequence . Then divide Eq. 25 by and obtain that, after simplifying using the assumptions that and , Eq. 6 holds with probability . We repeat this argument for every prefix of and for every state-action-state triplet. Then from the union bound we get that holds for all intervals simultaneously with probability at least . ∎
B.2.2 Proof of Lemma 4.3
Lemma (restatement of Lemma 4.3).
It holds that
Lemma B.11.
Let be an interval. If holds then satisfies the Bellman equations in the optimistic model:
Proof.
Note that the Bellman equations hold in the optimistic model since, by the way we defined this model, there is a nonzero probability of transitioning to the goal state by any action from every state. Thus in the optimistic model every policy is a proper policy, and in particular Lemma 2.2 holds. ∎
Proof of Lemma 4.3.
By Lemma B.11, we can use the Bellman equations in the optimistic model to have the following interpretation of the costs for every interval and time :
(14)
We now write and substitute for each cost using Eq. 14 to get the lemma. ∎
B.2.3 Proof of Lemma 4.4
Lemma (restatement of Lemma 4.4).
Lemma B.12.
Let be an interval. If holds then for every .
Proof.
Denote by the transition function computed by Algorithm 2 at the beginning of epoch , and by the cost-to-go with respect to . We claim that for every proper policy and state , . Then, the lemma follows easily since .
Indeed, let and consider the Bellman equations of with respect to :
where the inequality follows because for every . This holds since is in the confidence set of Eq. 6 (as holds), and by the way is computed in Algorithm 2. Therefore, by Lemma 2.1 we obtain that for every as required. ∎
Proof of Lemma 4.4.
For every two consecutive intervals , denoting , we have one of the following:
(i)
(ii) If interval ended in an unknown state-action pair or since the cost reached the threshold, and we stay in the same epoch, then and . Thus
(iii) If interval ended by doubling the visit count to some state-action pair, then we start a new epoch. Thus, by Lemma B.12,
This happens at most times.
To conclude, we have
B.2.4 Proof of Lemma 4.5
Lemma (restatement of Lemma 4.5).
With probability at least , the following holds for all simultaneously.
Proof.
Consider the following martingale difference sequence defined by
The Bellman optimality equations of with respect to (Lemma B.11) obtains
where the inequality follows from Lemma B.12 and the fact that the total cost within each interval is bounded by construction. Therefore, we use anytime Azuma's inequality (Theorem D.1) to obtain that with probability at least :
B.2.5 Proof of Lemma 4.6
Lemma (restatement of Lemma 4.6).
For every interval and time , denote Then,
where is the empirical variance defined as
The next lemma gives a different interpretation to the confidence bounds of Eq. 6, and will be useful in the proofs that follow.
Lemma B.13.
Denote . When holds we have for any :
Proof.
Since holds we have for all that
This is a quadratic inequality in . Using the fact that implies with and , we have
where we used the inequality that holds for any and . Substituting back into Eq. 6 obtains
From a similar argument
Using the triangle inequality finishes the proof. ∎
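The quadratic-inequality step referred to above is the following elementary fact, used here with the relevant estimated quantity playing the role of $x$:
\[
x^2 \le a x + b \ \text{ with } a, b \ge 0 \;\Longrightarrow\; x \le \frac{a + \sqrt{a^2 + 4b}}{2} \le a + \sqrt{b}.
\]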
Proof of Lemma 4.6.
Denote , and . Think of the interval as an infinite continuous stochastic process, and note that, conditioned on , is a martingale difference sequence w.r.t , where is the trajectory of the learner from the beginning of the interval and up to and including time . This holds since, by conditioning on , is determined and is independent of the randomness generated during the interval. Note that is a stopping time with respect to which is bounded by . Hence by the optional stopping theorem which gets us
Furthermore, we have
where the first equality follows since , and and are probability distributions over whence does not depend on . The first inequality follows from Lemma B.13, and the second inequality from Jensen’s inequality, Lemma B.12, , and the definition of . ∎
B.2.6 Proof of Lemma 4.7
Lemma (restatement of Lemma 4.7).
For any interval ,
Lemma B.14.
Let be an interval and be a known state-action pair. If holds then for every
Proof.
By Lemma B.13 we have that
which gives the required bound because is decreasing, and is a known state-action pair so . ∎
Proof of Lemma 4.7.
Note that the first state-action pair in the subinterval, , might be unknown and that all state-action pairs that appear afterwards are known. Thus, we bound
The first summand is trivially bounded by (Lemma B.12). We now upper bound the remaining sum. Denote , and think of the interval as an infinite continuous stochastic process. Note that, conditioned on , is a martingale difference sequence w.r.t , where is the trajectory of the learner from the beginning of the interval up to and including time . This holds since, by conditioning on , is determined and is independent of the randomness generated during the interval. Note that is a stopping time with respect to which is bounded by . Therefore, applying Lemma B.15 below gives
(15)
We now proceed by bounding when occurs. Therefore,
(16)
(17)
(18)
where Eq. 18 is given as and are probability distributions over , is constant w.r.t , and .
We now bound each of the three terms above individually. Eq. 16 is a telescopic sum that is at most on (Lemma B.12). For Eq. 17, we use the Bellman equations for on the optimistic model defined by the transitions (Lemma B.11), thus it is at most (see text following Lemma 4.5). For Eq. 18, recall that all state-action pairs at times are known by definition of . Hence by Lemma B.14,
(by Jensen’s inequality)
and again by Jensen’s inequality and that the total cost throughout the interval is at most , we have on
(Jensen’s inequality)
Plugging these bounds back into Eq. 15 gets us
where the last inequality is by the elementary inequality . Rearranging gets us , and the lemma follows. ∎
Lemma B.15.
Let be a martingale difference sequence adapted to the filtration . Let . Then is a martingale, and in particular if is a stopping time such that almost surely, then .
Proof.
We first show that is a martingale. Indeed,
We would now like to show that using the optional stopping theorem. The latter holds since almost surely and ∎
B.2.7 Proof of Lemma 4.8
Lemma (restatement of Lemma 4.8).
With probability at least ,
Proof.
Recall the following definitions:
From Lemma 4.6 we have that
Moreover, by applying the Cauchy–Schwarz inequality twice, we get that
(Lemma 4.7)
We sum over all intervals to obtain
where the last inequality follows from Jensen’s inequality. We finish the proof using Lemma B.16 below. ∎
Lemma B.16.
With probability at least , the following holds for simultaneously.
Proof.
Define the infinite sequence of random variables: for which due to Lemma B.17 below. We apply Eq. 26 of Lemma D.4 to obtain with probability at least , for all simultaneously
Now, we bound the sum over by rewriting it as a sum over epochs:
where is the last epoch. Finally, from Lemma B.18 below we have that for every ,
We now plug in the resulting bound for and simplify the acquired expression by using . ∎
Lemma B.17.
For any interval ,
Proof.
Note that all state-action pairs (except the first one ) are known. Hence, for , . Therefore, since is decreasing and since and by assumption,
(by definition)
Lemma B.18.
For any sequence of integers with and , it holds that
Proof.
We use the inequality for every to obtain
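The elided inequality is presumably of the following standard telescoping form; for reference,
\[
\frac{1}{\sqrt{j}} \;\le\; \frac{2}{\sqrt{j} + \sqrt{j-1}} \;=\; 2\big(\sqrt{j} - \sqrt{j-1}\big)
\qquad\Longrightarrow\qquad
\sum_{j=1}^{n} \frac{1}{\sqrt{j}} \;\le\; 2\sqrt{n}.
\]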
B.2.8 Proof of Theorem 2.4
Theorem (restatement of Theorem 2.4).
Assume that 2 holds. With probability at least the regret of Algorithm 2 is bounded as follows:
Proof.
Let denote the cost of the learner after intervals. First, with probability at least , we have Lemmas 4.2, 4.8 and 4.5 via a union bound. Now, as hold for all intervals, we have for any number of intervals . Plugging in the bounds of Lemmas 4.4, 4.5 and 4.8 into Lemma 4.3, we have that for any number of intervals :
We now plug in the bounds on the number of intervals and the total time from Observation 4.1 into the bound above. First, we plug in the bound on the number of intervals. As long as the $K$ episodes have not elapsed, the number of completed episodes is at most $K$. This gives, after using the subadditivity of the square root to simplify the resulting expression,
From this, by solving for the total cost (using that $x \le a\sqrt{x} + b$ implies $x = O(a^2 + b)$ for $a, b \ge 0$), and simplifying the resulting expression using our assumptions on the magnitudes of $|S|$, $|A|$ and $K$, we get that
(19)
In particular, by simplifying the bound above, we obtain a bound on the total cost of the learner. Next we combine this with the fact, stated in Observation 4.1, that the total time is bounded in terms of the total cost. Isolating the total time and plugging the resulting bound back into Eq. 19 and simplifying gives us
Finally, we note that the bound above holds for any number of intervals as long as the $K$ episodes have not elapsed. As the instantaneous costs in the model are positive, this means that the learner must eventually finish the $K$ episodes, from which we derive the bound for $R_K$ claimed by the theorem. ∎
Appendix C Lower Bound
In this section we prove Theorem 2.7. At first glance, it is tempting to try and use the lower bound of Jaksch et al. (2010, Theorem 5) on the regret for learning average-reward MDPs, by reducing any problem instance from an average-reward MDP to an instance of SSP. However, it is unclear to us if such a reduction is possible, and if it is, how to perform it (even though a reduction in the reverse direction is fairly straightforward in the unit-cost case; Tarbouriech et al., 2019). We consequently prove the theorem here directly.
By Yao’s minimax principle, in order to derive a lower bound on the learner’s regret, it suffices to show a distribution over MDP instances that forces any deterministic learner to suffer a regret of $\Omega\big(B_\star \sqrt{|S| |A| K}\big)$ in expectation.
To simplify our arguments, let us first consider the following simpler problem before considering the problem in its full generality. Think of a simple MDP with two states: the initial state and a goal state. The action set contains a special action chosen uniformly at random a priori. Upon choosing the special action, the learner transitions to the goal state with a slightly larger probability than upon choosing any other action, and remains at the initial state with the remaining probability. (For ease of notation, and since there is only one state other than the goal state, we do not write this state as the origin state in the definition of the transition function.) The costs of all actions equal 1. Clearly, the optimal policy constantly plays the special action and therefore attains the minimal expected cost.
Fix any deterministic learning algorithm; we shall now quantify the regret of the learner during a single episode in terms of the number of times that it chooses the special action. Let denote the number of steps that the learner spends in the initial state during the episode, and let be the number of times the learner plays the special action there during the episode. Note that the former is also the total cost that the learning algorithm suffers during the episode. We have the following lemma.
Lemma C.1.
Proof.
Let us denote by and the sequences of states and actions observed by the learner during the episode. We have,
Rearranging using gives the Lemma’s statement. ∎
By Lemma C.1 the overall regret of the learner over episodes is: where and . In words, the regret of the learner is times the expected number of visits to in which the learner did not play .
In the remainder of the proof we lower bound in expectation and upper bound the expected value of . To upper bound the latter, we use standard techniques from lower bounds for multi-armed bandits (Auer et al., 2002) that bound the total variation distance between the distribution of the sequence of states traversed by the learner in the original MDP and that generated in a “uniform MDP” in which all actions are identical. However, we cannot apply this argument directly since it requires the relevant quantity to be bounded almost surely, yet here it depends on the total length of all episodes, which is unbounded in general. We fix this issue by looking only at the first steps (where the horizon is to be determined) and showing that the regret is large even in these steps.
Formally, we view the run of the episodes as a continuous process in which, when the learner reaches the goal state, we transfer it back to the initial state (at no cost) and let it restart from there. Furthermore, we cap the learning process to consist of exactly steps, as follows. If the episodes are completed before the steps have elapsed, the learner remains in the initial state (until completing the steps) without suffering any additional cost; otherwise, we stop the learner after the steps, before it completes its episodes. In this capped process, we denote the number of visits to the initial state by and the number of times the learner played the special action there by . We have
(20)
The number of visits to under this capping is lower bounded by the following lemma.
Lemma C.2.
For any deterministic learner, if then we have that
Proof.
If the capped learner finished its episodes then . Otherwise, it visits the goal state less than times and therefore . Hence Since , the lemma will follow if we show that with probability at least . We lower bound the probability that by the probability of staying at for steps and picking in the first steps. Indeed, using for , we get that ∎
We now introduce an additional distribution of the transitions, which we call the uniform model. It is identical to the model defined above, except that all actions have the same transition probabilities. We denote expectations over it accordingly. The following lemma uses standard lower bound techniques for multi-armed bandits (see, e.g., Jaksch et al., 2010, Theorem 13) to bound the difference in the expected number of plays of the special action when the learner plays in the original model compared to when it plays in the uniform one.
Lemma C.3.
For any deterministic learner we have that
Proof.
Fix any deterministic learner. Let us denote by the sequence of states observed by the learner up to time and including. Now, as and the fact that is a function of , and Pinsker’s inequality yields
(21)
Next, the chain rule of the KL divergence obtains
Observe that at any time, since the learning algorithm is deterministic, the learner chooses an action given the observed history regardless of whether that history was generated under the original model or under the uniform one. Thus, the corresponding KL-divergence term is zero unless the chosen action is the special one, and otherwise
(using for all )
Plugging the above back into Eq. 21 and using gives the lemma. ∎
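For reference, the two standard facts invoked here are Pinsker’s inequality and the chain rule for the KL divergence, which in this context read:
\[
\|p - q\|_1 \;\le\; \sqrt{2\,\mathrm{KL}(p \,\|\, q)},
\qquad
\mathrm{KL}\big(p(x_{1:t}) \,\|\, q(x_{1:t})\big) \;=\; \sum_{\tau=1}^{t} \mathbb{E}_{x_{1:\tau-1} \sim p}\Big[\mathrm{KL}\big(p(x_\tau \mid x_{1:\tau-1}) \,\|\, q(x_\tau \mid x_{1:\tau-1})\big)\Big].
\]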
In the following result, we combine the lemma above with standard techniques from lower bounds of multi-armed bandits (see Jaksch et al., 2010, Thm. 5 for example).
Theorem C.4.
Suppose that , and . For the problem described above we have that
Proof of Theorem C.4.
Note that, since under the uniform model the transition distributions are identical for all actions, we have that
(22)
Suppose that is sampled uniformly at random before the game starts. Denote the probability and expectation with respect to the distribution induced by a specific choice of by and respectively. Then for ,
(Eq. 20)
(Lemmas C.2 and C.3)
(Jensen’s inequality)
(Eq. 22)
The theorem follows from and by rearranging. ∎
Proof of Theorem 2.7.
Consider the following MDP. Let be the set of states disregarding . The initial state is sampled uniformly at random from . Each has its own special action . The transition distributions are defined and for any other action .
Note that for each such state, the learner is faced with a simple problem like the one described above, from which it cannot learn about the special actions of other states. Therefore, we can apply Theorem C.4 for each state separately and lower bound the learner’s expected regret by the sum of the regrets suffered at each state, which depends on the number of times each state is drawn as the initial state. Since the initial states are chosen uniformly at random, there are many states (a constant fraction) that are each chosen on the order of $K/|S|$ times. Summing the regret bounds of Theorem C.4 over only these states and choosing the horizon appropriately gives the sought-after bound.
Denote by the number of episodes that start in each state .
(23)
Taking expectation over the initial states and applying the Cauchy–Schwarz inequality gives
where we have used the expectation and variance formulas of the Binomial distribution. The lower bound is now given by applying the inequality above in Eq. 23 and choosing . ∎
Appendix D Concentration inequalities
Theorem D.1 (Anytime Azuma).
Let be a martingale difference sequence with respect to the filtration such that almost surely. Then with probability at least ,
Theorem D.2 (Weissman et al., 2003).
Let be a distribution over elements, and let be the empirical distribution defined by iid samples from . Then, with probability at least ,
Theorem D.3 (Anytime Bernstein).
Let be a sequence of i.i.d. random variables with expectation . Suppose that almost surely. Then with probability at least , the following holds for all simultaneously:
(24)
(25)
Proof.
Lemma D.4.
Let be a sequence of random variables with expectation adapted to the filtration . Suppose that almost surely. Then with probability at least , the following holds for all simultaneously:
(26)
Proof.
For all , we have
Hence, fix some , then
(by repeating the last argument inductively)
Therefore,
(Markov inequality)
Hence the above holds for all via a union bound which provides the lemma. ∎