Reinforcement Learning with LTL and $\omega$-Regular Objectives via Optimality-Preserving Translation to Average Rewards

Xuan-Bach Le$^{1*}$  Dominik Wagner$^{1*}$
Leon Witzman$^{1}$  Alexander Rabinovich$^{2}$  Luke Ong$^{1}$
$^{1}$NTU Singapore  $^{2}$Tel Aviv University
{bach.le,dominik.wagner,luke.ong}@ntu.edu.sg
[email protected]
  [email protected]
Abstract

Linear temporal logic (LTL) and, more generally, $\omega$-regular objectives are alternatives to the traditional discount-sum and average-reward objectives in reinforcement learning (RL), offering the advantage of greater comprehensibility and hence explainability. In this work, we study the relationship between these objectives. Our main result is that each RL problem for $\omega$-regular objectives can be reduced to a limit-average reward problem in an optimality-preserving fashion, via (finite-memory) reward machines. Furthermore, we demonstrate the efficacy of this approach by showing that optimal policies for limit-average problems can be found asymptotically by solving a sequence of discount-sum problems approximately. Consequently, we resolve an open problem: optimal policies for LTL and $\omega$-regular objectives can be learned asymptotically.

$^{*}$These authors contributed equally to this work.

1 Introduction

Reinforcement learning (RL) is a machine learning paradigm whereby an agent aims to accomplish a task in a generally unknown environment [36]. Traditionally, tasks are specified via a scalar reward signal obtained continuously through interactions with the environment. These rewards are aggregated over entire trajectories either through averaging or by summing the exponentially decayed rewards. However, in many applications, there are no reward signals that can naturally be extracted from the environment. Moreover, reward signals that are supplied by the user are prone to error in that the chosen low-level rewards often fail to accurately capture high-level objectives. Generally, policies derived from local reward-based specifications are hard to understand because it is difficult to express or explain their global intent.

As a remedy, it has been proposed to specify tasks using formulas in Linear Temporal Logic (LTL) [40, 29, 9, 37, 15, 33, 14] or, more generally, $\omega$-regular languages [29]. In this framework, the aim is to maximise the probability of satisfying a logical specification. LTL can precisely express a wide range of high-level behavioural properties such as liveness (infinitely often $P$), safety (always $P$), stability (eventually always $P$), and priority ($P$ then $Q$ then $T$).

Motivated by this, a growing body of literature studies learning algorithms for RL with LTL and $\omega$-regular objectives (e.g. [40, 15, 29, 7, 31, 20, 21, 16]). However, to the best of our knowledge, all of these approaches may fail to learn provably optimal policies without prior knowledge of a generally unknown parameter. Moreover, it is known that neither LTL nor (limit-)average reward objectives are PAC (probably approximately correct) learnable [3]. Consequently, approximately optimal policies can only possibly be found asymptotically, but not in bounded time. (Formally, for some $\epsilon,\delta>0$ it is impossible to learn $\epsilon$-approximately optimal policies with probability $1-\delta$ in finite time.)

In this work, we pursue a different strategy: rather than solving the RL problem directly, we study optimality-preserving translations [3] from $\omega$-regular objectives to more traditional rewards, in particular, limit-average rewards. This method offers a significant advantage: it enables the learning of optimal policies for $\omega$-regular objectives by solving a single more standard problem, for which we can leverage existing off-the-shelf algorithms (e.g. [25, 15, 29]). In this way, all future advances—in both theory and practice—for these much more widely studied problems carry over directly, whilst still enjoying significantly more explainable and comprehensible specifications. It is well known that such a translation from LTL to discounted rewards is impossible [3]. Intuitively, this is because the latter cannot capture infinite-horizon tasks such as reachability or safety [3, 41, 19]. Hence, we instead investigate translations to limit-average rewards in this paper.

Contributions

We study reinforcement learning of $\omega$-regular and LTL objectives in Markov decision processes (MDPs) with unknown transition probabilities, translations to limit-average reward objectives, and learning algorithms for the latter. In detail:

  1. We prove a negative result (Proposition 4): in general it is not possible to translate $\omega$-regular objectives to limit-average objectives in an optimality-preserving manner if rewards are memoryless (i.e., independent of previously performed actions, sometimes called history-free or Markovian).

  2. On the other hand, our main result (Theorem 12) resolves Open Problem 1 in [3]: such an optimality-preserving translation is possible if the reward assignment may use finite memory, as formalised by reward machines [23, 24].

  3. To underpin the efficacy of our reduction approach, we provide the first convergence proof (Theorem 16) of an RL algorithm (Algorithm 1) for average rewards. To the best of our knowledge (and as indicated by [13]), this is the first proof without assumptions on the induced Markov chains. In particular, the result applies to multichain MDPs with unknown transition probabilities, which our translation generally produces. Consequently, we also resolve Open Problem 4 of [3]: RL for $\omega$-regular and LTL objectives can be learned in the limit (Theorem 18).

Outline.

We start by reviewing the problem setup in Section 2. Motivated by the impossibility result for simple reward functions, we define reward machines (Section 3). In Section 4 we build intuition for the proof of our main result in Section 5. Thereafter, we demonstrate that RL with limit-average, $\omega$-regular and LTL objectives can be learned asymptotically (Section 6). Finally, we review related work and conclude in Section 7.

2 Background

Recall that a Markov Decision Process (MDP) is a tuple $\mathcal{M}=(S,A,s_{0},P)$ where $S$ is a finite set of states, $s_{0}\in S$ is the initial state, $A$ is the finite set of actions and $P:S\times A\times S\rightarrow[0,1]$ is the probability transition function such that $\sum_{s^{\prime}\in S}P(s,a,s^{\prime})=1$ for every $s\in S$ and $a\in A$. MDPs may be represented graphically; see e.g. Fig. 1(a). We let $\mathrm{Runs}_{\mathrm{fi}}(S,A)=S\times(A\times S)^{*}$ and $\mathrm{Runs}(S,A)=(S\times A)^{\omega}$ denote the set of finite runs and the set of infinite runs in $\mathcal{M}$, respectively.

A policy $\pi:\mathrm{Runs}_{\mathrm{fi}}(S,A)\rightarrow\mathcal{D}(A)$ maps finite runs to distributions over actions. We let $\Pi(S,A)$ denote the set of all such policies. A policy $\pi$ is memoryless if $\pi(s_{0}a_{0}\ldots s_{n})=\pi(s^{\prime}_{0}a^{\prime}_{0}\ldots s^{\prime}_{m})$ for all finite runs $s_{0}a_{0}\ldots s_{n}$ and $s^{\prime}_{0}a^{\prime}_{0}\ldots s^{\prime}_{m}$ such that $s_{n}=s^{\prime}_{m}$. For each MDP $\mathcal{M}$ and policy $\pi$, there is a natural induced probability measure $\mathcal{D}_{\pi}^{\mathcal{M}}$ on its runs.

The desirability of policies for a given MDP $\mathcal{M}$ can be expressed as a function $\mathcal{J}:\Pi(S,A)\to\mathbb{R}$. Much of the RL literature focuses on discounted-sum objectives $\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\gamma}}$ and limit-average reward objectives $\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\mathrm{avg}}}$, which lift a reward function $\mathcal{R}:S\times A\times S\to\mathbb{R}$ for single transitions to runs $\rho=s_{0}a_{0}s_{1}a_{1}\ldots$ as follows:

\[\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\gamma}}(\pi):=\mathbb{E}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}\left[\sum_{i=0}^{\infty}\gamma^{i}\cdot r_{i}\right]\qquad\qquad\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\mathrm{avg}}}(\pi):=\liminf_{t\rightarrow\infty}\mathbb{E}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}\left[\frac{1}{t}\cdot\sum_{i=0}^{t-1}r_{i}\right]\]

where $r_{i}=\mathcal{R}(s_{i},a_{i},s_{i+1})$ and $\gamma\in(0,1)$ is the discount factor.
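For illustration, the two aggregations can be approximated on a sampled finite prefix of a run. The following Python sketch (purely illustrative, not part of the formal development) computes the truncated discounted sum and the running average of a reward sequence:

def discounted_sum(rewards, gamma):
    # Truncated discounted aggregation of a finite reward prefix r_0, ..., r_{t-1}.
    return sum(gamma ** i * r for i, r in enumerate(rewards))

def running_average(rewards):
    # Average aggregation (1/t) * sum_{i<t} r_i of a finite reward prefix.
    return sum(rewards) / len(rewards) if rewards else 0.0

# Example: a run that eventually earns reward 1 on every step.
rewards = [0, 0, 1, 1, 1, 1, 1, 1]
print(discounted_sum(rewards, gamma=0.9))  # early rewards are weighted more heavily
print(running_average(rewards))            # approaches 1 as the prefix grows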

(a) An MDP where all transitions occur with probability $1$: action $a$ self-loops at $s_{0}$ and action $b$ moves between $s_{0}$ and $s_{1}$; $\lambda(s_{0},b,s_{1})=\{p\}$ and all other transitions are labelled with $\emptyset$.
(b) A DRA, where $F:=\{(\{q_{1}\},\emptyset)\}$, for the objective to visit the petrol station $p$ exactly once: reading $\{p\}$ moves $q_{0}\to q_{1}\to q_{2}$, reading $\emptyset$ leaves the state unchanged, and $q_{2}$ is a sink.
Figure 1: Examples of an MDP and a DRA.

$\omega$-Regular Objectives.

$\omega$-regular objectives (which subsume LTL objectives) are an alternative to these traditional objectives. Henceforth, we fix an alphabet $\mathcal{AP}$ and a label function $\lambda:S\times A\times S\rightarrow 2^{\mathcal{AP}}$ for transitions, where $2^{X}$ is the power set of a set $X$. Each run $\rho=s_{0}a_{0}s_{1}a_{1}s_{2}\ldots$ induces a sequence of labels $\lambda(\rho)=\lambda(s_{0},a_{0},s_{1})\lambda(s_{1},a_{1},s_{2})\ldots$. Thus, for a set $L\subseteq(2^{\mathcal{AP}})^{\omega}$ of "desirable" label sequences, we can consider the probability of a run's labels being in that set: $\mathbb{P}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}[\lambda(\rho)\in L]$.

Example 1.

For instance, an autonomous car may want to "visit a petrol station exactly once" to conserve resources (e.g. time or petrol). Consider the MDP in Fig. 1(a), where the state $s_{1}$ represents a petrol station. We let $\mathcal{AP}=\{p\}$ ($p$ for petrol), $\lambda(s_{0},b,s_{1})=\{p\}$, and all other transitions are labelled with $\emptyset$. The desirable label sequences are $L=\{\lambda_{1}\lambda_{2}\cdots\mid\lambda_{i}=\{p\}\text{ for exactly one }i\in\mathbb{N}\}$.

In this work, we focus on $L$ which are $\omega$-regular languages. It is well known that the $\omega$-regular languages are precisely the languages recognised by Deterministic Rabin Automata (DRAs) [26, 28]:

Definition 2.

A DRA is a tuple $\mathcal{A}=(Q,2^{\mathcal{AP}},q_{0},\delta,F)$ where $Q$ is a finite set of states, $2^{\mathcal{AP}}$ is the alphabet, $q_{0}\in Q$ is the initial state, $\delta:Q\times 2^{\mathcal{AP}}\rightarrow Q$ is the transition function, and $F=\{(A_{1},R_{1}),\ldots,(A_{n},R_{n})\}$, where $A_{i},R_{i}\subseteq Q$, is the accepting condition. Let $\rho\in(2^{\mathcal{AP}})^{\omega}$ be an infinite run and $\operatorname{InfS}(\rho)$ the set of states visited infinitely often by $\rho$. We say $\rho$ is accepted by $\mathcal{A}$ if there exists some $(A_{i},R_{i})\in F$ such that $\rho$ visits some state in $A_{i}$ infinitely often while visiting every state in $R_{i}$ only finitely often, i.e. $\operatorname{InfS}(\rho)\cap A_{i}\neq\emptyset$ and $\operatorname{InfS}(\rho)\cap R_{i}=\emptyset$.

For example, the objective in Example 1 may be represented by the DRA in Fig. 1(b).
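To make Definition 2 concrete, the following Python sketch encodes the DRA of Fig. 1(b) and evaluates the Rabin condition on a given set of states visited infinitely often. The class and its dictionary-based representation are illustrative choices, not part of the formal development:

from dataclasses import dataclass

@dataclass
class DRA:
    q0: str
    delta: dict   # (state, label) -> state
    F: list       # list of Rabin pairs (A_i, R_i), each a pair of state sets

    def accepts(self, inf_states):
        # Rabin condition: some (A_i, R_i) with InfS ∩ A_i ≠ ∅ and InfS ∩ R_i = ∅.
        return any(inf_states & A and not (inf_states & R) for A, R in self.F)

# DRA of Fig. 1(b): visit the petrol station p exactly once.
EMPTY, P = frozenset(), frozenset({"p"})
dra = DRA(
    q0="q0",
    delta={("q0", EMPTY): "q0", ("q0", P): "q1",
           ("q1", EMPTY): "q1", ("q1", P): "q2",
           ("q2", EMPTY): "q2", ("q2", P): "q2"},
    F=[(frozenset({"q1"}), frozenset())],
)
print(dra.accepts({"q1"}))  # True: q1 is visited infinitely often
print(dra.accepts({"q2"}))  # False: p was seen more than once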

Thus, the desirability of $\pi$ is the probability of $\pi$ generating an accepting sequence in the DRA $\mathcal{A}$:

\[\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi):=\mathbb{P}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}[\lambda(\rho)\text{ is accepted by the automaton }\mathcal{A}]\tag{1}\]

Remarks.

The class of $\omega$-regular languages subsumes the languages expressible in Linear Temporal Logic (LTL, see e.g. [5, Ch. 5]), a logical framework in which e.g. reachability (eventually $P$, $\Diamond P$), safety (always $P$, $\Box P$) and reach-avoid (eventually $P$ whilst avoiding $Q$, $(\neg Q)\,\mathcal{U}\,P$) properties can be expressed concisely and intuitively. The specification of our running Example 1 to visit the petrol station exactly once can be expressed as the LTL formula $(\neg p)\,\mathcal{U}\,(p\land\medcirc\,\Box\neg p)$, where $\medcirc Q$ denotes "$Q$ holds at the next step". Furthermore, our label function $\lambda$, which maps transitions to labels, is more general than other definitions (e.g. [40, 15, 29]), which instead map states to labels. As a result, we are able to express properties that involve actions, such as "reach the state $s$ while avoiding taking the action $a$".

Optimality-Preserving Specification Translations.

Rather than solving the problem of synthesising optimal policies for Eq. 1 directly, we are interested in reducing it to more traditional RL problems and applying off-the-shelf RL algorithms to find optimal policies. To achieve this, the reduction needs to be optimality preserving (this definition makes sense both for reward functions $\mathcal{R}_{(S,A,\lambda,\mathcal{A})}:S\times A\times S\to\mathbb{R}$ and for reward machines, which are introduced in the subsequent section):


Definition 3 ([3]).

An optimality-preserving specification translation from $\omega$-regular objectives to limit-average rewards is a computable function mapping each tuple $(S,A,\lambda,\mathcal{A})$ to $\mathcal{R}_{(S,A,\lambda,\mathcal{A})}$ s.t.

policies maximising $\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\mathrm{avg}}}$ also maximise $\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}$, where $\mathcal{R}:=\mathcal{R}_{(S,A,\lambda,\mathcal{A})}$,

for every MDP $\mathcal{M}=(S,A,s_{0},P)$, label function $\lambda:S\times A\times S\to 2^{\mathcal{AP}}$ and DRA $\mathcal{A}$.

We stress that since the probability transition function $P$ is generally not known, the specification translation may not depend on it.

3 Negative Result and Reward Machines

Reward functions emit rewards purely based on the transition being taken without being able to take the past into account, whilst DRAs have finite memory. Therefore, there cannot generally be optimality-preserving translations from ω\omega-regular objectives to limit average rewards provided by reward functions:

Proposition 4.

There is an MDP $\mathcal{M}$ and an $\omega$-regular language $L$ for which it is impossible to find a reward function $\mathcal{R}:S\times A\times S\rightarrow\mathbb{R}$ such that every $\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\mathrm{avg}}}$-optimal policy of $\mathcal{M}$ also maximises the probability of membership in $L$.

Remarkably, this rules out optimality-preserving specification translations even if transition probabilities are fully known. (In Appendix A we show another negative result, Proposition 19: even for a strict subset of $\omega$-regular specifications such translations are impossible.)

Proof.

Consider the deterministic MDP in Fig. 1(a) and the objective of Example 1 "to visit $s_{1}$ exactly once", expressed by the DRA $\mathcal{A}$ in Fig. 1(b). Assume towards a contradiction that there exists a reward function $\mathcal{R}:S\times A\times S\to\mathbb{R}$ such that optimal policies w.r.t. $\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\mathrm{avg}}}$ maximise acceptance by $\mathcal{A}$. Note that every policy $\pi^{*}$ maximising acceptance by the DRA induces the run $s_{0}(as_{0})^{n}bs_{1}bs_{0}(as_{0})^{\omega}$ for some $n\in\mathbb{N}$, and $\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi^{*})=1$. Thus, its limit-average reward is $\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\mathrm{avg}}}(\pi^{*})=\mathcal{R}(s_{0},a,s_{0})$. Now, consider the policy $\pi$ always selecting action $a$ with probability $1$. As the run induced by $\pi$ is $s_{0}(as_{0})^{\omega}$, we deduce that $\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi)=0$ and $\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\mathrm{avg}}}(\pi)=\mathcal{R}(s_{0},a,s_{0})=\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\mathrm{avg}}}(\pi^{*})$, which is a contradiction since $\pi$ is not $\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}$-optimal. ∎

Since simple reward functions lack the expressiveness to capture $\omega$-regular objectives, we employ a generalisation, reward machines [23, 24], whereby rewards may also depend on an internal state:

Definition 5.

A reward machine (RM) is a tuple $\mathcal{R}=(U,u_{0},\delta_{u},\delta_{r})$ where $U$ is a finite set of states, $u_{0}\in U$ is the initial state, $\delta_{r}:U\times(S\times A\times S)\rightarrow\mathbb{R}$ is the reward function, and $\delta_{u}:U\times(S\times A\times S)\rightarrow U$ is the update function.

Intuitively, an RM $\mathcal{R}$ uses the current transition to update its state through $\delta_{u}$ and assigns rewards through $\delta_{r}$. For example, Fig. 2(a) depicts a reward machine for the MDP of Fig. 1(a), whose states count the number of visits to $s_{1}$ (0 times, once, more than once).

Let $\rho=s_{0}a_{0}s_{1}\cdots$ be an infinite run. Since $\delta_{u}$ is deterministic, $\rho$ induces a sequence $u_{0}u_{1}\ldots$ of states of $\mathcal{R}$, where $e_{i}=(s_{i},a_{i},s_{i+1})$ and $u_{i+1}=\delta_{u}(u_{i},e_{i})$. The limit-average reward of a policy $\pi$ is defined as:

\[\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\mathrm{avg}}}(\pi):=\liminf_{t\rightarrow\infty}\mathbb{E}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}\left[\frac{1}{t}\sum_{i=0}^{t-1}\delta_{r}(u_{i},e_{i})\right]\]

Observe that limit-average optimal policies $\pi^{*}$ for the MDP in Fig. 1(a) and the RM in Fig. 2(a) eventually select action $b$ exactly once in state $s_{0}$ to achieve $\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\mathrm{avg}}}(\pi^{*})=1$.
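For illustration, Definition 5 and the reward machine of Fig. 2(a) can be transcribed into a few lines of Python; the encoding of states and transitions as strings and tuples is an assumption made purely for this sketch:

class RewardMachine:
    def __init__(self, u0, delta_u, delta_r):
        self.u = u0                          # current RM state
        self.delta_u, self.delta_r = delta_u, delta_r

    def step(self, transition):              # transition = (s, a, s')
        r = self.delta_r(self.u, transition)
        self.u = self.delta_u(self.u, transition)
        return r

# Fig. 2(a): the RM states count the visits to s1 (0 times, once, more than once).
def delta_u(u, t):
    if t == ("s0", "b", "s1"):               # entering the petrol station
        return {"u0": "u1", "u1": "u2"}.get(u, "u2")
    return u

def delta_r(u, t):
    return 1 if u == "u1" and t == ("s0", "a", "s0") else 0

rm = RewardMachine("u0", delta_u, delta_r)
run = [("s0", "b", "s1"), ("s1", "b", "s0"), ("s0", "a", "s0"), ("s0", "a", "s0")]
print([rm.step(t) for t in run])             # [0, 0, 1, 1]: the average tends to 1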

(a) A reward machine for the objective of visiting the petrol station exactly once (rewards are given after "/"): the states $u_{0},u_{1},u_{2}$ count the visits to $s_{1}$; the transition $(s_{0},a,s_{0})$ receives reward $1$ in $u_{1}$ and all other rewards are $0$.
(b) The product MDP for Fig. 1, where all transitions have probability $1$ and $F_{\mathcal{M}}:=\{(\{(s_{0},q_{1}),(s_{1},q_{1})\},\emptyset)\}$.
Figure 2: A reward machine and the product MDP for the running Example 1.

In the following two sections, we present a general translation from $\omega$-regular languages to limit-average reward machines, and we show that our translation is optimality-preserving (Theorem 12).

Remarks.

Our definition of RM is more general than the one presented in [23, 24], where $\delta^{\prime}_{u}:U\times 2^{\mathcal{AP}}\rightarrow U$ and $\delta^{\prime}_{r}:U\rightarrow[S\times A\times S\rightarrow\mathbb{R}]$. Note that $(\delta^{\prime}_{u},\delta^{\prime}_{r})$ can be reduced to $(\delta_{u},\delta_{r})$ by expanding the state space of the RM to include the previous state and utilising the inverse label function $\lambda^{-1}$. It is worth pointing out that Theorem 12 does not contradict a negative result in [3] regarding the non-existence of an optimality-preserving translation from LTL constraints to abstract limit-average reward machines (where only the label of a transition is provided to $\delta_{u}$ and $\delta_{r}$).

4 Warm-Up: Transitions with Positive Probability are Known

To help the reader gain intuition about our construction, we first explore the situation where the support $\{(s,a,s^{\prime})\in S\times A\times S\mid P(s,a,s^{\prime})>0\}$ of the MDP's transition function is known. Crucially, we do not assume that the magnitudes of these (non-zero) probabilities are known. Subsequently, in Section 5, we fully eliminate this assumption.

This assumption allows us to draw connections between our problem and a familiar scenario in probabilistic model checking [5, Ch. 10], where the acceptance problem for $\omega$-regular objectives can be transformed into a reachability problem. Intuitively, our reward machine monitors the state of the DRA and provides reward $1$ if the MDP and the DRA are in certain "good" states (and $0$ otherwise).

For the rest of this section, we fix an MDP without transition function $(S,A,s_{0})$, a set of possible transitions $E\subseteq S\times A\times S$, a label function $\lambda:S\times A\times S\to 2^{\mathcal{AP}}$ and a DRA $\mathcal{A}=(Q,2^{\mathcal{AP}},q_{0},\delta,F)$. Our aim is to find a reward machine $\mathcal{R}$ such that for every transition function $P$ compatible with $E$ (formally: $E=\{(s,a,s^{\prime})\mid P(s,a,s^{\prime})>0\}$), optimal policies for limit-average rewards are also optimal for the acceptance probability of the DRA $\mathcal{A}$.

4.1 Product MDP and End Components

First, we form the product MDP $\mathcal{M}\otimes\mathcal{A}$ (e.g. [40, 15]), which synchronises the dynamics of the MDP $\mathcal{M}$ with the DRA $\mathcal{A}$. Formally, $\mathcal{M}\otimes\mathcal{A}=(V,A,v_{0},\Delta,F_{\mathcal{M}})$ where $V=S\times Q$ is the set of states, $A$ is the set of actions and $v_{0}=(s_{0},q_{0})$ is the initial state. The transition probability function $\Delta:V\times A\times V\rightarrow[0,1]$ satisfies $\Delta(v,a,v^{\prime})=P(s,a,s^{\prime})$ whenever $v=(s,q)$, $v^{\prime}=(s^{\prime},q^{\prime})$ and $\delta(q,\lambda(s,a,s^{\prime}))=q^{\prime}$ (and $\Delta(v,a,v^{\prime})=0$ otherwise). The accepting condition is $F_{\mathcal{M}}=\{(A_{1}^{\prime},R^{\prime}_{1}),(A_{2}^{\prime},R^{\prime}_{2}),\ldots\}$ where $A^{\prime}_{i}=S\times A_{i}$, $R^{\prime}_{i}=S\times R_{i}$ and $(A_{i},R_{i})\in F$. A run $\rho=(s_{0},q_{0})a_{0}\cdots$ is accepted by $\mathcal{M}\otimes\mathcal{A}$ if there exists some $(A^{\prime}_{i},R^{\prime}_{i})\in F_{\mathcal{M}}$ such that $\operatorname{InfV}(\rho)\cap A^{\prime}_{i}\neq\emptyset$ and $\operatorname{InfV}(\rho)\cap R^{\prime}_{i}=\emptyset$, where $\operatorname{InfV}(\rho)$ is the set of states $(s,q)$ of the product MDP visited infinitely often by $\rho$.
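Since the product construction only needs the support of $P$, the label function and the DRA transition function, it can be sketched in Python as follows (the dictionary-based encodings are illustrative assumptions):

def product_support(E, labels, dra_delta):
    # Given the MDP support E = {(s, a, s')}, the labels of these transitions and
    # the DRA transition function, return the support of the product MDP M ⊗ A.
    product_edges = set()
    dra_states = {q for (q, _) in dra_delta}
    for (s, a, s_next) in E:
        for q in dra_states:
            q_next = dra_delta[(q, labels[(s, a, s_next)])]
            product_edges.add(((s, q), a, (s_next, q_next)))
    return product_edges

# Running example: the MDP of Fig. 1(a) and the DRA of Fig. 1(b).
EMPTY, P = frozenset(), frozenset({"p"})
E = {("s0", "a", "s0"), ("s0", "b", "s1"), ("s1", "b", "s0")}
labels = {("s0", "a", "s0"): EMPTY, ("s0", "b", "s1"): P, ("s1", "b", "s0"): EMPTY}
dra_delta = {("q0", EMPTY): "q0", ("q0", P): "q1", ("q1", EMPTY): "q1",
             ("q1", P): "q2", ("q2", EMPTY): "q2", ("q2", P): "q2"}
print(len(product_support(E, labels, dra_delta)))  # 9 product transitions, cf. Fig. 2(b)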

Note that product MDPs have characteristics of both MDPs and DRAs which neither possesses in isolation: transitions are generally probabilistic and there is a notion of acceptance of runs. For example, the product MDP for Fig. 1 is shown in Fig. 2(b). Due to the deterministic nature of the DRA $\mathcal{A}$, every run $\rho$ in $\mathcal{M}$ gives rise to a unique run $\rho^{\otimes}$ in $\mathcal{M}\otimes\mathcal{A}$. Crucially, for every policy $\pi$,

\[\mathbb{P}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}[\rho\text{ is accepted by }\mathcal{A}]=\mathbb{P}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}[\rho^{\otimes}\text{ is accepted by }\mathcal{M}\otimes\mathcal{A}]\tag{2}\]

We make use of the well-known almost-sure characterisation of accepting runs via the notion of accepting end components:

Definition 6.

An end component (EC) of $\mathcal{M}\otimes\mathcal{A}=(V,A,v_{0},\Delta,F_{\mathcal{M}})$ is a pair $(T,\operatorname{Act})$ where $T\subseteq V$ and $\operatorname{Act}:T\rightarrow 2^{A}$ satisfy the following conditions:

  1. for every $v\in T$ and $a\in\operatorname{Act}(v)$, we have $\sum_{v^{\prime}\in T}\Delta(v,a,v^{\prime})=1$, and

  2. the graph $(T,\rightarrow_{\operatorname{Act}})$ is strongly connected, where $v\rightarrow_{\operatorname{Act}}v^{\prime}$ iff $\Delta(v,a,v^{\prime})>0$ for some $a\in\operatorname{Act}(v)$.

$(T,\operatorname{Act})$ is an accepting EC (AEC) if $T\cap A^{\prime}_{i}\neq\emptyset$ and $T\cap R^{\prime}_{i}=\emptyset$ for some $(A^{\prime}_{i},R^{\prime}_{i})\in F_{\mathcal{M}}$.

Intuitively, an EC is a strongly connected sub-MDP. For instance, the product MDP in Fig. 2(b) has five end components: $(\{(s_{0},q_{0})\},(s_{0},q_{0})\mapsto\{a\})$, $(\{(s_{0},q_{1})\},(s_{0},q_{1})\mapsto\{a\})$, $(\{(s_{0},q_{2})\},(s_{0},q_{2})\mapsto\{a\})$, $(\{(s_{0},q_{2})\},(s_{0},q_{2})\mapsto\{b\})$ and $(\{(s_{0},q_{2})\},(s_{0},q_{2})\mapsto\{a,b\})$. Of these, $(\{(s_{0},q_{1})\},(s_{0},q_{1})\mapsto\{a\})$ is the only accepting end component.

It turns out that, almost surely, a run is accepted iff it enters an accepting end component and never leaves it [2]. Therefore, a natural idea for a reward machine is to use its state to keep track of the state $q\in Q$ the DRA is in and give reward 1 to a transition $(s,a,s^{\prime})$ if $(s,q)$ is in some AEC (and 0 otherwise). Unfortunately, this approach falls short since an AEC may contain non-accepting ECs, thus assigning maximal reward to sub-optimal policies. (To illustrate this point, consider the product MDP $(\{s_{0},s_{1}\},\{a,b\},s_{0},P,F)$ where $P(s_{0},b,s_{0})=P(s_{0},a,s_{1})=P(s_{1},a,s_{0})=1$ and $F=\{(\{s_{1}\},\emptyset)\}$, i.e. the objective is to visit $s_{1}$ infinitely often: the policy always selecting $b$ in $s_{0}$ stays inside the AEC and receives reward 1 forever, yet never visits $s_{1}$.) As a remedy, we introduce a notion of minimal AEC, and ensure that only runs eventually committing to one such minimal AEC get a limit-average reward of 1.

Definition 7.

An AEC $(T,\operatorname{Act})$ is an accepting simple EC (ASEC) if $|\operatorname{Act}(v)|=1$ for every $v\in T$.

Let $\mathcal{C}_{1}=(T_{1},\operatorname{Act}_{1}),\ldots,\mathcal{C}_{n}=(T_{n},\operatorname{Act}_{n})$ be a collection of ASECs covering all states occurring in ASECs, i.e. if $(s,q)$ is in some ASEC then $(s,q)\in T_{1}\cup\cdots\cup T_{n}$. In particular, $n\leq|S\times Q|$ is sufficient.
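Whether a given pair $(T,\operatorname{Act})$ is an ASEC can be checked directly from the support of $\Delta$ and the Rabin pairs of $F_{\mathcal{M}}$. The following Python sketch performs this check; the set-based encodings of the support and of the Rabin pairs are assumptions made for illustration:

def is_asec(T, Act, E, rabin_pairs):
    # (T, Act): candidate pair; E: support {((s,q), a, (s',q'))} of the product MDP;
    # rabin_pairs: the pairs (A'_i, R'_i) of the accepting condition F_M.
    T = set(T)
    if any(len(Act[v]) != 1 for v in T):                 # simple: one action per state
        return False
    succ = {v: {w for (u, a, w) in E if u == v and a in Act[v]} for v in T}
    if any(not succ[v] or not succ[v] <= T for v in T):  # closed: successors stay in T
        return False
    def reachable(v):                                    # states reachable from v within T
        seen, stack = {v}, [v]
        while stack:
            for w in succ[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    if any(reachable(v) != T for v in T):                # strongly connected
        return False
    return any(T & A and not (T & R) for A, R in rabin_pairs)  # accepting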

We can prove that every AEC contains an ASEC (see Lemma 20 in Appendix B). Consequently,

Lemma 8.

Almost surely, if $\rho$ is accepted by $\mathcal{A}$ then $\rho^{\otimes}$ reaches a state in some ASEC $\mathcal{C}_{i}$ of $\mathcal{M}\otimes\mathcal{A}$.

4.2 Reward Machine and Correctness

Next, to ensure that runs eventually commit to one such ASEC, we introduce the following notational shorthand: for $(s,q)\in T_{1}\cup\cdots\cup T_{n}$, let $\mathcal{C}_{(s,q)}=(T_{(s,q)},\operatorname{Act}_{(s,q)})$ be the $\mathcal{C}_{i}$ with minimal index $i$ containing $(s,q)$, i.e. $\mathcal{C}_{(s,q)}:=\mathcal{C}_{\min\{1\leq i\leq n\mid(s,q)\in T_{i}\}}$.

Intuitively, we give a reward of 1 if $(s,q)$ is in one of the $\mathcal{C}_{1},\ldots,\mathcal{C}_{n}$. However, once an action is performed which deviates from $\operatorname{Act}_{(s,q)}$, no rewards are given thereafter, resulting in a limit-average reward of 0.

A state of the reward machine is either some $q\in Q$, keeping track of the state of the DRA, or $\bot$, a sink state signifying that, in a state of $\mathcal{C}_{1},\ldots,\mathcal{C}_{n}$, we have previously deviated from $\operatorname{Act}_{(s,q)}$.

Finally, we are ready to formally define the reward machine $\mathcal{R}=\mathcal{R}_{(S,A,\lambda,\mathcal{A})}$ exhibiting our specification translation as $(Q\cup\{\bot\},q_{0},\delta_{u},\delta_{r})$, where

\[\delta_{u}(u,(s,a,s^{\prime})):=\begin{cases}\bot&\text{if }u=\bot\text{ or }\big((s,u)\in T_{1}\cup\cdots\cup T_{n}\text{ and }a\notin\operatorname{Act}_{(s,u)}(s,u)\big)\\ \delta(u,\lambda(s,a,s^{\prime}))&\text{otherwise}\end{cases}\]
\[\delta_{r}(u,(s,a,s^{\prime})):=\begin{cases}1&\text{if }u\neq\bot\text{ and }(s,u)\in T_{1}\cup\cdots\cup T_{n}\\ 0&\text{otherwise}\end{cases}\]

For our running example, this construction essentially yields the reward machine in Fig. 2(a) (with some inconsequential modifications cf. Fig. 4 in Appendix B).
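For illustration, the two functions above can be transcribed directly into Python; here asec_of stands for an assumed lookup that returns the committed action assignment $\operatorname{Act}_{(s,u)}$ if $(s,u)$ lies in some $T_{i}$ (and None otherwise):

BOT = "⊥"  # sink state of the reward machine

def delta_u(u, transition, dra_delta, labels, asec_of):
    s, a, s_next = transition
    if u == BOT:
        return BOT
    committed = asec_of(s, u)                    # Act_{(s,u)} if (s,u) is covered
    if committed is not None and a not in committed[(s, u)]:
        return BOT                               # deviated from the committed ASEC
    return dra_delta[(u, labels[transition])]    # otherwise keep tracking the DRA

def delta_r(u, transition, asec_of):
    s, _, _ = transition
    return 1 if u != BOT and asec_of(s, u) is not None else 0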

Theorem 9.

For all transition probability functions $P$ with support $E$, policies maximising the limit-average reward w.r.t. $\mathcal{R}$ also maximise the acceptance probability of the DRA $\mathcal{A}$.

This result follows immediately from the following (the full proof is presented in Appendix B):

Lemma 10.

Let $P$ be a probability transition function with support $E$ and let $\mathcal{M}:=(S,A,s_{0},P)$.

  1. For every policy $\pi$, $\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\mathrm{avg}}}(\pi)\leq\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi)$.

  2. For every policy $\pi$, there exists some policy $\pi^{\prime}$ satisfying $\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi)\leq\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\mathrm{avg}}}(\pi^{\prime})$.

Proof sketch.

1. By construction, every run receiving a limit-average reward of 1 must have entered some ASEC $\mathcal{C}_{i}$ and never left it. Furthermore, almost surely all its states are visited infinitely often, and the run is then accepted by the definition of accepting ECs.

2. By Lemma 8, almost surely, a run is only accepted if it enters some $\mathcal{C}_{i}$. We set $\pi^{\prime}$ to be the policy agreeing with $\pi$ until reaching one of the $\mathcal{C}_{1},\ldots,\mathcal{C}_{n}$ and thereafter following the action $\operatorname{Act}_{(s_{t},q_{t})}(s_{t},q_{t})$, where $q_{t}$ is the state of the DRA at step $t$, yielding a guaranteed limit-average reward of 1 for the run by construction. ∎


Remark 11.

Our construction considers a collection of ASECs covering all states occurring in ASECs. Whilst it does not require listing all possible ASECs, only (up to) one ASEC per state, it is unclear whether such a collection can be obtained in polynomial time. In Section B.1, we present an alternative (yet more complicated) construction which has polynomial time complexity.

5 Main Result

In this section, we generalise the approach of the preceding section to prove our main result:

Theorem 12.

There exists an optimality-preserving translation from $\omega$-regular languages to limit-average reward machines.

Again, we fix an MDP without transition function $(S,A,s_{0})$, a label function $\lambda:S\times A\times S\to 2^{\mathcal{AP}}$ and a DRA $\mathcal{A}=(Q,2^{\mathcal{AP}},q_{0},\delta,F)$. Note that the ASECs of a product MDP are uniquely determined by its non-zero probability transitions. Thus, for each set of transitions $E\subseteq(S\times Q)\times A\times(S\times Q)$, we let $\mathcal{C}^{E}_{1}=(T^{E}_{1},\operatorname{Act}^{E}_{1}),\ldots,\mathcal{C}^{E}_{n}=(T^{E}_{n},\operatorname{Act}^{E}_{n})$ denote a collection of ASECs covering all states occurring in ASECs w.r.t. the product MDPs in which $E$ is the set of non-zero probability transitions. (To achieve the same number $n$ of ASECs for every $E$ we can add duplicates; if there are no ASECs we set $T^{E}_{i}:=\emptyset$.) Then, for each set $E$ and state $(s,q)\in T^{E}_{1}\cup\cdots\cup T^{E}_{n}$, we let $\mathcal{C}_{(s,q)}^{E}=(T_{(s,q)}^{E},\operatorname{Act}_{(s,q)}^{E})$ be the ASEC $\mathcal{C}_{i}^{E}$ containing $(s,q)$ for which the index $i$ is minimal.

Our reward machine $\mathcal{R}=\mathcal{R}_{(S,A,\lambda,\mathcal{A})}$ extends the ideas from the preceding section. Importantly, we keep track of the set of transitions $E$ taken so far and assign rewards according to our current knowledge about the graph of the product MDP. Therefore, we propose employing states of the form $(q,f,E)$, where $q\in Q$ keeps track of the state of the DRA, $f\in\{\top,\bot\}$ is a status flag and $E\subseteq(S\times Q)\times A\times(S\times Q)$ memorises the transitions of the product MDP encountered thus far.

Intuitively, we set the flag to $\bot$ if we are in MDP state $s$, $(s,q)$ is in one of the $\mathcal{C}_{1}^{E},\ldots,\mathcal{C}_{n}^{E}$ and the chosen action deviates from $\operatorname{Act}_{(s,q)}^{E}(s,q)$. We can recover from $\bot$ by discovering new transitions. Besides, we give reward 1 if $f=\top$ and $(s,q)$ is in one of the $\mathcal{C}_{1}^{E},\ldots,\mathcal{C}_{n}^{E}$ (and 0 otherwise).

The status flag is required since discovering new transitions changes the structure of the (accepting simple) end components. Hence, unlike in the preceding section, a single sink state does not suffice.

The initial state of our reward machine is $u_{0}:=(q_{0},\top,\emptyset)$ and we formally define the update and reward functions as follows:

\[\delta_{u}((q,f,E),(s,a,s^{\prime})):=\begin{cases}(q^{\prime},\bot,E)&\text{if }f=\bot\text{ and }e\in E\\ (q^{\prime},\bot,E)&\text{if }f=\top,\ e\in E,\ (s,q)\in T_{1}^{E}\cup\cdots\cup T_{n}^{E}\text{ and }a\notin\operatorname{Act}_{(s,q)}^{E}(s,q)\\ (q^{\prime},\top,E\cup\{e\})&\text{otherwise}\end{cases}\]
\[\delta_{r}((q,f,E),(s,a,s^{\prime})):=\begin{cases}1&\text{if }f=\top\text{ and }(s,q)\in T_{1}^{E}\cup\cdots\cup T_{n}^{E}\\ 0&\text{otherwise}\end{cases}\]

where $q^{\prime}:=\delta(q,\lambda(s,a,s^{\prime}))$ and $e:=((s,q),a,(s^{\prime},q^{\prime}))$.
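The update and reward functions above can be transcribed as follows; asecs_for stands for an assumed subroutine that recomputes the covering collection $\mathcal{C}^{E}_{1},\ldots,\mathcal{C}^{E}_{n}$ from the currently known transitions $E$ and returns the committed action set $\operatorname{Act}^{E}_{(s,q)}(s,q)$ (or None if $(s,q)$ is not covered):

TOP, BOT = "⊤", "⊥"

def rm_step(state, transition, dra_delta, labels, asecs_for):
    # One step of the reward machine of this section; state = (q, f, E),
    # where E is the set of product transitions encountered so far.
    q, f, E = state
    s, a, s_next = transition
    q_next = dra_delta[(q, labels[transition])]
    e = ((s, q), a, (s_next, q_next))

    committed = asecs_for(E, (s, q))             # Act^E_{(s,q)}(s,q), or None
    reward = 1 if f == TOP and committed is not None else 0

    if f == BOT and e in E:
        new_state = (q_next, BOT, E)
    elif f == TOP and e in E and committed is not None and a not in committed:
        new_state = (q_next, BOT, E)             # deviated from the committed ASEC
    else:
        new_state = (q_next, TOP, E | {e})       # a newly discovered transition resets the flag
    return new_state, reward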

Example 13.

For our running example (see Example 1), initially no transitions are known (hence there are no known ASECs). Therefore, all transitions receive reward 0. Once action $a$ has been performed in state $s_{0}$ of the MDP $\mathcal{M}$ whilst the reward machine $\mathcal{R}$ is in a state $(q_{1},f,E)$, we have discovered the ASEC $(\{(s_{0},q_{1})\},(s_{0},q_{1})\mapsto\{a\})$ and a reward of 1 is given henceforth, unless action $b$ is eventually selected. In that case, we leave the ASEC and we will not discover further ASECs since there is only one: it is not possible to return to state $q_{1}$ of the DRA, and henceforth only reward 0 is obtained.

Theorem 12 is proven by demonstrating an extension of Lemma 10 (see Appendix C):

Lemma 14.

Suppose $\mathcal{M}=(S,A,s_{0},P)$ is an arbitrary MDP.

  1. For every policy $\pi$, $\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\mathrm{avg}}}(\pi)\leq\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi)$.

  2. For every policy $\pi$, there exists some policy $\pi^{\prime}$ satisfying $\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi)\leq\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\mathrm{avg}}}(\pi^{\prime})$.

Note that Lemma 14 immediately shows that the reduction is not only optimality preserving (Theorem 12) but also robust: every $\epsilon$-approximately limit-average optimal policy is also $\epsilon$-approximately optimal w.r.t. $\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}$. This observation is important because exactly optimal policies for the limit-average problem may be hard to find.

Intuitively, for part 1 of Lemma 14 we note: if an average reward of 1 is obtained for a run, the reward machine believes, based on its partial observation of the product MDP, that the run ends up in an ASEC. Almost surely, we eventually discover all possible transitions involving the same state-action pairs as this ASEC, and therefore it must also be an ASEC w.r.t. the true, unknown product MDP. For part 2, we modify the policy $\pi$ as in Lemma 10 by selecting actions $\operatorname{Act}(s_{t},q_{t})$ once an ASEC $\mathcal{C}=(T,\operatorname{Act})$ w.r.t. the true, unknown product MDP has been entered. (NB the modified policy depends on the true, unknown support of the probability transition function; we only claim the existence of such a policy.)

6 Convergence for Limit-Average, $\omega$-Regular and LTL Objectives

Thanks to the described translation, advances (in both theory and practice) in the study of RL with average rewards carry over to RL with $\omega$-regular and LTL objectives. In this section, we show that it is possible to learn optimal policies for limit-average rewards in the limit. Hence, we resolve an open problem [3]: RL with $\omega$-regular and LTL objectives can also be learned in the limit.

We start with the case of simple reward functions $\mathcal{R}:S\times A\times S\to\mathbb{R}$. Recently, [18] have shown that discount-optimal policies for a sufficiently high discount factor $\overline{\gamma}\in[0,1)$ are also limit-average optimal. This is not enough to establish Theorem 16 since $\overline{\gamma}$ is generally not known, and in finite time we might only obtain approximately limit-average optimal policies.

Our approach is to reduce RL with average rewards to a sequence of discount-sum problems with increasingly high discount factors, which are solved with increasingly high accuracy. Our crucial insight is that eventually the approximately optimal solutions to the discounted problems are also limit-average optimal (see Appendix D for a proof):

Lemma 15.

Suppose $\gamma_{k}\nearrow 1$, $\epsilon_{k}\searrow 0$, and each $\pi_{k}$ is a memoryless policy. Let $K$ be the set of $k\in\mathbb{N}$ satisfying $\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\gamma_{k}}}(\pi_{k})\geq\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\gamma_{k}}}(\pi)-\epsilon_{k}$ for all memoryless policies $\pi$. Then there exists $k_{0}$ such that for all $k\in K$ with $k\geq k_{0}$, $\pi_{k}$ is limit-average optimal.

Our proof harnesses yet another notion of optimality: a policy $\pi$ is Blackwell optimal (cf. [6] and [22, Sec. 8.1]) if there exists $\overline{\gamma}\in(0,1)$ such that $\pi$ is $\gamma$-discount optimal for all $\overline{\gamma}\leq\gamma<1$. It is well known that memoryless Blackwell optimal strategies always exist [6, 18] and that they are also limit-average optimal [22, 18].

Thanks to the PAC (probably approximately correct) learnability of RL with discounted rewards [25, 35], there exists an algorithm Discounted which receives as inputs a simulator for $\mathcal{M}$ and $\mathcal{R}$, as well as $\gamma$, $\epsilon$ and $\delta$, and with probability $1-\delta$ returns an $\epsilon$-optimal memoryless policy for discount factor $\gamma$. In view of Lemma 15, our approach is to run the PAC algorithm for discount-sum RL with increasingly large discount factors $\gamma$ and increasingly small $\delta$ and $\epsilon$ (Algorithm 1).

Algorithm 1 RL for limit-average rewards
Input: simulator for $\mathcal{M}$, $\mathcal{R}$
for $k\in\mathbb{N}$ do
     $\pi_{k}\leftarrow\texttt{Discounted}(\mathcal{M},\mathcal{R},\,\gamma_{k}=1-1/k,\,\epsilon_{k}=1/k,\,\delta_{k}=1/k^{2})$
end for
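A Python rendering of this loop is sketched below; discounted stands for any off-the-shelf PAC algorithm for discounted rewards (e.g. [25, 35]) and is not implemented here, and the discount schedule is shifted by one index so that $\gamma_{k}\in(0,1)$ from the first iteration:

import itertools

def limit_average_rl(simulator, reward_machine, discounted):
    # Algorithm 1 (sketch): call a PAC discounted-reward solver with
    # gamma_k -> 1 and epsilon_k, delta_k -> 0, yielding the policies pi_k.
    for k in itertools.count(1):
        gamma_k = 1 - 1 / (k + 1)          # shifted so that gamma_k is in (0,1) for k = 1
        eps_k, delta_k = 1 / k, 1 / k ** 2
        yield discounted(simulator, reward_machine, gamma_k, eps_k, delta_k)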
Theorem 16.

RL with average reward functions can be learned in the limit by Algorithm 1: almost surely there exists $k_{0}\in\mathbb{N}$ such that $\pi_{k}$ is limit-average optimal for all $k\geq k_{0}$.

Proof.

Let $K$ be as in Lemma 15; whenever the PAC algorithm succeeds in iteration $k$ we have $k\in K$, hence

\[\mathbb{E}\left[\#(\mathbb{N}\setminus K)\right]\leq\sum_{k\in\mathbb{N}}\mathbb{P}[\text{the PAC algorithm fails in iteration }k]\leq\sum_{k\in\mathbb{N}}\delta_{k}=\sum_{k\in\mathbb{N}}\frac{1}{k^{2}}<\infty.\]

Therefore, $\mathbb{N}\setminus K$ is almost surely finite, and the claim follows from Lemma 15. ∎

Next, we turn to the more general case of reward machines. [23, 24] observe that optimal policies for reward machines can be learned by learning optimal policies for a modified MDP which additionally tracks the state the reward machine is in and assigns rewards accordingly (a sketch of this product construction is given after Corollary 17). We conclude at once:

Corollary 17.

RL with average reward machines can be learned in the limit.
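The observation of [23, 24] amounts to running the reward machine in lockstep with the MDP and letting a standard average-reward learner operate on the pair of states. A minimal Python sketch of such a wrapper is given below; the simulator interface (reset, step, state) and the attribute names are assumptions made for illustration, not part of our formal development:

class ProductSimulator:
    # Wrap an MDP simulator and a reward machine so that a standard average-reward
    # RL algorithm sees states (s, u) and the rewards emitted by the reward machine.
    def __init__(self, mdp_sim, rm):
        self.mdp_sim, self.rm = mdp_sim, rm

    def reset(self):
        s = self.mdp_sim.reset()
        self.u = self.rm.u0
        return (s, self.u)

    def step(self, action):
        s = self.mdp_sim.state
        s_next = self.mdp_sim.step(action)
        r = self.rm.delta_r(self.u, (s, action, s_next))
        self.u = self.rm.delta_u(self.u, (s, action, s_next))
        return (s_next, self.u), r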

Finally, harnessing Theorem 12 we resolve Open Problem 4 of [3]:

Theorem 18.

RL with $\omega$-regular and LTL objectives can be learned in the limit.

Discussion.

Algorithm 1 makes independent calls to black box algorithms for discount sum rewards. Many such algorithms with PAC guarantees are model based (e.g. [25, 35]) and sample from the MDP to obtain suitable approximations of the transition probabilities. Thus, Algorithm 1 can be improved in practice by re-using approximations obtained in earlier iterations and refining them.

7 Related Work and Conclusion

The connection between acceptance of $\omega$-regular languages in the product MDP and AECs is well known in the field of probabilistic model checking [5, 12]. As an alternative to DRAs [40, 14, 31], Limit-Deterministic Büchi Automata [34] have been employed to express $\omega$-regular languages for RL [37, 7, 10, 20, 21].

A pioneering work on RL for $\omega$-regular rewards is [40], which expresses $\omega$-regular objectives using Deterministic Rabin Automata. Similar RL approaches for $\omega$-regular objectives can also be found in [14, 37, 10, 15]. The authors of [15, 29] approach RL for $\omega$-regular objectives directly by studying the reachability of AECs in the product MDP and developing variants of the R-MAX algorithm [8] to find optimal policies. However, these approaches require prior knowledge about the MDP, such as its structure, the optimal $\epsilon$-return mixing time [15], or the $\epsilon$-recurrence time [29].

Various studies have explored reductions of $\omega$-regular objectives to discounted rewards, and subsequently applied Q-learning and its variants to learn optimal policies [7, 31, 20, 21, 16]. In a similar spirit, [38] present a translation from LTL objectives to eventual discounted rewards, where only strictly positive rewards are discounted. These translations are generally not optimality preserving unless the discount factor is selected in a suitable way, which is impossible without prior knowledge of the exact probability transition function of the MDP.

Furthermore, whilst there are numerous convergent RL algorithms for average rewards in unichain or communicating MDPs (e.g. [8, 42, 17, 32, 4, 39]), it is unknown whether an algorithm with a guaranteed convergence property exists for general multichain MDPs. (These assumptions generally fail in our setting, where, in view of Corollary 17, MDP states also track the states of the reward machine; for instance, in the reward machine in Fig. 2(a) it is impossible to reach $u_{1}$ from $u_{2}$.) In fact, a negative result in [3] shows that there is no PAC (probably approximately correct) algorithm for LTL objectives and limit-average rewards when the MDP transition probabilities are unknown.

[8] have proposed an algorithm with PAC guarantees provided ϵ\epsilon-return mixing times are known. They informally argue that for fixed sub-optimality tolerance ϵ\epsilon, this assumption can be lifted by guessing increasingly large candidates for the ϵ\epsilon-return mixing time. This yields ϵ\epsilon-approximately optimal policies in the limit. However, it is not clear how to asymptotically obtain exactly optimal policies as this would require simultaneously decreasing ϵ\epsilon and increasing guesses for the ϵ\epsilon-return mixing time (which depends on ϵ\epsilon).

Conclusion.

We have presented an optimality-preserving translation from $\omega$-regular objectives to limit-average rewards furnished by reward machines. As a consequence, off-the-shelf RL algorithms for average rewards can be employed in conjunction with our translation to learn policies for $\omega$-regular objectives. Furthermore, we have developed an algorithm that asymptotically learns provably optimal policies for limit-average rewards. Hence, optimal policies for $\omega$-regular and LTL objectives can also be learned in the limit. Our results provide affirmative answers to two open problems in [3].

Limitations.

We focus on MDPs with finite state and action sets and assume states are fully observable. The assumption of Section 4 that the support of the MDP’s probability transition function is known is eliminated in Section 5. Whilst the size of our general translation—the first optimality-preserving translation—is exponential, the additional knowledge in Section 4 enables a construction of the reward machine of the same size as the DRA expressing the objective. Hence, we conjecture that this size is minimal. Since RL with average rewards is not PAC learnable, we cannot possibly provide finite-time complexity guarantees of our Algorithm 1.

Acknowledgments and Disclosure of Funding

This research is supported by the National Research Foundation, Singapore, under its RSS Scheme (NRFRSS2022-009).

References

  • [1] David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, and Satinder Singh. On the expressivity of markov reward. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 7799–7812, 2021.
  • [2] Luca Alfaro. Formal Verification of Probabilistic Systems. Phd thesis, Stanford University, Stanford, CA, USA, 1998.
  • [3] Rajeev Alur, Suguman Bansal, Osbert Bastani, and Kishor Jothimurugan. A framework for transforming specifications in reinforcement learning. In Jean-François Raskin, Krishnendu Chatterjee, Laurent Doyen, and Rupak Majumdar, editors, Principles of Systems Design: Essays Dedicated to Thomas A. Henzinger on the Occasion of His 60th Birthday, pages 604–624, Cham, 2022. Springer Nature Switzerland.
  • [4] Peter Auer, Thomas Jaksch, and Ronald Ortner. Near-optimal regret bounds for reinforcement learning. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems, volume 21. Curran Associates, Inc., 2008.
  • [5] Christel Baier and Joost-Pieter Katoen. Principles of Model Checking. The MIT Press, 2008.
  • [6] David Blackwell. Discrete Dynamic Programming. The Annals of Mathematical Statistics, 33(2):719 – 726, 1962.
  • [7] Alper Kamil Bozkurt, Yu Wang, Michael M. Zavlanos, and Miroslav Pajic. Control synthesis from Linear Temporal Logic specifications using model-free reinforcement learning. 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 10349–10355, 2019.
  • [8] Ronen I. Brafman and Moshe Tennenholtz. R-max - A general polynomial time algorithm for near-optimal reinforcement learning. J. Mach. Learn. Res., 3(null):213–231, mar 2003.
  • [9] Tomáš Brázdil, Krishnendu Chatterjee, Martin Chmelik, Vojtěch Forejt, Jan Křetínskỳ, Marta Kwiatkowska, David Parker, and Mateusz Ujma. Verification of Markov decision processes using learning algorithms. In Automated Technology for Verification and Analysis: 12th International Symposium, ATVA 2014, Sydney, NSW, Australia, November 3-7, 2014, Proceedings 12, pages 98–114. Springer, 2014.
  • [10] Mingyu Cai, Shaoping Xiao, Zhijun Li, and Zhen Kan. Optimal probabilistic motion planning with potential infeasible LTL constraints. IEEE Transactions on Automatic Control, 68(1):301–316, 2023.
  • [11] Krishnendu Chatterjee and Monika Henzinger. Faster and dynamic algorithms for maximal end-component decomposition and related graph problems in probabilistic verification. In Dana Randall, editor, Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco, California, USA, January 23-25, 2011, pages 1318–1336. SIAM, 2011.
  • [12] Luca de Alfaro. Computing minimum and maximum reachability times in probabilistic systems. In Jos C. M. Baeten and Sjouke Mauw, editors, CONCUR’99 Concurrency Theory, pages 66–81, Berlin, Heidelberg, 1999. Springer Berlin Heidelberg.
  • [13] Vektor Dewanto, George Dunn, Ali Eshragh, Marcus Gallagher, and Fred Roosta. Average-reward model-free reinforcement learning: a systematic review and literature mapping, 2021.
  • [14] Xuchu Ding, Stephen L. Smith, Calin Belta, and Daniela Rus. Optimal control of Markov Decision Processes with Linear Temporal Logic constraints. IEEE Transactions on Automatic Control, 59(5):1244–1257, 2014.
  • [15] Jie Fu and Ufuk Topcu. Probably approximately correct MDP learning and control with Temporal Logic constraints. In Dieter Fox, Lydia E. Kavraki, and Hanna Kurniawati, editors, Robotics: Science and Systems X, University of California, Berkeley, USA, July 12-16, 2014, 2014.
  • [16] Qitong Gao, Davood Hajinezhad, Yan Zhang, Yiannis Kantaros, and Michael M. Zavlanos. Reduced variance deep reinforcement learning with Temporal Logic specifications. In Proceedings of the 10th ACM/IEEE International Conference on Cyber-Physical Systems, ICCPS ’19, page 237–248, New York, NY, USA, 2019. Association for Computing Machinery.
  • [17] Abhijit Gosavi. Reinforcement learning for long-run average cost. European Journal of Operational Research, 155(3):654–674, 2004. Traffic and Transportation Systems Analysis.
  • [18] Julien Grand-Clément and Marek Petrik. Reducing blackwell and average optimality to discounted mdps via the blackwell discount factor. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023.
  • [19] Ernst Moritz Hahn, Mateo Perez, Sven Schewe, Fabio Somenzi, Ashutosh Trivedi, and Dominik Wojtczak. Omega-regular objectives in model-free reinforcement learning. In Tomáš Vojnar and Lijun Zhang, editors, Tools and Algorithms for the Construction and Analysis of Systems, pages 395–412, Cham, 2019. Springer International Publishing.
  • [20] Hosein Hasanbeig, Daniel Kroening, and Alessandro Abate. Certified reinforcement learning with logic guidance. Artificial Intelligence, 322:103949, 2023.
  • [21] Mohammadhosein Hasanbeig, Daniel Kroening, and Alessandro Abate. Deep reinforcement learning with Temporal Logics. In Nathalie Bertrand and Nils Jansen, editors, Formal Modeling and Analysis of Timed Systems, pages 1–22, Cham, 2020. Springer International Publishing.
  • [22] Arie Hordijk and Alexander A. Yushkevich. Blackwell Optimality, pages 231–267. Springer US, Boston, MA, 2002.
  • [23] Rodrigo Toro Icarte. Reward Machines. Phd thesis, University of Toronto, 03 2022.
  • [24] Rodrigo Toro Icarte, Toryn Klassen, Richard Valenzano, and Sheila McIlraith. Using reward machines for high-level task specification and decomposition in reinforcement learning. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2107–2116. PMLR, 10–15 Jul 2018.
  • [25] Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49:209–232, 2002.
  • [26] Bakhadyr Khoussainov and Anil Nerode. Automata Theory and its Applications. Birkhäuser Boston, Boston, MA, 2001.
  • [27] Achim Klenke. Probability Theory: A Comprehensive Course. Universitext. Springer London, 2014.
  • [28] Dexter Kozen. Theory of Computation. Springer, London, 2006.
  • [29] Mateo Perez, Fabio Somenzi, and Ashutosh Trivedi. A PAC Learning Algorithm for LTL and Omega-Regular Objectives in MDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19):21510–21517, 2024.
  • [30] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., USA, 1st edition, 1994.
  • [31] Dorsa Sadigh, Eric S. Kim, Samuel Coogan, S. Shankar Sastry, and Sanjit A. Seshia. A learning based approach to control synthesis of Markov Decision Processes for Linear Temporal Logic specifications. In 53rd IEEE Conference on Decision and Control, pages 1091–1096, 2014.
  • [32] Anton Schwartz. A reinforcement learning method for maximizing undiscounted rewards. In International Conference on Machine Learning, 1993.
  • [33] Daqian Shao and Marta Kwiatkowska. Sample Efficient Model-free Reinforcement Learning from LTL Specifications with Optimality Guarantees. IJCAI International Joint Conference on Artificial Intelligence, 2023-Augus:4180–4189, 2023.
  • [34] Salomon Sickert, Javier Esparza, Stefan Jaax, and Jan Křetínský. Limit-deterministic Büchi automata for Linear Temporal Logic. In Swarat Chaudhuri and Azadeh Farzan, editors, Computer Aided Verification, pages 312–332, Cham, 2016. Springer International Publishing.
  • [35] Alexander L. Strehl, Lihong Li, and Michael L. Littman. Reinforcement learning in finite mdps: PAC analysis. J. Mach. Learn. Res., 10:2413–2444, 2009.
  • [36] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. A Bradford Book, Cambridge, MA, USA, 2018.
  • [37] Cameron Voloshin, Hoang Le, Swarat Chaudhuri, and Yisong Yue. Policy optimization with Linear Temporal Logic constraints. Advances in Neural Information Processing Systems, 35:17690–17702, 2022.
  • [38] Cameron Voloshin, Abhinav Verma, and Yisong Yue. Eventual discounting temporal logic counterfactual experience replay. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 35137–35150. PMLR, 2023.
  • [39] Yi Wan, Abhishek Naik, and Richard S. Sutton. Learning and planning in average-reward markov decision processes. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 10653–10662. PMLR, 2021.
  • [40] Eric M. Wolff, Ufuk Topcu, and Richard M. Murray. Robust control of uncertain Markov Decision Processes with Temporal Logic specifications. In 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), pages 3372–3379, 2012.
  • [41] Cambridge Yang, Michael L. Littman, and Michael Carbin. On the (In)Tractability of Reinforcement Learning for LTL Objectives. IJCAI International Joint Conference on Artificial Intelligence, pages 3650–3658, 2022.
  • [42] Shangdong Yang, Yang Gao, Bo An, Hao Wang, and Xingguo Chen. Efficient average reward reinforcement learning using constant shifting values. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1), 2016.

Appendix A Supplementary Materials for Section 3

Recall that an $\omega$-regular language $L$ is prefix-independent if for every infinite label sequence $w\in(2^{\mathcal{AP}})^{\omega}$, we have $w\in L$ iff $w^{\prime}\in L$ for every suffix $w^{\prime}$ of $w$. We prove that there is no optimality-preserving translation for reward functions regardless of whether $L$ is prefix-independent or not. The prefix-dependent case was given in Section 3; here we focus on the other case:

Proposition 19.

There exists a tuple $(S,A,s_{0},\lambda)$ and a prefix-independent $\omega$-regular language $L$ for which it is impossible to find a reward function $\mathcal{R}:S\times A\times S\rightarrow\mathbb{R}$ such that for every probability transition function $P$, with $\mathcal{M}=(S,A,s_{0},P,\lambda)$, every $\mathcal{R}^{\mathrm{avg}}$-optimal policy of $\mathcal{M}$ is also $L$-optimal (i.e. maximises the probability of membership in $L$).

Proof.

Our proof technique is based on the fact that we can modify the transition probability function. Consider the MDP in Fig. 3(a), where the objective is to visit either s1s_{1} or s3s_{3} infinitely often. It can be checked that the DRA in Fig. 3(b) captures the given objective and the language accepted by 𝒜\mathcal{A} is prefix-independent. There are only two deterministic memoryless policies: π1\pi_{1}, which consistently selects action aa, and π2\pi_{2}, which consistently selects action bb. For the sake of contradiction, let’s assume the existence of a reward function \mathcal{R} that preserves optimality for every transition probability function PP. Pick p1=1p_{1}=1 and p2=0p_{2}=0. Then 𝒥𝒜(π1)=1\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi_{1})=1 and 𝒥𝒜(π2)=0\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi_{2})=0, which implies that π1\pi_{1} is 𝒜\mathcal{A}-optimal whereas π2\pi_{2} is not. Thus (s1,a,s1)=𝒥avg(π1)>𝒥avg(π2)=(s0,b,s0)\mathcal{R}(s_{1},a,s_{1})=\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi_{1})>\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi_{2})=\mathcal{R}(s_{0},b,s_{0}). Now, assume p1,p2(0,1)p_{1},p_{2}\in(0,1). Accordingly, we have 𝒥avg(π1)p1(s1,a,s1)\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi_{1})\geq p_{1}\mathcal{R}(s_{1},a,s_{1}) and we can deduce that (e.g. by solving the linear equation system described in [30, §8.2.3]) 𝒥avg(π2)=p22p2(s0,b,s0)+1p22p2((s0,b,s3)+(s3,b,s0))\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi_{2})=\frac{p_{2}}{2-p_{2}}\mathcal{R}(s_{0},b,s_{0})+\frac{1-p_{2}}{2-p_{2}}\left(\mathcal{R}(s_{0},b,s_{3})+\mathcal{R}(s_{3},b,s_{0})\right). As a result:

limp11𝒥avg(π1)(s1,a,s1)>(s0,b,s0)=limp21𝒥avg(π2)\lim_{p_{1}\rightarrow 1}\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi_{1})~{}\geq~{}\mathcal{R}(s_{1},a,s_{1})~{}>~{}\mathcal{R}(s_{0},b,s_{0})~{}=~{}\lim_{p_{2}\rightarrow 1}\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi_{2})

Consequently, if p1,p2p_{1},p_{2} are sufficiently large then 𝒥avg(π1)>𝒥avg(π2)\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi_{1})>\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi_{2}). However, this contradicts the fact that π2\pi_{2} is 𝒜\mathcal{A}-optimal and π1\pi_{1} is not, since 𝒥𝒜(π2)=1>p1=𝒥𝒜(π1)\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi_{2})=1>p_{1}=\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi_{1}). Hence, there is no such reward function \mathcal{R}. ∎

[Transition diagram of \mathcal{M}: under action aa, state s0s_{0} moves to s1s_{1} with probability p1p_{1} and to s2s_{2} with probability 1p11-p_{1}, while s1s_{1} and s2s_{2} have aa-self-loops with probability 11; under action bb, s0s_{0} self-loops with probability p2p_{2} and moves to s3s_{3} with probability 1p21-p_{2}, and s3s_{3} moves back to s0s_{0} with probability 11.]
(a) An MDP \mathcal{M} where λ(s1,a,s1)=λ(s3,b,s0)={c}\lambda(s_{1},a,s_{1})=\lambda(s_{3},b,s_{0})=\{c\}, and the rest are labeled with \emptyset.
[Transition diagram of 𝒜\mathcal{A}: on reading {c}\{c\} the automaton moves to q1q_{1}, and on reading \emptyset it moves to q0q_{0}.]
(b) A DRA 𝒜\mathcal{A} for the objective of visiting s1s_{1} or s3s_{3} infinitely often where F:={({q1},)}F:=\{(\{q_{1}\},\emptyset)\}.
Figure 3: Counter-example for prefix-independent objectives.
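As a quick numerical sanity check of the average-reward expressions used in the proof of Proposition 19 (it is not needed for the argument), the following Python sketch computes the limit-average reward of the two memoryless policies from the stationary distribution of the induced Markov chains; the concrete reward values and probabilities are hypothetical placeholders.

```python
import numpy as np

# Hypothetical reward values and transition probabilities, for illustration only.
R = {("s1", "a", "s1"): 0.9, ("s2", "a", "s2"): 0.0,
     ("s0", "b", "s0"): 0.2, ("s0", "b", "s3"): 0.4, ("s3", "b", "s0"): 0.7}
p1, p2 = 0.8, 0.6

# Policy pi1 (always a): absorbed in the self-loop at s1 w.p. p1, at s2 w.p. 1 - p1.
avg_pi1 = p1 * R[("s1", "a", "s1")] + (1 - p1) * R[("s2", "a", "s2")]

# Policy pi2 (always b) induces a Markov chain on {s0, s3}:
# s0 -> s0 w.p. p2, s0 -> s3 w.p. 1 - p2, s3 -> s0 w.p. 1.
P = np.array([[p2, 1 - p2],
              [1.0, 0.0]])
A = np.vstack([P.T - np.eye(2), np.ones((1, 2))])     # mu P = mu and mu sums to 1
mu = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)[0]
avg_pi2 = mu[0] * (p2 * R[("s0", "b", "s0")] + (1 - p2) * R[("s0", "b", "s3")]) \
          + mu[1] * R[("s3", "b", "s0")]

# Closed form used in the proof.
closed = p2 / (2 - p2) * R[("s0", "b", "s0")] \
         + (1 - p2) / (2 - p2) * (R[("s0", "b", "s3")] + R[("s3", "b", "s0")])
assert np.isclose(avg_pi2, closed)
print(avg_pi1, avg_pi2)
```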

Appendix B Supplementary Materials for Section 4

Lemma 20.

Every AEC contains an ASEC.

Proof.

Consider an AEC 𝒞=(T,Act)\mathcal{C}=(T,\operatorname{Act}) of 𝒜\mathcal{M}_{\mathcal{A}}. We will prove this by using induction on the number of actions in 𝒞\mathcal{C}, denoted as 𝗌𝗂𝗓𝖾(𝒞):=sT|Act(s)|1\mathsf{size}(\mathcal{C}):=\sum_{s\in T}|\operatorname{Act}(s)|\geq 1. For the base case where 𝗌𝗂𝗓𝖾(𝒞)=1\mathsf{size}(\mathcal{C})=1, it can be deduced that 𝒞\mathcal{C} consists of only one accepting state with a self-loop. Therefore, 𝒞\mathcal{C} itself is an ASEC.

Now assume that 𝗌𝗂𝗓𝖾(𝒞)=k+12\mathsf{size}(\mathcal{C})=k+1\geq 2. If 𝒞\mathcal{C} is already an ASEC, then we are done. Otherwise, there exists a state sTs\in T such that |Act(s)|>1|\operatorname{Act}(s)|>1. Since 𝒞\mathcal{C} is strongly connected, there exists a finite path ρ=sas1a1snansF\rho=sas_{1}a_{1}\ldots s_{n}a_{n}s_{F} where sFs_{F} is an accepting state and all the states s1,,sns_{1},\ldots,s_{n} are different from ss. Let aAct(s)a^{\prime}\in\operatorname{Act}(s) such that aaa^{\prime}\neq a. We construct a new AEC 𝒞=(T,Act)\mathcal{C}^{\prime}=(T^{\prime},\operatorname{Act}^{\prime}) by first removing aa^{\prime} from Act(s)\operatorname{Act}(s) and then removing all the states that are no longer reachable from ss, along with their associated transitions. Note that after the removal sFTs_{F}\in T^{\prime}, since sFs_{F} can be reached from ss via ρ\rho without taking the action aa^{\prime}. Moreover, the resulting sub-MDP is still strongly connected: every remaining state is reachable from ss by construction, and it can still reach ss because only an action at ss was removed, so the transitions of the other states are unaffected. Since 𝗌𝗂𝗓𝖾(𝒞)k\mathsf{size}(\mathcal{C}^{\prime})\leq k, we can apply the induction hypothesis to conclude that 𝒞\mathcal{C}^{\prime} contains an ASEC, thus completing the proof. ∎

See Lemma 8.

To prove this result, we recall a well-known result from probabilistic model checking: with probability one, every run ρ\rho of the policy π\pi eventually stays in one of the ECs of 𝒜\mathcal{M}_{\mathcal{A}} and visits every transition in that EC infinitely often. To state this formally, we define, for any run ρ=s0a0s1\rho=s_{0}a_{0}s_{1}\cdots,

InfSA(ρ):={(s,a)S×A|{isi=sai=a}|=}\displaystyle\operatorname{InfSA}(\rho):=\{(s,a)\in S\times A\mid|\{i\in\mathbb{N}\mid s_{i}=s\land a_{i}=a\}|=\infty\}

the set of state-action-pairs occurring infinitely often in ρ\rho. Furthermore, a state-action set χS×A\chi\subseteq S\times A defines a sub-MDP sub(χ):=(T,Act)\operatorname{sub}(\chi):=(T,\operatorname{Act}), where

T\displaystyle T :={sS(s,a)χ for some aA}\displaystyle:=\{s\in S\mid(s,a)\in\chi\text{ for some }a\in A\} Act(s)\displaystyle\operatorname{Act}(s) :={a(s,a)χ}\displaystyle:=\{a\mid(s,a)\in\chi\}
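To make these definitions concrete, here is a small Python sketch of the sub(χ) construction together with a direct check of the end-component conditions (closure under the enabled transitions and strong connectivity of (T, →Act)); the successor function support(s, a) is a hypothetical stand-in for the transition support of the product MDP.

```python
def sub(chi):
    """sub(chi): the sub-MDP (T, Act) induced by a set chi of state-action pairs."""
    T = {s for (s, _) in chi}
    Act = {s: {a for (t, a) in chi if t == s} for s in T}
    return T, Act

def is_end_component(T, Act, support):
    """Check that (T, Act) is an end component, where support(s, a) is the set of
    successor states reachable with positive probability."""
    if not T or any(not Act[s] for s in T):
        return False
    # Closure: every enabled transition stays inside T.
    if any(not support(s, a) <= T for s in T for a in Act[s]):
        return False
    # Strong connectivity of the induced graph (T, ->_Act).
    def reachable(start):
        seen, stack = {start}, [start]
        while stack:
            t = stack.pop()
            for a in Act[t]:
                for t2 in support(t, a):
                    if t2 not in seen:
                        seen.add(t2)
                        stack.append(t2)
        return seen
    return all(reachable(s) >= T for s in T)
```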
Lemma 21 ([12]).

ρ𝒟π𝒜[sub(InfSA(ρ)) is an end component]=1\mathbb{P}_{\rho\sim\mathcal{D}^{\mathcal{M}\otimes\mathcal{A}}_{\pi}}[\operatorname{sub}(\operatorname{InfSA}(\rho))\text{ is an end component}]=1.

To keep the paper self-contained, we recall the proof from [12].

Proof.

We start with two more definitions: for any sub-MDP (T,Act)(T,\operatorname{Act}) [2], let

sa(T,Act):={(s,a)T×AaAct(s)}\displaystyle\operatorname{sa}(T,\operatorname{Act}):=\{(s,a)\in T\times A\mid a\in\operatorname{Act}(s)\}

be the set of state-action pairs (s,a)(s,a) such that aa is enabled in ss. Finally, let

Ω(T,Act):={ρRuns(S,A)InfSA(ρ)=sa(T,Act)}\displaystyle\Omega^{(T,\operatorname{Act})}:=\{\rho\in\mathrm{Runs}(S,A)\mid\operatorname{InfSA}(\rho)=\operatorname{sa}(T,\operatorname{Act})\}

be the set of runs such that action aa is taken infinitely often in state ss iff sTs\in T and aAct(s)a\in\operatorname{Act}(s). Note that the Ω(T,Act)\Omega^{(T,\operatorname{Act})} constitute a partition of Runs(S,A)\mathrm{Runs}(S,A).

Therefore, it suffices to establish that, for any sub-MDP (T,Act)(T,\operatorname{Act}), either (T,Act)(T,\operatorname{Act}) is an end component or [ρΩ(T,Act)]=0\mathbb{P}[\rho\in\Omega^{(T,\operatorname{Act})}]=0.

Let (T,Act)(T,\operatorname{Act}) be an arbitrary sub-MDP. First, suppose there exist tTt\in T and aAct(t)a\in\operatorname{Act}(t) such that p:=tTΔ(t,a,t)<1p:=\sum_{t^{\prime}\in T}\Delta(t,a,t^{\prime})<1. By definition, each ρΩ(T,Act)\rho\in\Omega^{(T,\operatorname{Act})} takes action aa in state tt infinitely often. Hence [ρΩ(T,Act)]pk\mathbb{P}[\rho\in\Omega^{(T,\operatorname{Act})}]\leq p^{k} for all kk\in\mathbb{N}, and therefore [ρΩ(T,Act)]=0\mathbb{P}[\rho\in\Omega^{(T,\operatorname{Act})}]=0.

Thus, we can assume that for all tTt\in T and aAct(t)a\in\operatorname{Act}(t), tTΔ(t,a,t)=1\sum_{t^{\prime}\in T}\Delta(t,a,t^{\prime})=1. If Ω(T,Act)=\Omega^{(T,\operatorname{Act})}=\emptyset then clearly [ρΩ(T,Act)]=0\mathbb{P}[\rho\in\Omega^{(T,\operatorname{Act})}]=0 follows. Otherwise, take any ρ=s0a0s1a1Ω(T,Act)\rho=s_{0}a_{0}s_{1}a_{1}\cdots\in\Omega^{(T,\operatorname{Act})}, and let t,tTt,t^{\prime}\in T be arbitrary. We show that there exists a path from tt to tt^{\prime} in (T,Act)(T,\to_{\operatorname{Act}}), which implies that (T,Act)(T,\operatorname{Act}) is an end component.

Evidently, there exists an index i0i_{0} such that every state-action pair occurring in ρ\rho from position i0i_{0} onwards occurs infinitely often, i.e.

{(si0,ai0),(si0+1,ai0+1),}=InfSA(ρ)\displaystyle\{(s_{i_{0}},a_{i_{0}}),(s_{i_{0}+1},a_{i_{0}+1}),\ldots\}=\operatorname{InfSA}(\rho)

Thus, for all ii0i\geq i_{0}, siTs_{i}\in T and aiAct(si)a_{i}\in\operatorname{Act}(s_{i}), and for all i>ii0i^{\prime}>i\geq i_{0}, there is a path from sis_{i} to sis_{i^{\prime}} in (T,Act)(T,\to_{\operatorname{Act}}). Finally, it suffices to note that, since tt and tt^{\prime} occur infinitely often in ρ\rho, there exist indices i>ii0i^{\prime}>i\geq i_{0} with si=ts_{i}=t and si=ts_{i^{\prime}}=t^{\prime}. ∎

Proof of Lemma 8.

By Lemma 21, almost surely sub(InfSA(ρ))\operatorname{sub}(\operatorname{InfSA}(\rho)) is an end component. Clearly, ρ\rho is only accepted by the product MDP if this end component is an accepting EC. By Lemma 20 this AEC contains an ASEC, and by definition of sub(InfSA(ρ))\operatorname{sub}(\operatorname{InfSA}(\rho)) the run ρ\rho visits every state of this ASEC. Therefore, almost surely, every accepted run ρ\rho enters some ASEC. Finally, since the 𝒞1,,𝒞n\mathcal{C}_{1},\ldots,\mathcal{C}_{n} cover all states in ASECs, almost surely every accepted run ρ\rho enters some 𝒞i\mathcal{C}_{i}. ∎

[Transition diagram of the reward machine: states q0q_{0} (initial), q1q_{1}, q2q_{2} and a sink \bot; edges are labelled with product transitions and rewards such as (s0,a,s0)/0(s_{0},a,s_{0})/0, (s0,b,s1)/1(s_{0},b,s_{1})/1 and /0*/0.]
Figure 4: Reward machine yielded by our construction in Section 4 for the running example.
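For readers who prefer code to diagrams, the following is a minimal generic sketch of a reward machine as a Mealy machine over product transitions (an illustrative interface only, not the exact construction of Section 4.2); instantiated with a transition table as in Figure 4, unspecified transitions fall through to the reward-0 sink ⊥.

```python
class RewardMachine:
    """Finite-state transducer: reads MDP transitions (s, a, s') and emits rewards.
    delta maps (u, (s, a, s')) to (next machine state, reward); transitions that
    are not listed fall through to a reward-0 sink, mirroring the */0 edges."""

    def __init__(self, u0, delta, sink="bot"):
        self.u = u0
        self.delta = delta
        self.sink = sink

    def step(self, s, a, s_next):
        self.u, reward = self.delta.get((self.u, (s, a, s_next)),
                                        (self.sink, 0.0))
        return reward
```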

Before turning to the proof of Lemma 10, let 𝒥avg(ρ)=lim inft1ti=0t1ri\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\rho)=\liminf_{t\to\infty}\frac{1}{t}\cdot\sum_{i=0}^{t-1}~{}r_{i} denote the limit-average reward of a run ρ\rho, where rir_{i} is the reward obtained in step ii. Note that, for any run ρ\rho, 𝒥avg(ρ){0,1}\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\rho)\in\{0,1\}. Thus, by the dominated convergence theorem [27, Cor. 6.26],

ρ𝒟π[𝒥avg(ρ)=1]=𝔼ρ𝒟π[𝒥avg(ρ)]=lim inft𝔼ρ𝒟π[1ti=0t1ri]=𝒥avg(π)\displaystyle\mathbb{P}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}\left[\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\rho)=1\right]~{}=~{}\mathbb{E}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}[\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\rho)]~{}=~{}\liminf_{t\to\infty}\mathbb{E}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}\left[\frac{1}{t}\cdot\sum_{i=0}^{t-1}~{}r_{i}\right]~{}=~{}\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi) (3)

See Lemma 10.

Proof.
1.

For any run ρ\rho, 𝒥avg(ρ)=1\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\rho)=1 only if ρ\rho^{\otimes} enters a 𝒞i\mathcal{C}_{i} and never leaves it. (ρ\rho^{\otimes} might have entered other 𝒞j\mathcal{C}_{j}’s earlier, but then those necessarily overlap with yet another 𝒞k\mathcal{C}_{k} such that ik<ji\leq k<j, as otherwise the run would be trapped in state \bot, which would yield 𝒥avg(ρ)=0\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\rho)=0. Furthermore, this 𝒞i\mathcal{C}_{i} can only overlap with 𝒞j\mathcal{C}_{j} if i<ji<j; otherwise, the reward machine would have enforced transitioning to 𝒞j\mathcal{C}_{j}.)

    Since 𝒞i\mathcal{C}_{i} is an ASEC, ρ\rho^{\otimes} is accepted by the product MDP 𝒜\mathcal{M}\otimes\mathcal{A}. Hence, by Eqs. 2 and 3,

    𝒥avg(π)\displaystyle\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi) =ρ𝒟π[𝒥avg(ρ)=1]ρ𝒟π[ρ accepted by 𝒜]=𝒥𝒜(π)\displaystyle~{}=~{}\mathbb{P}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}\left[\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\rho)=1\right]~{}\leq~{}\mathbb{P}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}\left[\rho^{\otimes}\text{ accepted by }\mathcal{M}\otimes\mathcal{A}\right]~{}=~{}\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi)
2.

Let π\pi be arbitrary. For a run s0a0s_{0}a_{0}\cdots let qtq_{t} be the state of the DRA in step tt. Define π\pi^{\prime} to follow π\pi until reaching sts_{t} such that (st,qt)T1Tn(s_{t},q_{t})\in T_{1}\cup\cdots\cup T_{n}. Henceforth, we select the (unique) action guaranteeing to stay in the 𝒞i\mathcal{C}_{i} with minimal ii containing the current state, i.e. Act(st,qt)(st,qt)\operatorname{Act}_{(s_{t},q_{t})}(s_{t},q_{t}); a code sketch of this switching policy is given after the proof. Formally (slightly abusing notation in the “otherwise”-case, we denote by Act(st,qt)(st,qt)\operatorname{Act}_{(s_{t},q_{t})}(s_{t},q_{t}) the distribution selecting the action in the singleton set Act(st,qt)(st,qt)\operatorname{Act}_{(s_{t},q_{t})}(s_{t},q_{t}) with probability 1),

    π(s0a0st):={π(s0a0st)if (st,qt)T1TnAct(st,qt)(st,qt)otherwise\displaystyle\pi^{\prime}(s_{0}a_{0}\cdots s_{t})~{}:=~{}\begin{cases}\pi(s_{0}a_{0}\cdots s_{t})&\text{if }(s_{t},q_{t})\not\in T_{1}\cup\cdots\cup T_{n}\\ \operatorname{Act}_{(s_{t},q_{t})}(s_{t},q_{t})&\text{otherwise}\end{cases} (4)

    Note that whenever a run ρ𝒟π\rho\sim\mathcal{D}_{\pi^{\prime}}^{\mathcal{M}} follows the modified policy π\pi^{\prime} and its induced run ρ\rho^{\otimes} reaches some ASEC 𝒞i\mathcal{C}_{i} then 𝒥avg(ρ)=1\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\rho)=1. Thus,

    ρ𝒟π[ρ reaches some 𝒞i]𝔼ρ𝒟π[𝒥avg(ρ)]=𝒥avg(π)\displaystyle\mathbb{P}_{\rho\sim\mathcal{D}_{\pi^{\prime}}^{\mathcal{M}}}[\rho^{\otimes}\text{ reaches some }\mathcal{C}_{i}]~{}\leq~{}\mathbb{E}_{\rho\sim\mathcal{D}_{\pi^{\prime}}^{\mathcal{M}}}[\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\rho)]~{}=~{}\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi^{\prime})

    Furthermore, by Lemma 8 almost surely, every induced run ρ\rho^{\otimes} accepted by the product MDP must reach some 𝒞i\mathcal{C}_{i}. Consequently, by Eq. 2,

    𝒥𝒜(π)\displaystyle\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi) =ρ𝒟π[ρ is accepted by 𝒜]\displaystyle=\mathbb{P}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}[\rho^{\otimes}\text{ is accepted by }\mathcal{M}\otimes\mathcal{A}]
    ρ𝒟π[ρ reaches some 𝒞i]\displaystyle\leq\mathbb{P}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}[\rho^{\otimes}\text{ reaches some }\mathcal{C}_{i}]
    =ρ𝒟π[ρ reaches some 𝒞i]𝒥avg(π)\displaystyle=\mathbb{P}_{\rho\sim\mathcal{D}_{\pi^{\prime}}^{\mathcal{M}}}[\rho^{\otimes}\text{ reaches some }\mathcal{C}_{i}]\leq\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi^{\prime})

    In the penultimate step, we have exploited the fact that π\pi and π\pi^{\prime} agree until reaching the first 𝒞i\mathcal{C}_{i}.∎
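The switching policy π′ of Eq. (4) can be sketched in code as follows; the arguments pi, dra_step, asec_states and asec_action are hypothetical stand-ins for the base policy, the DRA transition function, the union T1 ∪ ⋯ ∪ Tn, and the map assigning to each state of an ASEC its unique enabled action.

```python
def make_switching_policy(pi, dra_step, q0, asec_states, asec_action):
    """Sketch of pi' from Eq. (4): imitate pi until the product state (s_t, q_t)
    lies in T_1 u ... u T_n, then always play the unique action of the ASEC."""
    def pi_prime(history, labels):
        # history = s_0 a_0 ... s_t (states and actions); labels = observed labels.
        q = q0
        for lab in labels:               # recompute the DRA state q_t
            q = dra_step(q, lab)
        s = history[-1]                  # current MDP state s_t
        if (s, q) in asec_states:
            return asec_action[(s, q)]   # stay inside the ASEC
        return pi(history)               # otherwise imitate pi
    return pi_prime
```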

B.1 Efficient Construction

We consider a different collection 𝒞1,,𝒞n\mathcal{C}_{1},\ldots,\mathcal{C}_{n} of ASECs:

Suppose 𝒞1,,𝒞n\mathcal{C}^{\prime}_{1},\ldots,\mathcal{C}^{\prime}_{n} is a collection of AECs (not necessarily simple ones) containing all states in AECs. Then we consider ASECs 𝒞1,,𝒞n\mathcal{C}_{1},\ldots,\mathcal{C}_{n} such that 𝒞i\mathcal{C}_{i} is contained in 𝒞i\mathcal{C}^{\prime}_{i}.

The definition of the reward machine in Section 4.2 and the extension in Section 5 do not need to be changed. Next, we argue the following:

1.

    This collection can be obtained efficiently (in time polynomial in the size of the MDP and DRA).

2.

    Lemma 10 and hence the correctness result (Theorem 9) still hold.

For 1., it is well-known that a collection of maximal AECs (covering all states in AECs) can be found efficiently using graph algorithms [2, Alg. 3.1], [15, 11] and [5, Alg. 47 and Lemma 10.125]. Subsequently, Lemma 20 can be used to obtain an ASEC contained in each of them. In particular, note that the proof of Lemma 20 immediately gives rise to an efficient algorithm: briefly, we iteratively remove actions and states whilst querying reachability properties, as sketched below.
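A minimal Python sketch of this pruning procedure (with a successor function support(s, a) and the set of accepting states as hypothetical inputs) could look as follows.

```python
def extract_asec(T, Act, support, accepting):
    """Shrink an accepting EC (T, Act) to an accepting *simple* EC, following the
    constructive proof of Lemma 20: while some state has several enabled actions,
    keep one from which an accepting state remains reachable and prune everything
    that becomes unreachable."""
    T, Act = set(T), {s: set(Act[s]) for s in T}

    def reachable_from(s, act):
        seen, stack = {s}, [s]
        while stack:
            t = stack.pop()
            for a in act[t]:
                for t2 in support(t, a):
                    if t2 in T and t2 not in seen:
                        seen.add(t2)
                        stack.append(t2)
        return seen

    while True:
        s = next((t for t in T if len(Act[t]) > 1), None)
        if s is None:
            return T, Act                       # unique action everywhere: an ASEC
        for a in Act[s]:
            trial = {**Act, s: {a}}             # tentatively keep only action a at s
            reach = reachable_from(s, trial)
            if reach & accepting:               # an accepting state stays reachable
                T = reach                       # prune unreachable states ...
                Act = {t: trial[t] for t in T}  # ... and their enabled actions
                break
        else:
            raise ValueError("input is not an accepting end component")
```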

For 2., the first part of Lemma 10 clearly still holds. For the second part, we modify the policy π\pi as follows: once π\pi enters a maximal accepting end component 𝒞i\mathcal{C}^{\prime}_{i}, we select an action on the shortest path to the respective ASEC 𝒞i\mathcal{C}_{i} inside 𝒞i\mathcal{C}^{\prime}_{i}; once we enter one of the 𝒞i\mathcal{C}_{i}, we follow the actions specified by the ASEC as before. Observe that the probability that an AEC is entered under π\pi is the same as the probability that one of the 𝒞i\mathcal{C}_{i} is entered under the modified policy. The lemma, and hence Theorem 9, follow.

Appendix C Supplementary Materials for Section 5

See Lemma 14.

Proof.
1.

For a run ρ\rho, let EρE_{\rho} be the set of transitions encountered in the product MDP. Note that 𝒥avg(ρ)=1\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\rho)=1 only if ρ\rho^{\otimes} enters some 𝒞iEρ\mathcal{C}^{E_{\rho}}_{i} and never leaves it. (ρ\rho^{\otimes} might have entered other 𝒞jE\mathcal{C}^{E}_{j}’s earlier for EEρE\subseteq E_{\rho}.)

With probability 1, EρE_{\rho} contains all the transitions present in 𝒞iEρ\mathcal{C}^{E_{\rho}}_{i} in the actual MDP. (Note that possible transitions outside of 𝒞iEρ\mathcal{C}^{E_{\rho}}_{i} might be missing from EρE_{\rho}.) In particular, with probability 1, 𝒞iEρ\mathcal{C}^{E_{\rho}}_{i} is also an ASEC for the true, unknown MDP and ρ\rho^{\otimes} is accepted by the product MDP 𝒜\mathcal{M}\otimes\mathcal{A}. Consequently, using Eq. 3 again,

    𝒥avg(π)\displaystyle\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi) =ρ𝒟π[𝒥avg(ρ)=1]ρ𝒟π[ρ accepted by 𝒜]=𝒥𝒜(π)\displaystyle=\mathbb{P}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}[\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\rho)=1]\leq\mathbb{P}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}[\rho^{\otimes}\text{ accepted by }\mathcal{M}\otimes\mathcal{A}]=\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi)
2.

Let π\pi be arbitrary. We modify π\pi to π\pi^{\prime} as follows: until reaching an ASEC 𝒞=(T,Act)\mathcal{C}=(T,\operatorname{Act}) w.r.t. the true, unknown set of transitions EE^{*}, follow π\pi; henceforth, select action Act(st,qt)E(st,qt)\operatorname{Act}^{E^{*}}_{(s_{t},q_{t})}(s_{t},q_{t}). (Note that the modified policy depends on the true, unknown EE^{*}; we only claim the existence of such a policy.)

We claim that whenever ρ𝒟π\rho\sim\mathcal{D}_{\pi^{\prime}}^{\mathcal{M}} follows the modified policy π\pi^{\prime} and ρ\rho^{\otimes} reaches some ASEC in the true product MDP, then 𝒥avg(ρ)=1\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\rho)=1.

    To see this, suppose ρ𝒟π\rho\sim\mathcal{D}_{\pi^{\prime}}^{\mathcal{M}} is such that for some minimal t0t_{0}\in\mathbb{N}, (st0,qt0)T1ETnE(s_{t_{0}},q_{t_{0}})\in T_{1}^{E^{*}}\cup\cdots\cup T_{n}^{E^{*}}. Let 𝒞=(T,Act):=𝒞(st0,qt0)E\mathcal{C}=(T,\operatorname{Act}):=\mathcal{C}^{E^{*}}_{(s_{t_{0}},q_{t_{0}})}.

Define EtE_{t} to be the set of transitions encountered up to step tt\in\mathbb{N}, i.e. Et:={((sk,qk),ak,(sk+1,qk+1))0k<t}E_{t}:=\{((s_{k},q_{k}),a_{k},(s_{k+1},q_{k+1}))\mid 0\leq k<t\}. Then almost surely, for some minimal tt0t\geq{t_{0}}, EtE_{t} contains all transitions in 𝒞\mathcal{C} and no further transitions will be encountered, i.e. for all ttt^{\prime}\geq t, Et=EtE_{t^{\prime}}=E_{t}. Define E¯:=Et\overline{E}:=E_{t}. Note that for all ((s,q),a,(s,q))E¯((s,q),a,(s^{\prime},q^{\prime}))\in\overline{E} such that (s,q)T(s,q)\in T, we have Act(s,q)={a}\operatorname{Act}(s,q)=\{a\}. (This is because upon entering the ASEC 𝒞\mathcal{C} we immediately switch to following the action dictated by Act\operatorname{Act}. Thus, we avoid “accidentally” discovering other ASECs w.r.t. the partial knowledge of the product MDP’s graph, which might otherwise force us to perform actions leaving 𝒞\mathcal{C}.) Consequently, there cannot be another ASEC 𝒞=(T,Act)\mathcal{C}^{\prime}=(T^{\prime},\operatorname{Act}^{\prime}) w.r.t. E¯\overline{E} overlapping with 𝒞\mathcal{C}, i.e. with TTT\cap T^{\prime}\neq\emptyset. Therefore, for all (s,q)𝒞(s,q)\in\mathcal{C}, Act(s,q)E¯=Act\operatorname{Act}_{(s,q)}^{\overline{E}}=\operatorname{Act}, and consequently 𝒥avg(ρ)=1\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\rho)=1.

    Thus,

    ρ𝒟π[ρ reaches some ASEC in true product MDP]𝔼ρ𝒟π[𝒥avg(ρ)]=𝒥avg(π)\displaystyle\mathbb{P}_{\rho\sim\mathcal{D}_{\pi^{\prime}}^{\mathcal{M}}}[\rho^{\otimes}\text{ reaches some ASEC in true product MDP}]\leq\mathbb{E}_{\rho\sim\mathcal{D}_{\pi^{\prime}}^{\mathcal{M}}}[\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\rho)]=\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi^{\prime})

    Consequently,

    𝒥𝒜(π)\displaystyle\mathcal{J}^{\mathcal{M}}_{\mathcal{A}}(\pi) =ρ𝒟π[ρ is accepted by 𝒜]\displaystyle=\mathbb{P}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}[\rho^{\otimes}\text{ is accepted by }\mathcal{M}\otimes\mathcal{A}]
    ρ𝒟π[ρ reaches some ASEC in true product MDP]\displaystyle\leq\mathbb{P}_{\rho\sim\mathcal{D}_{\pi}^{\mathcal{M}}}[\rho^{\otimes}\text{ reaches some ASEC in true product MDP}]
    =ρ𝒟π[ρ reaches some ASEC in true product MDP]𝒥avg(π)\displaystyle=\mathbb{P}_{\rho\sim\mathcal{D}_{\pi^{\prime}}^{\mathcal{M}}}[\rho^{\otimes}\text{ reaches some ASEC in true product MDP}]\leq\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi^{\prime})

In the penultimate step we have exploited that π\pi and π\pi^{\prime} agree until reaching some ASEC in the true product MDP.∎

Appendix D Supplementary Materials for Section 6

Notation: let Π\Pi be the set of all memoryless policies and Π\Pi^{*} the set of all limit-average optimal memoryless policies. Moreover, let w:=𝒥avg(π)w^{*}:=\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi^{*}) be the limit-average reward of any optimal πΠ\pi^{*}\in\Pi^{*}.

Lemma 15 is proven completely analogously to the following (where K=K=\mathbb{N}):

Lemma 22.

Suppose γk1\gamma_{k}\nearrow 1, ϵk0\epsilon_{k}\searrow 0 and each πk\pi_{k} is a memoryless policy satisfying 𝒥γk(πk)𝒥γk(π)ϵk\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\gamma_{k}}}(\pi_{k})\geq\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\gamma_{k}}}(\pi)-\epsilon_{k} for all πΠ\pi\in\Pi. Then there exists k0k_{0} such that for all kk0k\geq k_{0}, πk\pi_{k} is limit average optimal.

Proof.

We define Δ:=minπΠΠ(w𝒥avg(π))>0\Delta:=\min_{\pi\in\Pi\setminus\Pi^{*}}\left(w^{*}-\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi)\right)>0 (if Π=Π\Pi=\Pi^{*} there is nothing to show). Recall (see e.g. [22, Sec. 8.1]) that for any policy πΠ\pi\in\Pi,

limγ1(1γ)𝒥γ(π)=𝒥avg(π)\displaystyle\lim_{\gamma\nearrow 1}(1-\gamma)\cdot\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\gamma}}(\pi)=\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi) (5)

Since Π\Pi is finite, due to Eq. 5 there exists γ0\gamma_{0} such that

|𝒥avg(π)(1γ)𝒥γ(π)|Δ4\displaystyle|\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi)-(1-\gamma)\cdot\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\gamma}}(\pi)|\leq\frac{\Delta}{4} (6)

for all πΠ\pi\in\Pi and γ[γ0,1)\gamma\in[\gamma_{0},1). Let π\pi^{*} be a memoryless Blackwell optimal policy (which exists due to [6, 18]). Note that

w\displaystyle w^{*} =𝒥avg(π)\displaystyle=\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi^{*}) (7)

and there exists γ¯[0,1)\overline{\gamma}\in[0,1) such that

𝒥γ(π)𝒥γ(π)\displaystyle\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\gamma}}(\pi^{*})\geq\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\gamma}}(\pi) (8)

for all γ[γ¯,1)\gamma\in[\overline{\gamma},1) and πΠ\pi\in\Pi. Moreover, there clearly exists k0k_{0} such that ϵkΔ/4\epsilon_{k}\leq\Delta/4 and γkγ0,γ¯\gamma_{k}\geq\gamma_{0},\overline{\gamma} for all kk0k\geq k_{0}.

Therefore, for any kk0k\geq k_{0},

|𝒥avg(πk)w|\displaystyle|\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\textrm{avg}}}(\pi_{k})-w^{*}| (1γk)|𝒥γk(πk)𝒥γk(π)|+Δ2\displaystyle\leq(1-\gamma_{k})\cdot\left|\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\gamma_{k}}}(\pi_{k})-\mathcal{J}^{\mathcal{M}}_{\mathcal{R}^{\gamma_{k}}}(\pi^{*})\right|+\frac{\Delta}{2} Eqs. 6 and 7
(1γk)ϵk+Δ2\displaystyle\leq(1-\gamma_{k})\cdot\epsilon_{k}+\frac{\Delta}{2} premise and Eq. 8
34Δ\displaystyle\leq\frac{3}{4}\cdot\Delta since ϵkΔ4\epsilon_{k}\leq\frac{\Delta}{4}

Consequently, by definition of Δ\Delta, πkΠ\pi_{k}\in\Pi^{*}. ∎
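As a small numerical illustration of Eq. (5) (purely for intuition, not part of the proof), consider a deterministic two-state cycle with rewards 0 and 1: its limit-average reward is 1/2, and (1 − γ) · 𝒥γ indeed approaches 1/2 as γ ↗ 1.

```python
import numpy as np

# Deterministic cycle 0 -> 1 -> 0 with state rewards r(0) = 0, r(1) = 1.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
r = np.array([0.0, 1.0])

for gamma in (0.9, 0.99, 0.999):
    # Discounted value: V = r + gamma * P V  <=>  V = (I - gamma * P)^{-1} r.
    V = np.linalg.solve(np.eye(2) - gamma * P, r)
    # (1 - gamma) * V[0] = gamma / (1 + gamma)  ->  1/2, the limit-average reward.
    print(gamma, (1 - gamma) * V[0])
```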

NeurIPS Paper Checklist

1. Claims

Question: Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope?

Answer: [Yes]

Justification: The main results mentioned in the abstract and introduction are Propositions 4, 12, 16 and 18. They accurately reflect the paper’s contributions and scope.

Guidelines:

    • The answer NA means that the abstract and introduction do not include the claims made in the paper.

    • The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.

    • The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.

    • It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

2. Limitations

Question: Does the paper discuss the limitations of the work performed by the authors?

Answer: [Yes]

Justification: Limitations are discussed in Section 7.

Guidelines:

    • The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.

    • The authors are encouraged to create a separate "Limitations" section in their paper.

    • The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.

    • The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.

    • The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.

    • The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.

    • If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.

    • While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren’t acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

3. Theory Assumptions and Proofs

Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?

Answer: [Yes]

Justification: Full proofs are presented in the appendices and results are cross-referenced. At the beginning of Section 4 we assume knowledge of the support of the MDP’s probability transition function for presentational purposes. This assumption is fully removed in Section 5.

Guidelines:

    • The answer NA means that the paper does not include theoretical results.

    • All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.

    • All assumptions should be clearly stated or referenced in the statement of any theorems.

    • The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.

    • Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.

    • Theorems and Lemmas that the proof relies upon should be properly referenced.

4. Experimental Result Reproducibility

Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?

Answer: [N/A]

Justification: The paper does not include experiments.

Guidelines:

    • The answer NA means that the paper does not include experiments.

    • If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.

    • If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.

• Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.

    • While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example

(a)

        If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.

(b)

        If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.

(c)

        If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).

(d)

        We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.

5. Open access to data and code

Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?

Answer: [N/A]

Justification: The paper does not include experiments.

Guidelines:

• The answer NA means that the paper does not include experiments requiring code.

    • Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.

    • While we encourage the release of code and data, we understand that this might not be possible, so “No” is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).

    • The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.

    • The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.

    • The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.

    • At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).

    • Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

6. Experimental Setting/Details

Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?

Answer: [N/A]

Justification: The paper does not include experiments.

Guidelines:

    • The answer NA means that the paper does not include experiments.

    • The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.

    • The full details can be provided either with the code, in appendix, or as supplemental material.

7. Experiment Statistical Significance

Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

Answer: [N/A]

Justification: The paper does not include experiments.

Guidelines:

    • The answer NA means that the paper does not include experiments.

    • The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.

    • The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).

    • The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)

    • The assumptions made should be given (e.g., Normally distributed errors).

    • It should be clear whether the error bar is the standard deviation or the standard error of the mean.

• It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar rather than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.

    • For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).

• If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

8. Experiments Compute Resources

Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?

Answer: [N/A]

Justification: The paper does not include experiments.

Guidelines:

    • The answer NA means that the paper does not include experiments.

    • The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.

    • The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.

    • The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn’t make it into the paper).

9. Code Of Ethics

Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?

Answer: [Yes]

Justification: The research conducted in the paper conforms, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines.

Guidelines:

    • The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.

    • If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.

    • The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

10. Broader Impacts

Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?

Answer: [N/A]

Justification: There is no societal impact of the work performed.

Guidelines:

    • The answer NA means that there is no societal impact of the work performed.

    • If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.

    • Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.

    • The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.

    • The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.

    • If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

11. Safeguards

Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?

Answer: [N/A]

Justification: The paper poses no such risks.

Guidelines:

    • The answer NA means that the paper poses no such risks.

    • Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.

    • Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.

    • We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

12. Licenses for existing assets

Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?

Answer: [N/A]

Justification: The paper does not use existing assets.

Guidelines:

    • The answer NA means that the paper does not use existing assets.

    • The authors should cite the original paper that produced the code package or dataset.

    • The authors should state which version of the asset is used and, if possible, include a URL.

    • The name of the license (e.g., CC-BY 4.0) should be included for each asset.

    • For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.

    • If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.

    • For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.

    • If this information is not available online, the authors are encouraged to reach out to the asset’s creators.

13. New Assets

Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?

Answer: [N/A]

Justification: The paper does not release new assets.

Guidelines:

    • The answer NA means that the paper does not release new assets.

    • Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.

    • The paper should discuss whether and how consent was obtained from people whose asset is used.

    • At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.

14. Crowdsourcing and Research with Human Subjects

Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?

Answer: [N/A]

Justification: The paper does not involve crowdsourcing nor research with human subjects.

Guidelines:

    • The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.

    • Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.

    • According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects

Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?

Answer: [N/A]

Justification: The paper does not involve crowdsourcing nor research with human subjects.

Guidelines:

    • The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.

    • Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.

    • We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.

    • For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.