Reinforcement Learning with LTL and ω-Regular Objectives via Optimality-Preserving Translation to Average Rewards
Abstract
Linear temporal logic (LTL) and, more generally, ω-regular objectives are alternatives to the traditional discount-sum and average-reward objectives in reinforcement learning (RL), offering the advantage of greater comprehensibility and hence explainability. In this work, we study the relationship between these objectives. Our main result is that each RL problem for ω-regular objectives can be reduced to a limit-average reward problem in an optimality-preserving fashion, via (finite-memory) reward machines. Furthermore, we demonstrate the efficacy of this approach by showing that optimal policies for limit-average problems can be found asymptotically by solving a sequence of discount-sum problems approximately. Consequently, we resolve an open problem: optimal policies for LTL and ω-regular objectives can be learned asymptotically.
1 Introduction
Reinforcement learning (RL) is a machine learning paradigm whereby an agent aims to accomplish a task in a generally unknown environment [36]. Traditionally, tasks are specified via a scalar reward signal obtained continuously through interactions with the environment. These rewards are aggregated over entire trajectories either through averaging or by summing the exponentially decayed rewards. However, in many applications, there are no reward signals that can naturally be extracted from the environment. Moreover, reward signals supplied by the user are prone to error in that the chosen low-level rewards often fail to accurately capture high-level objectives. Generally, policies derived from such local, reward-based specifications are hard to understand because it is difficult to express or explain their global intent.
As a remedy, it has been proposed to specify tasks using formulas in Linear Temporal Logic (LTL) [40, 29, 9, 37, 15, 33, 14] or, more generally, ω-regular languages [29]. In this framework, the aim is to maximise the probability of satisfying a logical specification. LTL can precisely express a wide range of high-level behavioural properties such as liveness (infinitely often φ), safety (always φ), stability (eventually always φ), and priority (φ₁ then φ₂ then φ₃).
Motivated by this, a growing body of literature studies learning algorithms for RL with LTL and ω-regular objectives (e.g. [40, 15, 29, 7, 31, 20, 21, 16]). However, to the best of our knowledge, all of these approaches may fail to learn provably optimal policies without prior knowledge of a generally unknown parameter. Moreover, it is known that neither LTL nor (limit) average reward objectives are PAC (probably approximately correct) learnable [3]. Consequently, approximately optimal policies can only possibly be found asymptotically but not in bounded time. (Formally, for some ε > 0 it is impossible to learn ε-approximately optimal policies with high probability in finite time; this is immediate from the definition of PAC learnability.)
In this work, we pursue a different strategy: rather than solving the RL problem directly, we study optimality-preserving translations [3] from ω-regular objectives to more traditional rewards, in particular, limit-average rewards. This method offers a significant advantage: it enables the learning of optimal policies for ω-regular objectives by solving a single more standard problem, for which we can leverage existing off-the-shelf algorithms (e.g. [25, 15, 29]). In this way, all future advances—in both theory and practice—for these much more widely studied problems carry over directly, whilst still enjoying significantly more explainable and comprehensible specifications. It is well known that such a translation from LTL to discounted rewards is impossible [3]. Intuitively, this is because the latter cannot capture infinite-horizon tasks such as reachability or safety [3, 41, 19]. Hence, we instead investigate translations to limit-average rewards in this paper.
Contributions
We study reinforcement learning of ω-regular and LTL objectives in Markov decision processes (MDPs) with unknown probability transitions, translations to limit-average reward objectives, and learning algorithms for the latter. In detail:
1. We prove a negative result (Proposition 4): in general it is not possible to translate ω-regular objectives to limit-average objectives in an optimality-preserving manner if rewards are memoryless (i.e., independent of previously performed actions, sometimes called history-free or Markovian).
2. On the other hand, our main result (Theorem 12) resolves Open Problem 1 in [3]: such an optimality-preserving translation is possible if the reward assignment may use finite memory, as formalised by reward machines [23, 24].
3. To underpin the efficacy of our reduction approach, we provide the first convergence proof (Theorem 16) of an RL algorithm (Algorithm 1) for average rewards. To the best of our knowledge (and as indicated by [13]), this is the first proof without assumptions on the induced Markov chains. In particular, the result applies to multichain MDPs, which our translation generally produces, with unknown probability transitions. Consequently, we also resolve Open Problem 4 of [3]: RL for ω-regular and LTL objectives can be learned in the limit (Theorem 18).
Outline.
We start by reviewing the problem setup in Section 2. Motivated by the impossibility result for simple reward functions, we define reward machines (Section 3). In Section 4 we build intuition for the proof of our main result in Section 5. Thereafter, we demonstrate that RL with limit-average, ω-regular and LTL objectives can be learned asymptotically (Section 6). Finally, we review related work and conclude in Section 7.
2 Background
Recall that a Markov Decision Process (MDP) is a tuple $\mathcal{M} = (S, s_0, A, P)$ where $S$ is a finite set of states, $s_0 \in S$ is the initial state, $A$ is the finite set of actions and $P \colon S \times A \to \mathcal{D}(S)$ is the probability transition function, such that $\sum_{s' \in S} P(s, a)(s') = 1$ for every $s \in S$ and $a \in A$. MDPs may be graphically represented; see e.g. Fig. 1(a). We let $\mathit{Runs}_{\mathrm{fin}}$ and $\mathit{Runs}$ denote the set of finite runs and the set of infinite runs in $\mathcal{M}$, respectively.
A policy $\pi \colon \mathit{Runs}_{\mathrm{fin}} \to \mathcal{D}(A)$ maps finite runs to distributions over actions. We let $\Pi$ denote the set of all such policies. A policy $\pi$ is memoryless if $\pi(\rho) = \pi(\rho')$ for all finite runs $\rho$ and $\rho'$ ending in the same state. For each MDP $\mathcal{M}$ and policy $\pi$, there is a natural induced probability measure $\mathbb{P}_\pi$ on its infinite runs.
The desirability of policies for a given MDP can be expressed as a function $\mathcal{J} \colon \Pi \to \mathbb{R}$. Much of the RL literature focuses on discounted-sum and limit-average reward objectives, which lift a reward function $R \colon S \times A \times S \to \mathbb{R}$ for single transitions to runs as follows:

$\mathcal{J}^{R}_{\gamma}(\pi) = \mathbb{E}_\pi\!\left[\sum_{i=0}^{\infty} \gamma^i\, R(s_i, a_i, s_{i+1})\right]$ and $\mathcal{J}^{R}_{\mathrm{avg}}(\pi) = \liminf_{n \to \infty} \frac{1}{n}\, \mathbb{E}_\pi\!\left[\sum_{i=0}^{n-1} R(s_i, a_i, s_{i+1})\right]$,

where $s_0 a_0 s_1 a_1 \cdots$ denotes the run and $\gamma \in (0, 1)$ is the discount factor.
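As an illustration (not part of the original development), the following minimal Python sketch computes both aggregation schemes on a finite reward prefix; the function names are ours.

```python
# Sketch only: empirical discounted-sum and average of a finite reward prefix.
from typing import Sequence

def discounted_sum(rewards: Sequence[float], gamma: float) -> float:
    """Sum of gamma^i * r_i over the prefix (partial discounted-sum return)."""
    return sum((gamma ** i) * r for i, r in enumerate(rewards))

def average_reward(rewards: Sequence[float]) -> float:
    """Mean reward of the prefix; its limit over growing prefixes is the
    limit-average return of the infinite run."""
    return sum(rewards) / len(rewards) if rewards else 0.0

# A run paying 1 forever: average reward 1, discounted-sum return 1/(1-gamma).
prefix = [1.0] * 1000
assert abs(average_reward(prefix) - 1.0) < 1e-9
assert abs(discounted_sum(prefix, 0.99) - (1 - 0.99 ** 1000) / 0.01) < 1e-6
```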
ω-Regular Objectives.
ω-regular objectives (which subsume LTL objectives) are an alternative to these traditional objectives. Henceforth, we fix an alphabet $\Sigma = 2^{AP}$ and a label function $L \colon S \times A \times S \to \Sigma$ for transitions, where $2^{AP}$ is the power set of a set $AP$ of atomic propositions. Each run $\rho = s_0 a_0 s_1 a_1 \cdots$ induces a sequence of labels $L(\rho) = L(s_0, a_0, s_1)\, L(s_1, a_1, s_2) \cdots$. Thus, for a set $\mathcal{L} \subseteq \Sigma^\omega$ of “desirable” label sequences we can consider the probability of a run's labels being in that set: $\mathcal{J}^{\mathcal{L}}(\pi) = \mathbb{P}_\pi[\{\rho \mid L(\rho) \in \mathcal{L}\}]$.
Example 1.
For instance, an autonomous car may want to “visit a petrol station exactly once” to conserve resources (e.g. time or petrol). Consider the MDP in Fig. 1(a), where one of the states represents a petrol station. We let $AP = \{p\}$ ($p$ for petrol), label each transition into the petrol-station state with $\{p\}$, and label the rest with $\emptyset$. The desirable label sequences are those in which $\{p\}$ occurs exactly once.
In this work, we focus on sets $\mathcal{L}$ which are ω-regular languages. It is well known that ω-regular languages are precisely the languages recognised by Deterministic Rabin Automata (DRA) [26, 28]:
Definition 2.
A DRA is a tuple $\mathcal{A} = (Q, \Sigma, q_0, \delta, \mathit{Acc})$ where $Q$ is a finite state set, $\Sigma$ is the alphabet, $q_0 \in Q$ is the initial state, $\delta \colon Q \times \Sigma \to Q$ is the transition function, and $\mathit{Acc} = \{(B_1, C_1), \ldots, (B_k, C_k)\}$, where $B_i, C_i \subseteq Q$, is the accepting condition. Let $w \in \Sigma^\omega$ be an infinite word and $\mathit{Inf}(w)$ the set of states visited infinitely often by the (unique) run of $\mathcal{A}$ on $w$. We say $w$ is accepted by $\mathcal{A}$ if there exists some $i$ such that the run visits some state in $C_i$ infinitely often while visiting every state in $B_i$ only finitely often, i.e. $\mathit{Inf}(w) \cap C_i \neq \emptyset$ and $\mathit{Inf}(w) \cap B_i = \emptyset$.
Thus, the desirability of a policy $\pi$ is the probability of generating an accepting sequence in the DRA $\mathcal{A}$:

$\mathcal{J}^{\mathcal{A}}(\pi) = \mathbb{P}_\pi\big[\{\rho \mid L(\rho) \text{ is accepted by } \mathcal{A}\}\big]$.  (1)
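As an illustration (not part of the original development), the following minimal sketch shows one possible representation of a DRA and checks Rabin acceptance on an ultimately periodic word prefix·cycle^ω; the representation and the assumption that the transition function is total are ours.

```python
# Sketch only: a deterministic Rabin automaton and acceptance of a lasso word.
from dataclasses import dataclass
from typing import Dict, FrozenSet, List, Set, Tuple

Label = FrozenSet[str]                       # a letter of the alphabet 2^AP

@dataclass
class DRA:
    init: str
    delta: Dict[Tuple[str, Label], str]      # deterministic transitions (assumed total)
    acc: List[Tuple[Set[str], Set[str]]]     # Rabin pairs (B_i, C_i)

    def _run(self, word: List[Label], start: str) -> List[str]:
        states, q = [start], start
        for a in word:
            q = self.delta[(q, a)]
            states.append(q)
        return states

    def accepts_lasso(self, prefix: List[Label], cycle: List[Label]) -> bool:
        # After the prefix, pump the cycle until the state at the start of a
        # cycle repeats; the states seen from then on are exactly the states
        # visited infinitely often.
        q = self._run(prefix, self.init)[-1]
        first_seen, trace = {}, []
        while q not in first_seen:
            first_seen[q] = len(trace)
            states = self._run(cycle, q)
            trace.extend(states[1:])
            q = states[-1]
        inf = set(trace[first_seen[q]:])
        return any(inf & C and not (inf & B) for B, C in self.acc)
```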
Remarks.
The class of ω-regular languages subsumes the languages expressible in Linear Temporal Logic (LTL, see e.g. [5, Ch. 5]), a logical framework in which e.g. reachability (eventually φ, ◇φ), safety (always φ, □φ) and reach-avoid (eventually φ whilst avoiding ψ, ¬ψ U φ) properties can be expressed concisely and intuitively. The specification of our running Example 1 to visit the petrol station exactly once can be expressed as the LTL formula ¬p U (p ∧ X □¬p), where X φ denotes “φ holds at the next step”. Furthermore, our label function $L$, which maps transitions to labels, is more general than other definitions (e.g. [40, 15, 29]), which instead map states to labels. As a result, we are able to articulate properties that involve actions, such as “reach a given state while avoiding a given action”.
Optimality-Preserving Specification Translations.
Rather than solving the problem of synthesising optimal policies for Eq. 1 directly, we are interested in reducing it to more traditional RL problems and applying off-the-shelf RL algorithms to find optimal policies. To achieve this, the reduction needs to be optimality preserving (the following definition makes sense both for reward functions and for reward machines, which are introduced in the subsequent section):
Definition 3 ([3]).
An optimality-preserving specification translation from ω-regular objectives to limit-average rewards is a computable function mapping each tuple $(\mathcal{M} \setminus P, L, \mathcal{A})$, consisting of an MDP $\mathcal{M}$ without its transition function $P$, a label function $L$ and a DRA $\mathcal{A}$, to a reward assignment such that policies maximising the limit-average reward also maximise $\mathcal{J}^{\mathcal{A}}$, for every MDP $\mathcal{M}$, label function $L$ and DRA $\mathcal{A}$.
We stress that since the probability transition function is generally not known, the specification translation may not depend on it.
3 Negative Result and Reward Machines
Reward functions emit rewards purely based on the transition being taken, without being able to take the past into account, whilst DRAs have finite memory. Therefore, there cannot generally be optimality-preserving translations from ω-regular objectives to limit-average rewards provided by reward functions:
Proposition 4.
There is an MDP $\mathcal{M}$ and an ω-regular language $\mathcal{L}$ for which it is impossible to find a reward function $R$ such that every $\mathcal{J}^{R}_{\mathrm{avg}}$-optimal policy of $\mathcal{M}$ also maximises the probability of membership in $\mathcal{L}$.
Remarkably, this rules out optimality-preserving specification translations even if transition probabilities are fully known. (In Appendix A we show another negative result, Proposition 19: even for a strict subset of ω-regular specifications such translations are impossible.)
Proof.
Consider the deterministic MDP in Fig. 1(a) and the objective of Example 1, “visit the petrol station exactly once”, expressed by the DRA in Fig. 1(b). Assume towards contradiction that there exists a reward function $R$ such that policies optimal w.r.t. $\mathcal{J}^{R}_{\mathrm{avg}}$ maximise acceptance by the DRA. Note that every policy maximising acceptance by the DRA induces a run which visits the petrol station exactly once and from some point on repeats the same transition forever; its limit-average reward is therefore the reward of that repeated transition. Now, consider the policy $\pi'$ which always selects, with probability 1, the action avoiding the petrol station. The run induced by $\pi'$ eventually repeats the very same transition, so $\pi'$ achieves the same limit-average reward and is hence also $\mathcal{J}^{R}_{\mathrm{avg}}$-optimal. This is a contradiction, since $\pi'$ never visits the petrol station and therefore does not maximise acceptance by the DRA. ∎
Since simple reward functions lack the expressiveness to capture ω-regular objectives, we employ a generalisation, reward machines [23, 24], whereby rewards may also depend on an internal state:
Definition 5.
A reward machine (RM) is a tuple $\mathcal{R} = (U, u_0, R, \delta_u)$ where $U$ is a finite set of states, $u_0 \in U$ is the initial state, $R \colon U \times S \times A \times S \to \mathbb{R}$ is the reward function, and $\delta_u \colon U \times S \times A \times S \to U$ is the update function.
Intuitively, an RM utilises the current transition to update its state through $\delta_u$ and assigns rewards through $R$. For example, Fig. 2(a) depicts a reward machine for the MDP of Fig. 1(a), whose states count the number of visits to the petrol station (0 times, once, more than once).
Let $\rho = s_0 a_0 s_1 a_1 \cdots$ be an infinite run. Since $\delta_u$ is deterministic, $\rho$ induces a sequence of states $u_0 u_1 u_2 \cdots$ in $U$, where $u_0$ is the initial state and $u_{i+1} = \delta_u(u_i, s_i, a_i, s_{i+1})$. The limit-average reward of a policy $\pi$ is defined as:

$\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\pi) = \liminf_{n \to \infty} \frac{1}{n}\, \mathbb{E}_\pi\!\left[\sum_{i=0}^{n-1} R(u_i, s_i, a_i, s_{i+1})\right]$.
It can be seen that limit-average optimal policies for the MDP in Fig. 1(a) and the RM in Fig. 2(a) eventually select the action leading to the petrol station exactly once, thereby achieving a limit-average reward of 1.
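As an illustration (not part of the original development), the following sketch shows a reward-machine interface in the spirit of Definition 5, together with a hypothetical instance mimicking Fig. 2(a): the memory counts petrol-station visits and reward 1 is paid while exactly one visit has occurred. The concrete state names and rewards are our guesses, not the figure's.

```python
# Sketch only: a reward machine over transitions (s, a, s') of the MDP.
from typing import Callable, Tuple

Transition = Tuple[str, str, str]                   # (state, action, next_state)

class RewardMachine:
    def __init__(self, init: str,
                 update: Callable[[str, Transition], str],
                 reward: Callable[[str, Transition], float]):
        self.u, self.update_fn, self.reward_fn = init, update, reward

    def step(self, t: Transition) -> float:
        r = self.reward_fn(self.u, t)               # reward depends on memory and transition
        self.u = self.update_fn(self.u, t)          # deterministic memory update
        return r

# Hypothetical instance for the running example ("petrol" is the station state).
def update(u: str, t: Transition) -> str:
    if t[2] != "petrol":
        return u
    return {"zero": "one", "one": "many"}.get(u, "many")

def reward(u: str, t: Transition) -> float:
    # 1 iff, after this transition, the station has been visited exactly once.
    return 1.0 if update(u, t) == "one" else 0.0

rm = RewardMachine("zero", update, reward)
```

Running rm.step on each transition of a run and averaging the returned rewards approximates the limit-average objective defined above.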
In the following two sections, we present a general translation from ω-regular languages to limit-average reward machines, and we show that our translation is optimality-preserving (Theorem 12).
Remarks.
Our definition of RM is more general than the one presented in [23, 24], where the reward and update functions depend only on the label of the current transition rather than on the transition itself. Note that our more general formulation can be reduced to the original one by expanding the state space of the RM to include the previous state and utilising the inverse label function. It is worth pointing out that Theorem 12 does not contradict a negative result in [3] regarding the non-existence of an optimality-preserving translation from LTL constraints to abstract limit-average reward machines (where only the label of each transition is provided to $R$ and $\delta_u$).
4 Warm-Up: Transitions with Positive Probability are Known
To help the reader gain intuition about our construction, we first explore the situation where the support of the MDP's transition function is known. Crucially, we do not assume that the magnitudes of these (non-zero) probabilities are known. Subsequently, in Section 5, we fully eliminate this assumption.
This assumption allows us to draw connections between our problem and a familiar scenario in probabilistic model checking [5, Ch. 10], where the acceptance problem for ω-regular objectives can be transformed into a reachability problem. Intuitively, our reward machine monitors the state of the DRA and provides reward 1 if the MDP and the DRA are in certain “good” states (and 0 otherwise).
For the rest of this section, we fix an MDP $\mathcal{M} \setminus P$ without its transition function, a set of possible transitions $T \subseteq S \times A \times S$, a label function $L$ and a DRA $\mathcal{A}$. Our aim is to find a reward machine $\mathcal{R}$ such that for every transition function $P$ compatible with $T$ (formally: $\mathrm{supp}(P) = T$), optimal policies for limit-average rewards are also optimal for the acceptance probability of the DRA $\mathcal{A}$.
4.1 Product MDP and End Components
First, we form the product MDP $\mathcal{M} \times \mathcal{A}$ (e.g. [40, 15]), which synchronises the dynamics of the MDP $\mathcal{M}$ with the DRA $\mathcal{A}$. Formally, $\mathcal{M} \times \mathcal{A} = (S \times Q, (s_0, q_0), A, P^\times)$ where $S \times Q$ is the set of states, $A$ is the set of actions and $(s_0, q_0)$ is the initial state. The transition probability function satisfies $P^\times((s, q), a)\big((s', q')\big) = P(s, a)(s')$ given that $q' = \delta(q, L(s, a, s'))$, and is $0$ otherwise. The accepting condition is $\mathit{Acc}^\times = \{(B^\times_1, C^\times_1), \ldots, (B^\times_k, C^\times_k)\}$ where $B^\times_i = S \times B_i$ and $C^\times_i = S \times C_i$. A run $\rho^\times$ is accepted by $\mathcal{M} \times \mathcal{A}$ if there exists some $i$ such that $\mathit{Inf}(\rho^\times) \cap C^\times_i \neq \emptyset$ and $\mathit{Inf}(\rho^\times) \cap B^\times_i = \emptyset$, where $\mathit{Inf}(\rho^\times)$ is the set of states in the product MDP visited infinitely often by $\rho^\times$.
Note that product MDPs have characteristics of both MDPs and DRAs which neither possesses in isolation: transitions are generally probabilistic and there is a notion of acceptance of runs. For example, the product MDP for Fig. 1 is shown in Fig. 2(b). Due to the deterministic nature of the DRA $\mathcal{A}$, every run $\rho$ in $\mathcal{M}$ gives rise to a unique run $\rho^\times$ in $\mathcal{M} \times \mathcal{A}$. Crucially, for every policy $\pi$,

$\mathcal{J}^{\mathcal{A}}(\pi) = \mathbb{P}_\pi\big[\{\rho \mid \rho^\times \text{ is accepted by } \mathcal{M} \times \mathcal{A}\}\big]$.  (2)
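As an illustration (not part of the original development), the following sketch builds the support (the edges with positive probability) of the product MDP from the MDP's possible transitions, the label function and the DRA; the representation is ours and the DRA transition function is assumed to be total.

```python
# Sketch only: edges of the product MDP, i.e. ((s, q), a, (s', q')) with
# q' = delta(q, L(s, a, s')), for each possible MDP transition (s, a, s').
from typing import Dict, FrozenSet, Set, Tuple

Label = FrozenSet[str]
MdpEdge = Tuple[str, str, str]
ProdState = Tuple[str, str]

def product_support(
    mdp_support: Set[MdpEdge],                       # MDP transitions with positive probability
    label: Dict[MdpEdge, Label],                     # label function L
    dra_delta: Dict[Tuple[str, Label], str],         # DRA transition function (assumed total)
) -> Set[Tuple[ProdState, str, ProdState]]:
    dra_states = {q for (q, _) in dra_delta}
    edges = set()
    for (s, a, s2) in mdp_support:
        for q in dra_states:
            q2 = dra_delta[(q, label[(s, a, s2)])]
            edges.add(((s, q), a, (s2, q2)))
    return edges
```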
We make use of a well-known almost-sure characterisation of accepting runs via the notion of accepting end components:
Definition 6.
An end component (EC) of $\mathcal{M} \times \mathcal{A}$ is a pair $(S', \mathit{sp})$ where $S' \subseteq S \times Q$ and $\mathit{sp} \colon S' \to 2^A \setminus \{\emptyset\}$ satisfies the following conditions:

1. For every $\bar{s} \in S'$ and $a \in \mathit{sp}(\bar{s})$, we have $\mathrm{supp}(P^\times(\bar{s}, a)) \subseteq S'$, and

2. the graph $(S', \to_{\mathit{sp}})$ is strongly connected, where $\bar{s} \to_{\mathit{sp}} \bar{s}'$ iff $P^\times(\bar{s}, a)(\bar{s}') > 0$ for some $a \in \mathit{sp}(\bar{s})$.

$(S', \mathit{sp})$ is an accepting EC (AEC) if $S' \cap C^\times_i \neq \emptyset$ and $S' \cap B^\times_i = \emptyset$ for some $i$.
Intuitively, an EC is a strongly connected sub-MDP. For instance, the product MDP in Fig. 2(b) has five end components, exactly one of which is accepting.
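As an illustration (not part of the original development), the two conditions of Definition 6 translate directly into code; the following sketch checks whether a candidate pair is an end component, assuming an edge-set representation of the product MDP's support.

```python
# Sketch only: check the two end-component conditions for a candidate (S', sp).
from typing import Dict, Set, Tuple

ProdEdge = Tuple[str, str, str]                      # (state, action, next_state)

def is_end_component(states: Set[str],
                     sp: Dict[str, Set[str]],        # allowed actions per state
                     support: Set[ProdEdge]) -> bool:
    if not states:
        return False
    chosen = {(s, a, s2) for (s, a, s2) in support
              if s in states and a in sp.get(s, set())}
    # Condition 1: taking an allowed action never leads outside of `states`.
    closed = all(s2 in states for (_, _, s2) in chosen)
    # Condition 2: the induced graph is strongly connected. It suffices that a
    # pivot reaches every state and every state reaches the pivot.
    def reach(src: str, forward: bool) -> Set[str]:
        seen, stack = {src}, [src]
        while stack:
            u = stack.pop()
            for (x, _, y) in chosen:
                v = y if forward and x == u else (x if not forward and y == u else None)
                if v is not None and v in states and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    pivot = next(iter(states))
    return closed and reach(pivot, True) >= states and reach(pivot, False) >= states
```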
It turns out that, almost surely, a run is accepted iff it enters an accepting end component and never leaves it [2]. Therefore, a natural idea for a reward machine is to use its state to keep track of the state the DRA is in and give reward 1 to transitions taken while the product MDP is in some AEC (and 0 otherwise). Unfortunately, this approach falls short since an AEC may contain non-accepting ECs, thus assigning maximal reward to sub-optimal policies. (To illustrate this point, consider a product MDP whose objective is to visit a certain state infinitely often: the whole product may form an AEC, yet it can contain a smaller, non-accepting EC which avoids that state.) As a remedy, we introduce a notion of minimal AEC, and ensure that only runs eventually committing to one such minimal AEC obtain a limit-average reward of 1.
Definition 7.
An AEC $(S', \mathit{sp})$ is an accepting simple EC (ASEC) if $|\mathit{sp}(\bar{s})| = 1$ for every $\bar{s} \in S'$.
Let $E_1, \ldots, E_m$ be a collection of ASECs covering all states in ASECs, i.e. if $\bar{s}$ is in some ASEC then $\bar{s}$ is in some $E_i$. In particular, (up to) one ASEC per state, i.e. $m \le |S \times Q|$, is sufficient.
We can prove that every AEC contains an ASEC (see Lemma 20 in Appendix B). Consequently,
Lemma 8.
Almost surely, if a run $\rho$ is accepted by $\mathcal{M} \times \mathcal{A}$ then $\rho$ reaches a state in some ASEC of $\mathcal{M} \times \mathcal{A}$.
4.2 Reward Machine and Correctness
Next, to ensure that runs eventually commit to one such ASEC we introduce the following notational shorthand: for a product state $\bar{s}$ contained in some ASEC, let $E_{\bar{s}}$ be the $E_i$ with minimal index $i$ containing $\bar{s}$.
Intuitively, we give a reward of 1 if the current product state is in one of the $E_i$. However, once an action is performed which deviates from $E_{\bar{s}}$, no rewards are given thereafter, thus resulting in a limit-average reward of 0.
A state of the reward machine has the form $q \in Q$, keeping track of the state of the DRA, or $\bot$, a sink state signifying that, in a product state belonging to some $E_i$, we have previously deviated from $E_{\bar{s}}$.
Finally, we are ready to formally define the reward machine exhibiting our specification translation as $\mathcal{R} = (Q \cup \{\bot\}, q_0, R, \delta_u)$, where $\delta_u$ follows the DRA's transition function on the label of the current transition, except that it moves to (and remains in) the sink $\bot$ as soon as an action deviating from the committed ASEC $E_{\bar{s}}$ is taken, and $R$ assigns reward 1 precisely when the sink has not been reached and the current product state lies in one of the $E_i$ (and 0 otherwise).
For our running example, this construction essentially yields the reward machine in Fig. 2(a) (with some inconsequential modifications cf. Fig. 4 in Appendix B).
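As an illustration (not part of the original development), the following sketch implements the reward machine just described; the helper map asec_action, giving the unique action of the committed ASEC at each covered product state, is an assumed input.

```python
# Sketch only: reward machine of Section 4. Memory = DRA state or a sink.
from typing import Dict, FrozenSet, Tuple

Label = FrozenSet[str]
MdpEdge = Tuple[str, str, str]
SINK = "BOT"                                        # the sink state (written as ⊥ above)

def make_reward_machine(
    dra_delta: Dict[Tuple[str, Label], str],        # DRA transition function
    label: Dict[MdpEdge, Label],                    # label function L
    asec_action: Dict[Tuple[str, str], str],        # (s, q) -> unique action of E_(s,q)
):
    def update(u: str, t: MdpEdge) -> str:
        s, a, _ = t
        if u == SINK:
            return SINK
        if (s, u) in asec_action and a != asec_action[(s, u)]:
            return SINK                             # deviated from the committed ASEC
        return dra_delta[(u, label[t])]

    def reward(u: str, t: MdpEdge) -> float:
        s, a, _ = t
        committed = u != SINK and (s, u) in asec_action and a == asec_action[(s, u)]
        return 1.0 if committed else 0.0

    return update, reward
```

These two functions can be plugged into the RewardMachine interface sketched in Section 3.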
Theorem 9.
For all transition probability functions $P$ with support $T$, policies maximising the limit-average reward w.r.t. $\mathcal{R}$ also maximise the acceptance probability of the DRA $\mathcal{A}$.
This result follows immediately from the following (the full proof is presented in Appendix B):
Lemma 10.
Let $P$ be a probability transition function with support $T$ and consider the reward machine $\mathcal{R}$ defined above.

1. For every policy $\pi$, $\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\pi) \le \mathcal{J}^{\mathcal{A}}(\pi)$.

2. For every policy $\pi$, there exists some policy $\pi'$ satisfying $\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\pi') \ge \mathcal{J}^{\mathcal{A}}(\pi)$.
Proof sketch.
1. By construction, every run receiving a limit-average reward of 1 must have entered some ASEC $E_i$ and never left it. Furthermore, almost surely all states of this ASEC are then visited infinitely often, and the run is accepted by definition of accepting ECs.

2. By Lemma 8, almost surely, a run is only accepted if it enters some $E_i$. We set $\pi'$ to be the policy agreeing with $\pi$ until reaching one of the $E_i$ and henceforth following the action prescribed by $E_{(s_t, q_t)}$, where $q_t$ is the state of the DRA at step $t$, yielding a guaranteed limit-average reward of 1 for the run by construction. ∎
Remark 11.
Our construction considers a collection of ASECs covering all states in ASECs. Whilst it does not necessarily require listing all possible ASECs but only (up to) one ASEC per state, it is unclear whether this can be obtained in polynomial time. In Section B.1, we present an alternative (yet more complicated) construction which has polynomial time complexity.
5 Main Result
In this section, we generalise the approach of the preceding section to prove our main result:
Theorem 12.
There exists an optimality-preserving translation from ω-regular languages to limit-average reward machines.
Again, we fix an MDP $\mathcal{M} \setminus P$ without its transition function, a label function $L$ and a DRA $\mathcal{A}$. Note that the ASECs of a product MDP are uniquely determined by the non-zero probability transitions. Thus, for each set of transitions $T$, we let $E^T_1, \ldots, E^T_m$ denote a collection of ASECs covering all states in ASECs w.r.t. the MDPs in which $T$ is the set of non-zero probability transitions. (To achieve the same number of ASECs for each $T$ we can add duplicates; if there are no ASECs we can take the collection to be empty.) Then, for each set $T$ and state $\bar{s}$, we let $E^T_{\bar{s}}$ be the ASEC that contains $\bar{s}$ and whose index is minimal.
Our reward machine extends the ideas from the preceding section. Importantly, we keep track of the set of transitions taken so far and assign rewards according to our current knowledge about the graph of the product MDP. Therefore, we propose employing states of the form $(q, f, T)$, where $q$ keeps track of the state of the DRA, $f$ is a status flag and $T$ memorises the transitions in the product MDP encountered thus far.
Intuitively, we set the flag to $\bot$ if the current product state $\bar{s}$ is in one of the $E^T_i$ and the chosen action deviates from $E^T_{\bar{s}}$. We can recover from $\bot$ by discovering new transitions. Besides, we give reward 1 if the flag is $\top$ and the current product state is in one of the $E^T_i$ (and 0 otherwise).
The status flag is required since discovering new transitions will change the structure of (accepting simple) end components. Hence, differently from the preceding section, it is not sufficient to have a single sink state.
The initial state of our reward machine is $(q_0, \top, \emptyset)$, and the update and reward functions formalise the intuition above: each step adds the observed product transition to the recorded set $T$, updates the DRA component via $\delta$, sets the flag to $\bot$ upon deviating from $E^T_{\bar{s}}$ and resets it to $\top$ whenever a previously unseen transition is discovered; reward 1 is given iff the flag is $\top$ and the current product state lies in one of the $E^T_i$.
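As an illustration (not part of the original development), the following sketch conveys the shape of this reward machine; the subroutine committed_action, which recomputes the ASECs of the partially known product MDP and returns the unique action of the minimal-index ASEC containing a given state (or None), is assumed rather than implemented here.

```python
# Sketch only: reward machine of Section 5. Memory = (DRA state, flag, known edges).
from typing import Callable, Dict, FrozenSet, Optional, Tuple

Label = FrozenSet[str]
MdpEdge = Tuple[str, str, str]
ProdEdge = Tuple[Tuple[str, str], str, Tuple[str, str]]
RmState = Tuple[str, bool, FrozenSet[ProdEdge]]

def make_adaptive_reward_machine(
    dra_delta: Dict[Tuple[str, Label], str],
    label: Dict[MdpEdge, Label],
    committed_action: Callable[[FrozenSet[ProdEdge], Tuple[str, str]], Optional[str]],
):
    def update(u: RmState, t: MdpEdge) -> RmState:
        q, flag, known = u
        s, a, s2 = t
        q2 = dra_delta[(q, label[t])]
        edge: ProdEdge = ((s, q), a, (s2, q2))
        target = committed_action(known, (s, q))
        if target is not None and a != target:
            flag = False                             # deviated w.r.t. current knowledge
        if edge not in known:
            flag = True                              # new discovery: recover from the flag
        return (q2, flag, known | frozenset({edge}))

    def reward(u: RmState, t: MdpEdge) -> float:
        q, flag, known = u
        s, _, _ = t
        return 1.0 if flag and committed_action(known, (s, q)) is not None else 0.0

    def initial(q0: str) -> RmState:
        return (q0, True, frozenset())

    return update, reward, initial
```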
Example 13.
For our running example (see Example 1), initially no transitions are known (hence there are no ASECs). Therefore, all transitions receive reward 0. Once the petrol station has been visited once and the transition avoiding it has been taken, we have discovered the (unique) ASEC and a reward of 1 is given henceforth, unless the action leading back to the petrol station is eventually selected. In that case, we leave the ASEC and we will not discover further ASECs since there is only one. From here, it is not possible to return to the accepting state of the DRA, and henceforth only reward 0 will be obtained.
Theorem 12 is proven by demonstrating an extension of Lemma 10 (see Appendix C):
Lemma 14.
Suppose $\mathcal{M}$ is an arbitrary MDP and let $\mathcal{R}$ be the reward machine defined above.

1. For every policy $\pi$, $\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\pi) \le \mathcal{J}^{\mathcal{A}}(\pi)$.

2. For every policy $\pi$, there exists some policy $\pi'$ satisfying $\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\pi') \ge \mathcal{J}^{\mathcal{A}}(\pi)$.
Note that Lemma 14 immediately proves that the reduction is not only optimality preserving (Theorem 12) but also robust: every ε-approximately limit-average optimal policy is also ε-approximately optimal w.r.t. $\mathcal{J}^{\mathcal{A}}$. This observation is important because exactly optimal policies for the limit-average problem may be hard to find.
Intuitively, to see part 1 of Lemma 14 we note: if an average reward of 1 is obtained for a run, the reward machine believes, based on its partial observation of the product MDP, that the run ends up in an ASEC. Almost surely, we eventually discover all possible transitions involving the same state-action pairs as this ASEC, and therefore it must also be an ASEC w.r.t. the true, unknown product MDP. For part 2, we modify the policy similarly to Lemma 10 by selecting the ASEC's actions once an ASEC w.r.t. the true, unknown product MDP has been entered. (NB: the modified policy depends on the true, unknown support of the probability transition function; we only claim the existence of such a policy.)
6 Convergence for Limit-Average, ω-Regular and LTL Objectives
Thanks to the described translation, advances (in both theory and practice) in the study of RL with average rewards carry over to RL with ω-regular and LTL objectives. In this section, we show that it is possible to learn optimal policies for limit-average rewards in the limit. Hence, we resolve an open problem [3]: RL with ω-regular and LTL objectives can also be learned in the limit.
We start with the case of simple reward functions $R \colon S \times A \times S \to \mathbb{R}$. Recently, [18] have shown that discount-optimal policies for sufficiently high discount factors are also limit-average optimal. This is not enough to demonstrate Theorem 16 since the required discount factor is generally not known and, in finite time, we might only obtain approximately (rather than exactly) optimal policies.
Our approach is to reduce RL with average rewards to a sequence of discount-sum problems with increasingly high discount factors, which are solved with increasingly high accuracy. Our crucial insight is that eventually the approximately optimal solutions to the discounted problems will also be limit-average optimal (see Appendix D for a proof):
Lemma 15.
Suppose $\gamma_k \to 1$ and $\epsilon_k \to 0$, and suppose each $\pi_k$ is a memoryless policy in $\Pi_k$. Then there exists $K$ such that for all $k \ge K$, $\pi_k$ is limit-average optimal, where $\Pi_k$ is the set of memoryless policies $\pi$ satisfying $\mathcal{J}^{R}_{\gamma_k}(\pi) \ge \mathcal{J}^{R}_{\gamma_k}(\pi') - \epsilon_k$ for all memoryless policies $\pi'$.
Our proof harnesses yet another notion of optimality: a policy is Blackwell optimal (cf. [6] and [22, Sec. 8.1]) if there exists $\gamma_0 < 1$ such that it is $\gamma$-discount optimal for all $\gamma \in (\gamma_0, 1)$. It is well known that memoryless Blackwell optimal strategies always exist [6, 18] and that they are also limit-average optimal [22, 18].
Thanks to the PAC (probably approximately correct) learnability of RL with discounted rewards [25, 35], there exists an algorithm Discounted which receives as inputs a simulator for the MDP, a discount factor $\gamma$, an accuracy $\epsilon$ and a confidence $\delta$, and which, with probability at least $1 - \delta$, returns an $\epsilon$-optimal memoryless policy for discount factor $\gamma$. In view of Lemma 15, our approach is to run the PAC algorithm for discount-sum RL for increasingly large discount factors and increasingly small $\epsilon$ and $\delta$ (Algorithm 1).
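Since Algorithm 1 is not reproduced here, the following sketch (ours) conveys its shape as an anytime loop over ever more demanding discounted problems; the particular schedules for the discount factors, accuracies and confidences, as well as the interface of Discounted, are assumptions.

```python
# Sketch only: learn limit-average optimal policies in the limit by solving a
# sequence of discounted problems with gamma_k -> 1 and eps_k, delta_k -> 0.
from typing import Callable, Iterator

def limit_average_in_the_limit(simulator,
                               Discounted: Callable[..., object]) -> Iterator[object]:
    k = 1
    while True:                                       # anytime: yields pi_1, pi_2, ...
        gamma_k = 1.0 - 1.0 / (k + 1)                 # increases towards 1
        eps_k = 1.0 / k                               # accuracy tends to 0
        delta_k = 1.0 / k ** 2                        # summable failure probabilities
        yield Discounted(simulator, gamma=gamma_k, eps=eps_k, delta=delta_k)
        k += 1
```

Choosing a summable schedule for the confidence parameters is one natural option: by the Borel–Cantelli lemma, almost surely only finitely many calls fail to return an ε_k-optimal policy, so Lemma 15 applies to the resulting sequence.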
Theorem 16.
RL with average reward functions can be learned in the limit by Algorithm 1: almost surely there exists $K$ such that $\pi_k$ is limit-average optimal for all $k \ge K$.
Proof.
Next, we turn to the more general case of reward machines. [23, 24] observe that optimal policies for reward machines can be learned by learning optimal policies for the modified MDP which additionally tracks the state the reward machine is in and assigns rewards accordingly. We conclude at once:
Corollary 17.
RL with average reward machines can be learned in the limit.
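In code, the observation of [23, 24] amounts to a simple environment wrapper; the following sketch (ours, with an assumed reset/step simulator interface) exposes the pair of MDP state and reward-machine state as the new state and the reward machine's output as the reward.

```python
# Sketch only: wrap an MDP simulator and a reward machine into a single MDP
# whose state also tracks the reward-machine state.
class ProductEnv:
    def __init__(self, env, rm_update, rm_reward, rm_init):
        self.env, self.rm_update, self.rm_reward, self.rm_init = env, rm_update, rm_reward, rm_init

    def reset(self):
        self.s, self.u = self.env.reset(), self.rm_init
        return (self.s, self.u)

    def step(self, action):
        s_next = self.env.step(action)                 # assumed: returns the next MDP state
        t = (self.s, action, s_next)
        r = self.rm_reward(self.u, t)                  # reward comes from the reward machine
        self.u = self.rm_update(self.u, t)
        self.s = s_next
        return (self.s, self.u), r
```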
Finally, harnessing Theorem 12 we resolve Open Problem 4 of [3]:
Theorem 18.
RL with ω-regular and LTL objectives can be learned in the limit.
Discussion.
Algorithm 1 makes independent calls to black-box algorithms for discount-sum rewards. Many such algorithms with PAC guarantees are model-based (e.g. [25, 35]) and sample from the MDP to obtain suitable approximations of the transition probabilities. Thus, Algorithm 1 can be improved in practice by re-using approximations obtained in earlier iterations and refining them.
7 Related Work and Conclusion
The connection between acceptance of ω-regular languages in the product MDP and AECs is well known in the field of probabilistic model checking [5, 12]. As an alternative to DRAs [40, 14, 31], Limit-Deterministic Büchi Automata [34] have been employed to express ω-regular languages for RL [37, 7, 10, 20, 21].
A pioneering work on RL for ω-regular rewards is [40], which expresses ω-regular objectives using Deterministic Rabin Automata. Similar RL approaches for ω-regular objectives can also be found in [14, 37, 10, 15]. The authors of [15, 29] approach RL for ω-regular objectives directly by studying the reachability of AECs in the product MDP and developing variants of the R-MAX algorithm [8] to find optimal policies. However, these approaches require prior knowledge about the MDP, such as the structure of the MDP, the optimal ε-return mixing time [15], or the ε-recurrence time [29].
Various studies have explored reductions of ω-regular objectives to discounted rewards, and subsequently applied Q-learning and its variants for learning optimal policies [7, 31, 20, 21, 16]. In a similar spirit, [38] present a translation from LTL objectives to eventual discounted rewards, where only strictly positive rewards are discounted. These translations are generally not optimality preserving unless the discount factor is selected in a suitable way. Again, this is impossible without prior knowledge of the exact probability transition function of the MDP.
Furthermore, whilst there are numerous convergent RL algorithms for average rewards for unichain or communicating MDPs (e.g. [8, 42, 17, 32, 4, 39]), it is unknown whether such an algorithm with a guaranteed convergence property exists for general multichain MDPs. (These assumptions generally fail in our setting, where, in view of Corollary 17, MDP states also track the states of the reward machine; for instance, for the reward machine in Fig. 2(a), earlier machine states cannot be reached again once the petrol station has been visited more than once.) In fact, a negative result in [3] shows that there is no PAC (probably approximately correct) algorithm for LTL objectives and limit-average rewards when the MDP transition probabilities are unknown.
[8] have proposed an algorithm with PAC guarantees provided ε-return mixing times are known. They informally argue that, for a fixed sub-optimality tolerance ε, this assumption can be lifted by guessing increasingly large candidates for the ε-return mixing time. This yields ε-approximately optimal policies in the limit. However, it is not clear how to asymptotically obtain exactly optimal policies, as this would require simultaneously decreasing ε and increasing the guesses for the ε-return mixing time (which depends on ε).
Conclusion.
We have presented an optimality-preserving translation from ω-regular objectives to limit-average rewards furnished by reward machines. As a consequence, off-the-shelf RL algorithms for average rewards can be employed in conjunction with our translation to learn policies for ω-regular objectives. Furthermore, we have developed an algorithm that asymptotically learns provably optimal policies for limit-average rewards. Hence, optimal policies for ω-regular and LTL objectives can also be learned in the limit. Our results provide affirmative answers to two open problems in [3].
Limitations.
We focus on MDPs with finite state and action sets and assume states are fully observable. The assumption of Section 4 that the support of the MDP’s probability transition function is known is eliminated in Section 5. Whilst the size of our general translation—the first optimality-preserving translation—is exponential, the additional knowledge in Section 4 enables a construction of the reward machine of the same size as the DRA expressing the objective. Hence, we conjecture that this size is minimal. Since RL with average rewards is not PAC learnable, we cannot possibly provide finite-time complexity guarantees of our Algorithm 1.
Acknowledgments and Disclosure of Funding
This research is supported by the National Research Foundation, Singapore, under its RSS Scheme (NRFRSS2022-009).
References
- [1] David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, and Satinder Singh. On the expressivity of markov reward. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 7799–7812, 2021.
- [2] Luca de Alfaro. Formal Verification of Probabilistic Systems. PhD thesis, Stanford University, Stanford, CA, USA, 1998.
- [3] Rajeev Alur, Suguman Bansal, Osbert Bastani, and Kishor Jothimurugan. A framework for transforming specifications in reinforcement learning. In Jean-François Raskin, Krishnendu Chatterjee, Laurent Doyen, and Rupak Majumdar, editors, Principles of Systems Design: Essays Dedicated to Thomas A. Henzinger on the Occasion of His 60th Birthday, pages 604–624, Cham, 2022. Springer Nature Switzerland.
- [4] Peter Auer, Thomas Jaksch, and Ronald Ortner. Near-optimal regret bounds for reinforcement learning. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems, volume 21. Curran Associates, Inc., 2008.
- [5] Christel Baier and Joost-Pieter Katoen. Principles of Model Checking. The MIT Press, 2008.
- [6] David Blackwell. Discrete Dynamic Programming. The Annals of Mathematical Statistics, 33(2):719 – 726, 1962.
- [7] Alper Kamil Bozkurt, Yu Wang, Michael M. Zavlanos, and Miroslav Pajic. Control synthesis from Linear Temporal Logic specifications using model-free reinforcement learning. 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 10349–10355, 2019.
- [8] Ronen I. Brafman and Moshe Tennenholtz. R-max - A general polynomial time algorithm for near-optimal reinforcement learning. J. Mach. Learn. Res., 3(null):213–231, mar 2003.
- [9] Tomáš Brázdil, Krishnendu Chatterjee, Martin Chmelik, Vojtěch Forejt, Jan Křetínskỳ, Marta Kwiatkowska, David Parker, and Mateusz Ujma. Verification of Markov decision processes using learning algorithms. In Automated Technology for Verification and Analysis: 12th International Symposium, ATVA 2014, Sydney, NSW, Australia, November 3-7, 2014, Proceedings 12, pages 98–114. Springer, 2014.
- [10] Mingyu Cai, Shaoping Xiao, Zhijun Li, and Zhen Kan. Optimal probabilistic motion planning with potential infeasible LTL constraints. IEEE Transactions on Automatic Control, 68(1):301–316, 2023.
- [11] Krishnendu Chatterjee and Monika Henzinger. Faster and dynamic algorithms for maximal end-component decomposition and related graph problems in probabilistic verification. In Dana Randall, editor, Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco, California, USA, January 23-25, 2011, pages 1318–1336. SIAM, 2011.
- [12] Luca de Alfaro. Computing minimum and maximum reachability times in probabilistic systems. In Jos C. M. Baeten and Sjouke Mauw, editors, CONCUR’99 Concurrency Theory, pages 66–81, Berlin, Heidelberg, 1999. Springer Berlin Heidelberg.
- [13] Vektor Dewanto, George Dunn, Ali Eshragh, Marcus Gallagher, and Fred Roosta. Average-reward model-free reinforcement learning: a systematic review and literature mapping, 2021.
- [14] Xuchu Ding, Stephen L. Smith, Calin Belta, and Daniela Rus. Optimal control of Markov Decision Processes with Linear Temporal Logic constraints. IEEE Transactions on Automatic Control, 59(5):1244–1257, 2014.
- [15] Jie Fu and Ufuk Topcu. Probably approximately correct MDP learning and control with Temporal Logic constraints. In Dieter Fox, Lydia E. Kavraki, and Hanna Kurniawati, editors, Robotics: Science and Systems X, University of California, Berkeley, USA, July 12-16, 2014, 2014.
- [16] Qitong Gao, Davood Hajinezhad, Yan Zhang, Yiannis Kantaros, and Michael M. Zavlanos. Reduced variance deep reinforcement learning with Temporal Logic specifications. In Proceedings of the 10th ACM/IEEE International Conference on Cyber-Physical Systems, ICCPS ’19, page 237–248, New York, NY, USA, 2019. Association for Computing Machinery.
- [17] Abhijit Gosavi. Reinforcement learning for long-run average cost. European Journal of Operational Research, 155(3):654–674, 2004. Traffic and Transportation Systems Analysis.
- [18] Julien Grand-Clément and Marek Petrik. Reducing blackwell and average optimality to discounted mdps via the blackwell discount factor. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023.
- [19] Ernst Moritz Hahn, Mateo Perez, Sven Schewe, Fabio Somenzi, Ashutosh Trivedi, and Dominik Wojtczak. Omega-regular objectives in model-free reinforcement learning. In Tomáš Vojnar and Lijun Zhang, editors, Tools and Algorithms for the Construction and Analysis of Systems, pages 395–412, Cham, 2019. Springer International Publishing.
- [20] Hosein Hasanbeig, Daniel Kroening, and Alessandro Abate. Certified reinforcement learning with logic guidance. Artificial Intelligence, 322:103949, 2023.
- [21] Mohammadhosein Hasanbeig, Daniel Kroening, and Alessandro Abate. Deep reinforcement learning with Temporal Logics. In Nathalie Bertrand and Nils Jansen, editors, Formal Modeling and Analysis of Timed Systems, pages 1–22, Cham, 2020. Springer International Publishing.
- [22] Arie Hordijk and Alexander A. Yushkevich. Blackwell Optimality, pages 231–267. Springer US, Boston, MA, 2002.
- [23] Rodrigo Toro Icarte. Reward Machines. Phd thesis, University of Toronto, 03 2022.
- [24] Rodrigo Toro Icarte, Toryn Klassen, Richard Valenzano, and Sheila McIlraith. Using reward machines for high-level task specification and decomposition in reinforcement learning. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2107–2116. PMLR, 10–15 Jul 2018.
- [25] Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49:209–232, 2002.
- [26] Bakhadyr Khoussainov and Anil Nerode. Automata Theory and its Applications. Birkhäuser Boston, Boston, MA, 2001.
- [27] Achim Klenke. Probability Theory: A Comprehensive Course. Universitext. Springer London, 2014.
- [28] Dexter Kozen. Theory of Computation. Springer, London, 2006.
- [29] Mateo Perez, Fabio Somenzi, and Ashutosh Trivedi. A PAC Learning Algorithm for LTL and Omega-Regular Objectives in MDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19):21510–21517, 2024.
- [30] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., USA, 1st edition, 1994.
- [31] Dorsa Sadigh, Eric S. Kim, Samuel Coogan, S. Shankar Sastry, and Sanjit A. Seshia. A learning based approach to control synthesis of Markov Decision Processes for Linear Temporal Logic specifications. In 53rd IEEE Conference on Decision and Control, pages 1091–1096, 2014.
- [32] Anton Schwartz. A reinforcement learning method for maximizing undiscounted rewards. In International Conference on Machine Learning, 1993.
- [33] Daqian Shao and Marta Kwiatkowska. Sample Efficient Model-free Reinforcement Learning from LTL Specifications with Optimality Guarantees. In IJCAI International Joint Conference on Artificial Intelligence, pages 4180–4189, 2023.
- [34] Salomon Sickert, Javier Esparza, Stefan Jaax, and Jan Křetínský. Limit-deterministic Büchi automata for Linear Temporal Logic. In Swarat Chaudhuri and Azadeh Farzan, editors, Computer Aided Verification, pages 312–332, Cham, 2016. Springer International Publishing.
- [35] Alexander L. Strehl, Lihong Li, and Michael L. Littman. Reinforcement learning in finite mdps: PAC analysis. J. Mach. Learn. Res., 10:2413–2444, 2009.
- [36] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. A Bradford Book, Cambridge, MA, USA, 2018.
- [37] Cameron Voloshin, Hoang Le, Swarat Chaudhuri, and Yisong Yue. Policy optimization with Linear Temporal Logic constraints. Advances in Neural Information Processing Systems, 35:17690–17702, 2022.
- [38] Cameron Voloshin, Abhinav Verma, and Yisong Yue. Eventual discounting temporal logic counterfactual experience replay. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 35137–35150. PMLR, 2023.
- [39] Yi Wan, Abhishek Naik, and Richard S. Sutton. Learning and planning in average-reward markov decision processes. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 10653–10662. PMLR, 2021.
- [40] Eric M. Wolff, Ufuk Topcu, and Richard M. Murray. Robust control of uncertain Markov Decision Processes with Temporal Logic specifications. In 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), pages 3372–3379, 2012.
- [41] Cambridge Yang, Michael L. Littman, and Michael Carbin. On the (In)Tractability of Reinforcement Learning for LTL Objectives. IJCAI International Joint Conference on Artificial Intelligence, pages 3650–3658, 2022.
- [42] Shangdong Yang, Yang Gao, Bo An, Hao Wang, and Xingguo Chen. Efficient average reward reinforcement learning using constant shifting values. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1), 2016.
Appendix A Supplementary Materials for Section 3
Recall that an ω-regular language $\mathcal{L}$ is prefix-independent if for every infinite label sequence $w$, we have $w \in \mathcal{L}$ iff every suffix of $w$ is in $\mathcal{L}$. We prove that there is no optimality-preserving translation for reward functions regardless of whether $\mathcal{L}$ is prefix-independent or not. The prefix-dependent case was covered in Section 3. Here we focus on the other case:
Proposition 19.
There exists a tuple $(\mathcal{M} \setminus P, L)$ and a prefix-independent ω-regular language $\mathcal{L}$ for which it is impossible to find a reward function $R$ such that, for every probability transition function $P$, every limit-average optimal policy of the resulting MDP is also $\mathcal{L}$-optimal (i.e. maximises the probability of membership in $\mathcal{L}$).
Proof.
Our proof technique is based on the fact that we can modify the transition probability function. Consider the MDP in Fig. 3(a), where the objective is to visit either or infinitely often. It can be checked that the DRA in Fig. 3(b) captures the given objective and the language accepted by is prefix-independent. There are only two deterministic memoryless policies: , which consistently selects action , and , which consistently selects action . For the sake of contradiction, let’s assume the existence of a reward function that preserves optimality for every transition probability function . Pick and . Then and , which implies that is -optimal whereas is not. Thus . Now, assume . Accordingly, we have and we can deduce that (e.g. by solving the linear equation system described in [30, §8.2.3]) . As a result:
Consequently, if are sufficiently large then . However, this contradicts the fact that is -optimal and is not, since . Hence, there is no such reward function . ∎
Appendix B Supplementary Materials for Section 4
Lemma 20.
Every AEC contains an ASEC.
Proof.
Consider an AEC $(S', \mathit{sp})$ of $\mathcal{M} \times \mathcal{A}$. We prove the claim by induction on the number of actions $n := \sum_{\bar{s} \in S'} |\mathit{sp}(\bar{s})|$. For the base case $n = 1$, it can be deduced that the AEC consists of only one accepting state with a self-loop. Therefore, it is itself an ASEC.

Now, let us assume that $n > 1$. If the AEC is already an ASEC, then we are done. Otherwise, there exists a state $\bar{s} \in S'$ such that $|\mathit{sp}(\bar{s})| \ge 2$. Since the AEC is strongly connected, there exists a finite path from $\bar{s}$ to an accepting state $\bar{s}'$ in which all states other than the first are different from $\bar{s}$. Let $a \in \mathit{sp}(\bar{s})$ be an action not used by this path in $\bar{s}$. We construct a new AEC by first removing $a$ from $\mathit{sp}(\bar{s})$ and then removing all the states that are no longer reachable from $\bar{s}$, along with their associated transitions. It is important to note that, after the removal, $\bar{s}'$ is still present, since we can reach $\bar{s}'$ from $\bar{s}$ without taking the action $a$. (Besides, the resulting graph is still strongly connected.) Since the number of actions has strictly decreased, we can apply the induction hypothesis to conclude that the new AEC contains an ASEC, thus completing the proof. ∎
Lemma 8 (restated).
To prove this result, we recall a well-known fact from probabilistic model checking: with probability one, every run of the policy eventually stays in one of the ECs of the product MDP and visits every transition in that EC infinitely often. To state this formally, we define, for any run $\rho$, $\mathit{Inf}_{SA}(\rho)$ to be the set of state-action pairs occurring infinitely often in $\rho$. Furthermore, a state-action set defines a sub-MDP consisting of the states and enabled actions it mentions.
Lemma 21 ([12]).
Almost surely, $\mathit{Inf}_{SA}(\rho)$ is (the state-action set of) an end component of the product MDP.
For the sake of self-containedness, we recall the proof of [12].
Proof.
We start with two more definitions: for any sub-MDP $K$ [2], let $\mathit{SA}(K)$ be the set of state-action pairs $(\bar{s}, a)$ such that $a$ is enabled in $\bar{s}$ in $K$. Finally, let $R_K$ be the set of runs $\rho$ such that action $a$ is taken infinitely often in state $\bar{s}$ iff $(\bar{s}, a) \in \mathit{SA}(K)$, i.e. $\mathit{Inf}_{SA}(\rho) = \mathit{SA}(K)$. Note that the sets $R_K$ constitute a partition of the set of infinite runs.

Therefore, it suffices to establish, for any sub-MDP $K$: either $K$ is an end component or $\mathbb{P}_\pi(R_K) = 0$.

Let $K$ be an arbitrary sub-MDP. First, suppose there exist $(\bar{s}, a) \in \mathit{SA}(K)$ and a state $\bar{s}'$ outside of $K$ such that $P^\times(\bar{s}, a)(\bar{s}') > 0$. By definition, each $\rho \in R_K$ takes action $a$ in state $\bar{s}$ infinitely often. Hence, almost surely such a run also visits $\bar{s}'$ infinitely often, and therefore $\mathbb{P}_\pi(R_K) = 0$.

Thus, we can assume that for all $(\bar{s}, a) \in \mathit{SA}(K)$, $\mathrm{supp}(P^\times(\bar{s}, a))$ is contained in $K$. If $R_K = \emptyset$ then $\mathbb{P}_\pi(R_K) = 0$ clearly follows. Otherwise, take any $\rho \in R_K$, and let $\bar{s}, \bar{s}'$ be arbitrary states of $K$. We show that there exists a connecting path in $K$, which implies that $K$ is an end component.

Evidently, there exists an index $n$ such that all state-action pairs taken by $\rho$ from position $n$ onwards occur infinitely often in $\rho$, i.e. belong to $\mathit{SA}(K)$. Thus, for all $j \ge i \ge n$, there is a path in $K$ from the $i$-th to the $j$-th state of $\rho$. Finally, it suffices to note that clearly both $\bar{s}$ and $\bar{s}'$ occur in $\rho$ at positions $\ge n$, and in either order. ∎
Proof of Lemma 8.
By Lemma 21, almost surely $\mathit{Inf}_{SA}(\rho)$ defines an end component. Clearly, $\rho$ is only accepted by the product MDP if this end component is an accepting EC. By Lemma 20 this AEC contains an ASEC. Therefore, by definition of $\mathit{Inf}_{SA}(\rho)$, almost surely an accepted run in particular enters some ASEC. Finally, since the $E_i$ cover all states in ASECs, almost surely an accepted run enters some $E_i$. ∎
Before turning to the proof of Lemma 10, let $\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\rho)$ denote the limit-average reward of a single run $\rho$. Note that, for any run $\rho$, $0 \le \mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\rho) \le 1$. Thus, by the dominated convergence theorem [27, Cor. 6.26],

$\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\pi) = \mathbb{E}_\pi\big[\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\rho)\big]$.  (3)
Lemma 10 (restated).
Proof.
1. For any run $\rho$, $\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\rho) = 1$ only if $\rho$ enters some $E_i$ and never leaves it. ($\rho$ might have entered other $E_j$'s earlier, but it can only leave such an $E_j$ without being trapped in the sink state $\bot$ (which would result in $\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\rho) = 0$) if the current state also belongs to an overlapping ASEC of smaller index; otherwise, the reward machine would have enforced transitioning to $\bot$.) By the argument of the proof sketch, almost surely such a run is accepted, and hence, using Eq. (3), $\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\pi) \le \mathcal{J}^{\mathcal{A}}(\pi)$.
2. Let $\pi$ be arbitrary. For a run, let $q_t$ be the state of the DRA at step $t$. Define $\pi'$ to follow $\pi$ until reaching a product state $(s_t, q_t)$ contained in some $E_i$. Henceforth, we select the (unique) action guaranteeing to stay in the ASEC with minimal index containing the current product state, i.e. $E_{(s_t, q_t)}$. Formally (slightly abusing notation in the “otherwise”-case by identifying an action with the distribution selecting it with probability 1),

$\pi'(\rho) = \pi(\rho)$ if $\rho$ has not yet visited $E_1 \cup \cdots \cup E_m$, and otherwise $\pi'(\rho)$ is the unique action of $E_{(s_t, q_t)}$ at the current product state $(s_t, q_t)$.  (4)

Note that whenever a run follows the modified policy $\pi'$ and its induced run reaches some ASEC $E_i$, then its limit-average reward is 1. Thus,

$\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\pi') \ge \mathbb{P}_{\pi'}\big[\{\rho \mid \rho^\times \text{ reaches some } E_i\}\big]$.

Furthermore, by Lemma 8, almost surely, every induced run accepted by the product MDP must reach some $E_i$. Consequently, by Eq. (2),

$\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\pi') \ge \mathbb{P}_{\pi'}\big[\{\rho \mid \rho^\times \text{ reaches some } E_i\}\big] = \mathbb{P}_{\pi}\big[\{\rho \mid \rho^\times \text{ reaches some } E_i\}\big] \ge \mathcal{J}^{\mathcal{A}}(\pi)$.

In the penultimate step, we have exploited the fact that $\pi$ and $\pi'$ agree until reaching the first $E_i$. ∎
B.1 Efficient Construction
We consider a different collection of ASECs:
Suppose $D_1, \ldots, D_m$ is a collection of AECs (not necessarily simple ones) containing all states in AECs. Then we consider ASECs $E_1, \ldots, E_m$ such that each $E_i$ is contained in $D_i$.
The definition of the reward machine in Section 4.2 and the extension in Section 5 do not need to be changed. Next, we argue the following:
1. This collection can be obtained efficiently (in time polynomial in the size of the MDP and DRA).

2. Lemma 10, and hence Theorem 9, still hold for this collection.
For 1. it is well-known that a collection of maximal AECs (covering all states in AECs) can be found efficiently using graph algorithms [2, Alg. 3.1], [15, 11] and [5, Alg. 47 and Lemma 10.125]. Subsequently, Lemma 20 can be used to obtain an ASEC contained in each of them. In particular, note that the proof of Lemma 20 immediately gives rise to an efficient algorithm. (Briefly, we iteratively remove actions and states whilst querying reachability properties.)
For 2., the first part of Lemma 10 clearly still holds. For the second, we modify the policy $\pi'$ as follows: once a maximal accepting end component $D_i$ is entered, we select an action on the shortest path to the respective ASEC $E_i$ inside $D_i$. Once we enter one of the $E_i$, we follow the actions specified by the ASEC as before. Observe that the probability that an AEC is entered under $\pi$ is the same as the probability that one of the $E_i$ is entered under the modified policy. The lemma, and hence Theorem 9, follow.
Appendix C Supplementary Materials for Section 5
Lemma 14 (restated).
Proof.
1. For a run $\rho$, let $T_\rho$ be the set of transitions encountered in the product MDP. Note that $\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\rho) = 1$ only if $\rho$ enters some ASEC w.r.t. its recorded transitions and never leaves it. ($\rho$ might have entered other such ASECs earlier, for smaller recorded sets.) With probability 1, the recorded set eventually contains all transitions of the actual MDP that involve the state-action pairs of this ASEC (possible transitions outside of it might be missing). In particular, with probability 1, it is also an ASEC for the true, unknown MDP and $\rho$ is accepted by the product MDP. Consequently, using Eq. (3) again, $\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\pi) \le \mathcal{J}^{\mathcal{A}}(\pi)$.
2. Let $\pi$ be arbitrary. We modify $\pi$ to $\pi'$ as follows: until reaching an ASEC w.r.t. the true, unknown set of transitions, follow $\pi$; henceforth, select the unique action of that ASEC at the current product state. (NB: the modified policy depends on the true, unknown set of transitions; we only claim the existence of such a policy.)

We claim that whenever a run follows the modified policy $\pi'$ and reaches some ASEC in the true product MDP, its limit-average reward is 1. To see this, suppose the induced run enters such an ASEC $E$ at some minimal step $t_0$. Define $T_t$ to be the transitions encountered up to step $t$. Then, almost surely, for some minimal $t_1 \ge t_0$, $T_{t_1}$ contains all transitions of $E$, and no further transitions will be encountered, i.e. $T_t = T_{t_1}$ for all $t \ge t_1$. Note that from step $t_0$ onwards the run never leaves $E$. (This is because upon entering the ASEC we immediately switch to following the action dictated by $E$. Thus, we avoid “accidentally” discovering other ASECs w.r.t. the partial knowledge of the product MDP's graph, which might otherwise force us to perform actions leaving $E$.) Consequently, there cannot be another ASEC w.r.t. the recorded transitions overlapping with $E$ and forcing a deviation from it. Therefore, for all $t \ge t_1$, the reward machine gives reward 1, and the limit-average reward of the run is 1.

Thus, $\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\pi') \ge \mathbb{P}_{\pi'}\big[\{\rho \mid \rho^\times \text{ reaches some ASEC of the true product MDP}\}\big]$. Consequently,

$\mathcal{J}^{\mathcal{R}}_{\mathrm{avg}}(\pi') \ge \mathbb{P}_{\pi'}\big[\rho^\times \text{ reaches some true ASEC}\big] = \mathbb{P}_{\pi}\big[\rho^\times \text{ reaches some true ASEC}\big] \ge \mathcal{J}^{\mathcal{A}}(\pi)$.

In the penultimate step we have exploited that $\pi$ and $\pi'$ agree until reaching some ASEC in the true product MDP. ∎
Appendix D Supplementary Materials for Section 6
Lemma 15 is proven completely analogously to the following:
Lemma 22.
Suppose $\gamma_k \to 1$ and $\epsilon_k \to 0$, and suppose each $\pi_k$ is a memoryless policy satisfying $\mathcal{J}^{R}_{\gamma_k}(\pi_k) \ge \mathcal{J}^{R}_{\gamma_k}(\pi) - \epsilon_k$ for all memoryless policies $\pi$. Then there exists $K$ such that for all $k \ge K$, $\pi_k$ is limit-average optimal.
Proof.
We define $\Delta$ to be the smallest gap between the optimal limit-average value and the limit-average value of any deterministic memoryless policy that is not limit-average optimal (and $\Delta := 1$ if no such policy exists). Recall (see e.g. [22, Sec. 8.1]) that for any memoryless policy $\pi$,

$\lim_{\gamma \to 1}\, (1 - \gamma)\, \mathcal{J}^{R}_{\gamma}(\pi) = \mathcal{J}^{R}_{\mathrm{avg}}(\pi)$.  (5)

Since the set of deterministic memoryless policies is finite, due to Eq. (5) there exists $\gamma_1 < 1$ such that

$\big|(1 - \gamma)\, \mathcal{J}^{R}_{\gamma}(\pi) - \mathcal{J}^{R}_{\mathrm{avg}}(\pi)\big| \le \Delta / 4$  (6)

for all $\gamma \in (\gamma_1, 1)$ and all such $\pi$. Let $\pi^*$ be a memoryless Blackwell optimal policy (which exists due to [6, 18]). Note that $\pi^*$ is also limit-average optimal [22, 18], i.e.

$\mathcal{J}^{R}_{\mathrm{avg}}(\pi^*) \ge \mathcal{J}^{R}_{\mathrm{avg}}(\pi)$  (7)

for all policies $\pi$, and, by Blackwell optimality, there exists $\gamma_2 < 1$ such that

$\mathcal{J}^{R}_{\gamma}(\pi^*) \ge \mathcal{J}^{R}_{\gamma}(\pi)$  (8)

for all $\gamma \in (\gamma_2, 1)$ and all policies $\pi$. Moreover, there clearly exists $K$ such that $\gamma_k > \max(\gamma_1, \gamma_2)$ and $(1 - \gamma_k)\,\epsilon_k \le \Delta / 4$ for all $k \ge K$. For such $k$, combining Eqs. (6)–(8) with the $\epsilon_k$-optimality of $\pi_k$ shows that $\mathcal{J}^{R}_{\mathrm{avg}}(\pi_k) > \mathcal{J}^{R}_{\mathrm{avg}}(\pi^*) - \Delta$, and hence, by the choice of $\Delta$, $\pi_k$ is limit-average optimal. ∎
NeurIPS Paper Checklist
-
1.
Claims
-
Question: Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope?
-
Answer: [Yes]
-
Justification: The main results mentioned in the abstract and introduction are Proposition 4 and Theorems 12, 16 and 18. They accurately reflect the paper's contributions and scope.
-
Guidelines:
-
•
The answer NA means that the abstract and introduction do not include the claims made in the paper.
-
•
The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
-
•
The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
-
•
It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
-
•
-
2.
Limitations
-
Question: Does the paper discuss the limitations of the work performed by the authors?
-
Answer: [Yes]
-
Justification: Limitations are discussed in Section 7.
-
Guidelines:
-
•
The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
-
•
The authors are encouraged to create a separate "Limitations" section in their paper.
-
•
The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
-
•
The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
-
•
The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
-
•
The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
-
•
If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
-
•
While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren’t acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
-
•
-
3.
Theory Assumptions and Proofs
-
Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
-
Answer: [Yes]
-
Guidelines:
-
•
The answer NA means that the paper does not include theoretical results.
-
•
All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
-
•
All assumptions should be clearly stated or referenced in the statement of any theorems.
-
•
The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
-
•
Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
-
•
Theorems and Lemmas that the proof relies upon should be properly referenced.
-
•
-
4.
Experimental Result Reproducibility
-
Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
-
Answer: [N/A]
-
Justification: The paper does not include experiments.
-
Guidelines:
-
•
The answer NA means that the paper does not include experiments.
-
•
If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
-
•
If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
-
•
Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
-
•
While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
-
(a)
If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
-
(b)
If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
-
(c)
If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
-
(d)
We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
-
(a)
-
•
-
5.
Open access to data and code
-
Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
-
Answer: [N/A]
-
Justification: The paper does not include experiments.
-
Guidelines:
-
•
The answer NA means that paper does not include experiments requiring code.
-
•
Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
-
•
While we encourage the release of code and data, we understand that this might not be possible, so “No” is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
-
•
The instructions should contain the exact command and environment needed to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
-
•
The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
-
•
The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
-
•
At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
-
•
Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
-
6.
Experimental Setting/Details
-
Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
-
Answer: [N/A]
-
Justification: The paper does not include experiments.
-
Guidelines:
-
•
The answer NA means that the paper does not include experiments.
-
•
The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
-
•
The full details can be provided either with the code, in the appendix, or as supplemental material.
-
7.
Experiment Statistical Significance
-
Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
-
Answer: [N/A]
-
Justification: The paper does not include experiments.
-
Guidelines:
-
•
The answer NA means that the paper does not include experiments.
-
•
The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
-
•
The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
-
•
The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
-
•
The assumptions made should be given (e.g., Normally distributed errors).
-
•
It should be clear whether the error bar is the standard deviation or the standard error of the mean.
-
•
It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar rather than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
-
•
For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
-
•
If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
-
8.
Experiments Compute Resources
-
Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
-
Answer: [N/A]
-
Justification: The paper does not include experiments.
-
Guidelines:
-
•
The answer NA means that the paper does not include experiments.
-
•
The paper should indicate the type of compute workers (CPU or GPU), internal cluster, or cloud provider, including relevant memory and storage.
-
•
The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
-
•
The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn’t make it into the paper).
-
9.
Code Of Ethics
-
Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
-
Answer: [Yes]
-
Justification: The research conducted in the paper conforms, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines.
-
Guidelines:
-
•
The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
-
•
If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
-
•
The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
-
10.
Broader Impacts
-
Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
-
Answer: [N/A]
-
Justification: The paper is foundational, theoretical work that is not tied to particular applications or deployments, so we do not foresee any direct societal impact.
-
Guidelines:
-
•
The answer NA means that there is no societal impact of the work performed.
-
•
If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
-
•
Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
-
•
The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not necessary to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate deepfakes faster.
-
•
The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
-
•
If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
-
11.
Safeguards
-
Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
-
Answer: [N/A]
-
Justification: The paper poses no such risks.
-
Guidelines:
-
•
The answer NA means that the paper poses no such risks.
-
•
Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
-
•
Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
-
•
We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
-
12.
Licenses for existing assets
-
Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
-
Answer: [N/A]
-
Justification: The paper does not use existing assets.
-
Guidelines:
-
•
The answer NA means that the paper does not use existing assets.
-
•
The authors should cite the original paper that produced the code package or dataset.
-
•
The authors should state which version of the asset is used and, if possible, include a URL.
-
•
The name of the license (e.g., CC-BY 4.0) should be included for each asset.
-
•
For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
-
•
If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
-
•
For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
-
•
If this information is not available online, the authors are encouraged to reach out to the asset’s creators.
-
13.
New Assets
-
Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
-
Answer: [N/A]
-
Justification: The paper does not release new assets.
-
Guidelines:
-
•
The answer NA means that the paper does not release new assets.
-
•
Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
-
•
The paper should discuss whether and how consent was obtained from people whose asset is used.
-
•
At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
-
14.
Crowdsourcing and Research with Human Subjects
-
Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
-
Answer: [N/A]
-
Justification: The paper does not involve crowdsourcing nor research with human subjects.
-
Guidelines:
-
•
The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
-
•
Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
-
•
According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
-
15.
Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects
-
Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
-
Answer: [N/A]
-
Justification: The paper does not involve crowdsourcing nor research with human subjects.
-
Guidelines:
-
•
The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
-
•
Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
-
•
We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
-
•
For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.