Convergence Guarantee of Dynamic Programming for LTL Surrogate Reward
Abstract
Linear Temporal Logic (LTL) is a formal way of specifying complex objectives for planning problems modeled as Markov Decision Processes (MDPs). The planning problem is to find the optimal policy that maximizes the satisfaction probability of the LTL objective. One way to solve the planning problem is to use a surrogate reward with two discount factors and dynamic programming, which bypasses the graph analysis used in traditional model-checking. The surrogate reward is designed such that its value function represents the satisfaction probability. However, in some cases where one of the discount factors is set to $1$ for higher accuracy, the convergence of dynamic programming to the value function is not guaranteed. This work shows that a multi-step contraction always exists during dynamic programming updates, guaranteeing that the approximate value function converges exponentially to the true value function. Thus, the computation of the satisfaction probability is guaranteed.
I Introduction
Modern autonomous systems need to solve planning problems for complex rule-based tasks that are usually expressible by linear temporal logic (LTL)[1]. LTL is a symbolic language to describe high-level tasks like reaching a sequence of goals or ordering a set of events. The planning problem with the LTL objective is to find the optimal policy that maximizes the probability of satisfying the given LTL objective. This problem can be formally treated as a quantitative model-checking problem [2] when the environment is modeled as MDPs with known transition probability. That is, given an MDP and an LTL objective, find the maximum satisfaction probability within all possible policies.
Although an LTL objective can be complex, its maximal satisfaction probability and optimal policy can be computed by quantitative (or probabilistic) model checking [2, 3, 4] via reachability. First, a specific set of states is selected so that the reachability probability to this set equals the maximum satisfaction probability of the LTL objective. This set of states is identified by graph analysis, such as a depth-first search, on the product MDP. The product MDP is a product of the original MDP and an ω-regular automaton, encoding all the information necessary for quantitative model-checking. Then, dynamic programming or linear programming is applied to compute the reachability probability of the set.
An alternative approach finds the optimal policy via a surrogate reward function without the graph analysis, and is thus more amenable to model-free reinforcement learning (RL) [5, 6, 7, 8]. The surrogate reward is a reward function automatically derived from the LTL objective, which yields a value function representing the satisfaction probability of the LTL objective. The satisfaction probability can then be computed using dynamic programming, which iteratively updates an approximate value function by the Bellman equation. Ideally, the approximate value function converges to the value function during the updates. Furthermore, finding the optimal policy and the corresponding maximum satisfaction probability can be done by policy or value iteration. Another advantage of using a surrogate reward is a smooth transition into model-free RL for situations where the MDP transitions are unknown. In this case, the two-phase model-checking approach has to be transformed into model-based RL and requires additional computation [9].
This work focuses on a widely used surrogate reward [8] based on the limit-deterministic Büchi automaton. It assigns a constant reward for “good” states with two discount factors. Given a policy, the probability of infinitely often visiting the “good” states shall equal the satisfaction probability of the LTL objective. The surrogate reward yields a value function equal to the satisfaction probability when taking the limit of both discount factors to one.
Nevertheless, whether dynamic programming based on this surrogate reward can find the satisfaction probability has still not been fully studied. We noticed that more recent works [10, 11, 12, 13] allow one discount factor to equal $1$ while using the surrogate reward with two discount factors from [8]. Their Bellman equations yield multiple solutions, as a discount factor of $1$ applies in many states. The approximate value function updated by these Bellman equations may not converge to the value function. [14] proposes a sufficient condition to identify the value function from the multiple solutions satisfying the Bellman equation. However, when discounting is missing in many states, whether dynamic programming or RL based on the Bellman equation will give us the value function is still not fully answered.
This work gives a convergence guarantee for the approximate value function during dynamic programming updates. We find an upper bound on the infinite norm of the approximation error that decays exponentially. Even though a one-step dynamic programming update does not provide a contraction on the approximation error, we show that a multi-step contraction always occurs as we keep applying the dynamic programming update for enough steps. We verify our result in a case study where our upper bound holds and the approximation error goes to zero. The intuition behind our result is that although only a few states have discounting, the discounting applied to them is propagated to the other states via the reachability between states.
II Preliminaries
This section introduces how an LTL objective can be translated to a surrogate reward function. First, we formulate the LTL planning problem into an MDP with a Büchi objective, which is a standard approach used by probabilistic model-checking [2, 15] and other LTL planning works (e.g., [8]). Then, we show that the satisfaction probability of the Büchi objective can be expressed in another form, i.e. the value function of the surrogate reward.
II-A Modeling planning problems with LTL objectives as MDPs with Büchi objectives
We propose our model called Markov decision processes with Büchi objective. It augments general MDPs [2] with a set of accepting states.
Definition 1
A Markov decision process with Büchi objective is a tuple $\mathcal{M} = (S, s_0, A, P, B)$ where
- $S$ is a finite set of states and $s_0 \in S$ is the initial state,
- $A$ is a finite set of actions, where $A(s)$ denotes the set of allowed actions in the state $s \in S$,
- $P : S \times A \times S \to [0, 1]$ is the transition probability function such that for all $s \in S$ and $a \in A(s)$, we have $\sum_{s' \in S} P(s, a, s') = 1$,
- $B \subseteq S$ is the set of accepting states. The set of rejecting states is $S \setminus B$.
A path of the MDP is an infinite state sequence $\sigma = s_0 s_1 s_2 \ldots$ such that for all $i \geq 0$, there exists $a \in A(s_i)$ with $P(s_i, a, s_{i+1}) > 0$. Given a path $\sigma$, the $i$-th state is denoted by $\sigma[i]$. We denote the prefix by $\sigma[{:}i]$ and the suffix by $\sigma[i{:}]$. We say a path $\sigma$ satisfies the Büchi objective if $\mathrm{Inf}(\sigma) \cap B \neq \emptyset$. Here, $\mathrm{Inf}(\sigma)$ denotes the set of states visited infinitely many times on $\sigma$.
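To make the Büchi condition concrete, here is a minimal Python sketch that checks $\mathrm{Inf}(\sigma) \cap B \neq \emptyset$ for a path given in lasso form (a finite prefix followed by a cycle repeated forever); the lasso representation and the state names are illustrative assumptions, not part of the definition above.

```python
def satisfies_buchi(prefix, cycle, accepting):
    """Check Inf(sigma) ∩ B ≠ ∅ for the lasso path prefix · cycle^ω.

    For an eventually periodic path, the states visited infinitely often
    are exactly the states appearing in the repeated cycle.
    """
    inf_states = set(cycle)                      # Inf(sigma) for a lasso path
    return not inf_states.isdisjoint(accepting)  # Inf(sigma) ∩ B ≠ ∅

# Illustrative path s0 s1 (s2 s3)^ω with accepting set B = {s2}.
print(satisfies_buchi(["s0", "s1"], ["s2", "s3"], {"s2"}))  # True
```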
Our model is valid as the LTL objective can be translated into a Büchi objective. The translation is done by constructing a product MDP from the original MDP and a limit-deterministic Büchi automaton [15] generated from the LTL objective. Instead of looking for the optimal memory-dependent policy for the LTL objective, one can find the optimal memoryless policy on the product MDP [15], since the maximum satisfaction probability of the Büchi objective can be achieved by a memoryless deterministic policy.
II-B Policy evaluation for Büchi objective via reachability
We have thus turned an LTL planning problem into seeking the policy that maximizes the satisfaction probability of the Büchi objective. Evaluation of the policy requires the calculation of the satisfaction probability, which can be done by calculating a reachability probability.
Definition 2
A memoryless policy is a function $\pi : S \to A$ such that $\pi(s) \in A(s)$ for all $s \in S$. Given an MDP $\mathcal{M}$ and a memoryless policy $\pi$, the Markov chain (MC) induced by the policy $\pi$ is a tuple $\mathcal{M}_\pi = (S, s_0, P_\pi, B)$ where $P_\pi(s, s') = P(s, \pi(s), s')$ for all $s, s' \in S$.
Under the policy $\pi$, for a path starting at state $s$, its satisfaction probability of the Büchi objective is defined as
$$P_\pi(s \models \text{Büchi}) = \Pr\nolimits_\pi\{\sigma : \mathrm{Inf}(\sigma) \cap B \neq \emptyset \mid \sigma[0] = s\}. \quad (1)$$
The calculation of the satisfaction probability can be done by calculating a reachability probability. The probability of a path under the policy $\pi$ satisfying the Büchi objective equals the probability of the path entering an accepting bottom strongly connected component (BSCC) of the induced MC $\mathcal{M}_\pi$.
Definition 3
A bottom strongly connected component (BSCC) of an MC is a strongly connected component without outgoing transitions. A strongly connected component of an MC is a communicating class, i.e., a maximal set of states that communicate with each other. A BSCC is rejecting¹ if all of its states are rejecting states. Otherwise, we call it an accepting BSCC.

¹Here we call a state $s \in B$ an accepting state and a state $s \notin B$ a rejecting state. Notice that an accepting state cannot exist in a rejecting BSCC, while a rejecting state may exist in an accepting BSCC.
Any path on the MC will eventually enter a BSCC and stay there. Any path entering an accepting BSCC will visit accepting states infinitely many times, therefore satisfying the Büchi objective.
The calculation of the reachability probability can be done via dynamic programming after one detects all the BSCCs. This approach is adopted by probabilistic model-checking and requires complete model knowledge.
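For reference, the graph-analysis step can be sketched as follows; the dictionary encoding of the induced MC and the brute-force reachability check are assumptions made for this small illustration, not the algorithm used by model checkers.

```python
def bottom_sccs(successors):
    """Return the BSCCs of a finite MC given as {state: set of successor states}.

    Brute-force sketch: two states communicate iff each reaches the other;
    a communicating class is a BSCC iff it has no outgoing transition.
    """
    states = list(successors)

    def reachable(src):
        seen, stack = {src}, [src]
        while stack:
            for nxt in successors[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    reach = {s: reachable(s) for s in states}
    sccs, assigned = [], set()
    for s in states:
        if s in assigned:
            continue
        scc = {t for t in states if t in reach[s] and s in reach[t]}
        assigned |= scc
        sccs.append(scc)
    return [scc for scc in sccs if all(successors[t] <= scc for t in scc)]

# Toy chain: s0 -> s1, s1 -> s2, s2 -> s1; the only BSCC is {s1, s2}.
print(bottom_sccs({"s0": {"s1"}, "s1": {"s2"}, "s2": {"s1"}}))
```

A BSCC found this way is accepting if it contains at least one accepting state, and the reachability probability to the union of the accepting BSCCs can then be computed by dynamic programming as described above.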
II-C Surrogate reward that approximates the satisfaction probability
A surrogate reward allows one to calculate the satisfaction probability even without knowledge of the BSCCs. This equivalent representation of the Büchi objective allows one to recast an LTL objective as a discounted reward.
We study the surrogate reward for the Büchi objective proposed in [8], which is also widely used in [10, 11, 12, 13, 16]. This surrogate reward is designed in a way that its value function approximates the satisfaction probability. It consists of a reward function and a state-dependent discount factor function with two discount factors $\gamma_B, \gamma \in (0, 1]$,
$$R(s) = \begin{cases} 1 - \gamma_B, & s \in B \\ 0, & s \notin B \end{cases} \qquad \Gamma(s) = \begin{cases} \gamma_B, & s \in B \\ \gamma, & s \notin B. \end{cases} \quad (2)$$
A positive reward is collected only when an accepting state is visited along the path. For this surrogate reward, the $n$-step return ($n \in \mathbb{N}$ or $n = \infty$) of a path $\sigma$ from time $t$ is
$$G_{t:t+n}(\sigma) = \sum_{i=t}^{t+n-1} \Big( \prod_{j=t}^{i-1} \Gamma(\sigma[j]) \Big) R(\sigma[i]). \quad (3)$$
Here, the definition follows a standard discounted-reward setting [17] but allows state-dependent discounting. Suppose the discount factor $\gamma = 1$. If a path satisfies the Büchi objective, its return is the sum of a geometric series, $G_{0:\infty}(\sigma) = \sum_{k=0}^{\infty} \gamma_B^{\,k} (1 - \gamma_B) = 1$.
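As an illustration of (2) and (3), the sketch below evaluates the truncated return of a periodic path; the concrete path, the accepting set, and the numerical discount factors are assumptions for the example only.

```python
def reward(s, accepting, gamma_b):
    # R(s) = 1 - gamma_B on accepting states and 0 otherwise, as in (2).
    return 1.0 - gamma_b if s in accepting else 0.0

def discount(s, accepting, gamma_b, gamma):
    # Gamma(s) = gamma_B on accepting states and gamma otherwise, as in (2).
    return gamma_b if s in accepting else gamma

def truncated_return(path, accepting, gamma_b, gamma):
    # G = sum_i ( prod_{j<i} Gamma(path[j]) ) * R(path[i]), as in (3).
    total, running_discount = 0.0, 1.0
    for s in path:
        total += running_discount * reward(s, accepting, gamma_b)
        running_discount *= discount(s, accepting, gamma_b, gamma)
    return total

# Path (s0 s1 s2)^ω truncated after 200 repetitions, with B = {s2} and gamma = 1:
# each visit to s2 adds (1 - gamma_B) * gamma_B^k, so the return approaches 1.
print(truncated_return(["s0", "s1", "s2"] * 200, {"s2"}, gamma_b=0.9, gamma=1.0))
```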
The value function $V_\pi(s)$ is the expected return conditional on the path starting at $s$ under the policy $\pi$, i.e., $V_\pi(s) = \mathbb{E}_\pi[G_{0:\infty}(\sigma) \mid \sigma[0] = s]$. With $\gamma = 1$, it is related to the Büchi objective as follows,
$$V_\pi(s) = P_\pi(s \models \text{Büchi}) + \mathbb{E}_\pi\big[G_{0:\infty}(\sigma) \mid \sigma[0] = s, \sigma \not\models \text{Büchi}\big]\, P_\pi(s \not\models \text{Büchi}), \quad (4)$$
where $P_\pi(s \not\models \text{Büchi})$ stands for the probability of a path not satisfying the Büchi objective conditional on the path starting at $s$. As $\gamma_B$, $\gamma$ get close to $1$, the value function becomes close to $P_\pi(s \models \text{Büchi})$ as
$$\lim_{\gamma_B \to 1^-} \lim_{\gamma \to 1^-} V_\pi(s) = P_\pi(s \models \text{Büchi}). \quad (5)$$
The setting $\gamma_B < 1$ and $\gamma < 1$ is critical for solving a discounted reward problem using value iteration or Q-learning. These methods are guaranteed to find the optimal policy for the surrogate reward when $\gamma_B, \gamma < 1$. To make sure the optimal policy for the surrogate reward is also the optimal policy for the LTL objective, one has to take $\gamma_B$, $\gamma$ as close to $1$ as possible to reduce the error between the value function and the satisfaction probability. For $\gamma_B$, one can never take $\gamma_B = 1$, as $1 - \gamma_B = 0$ cannot serve as a positive reward. Setting $\gamma = 1$ seems to work in several works. However, [14] exposes that setting $\gamma = 1$ would break the uniqueness of the solution of the Bellman equation, thus hindering a correct evaluation of the satisfaction probability.
Remark 1
Other surrogate rewards based on Büchi and Rabin automata have been studied but have flaws. The surrogate reward of [6], based on limit-deterministic Büchi automata, assigns a constant reward to "good" states with a constant discount factor. This approach is technically flawed, as demonstrated by [7]. The surrogate reward of [5], based on Rabin automata, assigns constant positive rewards to certain "good" states and negative rewards to "bad" states. However, this surrogate reward function is also not technically correct, as demonstrated in [18].
III Problem Formulation and main result
This section introduces a key challenge that arises when one calculates the satisfaction probability, and our answer to it. Specifically, one needs to utilize the recursive formulation of the Bellman equation to solve it. Thus, two conditions are usually required: the Bellman equation must have a unique solution, and a discount must act on every dynamic programming update. However, our surrogate reward breaks both conditions as it allows $\gamma = 1$. Here, we investigate how dynamic programming calculates the satisfaction probability under this state-dependent discounting.
III-A Bellman equation and sufficient condition for the uniqueness of the solution
Given a policy, the value function satisfies the Bellman equation.² The Bellman equation is derived from the fact that the value of the current state is equal to the expectation of the current reward plus the discounted value of the next state. For the surrogate reward in equation (2), the Bellman equation is given as follows:
$$V_\pi(s) = R(s) + \Gamma(s) \sum_{s' \in S} P_\pi(s, s')\, V_\pi(s'). \quad (6)$$

²We call the equation under a fixed policy the Bellman equation and its counterpart with a maximization over actions the Bellman optimality equation.

Lemma 1 ([14])
When $\gamma = 1$, the Bellman equation (6) may admit multiple solutions. The value function $V_\pi$ is the unique solution of (6) that equals zero on every state inside a rejecting BSCC of the induced MC $\mathcal{M}_\pi$.
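Written out per state type using the reward and discount in (2), equation (6) becomes the following case split; it is a direct substitution, shown only to make explicit where the discounting disappears when $\gamma = 1$.

$$V_\pi(s) = \begin{cases} (1 - \gamma_B) + \gamma_B \sum_{s' \in S} P_\pi(s, s')\, V_\pi(s'), & s \in B, \\ \gamma \sum_{s' \in S} P_\pi(s, s')\, V_\pi(s'), & s \notin B. \end{cases}$$

With $\gamma = 1$, the second case is an undiscounted average of neighboring values, which is why the additional condition of Lemma 1 is needed to single out $V_\pi$ among the solutions.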
III-B Open question on the convergence of dynamic programming
Based on Lemma 1, dynamic programming is a way to compute the value function using the Bellman equation. Given an initialization $\hat{V}_0$ of the approximate value function and the Bellman equation (6), we iteratively apply the dynamic programming update
$$\hat{V}_{k+1} = \begin{bmatrix} (1 - \gamma_B)\,\mathbf{1}_{n_B} \\ \mathbf{0}_{n_R} \end{bmatrix} + \begin{bmatrix} \gamma_B I_{n_B} & 0 \\ 0 & \gamma I_{n_R} \end{bmatrix} P_\pi \hat{V}_k, \quad (7)$$
where $n_B$ is the number of accepting states, $n_R$ is the number of rejecting states, and $\mathbf{1}_{n_B}$ and $\mathbf{0}_{n_R}$ are column vectors with all-one and all-zero elements, respectively. Here the states are ordered with the accepting states first, and $P_\pi$ is the transition matrix of the induced MC. We expect the approximate value function $\hat{V}_k$ to converge to the value function $V_\pi$ during the updates.
Suppose we initialize $\hat{V}_0$ as a zero vector. Then the sufficient condition for the uniqueness of the solution holds for every $\hat{V}_k$, and the value function is the only fixed point of the above update. However, a convergence guarantee is still needed to show that we can compute the satisfaction probability using the above dynamic programming update.
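A minimal sketch of the update (7) for a given induced MC is shown below; the transition matrix, the accepting set, and the fixed iteration count are placeholder assumptions rather than part of the method.

```python
import numpy as np

def dp_policy_evaluation(P, accepting_mask, gamma_b, gamma, num_iters=1000):
    """Iterate the dynamic programming update (7) starting from V_hat_0 = 0.

    P              : (n, n) row-stochastic transition matrix of the induced MC.
    accepting_mask : boolean vector, True for accepting states (the set B).
    """
    reward = np.where(accepting_mask, 1.0 - gamma_b, 0.0)       # R(s) from (2)
    state_discount = np.where(accepting_mask, gamma_b, gamma)   # Gamma(s) from (2)
    v_hat = np.zeros(P.shape[0])                                # V_hat_0 = 0
    for _ in range(num_iters):
        v_hat = reward + state_discount * (P @ v_hat)           # update (7)
    return v_hat

# Toy deterministic cycle s0 -> s1 -> s2 -> s0 with B = {s2} (illustrative only);
# the satisfaction probability is 1 from every state, and the result is ~[1, 1, 1].
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
print(dp_policy_evaluation(P, np.array([False, False, True]), gamma_b=0.9, gamma=1.0))
```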
When $\gamma_B, \gamma < 1$, the convergence is guaranteed by the one-step contraction
$$\|\hat{V}_{k+1} - V_\pi\|_\infty \leq \max(\gamma_B, \gamma)\, \|\hat{V}_k - V_\pi\|_\infty. \quad (8)$$
When $\gamma = 1$, this contraction no longer holds, and the convergence to the value function is still an open question. This motivates us to study the following problem.
Problem Formulation: For an MDP with Büchi objective given by Definition 1 and the surrogate reward (2) with $\gamma = 1$, and given a policy $\pi$, start with $\hat{V}_0 = \mathbf{0}$ and show that the approximate value function $\hat{V}_k$ updated by dynamic programming (7) converges to the value function $V_\pi$ as $k \to \infty$. Also, give an upper bound for the error $\|\hat{V}_k - V_\pi\|_\infty$.
In the following, we assume a fixed policy $\pi$, and we omit the subscript $\pi$ from most notation when its meaning is clear from the context.
III-C Overview on main result
When $\gamma = 1$, we find that a multi-step contraction
$$\|\hat{V}_{k+n} - V\|_\infty \leq c\, \|\hat{V}_k - V\|_\infty \quad (9)$$
exists for the dynamic programming update, where $n$ is a positive integer and $c \in (0, 1)$ is a constant. Even though the rejecting states lack discounting, their reachability to the accepting states still provides contraction. With this finding, we claim the convergence guarantee as follows.
Theorem 1
For an MDP with Büchi objective under a fixed policy $\pi$, let $\hat{V}_k$ be the approximate value function obtained from the dynamic programming update (7) with $\hat{V}_0 = \mathbf{0}$. If $\gamma_B, \gamma < 1$, then
$$\|\hat{V}_k - V\|_\infty \leq \max(\gamma_B, \gamma)^k\, \|\hat{V}_0 - V\|_\infty; \quad (10)$$
if $\gamma_B < 1$ and $\gamma = 1$, then
$$\|\hat{V}_k - V\|_\infty \leq c^{\lfloor k/n \rfloor}\, \|\hat{V}_0 - V\|_\infty, \quad (11)$$
where $n$ and $c \in (0, 1)$ are the constants of the multi-step contraction (9). In both cases, $\hat{V}_k$ converges exponentially to $V$ as $k \to \infty$.
As the theorem claims, one can use the surrogate reward and dynamic programming to compute the satisfaction probability of the Büchi objective on the MDP. Thus, the following corollary holds.
Corollary 1
Consider the product MDP constructed from the original MDP and the limit-deterministic Büchi automaton [15] generated from the LTL objective. Given a policy on the product MDP, the surrogate reward and dynamic programming can be used to compute the satisfaction probability of the LTL objective on the original MDP.
Here, we give the convergence guarantee based on the discount factors and on the reachability to the discounted states. The first bound is commonly seen for dynamic programming, as each update provides a one-step contraction when $\gamma_B, \gamma < 1$. The second bound relies on the multi-step contraction shown in the following section.
IV Multi-step contraction and proof of the main result
In this section, we will formally prove the main result by exploiting the reachability from undiscounted rejecting states to discounted accepting states. First, we simplify the dynamic programming update (7) using Lemma 1. Then, we show the infinite norm of the error vector will surely be contracted when the update is applied enough times.
IV-A Dynamic programming for states outside rejecting BSCCs
The sufficient condition in Lemma 1 always holds as we initialize $\hat{V}_0 = \mathbf{0}$, since the approximate value function on the rejecting BSCCs stays at zero. We can therefore simplify the dynamic programming update (7) by dropping all states inside rejecting BSCCs,
$$\begin{bmatrix} \hat{V}_B^{k+1} \\ \hat{V}_R^{k+1} \end{bmatrix} = \begin{bmatrix} (1 - \gamma_B)\,\mathbf{1} \\ \mathbf{0} \end{bmatrix} + D \begin{bmatrix} \hat{V}_B^{k} \\ \hat{V}_R^{k} \end{bmatrix}, \quad (12)$$
where
$$D = \begin{bmatrix} \gamma_B P_{BB} & \gamma_B P_{BR} \\ \gamma P_{RB} & \gamma P_{RR} \end{bmatrix}. \quad (13)$$
Here, the state space is partitioned into three sets of states $B$, $S_C$, $S_R$, representing the set of accepting states, the set of states in the rejecting BSCCs, and the set of remaining rejecting states. $\hat{V}_B^k$, $\hat{V}_R^k$ are the vectors listing the approximate value function for all $s \in B$, $s \in S_R$, respectively. The matrix $D$ collects the (discounted) transitions between states in $B \cup S_R$. The sub-matrix $P_{BR}$, etc., contains the transition probabilities from one set of states to another.
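A minimal sketch of how the matrix in (13) can be assembled once the three index sets are known; identifying the rejecting BSCCs themselves requires the graph analysis sketched in Section II-B and is taken as given here.

```python
import numpy as np

def build_discounted_matrix(P, acc_idx, rej_bscc_idx, gamma_b, gamma=1.0):
    """Assemble the matrix D of (13): rows and columns restricted to B ∪ S_R,
    with the rows of accepting states discounted by gamma_B and the rows of
    the remaining rejecting states discounted by gamma (here gamma = 1)."""
    n = P.shape[0]
    rem_idx = [i for i in range(n) if i not in acc_idx and i not in rej_bscc_idx]
    keep = list(acc_idx) + rem_idx                  # state ordering: [B ; S_R]
    D = P[np.ix_(keep, keep)].astype(float)
    D[: len(acc_idx), :] *= gamma_b                 # discount the accepting rows
    D[len(acc_idx):, :] *= gamma                    # gamma = 1 leaves S_R rows undiscounted
    return D

# Illustrative use with the deterministic cycle s0 -> s1 -> s2 -> s0, B = {s2}:
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
print(build_discounted_matrix(P, acc_idx=[2], rej_bscc_idx=[], gamma_b=0.9))
```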
At each dynamic programming update, the approximation error is updated by the linear mapping
$$e_{k+1} = D\, e_k, \quad (14)$$
where
$$e_k = \begin{bmatrix} \hat{V}_B^{k} - V_B \\ \hat{V}_R^{k} - V_R \end{bmatrix}. \quad (15)$$
$V_B$, $V_R$ are the vectors listing the value function for the states in $B$ and $S_R$, respectively. When setting $\gamma = 1$, the requirement for the one-step contraction (8), which is $\|D\|_\infty < 1$, does not hold.
IV-B Multi-step contraction
Even with $\gamma = 1$, we can still show that there exists an $n$ such that $\|D^n\|_\infty < 1$. The multi-step update is given as
$$e_{k+n} = D^n e_k. \quad (16)$$
By showing that each row sum of $D^n$ is strictly less than the corresponding row sum of $\bar{P}^n$, where $\bar{P}$ denotes the matrix $D$ with $\gamma_B = \gamma = 1$ (i.e., the undiscounted transition matrix restricted to $B \cup S_R$), we guarantee $\|D^n\|_\infty < 1$. In $\bar{P}^n$, each element $\bar{P}^n(s, s')$ is the probability of a path starting at $s$ reaching $s'$ in the next $n$ steps while staying in $B \cup S_R$; thus each row sum of $\bar{P}^n$ is at most $1$. In $D^n$, each element is a "discounted" version of $\bar{P}^n(s, s')$,
$$D^n(s, s') = \sum_{\substack{\sigma[0] = s,\ \sigma[n] = s' \\ \sigma[i] \in B \cup S_R}} d(\sigma) \prod_{j=0}^{n-1} P_\pi(\sigma[j], \sigma[j+1]), \quad (17)$$
where
$$d(\sigma) = \prod_{j=0}^{n-1} \Gamma(\sigma[j]) = \gamma_B^{\,|\{\, j < n \,:\, \sigma[j] \in B \,\}|}. \quad (18)$$
Whenever a one-step transition starting from an accepting state happens, a discount is applied in (18), making $D^n(s, s') \leq \bar{P}^n(s, s')$.
The row sum of $D^n$ shall be strictly less than the corresponding row sum of $\bar{P}^n$ if the probability of visiting an accepting state within the next $n$ steps is greater than zero. Thus, the contraction is brought in by the reachability from the rejecting states to the accepting states, which is formalized as the following lemma.
Lemma 2
Starting from a state in $S_R$, the probability of a path not visiting the set $B \cup S_C$ within $n_R = |S_R|$ steps is upper bounded by $1 - p_{\min}^{\,n_R}$, where $p_{\min}$ denotes the minimum non-zero transition probability of the induced MC.
Proof:
A transition leaving $S_R$ must exist since every state in $S_R$ is either
1. a recurrent state inside an accepting BSCC: any path starting from such a recurrent state will eventually meet an accepting state, so at least one path leaves the set $S_R$; or
2. a transient state that will enter a BSCC: any path starting from such a transient state will eventually enter an accepting BSCC or a rejecting BSCC, and either way a transition leaving the set $S_R$ will happen.

Thus, for every state $s \in S_R$, there exists at least one path that leaves $S_R$ within $n_R = |S_R|$ steps, so
$$\Pr\nolimits_\pi\big(\sigma[i] \in S_R \text{ for all } i \leq n_R \mid \sigma[0] = s\big) \leq 1 - p_{\min}^{\,n_R}. \quad (19)$$
The existence of such a path can be shown by constructing a path starting at $s$ that never visits any state in $S_R$ more than once. One can use the diameter of a graph to prove this existence formally; we omit it for space considerations. ∎
By this reachability property, we show the multi-step contraction, which is technically described by the following lemma.
Lemma 3
When $\gamma = 1$, for the approximation error updated by equation (14), there always exist a constant $c \in (0, 1)$ and a positive integer $n$ such that
$$\|e_{k+n}\|_\infty \leq c\, \|e_k\|_\infty. \quad (20)$$
Proof:
For the first $n_B$ rows of $D^n$, every path starts at an accepting state and receives discounting at the beginning. Hence, for all $s \in B$ and each element $D^n(s, s')$, we have $D^n(s, s') \leq \gamma_B \bar{P}^n(s, s')$, so these row sums are at most $\gamma_B$.

The remaining rows need additional treatment, as discounting may not happen. However, we can rule out the case when no discount happens by setting $n = n_R + 1$, where $n_R = |S_R|$.

We split the sum of each row of $\bar{P}^n$ into two parts,
$$\sum_{s'} \bar{P}^n(s, s') = q_1(s) + q_2(s), \quad (21)$$
where $q_1(s)$ collects the probability of all paths that do not visit an accepting state in the first $n_R$ steps and thus do not get discounted, and $q_2(s)$ collects the rest. Suppose $q_1(s) = 1$; then the sum of the entire row of $\bar{P}^n$ would be $1$. However, Lemma 2 prohibits this from happening,
$$q_1(s) \leq 1 - p_{\min}^{\,n_R}. \quad (22)$$
The second part $q_2(s)$ in (21) represents all paths that visit an accepting state within the first $n_R$ steps at least once. Thus, in the corresponding row sum of $D^n$, which sums the "discounted" probabilities (17), the first part stays the same, while the second part is discounted at least once,
$$\sum_{s'} D^n(s, s') \leq q_1(s) + \gamma_B\, q_2(s). \quad (23)$$
Thus, the sum of each row of $D^n$ is at most $c$, where $c = 1 - (1 - \gamma_B)\, p_{\min}^{\,n_R} < 1$. And we have
$$\|e_{k+n}\|_\infty \leq \|D^n\|_\infty \|e_k\|_\infty \leq c\, \|e_k\|_\infty. \quad (24)$$
∎
The multi-step update always provides a multi-step contraction, so we can upper bound the convergence of approximation error.
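Lemma 3 can be checked numerically by raising $D$ to successive powers until its infinite norm drops below one; the matrix below is the illustrative one assembled in the previous sketch, not an example taken from the paper.

```python
import numpy as np

def contraction_step(D, max_power=1000):
    """Find the smallest n with ||D^n||_inf < 1, the multi-step contraction of
    Lemma 3. Since all entries of D are nonnegative, the infinite norm equals
    the maximum row sum."""
    Dn = np.eye(D.shape[0])
    for n in range(1, max_power + 1):
        Dn = Dn @ D
        norm = Dn.sum(axis=1).max()
        if norm < 1.0:
            return n, norm
    raise RuntimeError("no contraction found up to max_power")

# D for the deterministic cycle s0 -> s1 -> s2 -> s0 with accepting state s2
# listed first, gamma_B = 0.9 and gamma = 1.
D = np.array([[0.0, 0.9, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
print(contraction_step(D))  # (3, 0.9): a three-step contraction with c = gamma_B
```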
IV-C Proof of Theorem 1
In the case of $\gamma = 1$, we get the $n$-step update of the approximation error as
$$e_{(m+1)n} = D^n e_{mn}, \quad m = 0, 1, 2, \ldots \quad (25)$$
By Lemma 3, after every $n$-step update, the infinite norm of the error must shrink by the constant $c$,
$$\|e_{(m+1)n}\|_\infty \leq c\, \|e_{mn}\|_\infty. \quad (26)$$
As $m \to \infty$, the error $\|e_{mn}\|_\infty \to 0$. Meanwhile, since the sum of each row of $D$ is at most $1$, the error will not grow at any one-step update. Thus, we have the convergence
$$\|e_k\|_\infty \leq c^{\lfloor k/n \rfloor}\, \|e_0\|_\infty \to 0 \quad \text{as } k \to \infty. \quad (27)$$
Theorem 1 also considers the case when $\gamma_B, \gamma < 1$. There, we get the upper bound for the approximation error as
$$\|\hat{V}_k - V\|_\infty \leq \max(\gamma_B, \gamma)^k\, \|\hat{V}_0 - V\|_\infty. \quad (28)$$
The upper bound instantly holds for the states inside rejecting BSCCs, as starting the approximate value function at $\hat{V}_0 = \mathbf{0}$ guarantees $\hat{V}_C^k = V_C = \mathbf{0}$ for all $k$, where $V_C$ is the vector listing the value function for all states inside the rejecting BSCCs.
V Case Study
In this section, we show that our upper bound holds for a three-state Markov chain, shown in Fig. 1. The states $s_0$, $s_1$ are in $S_R$ and $s_2$ is the only accepting state. All transitions are deterministic. The discount factor setting here is $\gamma = 1$, with $\gamma_B$ fixed at a value less than $1$. The dynamic programming update follows (12).
For this chain, the infinite norm of the matrix $D^n$ in (16) equals $1$ for $n = 1, 2$ and drops to $\gamma_B$ at $n = 3$.
The approximate value function is initialized as $\hat{V}_0 = \mathbf{0}$; thus the initial approximation error is $e_0 = -V$ with $\|e_0\|_\infty = 1$, and during the first three updates we have $\|e_1\|_\infty = \|e_2\|_\infty = 1$ and $\|e_3\|_\infty = \gamma_B$. Since the discount only acts on $s_2$, the error at a rejecting state can decrease only after the decreased error at $s_2$ has been propagated back to it. It takes three updates for $\|e_k\|_\infty$ to decrease from $1$ to $\gamma_B$.
The upper bound for the estimation error provided by Theorem 1 is $c^{\lfloor k/3 \rfloor}$, where $n = n_R + 1 = 3$ and $c = \gamma_B$ since all transitions are deterministic, i.e., $p_{\min} = 1$. Thus, our theorem yields the three-step contraction
$$\|e_{k+3}\|_\infty \leq \gamma_B\, \|e_k\|_\infty,$$
which captures the fact that $\|e_3\|_\infty = \gamma_B \|e_0\|_\infty$. The infinite norm of the error shall be contracted by $\gamma_B$ after every three dynamic programming updates.
In Fig. 2, the error decreases by $\gamma_B$ over the first three updates, aligning with our upper bound. However, the error then decreases by $\gamma_B$ after every two updates. The reason is that our bound on the infinite norm only considers the worst case: here, the two-step update already decreases part of the error vector by $\gamma_B$ every two updates, while the remaining component merely lags one update behind, so the initialisation $\hat{V}_0 = \mathbf{0}$ makes the infinite norm decrease after every two updates.
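Since Fig. 1 and Fig. 2 are not reproduced in this text, the following sketch replays the dynamic programming update on an assumed deterministic three-state chain ($s_0 \to s_1$, $s_1 \to s_2$, $s_2 \to s_1$, with $\gamma_B = 0.9$); both the transition structure and the numerical value of $\gamma_B$ are assumptions, chosen so that the printed error norms exhibit the behaviour described above.

```python
import numpy as np

# Assumed chain: s0 -> s1, s1 -> s2, s2 -> s1, with s2 the only accepting state.
gamma_b, gamma = 0.9, 1.0
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
accepting = np.array([False, False, True])

reward = np.where(accepting, 1.0 - gamma_b, 0.0)
state_discount = np.where(accepting, gamma_b, gamma)   # gamma = 1 on rejecting states

v_true = np.ones(3)   # the satisfaction probability is 1 from every state
v_hat = np.zeros(3)   # V_hat_0 = 0
for k in range(1, 9):
    v_hat = reward + state_discount * (P @ v_hat)      # update (7)
    print(k, np.max(np.abs(v_hat - v_true)))
# The infinite norm stays at 1 for two updates, drops to gamma_B = 0.9 at the
# third, and afterwards shrinks by gamma_B after every two updates.
```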

VI Conclusion and future work
This work answers a challenge in using the surrogate reward with two discount factors for complex LTL objectives. Specifically, we show that one can always find the value function using dynamic programming even if discounting does not act on many states. We discuss the convergence of dynamic programming and show that a multi-step contraction exists once the dynamic programming update has been applied enough times. Our findings have implications for the correct policy evaluation of LTL objectives.
Our future effort is to investigate whether the convergence result still holds as we apply value iteration and Q-learning. The challenge is that once the policy is updated, it may induce a new MC, which may have different rejecting BSCCs. The sufficient condition for the uniqueness of the solution of the Bellman equation may not hold during policy updates.
References
- [1] A. Pnueli, “The temporal logic of programs,” in Annual Symposium on Foundations of Computer Science, 1977.
- [2] C. Baier and J.-P. Katoen, Principles of Model Checking. The MIT Press, 2008.
- [3] G. Fainekos, H. Kress-Gazit, and G. Pappas, “Temporal Logic Motion Planning for Mobile Robots,” in Proceedings of the 2005 IEEE International Conference on Robotics and Automation. IEEE, 2005, pp. 2020–2025.
- [4] H. Kress-Gazit, G. E. Fainekos, and G. J. Pappas, “Temporal-logic-based reactive mission and motion planning,” IEEE Transactions on Robotics, vol. 25, no. 6, pp. 1370–1381, 2009.
- [5] D. Sadigh, E. S. Kim, S. Coogan, S. S. Sastry, and S. A. Seshia, “A learning based approach to control synthesis of Markov decision processes for linear temporal logic specifications,” in 53rd IEEE Conference on Decision and Control, 2014, pp. 1091–1096.
- [6] M. Hasanbeig, Y. Kantaros, A. Abate, D. Kroening, G. J. Pappas, and I. Lee, “Reinforcement learning for temporal logic control synthesis with probabilistic satisfaction guarantees,” in 2019 IEEE 58th Conference on Decision and Control (CDC), 2019, pp. 5338–5343.
- [7] E. M. Hahn, M. Perez, S. Schewe, F. Somenzi, A. Trivedi, and D. Wojtczak, “Faithful and effective reward schemes for model-free reinforcement learning of omega-regular objectives,” in Automated Technology for Verification and Analysis: 18th International Symposium, ATVA 2020, Hanoi, Vietnam, October 19–23, 2020, Proceedings. Springer-Verlag, 2020, pp. 108–124.
- [8] A. K. Bozkurt, Y. Wang, M. M. Zavlanos, and M. Pajic, “Control Synthesis from Linear Temporal Logic Specifications using Model-Free Reinforcement Learning,” in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 10 349–10 355.
- [9] P. Ashok, J. Křetínský, and M. Weininger, “PAC Statistical Model Checking for Markov Decision Processes and Stochastic Games,” in Computer Aided Verification. Springer International Publishing, 2019, pp. 497–519.
- [10] C. Voloshin, A. Verma, and Y. Yue, “Eventual Discounting Temporal Logic Counterfactual Experience Replay,” in Proceedings of the 40th International Conference on Machine Learning. PMLR, 2023, pp. 35 137–35 150.
- [11] D. Shao and M. Kwiatkowska, “Sample Efficient Model-free Reinforcement Learning from LTL Specifications with Optimality Guarantees,” in Thirty-Second International Joint Conference on Artificial Intelligence, vol. 4, 2023, pp. 4180–4189.
- [12] M. Cai, M. Hasanbeig, S. Xiao, A. Abate, and Z. Kan, “Modular deep reinforcement learning for continuous motion planning with temporal logic,” IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 7973–7980, 2021.
- [13] H. Hasanbeig, D. Kroening, and A. Abate, “Certified reinforcement learning with logic guidance,” Artificial Intelligence, vol. 322, no. C, 2023.
- [14] Z. Xuan, A. Bozkurt, M. Pajic, and Y. Wang, “On the uniqueness of solution for the Bellman equation of LTL objectives,” in Proceedings of the 6th Annual Learning for Dynamics & Control Conference. PMLR, Jun. 2024, pp. 428–439.
- [15] S. Sickert, J. Esparza, S. Jaax, and J. Křetínský, “Limit-Deterministic Büchi Automata for Linear Temporal Logic,” in Computer Aided Verification, vol. 9780. Springer International Publishing, 2016, pp. 312–332.
- [16] M. Cai, S. Xiao, J. Li, and Z. Kan, “Safe reinforcement learning under temporal logic with reward design and quantum action selection,” Scientific Reports, vol. 13, no. 1, p. 1925, 2023.
- [17] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, second edition ed. The MIT Press, 2018.
- [18] E. M. Hahn, M. Perez, S. Schewe, F. Somenzi, A. Trivedi, and D. Wojtczak, “Omega-regular objectives in model-free reinforcement learning,” in Tools and Algorithms for the Construction and Analysis of Systems. Springer International Publishing, 2019, pp. 395–412.