Examining average and discounted reward optimality criteria in reinforcement learning
Abstract
In reinforcement learning (RL), the goal is to obtain an optimal policy, for which the optimality criterion is fundamentally important. Two major optimality criteria are average and discounted rewards. While the latter is more popular, it is problematic to apply in environments without an inherent notion of discounting. This motivates us to revisit a) the progression of optimality criteria in dynamic programming, b) justification for and complication of an artificial discount factor, and c) benefits of directly maximizing the average reward criterion, which is discounting-free. Our contributions include a thorough examination of the relationship between average and discounted rewards, as well as a discussion of their pros and cons in RL. We emphasize that average-reward RL methods possess the ingredient and mechanism for applying a family of discounting-free optimality criteria (Veinott, 1969) to RL.
1 Introduction
Reinforcement learning (RL) is concerned with sequential decision making, where a decision maker has to choose an action based on its current state. Determining the best actions amounts to finding an optimal mapping from every state to a probability distribution over actions available at that state. Thus, one fundamental component of any RL method is the optimality criterion, by which we define what we mean by such an optimal mapping.
The most popular optimality criterion in RL is the discounted reward (Duan et al.,, 2016; Machado et al.,, 2018; Henderson et al.,, 2018). On the other hand, there is growing interest in the average reward optimality, as surveyed by Mahadevan, 1996a ; Dewanto et al., (2020). In this paper, we discuss both criteria in order to obtain a comprehensive understanding of their properties, relationships, and differences. This is important because the choice of optimality criteria affects almost every aspect of RL methods, including the policy evaluation function, the policy gradient formulation, and the resulting optimal policy, where the term policy refers to the above-mentioned mapping. Thus, the choice of optimality criteria eventually impacts the performance of an RL system (and the choices made within, e.g. approximation techniques and hyperparameters).
This paper presents a thorough examination of the connection between average and discounted rewards, as well as a discussion of their pros and cons in RL. Our examination here is devised through the broader lens of refined optimality criteria (which generalize average and discounted rewards), inspired by the seminal work of Mahadevan (1996b). It is also broader in the sense of algorithmic styles, covering value- and policy-iteration methods, as well as tabular and function-approximation settings in RL. In this paper, we also attempt to compile and unify various justifications for deviating from the common practice of maximizing the discounted reward criterion in RL.
There are a number of papers that specifically compare and contrast average to discounted rewards in RL. For example, Mahadevan, (1994) empirically investigated average versus discounted reward Q-learning (in the context of value-iteration RL). Tsitsiklis and Van Roy, (2002) theoretically compared average versus discounted reward temporal-difference (TD) methods for policy evaluation (in the context of policy-iteration RL). We provide updates and extensions to those existing comparison works in order to obtain a comprehensive view on discounting and discounting-free RL.
Since RL is approximate dynamic programming (DP), we begin by reviewing optimality criteria in DP, model classification (which plays an important role in average-reward optimality), and the interpretation of discounting in Sec 2. We then provide justifications for discounting in environments without any inherent notion of discounting (Sec 3). This is followed by the difficulties that arise from introducing an artificial discount factor (Sec 4). We discuss the benefits of maximizing the average-reward criterion in Sec 5, as well as our findings and viewpoint in Sec 6.
2 Preliminaries
Sequential decision making is often formulated as a Markov decision process (MDP) with a state set $\mathcal{S}$, an action set $\mathcal{A}$, a reward set $\mathcal{R}$, and a decision-epoch set $\mathcal{T}$. Here, all but $\mathcal{T}$ are finite sets, yielding an infinite-horizon finite MDP. At the current decision-epoch $t \in \mathcal{T}$, a decision maker (henceforth, an agent) is in a current state $s_t \in \mathcal{S}$, and chooses to then execute a current action $a_t \in \mathcal{A}$. Consequently, at the next decision-epoch $t+1$, it arrives in the next state $s_{t+1} \in \mathcal{S}$ and earns an (immediate) scalar reward $r_{t+1} \in \mathcal{R}$. The decision-epochs occur every single time unit (i.e. timestep) from $t = 0$ until the maximum timestep (denoted as $t_{\max}$); hence, we have a discrete-time MDP.
For $t_{\max} \to \infty$, the agent experiences a sequence (trajectory) of states $s_t$, actions $a_t$ and rewards $r_{t+1}$. That is, $s_0, a_0, r_1, s_1, a_1, r_2, \ldots$. The initial state, the next state, and the next reward are governed by the environment dynamics that is fully specified by three time-homogeneous (time-invariant) entities: i) the initial state distribution $p_0$, from which $S_0 \sim p_0$, ii) the one-(time)step state transition distribution $p(\cdot \mid s, a)$, from which $S_{t+1} \sim p(\cdot \mid S_t, A_t)$, and iii) the reward function $r(s, a)$, where $S$ and $A$ denote the state and action random variables, respectively. The existence of a probability space that holds this infinite sequence of random variables ($S_0, A_0, R_1, S_1, A_1, R_2, \ldots$) can be shown using the Ionescu-Tulcea theorem (Lattimore and Szepesvári, 2020, p513). We assume that the rewards are bounded, i.e. $|r(s, a)| \le r_{\max} < \infty$, and that the MDP is unichain and aperiodic (see Sec 2.2 for MDP classification).
The solution to a sequential decision making problem is an optimal mapping from every state to a probability distribution over the (available) action set in that state (in general, different states may have different action sets). It is optimal with respect to some optimality criterion, as discussed later in Sec 2.1. Any such mapping (regardless of whether it is optimal) is called a policy and generally depends on the timestep $t$. It is denoted as $\pi_t$, or alternatively $\pi_t(\cdot \mid s)$, where $\pi_t(a \mid s)$ indicates the probability of selecting action $a$ given the state $s$ at timestep $t$. Thus, each action is sampled from a conditional action distribution, i.e. $A_t \sim \pi_t(\cdot \mid S_t)$.
The most specific solution space is the stationary and deterministic policy set $\Pi^{SD}$, whose policies are stationary (time-invariant), i.e. $\pi_t = \pi$ for all $t$, as well as deterministic, i.e. $\pi(\cdot \mid s)$ has a single action support (hence the mapping is reduced to $\pi: \mathcal{S} \to \mathcal{A}$). In this work, we consider a more general policy set, namely the stationary policy set $\Pi^S$. It includes the stationary randomized (stochastic) set $\Pi^{SR}$ and its degenerate counterpart, the stationary deterministic set $\Pi^{SD}$.
2.1 Optimality criteria
In a basic notion of optimality, a policy with the largest value is optimal. That is,
$$\pi^*_\bullet \in \operatorname*{argmax}_{\pi \in \Pi^S} v_\bullet(\pi). \qquad (1)$$
Here, the function $v_\bullet$ measures the value (utility) of a policy $\pi$ based on the infinite reward sequence that is earned by an agent following $\pi$. The subscript $\bullet$ indicates the specific type of value function, which induces a specific $\bullet$-optimality criterion, hence the $\bullet$-optimal policy, denoted as $\pi^*_\bullet$.
One intuitive value function (this intuition seems to lead to the so-called reward hypothesis (Sutton and Barto, 2018, p53)) is the expected total reward. That is,
$$v_{\mathrm{tot}}(\pi, s_0) \coloneqq \lim_{t_{\max} \to \infty} \mathbb{E}\Big[ \sum_{t=0}^{t_{\max}-1} R_{t+1} \,\Big|\, S_0 = s_0, \pi \Big]. \qquad (2)$$
However, $v_{\mathrm{tot}}$ may be infinite (unbounded, divergent, non-summable); if a limit is equal to infinity, we assert that such a limit does not exist. No comparison can be made between finite and infinite policy values, nor between two infinite policy values. Howard (1960) therefore examined the expected average reward (also referred to as the gain), defined as
$$v_g(\pi, s_0) \coloneqq \lim_{t_{\max} \to \infty} \frac{1}{t_{\max}} \mathbb{E}\Big[ \sum_{t=0}^{t_{\max}-1} R_{t+1} \,\Big|\, S_0 = s_0, \pi \Big], \qquad (3)$$
which is finite for all $\pi \in \Pi^S$ and all $s_0 \in \mathcal{S}$. For more details, including the interpretation of the gain, we refer the reader to Dewanto et al. (2020).
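To make the gain concrete, the following minimal sketch (in Python with NumPy) computes $v_g(\pi)$ for a fixed policy both via the stationary distribution and via a long finite-horizon average; the 3-state chain `P_pi` and rewards `r_pi` are hypothetical values chosen only for illustration, not taken from this paper.

```python
import numpy as np

# Hypothetical induced chain and expected one-step rewards of a fixed policy pi.
P_pi = np.array([[0.1, 0.9, 0.0],
                 [0.0, 0.2, 0.8],
                 [0.7, 0.0, 0.3]])
r_pi = np.array([1.0, 0.0, 2.0])

# Stationary distribution p_star solves p_star = p_star @ P_pi.
evals, evecs = np.linalg.eig(P_pi.T)
p_star = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
p_star /= p_star.sum()

gain = p_star @ r_pi                       # v_g(pi): the long-run average reward
print("gain via stationary distribution:", gain)

# Cross-check with the time-average definition (3) over a long finite horizon.
t_max, P_t, total = 20_000, np.eye(3), np.zeros(3)
for _ in range(t_max):
    total += P_t @ r_pi                    # expected reward t steps ahead
    P_t = P_t @ P_pi
print("finite-horizon average from each state:", total / t_max)
```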
Alternatively, Blackwell (1962, Sec 4) tackled the infiniteness of (2) through the expected total discounted reward, which was studied earlier by Bellman (1957). (There are other discounting schemes, e.g. hyperbolic discounting and more general non-geometric discount sequences (Hutter, 2006; Lattimore and Hutter, 2014), but they do not exhibit the advantageous properties of geometric discounting (4), such as its decomposition via (6). Hence, those other schemes are not considered here.) That is,
$$v_\gamma(\pi, s_0) \coloneqq \mathbb{E}\Big[ \sum_{t=0}^{\infty} \gamma^t\, R_{t+1} \,\Big|\, S_0 = s_0, \pi \Big], \qquad (4)$$
with a discount factor $\gamma \in [0, 1)$. In particular, according to what is later known as the truncated Laurent series expansion (7), Blackwell suggested finding policies that are $\gamma$-discounted optimal for all discount factors sufficiently close to 1. He also established their existence in finite MDPs. Subsequently, Smallwood (1966) identified that the discount factor interval $[0, 1)$ can be divided into a finite number of intervals (see also Hordijk et al. (1985) and Feinberg and Shwartz (2002, Thm 2.16)), i.e. $0 \le \gamma_0 < \gamma_1 < \cdots < \gamma_m = 1$, in such a way that there exist policies $\pi^*_i$, for $i = 1, \ldots, m$, that are $\gamma$-discounted optimal for all $\gamma \in [\gamma_{i-1}, \gamma_i)$. This leads to the concept of Blackwell optimality. A policy $\pi^*_{Bw}$ is Blackwell optimal if there exists a critical discount factor $\gamma_{Bw} \in [0, 1)$ (critical in that it specifies how close to 1 is sufficient for attaining Blackwell optimal policies) such that
$$v_\gamma(\pi^*_{Bw}, s) \ge v_\gamma(\pi, s), \quad \text{for all } \gamma \in [\gamma_{Bw}, 1),\ \text{all } \pi \in \Pi^S,\ \text{and all } s \in \mathcal{S}. \qquad (5)$$
Note that whenever the policy value function depends not only on policies but also on the states from which the value is measured, the basic optimality (1) requires that the optimal policy has a value greater than or equal to the other policies' values in all states $s \in \mathcal{S}$.
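As a toy illustration of the Blackwell interval, the sketch below solves the $\gamma$-discounted problem by value iteration over a grid of discount factors and prints the greedy optimal policy; the 2-state, 2-action MDP and all numbers are assumptions made purely for illustration. The smallest $\gamma$ beyond which the printed policy stops changing gives a rough empirical indication of $\gamma_{Bw}$.

```python
import numpy as np

# Hypothetical MDP: transitions P[a, s, s'] and expected rewards R[a, s].
P = np.array([[[1.0, 0.0], [0.0, 1.0]],     # action 0: stay put
              [[0.0, 1.0], [1.0, 0.0]]])    # action 1: switch states
R = np.array([[1.0, 2.0],
              [0.0, 3.0]])

def gamma_optimal_policy(gamma, iters=5000):
    v = np.zeros(2)
    for _ in range(iters):                   # value iteration (a gamma-contraction)
        q = R + gamma * P @ v                # q[a, s]
        v = q.max(axis=0)
    return tuple(q.argmax(axis=0))           # greedy (deterministic) policy

for gamma in (0.1, 0.3, 0.45, 0.55, 0.7, 0.9, 0.99):
    print(f"gamma = {gamma}:  policy = {gamma_optimal_policy(gamma)}")
```

For this particular toy MDP the printed policy switches around $\gamma = 0.5$ and stays fixed thereafter, so every $\gamma$ above that threshold yields the same (Blackwell optimal) policy.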
In order to obtain Blackwell optimal policies, Veinott (1969, Eqn 27) introduced a family of new optimality criteria. That is, a policy $\pi^*_n$ is $n$-discount optimal for $n \ge -1$, if
$$\lim_{\gamma \to 1} \frac{1}{(1-\gamma)^{n}} \big( v_\gamma(\pi^*_n, s) - v_\gamma(\pi, s) \big) \ge 0, \quad \text{for all } \pi \in \Pi^S \text{ and all } s \in \mathcal{S}.$$
He showed that $\infty$-discount optimality (this is eventually refined to $(|\mathcal{S}| - m)$-discount optimality, where $m$ is the number of recurrent classes under the Blackwell optimal policy (Feinberg and Shwartz, 2002, Thm 8.3, Sec 8.1.4)) is equivalent to Blackwell optimality because the selectivity increases with $n$ (hence, $n$-discount optimality implies $n'$-discount optimality for all $n' \le n$). Then, he developed a policy-iteration algorithm (for finding $n$-discount optimal policies) that utilizes the full Laurent series expansion of $\boldsymbol{v}_\gamma(\pi)$ for any $\pi \in \Pi^S$, where the policy value vector $\boldsymbol{v}_\gamma(\pi) \in \mathbb{R}^{|\mathcal{S}|}$ is obtained by stacking the state-wise values $v_\gamma(\pi, s)$ altogether for all $s \in \mathcal{S}$. That is,
$$\boldsymbol{v}_\gamma(\pi) = (1+\rho) \sum_{n=-1}^{\infty} \rho^{n}\, \boldsymbol{y}_n(\pi), \qquad \text{with } \rho \coloneqq \frac{1-\gamma}{\gamma}, \qquad (6)$$
(using the additive identity $\tfrac{1+\rho}{\rho} = \tfrac{1}{1-\gamma}$)
$$\boldsymbol{v}_\gamma(\pi) = \frac{1}{1-\gamma}\, \boldsymbol{v}_g(\pi) + \boldsymbol{v}_b(\pi) + \boldsymbol{e}_\gamma(\pi), \qquad (7)$$
where $\boldsymbol{y}_n(\pi)$ for $n \ge -1$ denotes the expansion coefficients. The truncated form (7) first appeared (before the full one) in Blackwell (1962, Thm 4a), where $\boldsymbol{y}_{-1}(\pi) = \boldsymbol{v}_g(\pi)$ is equivalent to the gain from all states $s \in \mathcal{S}$, $\boldsymbol{y}_0(\pi) = \boldsymbol{v}_b(\pi)$ is equivalent to the so-called bias, and the error term $\boldsymbol{e}_\gamma(\pi)$ converges to $\boldsymbol{0}$ as $\gamma$ approaches 1, that is $\lim_{\gamma \to 1} \boldsymbol{e}_\gamma(\pi) = \boldsymbol{0}$.
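For a fixed policy, the gain and bias coefficients can be computed in closed form; the sketch below (same hypothetical 3-state chain as before) uses the limiting matrix and the deviation matrix, then numerically checks the truncated expansion (7).

```python
import numpy as np

P_pi = np.array([[0.1, 0.9, 0.0],
                 [0.0, 0.2, 0.8],
                 [0.7, 0.0, 0.3]])
r_pi = np.array([1.0, 0.0, 2.0])
n = len(r_pi)
I = np.eye(n)

# Limiting matrix P_star: for a unichain, aperiodic chain all rows equal p_star.
evals, evecs = np.linalg.eig(P_pi.T)
p_star = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
p_star /= p_star.sum()
P_star = np.tile(p_star, (n, 1))

# Deviation matrix H_pi = (I - P_pi + P_star)^{-1} (I - P_star).
H_pi = np.linalg.inv(I - P_pi + P_star) @ (I - P_star)

gain = P_star @ r_pi                        # y_{-1}(pi): constant across states
bias = H_pi @ r_pi                          # y_0(pi)

# Check (7): v_gamma ~ gain/(1 - gamma) + bias, with a vanishing error term.
gamma = 0.999
v_gamma = np.linalg.solve(I - gamma * P_pi, r_pi)
print("gain:", gain)
print("bias:", bias)
print("v_gamma - gain/(1-gamma):", v_gamma - gain / (1 - gamma))  # close to bias
```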
2.2 MDP classification
A stationary policy $\pi$ of a finite MDP induces a stationary finite Markov chain (MC) with an $|\mathcal{S}|$-by-$|\mathcal{S}|$ one-step transition stochastic matrix $\boldsymbol{P}_\pi$, whose $(s, s')$-th entry indicates
$$[\boldsymbol{P}_\pi]_{s s'} = p_\pi(s' \mid s) \coloneqq \sum_{a \in \mathcal{A}} \pi(a \mid s)\, p(s' \mid s, a). \qquad (8)$$
Therefore, an MDP can be classified on the basis of the set of MCs induced by its stationary policies. To this end, we need to review the classification of states, then of MCs below; mainly following Puterman, (1994, Appendix A) and Douc et al., (2018, Ch 6.2).
State classification:
Let $\boldsymbol{P}^t_\pi$ denote the $t$-th power of $\boldsymbol{P}_\pi$ of a Markov chain. This is another $|\mathcal{S}|$-by-$|\mathcal{S}|$ stochastic matrix, whose $(s, s')$-entry indicates $p^t_\pi(s' \mid s)$, i.e. the probability of being in state $s'$ in $t$ timesteps starting from a state $s$.
A state $s'$ is accessible from state $s$, denoted as $s \to s'$, if $p^t_\pi(s' \mid s) > 0$ for some $t \ge 0$. Furthermore, $s$ and $s'$ communicate if $s \to s'$ and $s' \to s$.
A subset $\mathcal{C}$ of $\mathcal{S}$ is called a closed set if no state outside $\mathcal{C}$ is accessible from any state in $\mathcal{C}$. Furthermore, a subset $\mathcal{C}$ of $\mathcal{S}$ is called a recurrent class (or a recurrent chain) if all states in $\mathcal{C}$ communicate and $\mathcal{C}$ is closed. In a finite MC, there exists at least one recurrent class.
Let $N(s)$ denote the number of visits to a state $s$ under a policy $\pi$. That is, $N(s) \coloneqq \sum_{t=0}^{\infty} \mathbb{I}\{S_t = s\}$, where $\mathbb{I}$ is an identity (indicator) operator. Then, the expected number of visits to a state $s$ starting from the state itself under a policy $\pi$ is given by $\mathbb{E}[N(s) \mid S_0 = s, \pi] = \sum_{t=0}^{\infty} p^t_\pi(s \mid s)$.
A state $s$ is recurrent if and only if $\mathbb{E}[N(s) \mid S_0 = s, \pi] = \infty$. This means that a recurrent state will be re-visited infinitely often. Equivalently, starting from a recurrent state $s$, the probability of returning to itself for the first time in finite time is 1.
A state $s$ is transient if and only if $\mathbb{E}[N(s) \mid S_0 = s, \pi] < \infty$. This means that a transient state will never be re-visited again after some point in time. Equivalently, starting from a transient state $s$, the probability of returning to itself for the first time in finite time is less than 1. A transient state is a non-recurrent state (hence, every state is either recurrent or transient).
The period of a state $s$ is defined as the greatest common divisor of all $t \ge 1$ for which $p^t_\pi(s \mid s) > 0$. Whenever the period is greater than 1, we classify $s$ as periodic. Otherwise, $s$ is aperiodic (not periodic). Intuitively, if returning to a state $s$ occurs at irregular times, then $s$ is aperiodic. Note that periodicity is a class property, implying that all states in a recurrent class have the same period.
An ergodic class (or an ergodic chain) is a class that is both recurrent and aperiodic. Its recurrent and aperiodic states are referred to as ergodic states. Note however, that the term “ergodic” is also used in the ergodic theory (mathematics), in which its definition does not involve the notion of aperiodicity.
Markov chain (MC) classification:
An MC is irreducible if the set of all its states forms a single recurrent class. An irreducible MC is aperiodic if all its states are aperiodic. Otherwise, it is periodic. An MC is reducible if it has both recurrent states and transient states. The state set of a reducible MC can be partitioned into one or more disjoint recurrent classes, plus a set of transient states.
An MC is unichain if it consists of a single (uni-) recurrent class (chain), plus a (possibly empty) set of transient states. Otherwise, it is multichain.
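The following sketch (pure NumPy; the transition matrices are hypothetical) classifies a finite MC as irreducible, unichain, or multichain directly from the definitions above, using only reachability.

```python
import numpy as np

def classify_chain(P, tol=1e-12):
    """Classify a finite Markov chain given its transition matrix P."""
    n = P.shape[0]
    reach = (P > tol) | np.eye(n, dtype=bool)          # one-step reachability
    for _ in range(n):                                  # transitive closure
        reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)
    # A state s is recurrent iff every state reachable from s can reach s back.
    recurrent = np.array([all(reach[t, s] for t in range(n) if reach[s, t])
                          for s in range(n)])
    # Recurrent classes = communicating classes restricted to recurrent states.
    classes = {frozenset(np.flatnonzero(reach[s] & reach[:, s]))
               for s in np.flatnonzero(recurrent)}
    if len(classes) == 1:
        return "irreducible" if recurrent.all() else "unichain"
    return "multichain"

P_unichain = np.array([[0.5, 0.5, 0.0],     # state 0 is transient
                       [0.0, 0.5, 0.5],
                       [0.0, 0.5, 0.5]])
P_multichain = np.array([[1.0, 0.0],        # two disjoint recurrent classes
                         [0.0, 1.0]])
print(classify_chain(P_unichain), classify_chain(P_multichain))
```

An MDP-level classifier would apply this check to the chain $\boldsymbol{P}_\pi$ induced by every stationary deterministic policy, which is feasible only for very small MDPs.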
MDP classification based on the pattern of MCs induced by all stationary policies:
An MDP is unichain if the induced MC corresponding to every stationary policy is unichain. Otherwise, it is multichain (i.e. at least one induced MC is multichain, containing two or more recurrent classes). This is a restrictive definition of a multichain MDP. In the literature, its loose definition is also used, where it simply means a general MDP.
A recurrent MDP is a special case of unichain MDPs in which the MC corresponding to every stationary policy is irreducible. In other words, a recurrent MDP is a unichain MDP all of whose induced MCs have an empty transient state set.
MDP classification based on the pattern of state accessibility under some stationary policy:
An MDP is communicating (also termed strongly connected) if, for every pair of states $s$ and $s'$ in the state set $\mathcal{S}$, there exists a stationary policy (which may depend on $s$ and $s'$) under which $s'$ is accessible from $s$. A recurrent MDP is always communicating, but a communicating MDP may not be recurrent; it may be unichain or multichain.
An MDP is weakly communicating (also termed simply connected) if there exists a closed state set $\mathcal{S}^c \subseteq \mathcal{S}$, for which there exists a stationary policy under which $s'$ is accessible from $s$ for every pair of states $s, s' \in \mathcal{S}^c$, plus a (possibly empty) set of states that are transient under every stationary policy. This weakly communicating classification is more general than the communicating one in that any communicating MDP is weakly communicating without any state that is transient under every stationary policy. Unichain MDPs are always weakly communicating.
There exists an MDP that is not weakly communicating. Such a non-weakly-communicating MDP is always multichain, but a multichain MDP may also be weakly communicating (or communicating).
2.3 Discounting in environments with an inherent notion of discounting
An environment with an inherent notion of discounting has a discount factor $\gamma$ that encodes one of the following entities. Thus, $\gamma$ is part of the environment specification (definition, description).
Firstly, the time value of rewards, i.e. the value of a unit reward $t$ timesteps in the future is $\gamma^t$: this is related to psychological concepts. For example, some people prefer rewards now rather than later (Mischel et al., 1972), hence they assign greater values to early rewards through a small $\gamma$ (being shortsighted). It is also natural to believe that there is more certainty about the near future than the far future, because immediate rewards are (exponentially more) likely due to recent actions. The time preference is also well-motivated in economics (Samuelson, 1937). This includes $\gamma$ accounting for the decreasing value of money (because of inflation), as well as the interpretation of $(1-\gamma)/\gamma$ as a positive interest rate. Moreover, commercial activities have failure (abandonment) risk due to changing government regulations and consumer preferences over time.
Secondly, the uncertainty about random termination independent of the agent’s actions: Such termination comes from external control beyond the agent, e.g. someone shutting down a robot, engine failure (due to weather/natural disaster), or death of any living organisms.
In particular, whenever the random termination time $T$ follows a geometric distribution, we have the following identity between the total and discounted rewards,
$$\mathbb{E}_{T}\Big[\, \mathbb{E}\Big[ \sum_{t=0}^{T-1} R_{t+1} \,\Big|\, S_0 = s_0, \pi \Big] \Big] = \mathbb{E}\Big[ \sum_{t=0}^{\infty} \gamma^t R_{t+1} \,\Big|\, S_0 = s_0, \pi \Big], \qquad (9)$$
where the discount factor $\gamma$ plays the role of the geometric distribution parameter (Puterman, 1994, Prop 5.3.1). This discounting implies that at every timestep (for any state-action pair), the agent has a probability of $1-\gamma$ of entering the 0-reward absorbing terminal state; see Fig 4(a). Note that because $\gamma$ is invariant to states and actions (as well as time), this way of capturing the randomness of $T$ may be inaccurate in cases where termination depends on states, actions, or both.
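A quick Monte-Carlo sanity check of (9): in the sketch below, the rewards follow a hypothetical deterministic stream (an assumption purely for illustration), the termination time is drawn from a geometric distribution with success probability $1-\gamma$, and the empirical mean of the truncated total reward is compared with the discounted sum.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9

def reward(t):                              # hypothetical reward at timestep t
    return 0.99 ** t

# Left-hand side of (9) read as a discounted sum (tail truncated numerically).
discounted = sum(gamma ** t * reward(t) for t in range(2_000))

# Right-hand side: total reward up to a random termination time T.
n_runs, total = 100_000, 0.0
for _ in range(n_runs):
    T = rng.geometric(1 - gamma)            # number of steps until termination
    total += sum(reward(t) for t in range(T))
print(discounted, total / n_runs)           # the two numbers should roughly agree
```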
3 Discounting without an inherent notion of discounting
From now on, we focus on environments without an inherent notion of discounting, where $\gamma$ is not part of the environment specification (cf. Sec 2.3). We emphasize the qualification "inherent" since any MDP can always be thought of as having some notion of discounting from the Blackwell optimality point of view (5). This is because a Blackwell optimal policy is guaranteed to exist in finite MDPs (Puterman, 1994, Thm 10.1.4); this implies the existence of its critical discount factor $\gamma_{Bw}$ and of a (potentially very long) finite-horizon MDP model that gives exactly the same Blackwell-optimal policy as its infinite-horizon counterpart (Chang et al., 2013, Ch 1.3).
When there is no inherent notion of discounting, the discount factor $\gamma$ is imposed to bound the sums (Sec 2.1) and becomes part of the solution method (algorithm). This is what we refer to as artificial discounting, which induces artificial interpretations such as, for instance, those described in Sec 2.3. The $\gamma_{Bw}$ (mentioned in the previous paragraph) is one such artificial discount factor. In addition to bounding the sum, we observe other justifications that have been made for introducing an artificial $\gamma$. They are explained in the following Secs 3.1, 3.2 and 3.3.
3.1 Approximation to the average reward (the gain) as $\gamma$ approaches 1
For recurrent MDPs, gain optimality is the most selective criterion (for an intuitive explanation of the selectivity of optimality criteria, refer to Fig 1) because there are no transient states (Feinberg and Shwartz, 2002, Sec 3.1). This implies that a gain optimal policy is also Blackwell optimal in recurrent MDPs, for which one should target the gain optimality criterion. Nonetheless, the following relationships exist between the average-reward and discounted-reward value functions of a policy $\pi$.
Firstly, for every state $s \in \mathcal{S}$,
$$v_g(\pi) = \lim_{\gamma \to 1}\, (1-\gamma)\, v_\gamma(\pi, s), \qquad (10)$$
$$v_g(\pi) = (1-\gamma) \sum_{s \in \mathcal{S}} p^\pi_\star(s)\, v_\gamma(\pi, s), \quad \text{for any } \gamma \in [0, 1), \qquad (11)$$
where $p^\pi_\star(s)$ denotes the stationary probability of a state $s$, i.e. the long-run (steady-state) probability of being in state $s$ when the MC begins in $s_0$. Here, (10) is obtained by multiplying the left and right hand sides of (7) by $(1-\gamma)$, then taking the limit of both sides as $\gamma \to 1$. The derivation of (11) begins with taking the expectation of $v_\gamma(\pi, S)$ with respect to $S \sim p^\pi_\star$, then utilizes the discounted-reward Bellman (expectation) equation and the stationarity of $p^\pi_\star$. That is,
$$\mathbb{E}_{S \sim p^\pi_\star}\big[v_\gamma(\pi, S)\big] = \mathbb{E}_{S \sim p^\pi_\star}\Big[ r_\pi(S) + \gamma \sum_{s' \in \mathcal{S}} p_\pi(s' \mid S)\, v_\gamma(\pi, s') \Big] = v_g(\pi) + \gamma\, \mathbb{E}_{S \sim p^\pi_\star}\big[v_\gamma(\pi, S)\big],$$
which can be re-arranged to obtain (11). Here, $r_\pi(s) \coloneqq \sum_{a \in \mathcal{A}} \pi(a \mid s)\, r(s, a)$, whereas $p_\pi(s' \mid s)$ is defined in (8). It is interesting that any discount factor $\gamma \in [0, 1)$ maintains the equality in (11), which was also proved by Sutton and Barto (2018, p254).
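The state-independence of (11) is easy to verify numerically; the sketch below reuses the hypothetical chain from Sec 2.1 and shows that $(1-\gamma)\,\mathbb{E}_{S \sim p^\pi_\star}[v_\gamma(\pi, S)]$ equals the gain for every $\gamma$, whereas $(1-\gamma)\, v_\gamma(\pi, s)$ only approaches it as $\gamma \to 1$, as in (10).

```python
import numpy as np

P_pi = np.array([[0.1, 0.9, 0.0],
                 [0.0, 0.2, 0.8],
                 [0.7, 0.0, 0.3]])
r_pi = np.array([1.0, 0.0, 2.0])
I = np.eye(3)

evals, evecs = np.linalg.eig(P_pi.T)
p_star = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
p_star /= p_star.sum()
gain = p_star @ r_pi

for gamma in (0.3, 0.7, 0.99):
    v_gamma = np.linalg.solve(I - gamma * P_pi, r_pi)
    print(f"gamma={gamma}:",
          "(11) gives", (1 - gamma) * (p_star @ v_gamma),   # equals gain exactly
          "| (10) gives", (1 - gamma) * v_gamma)            # -> gain as gamma -> 1
print("gain:", gain)
```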
The second relationship pertains to the gradient of the gain when a parameterized policy $\pi(\cdot \mid s; \theta)$ is used. By notationally suppressing the policy parameterization $\theta$ and the dependency on $\pi$, as well as using $\nabla$ to denote $\nabla_\theta$, this relation can be expressed as
$$\nabla v_g = \lim_{\gamma \to 1} \sum_{s \in \mathcal{S}} p^\pi_\star(s) \sum_{a \in \mathcal{A}} \nabla \pi(a \mid s)\, q_\gamma(s, a), \qquad (12)$$
$$\nabla v_g = \sum_{s \in \mathcal{S}} p^\pi_\star(s) \sum_{a \in \mathcal{A}} \nabla \pi(a \mid s)\, q_\gamma(s, a) \;+\; (1-\gamma) \sum_{s \in \mathcal{S}} \nabla p^\pi_\star(s)\, v_\gamma(\pi, s), \qquad (13)$$
for all $\gamma \in [0, 1)$ in (13). Notice that the right hand sides (RHS's) of (12) and (13) involve the discounted-reward value functions, i.e. the state value function $v_\gamma$ in (4) and the corresponding (state-)action value function $q_\gamma$. They are related via $v_\gamma(\pi, s) = \sum_{a \in \mathcal{A}} \pi(a \mid s)\, q_\gamma(s, a)$. The identity (12) was shown by Baxter and Bartlett (2001, Thm 2), whereas (13) was derived from (11) by Morimura et al. (2010, Appendix A).
Thus, for attaining average-reward optimality, one can maximize $v_\gamma$, but merely as an approximation to $v_g$ because the equality in (10) holds only in the limit, i.e. setting $\gamma$ exactly to 1 is prohibited by definition (4). A similar limiting behaviour applies to (12), where a large $\gamma < 1$ is used to approximately compute the gain gradient $\nabla v_g$, yielding approximately gain-optimal policies. In particular, Jin and Sidford (2021, Lemma 3) proved that an $\epsilon$-gain-optimal policy is equivalent to an $\epsilon'$-discounted-optimal policy with a certain value $\epsilon'$ that depends on $\epsilon$, $\gamma$, and a parameter describing some property of the target MDP. Here, a policy $\hat{\pi}$ is said to be $(\epsilon, \bullet)$-optimal for any positive $\epsilon$ and a criterion $\bullet$ if its values satisfy $v_\bullet(\hat{\pi}, s) \ge v_\bullet(\pi^*_\bullet, s) - \epsilon$, for all $s \in \mathcal{S}$.
The aforementioned approximation to $v_g$ by $v_\gamma$ with $\gamma \to 1$ seems to justify the use of the stationary state distribution $p^\pi_\star$ (instead of the (improper) discounted state distribution in (14) below) to weight the errors in recurrent states in approximate discounted policy evaluation; see e.g. Dann et al. (2014, Eqn 4). In fact, the identity in (10) implies the following connection,
$$\boldsymbol{D}^\pi_\gamma \coloneqq (\boldsymbol{I} - \gamma \boldsymbol{P}_\pi)^{-1} = \sum_{t=0}^{\infty} \gamma^t\, \boldsymbol{P}^t_\pi, \qquad (14)$$
$$\lim_{\gamma \to 1}\, (1-\gamma)\, \boldsymbol{D}^\pi_\gamma = \boldsymbol{P}^\star_\pi \coloneqq \lim_{t_{\max} \to \infty} \frac{1}{t_{\max}} \sum_{t=0}^{t_{\max}-1} \boldsymbol{P}^t_\pi. \qquad (15)$$
Here, the $s$-th row of $\boldsymbol{P}^\star_\pi$ contains the probability values of the stationary state distribution $p^\pi_\star$. On the other hand, the $s$-th row of $\boldsymbol{D}^\pi_\gamma$ contains the values of the improper discounted state distribution, which is improper because each row sums to $1/(1-\gamma)$ (rather than to 1). Importantly, this connection suggests that $p^\pi_\star$ is suitable as weights in the discounted value approximator's error function only when $\gamma$ is very close to 1. Otherwise, the (normalized) discounted state distribution may be more suitable.
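The connection in (14)-(15) can be inspected directly: the sketch below (hypothetical chain and initial distribution, as before) computes the normalized discounted state distribution and shows it approaching the stationary distribution as $\gamma \to 1$.

```python
import numpy as np

P_pi = np.array([[0.1, 0.9, 0.0],
                 [0.0, 0.2, 0.8],
                 [0.7, 0.0, 0.3]])
p0 = np.array([1.0, 0.0, 0.0])              # hypothetical initial distribution

evals, evecs = np.linalg.eig(P_pi.T)
p_star = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
p_star /= p_star.sum()

for gamma in (0.5, 0.9, 0.999):
    # Normalized discounted state distribution: (1-gamma) * p0 (I - gamma P)^{-1}.
    d_gamma = (1 - gamma) * p0 @ np.linalg.inv(np.eye(3) - gamma * P_pi)
    print(f"gamma={gamma}: d_gamma = {d_gamma}")
print("p_star :", p_star)
```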
It is also an approximation in (11) whenever $v_\gamma$ is weighted by some initial state distribution $p_0$ (such as in (18)) or by transient state distributions, which generally differ from $p^\pi_\star$. In (13), the second RHS term is typically ignored since calculating $\nabla p^\pi_\star$ is difficult in RL settings (the difficulty of computing the gradient of state distributions, such as $\nabla p^\pi_\star$, in RL motivates the development of the policy gradient theorem (Sutton and Barto, 2018, Ch 13.2)). Consequently, $\nabla v_g$ is approximated (closely whenever $\gamma$ is close to 1) by the first RHS term of (13), then by sampling the state $S \sim p^\pi_\star$ (after the agent interacts long "enough" with its environment).
Moreover, approximately maximizing the average reward via discounting is favourable because the discounting formulation has several mathematical virtues, as described in Sec 3.3.
3.2 A technique for attaining the most selective optimality with $\gamma \in [\gamma_{Bw}, 1)$
As discussed in the previous Sec 3.1, $\gamma$-discounted optimality approximates the gain optimality as $\gamma \to 1$. This is desirable since the gain optimality is the most selective in recurrent MDPs. In unichain MDPs, however, the gain optimality is generally underselective since the gain ignores the rewards earned in transient states (for an illustrative example, see Fig 1). Consequently, multiple gain-optimal policies may prescribe different action selections (earning different rewards) in transient states. The underselectiveness of gain optimality (equivalent to $(-1)$-discount optimality) can be refined up to the most selective optimality by increasing the value of $n$ from $-1$ to $0$ (or higher if needed for unichain MDPs) in the family of $n$-discount optimality (Sec 2.1).
Interestingly, such a remedy towards the most selective criterion can also be achieved by specifying a discount factor that lies in Blackwell's interval, i.e. $\gamma \in [\gamma_{Bw}, 1)$, for the $\gamma$-discounted optimality (5). This is because the resulting $\pi^*_\gamma$, which is also called a Blackwell optimal policy, is also optimal for all $n \ge -1$ in $n$-discount optimality (Puterman, 1994, Thm 10.1.5). Moreover, Blackwell optimality is always the most selective regardless of the MDP classification, inheriting the classification-invariance property of $\gamma$-discounted optimality. Thus, artificial discounting can be interpreted as a technique to attain the most selective criterion (i.e. Blackwell optimality) whenever $\gamma \ge \gamma_{Bw}$, not only in recurrent but also in unichain MDPs, as well as in the most general multichain MDPs.
Targeting Blackwell optimality (instead of gain optimality) is imperative, especially for episodic environments (those with at least one terminal state; once the agent enters the terminal state, the agent-environment interaction terminates) that are commonly modelled as infinite-horizon MDPs, so that the stationary policy set is a sufficient space in which to look for optimal policies. (In finite-horizon MDPs, the optimal policy is generally non-stationary: it depends on the number of remaining timesteps in addition to the state. Intuitively, if we (approximately) had only 1 week to live, would we act the same way as if we (approximately) had 10 years to live? In basketball, a long shot attempt (from beyond half court) is optimal only in the final seconds of the game.) Such modelling is carried out by augmenting the state set with a 0-reward absorbing terminal state (denoted by $s_T$), as shown in Fig 4(a). This yields a unichain MDP with a non-empty set of transient states. For such $s_T$-models, the gain is trivially $0$ for all stationary policies, so that gain optimality is underselective. The $0$-discount optimality improves the selectivity. It may be the most selective (hence, equivalent to Blackwell optimality) in some cases. Otherwise, it is underselective as well, hence some higher $n$-discount optimality criterion should be used, e.g. for an MDP shown in Fig 1. It is also worth noting that in $s_T$-models, $0$-discount optimality is equivalent to the total reward optimality whose (2) is finite (Puterman, 1994, Prop 10.4.2). The total reward optimality therefore may also be underselective in some cases.
Towards obtaining Blackwell optimal policies in unichain MDPs, the relationship between maximizing the $\gamma$-discounted and $n$-discount criteria can be summarized as follows (see also Fig 6).
$$\Pi^*_{\gamma \ge \gamma_{Bw}} = \begin{cases} \Pi^*_{n=-1} = \displaystyle\operatorname*{argmax}_{\pi \in \Pi^S}\ \lim_{\gamma \to 1}\, (1-\gamma)\, v_\gamma(\pi, s), & \text{for recurrent MDPs}, \\[1ex] \Pi^*_{n=0} = \displaystyle\operatorname*{argmax}_{\pi \in \Pi^*_{n=-1}}\ \lim_{\gamma \to 1}\Big( v_\gamma(\pi, s) - \frac{v_g(\pi)}{1-\gamma} \Big), & \text{for unichain MDPs whose most selective criterion is } n = 0, \end{cases} \qquad (16)$$
for all $s \in \mathcal{S}$, where $\Pi^*_n$ denotes the $n$-discount optimal policy set for $n \ge -1$, and $\Pi^*_{\gamma \ge \gamma_{Bw}}$ denotes the set of Blackwell optimal policies.
From the second case in the RHS of (16), we know that a Blackwell optimal policy can be computed by taking the limit of a function involving $v_\gamma$ and $v_g$ as $\gamma$ approaches 1. This means that for unichain MDPs, the Blackwell optimal policies may be obtained by setting $\gamma$ very close to 1, similar to that for recurrent MDPs (the first case), but not the same, since the function of which the limit is taken differs.
In practice, the limits in the first and second cases in (16) are computed approximately using a discount factor $\gamma$ close to unity, which is likely to lie in Blackwell's interval, i.e. $\gamma \in [\gamma_{Bw}, 1)$. Paradoxically, however, this does not necessarily attain Blackwell optimality because the finite-precision computation involving $\gamma \approx 1$ yields a quite accurate estimation of the limit values: maximizing the first and second cases in the RHS of (16) with a high $\gamma$ attains (approximately) gain ($n = -1$) and bias ($n = 0$) optimality respectively, which may be underselective in unichain MDPs. Consequently, in practice, the most selective Blackwell optimality can always be achieved using a $\gamma$ that is at least as high as $\gamma_{Bw}$ but not too close to 1. That is, $\gamma \in [\gamma_{Bw}, \gamma_{ub}]$ for some practical upper bound $\gamma_{ub} < 1$, which is computation (hence, implementation) dependent.
3.3 Contraction, variance reduction, and independence from MDP classification
Discounted reward optimality is easier to deal with than its average reward counterpart. This can be attributed to three main factors as follows.
Firstly, the discounted-reward theory holds regardless of the classification of the induced MCs (Sec 2.2), whereas that of the average reward involves such classification. Because in RL settings, the transition probability is unknown, average reward algorithms require estimation or assumption about the chain classification, specifically whether unichain or multichain. Nevertheless, note that such (assumed) classification is needed in order to apply a specific (simpler) class of average-reward algorithms: leveraging the fact that a unichain MDP has a single scalar gain (associated with its single chain) that is constant across all states, whereas a multichain MDP generally has different gain values associated with its multiple chains.
Secondly, the discounted-reward Bellman optimality operator $\mathcal{T}_\gamma$ is contractive, where the discount factor $\gamma$ serves as the contraction modulus (hence, $\mathcal{T}_\gamma$ is a $\gamma$-contraction). That is,
$$\|\mathcal{T}_\gamma \boldsymbol{v} - \mathcal{T}_\gamma \boldsymbol{v}'\|_\infty \le \gamma\, \|\boldsymbol{v} - \boldsymbol{v}'\|_\infty, \quad \text{for all } \boldsymbol{v}, \boldsymbol{v}' \in \mathbb{R}^{|\mathcal{S}|},$$
where in state-wise form, $[\mathcal{T}_\gamma \boldsymbol{v}](s) \coloneqq \max_{a \in \mathcal{A}} \{ r(s, a) + \gamma \sum_{s' \in \mathcal{S}} p(s' \mid s, a)\, v(s') \}$, and $\|\cdot\|_\infty$ denotes the maximum norm. This means that $\mathcal{T}_\gamma$ makes $\boldsymbol{v}$ and $\boldsymbol{v}'$ closer by at least a factor $\gamma$, such that the sequence of iterates $\boldsymbol{v}_{k+1} = \mathcal{T}_\gamma \boldsymbol{v}_k$ converges to the unique fixed point of $\mathcal{T}_\gamma$ as $k \to \infty$ from any initial $\boldsymbol{v}_0$. This is based on Banach's fixed-point theorem (Szepesvári, 2010, Thm A.9). In particular, as $k \to \infty$, we obtain $\boldsymbol{v}_k \to \boldsymbol{v}^*_\gamma$, where $\boldsymbol{v}^*_\gamma$ denotes the optimal discounted value, i.e. $v^*_\gamma(s) = \max_{\pi \in \Pi^S} v_\gamma(\pi, s)$. Thus, $\mathcal{T}_\gamma \boldsymbol{v}^*_\gamma = \boldsymbol{v}^*_\gamma$. In the absence of $\gamma < 1$, the contraction no longer holds. This is the case for the average-reward Bellman optimality operator. As a result, the basic value iteration based on it is not guaranteed to converge (Mahadevan, 1996a, Sec 2.4.3).
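The contraction claim can be observed empirically; the sketch below applies the discounted Bellman optimality operator of the earlier hypothetical 2-state MDP to random pairs of value vectors and confirms that the max-norm distance shrinks by at least a factor $\gamma$.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])    # hypothetical P[a, s, s']
R = np.array([[1.0, 2.0],
              [0.0, 3.0]])                  # hypothetical R[a, s]
gamma = 0.9

def bellman_opt(v):                         # discounted Bellman optimality operator
    return (R + gamma * P @ v).max(axis=0)

for _ in range(5):
    v1, v2 = rng.normal(size=2), rng.normal(size=2)
    lhs = np.max(np.abs(bellman_opt(v1) - bellman_opt(v2)))
    rhs = gamma * np.max(np.abs(v1 - v2))
    print(f"{lhs:.4f} <= {rhs:.4f}")        # contraction inequality holds
```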
The aforementioned contraction property also applies to the discounted-reward Bellman expectation operator, i.e. $[\mathcal{T}^\pi_\gamma \boldsymbol{v}](s) \coloneqq \sum_{a \in \mathcal{A}} \pi(a \mid s) \{ r(s, a) + \gamma \sum_{s' \in \mathcal{S}} p(s' \mid s, a)\, v(s') \}$. As a consequence of Banach's fixed-point theorem, we have $\lim_{k \to \infty} \boldsymbol{v}_k = \boldsymbol{v}_\gamma(\pi)$, where $\mathcal{T}^\pi_\gamma$ is iteratively applied on any initial $\boldsymbol{v}_0$ for $\boldsymbol{v}_{k+1} = \mathcal{T}^\pi_\gamma \boldsymbol{v}_k$, whereas $\boldsymbol{v}_\gamma(\pi)$ is the discounted policy value of a policy $\pi$. In other words, $\boldsymbol{v}_\gamma(\pi)$ is the unique fixed point of $\mathcal{T}^\pi_\gamma$ such that $\boldsymbol{v}_\gamma(\pi) = \boldsymbol{r}_\pi + \gamma \boldsymbol{P}_\pi \boldsymbol{v}_\gamma(\pi)$, which is known as the discounted-reward Bellman evaluation equation ($\gamma$-BEE). This $\gamma$-BEE is further utilized to derive a TD-based parametric value approximator whose convergence depends on the fact that $\gamma < 1$. That is, the approximator's error minimizer formula involves the inverse $(\boldsymbol{I} - \gamma \boldsymbol{P}_\pi)^{-1}$, which exists (Sutton and Barto, 2018, p206). This is in contrast to the matrix $(\boldsymbol{I} - \boldsymbol{P}_\pi)$, whose inverse does not exist (Puterman, 1994, p596).
In addition, the contractive nature induced by $\gamma$ is also utilized for computing state similarity metrics (Castro, 2020; Ferns et al., 2006). There, $\gamma < 1$ guarantees the convergence of the metric operator to its fixed point (when such an operator is iteratively applied). Note that the notion of state similarity plays an important role in, for example, state aggregation and state representation learning.
Third, discounting can be used to reduce the variance of, for example, policy gradient estimates, at the cost of bias-errors (Baxter and Bartlett, 2001, Thm 3; Kakade, 2001, Thm 1). In particular, the variance (respectively, the bias-error) increases (decreases) as a function of $\gamma$. This is because the effective number of timesteps (horizon) can be controlled by $\gamma$. This is also related to the fact that the infinite-horizon discounted reward (4) can be $\epsilon$-approximated by a finite horizon proportional to $1/(1-\gamma)$, as noted by Tang and Abbeel (2010, Footnote 1). That is, the discarded discounted tail beyond a horizon $t_\epsilon$ is bounded by
$$\sum_{t = t_\epsilon}^{\infty} \gamma^t\, r_{\max} = \frac{\gamma^{t_\epsilon}\, r_{\max}}{1-\gamma} \le \epsilon.$$
This is then re-arranged to obtain $\gamma^{t_\epsilon} \le \epsilon (1-\gamma)/r_{\max}$, whose both sides are taken to the logarithm with base $\gamma$ to yield
$$t_\epsilon = \Big\lceil \log_\gamma \frac{\epsilon\,(1-\gamma)}{r_{\max}} \Big\rceil,$$
where $\lceil \cdot \rceil$ indicates the smallest integer greater than or equal to its argument.
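The resulting $\epsilon$-effective horizon is easy to tabulate; in the sketch below the tolerance and reward bound are arbitrary assumptions.

```python
import math

def effective_horizon(gamma, eps=1e-2, r_max=1.0):
    """Smallest horizon whose discarded discounted tail is at most eps."""
    return math.ceil(math.log(eps * (1 - gamma) / r_max, gamma))

for gamma in (0.9, 0.99, 0.999):
    print(gamma, effective_horizon(gamma))   # grows roughly like 1 / (1 - gamma)
```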
4 Artificial discount factors are sensitive and troublesome
The artificial discount factor (which is part of the solution method) is said to be sensitive because the performance of RL methods often depends largely on $\gamma$. Fig 2(a) illustrates this phenomenon using $Q_\gamma$-learning with various $\gamma$ values. As can be seen, a higher $\gamma$ leads to slower convergence, whereas a lower $\gamma$ leads to suboptimal policies (with respect to the most selective criterion, which in this case is gain optimality since the MDP is recurrent). This trade-off is elaborated more in Secs 4.1 and 4.2. The sensitivity to $\gamma$ has also been observed in the error (and even in whether there is convergence or divergence) of approximate policy evaluation methods with function approximators (Scherrer, 2010, Fig 1; Sutton and Barto, 2018, Example 11.1).
Additionally, Fig 3 shows the effect of various $\gamma$ values on the optimization landscapes on which policy gradient methods look for the maximizer of the discounted value $v_\gamma$. It is evident that different $\gamma$ values induce different maximizing policy parameters. From the 2D visualization in Fig 3, we can observe that the landscape of $v_\gamma$ begins to look like that of the gain $v_g$ when $\gamma$ is around $\gamma_{Bw}$, then resembles it more and more as $\gamma$ approaches 1. When $\gamma$ is far less than $\gamma_{Bw}$, the maximizer of $v_\gamma$ does not coincide with that of $v_g$. In such cases, some local maximizer of $v_\gamma$ may be desirable (instead of the global one) because of its proximity to the maximizer of $v_g$, which represents the optimal point of the most selective criterion for the recurrent MDPs examined in Fig 3.
The artificial $\gamma$ is troublesome because its critical value, i.e. $\gamma_{Bw}$, is difficult to determine, even in DP where the transition and reward functions are known (Hordijk et al., 1985). This is exacerbated by the fact that $\gamma_{Bw}$ is specific to each environment instance (even within the same environment family, as shown in Fig 2(b)). Nevertheless, knowing this critical value is always desirable. For example, although gain optimality can be attained by having $\gamma$ very close to 1, setting $\gamma = \gamma_{Bw}$ (or some value around it) leads not only to convergence to the optimal gain (or close to it) but also to faster convergence, as demonstrated by $Q_\gamma$-learning (Fig 2(a)). We can also observe visually in Fig 3 that the discounted $v_{\gamma_{Bw}}$-landscape already resembles the gain $v_g$-landscape. Thus, for obtaining the gain-optimal policy in recurrent MDPs, $\gamma$ does not need to be too close to 1 (as long as it is larger than or equal to $\gamma_{Bw}$); see also (16).
Apart from that, $\gamma$ is troublesome because some derivations involving it demand extra care, e.g. for handling the improper discounted state distribution (14) in discounted-reward policy gradient algorithms (Nota and Thomas, 2020; Thomas, 2014).
4.1 Higher discount factors lead to slower convergence
According to (10), increasing the discount factor $\gamma$ closer to 1 makes the scaled discounted reward $(1-\gamma)\, v_\gamma$ approximate the average reward more closely. This means that a discounted-reward method with such a setting obtains more accurate estimates of gain-optimal policies. However, it suffers from a lower rate of convergence (to approximate gain optimality), as well as from numerical issues (since, for example, it involves the term $1/(1-\gamma)$, which explodes as $\gamma \to 1$). This becomes unavoidable whenever $\gamma_{Bw}$ is indeed very close to unity, because the most selective Blackwell optimality (equivalent to gain optimality in recurrent MDPs) requires that $\gamma \ge \gamma_{Bw}$.
The slow convergence can be explained by examining the effect of the effective horizon induced by $\gamma$. That is, as $\gamma$ approaches 1, the reward information is propagated to more states (Beleznay et al., 1999, Fig 1). From discounted policy gradient methods, we also know that an i.i.d. state sample from the discounted state distribution in (14) is the last state of a trajectory whose length is drawn from a geometric distribution with success probability $1-\gamma$; see Kumar et al. (2020, Algo 3). Evidently, the closer $\gamma$ is to 1, the longer the required trajectory. Also recall that such a geometric distribution has a mean of $1/(1-\gamma)$ and a variance of $\gamma/(1-\gamma)^2$, which blow up as $\gamma \to 1$.
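For concreteness, the sketch below implements that sampling view (hypothetical chain and initial distribution as before): it draws a trajectory length from a geometric distribution with success probability $1-\gamma$ and keeps the last state, whose empirical distribution matches the normalized discounted state distribution of (14).

```python
import numpy as np

rng = np.random.default_rng(0)
P_pi = np.array([[0.1, 0.9, 0.0],
                 [0.0, 0.2, 0.8],
                 [0.7, 0.0, 0.3]])
p0 = np.array([1.0, 0.0, 0.0])
gamma = 0.9

counts = np.zeros(3)
for _ in range(20_000):
    length = rng.geometric(1 - gamma)        # mean 1/(1-gamma) = 10 timesteps
    s = rng.choice(3, p=p0)                  # S_0
    for _ in range(length - 1):              # roll the chain for length-1 steps
        s = rng.choice(3, p=P_pi[s])
    counts[s] += 1                           # keep only the last state
print(counts / counts.sum())                 # ~ (1-gamma) * p0 (I - gamma P)^{-1}
```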
There are numerous works that prove and demonstrate slow convergence due to a higher $\gamma$. From them, we understand that the error (hence, the iteration/sample complexity) essentially grows as a function of $1/(1-\gamma)$. Those works include (Thrun and Schwartz, 1993, Fig 3; Melo and Ribeiro, 2007, Thm 1; Fan et al., 2020, Thm 4.4) for Q-learning with function approximators, (Sutton and Barto, 2018, Eqn 9.14, 12.8) for TD learning, and (Agarwal et al., 2019, Tables 1, 2) for policy gradient methods. We note that for a specific environment type and with some additional hyperparameters, Devraj and Meyn (2020) proposed a variant of Q-learning whose sample complexity is independent of $\gamma$.
4.2 Lower discount factors likely lead to suboptimal policies
Setting $\gamma$ further from 1 such that $\gamma < \gamma_{Bw}$ yields $\gamma$-discounted optimal policies that are suboptimal with respect to the most selective criterion (see Fig 2(a)). From the gain-optimality standpoint, a lower $\gamma$ makes the discounted approximation deviate from $\nabla v_g$ in the order of $1-\gamma$, as shown by Baxter and Bartlett (2001). More generally, based on (16), $\gamma < \gamma_{Bw}$ induces an optimal policy that is not Blackwell optimal (hence, also not gain optimal in recurrent MDPs). This begs the question: is it ethical to run a suboptimal policy (due to misspecifying the optimality criterion) in perpetuity?
For a parameterized policy in recurrent MDPs, $\gamma < \gamma_{Bw}$ induces a discounted $v_\gamma$-landscape that is different from the gain $v_g$-landscape. Fig 3 shows such a discrepancy, which becomes more significant as $\gamma$ is set further below 1. Therefore, the maximizing parameters do not coincide, i.e. $\operatorname{argmax}_{\theta} v_\gamma(\pi_\theta) \ne \operatorname{argmax}_{\theta} v_g(\pi_\theta)$, where $\theta$ denotes the policy parameter in some parameter set $\Theta$. Interestingly, the gain maximizer may correspond to only a local maximum in the $v_\gamma$-landscape, so that the $\gamma$-discounted-reward optimization is ill-posed in that the (global) maximum is not what we desire in terms of obtaining an optimal policy with respect to the most selective criterion (that is, the gain-optimal policy in recurrent MDPs).
Petrik and Scherrer (2008, Thm 2) established the following error bound due to misspecifying a discount factor $\gamma' < \gamma$. That is,
$$\big\| v^*_\gamma - v_\gamma\big(\pi^*_{\gamma'}\big) \big\|_\infty \;\le\; \frac{\gamma - \gamma'}{(1-\gamma)(1-\gamma')}\, r_{\max}.$$
Subsequently, Jiang et al. (2016) refined the above error bound by taking into account the transition and reward functions. They also highlighted that such an error as a function of $\gamma'$ is not monotonically decreasing (with increasing $\gamma'$). This is consistent with what Smallwood (1966) observed, i.e. multiple disconnected $\gamma$-intervals that share the same $\gamma$-discounted optimal policy. We note that, specifically for sparse-reward environments (where non-zero rewards are not received at every timestep), a lower discount factor $\gamma'$ is likely to improve the performance of RL algorithms (Petrik and Scherrer, 2008, Thm 10). They argue that lowering $\gamma'$ decreases the value approximation error $\|v_{\gamma'}(\pi) - \hat{v}_{\gamma'}(\pi)\|$ more significantly than it increases the $\gamma$-misspecification error $\|v^*_\gamma - v_\gamma(\pi^*_{\gamma'})\|$. Here, $\hat{v}_{\gamma'}$ denotes an approximation to $v_{\gamma'}$.
5 Benefits of maximizing the average reward in recurrent MDPs
In this Section, we enumerate the benefits of directly maximizing the average reward (gain) optimality criterion in recurrent MDPs. Loosely speaking, such a recurrent structure is found in continuing environments (those with no terminal state; cf. the episodic environments of Sec 3.2) with cyclical events across all states. For episodic environments, we can obtain their recurrent MDPs by explicitly modelling the episode repetition, as explained in Sec 5.5. For a non-exhaustive list of continuing and episodic environments, refer to Dewanto et al. (2020).
The combination of recurrent MDPs and gain optimality is advantageous. There is no discount factor involved (as a by-product), removing the difficulties due to artificial discounting (Sec 4). Other benefits are described in the following sub-sections.
5.1 Unconditionally the most selective criterion
Gain optimality is the most selective criterion for recurrent MDPs. This is because all states are recurrent so that the gain is all that is needed to quantitatively measure the quality of any stationary policy from those states (such a gain quantity turns out to be constant for all states in recurrent or more generally unichain MDPs). Recall that the gain is concerned with the long-run rewards (3), and recurrent states are states that are re-visited infinitely many times in the long-run (Sec 2.2).
Gain optimality is therefore equivalent to Blackwell optimality unconditionally, as well as immediately, in that there is no need for hyperparameter tuning of the optimality criterion (which fundamentally determines the optimization objective function, hence the overall optimization). This is in contrast to $\gamma$-discounted optimality, which is equivalent to Blackwell optimality only if $\gamma \ge \gamma_{Bw}$, as in (5). Moreover, since $\gamma_{Bw}$ is unknown, tuning $\gamma$ is necessary (Sec 3.2).
5.2 Uniformly optimal policies
Since recurrent (up to unichain) MDPs have only a single recurrent class (chain), the gain of any policy is constant across all states. As a result, average-reward policy gradient methods maximize an objective (17) that is independent of initial states, or generally of initial state distributions. The resulting gain-optimal policies are said to be uniformly optimal because they are optimal for all initial states or all initial state distributions (Altman,, 1999, Def 2.1).
In contrast, the discounted-reward counterpart maximizes an objective (18) that is defined with respect to some initial state distribution $p_0$ (since the discounted-reward value depends on the state from which the value is measured, as in (4)). Consequently, the resulting $\gamma$-discounted optimal policies may not be optimal for all initial states; they are said to be non-uniformly optimal, as noted by Bacon (2018, p41). This non-uniform optimality can be interpreted as a relaxation of the uniform optimality in DP, which requires that the superiority of optimal policies holds in every state, i.e. $v_\gamma(\pi^*_\gamma, s) \ge v_\gamma(\pi, s)$ for all states $s \in \mathcal{S}$ and all policies $\pi \in \Pi^S$; see Sec 2.1.
The objectives of average- and discounted-reward policy gradient methods are as follows.
Average-reward policy gradient objective: $\displaystyle\max_{\theta}\ J_g(\theta) \coloneqq v_g(\pi_\theta)$, (17)
Discounted-reward policy gradient objective: $\displaystyle\max_{\theta}\ J_\gamma(\theta) \coloneqq \mathbb{E}_{S_0 \sim p_0}\big[ v_\gamma(\pi_\theta, S_0) \big]$, (18)
where $\theta$ is the policy parameter and $S_0 \sim p_0$ is the initial state random variable.
5.3 Potentially higher convergence rates
Without delving into specific algorithms, there are at least two reasons for faster convergence of average-reward methods (compared to their discounted-reward counterparts), as hinted by Schwartz, (1993, Sec 6) and Van Roy, (1998, p114).
First is the common gain across states. Such commonality eases the gain approximation in that no generalization is required. That is, a single gain estimation is all that is needed for one true gain of all states in unichain MDPs.
Second, average-reward methods optimize solely the gain term in the Laurent series expansion of $v_\gamma$ in (6). On the other hand, their discounted-reward counterparts optimize the gain, bias, and higher-order terms altogether simultaneously, whose implication becomes more substantial as $\gamma$ is set further below 1, as can be observed in (7).
5.4 Less discrepancy between the learning objective and the performance metric
Since there is a finite number of timesteps (as a training budget) and no inherent notion of discounting, the final learning performance metric in RL is typically measured by the average return over the last several experiment-episodes (Machado et al., 2018, Sec 5.2; Henderson et al., 2018, Fig 2). (The term "experiment-episode" refers to one trial (run, roll-out, or simulation), which produces one sample trajectory of states, actions, and rewards; it is applicable to both episodic and continuing environments. In contrast, the term "episode" only makes sense for episodic environments.) That is,
$$\frac{1}{n_{\mathrm{last}}} \sum_{i=1}^{n_{\mathrm{last}}} \frac{1}{t_{\max}} \sum_{t=0}^{t_{\max}-1} r^{(i)}_{t+1} \qquad \text{(with a finite } t_{\max}\text{)}, \qquad (19)$$
where $n_{\mathrm{last}}$ denotes the number of the last experiment-episodes picked, out of the i.i.d. experiment-episodes run, once the learning is deemed converged to a final learned policy $\hat{\pi}$. Each $i$-th experiment-episode runs from $t = 0$ to a finite $t_{\max}$, where the reward realization at every timestep is denoted by $r^{(i)}_{t+1}$. The expectation of the reward random variable $R_{t+1}$ underlying (19) is with respect to $S_0 \sim p_0$, $A_t \sim \hat{\pi}(\cdot \mid S_t)$, and $S_{t+1} \sim p(\cdot \mid S_t, A_t)$.
We argue that the learning performance metric (19) has less discrepancy with respect to the gain $v_g$ than to the discounted reward $v_\gamma$. In other words, whenever the criterion is $v_g$, what is measured during training (i.e. the learning performance metric) is more aligned with what the agent learns/optimizes (i.e. the learning objective). (In some cases, this alignment may be traded off for more tractable computation/training/analysis.) There are three arguments for this, as follows.
First, since the experiment-episodes are i.i.d., the empirical average in (19) converges as $n_{\mathrm{last}} \to \infty$ to the expected finite-horizon average reward (equivalently, it is proportional to the expected finite-horizon total reward), which is an approximation to the gain due to the finiteness of $t_{\max}$.
Second is the fact that $t_{\max}$ is typically fixed for all experiment-episodes during training. (One common practice is to set $t_{\max}$ to some constant far less than the training budget to obtain multiple experiment-episodes. This is implemented, for instance, as the 'max_episode_steps' variable at https://github.com/openai/gym/blob/master/gym/envs/__init__.py of a seminal and popular RL environment codebase: OpenAI Gym (Brockman et al., 2016).) It is not randomly sampled from a geometric distribution even when maximizing $v_\gamma$ (seemingly because there is no inherent notion of discounting). If it were sampled from such a geometric distribution, then the expectation of the return would converge to $v_\gamma$, following the identity in (9). This, however, would inherently pose the risk of misspecifying $\gamma$ (Sec 4.2). This metric gap due to artificial discounting is highlighted by Schwartz (1993, Sec 3), van Hasselt (2011, p25), Dabney (2014, p47), and Van Seijen et al. (2019, Sec 2).
Third, the induced Markov chain may reach stationarity (or close to it) within $t_{\max}$ timesteps in some environments. That is, $\boldsymbol{P}^t_\pi \approx \boldsymbol{P}^\star_\pi$ for several timesteps $t \le t_{\max}$; see (15). If this happens, then some number of rewards are generated from states sampled from the stationary state distribution $p^\pi_\star$ (although they are not independent, but Markovian, state samples). This makes (19) a more accurate approximation to the gain (although it is inherently biased due to non-i.i.d. state samples) than to the discounted value $v_\gamma$, since $v_g(\pi) = \mathbb{E}_{S \sim p^\pi_\star}[\, r_\pi(S)\,]$, which can be shown to be equivalent to (3).
5.5 Modelling the episode repetition explicitly in episodic environments
In order to obtain an infinite-horizon recurrent MDP of an episodic environment, we can model its episode repetition explicitly. This is in contrast to modelling an episodic environment as an infinite-horizon MDP with a 0-reward absorbing terminal state (denoted as $s_T$, which is recurrent under every policy). This modelling (called $s_T$-modelling) induces a unichain MDP, where all states but $s_T$ are transient. In an $s_T$-model, the gain is trivially 0 for all stationary policies, rendering gain optimality underselective (Sec 3.2). Fig 4(a) shows the diagram of an $s_T$-model.
To explicitly model episode repetition, we augment the original state set with a resetting terminal state $s_R$ (instead of $s_T$) that has a single available reset action $a_R$ (instead of a self-loop action). This action is responsible for a transition from $s_R$ to an initial state $s_0 \sim p_0$, which yields a reward of 0. We call this approach $s_R$-modelling, whose example diagram is shown in Fig 4(b). The conversion from a unichain $s_T$-model to a recurrent $s_R$-model is as follows.
Let $\mathcal{S}^o$ and $\mathcal{A}^o$ denote the original state and action sets of an episodic environment, respectively, before the augmentation of a 0-reward absorbing terminal state $s_T$. Given a unichain $s_T$-model with a state set $\mathcal{S} = \mathcal{S}^o \cup \{s_T\}$, an action set $\mathcal{A}$, an initial state distribution $p_0$, a state transition distribution $p$, and a reward function $r$, the corresponding recurrent $s_R$-model has the following components.
• A state set $\tilde{\mathcal{S}} = \mathcal{S}^o \cup \{s_R\}$, an action set $\tilde{\mathcal{A}} = \mathcal{A}^o \cup \{a_R\}$, and an initial state distribution $\tilde{p}_0$, where $\tilde{p}_0(s) = p_0(s)$ for all $s \in \mathcal{S}^o$ and $\tilde{p}_0(s_R) = 0$.
• A state transition distribution $\tilde{p}$ and a reward function $\tilde{r}$, where
– $\tilde{p}(s' \mid s, a) = p(s' \mid s, a)$ and $\tilde{r}(s, a) = r(s, a)$, for all $s, s' \in \mathcal{S}^o$ and for all $a \in \mathcal{A}^o$,
– $\tilde{p}(s_R \mid s, a) = p(s_T \mid s, a)$ and $\tilde{r}(s, a) = r(s, a)$, for all $s \in \mathcal{S}^o$ and for all $a \in \mathcal{A}^o$, as well as
– $\tilde{p}(s_0 \mid s_R, a_R) = p_0(s_0)$ and $\tilde{r}(s_R, a_R) = 0$, for all $s_0 \in \mathcal{S}^o$.
The above conversion requires that i) reaching $s_T$ is inevitable with probability 1 under all stationary policies (this is equivalent to inevitable termination in an episodic environment), and ii) there is no inherent notion of discounting that operates from $t = 0$ until the end of an episode. For example diagrams of both $s_T$- and $s_R$-models, refer to Fig 4. A tabular sketch of this conversion is given below.
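The sketch below performs the conversion on tabular arrays, under the assumptions (made only for illustration) that the transition tensor `P[a, s, s']` and reward table `R[a, s]` list the terminal state $s_T$ last, and that $p_0$ places no mass on it; making every action at the terminal index behave as the reset induces the same chains as having a single reset action $a_R$.

```python
import numpy as np

def to_reset_model(P, R, p0):
    """Convert an s_T-model (absorbing terminal) into an s_R-model (resetting).

    P: transitions P[a, s, s'], R: expected rewards R[a, s], p0: initial
    distribution over all states with p0[terminal] == 0. The terminal state is
    assumed to be the last state index.
    """
    P_reset, R_reset = P.copy(), R.copy()
    term = P.shape[1] - 1
    P_reset[:, term, :] = p0                 # from s_R, jump to S_0 ~ p0
    R_reset[:, term] = 0.0                   # the reset transition earns 0 reward
    return P_reset, R_reset

# Tiny hypothetical episodic model: 2 original states + absorbing terminal.
P = np.array([[[0.0, 0.5, 0.5],
               [0.0, 0.0, 1.0],
               [0.0, 0.0, 1.0]]])            # a single action; last row absorbs
R = np.array([[1.0, 2.0, 0.0]])
p0 = np.array([1.0, 0.0, 0.0])
P_r, R_r = to_reset_model(P, R, p0)
print(P_r[0])                                # the terminal row now resets to s_0
```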
The $s_R$-modelling suits the fact that, in practice, an RL agent is expected to run for multiple episodes, e.g. to play some game repeatedly. By using $s_R$-modelling, we can train an agent operating on episodic environments in the same practical way as if it were operating on continuing environments. Similar to the zero state-value of the terminal state (i.e. $v_\gamma(\pi, s_T) = 0$) in discounted-reward $s_T$-modelling, we may have a zero state-value of the resetting state (i.e. $v_b(\pi, s_R) = 0$) in average-reward $s_R$-modelling, where $v_b$ denotes the relative value of the bias (7).
We compare $s_T$- and $s_R$-modelling by running experiments with two training schemes as follows.
• Scheme-A uses an $s_T$-model and the total reward criterion (hence, $Q_{\mathrm{tot}}$-learning).
• Scheme-B uses an $s_R$-model and the average reward criterion (hence, $Q_g$-learning).
Fig 5 depicts the learning curves of Q-learning trained under Scheme-A and Scheme-B on two episodic environments. Both schemes are trained with the same experiment-episode length (during training on episodic environments, an agent is always transported back to an initial state after an episode ends, regardless of the training scheme; this transportation to initial states is captured by $s_R$-modelling, but not by $s_T$-modelling (see Fig 4)), and gauged by the same performance metric, i.e. the finite-time average reward (19). As can be observed, both schemes converge to the same value, but Scheme-B empirically leads to a higher rate of convergence. This indicates that both $s_T$- and $s_R$-models induce the same optimal policy (in the original states $\mathcal{S}^o$), and that the $s_T$- to $s_R$-model conversion is sound. More importantly, conversion to an $s_R$-model enables obtaining the optimal policy using an average-reward method, since such a model is recurrent, for which gain optimality is the most selective.
Mahadevan (1996a, Sec 3.1) mentioned the idea of episode repetition in an episodic grid-navigation environment. He, however, did not provide a formal conversion. Pardo et al. (2018) also used a conception similar to episode repetition to propose a technique that bootstraps from the value of the state at the end of each experiment-episode for random-horizon episodic environments. Another important related work is that of White (2017), who introduced a unification of episodic and continuing environment specifications. Such a unification is carried out via transition-based discounting, where the discount factor from the terminal to initial states is set to 0. She noted that transition-based discounting breaks the Laurent-expansion-based connection between the discounted- and average-reward criteria in (6).
6 Discussions
The route of using the discounted reward to approximately maximize the average reward in RL seems to follow Blackwell's $\gamma$-discounted approach (1962) to Howard's average reward (1960) in DP. (Recall that both Blackwell's $\gamma$-discounted and Howard's average-reward approaches were aimed at dealing with the infiniteness of the total reward in infinite-horizon MDPs; refer to Sec 2.1.) As explained in Sec 2.1, the major successive development of Blackwell's approach came from Veinott, who introduced a family of $n$-discount optimality criteria (1969), which is discounting-free. He then developed an algorithm for obtaining $n$-discount optimal policies, from which policies that are considered nearly-optimal and optimal by Blackwell emerge as special cases, namely $(-1)$-discount (gain) optimal and $\infty$-discount (Blackwell) optimal policies, respectively. Both imply average reward optimality.
As in DP, the route in RL should be completed by devising methods based on Veinott's optimality criteria. To this end, average-reward RL methods constitute the first step in the following senses. First, they yield a set of approximately $(-1)$-discount optimal policies, from which higher $n$-discount optimal policies are sought, as illustrated in Fig 6(b). This follows from the hierarchical property of $n$-discount optimality, whose selectivity increases as $n$ increases. Note that such a property is exploited in the $n$-discount policy iteration in DP (Puterman, 1994, Ch 10.3). Second, average-reward RL methods evaluate the gain and the bias, which are likely to be useful for attaining higher $n$-discount criteria. This is because the expansion coefficients $\boldsymbol{y}_n(\pi)$ in (6) are interconnected. For instance, the first three coefficients satisfy the three nested equations below,
$$(\boldsymbol{I} - \boldsymbol{P}_\pi)\, \boldsymbol{v}_g(\pi) = \boldsymbol{0}, \qquad \boldsymbol{v}_g(\pi) + (\boldsymbol{I} - \boldsymbol{P}_\pi)\, \boldsymbol{v}_b(\pi) = \boldsymbol{r}_\pi, \qquad \boldsymbol{v}_b(\pi) + (\boldsymbol{I} - \boldsymbol{P}_\pi)\, \boldsymbol{y}_1(\pi) = \boldsymbol{0}, \qquad (20)$$
where $\boldsymbol{r}_\pi$ is the reward vector, $\boldsymbol{P}_\pi$ is the one-step transition matrix, and $\boldsymbol{y}_1(\pi)$ is the policy evaluation entity for $1$-discount optimality. Thus, advancements in gain and bias estimation are transferable to higher $n$-discount methods towards obtaining Blackwell optimal policies.
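For a unichain policy, the first two equations in (20) pin down the gain and the bias up to an additive constant in the bias; the sketch below (hypothetical chain again) solves them as one linear system after fixing the bias at a reference state.

```python
import numpy as np

P_pi = np.array([[0.1, 0.9, 0.0],
                 [0.0, 0.2, 0.8],
                 [0.7, 0.0, 0.3]])
r_pi = np.array([1.0, 0.0, 2.0])
n = len(r_pi)

# Unknowns x = [g, h_0, ..., h_{n-1}] with a scalar gain g (unichain case):
#   g * 1 + (I - P_pi) h = r_pi,  plus the reference condition h_0 = 0.
A = np.zeros((n + 1, n + 1))
A[:n, 0] = 1.0
A[:n, 1:] = np.eye(n) - P_pi
A[n, 1] = 1.0                                # pins h_0 = 0
b = np.append(r_pi, 0.0)
x = np.linalg.solve(A, b)
print("gain:", x[0])
print("bias (relative values, up to an additive constant):", x[1:])
```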
Directly maximizing the average reward brings several benefits, as described in Sec 5. It is also evidently free from any $\gamma$-related difficulties (Sec 4). It is interesting now to discuss whether those benefits outweigh the loss of the virtues of $\gamma$ (Sec 3).
Approximating the average reward via discounting (Sec 3.1) is not without caveats. Taking $\gamma$ really close to 1 slows the convergence, as well as increases the variance of, for example, policy gradient estimates. On the other hand, lowering $\gamma$ poses the risk of suboptimality (with respect to the most selective criterion). This trade-off can potentially be guided by the critical discount factor $\gamma_{Bw}$ (Sec 3.2). Specifically in RL, we conjecture that it is not the exact value of $\gamma_{Bw}$ that is needed, but some value around it, because of the interplay among $\gamma$-dependent approximation layers, such as those related to policy value and gradient estimation. Thus, fine-tuning $\gamma$ is always a crucial yet non-trivial task, dealing with possibly non-concave and non-differentiable learning performance functions with respect to $\gamma$ (even though the domain of such a univariate function is limited to $[0, 1)$). Research on this front has been going on with some progress (Zahavy et al., 2020; Paul et al., 2019; Xu et al., 2018). In that regard, Veinott's discounting-free criteria (including average-reward optimality) can be viewed as alternatives towards achieving the most selective Blackwell optimality in RL.
There are several ways to compensate for the other merits of discounting (Sec 3.3), which are missed due to directly maximizing the average reward. We describe some of them as follows.
The chain-classification independence merit:
The need for chain-type determination (or assumption) can be thought of as a way to exploit the chain structural property in order to apply simpler average-reward methods. That is, for a constant gain across all states, we impose a unichain assumption, which is relatively unrestrictive in that it already generalizes the recurrent type. Nevertheless, the ultimate goal remains: an average-reward RL method that can handle non-constant gains across states, as in multichain MDPs. Because of its generality (hence, sophistication), such a method would not require prior determination of the chain type.
The chain-type assumption can also be interpreted as an attempt to break down the difficulty of attaining Blackwell-optimal policies in one go (regardless of the chain structure), as in $\gamma$-discounted optimality; see Fig 6(a). Recall that in such a criterion, only ($\gamma \ge \gamma_{Bw}$)-discounted optimal policies are Blackwell optimal (however, the critical $\gamma_{Bw}$ is generally unknown; otherwise this one-go technique would be favourable). On the other hand, the average-reward ($(-1)$-discount) optimality is already equivalent to Blackwell optimality whenever the MDPs are recurrent. For more general unichain MDPs, $0$-discount optimality is equivalent to Blackwell optimality, but only for some transition and reward configurations (for other configurations, higher $n$-discount criteria should be applied, as shown in Fig 6(b)).
The contractive merit:
Relative value iteration (VI) has been shown to alleviate the non-contractive nature of basic VI for average rewards (Abounadi et al., 2001). There are also policy gradient methods whose convergence generally follows that of stochastic gradient ascent. Their policy evaluation part can be carried out via stochastic gradient descent, that is, by minimizing some estimation loss derived, for instance, from the average-reward Bellman evaluation equation, which involves the subtraction of the gain (similar to that of relative VI).
The variance reduction merit:
The variance of trajectory-related estimation can be controlled by directly varying (truncating) the experiment-episode length (as an alternative to varying the artificial discount factor). One may also apply baseline subtraction techniques for variance reduction. In general, however, lower variance induces higher bias-errors, leading to a trade-off.
We conclude that directly maximizing the average reward has a number of benefits that make it worthwhile to use and to investigate further. It is the root for approaching Blackwell optimality through Veinott's criteria, which are discounting-free (eliminating any complication due to artificial discounting). Future work includes an examination of exploration strategies: to what extent do strategies developed for discounted rewards apply to RL aiming at discounting-free criteria?
7 Experimental setup
In this Section, we describe the experimental setups used to produce Figs 2, 3, and 5. They are about environments (Sec 7.1) and learning methods (Sec 7.2).
7.1 Environments
GridNav-$n$ (episodic)
refers to an $n$-by-$n$ grid-world with no obstacles. An agent has to navigate from an initial state to the goal state. There are $n^2 - 1$ states, excluding the goal state. Every state has four available actions, i.e. moving in the North, East, South and West compass directions. The state-transition stochasticity is governed by an action-slip parameter. That is, the agent actually moves according to the chosen (intended) directional action with a probability of one minus the slip parameter. Otherwise, it stays or moves in one of the three other directions; hence there are four alternatives due to slipping, each with a probability of one quarter of the slip parameter. If an action brings the agent beyond the grid-world, then the agent stays in the current grid cell. There is a positive reward for transitioning to the goal; any other movement incurs a small cost.
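A sketch of the slip dynamics just described (the slip probability value and helper names are assumptions; rewards are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
MOVES = {"N": (-1, 0), "E": (0, 1), "S": (1, 0), "W": (0, -1), "stay": (0, 0)}

def gridnav_step(pos, action, n, slip=0.2):
    """One transition of GridNav-n: the intended move with prob. 1 - slip,
    otherwise one of the four slip alternatives with prob. slip / 4 each."""
    if rng.random() < slip:
        alternatives = [m for m in MOVES if m != action]   # 3 directions + stay
        action = alternatives[rng.integers(len(alternatives))]
    dr, dc = MOVES[action]
    row, col = pos[0] + dr, pos[1] + dc
    if not (0 <= row < n and 0 <= col < n):  # off the grid: stay in place
        return pos
    return (row, col)

print(gridnav_step((0, 0), "E", n=5))
```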
Taxi-n (episodic)
refers to the n-by-n grid Taxi environment, adopted from OpenAI Gym (Brockman et al., 2016). The number of obstacles increases with n, whereas the number of passengers and the number of pickup/drop-off locations are kept the same.
The environment families 1, 2, and 3
are adopted from the access-control queueing task (Sutton and Barto, 2018, p252), the Chain problem (Strens, 2000, Fig 1), and the torus MDP (Morimura et al., 2010, Fig 2), respectively. In Fig 2(b), the first two were used as Families 1 and 2, whose numbers of states are varied, whereas the third was used as Family 3, whose reward constant is varied.
7.2 Learning methods
The learning method used for the experiments in Figs 2 and 5 is Q-learning. It is an iterative method that relies on the Bellman optimality equation (BOE) to produce iterates Q that approximate the optimal action value q*. Different optimality criteria have different BOEs, inducing different types of Q-learning, as follows.
Discounted-reward Q-learning:
\[ Q(s, a) \leftarrow Q(s, a) + \beta \big[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \big], \]
Average-reward Q-learning:
\[ Q(s, a) \leftarrow Q(s, a) + \beta \big[ r - \hat{g}^{*} + \max_{a'} Q(s', a') - Q(s, a) \big], \]
Total-reward Q-learning:
\[ Q(s, a) \leftarrow Q(s, a) + \beta \big[ r + \max_{a'} Q(s', a') - Q(s, a) \big], \]
with a positive learning rate β (here, we used a fine-tuned constant value) and an experienced transition (s, a, r, s'). The estimate Q is initialized optimistically to large values to encourage exploration at the outset of learning. For average-reward Q-learning, we set the optimal gain estimate ĝ* from the action values of a prescribed (arbitrary but fixed) reference state s_ref, following the relative-VI (RVI) technique of Abounadi et al. (2001, Sec 2.2). Note that total-reward Q-learning converges as long as the total reward is finite (Schwartz, 1993, p2).
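The three update rules can be condensed into one tabular routine, sketched below under illustrative assumptions: the function and argument names are ours, Q is a num_states-by-num_actions array, and the gain estimate is taken as the maximum action value at the reference state (one common RVI-style choice).

```python
def q_update(Q, s, a, r, s_next, beta, criterion, gamma=0.99, s_ref=0):
    """One tabular Q-learning step under a chosen optimality criterion (sketch).

    criterion: 'discounted' -> gamma-discounted target,
               'average'    -> gain-subtracted target, using max_a Q[s_ref, a]
                               as the RVI-style gain estimate,
               'total'      -> undiscounted (total-reward) target.
    """
    if criterion == "discounted":
        target = r + gamma * Q[s_next].max()
    elif criterion == "average":
        gain = Q[s_ref].max()                 # reference-state gain estimate
        target = r - gain + Q[s_next].max()
    else:  # 'total'
        target = r + Q[s_next].max()
    Q[s, a] += beta * (target - Q[s, a])
    return Q
```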
References
- Abounadi et al., (2001) Abounadi, J., Bertsekas, D., and Borkar, V. S. (2001). Learning algorithms for Markov decision processes with average cost. SIAM Journal on Control and Optimization, 40(3).
- Agarwal et al., (2019) Agarwal, A., Kakade, S. M., Lee, J. D., and Mahajan, G. (2019). On the theory of policy gradient methods: Optimality, approximation, and distribution shift. arXiv: 1908.00261.
- Altman, (1999) Altman, E. (1999). Constrained Markov Decision Processes. Taylor & Francis.
- Bacon, (2018) Bacon, P.-L. (2018). Temporal Representation Learning. PhD thesis, School of Computer Science, McGill University.
- Baxter and Bartlett, (2001) Baxter, J. and Bartlett, P. L. (2001). Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, 15(1).
- Beleznay et al., (1999) Beleznay, F., Gröbler, T., and Szepesvári, C. (1999). Comparing value-function estimation algorithms in undiscounted problems. Technical report.
- Bellman, (1957) Bellman, R. (1957). Dynamic Programming. Princeton University Press.
- Blackwell, (1962) Blackwell, D. (1962). Discrete dynamic programming. The Annals of Mathematical Statistics, 33(2).
- Brockman et al., (2016) Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. (2016). OpenAI Gym. arXiv:1606.01540.
- Castro, (2020) Castro, P. S. (2020). Scalable methods for computing state similarity in deterministic Markov decision processes. Proceedings of the AAAI Conference on Artificial Intelligence.
- Chang et al., (2013) Chang, H. S., Hu, J., Fu, M. C., and Marcus, S. I. (2013). Simulation-Based Algorithms for Markov Decision Processes. Springer, 2nd edition.
- Dabney, (2014) Dabney, W. C. (2014). Adaptive step-sizes for reinforcement learning. PhD thesis, Computer Science Department, University of Massachusetts Amherst.
- Dann et al., (2014) Dann, C., Neumann, G., and Peters, J. (2014). Policy evaluation with temporal differences: A survey and comparison. Journal of Machine Learning Research, 15(24).
- Devraj and Meyn, (2020) Devraj, A. M. and Meyn, S. P. (2020). Q-learning with uniformly bounded variance: Large discounting is not a barrier to fast learning. arXiv: 2002.10301.
- Dewanto et al., (2020) Dewanto, V., Dunn, G., Eshragh, A., Gallagher, M., and Roosta, F. (2020). Average-reward model-free reinforcement learning: a systematic review and literature mapping. arXiv: 2010.08920.
- Douc et al., (2018) Douc, R., Moulines, E., Priouret, P., and Soulier, P. (2018). Markov Chains. Springer Series in Operations Research and Financial Engineering. Springer International Publishing.
- Duan et al., (2016) Duan, Y., Chen, X., Houthooft, R., Schulman, J., and Abbeel, P. (2016). Benchmarking deep reinforcement learning for continuous control. In Proceedings of The 33rd International Conference on Machine Learning.
- Fan et al., (2020) Fan, J., Wang, Z., Xie, Y., and Yang, Z. (2020). A theoretical analysis of deep Q-learning. In Proceedings of the 2nd Conference on Learning for Dynamics and Control.
- Feinberg and Shwartz, (2002) Feinberg, E. A. and Shwartz, A. (2002). Handbook of Markov Decision Processes: Methods and Applications, volume 40. Springer US.
- Ferns et al., (2006) Ferns, N., Castro, P. S., Precup, D., and Panangaden, P. (2006). Methods for computing state similarity in Markov decision processes. In Conference in Uncertainty in Artificial Intelligence.
- Henderson et al., (2018) Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., and Meger, D. (2018). Deep reinforcement learning that matters. In Proceedings of the AAAI Conference on Artificial Intelligence.
- Hordijk et al., (1985) Hordijk, A., Dekker, R., and Kallenberg, L. C. M. (1985). Sensitivity analysis in discounted Markovian decision problems. Operations Research Spektrum, 7.
- Howard, (1960) Howard, R. A. (1960). Dynamic Programming and Markov Processes. Technology Press of the Massachusetts Institute of Technology.
- Hutter, (2006) Hutter, M. (2006). General discounting versus average reward. In Algorithmic Learning Theory. Springer Berlin Heidelberg.
- Jiang et al., (2016) Jiang, N., Singh, S., and Tewari, A. (2016). On structural properties of MDPs that bound loss due to shallow planning. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence.
- Jin and Sidford, (2021) Jin, Y. and Sidford, A. (2021). Towards tight bounds on the sample complexity of average-reward MDPs. In Proceedings of the 38th International Conference on Machine Learning.
- Kakade, (2001) Kakade, S. (2001). Optimizing average reward using discounted rewards. In Proceedings of the 14th Annual Conference on Computational Learning Theory.
- Kumar et al., (2020) Kumar, H., Kalogerias, D. S., Pappas, G. J., and Ribeiro, A. (2020). Zeroth-order deterministic policy gradient. arXiv: 2006.07314.
- Lattimore and Hutter, (2014) Lattimore, T. and Hutter, M. (2014). General time consistent discounting. Theoretical Computer Science, 519.
- Lattimore and Szepesvári, (2020) Lattimore, T. and Szepesvári, C. (2020). Bandit Algorithms. Cambridge University Press.
- Machado et al., (2018) Machado, M. C., Bellemare, M. G., Talvitie, E., Veness, J., Hausknecht, M. J., and Bowling, M. (2018). Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. Journal of Artificial Intelligence Research.
- Mahadevan, (1994) Mahadevan, S. (1994). To discount or not to discount in reinforcement learning: A case study comparing R-learning and Q-learning. In Proceedings of the 11th International Conference on Machine Learning.
- Mahadevan, (1996a) Mahadevan, S. (1996a). Average reward reinforcement learning: Foundations, algorithms, and empirical results. Machine Learning.
- Mahadevan, (1996b) Mahadevan, S. (1996b). Sensitive discount optimality: Unifying discounted and average reward reinforcement learning. In Proceedings of the 13th International Conference on Machine Learning.
- Melo and Ribeiro, (2007) Melo, F. S. and Ribeiro, M. I. (2007). Q-learning with linear function approximation. In Computational Learning Theory.
- Mischel et al., (1972) Mischel, W., Ebbesen, E. B., and Zeiss, A. R. (1972). Cognitive and attentional mechanisms in delay of gratification. Journal of personality and social psychology, 21(2).
- Morimura et al., (2010) Morimura, T., Uchibe, E., Yoshimoto, J., Peters, J., and Doya, K. (2010). Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning. Neural Computation, 22(2).
- Nota and Thomas, (2020) Nota, C. and Thomas, P. S. (2020). Is the policy gradient a gradient? Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems.
- Pardo et al., (2018) Pardo, F., Tavakoli, A., Levdik, V., and Kormushev, P. (2018). Time limits in reinforcement learning. In Proceedings of the 35th International Conference on Machine Learning.
- Paul et al., (2019) Paul, S., Kurin, V., and Whiteson, S. (2019). Fast efficient hyperparameter tuning for policy gradient methods. In Advances in Neural Information Processing Systems 32.
- Petrik and Scherrer, (2008) Petrik, M. and Scherrer, B. (2008). Biasing approximate dynamic programming with a lower discount factor. In Advances in Neural Information Processing Systems 21.
- Puterman, (1994) Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., 1st edition.
- Samuelson, (1937) Samuelson, P. A. (1937). A note on measurement of utility. Review of Economic Studies, 4.
- Scherrer, (2010) Scherrer, B. (2010). Should one compute the Temporal Difference fix point or minimize the Bellman Residual? The unified oblique projection view. In Proceedings of the 27th International Conference on Machine Learning.
- Schwartz, (1993) Schwartz, A. (1993). A reinforcement learning method for maximizing undiscounted rewards. In Proceedings of the 10th International Conference on Machine Learning.
- Singh et al., (1994) Singh, S. P., Jaakkola, T. S., and Jordan, M. I. (1994). Learning without state-estimation in partially observable Markovian decision processes. In Proceedings of the 11th International Conference on Machine Learning.
- Smallwood, (1966) Smallwood, R. D. (1966). Optimum policy regions for Markov processes with discounting. Operations Research, 14.
- Strens, (2000) Strens, M. J. A. (2000). A Bayesian framework for reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning.
- Sutton and Barto, (2018) Sutton, R. S. and Barto, A. G. (2018). Reinforcement Learning: An Introduction. MIT Press, 2nd edition.
- Szepesvári, (2010) Szepesvári, C. (2010). Algorithms for Reinforcement Learning. Morgan & Claypool.
- Tang and Abbeel, (2010) Tang, J. and Abbeel, P. (2010). On a connection between importance sampling and the likelihood ratio policy gradient. In Advances in Neural Information Processing Systems 23.
- Thomas, (2014) Thomas, P. (2014). Bias in natural actor-critic algorithms. In Proceedings of the 31st International Conference on Machine Learning.
- Thrun and Schwartz, (1993) Thrun, S. and Schwartz, A. (1993). Issues in using function approximation for reinforcement learning. In Proceedings of Connectionist Models Summer School.
- Tsitsiklis and Van Roy, (2002) Tsitsiklis, J. N. and Van Roy, B. (2002). On average versus discounted reward temporal-difference learning. Machine Learning, 49.
- van Hasselt, (2011) van Hasselt, H. P. (2011). Insights in Reinforcement Learning: formal analysis and empirical evaluation of temporal-difference learning algorithms. PhD thesis, Univ. Utrecht.
- Van Roy, (1998) Van Roy, B. (1998). Learning and Value Function Approximation in Complex Decision Processes. PhD thesis, MIT.
- Van Seijen et al., (2019) Van Seijen, H., Fatemi, M., and Tavakoli, A. (2019). Using a logarithmic mapping to enable lower discount factors in reinforcement learning. In Advances in Neural Information Processing Systems 32.
- Veinott, (1969) Veinott, A. F. (1969). Discrete Dynamic Programming with Sensitive Discount Optimality Criteria. The Annals of Mathematical Statistics, 40(5).
- White, (2017) White, M. (2017). Unifying task specification in reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning.
- Xu et al., (2018) Xu, Z., van Hasselt, H. P., and Silver, D. (2018). Meta-gradient reinforcement learning. In Advances in Neural Information Processing Systems 31.
- Zahavy et al., (2020) Zahavy, T., Xu, Z., Veeriah, V., Hessel, M., Oh, J., van Hasselt, H. P., Silver, D., and Singh, S. (2020). A self-tuning actor-critic algorithm. In Advances in Neural Information Processing Systems 33.