
Maxmin Q-learning: Controlling the Estimation Bias of Q-learning

Qingfeng Lan, Yangchen Pan, Alona Fyshe, Martha White
Department of Computing Science
University of Alberta
Edmonton, Alberta, Canada
{qlan3,pan6,alona,whitem}@ualberta.ca
Abstract

Q-learning suffers from overestimation bias, because it approximates the maximum action value using the maximum estimated action value. Algorithms have been proposed to reduce overestimation bias, but we lack an understanding of how bias interacts with performance, and the extent to which existing algorithms mitigate bias. In this paper, we 1) highlight that the effect of overestimation bias on learning efficiency is environment-dependent; 2) propose a generalization of Q-learning, called Maxmin Q-learning, which provides a parameter to flexibly control bias; 3) show theoretically that there exists a parameter choice for Maxmin Q-learning that leads to unbiased estimation with a lower approximation variance than Q-learning; and 4) prove the convergence of our algorithm in the tabular case, as well as convergence of several previous Q-learning variants, using a novel Generalized Q-learning framework. We empirically verify that our algorithm better controls estimation bias in toy environments, and that it achieves superior performance on several benchmark problems. Code is available at https://github.com/qlan3/Explorer.

1 Introduction

Q-learning (Watkins, 1989) is one of the most popular reinforcement learning algorithms. One of the reasons for this widespread adoption is the simplicity of the update. On each step, the agent updates its action value estimates towards the observed reward and the estimated value of the maximal action in the next state. This target represents the highest value the agent thinks it could obtain from the current state and action, given the observed reward.

Unfortunately, this simple update rule has been shown to suffer from overestimation bias (Thrun & Schwartz, 1993; van Hasselt, 2010). The agent updates with the maximum over action values, and this maximum might be large because an action's value actually is high, or it can be misleadingly high simply because of stochasticity or errors in the estimator. With many actions, there is a higher probability that one of the estimates is large simply due to stochasticity, and the agent will overestimate the value. This issue is particularly problematic under function approximation, and can significantly impede the quality of the learned policy (Thrun & Schwartz, 1993; Szita & Lőrincz, 2008; Strehl et al., 2009) or even lead to failures of Q-learning (Thrun & Schwartz, 1993). More recently, experiments across several domains suggest that this overestimation problem is common (Hado van Hasselt et al., 2016).

Double Q-learning (van Hasselt, 2010) was introduced to instead ensure underestimation bias. The idea is to maintain two unbiased, independent estimators of the action values. The expected action value from estimator one, evaluated at the maximal action of estimator two, is guaranteed not to overestimate the true maximum action value. Double DQN (Hado van Hasselt et al., 2016), the extension of this idea to Q-learning with neural networks, has been shown to significantly improve performance over Q-learning. However, this is not a complete answer to this problem, because trading overestimation bias for underestimation bias is not always desirable, as we show in our experiments.

Several other methods have been introduced to reduce overestimation bias, without fully moving towards underestimation. Weighted Double Q-learning (Zhang et al., 2017) uses a weighted combination of the Double Q-learning estimate, which likely has underestimation bias, and the Q-learning estimate, which likely has overestimation bias. Bias-corrected Q-Learning (Lee et al., 2013) reduces the overestimation bias through a bias correction term. Ensemble Q-learning and Averaged Q-learning (Anschel et al., 2017) take averages of multiple action values, to both reduce the overestimation bias and the estimation variance. However, with a finite number of action-value functions, the average operation in these two algorithms will never completely remove the overestimation bias, as the average of several overestimation biases is always positive. Further, these strategies do not guide how strongly we should correct for overestimation bias, nor how to determine—or control—the level of bias.

The overestimation bias also appears in the actor-critic setting (Fujimoto et al., 2018; Haarnoja et al., 2018). For example, Fujimoto et al. (2018) propose the Twin Delayed Deep Deterministic policy gradient algorithm (TD3) which reduces the overestimation bias by taking the minimum value between two critics. However, they do not provide a rigorous theoretical analysis for the effect of applying the minimum operator. There is also no theoretical guide for choosing the number of estimators such that the overestimation bias can be reduced to 0.

In this paper, we study the effects of overestimation and underestimation bias on learning performance, and use them to motivate a generalization of Q-learning called Maxmin Q-learning. Maxmin Q-learning directly mitigates the overestimation bias by using a minimization over multiple action-value estimates. Moreover, it is able to control the estimation bias, varying it from positive to negative, which helps improve learning efficiency as we show in later sections. We prove that, theoretically, with an appropriate number of action-value estimators, we are able to obtain an unbiased estimator with a lower approximation variance than Q-learning. We empirically verify our claims on several benchmarks. We study the convergence properties of our algorithm within a novel Generalized Q-learning framework, which is suitable for studying several of the recently proposed Q-learning variants. We also combine deep neural networks with Maxmin Q-learning (Maxmin DQN) and demonstrate its effectiveness in several benchmark domains.

2 Problem Setting

We formalize the problem as a Markov Decision Process (MDP), $(\mathcal{S},\mathcal{A},\mathrm{P},r,\gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathrm{P}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]$ gives the transition probabilities, $r:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}$ is the reward mapping, and $\gamma\in[0,1]$ is the discount factor. At each time step $t$, the agent observes a state $S_{t}\in\mathcal{S}$, takes an action $A_{t}\in\mathcal{A}$, and then transitions to a new state $S_{t+1}\in\mathcal{S}$ according to the transition probabilities $\mathrm{P}$ and receives a scalar reward $R_{t+1}=r(S_{t},A_{t},S_{t+1})\in\mathbb{R}$. The goal of the agent is to find a policy $\pi:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]$ that maximizes the expected return starting from some initial state.

Q-learning is an off-policy algorithm which attempts to learn the state-action values $Q:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$ for the optimal policy. It tries to solve for

Q^{*}(s,a)=\mathbb{E}\Big[R_{t+1}+\gamma\max_{a^{\prime}\in\mathcal{A}}Q^{*}(S_{t+1},a^{\prime})\ \Big|\ S_{t}=s,A_{t}=a\Big]

The optimal policy is to act greedily with respect to these action values: from each $s$, select $a$ from $\operatorname*{arg\,max}_{a\in\mathcal{A}}Q^{*}(s,a)$. The update rule for an approximation $Q$ for a sampled transition $s_{t},a_{t},r_{t+1},s_{t+1}$ is:

Q(s_{t},a_{t})\leftarrow Q(s_{t},a_{t})+\alpha\big(Y^{Q}_{t}-Q(s_{t},a_{t})\big)\qquad\text{for }Y^{Q}_{t}\mathrel{\overset{\text{def}}{=}}r_{t+1}+\gamma\max_{a^{\prime}\in\mathcal{A}}Q(s_{t+1},a^{\prime}) \quad (1)

where $\alpha$ is the step-size. The transition can be generated off-policy, from any behaviour that sufficiently covers the state space. This algorithm is known to converge in the tabular setting (Tsitsiklis, 1994), with some limited results for the function approximation setting (Melo & Ribeiro, 2007).
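For concreteness, a minimal tabular version of this update could be written as follows (a sketch only; the array layout and the default hyper-parameter values are illustrative assumptions rather than the settings used in our experiments):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99, terminal=False):
    """One tabular Q-learning update, as in Equation (1).

    Q is assumed to be a NumPy array of shape (num_states, num_actions).
    """
    # Bootstrap target: r + gamma * max_a' Q(s', a'), or just r at episode end.
    target = r if terminal else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```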

3 Understanding when Overestimation Bias Helps and Hurts

In this section, we briefly discuss the estimation bias issue, and empirically show that both overestimation and underestimation bias may improve learning performance, depending on the environment. This motivates our Maxmin Q-learning algorithm described in the next section, which allows us to flexibly control the estimation bias and reduce the estimation variance.

The overestimation bias occurs because the target $\max_{a^{\prime}\in\mathcal{A}}Q(s_{t+1},a^{\prime})$ is used in the Q-learning update. Because $Q$ is an approximation, it is probable that the approximation is higher than the true value for one or more of the actions. The maximum over these estimators, then, is likely to be skewed towards an overestimate. For example, even unbiased estimates $Q(s_{t+1},a^{\prime})$ for all $a^{\prime}$ will vary due to stochasticity: $Q(s_{t+1},a^{\prime})=Q^{*}(s_{t+1},a^{\prime})+e_{a^{\prime}}$, and for some actions, $e_{a^{\prime}}$ will be positive. As a result, $\mathbb{E}[\max_{a^{\prime}\in\mathcal{A}}Q(s_{t+1},a^{\prime})]\geq\max_{a^{\prime}\in\mathcal{A}}\mathbb{E}[Q(s_{t+1},a^{\prime})]=\max_{a^{\prime}\in\mathcal{A}}Q^{*}(s_{t+1},a^{\prime})$.

This overestimation bias, however, may not always be detrimental. And, further, in some cases, erring towards an underestimation bias can be harmful. Overestimation bias can help encourage exploration for overestimated actions, whereas underestimation bias might discourage exploration. In particular, we expect more overestimation bias in highly stochastic areas of the world; if those highly stochastic areas correspond to high-value regions, then encouraging exploration there might be beneficial. An underestimation bias might actually prevent an agent from learning that a region is high-value. Alternatively, if highly stochastic areas also have low values, overestimation bias might cause an agent to over-explore a low-value region.

We show this effect in the simple MDP shown in Figure 1. State $A$ has only two actions, Left and Right, each with a deterministic neutral reward. The Left action transitions to state $B$, where each of eight actions transitions to a terminal state with a highly stochastic reward whose mean is $\mu$. By selecting $\mu>0$, the stochastic region becomes high-value, and we expect overestimation bias to help and underestimation bias to hurt. By selecting $\mu<0$, the stochastic region becomes low-value, and we expect overestimation bias to hurt and underestimation bias to help.

Refer to caption
Figure 1: A simple episodic MDP, adapted from Figure 6.5 in Sutton & Barto (2018), which is used to highlight the difference between Double Q-learning and Q-learning. This MDP has two non-terminal states $A$ and $B$. Every episode starts from $A$, which has two actions: Left and Right. The Right action transitions to a terminal state with reward 0. The Left action transitions to state $B$ with reward 0. From state $B$, there are 8 actions that all transition to a terminal state with a reward $\mu+\xi$, where $\xi$ is drawn from a uniform distribution $U(-1,1)$. When $\mu>0$, the optimal action in state $A$ is Left; when $\mu<0$, it is Right.
Refer to caption
(a) $\mu=+0.1$ (overestimation helps)
Refer to caption
(b) $\mu=-0.1$ (underestimation helps)
Figure 2: Comparison of three algorithms using the simple MDP in Figure 1 with different values of $\mu$, and thus different expected rewards. For $\mu=+0.1$, shown in (a), the optimal $\epsilon$-greedy policy is to take the Left action with 95% probability. For $\mu=-0.1$, shown in (b), the optimal policy is to take the Left action with 5% probability. The reported distance is the absolute difference between the probability of taking the Left action under the learned policy and under the optimal $\epsilon$-greedy policy. All results were averaged over 5,000 runs.

We test Q-learning, Double Q-learning, and our new algorithm Maxmin Q-learning in this environment. Maxmin Q-learning (described fully in the next section) uses $N$ estimates of the action values in the targets. For $N=1$, it corresponds to Q-learning; otherwise, it progresses from overestimation bias at $N=1$ towards underestimation bias with increasing $N$. In the experiment, we used a discount factor $\gamma=1$; a replay buffer with size 100; an $\epsilon$-greedy behaviour with $\epsilon=0.1$; tabular action-values, initialized with a Gaussian distribution $\mathcal{N}(0,0.01)$; and a step-size of 0.01 for all algorithms.
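For reference, the MDP of Figure 1 can be simulated with a few lines of code (a sketch; the state and action encodings and the step interface are our own illustrative assumptions, not a released environment):

```python
import numpy as np

class SimpleMDP:
    """Episodic MDP of Figure 1: states A=0 and B=1; -1 denotes the terminal state."""

    def __init__(self, mu=0.1, n_b_actions=8, rng=None):
        self.mu, self.n_b_actions = mu, n_b_actions
        self.rng = rng or np.random.default_rng()

    def reset(self):
        self.state = 0  # every episode starts in A
        return self.state

    def step(self, action):
        # Returns (next_state, reward, done).
        if self.state == 0:          # state A: action 0 = Left, action 1 = Right
            if action == 0:
                self.state = 1       # Left -> B with reward 0
                return self.state, 0.0, False
            return -1, 0.0, True     # Right -> terminal with reward 0
        # state B: every action terminates with reward mu + U(-1, 1)
        return -1, self.mu + self.rng.uniform(-1.0, 1.0), True
```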

The results in Figure 2 verify our hypotheses for when overestimation and underestimation bias help and hurt. Double Q-learning underestimates too much for $\mu=+0.1$ and converges to a suboptimal policy. Q-learning learns the optimal policy the fastest, though for all values of $N=2,4,6,8$, Maxmin Q-learning does progress towards the optimal policy. All methods reach the optimal policy for $\mu=-0.1$, but now Double Q-learning gets there the fastest, followed by Maxmin Q-learning with larger $N$.

4 Maxmin Q-learning

In this section, we develop Maxmin Q-learning, a simple generalization of Q-learning designed to control the estimation bias, as well as reduce the estimation variance of action values. The idea is to maintain $N$ estimates of the action values, $Q^{i}$, and use the minimum of these estimates in the Q-learning target: $\max_{a^{\prime}}\min_{i\in\{1,\ldots,N\}}Q^{i}(s^{\prime},a^{\prime})$. For $N=1$, the update is simply Q-learning, and so likely has overestimation bias. As $N$ increases, the overestimation decreases; for some $N>1$, this maxmin estimator switches from an overestimate, in expectation, to an underestimate. We characterize the relationship between $N$ and the expected estimation bias below in Theorem 1. Note that Maxmin Q-learning uses a different mechanism to reduce overestimation bias than Double Q-learning; Maxmin Q-learning with $N=2$ is not Double Q-learning.

The full algorithm is summarized in Algorithm 1, and is a simple modification of Q-learning with experience replay. We use random subsamples of the observed data for each of the $N$ estimators, to make them nearly independent. To do this training online, we keep a replay buffer. On each step, a random estimator $i$ is chosen and updated using a mini-batch from the buffer. Multiple such updates can be performed on each step, just like in experience replay, meaning multiple estimators can be updated per step using different random mini-batches. In our experiments, to better match DQN, we simply do one update per step. Finally, it is also straightforward to incorporate target networks to get Maxmin DQN, by maintaining a target network for each estimator.

Input: step-size $\alpha$, exploration parameter $\epsilon>0$, number of action-value functions $N$
Initialize $N$ action-value functions $\{Q^{1},\ldots,Q^{N}\}$ randomly
Initialize empty replay buffer $D$
Observe initial state $s$
while Agent is interacting with the Environment do
       $Q^{min}(s,a)\leftarrow\min_{k\in\{1,\ldots,N\}}Q^{k}(s,a)$, $\forall a\in\mathcal{A}$
       Choose action $a$ by $\epsilon$-greedy based on $Q^{min}$
       Take action $a$, observe $r$, $s^{\prime}$
       Store transition $(s,a,r,s^{\prime})$ in $D$
       Select a subset $S$ from $\{1,\ldots,N\}$ (e.g., randomly select one $i$ to update)
       for $i\in S$ do
             Sample a random mini-batch of transitions $(s_{D},a_{D},r_{D},s^{\prime}_{D})$ from $D$
             Get update target: $Y^{MQ}\leftarrow r_{D}+\gamma\max_{a^{\prime}\in\mathcal{A}}Q^{min}(s^{\prime}_{D},a^{\prime})$
             Update action-value $Q^{i}$: $Q^{i}(s_{D},a_{D})\leftarrow Q^{i}(s_{D},a_{D})+\alpha[Y^{MQ}-Q^{i}(s_{D},a_{D})]$
       end for
       $s\leftarrow s^{\prime}$
end while
Algorithm 1: Maxmin Q-learning
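A compact tabular rendering of Algorithm 1 might look as follows (a sketch under our own assumptions about the interface and initialization; it is not the released code):

```python
import random
from collections import deque

import numpy as np

class MaxminQ:
    """Tabular Maxmin Q-learning with a small replay buffer (sketch of Algorithm 1)."""

    def __init__(self, n_states, n_actions, N=4, alpha=0.01, gamma=1.0,
                 epsilon=0.1, buffer_size=100, batch_size=1, seed=0):
        self.Q = [np.zeros((n_states, n_actions)) for _ in range(N)]
        self.buffer = deque(maxlen=buffer_size)
        self.N, self.alpha, self.gamma = N, alpha, gamma
        self.epsilon, self.batch_size = epsilon, batch_size
        self.n_actions = n_actions
        self.rng = random.Random(seed)

    def q_min(self, s):
        # Q^min(s, .): elementwise minimum over the N estimators.
        return np.min([Q[s] for Q in self.Q], axis=0)

    def act(self, s):
        # epsilon-greedy with respect to Q^min.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.n_actions)
        return int(np.argmax(self.q_min(s)))

    def step(self, s, a, r, s_next, terminal):
        self.buffer.append((s, a, r, s_next, terminal))
        i = self.rng.randrange(self.N)  # randomly pick one estimator to update
        batch = self.rng.sample(list(self.buffer),
                                min(self.batch_size, len(self.buffer)))
        for (sb, ab, rb, sb2, done) in batch:
            target = rb if done else rb + self.gamma * np.max(self.q_min(sb2))
            self.Q[i][sb, ab] += self.alpha * (target - self.Q[i][sb, ab])
```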

We now characterize the relation between the number of action-value functions used in Maxmin Q-learning and the estimation bias of action values. For compactness, we write $Q^{i}_{sa}$ instead of $Q^{i}(s,a)$. Each $Q^{i}_{sa}$ has random approximation error $e^{i}_{sa}$:

Q^{i}_{sa}=Q_{sa}^{*}+e^{i}_{sa}.

We assume that $e^{i}_{sa}$ is a uniform random variable $U(-\tau,\tau)$ for some $\tau>0$. The uniform random assumption was used by Thrun & Schwartz (1993) to demonstrate bias in Q-learning, and reflects that non-negligible positive and negative $e^{i}_{sa}$ are possible. Notice that for $N$ estimators with $n_{sa}$ samples, $\tau$ will be proportional to some function of $n_{sa}/N$, because the data will be shared amongst the $N$ estimators. For the general theorem, we use a generic $\tau$, and in the following corollary provide a specific form for $\tau$ in terms of $N$ and $n_{sa}$.

Recall that $M$ is the number of actions applicable at state $s^{\prime}$. Define the estimation bias $Z_{MN}$ for transition $s,a,r,s^{\prime}$ to be

Z_{MN}\mathrel{\overset{\text{def}}{=}}(r+\gamma\max_{a^{\prime}}Q_{s^{\prime}a^{\prime}}^{min})-(r+\gamma\max_{a^{\prime}}Q_{s^{\prime}a^{\prime}}^{*})=\gamma(\max_{a^{\prime}}Q_{s^{\prime}a^{\prime}}^{min}-\max_{a^{\prime}}Q_{s^{\prime}a^{\prime}}^{*})

where

Q_{sa}^{min}\mathrel{\overset{\text{def}}{=}}\min_{i\in\{1,\ldots,N\}}Q^{i}_{sa}=Q_{sa}^{*}+\min_{i\in\{1,\ldots,N\}}e^{i}_{sa}

We now show how the expected estimation bias $E[Z_{MN}]$ and the variance of $Q_{sa}^{min}$ are related to the number of action-value functions $N$ in Maxmin Q-learning.

Theorem 1

Under the conditions stated above, and assuming all actions share the same true action-value:

  (i) the expected estimation bias is

     E[Z_{MN}]=\gamma\tau[1-2t_{MN}]\quad\text{where }t_{MN}=\frac{M(M-1)\cdots 1}{(M+\frac{1}{N})(M-1+\frac{1}{N})\cdots(1+\frac{1}{N})}.

     $E[Z_{MN}]$ decreases as $N$ increases: $E[Z_{M,N=1}]=\gamma\tau\frac{M-1}{M+1}$ and $E[Z_{M,N\rightarrow\infty}]=-\gamma\tau$.

  (ii) the variance of the estimator is

     Var[Q_{sa}^{min}]=\frac{4N{\tau}^{2}}{(N+1)^{2}(N+2)}.

     $Var[Q_{sa}^{min}]$ decreases as $N$ increases: $Var[Q_{sa}^{min}]=\frac{{\tau}^{2}}{3}$ for $N=1$ and $Var[Q_{sa}^{min}]=0$ for $N\rightarrow\infty$.

Theorem 1 is a generalization of the first lemma in Thrun & Schwartz (1993); we provide the proof in Appendix A, as well as a visualization of the expected bias for varying $M$ and $N$. This theorem shows that the average estimation bias $E[Z_{MN}]$ decreases as $N$ increases. Thus, we can control the bias by changing the number of estimators in Maxmin Q-learning. Specifically, the average estimation bias can be reduced from positive to negative as $N$ increases. Notice that $E[Z_{MN}]=0$ when $t_{MN}=\frac{1}{2}$. This suggests that by choosing $N$ such that $t_{MN}\approx\frac{1}{2}$, we can reduce the bias to near 0.
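As a numerical sanity check of this closed form (a script of our own, used only for illustration), one can compare the expression for $E[Z_{MN}]$ against a direct Monte Carlo estimate of $\gamma E[\max_{a^{\prime}}\min_{i}e^{i}_{s^{\prime}a^{\prime}}]$ under the same uniform-noise assumption:

```python
import numpy as np

def t_MN(M, N):
    # Closed form from Theorem 1: prod_{k=1}^{M} k / (k + 1/N).
    ks = np.arange(1, M + 1)
    return float(np.prod(ks / (ks + 1.0 / N)))

def expected_bias(M, N, gamma=1.0, tau=1.0):
    return gamma * tau * (1.0 - 2.0 * t_MN(M, N))

def simulated_bias(M, N, gamma=1.0, tau=1.0, runs=200_000, seed=0):
    # Errors e^i_{s'a'} ~ U(-tau, tau); when all actions share the same true value,
    # the bias is gamma * E[max_a' min_i e^i_{s'a'}].
    rng = np.random.default_rng(seed)
    e = rng.uniform(-tau, tau, size=(runs, M, N))
    return gamma * float(e.min(axis=2).max(axis=1).mean())

if __name__ == "__main__":
    for N in (1, 2, 4, 8):
        print(N, round(expected_bias(8, N), 4), round(simulated_bias(8, N), 4))
```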

Furthermore, $Var[Q_{sa}^{min}]$ decreases as $N$ increases. This indicates that we can control the estimation variance of the target action value through $N$. We show just this in the following corollary. The subtlety is that with increasing $N$, each estimator will receive less data. The fair comparison is to compare the variance of a single estimator that uses all of the data, as compared to the maxmin estimator which shares the samples across $N$ estimators. We show that there is an $N$ such that the variance is lower, which arises largely due to the fact that the variance of each estimator decreases linearly in $n$, but the $\tau$ parameter for each estimator only decreases at a square-root rate in the number of samples.

Corollary 1

Assuming the $n_{sa}$ samples are evenly allocated amongst the $N$ estimators, then $\tau=\sqrt{3\sigma^{2}N/n_{sa}}$, where $\sigma^{2}$ is the variance of samples for $(s,a)$ and, for $Q_{sa}$ the estimator that uses all $n_{sa}$ samples for a single estimate,

Var[Q_{sa}^{min}]=\frac{12N^{2}}{(N+1)^{2}(N+2)}Var[Q_{sa}].

Under this uniform random noise assumption, for $N\geq 8$, $Var[Q_{sa}^{min}]<Var[Q_{sa}]$.
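Numerically, the ratio $12N^{2}/((N+1)^{2}(N+2))$ equals 1 at $N=1$, rises slightly above 1 for intermediate $N$, and drops below 1 from $N=8$ onward; a two-line check (ours, for illustration only):

```python
def variance_ratio(N):
    # Var[Q^min_sa] / Var[Q_sa] from Corollary 1.
    return 12 * N**2 / ((N + 1)**2 * (N + 2))

for N in (1, 2, 4, 7, 8, 16):
    print(N, round(variance_ratio(N), 3))
```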

5 Experiments

In this section, we first investigate robustness to reward variance, in a simple environment (Mountain Car) in which we can perform more exhaustive experiments. Then, we investigate performance in seven benchmark environments.

Robustness under increasing reward variance in Mountain Car

Mountain Car (Sutton & Barto, 2018) is a classic testbed in Reinforcement Learning, where the agent receives a reward of $-1$ per step with $\gamma=1$, until the car reaches the goal position and the episode ends. In our experiment, we modify the rewards to be stochastic with the same mean value: the reward signal is sampled from a Gaussian distribution $\mathcal{N}(-1,\sigma^{2})$ on each time step. An agent should learn to reach the goal position in as few steps as possible.
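This modification can be implemented as a thin wrapper around the standard environment; the sketch below assumes the Gym API and is not necessarily how our experiments were implemented:

```python
import numpy as np
import gym

class NoisyRewardWrapper(gym.RewardWrapper):
    """Replace Mountain Car's deterministic -1 reward with a draw from N(-1, sigma^2)."""

    def __init__(self, env, sigma):
        super().__init__(env)
        self.sigma = sigma

    def reward(self, reward):
        # The mean is kept at -1 so the optimal policy is unchanged.
        return float(np.random.normal(loc=-1.0, scale=self.sigma))

# Example: env = NoisyRewardWrapper(gym.make("MountainCar-v0"), sigma=np.sqrt(10))
```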

The experimental setup is as follows. We trained each algorithm for 1,000 episodes. The number of steps to reach the goal position in the last training episode was used as the performance measure: the fewer steps, the better the performance. All experimental results were averaged over 100 runs. The key algorithm settings included the function approximator, step-sizes, exploration parameter, and replay buffer size. All algorithms used $\epsilon$-greedy with $\epsilon=0.1$ and a buffer size of 100. For each algorithm, the best step-size was chosen from $\{0.005,0.01,0.02,0.04,0.08\}$, separately for each reward setting. Tile-coding was used to approximate the action-value function, with 8 tilings and each tile covering 1/8th of the bounded distance in each dimension. For Maxmin Q-learning, we randomly chose one action-value function to update at each step.

As shown in Figure 3, when the reward variance is small, the performance of Q-learning, Double Q-learning, Averaged Q-learning, and Maxmin Q-learning is comparable. However, as the variance increases, Q-learning, Double Q-learning, and Averaged Q-learning become much less stable than Maxmin Q-learning. In fact, when the variance was very high ($\sigma=50$, see Appendix C.2), Q-learning and Averaged Q-learning failed to reach the goal position within 5,000 steps, and Double Q-learning produced runs of more than 400 steps, even after many episodes.

Refer to caption
(a) Robustness under increasing reward variance
Refer to caption
(b) $Reward\sim\mathcal{N}(-1,10)$
Figure 3: Comparison of four algorithms on Mountain Car under different reward variances. The lines in (a) show the average number of steps taken in the last episode, with one standard error. The lines in (b) show the number of steps to reach the goal position during training when the reward variance is $\sigma^{2}=10$. All results were averaged across 100 runs, with standard errors. Additional experiments with further elevated $\sigma^{2}$ can be found in Appendix C.2.
Results on Benchmark Environments

To evaluate Maxmin DQN, we chose seven games from Gym (Brockman et al., 2016), the PyGame Learning Environment (PLE) (Tasfi, 2016), and MinAtar (Young & Tian, 2019): Lunarlander, Catcher, Pixelcopter, Asterix, Seaquest, Breakout, and Space Invaders. For games in MinAtar (i.e. Asterix, Seaquest, Breakout, and Space Invaders), we reused the hyper-parameters and neural network settings of Young & Tian (2019), and the step-size was chosen from $[3\times 10^{-3},10^{-3},3\times 10^{-4},10^{-4},3\times 10^{-5}]$. For Lunarlander, Catcher, and Pixelcopter, the neural network was a multi-layer perceptron with hidden layers fixed to $[64,64]$. The discount factor was 0.99. The size of the replay buffer was 10,000. The weights of the neural networks were optimized by RMSprop with gradient clip 5. The batch size was 32. The target network was updated every 200 frames. $\epsilon$-greedy was applied as the exploration strategy, with $\epsilon$ decreasing linearly from 1.0 to 0.01 over 1,000 steps; after 1,000 steps, $\epsilon$ was fixed at 0.01. For Lunarlander, the best step-size was chosen from $[3\times 10^{-3},10^{-3},3\times 10^{-4},10^{-4},3\times 10^{-5}]$. For Catcher and Pixelcopter, the best step-size was chosen from $[10^{-3},3\times 10^{-4},10^{-4},3\times 10^{-5},10^{-5}]$.

For both Maxmin DQN and Averaged DQN, the number of target networks $N$ was chosen from $[2,3,4,5,6,7,8,9]$, and we randomly chose one action-value function to update at each step. We first trained each algorithm in a game for a certain number of steps. After that, each algorithm was tested by running 100 test episodes with $\epsilon$-greedy where $\epsilon=0.01$. Results were averaged over 20 runs for each algorithm, with learning curves shown for the best hyper-parameter setting (see Appendix C.3 for the parameter sensitivity curves).
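For reference, with target networks the Maxmin bootstrap target can be computed roughly as follows (a PyTorch sketch under our own assumptions about tensor shapes; it is not the released code):

```python
import torch

def maxmin_dqn_target(reward, next_state, done, target_nets, gamma=0.99):
    """Maxmin DQN target: r + gamma * max_a' min_i Q_i(s', a') for non-terminal s'.

    target_nets: list of N target networks mapping a state batch to Q-values (B, A).
    reward, done: tensors of shape (B,); done is 1.0 for terminal transitions.
    """
    with torch.no_grad():
        q_all = torch.stack([net(next_state) for net in target_nets])   # (N, B, A)
        q_min = q_all.min(dim=0).values                                 # (B, A)
        return reward + gamma * (1.0 - done) * q_min.max(dim=1).values  # (B,)
```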

We see from Figure 4 that Maxmin DQN performs as well as or better than the other algorithms. In the environments where final performance is noticeably better (Pixelcopter, Lunarlander, and Asterix), initial learning is slower. A possible explanation for this is that the Maxmin agent explored more extensively early on, promoting better final performance. We additionally show on Pixelcopter and Asterix that for smaller $N$, Maxmin DQN learns faster but reaches suboptimal performance, behaving more like Q-learning, whereas for larger $N$ it learns more slowly but reaches better final performance.

Refer to caption
(a) Catcher
Refer to caption
(b) Lunarlander
Refer to caption
(c) Space Invaders
Refer to caption
(d) Breakout
Refer to caption
(e) Pixelcopter
Refer to caption
(f) Asterix
Refer to caption
(g) Seaquest
Refer to caption
(h) Pixelcopter with varying $N$
Refer to caption
(i) Asterix with varying $N$
Figure 4: Learning curves on the seven benchmark environments. The depicted return is averaged over the last 100 episodes, and the curves are smoothed using an exponential average, to match previously reported results (Young & Tian, 2019). The results were averaged over 20 runs, with the shaded area representing one standard error. Plots (h) and (i) show the performance of Maxmin DQN on Pixelcopter and Asterix with different $N$, highlighting that larger $N$ seems to result in slower early learning but better final performance in both environments.

6 Convergence Analysis of Maxmin Q-learning

In this section, we show that Maxmin Q-learning is convergent in the tabular setting. We do so by providing a more general result for what we call Generalized Q-learning: Q-learning where the bootstrap target uses a function $G$ of $N$ action values. The main condition on $G$ is that it maintains relative maximum values, as stated in Assumption 1. We use this more general result to prove Maxmin Q-learning is convergent, and then discuss how it provides convergence results for Q-learning, Ensemble Q-learning, Averaged Q-learning, and Historical Best Q-learning as special cases.

Many variants of Q-learning have been proposed, including Double Q-learning (van Hasselt, 2010), Weighted Double Q-learning (Zhang et al., 2017), Ensemble Q-learning (Anschel et al., 2017), Averaged Q-learning (Anschel et al., 2017), and Historical Best Q-learning (Yu et al., 2018). These algorithms differ in their estimate of the one-step bootstrap target. To encompass all variants, the target action-value of Generalized Q-learning, $Y^{GQ}$, is defined based on action-value estimates along both dimensions (the $N$ estimators and the $K$ most recent updates):

Y^{GQ}=r+\gamma Q_{s^{\prime}}^{GQ}(t-1) \quad (2)

where $t$ is the current time step and the action-value function $Q_{s}^{GQ}(t)$ is a function of $Q_{s}^{1}(t-K),\ldots,Q_{s}^{1}(t-1),\ldots,Q_{s}^{N}(t-K),\ldots,Q_{s}^{N}(t-1)$:

Q_{s}^{GQ}(t)=G\left(\begin{array}{ccc}Q_{s}^{1}(t-K)&\ldots&Q_{s}^{1}(t-1)\\ Q_{s}^{2}(t-K)&\ldots&Q_{s}^{2}(t-1)\\ \vdots&\ddots&\vdots\\ Q_{s}^{N}(t-K)&\ldots&Q_{s}^{N}(t-1)\end{array}\right) \quad (3)

For simplicity, the vector $(Q^{GQ}_{sa}(t))_{a\in\mathcal{A}}$ is denoted as $Q^{GQ}_{s}(t)$, and similarly for $Q^{i}_{s}(t)$. The corresponding update rule is

Q_{sa}^{i}(t)\leftarrow Q_{sa}^{i}(t-1)+\alpha_{sa}^{i}(t-1)(Y^{GQ}-Q_{sa}^{i}(t-1)) \quad (4)

For different $G$ functions, Generalized Q-learning reduces to different variants of Q-learning, including Q-learning itself. For example, Generalized Q-learning reduces to Q-learning simply by setting $K=1$, $N=1$ with $G(Q_{s})=\max_{a\in\mathcal{A}}Q_{sa}$. Double Q-learning can be specified with $K=1$, $N=2$, and $G(Q_{s}^{1},Q_{s}^{2})=Q^{2}_{s,\arg\max_{a^{\prime}\in\mathcal{A}}Q_{sa^{\prime}}^{1}}$.
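To make the role of $G$ concrete, the instantiations above (for $K=1$), together with the ensemble variant discussed later, could be written as follows, assuming $Q_s$ is an array of shape $(N,|\mathcal{A}|)$ with one row of action values per estimator (an illustrative sketch, not notation from the paper):

```python
import numpy as np

def G_q_learning(Q_s):    # N = 1: standard Q-learning target value
    return float(np.max(Q_s[0]))

def G_double_q(Q_s):      # N = 2: evaluate Q^2 at the argmax action of Q^1
    return float(Q_s[1, np.argmax(Q_s[0])])

def G_maxmin(Q_s):        # Maxmin Q-learning: max over actions of the elementwise min
    return float(np.max(Q_s.min(axis=0)))

def G_ensemble(Q_s):      # Ensemble Q-learning: max over actions of the average
    return float(np.max(Q_s.mean(axis=0)))
```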

We first introduce Assumption 1 for function GG in Generalized Q-learning, and then state the theorem. The proof can be found in Appendix B.

Assumption 1 (Conditions on $G$)

Let $G:\mathbb{R}^{nNK}\mapsto\mathbb{R}$ with $G(Q)=q$, where $Q=(Q_{a}^{ij})\in\mathbb{R}^{nNK}$, $a\in\mathcal{A}$ with $|\mathcal{A}|=n$, $i\in\{1,\ldots,N\}$, $j\in\{0,\ldots,K-1\}$, and $q\in\mathbb{R}$.

  (i) If $Q_{a}^{ij}=Q_{a}^{kl}$, $\forall i,k$, $\forall j,l$, and $\forall a$, then $q=\max_{a}{Q_{a}^{ij}}$.

  (ii) $\forall Q,Q^{\prime}\in\mathbb{R}^{nNK}$, $\mid G(Q)-G(Q^{\prime})\mid\leq\max_{a,i,j}\mid Q_{a}^{ij}-{Q^{\prime}}_{a}^{ij}\mid$.

We can verify that Assumption 1 holds for Maxmin Q-learning. Set $K=1$ and let $N$ be a positive integer. Let $Q_{s}=(Q^{1}_{s},\ldots,Q^{N}_{s})$ and define $G^{MQ}(Q_{s})=\max_{a\in\mathcal{A}}\min_{i\in\{1,\ldots,N\}}Q^{i}_{sa}$. It is easy to check that part (i) of Assumption 1 is satisfied. Part (ii) is also satisfied because

\mid G^{MQ}(Q_{s})-G^{MQ}(Q_{s}^{\prime})\mid=\mid\max_{a}\min_{i}Q^{i}_{sa}-\max_{a^{\prime}}\min_{i^{\prime}}Q^{{\prime}i^{\prime}}_{sa^{\prime}}\mid\leq\max_{a,i}\mid Q^{i}_{sa}-Q^{{\prime}i}_{sa}\mid.
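The final inequality is the standard non-expansiveness of the max and min operators in the sup norm; spelling out the intermediate step (not written out above): for every $a$,

\min_{i}Q^{i}_{sa}\leq\min_{i}\big(Q^{{\prime}i}_{sa}+\max_{a,i}\mid Q^{i}_{sa}-Q^{{\prime}i}_{sa}\mid\big)=\min_{i}Q^{{\prime}i}_{sa}+\max_{a,i}\mid Q^{i}_{sa}-Q^{{\prime}i}_{sa}\mid,

and taking $\max_{a}$ on both sides, then exchanging the roles of $Q_{s}$ and $Q^{\prime}_{s}$, yields the bound.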
Assumption 2 (Conditions on the step-sizes)

There exists some (deterministic) constant $C$ such that for every $(s,a)\in\mathcal{S}\times\mathcal{A}$ and $i\in\{1,\ldots,N\}$, $0\leq\alpha_{sa}^{i}(t)\leq 1$, and, with probability 1,

\sum_{t=0}^{\infty}(\alpha_{sa}^{i}(t))^{2}\leq C,\quad\sum_{t=0}^{\infty}\alpha_{sa}^{i}(t)=\infty

Theorem 2

Assume a finite MDP $(\mathcal{S},\mathcal{A},\mathrm{P},R)$ and that Assumptions 1 and 2 hold. Then the action-value functions in Generalized Q-learning, using the tabular update in Equation (4), will converge to the optimal action-value function with probability 1, in either of the following cases: (i) $\gamma<1$, or (ii) $\gamma=1$, $\forall a\in\mathcal{A}$, $Q^{i}_{s_{1}a}(t=0)=0$, where $s_{1}$ is an absorbing state and all policies are proper.

As shown above, because the function $G$ for Maxmin Q-learning satisfies Assumption 1, by Theorem 2 it converges. Next, we apply Theorem 2 to Q-learning and its variants, proving the convergence of these algorithms in the tabular case. For Q-learning, set $K=1$ and $N=1$, and let $G^{Q}(Q_{s})=\max_{a\in\mathcal{A}}Q_{sa}$; it is straightforward to check that Assumption 1 holds for $G^{Q}$. For Ensemble Q-learning, set $K=1$ and let $N$ be a positive integer, with $G^{EQ}((Q^{1}_{s},\ldots,Q^{N}_{s}))=\max_{a\in\mathcal{A}}\frac{1}{N}\sum_{i=1}^{N}Q^{i}_{sa}$; it is easy to check that Assumption 1 is satisfied. For Averaged Q-learning, the proof is similar to Ensemble Q-learning except that $N=1$ and $K$ is a positive integer. For Historical Best Q-learning, set $N=1$ and $K$ to be a positive integer. We assume that all auxiliary action-value functions are selected from action-value functions at most $K$ updates ago. Define $G^{HBQ}$ to be the largest action-value among $Q_{sa}(t-1),\ldots,Q_{sa}(t-K)$ for state $s$. Assumption 1 is satisfied and convergence is guaranteed.

7 Conclusion

Overestimation bias is a byproduct of Q-learning, stemming from the selection of a maximal value to estimate the expected maximal value. In practice, overestimation bias leads to poor performance in a variety of settings. Though multiple Q-learning variants have been proposed, Maxmin Q-learning is the first solution that allows for flexible control of the bias, allowing for overestimation or underestimation determined by the choice of $N$ and the environment. We showed theoretically that we can decrease the estimation bias and the estimation variance by choosing an appropriate number $N$ of action-value functions. We empirically showed the advantages of Maxmin Q-learning, both on toy problems where we investigated the effect of reward noise, and on several benchmark environments. Finally, we introduced a new Generalized Q-learning framework, which we used to prove the convergence of Maxmin Q-learning as well as several other Q-learning variants that use $N$ action-value estimates.

Acknowledgments

We would like to thank Huizhen Yu and Yi Wan for their valuable feedback and helpful discussion.

References

  • Anschel et al. (2017) Oron Anschel, Nir Baram, and Nahum Shimkin. Averaged-DQN: Variance Reduction and Stabilization for Deep Reinforcement Learning. In International Conference on Machine Learning, pp. 176–185, 2017.
  • Bertsekas & Tsitsiklis (1989) Dimitri P Bertsekas and John N Tsitsiklis. Parallel and Distributed Computation: Numerical Methods, volume 23. Prentice hall Englewood Cliffs, NJ, 1989.
  • Bertsekas & Tsitsiklis (1996) Dimitri P Bertsekas and John N Tsitsiklis. Neuro-dynamic Programming, volume 5. Athena Scientific Belmont, MA, 1996.
  • Brockman et al. (2016) Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
  • David & Nagaraja (2004) Herbert Aron David and Haikady Navada Nagaraja. Order Statistics. Encyclopedia of Statistical Sciences, 2004.
  • Fujimoto et al. (2018) Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, pp. 1587–1596, 2018.
  • Haarnoja et al. (2018) Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pp. 1861–1870, 2018.
  • Hado van Hasselt et al. (2016) Hado van Hasselt, Arthur Guez, and David Silver. Deep Reinforcement Learning with Double Q-learning. In AAAI Conference on Artificial Intelligence, 2016.
  • Lee et al. (2013) Donghun Lee, Boris Defourny, and Warren B. Powell. Bias-corrected Q-learning to Control Max-operator Bias in Q-learning. In IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, pp.  93–99, 2013.
  • Melo & Ribeiro (2007) Francisco S Melo and M Isabel Ribeiro. Q-learning with Linear Function Approximation. In International Conference on Computational Learning Theory, pp.  308–322, 2007.
  • Strehl et al. (2009) Alexander L. Strehl, Lihong Li, and Michael L. Littman. Reinforcement Learning in Finite MDPs: PAC Analysis. Journal of Machine Learning Research, 10(Nov):2413–2444, 2009.
  • Sutton & Barto (2018) Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, second edition, 2018.
  • Szita & Lőrincz (2008) István Szita and András Lőrincz. The Many Faces of Optimism: A Unifying Approach. In International Conference on Machine learning, pp. 1048–1055. ACM, 2008.
  • Tasfi (2016) Norman Tasfi. Pygame learning environment. https://github.com/ntasfi/PyGame-Learning-Environment, 2016.
  • Thrun & Schwartz (1993) Sebastian Thrun and Anton Schwartz. Issues in Using Function Approximation for Reinforcement Learning. In Fourth Connectionist Models Summer School, 1993.
  • Tsitsiklis (1994) John N Tsitsiklis. Asynchronous Stochastic Approximation and Q-learning. Machine learning, 1994.
  • van Hasselt (2010) Hado van Hasselt. Double Q-learning. In Advances in Neural Information Processing Systems, pp. 2613–2621, 2010.
  • Watkins (1989) Chris Watkins. Learning from Delayed Rewards. PhD thesis, King’s College, Cambridge, 1989.
  • Young & Tian (2019) Kenny Young and Tian Tian. MinAtar: An Atari-inspired Testbed for More Efficient Reinforcement Learning Experiments. arXiv preprint arXiv:1903.03176, 2019.
  • Yu et al. (2018) Wenwu Yu, Rui Wang, Ruiying Li, Jing Gao, and Xiaohui Hu. Historical Best Q-Networks for Deep Reinforcement Learning. In International Conference on Tools with Artificial Intelligence, pp.  6–11, 2018.
  • Zhang et al. (2017) Zongzhang Zhang, Zhiyuan Pan, and Mykel J. Kochenderfer. Weighted Double Q-learning. In International Joint Conference on Artificial Intelligence, pp.  3455–3461, 2017.

Appendix A The Proof of Theorem 1

We first present Lemma 1 here as a tool to prove Theorem 1. Note that the first three properties in this lemma are well-known results of order statistics (David & Nagaraja, 2004).

Lemma 1

Let $X_{1},\ldots,X_{N}$ be $N$ i.i.d. random variables from an absolutely continuous distribution with probability density function (PDF) $f(x)$ and cumulative distribution function (CDF) $F(x)$. Denote $\mu\mathrel{\overset{\text{def}}{=}}E[X_{i}]$ and $\sigma^{2}\mathrel{\overset{\text{def}}{=}}Var[X_{i}]<+\infty$. Set $X_{1:N}\mathrel{\overset{\text{def}}{=}}\min_{i\in\{1,\ldots,N\}}X_{i}$ and $X_{N:N}\mathrel{\overset{\text{def}}{=}}\max_{i\in\{1,\ldots,N\}}X_{i}$. Denote the PDF and CDF of $X_{1:N}$ as $f_{1:N}(x)$ and $F_{1:N}(x)$, respectively. Similarly, denote the PDF and CDF of $X_{N:N}$ as $f_{N:N}(x)$ and $F_{N:N}(x)$, respectively. We then have

  (i) $\mu-\frac{(N-1)\sigma}{\sqrt{2N-1}}\leq E[X_{1:N}]\leq\mu$ and $E[X_{1:N+1}]\leq E[X_{1:N}]$.

  (ii) $F_{1:N}(x)=1-(1-F(x))^{N}$ and $f_{1:N}(x)=Nf(x)(1-F(x))^{N-1}$.

  (iii) $F_{N:N}(x)=(F(x))^{N}$ and $f_{N:N}(x)=Nf(x)(F(x))^{N-1}$.

  (iv) If $X_{1},\ldots,X_{N}\sim U(-\tau,\tau)$, we have $Var(X_{1:N})=\frac{4N{\tau}^{2}}{(N+1)^{2}(N+2)}$ and $Var(X_{1:N+1})<Var(X_{1:N})\leq Var(X_{1:1})={\sigma}^{2}$ for any positive integer $N$.

Proof.

  (i) By the definition of $X_{1:N}$, we have $X_{1:N+1}\leq X_{1:N}$. Thus $E[X_{1:N+1}]\leq E[X_{1:N}]$. Since $E[X_{1:1}]=E[X_{1}]=\mu$, we have $E[X_{1:N}]\leq E[X_{1:1}]=\mu$. The proof of $\mu-\frac{(N-1)\sigma}{\sqrt{2N-1}}\leq E[X_{1:N}]$ can be found in (David & Nagaraja, 2004, Chapter 4, Section 4.2).

  (ii) We first consider the CDF of $X_{1:N}$: $F_{1:N}(x):=P(X_{1:N}\leq x)=1-P(X_{1:N}>x)=1-P(X_{1}>x,\ldots,X_{N}>x)=1-P(X_{1}>x)\cdots P(X_{N}>x)=1-(1-F(x))^{N}$. Then the PDF of $X_{1:N}$ is $f_{1:N}(x):=\frac{dF_{1:N}}{dx}=Nf(x)(1-F(x))^{N-1}$.

  (iii) Similar to (ii), we first consider the CDF of $X_{N:N}$: $F_{N:N}(x):=P(X_{N:N}\leq x)=P(X_{1}\leq x,\ldots,X_{N}\leq x)=P(X_{1}\leq x)\cdots P(X_{N}\leq x)=(F(x))^{N}$. Then the PDF of $X_{N:N}$ is $f_{N:N}(x):=\frac{dF_{N:N}}{dx}=Nf(x)(F(x))^{N-1}$.

  (iv) Since $X_{1},\ldots,X_{N}\sim U(-\tau,\tau)$, we have $F(x)=\frac{1}{2}+\frac{x}{2\tau}$ and $f(x)=\frac{1}{2\tau}$. Then $Var(X_{1:N})=E[{X_{1:N}}^{2}]-{E[X_{1:N}]}^{2}=4{\tau}^{2}(\frac{2}{(N+1)(N+2)}-\frac{1}{(N+1)^{2}})=\frac{4N{\tau}^{2}}{(N+1)^{2}(N+2)}$. It is easy to check that $Var(X_{1:N+1})<Var(X_{1:N})\leq Var(X_{1:1})={\sigma}^{2}$ for any positive integer $N$.

 

Next, we prove Theorem 1.

Proof. Let $f(x)$ and $F(x)$ be the PDF and CDF of $e_{sa}$, respectively. Similarly, let $f_{N}(x)$ and $F_{N}(x)$ be the PDF and CDF of $\min_{i\in\{1,\ldots,N\}}e^{i}_{sa}$. Since $e_{sa}$ is sampled from $U(-\tau,\tau)$, we have $f(x)=\frac{1}{2\tau}$ and $F(x)=\frac{1}{2}+\frac{x}{2\tau}$. By Lemma 1, $f_{N}(x)=Nf(x)[1-F(x)]^{N-1}=\frac{N}{2\tau}(\frac{1}{2}-\frac{x}{2\tau})^{N-1}$ and $F_{N}(x)=1-(1-F(x))^{N}=1-(\frac{1}{2}-\frac{x}{2\tau})^{N}$. The expectation of $Z_{MN}$ is

E[Z_{MN}] =\gamma E[\max_{a^{\prime}}Q_{s^{\prime}a^{\prime}}^{min}-\max_{a^{\prime}}Q_{s^{\prime}a^{\prime}}^{*}]
=\gamma E[\max_{a^{\prime}}\min_{i\in\{1,\ldots,N\}}e^{i}_{s^{\prime}a^{\prime}}]
=\gamma\int_{-\tau}^{\tau}Mxf_{N}(x){F_{N}(x)}^{M-1}dx
=\gamma\int_{-\tau}^{\tau}MN\frac{x}{2\tau}(\tfrac{1}{2}-\tfrac{x}{2\tau})^{N-1}[1-(\tfrac{1}{2}-\tfrac{x}{2\tau})^{N}]^{M-1}dx
=\gamma\int_{-\tau}^{\tau}x\,d{[1-(\tfrac{1}{2}-\tfrac{x}{2\tau})^{N}]^{M}}
=\gamma\tau-\gamma\int_{-\tau}^{\tau}[1-(\tfrac{1}{2}-\tfrac{x}{2\tau})^{N}]^{M}dx
=\gamma\tau[1-2\int_{0}^{1}(1-y^{N})^{M}dy]\quad\quad(y\mathrel{\overset{\text{def}}{=}}\tfrac{1}{2}-\tfrac{x}{2\tau})

Let $t_{MN}=\int_{0}^{1}(1-y^{N})^{M}dy$, so that $E[Z_{MN}]=\gamma\tau[1-2t_{MN}]$. Substituting $t=y^{N}$, we obtain

t_{MN} =\frac{1}{N}\int_{0}^{1}t^{\frac{1}{N}-1}(1-t)^{M}dt
=\frac{1}{N}B(\tfrac{1}{N},M+1)
=\frac{1}{N}\frac{\Gamma(M+1)\Gamma(\frac{1}{N})}{\Gamma(M+\frac{1}{N}+1)}
=\frac{\Gamma(M+1)\Gamma(1+\frac{1}{N})}{\Gamma(M+\frac{1}{N}+1)}
=\frac{M(M-1)\cdots 1}{(M+\frac{1}{N})(M-1+\frac{1}{N})\cdots(1+\frac{1}{N})}

Each term in the denominator decreases as $N$ increases, because $1/N$ gets smaller, so $t_{MN}$ increases with $N$; from the closed form, $t_{M,N=1}=\frac{1}{M+1}$ and $t_{M,N\rightarrow\infty}=1$. Using this, we conclude that $E[Z_{MN}]$ decreases as $N$ increases, with $E[Z_{M,N=1}]=\gamma\tau\frac{M-1}{M+1}$ and $E[Z_{M,N\rightarrow\infty}]=-\gamma\tau$.

By Lemma 1, the variance of $Q_{sa}^{min}$ is

Var[Q_{sa}^{min}]=\frac{4N{\tau}^{2}}{(N+1)^{2}(N+2)}

$Var[Q_{sa}^{min}]$ decreases as $N$ increases. In particular, $Var[Q_{sa}^{min}]=\frac{{\tau}^{2}}{3}$ for $N=1$ and $Var[Q_{sa}^{min}]=0$ for $N\rightarrow\infty$.

The bias-variance trade-off of Maxmin Q-learning is illustrated by the empirical results in Figure 5, which support Theorem 1. For each $M$, $N$ can be selected such that the absolute value of the expected estimation bias is close to 0, according to Theorem 1. As $M$ increases, we can adjust $N$ to reduce both the estimation variance and the estimation bias.
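In practice, a suitable $N$ can be read off directly from the closed form in Theorem 1, for example by scanning for the $N$ whose analytic bias is closest to zero for a given $M$ (a small helper of our own, shown only for illustration):

```python
import numpy as np

def t_MN(M, N):
    # Closed form from Theorem 1: prod_{k=1}^{M} k / (k + 1/N).
    ks = np.arange(1, M + 1)
    return float(np.prod(ks / (ks + 1.0 / N)))

def least_biased_N(M, max_N=64):
    # N minimizing |E[Z_MN]| = gamma * tau * |1 - 2 t_MN| (gamma, tau > 0 drop out).
    return min(range(1, max_N + 1), key=lambda N: abs(1.0 - 2.0 * t_MN(M, N)))

print({M: least_biased_N(M) for M in (2, 4, 8, 16)})
```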

Refer to caption
(a) Bias control
Refer to caption
(b) Variance reduction
Figure 5: Empirical results of Theorem 1. $M$ is the number of available actions for some state $s$, and $N$ is the number of action-value functions in Maxmin Q-learning. In (a), we show a heat map of bias control in Maxmin Q-learning. In (b), we show how the variance ratio of $Q_{sa}^{min}$ and $Q_{sa}$ (i.e. $Var[Q_{sa}^{min}]/Var[Q_{sa}]$) decreases as $N$ increases. For a better comparison, we set $\gamma\tau=1$.

Finally, we prove the result of the Corollary.

Corollary 1. Assuming the $n_{sa}$ samples are evenly allocated amongst the $N$ estimators, then $\tau=\sqrt{3\sigma^{2}N/n_{sa}}$, where $\sigma^{2}$ is the variance of samples for $(s,a)$ and, for $Q_{sa}$ the estimator that uses all $n_{sa}$ samples for a single estimate,

Var[Q_{sa}^{min}]=\frac{12N^{2}}{(N+1)^{2}(N+2)}Var[Q_{sa}].

Under this uniform random noise assumption, for $N\geq 8$, $Var[Q_{sa}^{min}]<Var[Q_{sa}]$.

Proof. Because $Q^{i}_{sa}$ is a sample mean, its variance is $\sigma^{2}N/n_{sa}$, where $\sigma^{2}$ is the variance of samples for $(s,a)$, and its mean is $Q_{sa}^{*}$ (because it is an unbiased sample average). Consequently, $e_{sa}$ has mean zero and variance $\sigma^{2}N/n_{sa}$. Because $e_{sa}$ is a uniform random variable, which has variance $\frac{1}{3}\tau^{2}$, we know that $\tau=\sqrt{3\sigma^{2}N/n_{sa}}$. Plugging this value into the variance formula in Theorem 1, we get

Var[Q_{sa}^{min}] =\frac{4N{\tau}^{2}}{(N+1)^{2}(N+2)}
=\frac{12N^{2}\sigma^{2}/n_{sa}}{(N+1)^{2}(N+2)}
=\frac{12N^{2}}{(N+1)^{2}(N+2)}Var[Q_{sa}]

because $Var[Q_{sa}]=\sigma^{2}/n_{sa}$ for the sample average $Q_{sa}$ that uses all the samples for one estimator. It is easy to verify that for $N\geq 8$, $Var[Q_{sa}^{min}]<Var[Q_{sa}]$.

 

Appendix B The Convergence Proof of Generalized Q-learning

The convergence proof of Generalized Q-learning is based on Tsitsiklis (1994). The key steps to use this result for Generalized Q-learning include showing that the operator is a contraction and verifying the noise conditions. We first show these two steps in Lemma 2 and Lemma 3. We then use these lemmas to make the standard argument for convergence.

B.1 Problem Setting for Generalized Q-learning

Consider a Markov decision problem defined on a finite state space $\mathcal{S}$. For every state $s\in\mathcal{S}$, there is a finite set $\mathcal{A}$ of possible actions for state $s$ and a set of non-negative scalars $p_{ss^{\prime}}(a)$, $a\in\mathcal{A}$, $s^{\prime}\in\mathcal{S}$, such that $\sum_{s^{\prime}\in\mathcal{S}}p_{ss^{\prime}}(a)=1$ for all $a\in\mathcal{A}$. The scalar $p_{ss^{\prime}}(a)$ is interpreted as the probability of a transition to $s^{\prime}$, given that the current state is $s$ and action $a$ is applied. Furthermore, for every state $s$ and action $a$, there is a random variable $r_{sa}$ which represents the reward if action $a$ is applied at state $s$. We assume that the variance of $r_{sa}$ is finite for every $s$ and $a\in\mathcal{A}$.

A stationary policy is a function $\pi$ defined on $\mathcal{S}$ such that $\pi(s)\in\mathcal{A}$ for all $s\in\mathcal{S}$. Given a stationary policy, we obtain a discrete-time Markov chain $f^{\pi}(t)$ with transition probabilities

\operatorname{Pr}(f^{\pi}(t+1)=s^{\prime}|f^{\pi}(t)=s)=p_{ss^{\prime}}(\pi(s)) \quad (5)

Let $\gamma\in[0,1]$ be a discount factor. For any stationary policy $\pi$ and initial state $s$, the state value $V_{s}^{\pi}$ is defined by

V_{s}^{\pi}=\lim_{T\rightarrow\infty}E\Big[\sum_{t=0}^{T}\gamma^{t}r_{f^{\pi}(t),\pi(f^{\pi}(t))}\Big|f^{\pi}(0)=s\Big] \quad (6)

The optimal state value function $V^{*}$ is defined by

V_{s}^{*}=\sup_{\pi}V_{s}^{\pi},\quad s\in\mathcal{S} \quad (7)

The Markov decision problem is to evaluate the function VV^{*}. Once this is done, an optimal policy is easily determined.

Markov decision problems are easiest when the discount $\gamma$ is strictly smaller than 1. For the undiscounted case ($\gamma=1$), we will assume throughout that there is a reward-free state, say state 1, which is absorbing; that is, $p_{11}(a)=1$ and $r_{1a}=0$ for all $a\in\mathcal{A}$. The objective is then to reach that state at maximum expected reward. We say that a stationary policy is proper if the probability of being at the absorbing state converges to 1 as time converges to infinity; otherwise, we say that the policy is improper.

We define the dynamic programming operator $T:\mathbb{R}^{|\mathcal{S}|}\mapsto\mathbb{R}^{|\mathcal{S}|}$, with components $T_{s}$, by letting

T_{s}(V)=\max_{a\in\mathcal{A}}\Big\{E[r_{sa}]+\gamma\sum_{s^{\prime}\in\mathcal{S}}p_{ss^{\prime}}(a)V_{s^{\prime}}\Big\} \quad (8)

It is well known that if $\gamma<1$, then $T$ is a contraction with respect to the norm $\|\cdot\|_{\infty}$ and $V^{*}$ is its unique fixed point.

For the Generalized Q-learning algorithm, assume that there are $N$ estimators of action values, $Q^{1},\ldots,Q^{N}$. Let $m$ be the cardinality of $\mathcal{S}$ and $n$ be the cardinality of $\mathcal{A}$. We use a discrete index variable $t$ in order to count iterations. Denote $Q^{ij}(t)=Q^{i}(t+j)$. After $t$ iterations, we have a vector $Q(t)\in\mathbb{R}^{w}$ with $w=mnNK$, with components $Q^{ij}_{sa}(t)$, $(s,a)\in\mathcal{S}\times\mathcal{A}$, $i\in\{1,\ldots,N\}$, and $j\in\{0,\ldots,K-1\}$.

By definition, for $j\in\{1,\ldots,K-1\}$, we have

Q^{ij}_{sa}(t+1)=Q^{i,j-1}_{sa}(t). \quad (9)

For $j=0$, we have $Q^{i0}_{sa}=Q^{i}_{sa}$, and we update according to the formula

Q^{i}_{sa}(t+1)=Q^{i}_{sa}(t)+\alpha^{i}_{sa}(t)[Y^{GQ}(t)-Q^{i}_{sa}(t)] \quad (10)

where

Y^{GQ}(t)=r_{sa}+\gamma Q^{GQ}_{f(s,a)}(t). \quad (11)

Here, each $\alpha_{sa}^{i}(t)$ is a nonnegative step-size coefficient which is set to zero for those $(s,a)\in\mathcal{S}\times\mathcal{A}$ and $i\in\{1,\ldots,N\}$ for which $Q_{sa}^{i}$ is not to be updated at the current iteration. Furthermore, $r_{sa}$ is a random sample of the immediate reward if action $a$ is applied at state $s$, and $f(s,a)$ is a random successor state which is equal to $s^{\prime}$ with probability $p_{ss^{\prime}}(a)$. Finally, $Q^{GQ}_{s}(t)$ is defined as

Q^{GQ}_{s}(t)=G(Q_{s}(t)) \quad (12)

where $G$ is a mapping from $\mathbb{R}^{nNK}$ to $\mathbb{R}$. It is understood that all random samples that are drawn in the course of the algorithm are drawn independently.

Since for $j\in\{1,\ldots,K-1\}$ we just preserve the currently available action-values, we only focus on the case $j=0$ in the sequel. Let $F$ be the mapping from $\mathbb{R}^{mnNK}$ into $\mathbb{R}^{mnN}$ with components $F^{i}_{sa}$ defined by

F^{i}_{sa}(Q)=E[r_{sa}]+\gamma E[Q^{GQ}_{f(s,a)}] \quad (13)

and note that

E[Q^{GQ}_{f(s,a)}]=\sum_{s^{\prime}\in\mathcal{S}}p_{ss^{\prime}}(a)Q^{GQ}_{s^{\prime}} \quad (14)

If $F^{i}_{sa}(Q(t))=Q^{i}_{sa}(t)$, we can do $K$ more updates such that $Q_{a}^{ij}(t)=Q_{a}^{kl}(t)$, $\forall i,k\in\{1,\ldots,N\}$, $\forall j,l\in\{0,\ldots,K-1\}$, and $\forall a\in\mathcal{A}$.

In view of Equation 13, Equation 10 can be written as

Q^{i}_{sa}(t+1)=Q^{i}_{sa}(t)+\alpha^{i}_{sa}(t)[F^{i}_{sa}(Q(t))-Q^{i}_{sa}(t)+w^{i}_{sa}(t)] \quad (15)

where

w^{i}_{sa}(t)=r_{sa}-E[r_{sa}]+\gamma(Q^{GQ}_{f(s,a)}(t)-E[Q^{GQ}_{f(s,a)}(t)|\mathcal{F}(t)]) \quad (16)

and $\mathcal{F}(t)$ represents the history of the algorithm during the first $t$ iterations. The expectation in the expression $E[Q^{GQ}_{f(s,a)}(t)|\mathcal{F}(t)]$ is with respect to $f(s,a)$.

B.2 Key Lemmas and the Proofs

Lemma 2

Assume Assumption 1 holds for the function $G$ in Generalized Q-learning. Then we have

E[w_{sa}^{2}(t)|\mathcal{F}(t)]\leq Var(r_{sa})+\max_{i\in\{1,\ldots,N\}}\max_{\tau\leq t}\max_{(s,a)\in\mathcal{S}\times\mathcal{A}}{|Q^{i}_{sa}(\tau)|}^{2}.

Proof. Under Assumption 1, the conditional variance of $Q^{GQ}_{f(s,a)}$ given $\mathcal{F}(t)$ is bounded above by the largest possible value that this random variable could take, which is $\max_{i\in\{1,\ldots,N\}}\max_{j\in\{0,\ldots,K-1\}}\max_{(s,a)\in\mathcal{S}\times\mathcal{A}}{|Q^{i}_{sa}(t-j)|}^{2}$. We then take the conditional variance of both sides of Equation 16, to obtain

E[w_{sa}^{2}(t)|\mathcal{F}(t)]\leq Var(r_{sa})+\max_{i\in\{1,\ldots,N\}}\max_{\tau\leq t}\max_{(s,a)\in\mathcal{S}\times\mathcal{A}}{|Q^{i}_{sa}(\tau)|}^{2} \quad (17)

We have assumed here that $r_{sa}$ is independent of $f(s,a)$. If it is not, the right-hand side of the last inequality must be multiplied by 2, but the conclusion does not change.

Lemma 3

$F$ is a contraction mapping in each of the following cases:

  (i) $\gamma<1$.

  (ii) $\gamma=1$ and $\forall a\in\mathcal{A},\ Q^{i}_{s_{1}a}(t=0)=0$, where $s_{1}$ is an absorbing state and all policies are proper.

Proof. For discounted problems ($\gamma<1$), Equation 13 easily yields, $\forall Q,Q^{\prime}$,

|F^{i}_{sa}(Q)-F^{i}_{sa}(Q^{\prime})|\leq\gamma\max_{s\in\mathcal{S}}|Q^{GQ}_{s}-{Q^{\prime}_{s}}^{GQ}| \quad (18)

In particular, $F$ is a contraction mapping with respect to the maximum norm $\|\cdot\|_{\infty}$.

For undiscounted problems ($\gamma=1$), our assumptions on the absorbing state $s_{1}$ imply that the update equation for $Q^{i}_{s_{1}a}$ degenerates to $Q^{i}_{s_{1}a}(t+1)=Q^{i}_{s_{1}a}(t)$ for all $t$. We will be assuming in the sequel that $Q^{i}_{s_{1}a}$ is initialized at zero. This leads to an equivalent description of the algorithm in which the mappings $F^{i}_{sa}$ of Equation 13 are replaced by mappings $\tilde{F}^{i}_{sa}$ satisfying $\tilde{F}^{i}_{sa}=F^{i}_{sa}$ if $s\neq s_{1}$ and $\tilde{F}^{i}_{s_{1}a}(Q)=0$ for all $a\in\mathcal{A}$, $i\in\{1,\ldots,N\}$, and $Q\in\mathbb{R}^{n}$.

Let us consider the special case where every policy is proper. By Proposition 2.2 in the work of Bertsekas & Tsitsiklis (1996), there exists a vector $v>0$ such that $T$ is a contraction with respect to the norm $\|\cdot\|_{v}$. In fact, a close examination of the proof of this Proposition 2.2 shows that it is easily extended to show that the mapping $\tilde{F}$ (with components $\tilde{F}^{i}_{sa}$) is a contraction with respect to the norm $\|\cdot\|_{z}$, where $z^{i}_{sa}=v_{s}$ for every $a\in\mathcal{A}$ and $i\in\{1,\ldots,N\}$.

B.3 Models and Assumptions

In this section, we describe the algorithmic model to be employed and state some assumptions that will be imposed.

The algorithm consists of noisy updates of a vector xnx\in\mathbb{R}^{n}, for the purpose of solving a system of equations of the form F(x)=xF(x)=x. Here FF is assumed to be a mapping from n\mathbb{R}^{n} into itself. Let F1,,FnF_{1},\dots,F_{n}: n\mathbb{R}^{n}\mapsto\mathbb{R} be the corresponding component mappings; that is, F(x)=(F1(x),,Fn(x))F(x)=(F_{1}(x),\dots,F_{n}(x)) for all xnx\in\mathbb{R}^{n}.

Let 𝒩\mathcal{N} be the set of non-negative integers. We employ a discrete ”time” variable tt, taking values in 𝒩\mathcal{N}. This variable need not have any relation with real time; rather, it is used to index successive updates. Let x(t)x(t) be the value of the vector xx at time tt and let xi(t)x_{i}(t) denote its iith component. Let TiT^{i} be an infinite subset of 𝒩\mathcal{N} indicating the set of times at which an update of xix_{i} is performed. We assume that

xi(t+1)=xi(t),tTix_{i}(t+1)=x_{i}(t),\quad t\notin T^{i} (19)

Regarding the times at which $x_{i}$ is updated, we postulate an update equation of the form

$x_{i}(t+1)=x_{i}(t)+\alpha_{i}(t)\big(F_{i}(x^{i}(t))-x_{i}(t)+w_{i}(t)\big),\quad t\in T^{i}$ (20)

Here, $\alpha_{i}(t)$ is a step-size parameter belonging to $[0,1]$, $w_{i}(t)$ is a noise term, and $x^{i}(t)$ is a vector of possibly outdated components of $x$. In particular, we assume that

$x^{i}(t)=\big(x_{1}(\tau_{1}^{i}(t)),\ldots,x_{n}(\tau_{n}^{i}(t))\big),\quad t\in T^{i}$ (21)

where each $\tau_{j}^{i}(t)$ is an integer satisfying $0\leq\tau_{j}^{i}(t)\leq t$. If no information is outdated, we have $\tau_{j}^{i}(t)=t$ and $x^{i}(t)=x(t)$ for all $t$; the reader may wish to think primarily of this case. For an interpretation of the general case, see Bertsekas & Tsitsiklis (1989). In order to bring Equations 19 and 20 into a unified form, it is convenient to assume that $\alpha_{i}(t)$, $w_{i}(t)$, and $\tau_{j}^{i}(t)$ are defined for every $i$, $j$, and $t$, but that $\alpha_{i}(t)=0$ and $\tau_{j}^{i}(t)=t$ for $t\notin T^{i}$.
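As a concrete, if stylized, reading of Equations 19 to 21, the sketch below runs the asynchronous iteration on a synthetic affine mapping, assuming no outdated information ($\tau_{j}^{i}(t)=t$); the mapping, the noise scale, and the step-size schedule are illustrative assumptions, not quantities defined in the paper.

import numpy as np

# A minimal sketch of the asynchronous iteration (19)-(21) with no outdated information,
# i.e. tau_j^i(t) = t. The affine mapping F, the noise, and the step-sizes are illustrative.
rng = np.random.default_rng(1)
n = 4
A = 0.5 * np.eye(n)                            # F(x) = A x + b is a contraction for this A
b = rng.normal(size=n)
F = lambda x: A @ x + b
x = np.zeros(n)
visits = np.zeros(n)                           # per-component visit counts for step-sizes

for t in range(20000):
    i = rng.integers(n)                        # t belongs to T^i for this single component i
    visits[i] += 1
    alpha = 1.0 / visits[i]                    # step-sizes satisfying the usual conditions
    w = rng.normal(scale=0.1)                  # zero-mean noise: E[w_i(t) | F(t)] = 0
    x[i] += alpha * (F(x)[i] - x[i] + w)       # Equation 20; other components obey Equation 19

x_star = np.linalg.solve(np.eye(n) - A, b)     # the unique fixed point of F
print(np.max(np.abs(x - x_star)))              # small: x(t) approaches x*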

We will now continue with our assumptions. All variables introduced so far ($x(t)$, $\tau_{j}^{i}(t)$, $\alpha_{i}(t)$, $w_{i}(t)$) are viewed as random variables defined on a probability space $(\Omega,\mathcal{F},\mathcal{P})$, and the assumptions deal primarily with the dependencies between these random variables. Our assumptions also involve an increasing sequence $\{\mathcal{F}(t)\}_{t=0}^{\infty}$ of subfields of $\mathcal{F}$. Intuitively, $\mathcal{F}(t)$ is meant to represent the history of the algorithm up to and including the point at which the step-sizes $\alpha_{i}(t)$ for the $t$th iteration are selected, but just before the noise term $w_{i}(t)$ is generated. Also, the measure-theoretic statement that "a random variable $Z$ is $\mathcal{F}(t)$-measurable" has the intuitive meaning that $Z$ is completely determined by the history represented by $\mathcal{F}(t)$.

The first assumption, which is the same as the total asynchronism assumption of Bertsekas & Tsitsiklis (1989), guarantees that even though information can be outdated, any old information is eventually discarded.

Assumption 3

For any $i$ and $j$, $\lim_{t\rightarrow\infty}\tau_{j}^{i}(t)=\infty$, with probability 1.

Our next assumption refers to the statistics of the random variables involved in the algorithm.

Assumption 4

Let $\{\mathcal{F}(t)\}_{t=0}^{\infty}$ be an increasing sequence of subfields of $\mathcal{F}$.

(i) $x(0)$ is $\mathcal{F}(0)$-measurable.

(ii) For every $i$ and $t$, $w_{i}(t)$ is $\mathcal{F}(t+1)$-measurable.

(iii) For every $i$, $j$, and $t$, $\alpha_{i}(t)$ and $\tau_{j}^{i}(t)$ are $\mathcal{F}(t)$-measurable.

(iv) For every $i$ and $t$, we have $E[w_{i}(t)\,|\,\mathcal{F}(t)]=0$.

(v) There exist (deterministic) constants $A$ and $B$ such that

$E[w_{i}^{2}(t)\,|\,\mathcal{F}(t)]\leq A+B\max_{j}\max_{\tau\leq t}|x_{j}(\tau)|^{2},\quad\forall i,t$ (22)

Assumption 4 allows for the possibility of deciding whether to update a particular component $x_{i}$ at time $t$ based on the past history of the process. In this case, the step-size $\alpha_{i}(t)$ becomes a random variable. However, part (iii) of the assumption requires that the choice of the components to be updated must be made without anticipatory knowledge of the noise variables $w_{i}$ that have not yet been realized.

Finally, we introduce a few alternative assumptions on the structure of the iteration mapping $F$. We first need some notation: if $x,y\in\mathbb{R}^{n}$, the inequality $x\leq y$ is to be interpreted as $x_{i}\leq y_{i}$ for all $i$. Furthermore, for any positive vector $v=(v_{1},\dots,v_{n})$, we define a norm $\|\cdot\|_{v}$ on $\mathbb{R}^{n}$ by letting

$\|x\|_{v}=\max_{i}\frac{|x_{i}|}{v_{i}},\quad x\in\mathbb{R}^{n}$ (23)

Notice that in the special case where all components of $v$ are equal to $1$, $\|\cdot\|_{v}$ is the same as the maximum norm $\|\cdot\|_{\infty}$.
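The weighted maximum norm of Equation 23 is straightforward to compute; the helper below is a minimal sketch (the function name is our own, not part of the paper).

import numpy as np

def weighted_max_norm(x, v):
    # ||x||_v = max_i |x_i| / v_i for a positive weight vector v (Equation 23).
    x, v = np.asarray(x, dtype=float), np.asarray(v, dtype=float)
    assert np.all(v > 0), "v must be a positive vector"
    return float(np.max(np.abs(x) / v))

# With all weights equal to 1, this reduces to the maximum norm:
print(weighted_max_norm([1.0, -3.0, 2.0], [1.0, 1.0, 1.0]))  # 3.0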

Assumption 5

Let $F:\mathbb{R}^{n}\mapsto\mathbb{R}^{n}$.

(i) The mapping $F$ is monotone; that is, if $x\leq y$, then $F(x)\leq F(y)$.

(ii) The mapping $F$ is continuous.

(iii) The mapping $F$ has a unique fixed point $x^{*}$.

(iv) If $e\in\mathbb{R}^{n}$ is the vector with all components equal to $1$, and $r$ is a positive scalar, then

$F(x)-re\leq F(x-re)\leq F(x+re)\leq F(x)+re$ (24)
Assumption 6

There exists a vector $x^{*}\in\mathbb{R}^{n}$, a positive vector $v$, and a scalar $\beta\in[0,1)$, such that

$\|F(x)-x^{*}\|_{v}\leq\beta\|x-x^{*}\|_{v},\quad\forall x\in\mathbb{R}^{n}$ (25)
Assumption 7

There exists a positive vector $v$, a scalar $\beta\in[0,1)$, and a scalar $D$ such that

$\|F(x)\|_{v}\leq\beta\|x\|_{v}+D,\quad\forall x\in\mathbb{R}^{n}$ (26)
Assumption 8

There exists at least one proper stationary policy. Every improper stationary policy yields infinite expected cost for at least one initial state.

Theorem 3

Let Assumptions 3, 4, 2, and 7 hold. Then the sequence $x(t)$ is bounded with probability $1$.

Theorem 4

Let Assumptions 3, 4, 2, and 5 hold. Furthermore, suppose that $x(t)$ is bounded with probability $1$. Then $x(t)$ converges to $x^{*}$ with probability $1$.

Theorem 5

Let Assumptions 3, 4, 2, and 6 hold. Then $x(t)$ converges to $x^{*}$ with probability $1$.

Detailed proofs of Theorems 3, 4, and 5 can be found in Bertsekas & Tsitsiklis (1989).

B.4 Proof of Theorem 2

We first restate Theorem 2 and then give the proof.

Theorem 2 Assume a finite MDP $(\mathcal{S},\mathcal{A},\mathrm{P},R)$ and that Assumptions 1 and 2 hold. Then the action-value functions in Generalized Q-learning, using the tabular update in Equation (3), converge to the optimal action-value function with probability $1$, in each of the following cases:

(i) $\gamma<1$.

(ii) $\gamma=1$ and $Q^{i}_{s_{1}a}(t=0)=0$ for all $a\in\mathcal{A}$, where $s_{1}$ is an absorbing state, and all policies are proper.

Proof. We first check that Assumptions 3, 4, 2, and 6 from Section B.3 are satisfied, and then apply Theorem 5 to Generalized Q-learning.

Assumption 3 is satisfied in the special case where $\tau^{i}_{j}(t)=t$, which is what was implicitly assumed in Equation 10, but it can also be satisfied even if we allow for outdated information.

Regarding Assumption 4, parts (i) and (ii) of the assumption are automatically valid. Part (iii) is quite natural: in particular, it assumes that the required samples are generated after we decide which components to update during the current iteration. Part (iv) is automatic from Equation 16. Part (v) is satisfied by Lemma 2.

Assumption 2 needs to be imposed on the step-sizes employed by the Generalized Q-learning algorithm. This assumption is standard for stochastic approximation algorithms. In particular, it requires that every state-action pair $(s,a)$ is simulated an infinite number of times.

By Lemma 3, $F$ is a contraction mapping, so Assumption 6 is satisfied.

All assumptions required by Theorem 5 are thus verified, and convergence follows from Theorem 5.
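To give a concrete picture of the tabular setting that Theorem 2 covers, the sketch below maintains $N$ action-value tables and updates one randomly chosen table per step toward a maxmin-style target, which is one possible choice of $Q^{GQ}$. The environment interface, the constant step-size, and these specific choices are illustrative assumptions, not a transcription of Equation 3; in particular, Theorem 2 requires step-sizes satisfying Assumption 2 rather than a constant one.

import numpy as np

# A minimal tabular sketch in the spirit of Theorem 2: N estimators, a maxmin target as
# one possible choice of Q^GQ, and one randomly chosen table updated per step. The env
# interface (reset/step, nS, nA) and the constant step-size are illustrative assumptions.
def generalized_q_learning(env, N=2, gamma=0.99, alpha=0.1, eps=0.1, episodes=500, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N, env.nS, env.nA))          # N tabular action-value estimators
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            q_comb = Q.min(axis=0)             # combined estimate (maxmin choice of Q^GQ)
            a = rng.integers(env.nA) if rng.random() < eps else int(q_comb[s].argmax())
            s_next, reward, done = env.step(a)
            target = reward if done else reward + gamma * q_comb[s_next].max()
            i = rng.integers(N)                # select one table to update toward the target
            Q[i, s, a] += alpha * (target - Q[i, s, a])
            s = s_next
    return Q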

 

Appendix C Additional Empirical Results

C.1 MDP results

A comparison of three algorithms on the simple MDP of Figure 1 with different values of $\mu$ is shown in Figure 6. For $\mu=+0.1$, the learning curves of the action value $Q(A,\text{Left})$ are shown in (a); here, the true action value $Q(A,\text{Left})$ is $+0.1$. For $\mu=-0.1$, the learning curves of $Q(A,\text{Left})$ are shown in (b); the true action value is $-0.1$. All results were averaged over $5{,}000$ runs.

Figure 6: MDP results. (a) $\mu=+0.1$; (b) $\mu=-0.1$.

C.2 Mountain Car results

Figure 7: Mountain Car results. (a) $\text{Reward}\sim\mathcal{N}(-1,0)$; (b) $\text{Reward}\sim\mathcal{N}(-1,1)$; (c) $\text{Reward}\sim\mathcal{N}(-1,10)$; (d) $\text{Reward}\sim\mathcal{N}(-1,50)$.

A comparison of four algorithms on Mountain Car under different reward settings is shown in Figure 7. All experimental results were averaged over $100$ runs. Note that for reward variance $\sigma^{2}=50$, both Q-learning and Averaged Q-learning fail to reach the goal position within $5{,}000$ steps, so there are no learning curves for these two algorithms in Figure 7(d).
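The noisy-reward settings of Figure 7 amount to replacing the per-step reward with a Gaussian sample; the wrapper below is a minimal sketch of this setup, assuming a Gym-style environment with a 4-tuple step interface, and is not the exact experiment code in the released repository. For example, passing variance=50 corresponds to the setting of panel (d).

import numpy as np

# A minimal sketch of the Figure 7 reward settings: the per-step reward is replaced by a
# sample from N(mean, variance). A Gym-style 4-tuple step interface is assumed here.
class NoisyRewardWrapper:
    def __init__(self, env, mean=-1.0, variance=10.0, seed=0):
        self.env = env
        self.mean = mean
        self.std = float(np.sqrt(variance))
        self.rng = np.random.default_rng(seed)

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, _, done, info = self.env.step(action)
        noisy_reward = self.rng.normal(self.mean, self.std)   # Reward ~ N(mean, variance)
        return obs, noisy_reward, done, info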

C.3 Benchmark Environment results

The sensitivity analysis results for the seven benchmark environments are shown in Figure 8.

Figure 8: Sensitivity analysis on (a) Catcher, (b) Pixelcopter, (c) Lunarlander, (d) Asterix, (e) Space Invaders, (f) Breakout, and (g) Seaquest.