
Yaroslav-the-Wise Novgorod State University, Velikiy Novgorod 173003, Russia. Email: [email protected]

Stochastic differential equations for limiting description of UCB rule for Gaussian multi-armed bandits*
*The reported study was funded by RFBR, project number 20-01-00062.

Sergey Garbar (ORCID: 0000-0002-5205-5252)
Abstract

We consider the upper confidence bound (UCB) strategy for Gaussian multi-armed bandits with known control horizon size $N$ and build its limiting description with a system of stochastic differential equations and ordinary differential equations. Rewards for the arms are assumed to have unknown expected values and known variances. To verify the validity of the obtained description, a set of Monte-Carlo simulations was performed for the case of close distributions of rewards, when mean rewards differ by a magnitude of order $N^{-1/2}$, as this case yields the highest normalized regret. The minimal control horizon size for which the normalized regret is not noticeably larger than the maximum possible was estimated.

Keywords:
Gaussian multi-armed bandit · UCB · stochastic differential equations · limiting description

1 Introduction

We consider a multi-armed bandit (MAB) problem. A MAB can be viewed as a slot machine with $J$ arms. Each of the arms can be selected for play, which yields some random income (reward). It is assumed that the number of times the gambler can play is known beforehand. This number $N$ is called the control horizon size. Formally, a MAB is a controlled stochastic process $\xi(n)$, $n=1,2,\ldots,N$, whose value $\xi(n)$ depends only on the chosen arm. The gambler's task is to maximize the total expected cumulative reward during the time of control.

MABs and corresponding algorithms are thoroughly analyzed in [13]. This is a reinforcement learning problem that exemplifies the exploration–exploitation tradeoff dilemma, so it is also studied in machine learning [1, 3]. This problem is also known as the problem of adaptive control in a random environment [15] and the problem of expedient behavior [16]. MABs have also been used to model problems such as managing research projects in a large organization like a science foundation or a pharmaceutical company [9, 2].

Further we consider a Gaussian MAB, in which rewards are normally distributed with probability density functions

f_{l}\left(x\right)=\left(2\pi D_{l}\right)^{-1/2}e^{-\frac{\left(x-m_{l}\right)^{2}}{2D_{l}}},\quad l=1,2,\ldots,J.

We assume that the mean rewards $m_{1},m_{2},\ldots,m_{J}$ are unknown. In [11, 6] it is shown that incorrect values of the variances only slightly affect overall rewards, so the variances can be estimated during the initial steps of control. For this reason we assume that the variances of the rewards $D_{1},D_{2},\ldots,D_{J}$ are known. Therefore, the considered MAB can be described by the vector parameter $\theta=(m_{1},\ldots,m_{J})$.

A Gaussian MAB can be useful in a batch processing setting [14, 4, 10]. In that scenario the gambler can only change the choice of the arm after having used it a given number of times (a batch). By the central limit theorem, the sum of rewards for a batch has an approximately normal distribution (if the batch size is relatively large) for a wide range of distributions of single-step rewards (provided the variances of rewards are finite). Batch processing can also be performed in parallel.

When some control strategy $\sigma$ is used, the expected cumulative reward will be lower than the maximum possible, because some steps are spent exploring the distributions of the arms' rewards. The maximal cumulative reward would be obtained if the most lucrative arm were known and used at every step of the control. The expected difference between the maximal possible and the obtained rewards is called the regret and can be expressed as

L_{N}(\sigma,\theta)=E_{\sigma,\theta}\left(\sum_{n=1}^{N}\left(\max(m_{1},\ldots,m_{J})-\xi(n)\right)\right).

$E_{\sigma,\theta}(\cdot)$ denotes the expected value calculated with respect to the measure generated by strategy $\sigma$ and parameter $\theta$. The goal of control is therefore to minimize the regret.

The regret depends on the horizon size. To compare strategies for different horizon sizes it is reasonable to consider the scaled (normalized) regret

\hat{L}_{N}(\sigma,\theta)=(DN)^{-1/2}L_{N}(\sigma,\theta),

where $D=\max_{l=1,\ldots,J}{D_{l}}$.
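
For later use, note that the regret can equivalently be written through the numbers of times $n_{l}$ that each arm is used over the horizon; this is a standard identity that follows directly from the reward model and is stated here for convenience:

L_{N}(\sigma,\theta)=E_{\sigma,\theta}\left(\sum_{l=1}^{J}n_{l}\left(\max(m_{1},\ldots,m_{J})-m_{l}\right)\right).

This form is convenient for estimating the regret numerically from the pull counts alone.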

The single-step income at step $n$ can be expressed as

\xi(n)=m_{l}+\sqrt{D_{l}}\,\eta_{l,n},

where $\eta_{l,n}$ is a standard normal random variable, $l=1,2,\ldots,J$, $n=1,2,\ldots,N$.

Then the cumulative reward from using the $l$-th arm up to step $n$ can be expressed as

X_{l}(n)=n_{l}m_{l}+\sqrt{n_{l}D_{l}}\,\eta,

where $\sqrt{n_{l}D_{l}}\eta$ is a normal random variable with zero mean and variance $n_{l}D_{l}$, obtained by adding up all of the rewards for the arm, and $n_{l}$ is the number of times the $l$-th arm was used, $l=1,2,\ldots,J$; $n_{1}+\ldots+n_{J}=n$.

The UCB strategy (as described by Lai [12]) with parameter $a$ prescribes selecting the arm that maximizes the value

U_{l}(n)=\dfrac{X_{l}(n)}{n_{l}}+\dfrac{\sqrt{aD_{l}\log(n/n_{l})}}{\sqrt{n_{l}}},\quad l=1,2,\ldots,J;\ n=1,2,\ldots,N.

The strategy prescribes using each of the arms once at the beginning of control to form the initial estimates of the mean rewards.

The presence of the slowly growing term $\sqrt{aD_{l}\log(n/n_{l})/n_{l}}$ ensures that every arm has to be used from time to time, even if the estimated mean reward $X_{l}(n)/n_{l}$ from its usage so far has, for some reason, been lower than for the other arms. That is how the UCB strategy negotiates the exploration–exploitation dilemma: the gambler strives to maximize his or her reward by using the arm with the highest estimated mean, but also collects information about the less explored arms.
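
As an illustration, below is a minimal Python sketch of this rule for a Gaussian bandit with known variances. The function name ucb_gaussian and its interface are illustrative choices made for the sketch, not part of the strategy's definition above.

import numpy as np

def ucb_gaussian(m, D, N, a=1.0, rng=None):
    # Play a Gaussian bandit with true means m and known variances D for N
    # steps using the rule U_l(n) = X_l/n_l + sqrt(a*D_l*log(n/n_l)/n_l).
    # Returns the per-arm pull counts n_l and cumulative rewards X_l.
    rng = np.random.default_rng() if rng is None else rng
    m, D = np.asarray(m, float), np.asarray(D, float)
    J = len(m)
    X = np.zeros(J)            # cumulative rewards X_l
    n_l = np.zeros(J)          # pull counts n_l
    for l in range(J):         # use each arm once to form initial estimates
        X[l] += rng.normal(m[l], np.sqrt(D[l]))
        n_l[l] += 1
    for n in range(J + 1, N + 1):
        U = X / n_l + np.sqrt(a * D * np.log(n / n_l) / n_l)
        l = int(np.argmax(U))  # arm with the largest upper confidence bound
        X[l] += rng.normal(m[l], np.sqrt(D[l]))
        n_l[l] += 1
    return n_l, X

Averaging the quantity $\sum_{l}n_{l}(\max_{i}m_{i}-m_{l})$ over many independent runs and dividing by $(DN)^{1/2}$ gives a Monte-Carlo estimate of the normalized regret.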

We aim to build a limiting description (with a system of stochastic differential equations) of the UCB strategy for a Gaussian multi-armed bandit when $N\to\infty$.

2 System of stochastic differential equations to describe UCB strategy for Gaussian two-armed bandit

First, we give a limiting description for the case of a two-armed bandit.

In [5, 7] an invariant description of the UCB strategy for a MAB was given. The invariant description scales the control horizon to unit size by scaling the step numbers down proportionally to $N$. If $N\rightarrow\infty$, the step value becomes infinitesimally small.

The differentials of the cumulative reward from using the $l$-th arm can be expressed as

dX_{l}=m_{l}dt_{l}+\sqrt{D_{l}}dW_{l},\ l=1,2.

Here $W_{1},W_{2}$ are independent Wiener processes, and $t_{l}$ is a continuous variable that corresponds to the usage ratio of each arm. Each of these equations corresponds to a Wiener process with a constant drift. At each moment only one of the values $t_{1},t_{2}$ can increase, as only one of the arms is used. Therefore, we consider two areas: in the first, $t_{1}$ increases with time $t$; in the second, $t_{2}$ increases. We will use the indicators $I_{1}(t),I_{2}(t)$ for convenience.

If the first arm is chosen, then

\begin{cases}dt_{1}=dt,\\ dt_{2}=0,\end{cases}\quad\text{and}\quad\begin{cases}I_{1}(t)=1,\\ I_{2}(t)=0.\end{cases}

If the second arm is chosen, then

\begin{cases}dt_{1}=0,\\ dt_{2}=dt,\end{cases}\quad\text{and}\quad\begin{cases}I_{1}(t)=0,\\ I_{2}(t)=1.\end{cases}

Note that $I_{2}(t)=1-I_{1}(t)$ and $dt_{2}=dt-dt_{1}$. Also recall that $t_{2}=t-t_{1}$: one and only one arm is chosen at every moment.

Each of the areas is determined according to the chosen strategy. The UCB rule can be rewritten as

U_{l}\left(t,t_{l}\right)=\dfrac{X_{l}\left(t_{l}\right)}{t_{l}}+\dfrac{\sqrt{aD_{l}\log{\dfrac{t}{t_{l}}}}}{\sqrt{t_{l}}},\ l=1,2.

The area where the first arm is chosen will be defined by the condition $U_{1}(t,t_{1})>U_{2}(t,t_{2})$. Otherwise the second arm is chosen. To write this condition formally we use the Heaviside step function $H(\cdot)$ notation. Its value is zero for negative arguments and one for positive arguments.

For the UCB rule the indicator $I_{1}(t)$ will be a function of the arguments $t,t_{1},X_{1},X_{2}$:

I_{1}\left(t,t_{1},X_{1},X_{2}\right)=H\left(\dfrac{X_{1}}{t_{1}}+\dfrac{\sqrt{aD_{1}\log{\dfrac{t}{t_{1}}}}}{\sqrt{t_{1}}}-\dfrac{X_{2}}{t-t_{1}}-\dfrac{\sqrt{aD_{2}\log{\dfrac{t}{t-t_{1}}}}}{\sqrt{t-t_{1}}}\right).

Using this notation, we write the system of two Itô stochastic differential equations and one ordinary differential equation that describes the UCB strategy for the Gaussian two-armed bandit as

\begin{cases}dX_{1}=I_{1}m_{1}dt+\sqrt{D_{1}}dW_{1},\\ dX_{2}=\left(1-I_{1}\right)m_{2}dt+\sqrt{D_{2}}dW_{2},\\ dt_{1}=I_{1}dt.\end{cases}
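
The following Python sketch integrates this system with the Euler–Maruyama method. It reflects one illustrative reading, in which each $W_{l}$ is treated as a Wiener process in the arm's own time $t_{l}$ (so its increment is drawn only on steps where that arm is chosen), and a small initial interval stands in for the initial plays of both arms; these choices, like the function name em_two_armed, are assumptions made for the sketch rather than prescriptions of the system above.

import numpy as np

def em_two_armed(m1, m2, D1, D2, a=1.0, h=1e-3, rng=None):
    # Euler-Maruyama integration of the limiting system on t in (t0, 1].
    # Returns the final usage ratio t1 of the first arm and the cumulative
    # rewards X1, X2.  Assumption: dW_l is an increment in the arm's own
    # time t_l, i.e. it is nonzero only on steps where that arm is chosen.
    rng = np.random.default_rng() if rng is None else rng
    t0 = 2 * h                  # small initial interval: both arms "used" once
    t, t1 = t0, t0 / 2.0
    X1 = m1 * t1 + np.sqrt(D1 * t1) * rng.standard_normal()
    X2 = m2 * (t - t1) + np.sqrt(D2 * (t - t1)) * rng.standard_normal()
    while t < 1.0:
        t2 = t - t1
        U1 = X1 / t1 + np.sqrt(a * D1 * np.log(t / t1) / t1)
        U2 = X2 / t2 + np.sqrt(a * D2 * np.log(t / t2) / t2)
        I1 = 1.0 if U1 > U2 else 0.0      # Heaviside indicator I_1
        dW1, dW2 = np.sqrt(h) * rng.standard_normal(2)
        X1 += I1 * (m1 * h + np.sqrt(D1) * dW1)
        X2 += (1.0 - I1) * (m2 * h + np.sqrt(D2) * dW2)
        t1 += I1 * h
        t += h
    return t1, X1, X2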

3 Simulation results

A series of Monte-Carlo simulations was performed to verify how well the obtained system of equations describes the behavior of the Gaussian two-armed bandit when the UCB strategy is used. Without loss of generality, we consider a Gaussian two-armed bandit with zero mean reward for the first arm, $m_{1}=0$, and observe how the normalized regret depends on the value of the mean reward for the second arm. We also assume $D_{1}=D_{2}=D=1$: in [7] it is shown that the maximum normalized regret is observed when the variances of the arm rewards are equal. Finally, we set $a=1$.

Further we consider the case of “close” distributions of rewards, since this is when the highest values of regret are observed [17]. Its defining feature is that the difference of mean rewards is of order $N^{-1/2}$:

\left\{m_{l}=m+c_{l}\sqrt{D/N};\ m\in\mathbb{R},\ \left|c_{l}\right|\leq C<\infty,\ l=1,2\right\}.

Without loss of generality we set $c_{1}=0$.
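
Combining this parameterization with the pull-count form of the regret given in the introduction shows that the normalized regret depends only on the usage frequencies $n_{l}/N$ (this reformulation is added here for clarity):

\hat{L}_{N}(\sigma,\theta)=E_{\sigma,\theta}\left(\sum_{l=1}^{2}\frac{n_{l}}{N}\left(\max(c_{1},c_{2})-c_{l}\right)\right),

which with $c_{1}=0$ and $c_{2}>0$ reduces to $E_{\sigma,\theta}\left(c_{2}\,n_{1}/N\right)$.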

For the Gaussian two-armed bandit simulations, we consider horizon sizes $N=200$ and $N=400$. Figure 1 contains the plots of normalized regret vs different values of $c_{2}$; data for the plots are averaged over 10000 simulations. Figure 1 also shows the normalized regrets calculated from numerical solutions of the presented stochastic differential equations for different values of $c_{2}$, averaged over 10000 simulations. We used the Euler–Maruyama method with step sizes of $10^{-3}$ and $5\cdot 10^{-4}$.

Figure 1: Normalized regret for various differences of mean rewards (parameterized by $c_{2}$), obtained by Monte-Carlo simulations and as a solution of the system of stochastic differential equations

We see that the normalized regret is about the same for different horizon sizes, and that the obtained system of equations gives a fitting description of the UCB strategy. We note that the maximal normalized regret in these simulations (approximately 0.73) is observed at $c_{2}\approx 3.6$.

Having the limiting description allows us to answer the question of at what horizon size $N$ the normalized regret stops being noticeably larger than the maximal normalized regret found above. This question is important because the batch version of the algorithm (reported in [5, 7]) allows the batch sizes to vary. Parallel processing is possible in that case, and it can result in shorter processing times.

Figure 2 shows the maximum normalized regret (in the domain of close distributions of rewards) vs the control horizon size (the left plot shows $N\in[3,100]$, the right plot shows $N\in[36,100]$). Data are averaged over 10000 simulations.

Figure 2: Maximum normalized regret vs the control horizon size $N$

We see that for $N\geq 45$ the normalized regret exceeds the value found via the limiting description by no more than $2\%$. That means that for batch processing (including parallel processing), the batch size may be chosen fairly large as long as the number of processed batches is not too small (i.e., about 50 or more).

4 System of stochastic differential equations to describe UCB strategy for Gaussian MAB

Further we obtain a limiting description with a system of stochastic differential equations for the UCB strategy for a Gaussian multi-armed bandit with $J$ arms, using reasoning similar to the above. The considered MAB has unknown mean rewards $m_{1},\ldots,m_{J}$ and known variances of rewards $D_{1},\ldots,D_{J}$.

First, we express $t_{J}$ as

t_{J}=t-\sum_{i=1}^{J-1}t_{i}.

The indicator for an arm equals one if the corresponding upper confidence bound is greater than the confidence bounds of all other arms. We express it as a product of step functions

I_{l}\left(t,t_{1},\ldots,t_{J-1},X_{1},\ldots,X_{J}\right)=\prod_{\substack{i=1\\ i\neq l}}^{J}H\left(\dfrac{X_{l}}{t_{l}}+\dfrac{\sqrt{aD_{l}\log\dfrac{t}{t_{l}}}}{\sqrt{t_{l}}}-\dfrac{X_{i}}{t_{i}}-\dfrac{\sqrt{aD_{i}\log\dfrac{t}{t_{i}}}}{\sqrt{t_{i}}}\right),

for $l=1,\ldots,J-1$, and

I_{J}\left(t,t_{1},\ldots,t_{J-1},X_{1},\ldots,X_{J}\right)=1-\sum_{i=1}^{J-1}I_{i}\left(t,t_{1},\ldots,t_{J-1},X_{1},\ldots,X_{J}\right).

Then the system of $J$ stochastic differential equations and $J-1$ ordinary differential equations that describes the UCB strategy takes the form

\begin{cases}dX_{1}=I_{1}m_{1}dt+\sqrt{D_{1}}dW_{1},\\ \ldots\\ dX_{J-1}=I_{J-1}m_{J-1}dt+\sqrt{D_{J-1}}dW_{J-1},\\ dX_{J}=\left(1-\sum_{i=1}^{J-1}I_{i}\right)m_{J}dt+\sqrt{D_{J}}dW_{J},\\ dt_{1}=I_{1}dt,\\ \ldots\\ dt_{J-1}=I_{J-1}dt.\end{cases}
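
Below is a sketch of the Euler–Maruyama integration of this general system, mirroring the two-armed sketch above: each $W_{l}$ is again treated as a Wiener process in the arm's own time $t_{l}$, and the arm with the largest upper confidence bound is selected with an argmax, which coincides with the product-of-Heaviside indicators outside the measure-zero set of ties. The name em_multi_armed and the initialization are assumptions made for the sketch.

import numpy as np

def em_multi_armed(m, D, a=1.0, h=1e-3, rng=None):
    # Euler-Maruyama sketch for the J-armed limiting system: at each step the
    # arm with the largest upper confidence bound has I_l = 1 and receives the
    # time increment h together with the corresponding Wiener increment.
    rng = np.random.default_rng() if rng is None else rng
    m, D = np.asarray(m, float), np.asarray(D, float)
    J = len(m)
    t_l = np.full(J, h)          # each arm starts with a small usage time
    t = float(t_l.sum())
    X = m * t_l + np.sqrt(D * t_l) * rng.standard_normal(J)
    while t < 1.0:
        U = X / t_l + np.sqrt(a * D * np.log(t / t_l) / t_l)
        l = int(np.argmax(U))    # I_l = 1 for this arm, 0 for the others
        X[l] += m[l] * h + np.sqrt(D[l] * h) * rng.standard_normal()
        t_l[l] += h
        t += h
    return t_l, X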

The presented system of equations can also be used to determine the horizon size at which the normalized regret converges. That value can be used as the number of processed batches for the batch version of the strategy described in [5].

As an example, we consider a three-armed bandit, again in the case of close distributions of rewards described in Section 3. The highest normalized regret is observed when $c_{1}=c_{2}=0$ and $c_{3}\approx 4.5$ [8]. This happens because in this situation the strategy must distinguish the best arm from two competing arms with fairly similar but lower rewards. Simulations show that the normalized regret converges at horizon size $N=70$, i.e., it is no more than $2\%$ greater than its value for larger horizon sizes.

5 Conclusion

We built and examined a limiting description (for horizon size $N\rightarrow\infty$) with a system of stochastic differential equations for the UCB strategy for the Gaussian multi-armed bandit.

A series of Monte-Carlo simulations was performed to verify the validity of the obtained results for the two-armed bandit in the case of “close” distributions of rewards. The maximum normalized regret for the case $N\rightarrow\infty$ was determined. The minimal control horizon size for which the normalized regret is not noticeably larger than the maximum possible was estimated.

5.0.1 Acknowledgements

The reported study was funded by RFBR, project number 20-01-00062.

References

  • [1] Auer, P.: Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research 3, 397–422 (2002). https://doi.org/10.1162/153244303321897663
  • [2] Berry, D.A., Fristedt, B.: Bandit Problems: Sequential Allocation of Experiments. Chapman and Hall, London/New York (1985)
  • [3] Cesa-Bianchi, N., Lugosi, G.: Prediction, Learning, and Games. Cambridge University Press (2006). https://doi.org/10.1017/CBO9780511546921
  • [4] Gao, Z., Han, Y., Ren, Z., Zhou, Z.: Batched multi-armed bandits problem. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada (2019)
  • [5] Garbar, S.: Invariant description for batch version of ucb strategy for multi-armed bandit. Journal of Physics: Conference Series 1658, 012015 (2020). https://doi.org/10.1088/1742-6596/1658/1/012015
  • [6] Garbar, S.: Dependency of regret on accuracy of variance estimation for different versions of ucb strategy for gaussian multi-armed bandits. Journal of Physics: Conference Series 2052, 012013 (2021). https://doi.org/10.1088/1742-6596/2052/1/012013
  • [7] Garbar, S.: Invariant description of ucb strategy for multi-armed bandits for batch processing scenario. 24th International Conference on Circuits, Systems, Communications and Computers (CSCC), Chania, Greece pp. 75–78 (2020). https://doi.org/10.1109/CSCC49995.2020.00021
  • [8] Garbar, S., Kolnogorov, A.: Invariant description for regret of ucb strategy for gaussian multi-arm bandit. Proceedings of the 6th Stochastic Modeling Techniques and Data Analysis International Conference with Demographics Workshop 44, 251–260 (2020)
  • [9] Gittins, J.: Multi-Armed Bandit Allocation Indices. Wiley-Interscience Series in Systems and Optimization. John Wiley & Sons, Chichester (1989)
  • [10] Kolnogorov, A.: Parallel design of robust control in the stochastic environment (the two-armed bandit problem). Automation and Remote Control 73 (2012). https://doi.org/10.1134/S000511791204008X
  • [11] Kolnogorov, A.: Gaussian two-armed bandit and optimization of batch data processing. Problems of Information Transmission 54, 84–100 (2018). https://doi.org/10.1134/S0032946018010076
  • [12] Lai, T.: Adaptive treatment allocation and the multi-armed bandit problem. The Annals of Statistics 15 (1987). https://doi.org/10.1214/aos/1176350495
  • [13] Lattimore, T., Szepesvári, C.: Bandit Algorithms. Cambridge University Press (2020). https://doi.org/10.1017/9781108571401
  • [14] Perchet, V., Rigollet, P., Chassang, S., Snowberg, E.: Batched bandit problems. The Annals of Statistics 44 (2015). https://doi.org/10.1214/15-AOS1381
  • [15] Sragovich, V.: Mathematical theory of adaptive control. World Scientific Publishing (2005)
  • [16] Tsetlin, M.: Automaton theory and the modelling of biological systems. Academic Press, New York (1973)
  • [17] Vogel, W.: An asymptotic minimax theorem for the two armed bandit problem. The Annals of Mathematical Statistics 31 (1960). https://doi.org/10.1214/aoms/1177705907