
Fixed-Budget Best-Arm Identification with Heterogeneous Reward Variances

Anusha Lalitha, Kousha Kalantari, Yifei Ma, Anoop Deoras, Branislav Kveton
Abstract

We study the problem of best-arm identification (BAI) in the fixed-budget setting with heterogeneous reward variances. We propose two variance-adaptive BAI algorithms for this setting: SHVar for known reward variances and SHAdaVar for unknown reward variances. The key idea in our algorithms is to adaptively allocate more of the budget to arms with higher reward variances. The main algorithmic novelty is in the design of SHAdaVar, which allocates the budget greedily based on overestimates of the unknown reward variances. We bound the probabilities of misidentifying the best arm for both SHVar and SHAdaVar. Our analyses rely on novel lower bounds on the number of arm pulls in BAI that do not require closed-form solutions to the budget allocation problem. One of our budget allocation problems is equivalent to optimal experiment design with unknown variances and is thus of broad interest. We also evaluate our algorithms on synthetic and real-world problems. In most settings, SHVar and SHAdaVar outperform all prior algorithms.

1 Introduction

The problem of best-arm identification (BAI) in the fixed-budget setting is a pure exploration bandit problem which can be briefly described as follows. An agent interacts with a stochastic multi-armed bandit with K arms and its goal is to identify the arm with the highest mean reward within a fixed budget n of arm pulls [Bubeck et al., 2009, Audibert et al., 2010]. This problem arises naturally in many applications in practice, such as online advertising, recommender systems, and vaccine tests [Lattimore and Szepesvari, 2019]. It is also common in applications where observations are costly, such as Bayesian optimization [Krause et al., 2008]. Another commonly studied setting is fixed-confidence BAI [Even-Dar et al., 2006, Soare et al., 2014]. Here the goal is to identify the best arm within a prescribed confidence level while minimizing the budget. Some works also studied both settings [Gabillon et al., 2012, Karnin et al., 2013, Kaufmann et al., 2016].

Our work can be motivated by the following example. Consider an A/B test where the goal is to identify a movie with the highest average user rating from a set of K movies. This problem can be formulated as BAI by treating the movies as arms and user ratings as stochastic rewards. Some movies get either unanimously good or bad ratings, and thus their ratings have a low variance. Others get a wide range of ratings, because they are rated highly by their target audience and poorly by others; and hence their ratings have a high variance. For this setting, we can design better BAI policies that take the variance into account. Specifically, movies with low-variance ratings can be exposed to fewer users in the A/B test than movies with high-variance ratings.

An analogous synthetic example is presented in Figure 1. In this example, the reward variances increase with the mean arm rewards for half of the arms, while the remaining arms have very low variances. The knowledge of the reward variances can obviously be used to reduce the number of pulls of arms with low-variance rewards. However, in practice, the reward variances are rarely known in advance, as in our motivating A/B testing example, and this makes the design and analysis of variance-adaptive BAI algorithms challenging. We revisit these two examples in our empirical studies in Section 5.

We propose and analyze two variance-adaptive BAI algorithms: SHVar and SHAdaVar. SHVar assumes that the reward variances are known and is a stepping stone for our fully-adaptive BAI algorithm SHAdaVar, which estimates them. SHAdaVar utilizes high-probability upper confidence bounds on the reward variances. Both algorithms are motivated by sequential halving (SH) of Karnin et al. [2013], a near-optimal solution for fixed-budget BAI with homogeneous reward variances.

Our main contributions are:

  • We design two variance-adaptive algorithms for fixed-budget BAI: SHVar for known reward variances and SHAdaVar for unknown reward variances. SHAdaVar is only the third algorithm for this setting [Gabillon et al., 2011, Faella et al., 2020] and only the second that can be implemented as analyzed [Faella et al., 2020]. The key idea in SHAdaVar is to solve a budget allocation problem with unknown reward variances by a greedy algorithm that overestimates them. This idea can be applied to other elimination algorithms in the cumulative regret setting [Auer and Ortner, 2010] and is of independent interest to the field of optimal experiment design [Pukelsheim, 1993].

  • We prove upper bounds on the probability of misidentifying the best arm for both SHVar and SHAdaVar. The analysis of SHVar extends that of Karnin et al. [2013] to heterogeneous variances. The analysis of SHAdaVar relies on a novel lower bound on the number of pulls of an arm that scales linearly with its unknown reward variance. This permits an analysis of sequential halving without requiring a closed form for the number of pulls of each arm.

  • We evaluate our methods empirically on Gaussian bandits and on the MovieLens dataset [Lam and Herlocker, 2016]. In most settings, SHVar and SHAdaVar outperform all prior algorithms.

The paper is organized as follows. In Section 2, we present the fixed-budget BAI problem. We present our algorithms in Section 3 and analyze them in Section 4. The algorithms are empirically evaluated in Section 5. We review prior works in Section 6 and conclude in Section 7.

2 Setting

We use the following notation. Random variables are capitalized, except for Greek letters like ΞΌ. For any positive integer n, we define [n] = {1, …, n}. The indicator function is denoted by 𝟙{Β·}. The i-th entry of vector v is v_i. If the vector is already indexed, such as v_j, we write v_{j,i}. The big O notation up to logarithmic factors is Γ•.

We have a stochastic bandit with K arms and denote the set of arms by π’œ = [K]. When an arm is pulled, its reward is drawn i.i.d. from its reward distribution. The reward distribution of arm i ∈ π’œ is sub-Gaussian with mean ΞΌ_i and variance proxy Οƒ_i^2. The best arm is the arm with the highest mean reward,

i_* = \operatorname*{arg\,max}_{i \in \mathcal{A}} \mu_i \,.

Without loss of generality, we assume that the arms are ordered so that ΞΌ_1 > ΞΌ_2 β‰₯ … β‰₯ ΞΌ_K. Therefore, arm i_* = 1 is the unique best arm. The agent has a budget of n observations and its goal is to identify i_* as accurately as possible after the n arm pulls. Specifically, let Î denote the arm returned by the agent after n pulls. Then our objective is to minimize the probability of misidentifying the best arm, β„™(Î β‰  i_*), which we also call the mistake probability. This setting is known as fixed-budget BAI [Bubeck et al., 2009, Audibert et al., 2010]. When observations are costly, it is natural to limit them by a fixed budget n.

Another commonly studied setting is fixed-confidence BAI [Even-Dar et al., 2006, Soare et al., 2014]. Here the agent is given an upper bound Ξ΄ on the mistake probability as an input and the goal is to attain β„™(Î β‰  i_*) ≀ Ξ΄ with the minimum budget n. Some works also studied both the fixed-budget and fixed-confidence settings [Gabillon et al., 2012, Karnin et al., 2013, Kaufmann et al., 2016].

Figure 1: Mean rewards and variances for K = 64 arms in the Gaussian bandit in Section 5.1.

3 Algorithms

A near-optimal solution for fixed-budget BAI with homogeneous reward variances is sequential halving [Karnin et al., 2013]. The key idea is to sequentially eliminate suboptimal arms over log_2 K stages. In each stage, all remaining arms are pulled equally often and the worst half of the arms is eliminated at the end of the stage. At the end of the last stage, only one arm Î remains and that arm is the estimated best arm.

Algorithm 1 Meta-algorithm for sequential halving.
1: Input: budget n, base algorithm Alg
2: Number of stages m ← ⌈log_2 K⌉
3: π’œ_1 ← π’œ
4: for s = 1, …, m do
5:   Per-stage budget n_s ← ⌊n/m⌋
6:   for t = 1, …, n_s do
7:     I_{s,t} ← Alg(s, t)
8:     Observe reward Y_{s,t,I_{s,t}} of arm I_{s,t}
9:   for i ∈ π’œ_s do
10:     N_{s,i} ← βˆ‘_{t=1}^{n_s} 𝟙{I_{s,t} = i}
11:     ΞΌΜ‚_{s,i} ← (1/N_{s,i}) βˆ‘_{t=1}^{n_s} 𝟙{I_{s,t} = i} Y_{s,t,i}
12:   π’œ_{s+1} ← the ⌈|π’œ_s|/2βŒ‰ arms i ∈ π’œ_s with the highest ΞΌΜ‚_{s,i}
13: Output: the last remaining arm Î in π’œ_{m+1}
Algorithm 2 SH: Pulled arm in sequential halving.
1: Input: stage s, round t
2: k ← (t βˆ’ 1) mod |π’œ_s| + 1
3: I_{s,t} ← the k-th arm in π’œ_s
4: Output: arm to pull I_{s,t}
Algorithm 3 SHVar: Pulled arm in sequential halving with known heterogeneous reward variances.
1: Input: stage s, round t
2: for i ∈ π’œ_s do
3:   N_{s,t,i} ← βˆ‘_{β„“=1}^{tβˆ’1} 𝟙{I_{s,β„“} = i}
4: I_{s,t} ← arg max_{i ∈ π’œ_s} Οƒ_i^2 / N_{s,t,i}
5: Output: arm to pull I_{s,t}

The main algorithmic contribution of our work is that we generalize sequential halving of Karnin et al. [2013] to heterogeneous reward variances. All of our algorithms can be viewed as instances of a meta-algorithm (Algorithm 1), which we describe in detail next. Its inputs are a budget n on the number of observations and a base algorithm Alg. The meta-algorithm has m stages (line 2) and the budget is divided equally across the stages, with a per-stage budget n_s = ⌊n/m⌋ (line 5). In stage s, all remaining arms π’œ_s are pulled according to Alg (lines 6–8). At the end of stage s, the worst half of the remaining arms, as measured by their estimated mean rewards, is eliminated (lines 9–12). Here Y_{s,t,i} is the stochastic reward of arm i in round t of stage s, I_{s,t} ∈ π’œ_s is the pulled arm in round t of stage s, N_{s,i} is the number of pulls of arm i in stage s, and ΞΌΜ‚_{s,i} is its mean reward estimate from all observations in stage s.
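To make the meta-algorithm concrete, the following is a minimal Python sketch of Algorithm 1 with the SH and SHVar selection rules as pluggable base algorithms. The function names (sequential_halving, sh_rule, make_shvar_rule) and the reward-sampling interface are our own illustrative choices, not part of the paper.

```python
import math

def sequential_halving(pull, K, n, base_rule):
    """Sketch of Algorithm 1: pull(i) returns a stochastic reward of arm i;
    base_rule(active, counts, t) returns the arm to pull in round t of a stage."""
    active = list(range(K))
    m = math.ceil(math.log2(K))  # number of stages
    n_s = n // m                 # per-stage budget
    for _ in range(m):
        counts = {i: 0 for i in active}
        sums = {i: 0.0 for i in active}
        for t in range(n_s):
            i = base_rule(active, counts, t)
            counts[i] += 1
            sums[i] += pull(i)
        means = {i: sums[i] / max(counts[i], 1) for i in active}
        # keep the better half of the remaining arms
        active = sorted(active, key=lambda i: means[i], reverse=True)[:math.ceil(len(active) / 2)]
    return active[0]  # the estimated best arm

def sh_rule(active, counts, t):
    # SH (Algorithm 2): round-robin over the remaining arms.
    return active[t % len(active)]

def make_shvar_rule(var):
    # SHVar (Algorithm 3): pull the arm whose mean reward estimate has the highest
    # variance sigma_i^2 / N_{s,t,i}; unpulled arms are treated as infinite variance.
    def rule(active, counts, t):
        return max(active, key=lambda i: float("inf") if counts[i] == 0 else var[i] / counts[i])
    return rule
```

For instance, sequential_halving(pull, K, n, make_shvar_rule(var)) runs SHVar when the reward variances var are known, and sequential_halving(pull, K, n, sh_rule) runs SH.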

The sequential halving of Karnin et al. [2013] is an instance of Algorithm 1 for Alg = SH. The pseudocode of SH, which pulls all arms in stage s equally often, is in Algorithm 2. We call the resulting algorithm SH. This algorithm misidentifies the best arm with probability [Karnin et al., 2013]

\mathbb{P}(\hat{I} \neq 1) \leq 3 \log_2 K \exp\left[-\frac{n}{8 H_2 \log_2 K}\right] \,, \qquad (1)

where

H_2 = \max_{i \in \mathcal{A} \setminus \{1\}} \frac{i}{\Delta_i^2} \qquad (2)

is a complexity parameter and Ξ”_i = ΞΌ_1 βˆ’ ΞΌ_i is the suboptimality gap of arm i. The bound in (1) decreases as the budget n increases and as the problem complexity H_2 decreases.

πš‚π™·\tt SH is near optimal only in the setting of homogeneous reward variances. In this work, we study the general setting where the reward variances of arms vary, potentially as extremely as in our motivating example in FigureΒ 1. In this example, πš‚π™·\tt SH would face arms with both low and high variances in each stage. A variance-adaptive πš‚π™·\tt SH could adapt its budget allocation in each stage to the reward variances and thus eliminate suboptimal arms more effectively.

3.1 Known Heterogeneous Reward Variances

We start with the setting of known reward variances. Let

\sigma_i^2 = \mathrm{var}[Y_{s,t,i}] = \mathbb{E}\left[(Y_{s,t,i} - \mu_i)^2\right] \qquad (3)

be the known reward variance of arm i. Our proposed algorithm is an instance of Algorithm 1 for Alg = SHVar. The pseudocode of SHVar is in Algorithm 3. The key idea is to pull the arm with the highest variance of its mean reward estimate. The variance of the mean reward estimate of arm i in round t of stage s is Οƒ_i^2 / N_{s,t,i}, where Οƒ_i^2 is the reward variance of arm i and N_{s,t,i} is the number of pulls of arm i up to round t of stage s. We call the resulting algorithm SHVar.

Note that πš‚π™·\tt SH is an instance of πš‚π™·πš…πšŠπš›\tt SHVar. Specifically, when all Οƒi=Οƒ\sigma_{i}=\sigma for some Οƒ>0\sigma>0, πš‚π™·πš…πšŠπš›\tt SHVar pulls all arms equally, as in πš‚π™·\tt SH. πš‚π™·πš…πšŠπš›\tt SHVar can be also viewed as pulling any arm ii in stage ss for

Ns,iβ‰ˆΟƒi2βˆ‘jβˆˆπ’œsΟƒj2​ns\displaystyle N_{s,i}\approx\frac{\sigma_{i}^{2}}{\sum_{j\in\mathcal{A}_{s}}\sigma_{j}^{2}}n_{s} (4)

times. This is stated formally and proved below.

Lemma 1.

Fix stage s and let the ideal number of pulls of arm i ∈ π’œ_s be

\lambda_{s,i} = \frac{\sigma_i^2}{\sum_{j \in \mathcal{A}_s} \sigma_j^2} n_s \,.

Let all Ξ»_{s,i} be integers. Then SHVar pulls arm i in stage s exactly Ξ»_{s,i} times.

Proof.

First, suppose that SHVar pulls each arm i exactly Ξ»_{s,i} times. Then the variances of all mean reward estimates at the end of stage s are identical, because

\frac{\sigma_i^2}{N_{s,i}} = \frac{\sigma_i^2}{\lambda_{s,i}} = \frac{\sigma_i^2}{\frac{\sigma_i^2}{\sum_{j \in \mathcal{A}_s} \sigma_j^2} n_s} = \frac{\sum_{j \in \mathcal{A}_s} \sigma_j^2}{n_s} \,.

Now suppose that this is not true. This implies that there exist an over-pulled arm i ∈ π’œ_s and an under-pulled arm k ∈ π’œ_s such that

\frac{\sigma_i^2}{N_{s,i}} < \frac{\sum_{j \in \mathcal{A}_s} \sigma_j^2}{n_s} < \frac{\sigma_k^2}{N_{s,k}} \,. \qquad (5)

Since arm i ∈ π’œ_s is over-pulled and Ξ»_{s,i} is an integer, there must exist a round t ∈ [n_s] such that

\frac{\sigma_i^2}{N_{s,t,i}} = \frac{\sigma_i^2}{\lambda_{s,i}} = \frac{\sum_{j \in \mathcal{A}_s} \sigma_j^2}{n_s} \,.

Let t be the last round where this equality holds, meaning that arm i is pulled in round t.

Now we combine the second inequality in (5) with N_{s,k} β‰₯ N_{s,t,k}, which holds by definition, and get

\frac{\sum_{j \in \mathcal{A}_s} \sigma_j^2}{n_s} < \frac{\sigma_k^2}{N_{s,k}} \leq \frac{\sigma_k^2}{N_{s,t,k}} \,.

The last two sets of inequalities lead to a contradiction. On one hand, we know that arm i is pulled in round t. On the other hand, we have Οƒ_i^2 / N_{s,t,i} < Οƒ_k^2 / N_{s,t,k}, which means that arm i cannot be pulled. This completes the proof. ∎

Lemma 1 says that each arm i ∈ π’œ_s is pulled O(Οƒ_i^2) times. Since the mean reward estimate of arm i at the end of stage s has variance Οƒ_i^2 / N_{s,i}, the variances of all estimates at the end of stage s are identical and equal to (βˆ‘_{iβˆˆπ’œ_s} Οƒ_i^2) / n_s. This relates our problem to the G-optimal design [Pukelsheim, 1993]. Specifically, the G-optimal design for independent experiments i ∈ π’œ_s is an allocation of observations (N_{s,i})_{iβˆˆπ’œ_s} such that βˆ‘_{iβˆˆπ’œ_s} N_{s,i} = n_s and the maximum variance

\max_{i \in \mathcal{A}_s} \frac{\sigma_i^2}{N_{s,i}} \qquad (6)

is minimized. This happens precisely when all Οƒ_i^2 / N_{s,i} are identical, that is, when N_{s,i} = Ξ»_{s,i} for the Ξ»_{s,i} in Lemma 1.
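As a quick numerical illustration of Lemma 1 and the G-optimal design connection, the sketch below (with hypothetical variance values of our own choosing) runs the greedy rule of SHVar for n_s rounds and compares the resulting pull counts to the ideal allocation Ξ»_{s,i}.

```python
import numpy as np

var = np.array([0.1, 0.4, 0.9, 1.6])  # hypothetical reward variances sigma_i^2
n_s = 300                             # per-stage budget

# Greedy rule of SHVar: always pull the arm with the largest sigma_i^2 / N_{s,t,i}.
counts = np.zeros(len(var))
for _ in range(n_s):
    ratios = np.where(counts == 0, np.inf, var / np.maximum(counts, 1))
    counts[np.argmax(ratios)] += 1

lam = var / var.sum() * n_s           # ideal allocation lambda_{s,i} from Lemma 1
print(counts)                         # by Lemma 1, equal to lam = [10, 40, 90, 160]
print((var / counts).max())           # the maximum variance in (6) is minimized
```

Since all Ξ»_{s,i} are integers in this instance, Lemma 1 applies exactly and the greedy pull counts coincide with the proportional allocation.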

3.2 Unknown Heterogeneous Reward Variances

Our second proposal is an algorithm for unknown reward variances. One natural idea, which is expected to be practical but hard to analyze, is to replace Οƒ_i^2 in SHVar with its empirical estimate from the past t βˆ’ 1 rounds in stage s,

\hat{\sigma}_{s,t,i}^2 = \frac{1}{N_{s,t,i} - 1} \sum_{\ell=1}^{t-1} \mathds{1}\{I_{s,\ell} = i\} (Y_{s,\ell,i} - \hat{\mu}_{s,t,i})^2 \,,

where

\hat{\mu}_{s,t,i} = \frac{1}{N_{s,t,i}} \sum_{\ell=1}^{t-1} \mathds{1}\{I_{s,\ell} = i\} Y_{s,\ell,i}

is the empirical mean reward of arm i in round t of stage s. This design would be hard to analyze because ΟƒΜ‚_{s,t,i} can underestimate Οƒ_i, and thus is not an optimistic estimate.

The key idea in our solution is to act optimistically using an upper confidence bound (UCB) on the reward variance. To derive it, we assume that the reward noise is Gaussian. Specifically, the reward of arm i in round t of stage s is distributed as Y_{s,t,i} ∼ 𝒩(ΞΌ_i, Οƒ_i^2). This allows us to derive the following upper and lower bounds on the unknown variance Οƒ_i^2.

Algorithm 4 SHAdaVar: Pulled arm in sequential halving with unknown heterogeneous reward variances.
1: Input: stage s, round t
2: if t ≀ |π’œ_s| (4 log(1/Ξ΄) + 1) then
3:   k ← (t βˆ’ 1) mod |π’œ_s| + 1
4:   I_{s,t} ← the k-th arm in π’œ_s
5: else
6:   for i ∈ π’œ_s do
7:     N_{s,t,i} ← βˆ‘_{β„“=1}^{tβˆ’1} 𝟙{I_{s,β„“} = i}
8:   I_{s,t} ← arg max_{i ∈ π’œ_s} U_{s,t,i} / N_{s,t,i}
9: Output: arm to pull I_{s,t}
Lemma 2.

Fix stage s, round t ∈ [n_s], arm i ∈ π’œ_s, and failure probability Ξ΄ ∈ (0, 1). Let

N = N_{s,t,i} - 1

and suppose that N > 4 log(1/Ξ΄). Then

\mathbb{P}\left(\sigma_i^2 \geq \frac{\hat{\sigma}_{s,t,i}^2}{1 - 2\sqrt{\frac{\log(1/\delta)}{N}}}\right) \leq \delta \,,

that is, the upper bound on Οƒ_i^2 holds with probability at least 1 βˆ’ Ξ΄. Analogously,

\mathbb{P}\left(\hat{\sigma}_{s,t,i}^2 \geq \sigma_i^2\left[1 + 2\sqrt{\frac{\log(1/\delta)}{N}} + \frac{2\log(1/\delta)}{N}\right]\right) \leq \delta \,.

Proof.

The first claim is proved as follows. By Cochran's theorem, we have that ΟƒΜ‚_{s,t,i}^2 N / Οƒ_i^2 is a χ² random variable with N degrees of freedom. Its concentration was analyzed in Laurent and Massart [2000]. More specifically, by (4.4) in Laurent and Massart [2000], an immediate corollary of their Lemma 1, we have

\mathbb{P}\left(N - \frac{\hat{\sigma}_{s,t,i}^2 N}{\sigma_i^2} \geq 2\sqrt{N \log(1/\delta)}\right) \leq \delta \,.

Now we divide both sides in the probability by N, multiply by Οƒ_i^2, and rearrange the formula as

\mathbb{P}\left(\sigma_i^2\left(1 - 2\sqrt{\log(1/\delta)/N}\right) \geq \hat{\sigma}_{s,t,i}^2\right) \leq \delta \,.

When 1 βˆ’ 2√(log(1/Ξ΄)/N) > 0, we can divide both sides by it and get the first claim in Lemma 2.

The second claim is proved analogously. Specifically, by (4.3) in Laurent and Massart [2000], an immediate corollary of their Lemma 1, we have

\mathbb{P}\left(\frac{\hat{\sigma}_{s,t,i}^2 N}{\sigma_i^2} - N \geq 2\sqrt{N \log(1/\delta)} + 2\log(1/\delta)\right) \leq \delta \,.

Now we divide both sides in the probability by N, multiply by Οƒ_i^2, and obtain the second claim in Lemma 2. This concludes the proof. ∎

By Lemma 2, when N_{s,t,i} > 4 log(1/Ξ΄) + 1,

U_{s,t,i} = \frac{\hat{\sigma}_{s,t,i}^2}{1 - 2\sqrt{\frac{\log(1/\delta)}{N_{s,t,i}-1}}} \qquad (7)

is a high-probability upper bound on the reward variance of arm i in round t of stage s, which holds with probability at least 1 βˆ’ Ξ΄. This bound decreases as the number of observations N_{s,t,i} increases and as the required confidence 1 βˆ’ Ξ΄ decreases. To apply the bound across multiple stages, rounds, and arms, we use a union bound.
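The following is a minimal Python sketch of the bound in (7) for a single arm, assuming Gaussian rewards; the function name variance_ucb and the list-of-samples interface are ours.

```python
import math

def variance_ucb(samples, delta):
    """High-probability upper bound (7) on the reward variance of one arm,
    computed from the rewards observed for that arm in the current stage."""
    N = len(samples) - 1                                   # degrees of freedom, N_{s,t,i} - 1
    assert N > 4 * math.log(1 / delta), "need N_{s,t,i} > 4 log(1/delta) + 1"
    mean = sum(samples) / len(samples)
    var_hat = sum((y - mean) ** 2 for y in samples) / N    # empirical variance
    return var_hat / (1 - 2 * math.sqrt(math.log(1 / delta) / N))
```

The assertion mirrors the condition N_{s,t,i} > 4 log(1/Ξ΄) + 1 under which the denominator in (7) is positive.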

The bound in (7) leads to our algorithm that overestimates the variance. The algorithm is an instance of Algorithm 1 for Alg = SHAdaVar. The pseudocode of SHAdaVar is in Algorithm 4. To guarantee that N_{s,t,i} > 4 log(1/Ξ΄) + 1, we pull each arm in π’œ_s 4 log(1/Ξ΄) + 1 times at the beginning of every stage s. We call the resulting algorithm SHAdaVar.
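A sketch of the corresponding arm-selection rule, reusing variance_ucb from the sketch above, is below. The per-arm reward-history interface and the way the forced-exploration condition is checked (per-arm counts instead of the round index) are our own simplifications of Algorithm 4.

```python
import math

def shadavar_rule(active, history, delta):
    """Sketch of Algorithm 4: forced exploration until every remaining arm has
    more than 4 log(1/delta) + 1 observations, then pull the arm with the
    largest U_{s,t,i} / N_{s,t,i}. history[i] is the list of rewards observed
    for arm i in the current stage."""
    min_pulls = 4 * math.log(1 / delta) + 1
    least = min(active, key=lambda i: len(history[i]))
    if len(history[least]) <= min_pulls:
        return least                      # forced pulls of under-sampled arms
    return max(active, key=lambda i: variance_ucb(history[i], delta) / len(history[i]))
```

In the meta-algorithm sketch in Section 3, this rule would play the role of sh_rule after adapting the base-rule interface to pass per-arm reward histories instead of pull counts.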

Note that SHAdaVar can be viewed as a variant of SHVar where U_{s,t,i} replaces Οƒ_i^2. Therefore, it can also be viewed as solving the G-optimal design in (6) without knowing the reward variances Οƒ_i^2, and SHAdaVar is thus of broader interest to the optimal experiment design community [Pukelsheim, 1993]. We also note that the assumption of Gaussian noise in the design of SHAdaVar is limiting. To address this issue, we experiment with non-Gaussian noise in Section 5.2.

4 Analysis

This section comprises three analyses. In Section 4.1, we bound the probability that SHVar, an algorithm that knows the reward variances, misidentifies the best arm. In Section 4.2, we provide an alternative analysis that does not rely on the closed form in (4). Finally, in Section 4.3, we bound the probability that SHAdaVar, an algorithm that learns the reward variances, misidentifies the best arm.

All analyses are under the assumption of Gaussian reward noise. Specifically, the reward of arm i in round t of stage s is distributed as Y_{s,t,i} ∼ 𝒩(ΞΌ_i, Οƒ_i^2).

4.1 Error Bound of SHVar

We start by analyzing SHVar, which is a stepping stone for analyzing SHAdaVar. To simplify the proof, we assume that both m and n_s are integers. We also assume that all budget allocations in Lemma 1 have integral solutions.

Theorem 3.

SHVar misidentifies the best arm with probability

\mathbb{P}(\hat{I} \neq 1) \leq 2 \log_2 K \exp\left[-\frac{n \Delta_{\min}^2}{4 \log_2 K \sum_{j \in \mathcal{A}} \sigma_j^2}\right] \,,

where Ξ”_min = ΞΌ_1 βˆ’ ΞΌ_2 is the minimum gap.

Proof.

The claim is proved in Appendix A. We follow the outline of Karnin et al. [2013]. The novelty is in extending the proof to heterogeneous reward variances. This requires a non-uniform budget allocation, where arms with higher reward variances are pulled more often (Lemma 1). ∎

The bound in Theorem 3 depends on all quantities as expected. It decreases as the budget n and the minimum gap Ξ”_min increase, and as the number of arms K and the variances Οƒ_j^2 decrease. SHVar reduces to SH of Karnin et al. [2013] when Οƒ_i^2 = 1/4 for all arms i ∈ π’œ. The bounds of SH and SHVar become comparable when we apply H_2 ≀ K/Ξ”_min^2 in (1) and note that βˆ‘_{jβˆˆπ’œ} Οƒ_j^2 = K/4 in Theorem 3. The extra factor of 8 in the exponent of (1) is due to a different proof, which yields a finer dependence on the gaps.

4.2 Alternative Error Bound of SHVar

Now we analyze SHVar differently. The resulting bound is weaker than that in Theorem 3 but its proof can be easily extended to SHAdaVar.

Theorem 4.

SHVar misidentifies the best arm with probability

\mathbb{P}(\hat{I} \neq 1) \leq 2 \log_2 K \exp\left[-\frac{(n - K \log K) \Delta_{\min}^2}{4 \sigma_{\max}^2 K \log_2 K}\right] \,,

where Ξ”_min = ΞΌ_1 βˆ’ ΞΌ_2 is the minimum gap and Οƒ_max^2 = max_{iβˆˆπ’œ} Οƒ_i^2 is the maximum reward variance.

Proof.

The claim is proved in Appendix B. The key idea in the proof is to derive a lower bound on the number of pulls of any arm i in stage s, instead of using the closed form of N_{s,i} in (4). The lower bound is

N_{s,i} \geq \frac{\sigma_i^2}{\sigma_{\max}^2}\left(\frac{n_s}{|\mathcal{A}_s|} - 1\right) \,.

An important property of the bound is that it is Ξ©(Οƒ_i^2 n_s), similarly to N_{s,i} in (4). Therefore, the rest of the proof is similar to that of Theorem 3. ∎

As in Theorem 3, the bound in Theorem 4 depends on all quantities as expected. It decreases as the budget n and the minimum gap Ξ”_min increase, and as the number of arms K and the maximum variance Οƒ_max^2 decrease. The bound approaches that in Theorem 3 when all reward variances are identical.

4.3 Error Bound of SHAdaVar

Now we analyze SHAdaVar.

Theorem 5.

Suppose that Ξ΄ < 1/(Kn) and

n \geq K \log_2 K \, (4 \log(Kn/\delta) + 1) \,.

Then SHAdaVar misidentifies the best arm with probability

\mathbb{P}(\hat{I} \neq 1) \leq 2 \log_2 K \exp\left[-\alpha \frac{(n - K \log K) \Delta_{\min}^2}{4 \sigma_{\max}^2 K \log_2 K}\right] \,,

where Ξ”_min and Οƒ_max^2 are defined in Theorem 4, and

\alpha = \frac{1 - 2\sqrt{\frac{\log(Kn/\delta)}{n/K - 2}}}{1 + 2\sqrt{\frac{\log(Kn/\delta)}{n/K - 2}} + \frac{2\log(Kn/\delta)}{n/K - 2}} \,.
Proof.

The claim is proved in Appendix C. The key idea in the proof is to derive a lower bound on the number of pulls of any arm i in stage s, similarly to that in Theorem 4. The lower bound is

N_{s,i} \geq \frac{\sigma_i^2}{\sigma_{\max}^2}\,\alpha(|\mathcal{A}_s|, n_s, \delta)\left(\frac{n_s}{|\mathcal{A}_s|} - 1\right)

and holds with probability at least 1 βˆ’ Ξ΄. Since the bound is Ξ©(Οƒ_i^2 n_s), as in the proof of Theorem 4, the rest of the proof is similar. The main difference from Theorem 4 is in the factor Ξ±(|π’œ_s|, n_s, Ξ΄), which converges to 1 as n_s β†’ ∞. ∎

The bound in Theorem 5 depends on all quantities as expected. It decreases as the budget n and the minimum gap Ξ”_min increase, and as the number of arms K and the maximum variance Οƒ_max^2 decrease. As n β†’ ∞, we get Ξ± β†’ 1 and the bound converges to that in Theorem 4.

5 Experiments

In this section, we empirically evaluate our proposed algorithms, SHVar and SHAdaVar, and compare them to algorithms from prior works. We choose the following baselines: uniform allocation (Unif), sequential halving (SH) [Karnin et al., 2013], gap-based exploration (GapE) [Gabillon et al., 2011], gap-based exploration with variance (GapE-V) [Gabillon et al., 2011], and variance-based rejects (VBR) [Faella et al., 2020].

πš„πš—πš’πš\tt Unif allocates equal budget to all arms and πš‚π™·\tt SH was originally proposed for homogeneous reward variances. Neither πš„πš—πš’πš\tt Unif nor πš‚π™·\tt SH can adapt to heterogenuous reward variances. π™ΆπšŠπš™π™΄\tt GapE, π™ΆπšŠπš™π™΄βˆ’πš…\tt GapE\mathchar 45\relax V and πš…π™±πš\tt VBR are variance-adaptive BAI methods from related works (SectionΒ 6). In π™ΆπšŠπš™π™΄\tt GapE, we use HH from Theorem 1 of Gabillon etΒ al. [2011]. In π™ΆπšŠπš™π™΄βˆ’πš…\tt GapE\mathchar 45\relax V, we use HH from Theorem 2 of Gabillon etΒ al. [2011]. Both π™ΆπšŠπš™π™΄\tt GapE and π™ΆπšŠπš™π™΄βˆ’πš…\tt GapE\mathchar 45\relax V assume bounded reward distributions with support [0,b][0,b]. We choose b=maxiβˆˆπ’œβ‘ΞΌi+Οƒi​log⁑nb=\max_{i\in\mathcal{A}}\mu_{i}+\sigma_{i}\sqrt{\log n}, since this is a high-probability upper bound on the absolute value of nn independent observations from 𝒩​(ΞΌi,Οƒi2)\mathcal{N}(\mu_{i},\sigma_{i}^{2}). In πš‚π™·π™°πšπšŠπš…πšŠπš›\tt SHAdaVar, we set Ξ΄=0.05\delta=0.05, and thus our upper bounds on reward variances hold with probability 0.950.95. In πš…π™±πš\tt VBR, Ξ³=1.96\gamma=1.96, which means that the mean arm rewards lie between their upper and lower bounds with probability 0.950.95. Faella etΒ al. [2020] showed that πš…π™±πš\tt VBR performs well with Gaussian noise when Ξ³β‰ˆ2\gamma\approx 2. All reported results are averaged over 5 0005\,000 runs.

π™ΆπšŠπš™π™΄\tt GapE and π™ΆπšŠπš™π™΄βˆ’πš…\tt GapE\mathchar 45\relax V have O​(exp⁑[βˆ’c​n/H])O(\exp[-cn/H]) error bounds on the probability of misidentifying the best arm, where nn is the budget, HH is the complexity parameter, and c=1/144c=1/144 for π™ΆπšŠπš™π™΄\tt GapE and c=1/512c=1/512 for π™ΆπšŠπš™π™΄βˆ’πš…\tt GapE\mathchar 45\relax V. Our error bounds are O​(exp⁑[βˆ’c′​n/Hβ€²])O(\exp[-c^{\prime}n/H^{\prime}]), where Hβ€²H^{\prime} is a comparable complexity parameter and cβ€²=1/(4​log2⁑K)c^{\prime}=1/(4\log_{2}K). Even for moderate KK, cβ‰ͺcβ€²c\ll c^{\prime}. Therefore, when πš‚π™·πš…πšŠπš›\tt SHVar and πš‚π™·π™°πšπšŠπš…πšŠπš›\tt SHAdaVar are implemented as analyzed, they provide stronger guarantees on identifying the best arm than π™ΆπšŠπš™π™΄\tt GapE and π™ΆπšŠπš™π™΄βˆ’πš…\tt GapE\mathchar 45\relax V. To make the algorithms comparable, we set HH of π™ΆπšŠπš™π™΄\tt GapE and π™ΆπšŠπš™π™΄βˆ’πš…\tt GapE\mathchar 45\relax V to H​c/cβ€²Hc/c^{\prime}, by increasing their confidence widths. Since HH is an input to both π™ΆπšŠπš™π™΄\tt GapE and π™ΆπšŠπš™π™΄βˆ’πš…\tt GapE\mathchar 45\relax V, note that they have an advantage over our algorithms that do not require it.

5.1 Synthetic Experiments

Our first experiment is on a Gaussian bandit with K arms. The mean reward of arm i is ΞΌ_i = 1 βˆ’ √((i βˆ’ 1)/K). We choose this setting because SH is known to perform well in it. Specifically, note that the complexity parameter H_2 in (2) is minimized when i/Ξ”_i^2 is equal for all i ∈ π’œ βˆ– {1}. For our ΞΌ_i, Ξ”_i^2 = (i βˆ’ 1)/K β‰ˆ i/K and thus i/Ξ”_i^2 β‰ˆ K. We set the reward variance to Οƒ_i^2 = 0.9 ΞΌ_i^2 + 0.1 when arm i is even and Οƒ_i^2 = 0.1 when arm i is odd. We additionally perturb ΞΌ_i and Οƒ_i^2 with additive 𝒩(0, 0.05^2) and multiplicative Unif(0.5, 1.5) noise, respectively. We visualize the mean rewards ΞΌ_i and the corresponding variances Οƒ_i^2, for K = 64 arms, in Figure 1. The variances are chosen so that every stage of sequential halving involves both high-variance and low-variance arms. Therefore, an algorithm that adapts its budget allocation to the reward variances of the remaining arms eliminates the best arm with a lower probability than an algorithm that does not.
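For concreteness, the following is a sketch of how this problem instance can be generated; the random seed and the exact handling of the perturbations are our own choices.

```python
import numpy as np

def gaussian_bandit_instance(K, rng):
    """Sketch of the Gaussian bandit in Section 5.1."""
    i = np.arange(1, K + 1)
    mu = 1 - np.sqrt((i - 1) / K)                          # mean rewards
    var = np.where(i % 2 == 0, 0.9 * mu ** 2 + 0.1, 0.1)   # even arms: high variance
    mu = mu + rng.normal(0, 0.05, K)                       # additive mean perturbation
    var = var * rng.uniform(0.5, 1.5, K)                   # multiplicative variance perturbation
    return mu, var

rng = np.random.default_rng(0)
mu, var = gaussian_bandit_instance(64, rng)
pull = lambda i: rng.normal(mu[i], np.sqrt(var[i]))        # one stochastic reward of arm i
```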

Figure 2: Probability of misidentifying the best arm in the Gaussian bandit in Section 5.1, as the budget n increases. The number of arms is K = 64 and the results are averaged over 5,000 runs.

In Figure 2, we report the probability of misidentifying the best arm among K = 64 arms (Figure 1) as the budget n increases. As expected, the naive algorithm Unif performs the worst. GapE and GapE-V perform only slightly better. When these algorithms have error guarantees comparable to SHVar and SHAdaVar, their confidence intervals are too wide to be practical. SH performs surprisingly well. As observed by Karnin et al. [2013] and confirmed by Li et al. [2018], SH is a superior algorithm in the fixed-budget setting because it aggressively eliminates half of the remaining arms in each stage. Therefore, it outperforms GapE and GapE-V. We note that SHVar outperforms all algorithms for all budgets n. For smaller budgets, VBR outperforms SHAdaVar. However, as the budget n increases, SHAdaVar outperforms VBR; and without any additional information about the problem instance, it approaches the performance of SHVar, which knows the reward variances. This shows that our variance upper bounds improve quickly with larger budgets, as expected from the algebraic form of (7).

Figure 3: Probability of misidentifying the best arm in the Gaussian bandit in Section 5.1, as the number of arms K increases. The budget is fixed at n = 5,000 and the results are averaged over 5,000 runs.

In the next experiment, we take the same Gaussian bandit as in Figure 2. The budget is fixed at n = 5,000 and we vary the number of arms K from 32 to 64. In Figure 3, we show the probability of misidentifying the best arm as the number of arms K increases. We observe two major trends. First, the relative order of the algorithms, as measured by their probability of a mistake, is similar to Figure 2. Second, all algorithms get worse as the number of arms K increases because the problem instance becomes harder. This experiment shows that SHVar and SHAdaVar perform well for a wide range of K: they have the lowest probabilities of a mistake for all K. While the other algorithms perform well at K = 32, where their probability of a mistake is around 0.05 or below, they perform poorly at K = 64, where it is above 0.1.

5.2 MovieLens Experiments

Our next experiment is motivated by the A/B testing problem in Section 1. The objective is to identify the movie with the highest mean rating from a pool of K movies, where movies are arms and their ratings are rewards. The movies, users, and ratings are simulated using the MovieLens 1M dataset [Lam and Herlocker, 2016]. This dataset contains one million ratings given by 6,040 users to 3,952 movies. We complete the missing ratings using low-rank matrix factorization with rank 5, which is done using alternating least squares [Davenport and Romberg, 2016]. The result is a 6,040 Γ— 3,952 matrix M, where M_{i,j} is the estimated rating given by user i to movie j.

Figure 4: Means and variances of ratings of K movies from the MovieLens dataset. A new sample is generated in each run of the experiment, as described in Section 5.2.

This experiment is averaged over 5,000 runs. In each run, we randomly choose new movies according to the following procedure. For all arms i ∈ π’œ, we generate a mean ΞΌΜƒ_i and a variance σ̃_i^2 as described in Section 5.1. Then, for each i, we find the closest movie in the MovieLens dataset, the movie with rating mean ΞΌ_i and variance Οƒ_i^2 that minimizes the distance (ΞΌ_i βˆ’ ΞΌΜƒ_i)^2 + (Οƒ_i^2 βˆ’ σ̃_i^2)^2. The means and variances of movie ratings from two runs are shown in Figure 4. As in Section 5.1, the movies are selected so that sequential elimination with halving is expected to perform well. The variance of movie ratings in Figure 4 is intrinsic to our domain: movies are often made for specific audiences and thus can have a huge variance in their ratings. For instance, a child may not like a horror movie, while a horror enthusiast would enjoy it. Because of this, an algorithm that adapts its budget allocation to the rating variances of the remaining movies can perform better. The last notable difference from Section 5.1 is that the movie ratings are realistic. In particular, when arm i is pulled, we choose a random user j and return M_{j,i} as the stochastic reward. Therefore, this experiment showcases the robustness of our algorithms beyond Gaussian noise. A sketch of this arm-construction procedure is given below.
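The following is a rough sketch of the procedure, assuming the completed rating matrix M from the low-rank factorization is already available and that mu_tilde and var_tilde come from the generator in Section 5.1; all names are ours.

```python
import numpy as np

def pick_movies(M, mu_tilde, var_tilde):
    """For each target (mean, variance) pair, return the movie whose rating
    mean and variance are closest in squared distance."""
    movie_mu = M.mean(axis=0)      # mean rating of each movie
    movie_var = M.var(axis=0)      # variance of the ratings of each movie
    arms = []
    for m, v in zip(mu_tilde, var_tilde):
        dist = (movie_mu - m) ** 2 + (movie_var - v) ** 2
        arms.append(int(np.argmin(dist)))
    return arms

def pull(M, movie, rng):
    # A pull returns the rating of the chosen movie by a uniformly random user.
    user = rng.integers(M.shape[0])
    return M[user, movie]
```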

Figure 5: Probability of misidentifying the best movie in the MovieLens bandit in Section 5.2, as the budget n increases. The number of movies is K = 64 and the results are averaged over 5,000 runs.

In Figure 5, we report the probability of misidentifying the best movie from K = 64 as the budget n increases. SHVar and SHAdaVar perform the best for most budgets, although the reward distributions are not Gaussian. The relative performance of the algorithms is similar to Section 5.1: Unif is the worst, and GapE and GapE-V improve upon it. The only exception is VBR: it performs poorly for smaller budgets, and on par with SHVar and SHAdaVar for larger budgets.

We increase the number of movies next. In Figure 6, we report the probability of misidentifying the best movie from K = 128 as the budget n increases. The trends are similar to K = 64, except that VBR performs poorly for all budgets. This is because VBR has K stages and eliminates one arm per stage even when the number of observations is small. In comparison, our algorithms have log_2 K stages.

Figure 6: Probability of misidentifying the best movie in the MovieLens bandit in Section 5.2, as the budget n increases. The number of movies is K = 128 and the results are averaged over 5,000 runs.

6 Related Work

Best-arm identification has been studied extensively in both the fixed-budget [Bubeck et al., 2009, Audibert et al., 2010] and fixed-confidence [Even-Dar et al., 2006] settings. The two closest prior works are Gabillon et al. [2011] and Faella et al. [2020], both of which studied fixed-budget BAI with heterogeneous reward variances. All other works on BAI with heterogeneous reward variances are in the fixed-confidence setting [Lu et al., 2021, Zhou and Tian, 2022, Jourdan et al., 2022].

The first work on variance-adaptive BAI was in the fixed-budget setting [Gabillon et al., 2011]. This paper proposed the algorithm GapE-V and showed that its probability of a mistake decreases exponentially with the budget n. Our error bounds are comparable to those of Gabillon et al. [2011]. The main shortcoming of the analyses in Gabillon et al. [2011] is that they assume that the complexity parameter is known and used by GapE-V. Since the complexity parameter depends on unknown gaps and reward variances, it is typically unknown in practice. To address this issue, Gabillon et al. [2011] introduced an adaptive variant of GapE-V, called A-GapE-V, where the complexity parameter is estimated. This algorithm does not come with any guarantee.

The only other work that studied variance-adaptive fixed-budget BAI is Faella et al. [2020]. This paper proposed and analyzed a variant of the successive rejects algorithm [Audibert et al., 2010]. Since SH of Karnin et al. [2013] has a comparable error bound to successive rejects of Audibert et al. [2010], our variance-adaptive sequential halving algorithms have comparable error bounds to the variance-adaptive successive rejects of Faella et al. [2020]. Roughly speaking, all bounds can be stated as exp[βˆ’n/H], where H is a complexity parameter that depends on the number of arms K, their variances, and their gaps.

We propose variance-adaptive sequential halving for fixed-budget BAI. Our algorithms have state-of-the-art performance in our experiments (Section 5). They are conceptually simpler than prior works [Gabillon et al., 2011, Faella et al., 2020] and can be implemented as analyzed, unlike Gabillon et al. [2011].

7 Conclusions

We study best-arm identification in the fixed-budget setting where the reward variances vary across the arms. We propose two variance-adaptive elimination algorithms for this problem: SHVar for known reward variances and SHAdaVar for unknown reward variances. Both algorithms proceed in stages and pull arms with higher reward variances more often than those with lower variances. While the design and analysis of SHVar are of interest, they are a stepping stone for SHAdaVar, which adapts to unknown reward variances. The novelty in SHAdaVar is in solving an optimal design problem with unknown observation variances. Its analysis relies on a novel lower bound on the number of arm pulls in BAI that does not require closed-form solutions to the budget allocation problem. Our numerical simulations show that SHVar and SHAdaVar are not only theoretically sound, but also competitive with state-of-the-art baselines.

Our work leaves open several questions of interest. First, the design of SHAdaVar is for Gaussian reward noise. The reason for this choice is that our initial experiments showed quick concentration and also robustness to noise misspecification. Concentration of general random variables with unknown variances can be analyzed using empirical Bernstein bounds [Maurer and Pontil, 2009]. This approach was taken by Gabillon et al. [2011] and could also be applied in our setting. For now, to address the issue of Gaussian noise, we experiment with non-Gaussian noise in Section 5.2. Second, while our error bounds depend on all parameters of interest as expected, we do not provide a matching lower bound. When the reward variances are known, we believe that a lower bound can be proved by building on the work of Carpentier and Locatelli [2016]. Finally, our algorithms are not contextual, which limits their application because many bandit problems are contextual [Li et al., 2010, Wen et al., 2015, Zong et al., 2016].

References

  • Audibert et al. [2010] Jean-Yves Audibert, Sebastien Bubeck, and Remi Munos. Best arm identification in multi-armed bandits. In Proceedings of the 23rd Annual Conference on Learning Theory, pages 41–53, 2010.
  • Auer and Ortner [2010] Peter Auer and Ronald Ortner. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica, 61(1-2):55–65, 2010.
  • Boucheron et al. [2013] Stephane Boucheron, Gabor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
  • Bubeck et al. [2009] Sebastien Bubeck, Remi Munos, and Gilles Stoltz. Pure exploration in multi-armed bandits problems. In Proceedings of the 20th International Conference on Algorithmic Learning Theory, pages 23–37, 2009.
  • Carpentier and Locatelli [2016] Alexandra Carpentier and Andrea Locatelli. Tight (lower) bounds for the fixed budget best arm identification bandit problem. In Proceedings of the 29th Annual Conference on Learning Theory, pages 590–604, 2016.
  • Davenport and Romberg [2016] Mark Davenport and Justin Romberg. An overview of low-rank matrix recovery from incomplete observations. IEEE Journal of Selected Topics in Signal Processing, 10(4):608–622, 2016.
  • Even-Dar et al. [2006] Eyal Even-Dar, Shie Mannor, and Yishay Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. Journal of Machine Learning Research, 7:1079–1105, 2006.
  • Faella et al. [2020] Marco Faella, Alberto Finzi, and Luigi Sauro. Rapidly finding the best arm using variance. In Proceedings of the 24th European Conference on Artificial Intelligence, 2020.
  • Gabillon et al. [2011] Victor Gabillon, Mohammad Ghavamzadeh, Alessandro Lazaric, and Sebastien Bubeck. Multi-bandit best arm identification. In Advances in Neural Information Processing Systems 24, 2011.
  • Gabillon et al. [2012] Victor Gabillon, Mohammad Ghavamzadeh, and Alessandro Lazaric. Best arm identification: A unified approach to fixed budget and fixed confidence. In Advances in Neural Information Processing Systems 25, 2012.
  • Jourdan et al. [2022] Marc Jourdan, Remy Degenne, and Emilie Kaufmann. Dealing with unknown variances in best-arm identification. CoRR, abs/2210.00974, 2022. URL https://arxiv.org/abs/2210.00974.
  • Karnin et al. [2013] Zohar Karnin, Tomer Koren, and Oren Somekh. Almost optimal exploration in multi-armed bandits. In Proceedings of the 30th International Conference on Machine Learning, pages 1238–1246, 2013.
  • Kaufmann et al. [2016] Emilie Kaufmann, Olivier Cappe, and Aurelien Garivier. On the complexity of best-arm identification in multi-armed bandit models. Journal of Machine Learning Research, 17(1):1–42, 2016.
  • Krause et al. [2008] Andreas Krause, Ajit Paul Singh, and Carlos Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. Journal of Machine Learning Research, 9:235–284, 2008.
  • Lam and Herlocker [2016] Shyong Lam and Jon Herlocker. MovieLens Dataset. http://grouplens.org/datasets/movielens/, 2016.
  • Lattimore and Szepesvari [2019] Tor Lattimore and Csaba Szepesvari. Bandit Algorithms. Cambridge University Press, 2019.
  • Laurent and Massart [2000] B. Laurent and P. Massart. Adaptive estimation of a quadratic functional by model selection. The Annals of Statistics, 28(5):1302–1338, 2000.
  • Li et al. [2010] Lihong Li, Wei Chu, John Langford, and Robert Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, 2010.
  • Li et al. [2018] Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. Journal of Machine Learning Research, 18(185):1–52, 2018.
  • Lu et al. [2021] Pinyan Lu, Chao Tao, and Xiaojin Zhang. Variance-dependent best arm identification. In Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence, 2021.
  • Maurer and Pontil [2009] Andreas Maurer and Massimiliano Pontil. Empirical Bernstein bounds and sample-variance penalization. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
  • Pukelsheim [1993] Friedrich Pukelsheim. Optimal Design of Experiments. John Wiley & Sons, 1993.
  • Soare et al. [2014] Marta Soare, Alessandro Lazaric, and Remi Munos. Best-arm identification in linear bandits. In Advances in Neural Information Processing Systems 27, pages 828–836, 2014.
  • Wen et al. [2015] Zheng Wen, Branislav Kveton, and Azin Ashkan. Efficient learning in large-scale combinatorial semi-bandits. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
  • Zhou and Tian [2022] Ruida Zhou and Chao Tian. Approximate top-m arm identification with heterogeneous reward variances. In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics, 2022.
  • Zong et al. [2016] Shi Zong, Hao Ni, Kenny Sung, Nan Rosemary Ke, Zheng Wen, and Branislav Kveton. Cascading bandits for large-scale recommendation problems. In Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence, 2016.

Appendix A Proof of Theorem 3

First, we decompose the probability of choosing a suboptimal arm. For any s ∈ [m], let E_s = {1 ∈ π’œ_{s+1}} be the event that the best arm is not eliminated in stage s and EΜ„_s be its complement. Then by the law of total probability,

\mathbb{P}(\hat{I} \neq 1) = \mathbb{P}(\bar{E}_m) = \sum_{s=1}^{m} \mathbb{P}(\bar{E}_s, E_{s-1}, \dots, E_1) \leq \sum_{s=1}^{m} \mathbb{P}(\bar{E}_s \mid E_{s-1}, \dots, E_1) \,.

We bound β„™(EΜ„_s | E_{s-1}, …, E_1) based on the observation that the best arm can be eliminated only if the estimated mean rewards of at least half of the arms in π’œ_s are at least as high as that of the best arm. Specifically, let π’œ_s' = π’œ_s βˆ– {1} be the set of all arms in stage s but the best arm and

N_s' = \sum_{i \in \mathcal{A}_s'} \mathds{1}\{\hat{\mu}_{s,i} \geq \hat{\mu}_{s,1}\} \,.

Then by Markov's inequality,

\mathbb{P}(\bar{E}_s \mid E_{s-1}, \dots, E_1) \leq \mathbb{P}\left(N_s' \geq \frac{|\mathcal{A}_s|}{2} \,\middle|\, E_{s-1}, \dots, E_1\right) \leq \frac{2\,\mathbb{E}[N_s' \mid E_{s-1}, \dots, E_1]}{|\mathcal{A}_s|} \,.

The key step in bounding the above expectation is understanding the probability that any arm has a higher estimated mean reward than the best one. We bound this probability next.

Lemma 6.

For any stage s ∈ [m] with the best arm, 1 ∈ π’œ_s, and any suboptimal arm i ∈ π’œ_s, we have

\mathbb{P}(\hat{\mu}_{s,i} \geq \hat{\mu}_{s,1}) \leq \exp\left[-\frac{n_s \Delta_i^2}{4 \sum_{j \in \mathcal{A}_s} \sigma_j^2}\right] \,.

Proof.

The proof is based on concentration inequalities for sub-Gaussian random variables [Boucheron et al., 2013]. In particular, since ΞΌΜ‚_{s,i} βˆ’ ΞΌ_i and ΞΌΜ‚_{s,1} βˆ’ ΞΌ_1 are sub-Gaussian with variance proxies Οƒ_i^2/N_{s,i} and Οƒ_1^2/N_{s,1}, respectively, their difference is sub-Gaussian with variance proxy Οƒ_i^2/N_{s,i} + Οƒ_1^2/N_{s,1}. It follows that

\mathbb{P}(\hat{\mu}_{s,i} \geq \hat{\mu}_{s,1}) = \mathbb{P}(\hat{\mu}_{s,i} - \hat{\mu}_{s,1} \geq 0) = \mathbb{P}\left((\hat{\mu}_{s,i} - \mu_i) - (\hat{\mu}_{s,1} - \mu_1) \geq \Delta_i\right)
\leq \exp\left[-\frac{\Delta_i^2}{2\left(\frac{\sigma_i^2}{N_{s,i}} + \frac{\sigma_1^2}{N_{s,1}}\right)}\right] = \exp\left[-\frac{n_s \Delta_i^2}{4 \sum_{j \in \mathcal{A}_s} \sigma_j^2}\right] \,,

where the last step follows from the definitions of N_{s,i} and N_{s,1} in Lemma 1. ∎

The last major step is bounding E[N_s' | E_{s-1}, …, E_1] with the help of Lemma 6. Since N_s' is a sum of indicators, we get

\mathbb{E}[N_s' \mid E_{s-1}, \dots, E_1] \leq \sum_{i \in \mathcal{A}_s'} \mathbb{P}(\hat{\mu}_{s,i} \geq \hat{\mu}_{s,1}) \leq \sum_{i \in \mathcal{A}_s'} \exp\left[-\frac{n_s \Delta_i^2}{4 \sum_{j \in \mathcal{A}_s} \sigma_j^2}\right]
\leq |\mathcal{A}_s| \max_{i \in \mathcal{A}_s'} \exp\left[-\frac{n_s \Delta_i^2}{4 \sum_{j \in \mathcal{A}_s} \sigma_j^2}\right] = |\mathcal{A}_s| \exp\left[-\frac{n_s \min_{i \in \mathcal{A}_s'} \Delta_i^2}{4 \sum_{j \in \mathcal{A}_s} \sigma_j^2}\right] \,.

Now we chain all inequalities and get

\mathbb{P}(\hat{I} \neq 1) \leq 2 \sum_{s=1}^{m} \exp\left[-\frac{n_s \min_{i \in \mathcal{A}_s'} \Delta_i^2}{4 \sum_{j \in \mathcal{A}_s} \sigma_j^2}\right] \,.

To get the final claim, we use that

m = \log_2 K \,, \quad n_s = \frac{n}{\log_2 K} \,, \quad \min_{i \in \mathcal{A}_s'} \Delta_i^2 \geq \Delta_{\min}^2 \,, \quad \sum_{j \in \mathcal{A}_s} \sigma_j^2 \leq \sum_{j \in \mathcal{A}} \sigma_j^2 \,.

This concludes the proof.

Appendix B Proof of Theorem 4

This proof has the same steps as that in Appendix A. The only difference is that N_{s,i} and N_{s,1} in Lemma 6 are replaced with their lower bounds, based on the following lemma.

Lemma 7.

Fix stage s and arm i ∈ π’œ_s in SHVar. Then

N_{s,i} \geq \frac{\sigma_i^2}{\sigma_{\max}^2}\left(\frac{n_s}{|\mathcal{A}_s|} - 1\right) \,,

where Οƒ_max = max_{iβˆˆπ’œ} Οƒ_i is the maximum reward noise and n_s is the budget in stage s.

Proof.

Let J be the most pulled arm in stage s and β„“ ∈ [n_s] be the round where arm J is pulled for the last time. By the design of SHVar, since arm J is pulled in round β„“,

\frac{\sigma_J^2}{N_{s,\ell,J}} \geq \frac{\sigma_i^2}{N_{s,\ell,i}}

holds for any arm i ∈ π’œ_s. This can be further rearranged as

N_{s,\ell,i} \geq \frac{\sigma_i^2}{\sigma_J^2} N_{s,\ell,J} \,.

Since arm J is the most pulled arm in stage s and β„“ is the round of its last pull,

N_{s,\ell,J} = N_{s,J} - 1 \geq \frac{n_s}{|\mathcal{A}_s|} - 1 \,.

Moreover, N_{s,i} β‰₯ N_{s,β„“,i}. Now we combine all inequalities and get

N_{s,i} \geq \frac{\sigma_i^2}{\sigma_J^2}\left(\frac{n_s}{|\mathcal{A}_s|} - 1\right) \,. \qquad (8)

To eliminate the dependence on the random arm J, we use Οƒ_J ≀ Οƒ_max. This concludes the proof. ∎

When plugged into Lemma 6, we get

\mathbb{P}(\hat{\mu}_{s,i} \geq \hat{\mu}_{s,1}) \leq \exp\left[-\frac{\Delta_i^2}{2\left(\frac{\sigma_i^2}{N_{s,i}} + \frac{\sigma_1^2}{N_{s,1}}\right)}\right] \leq \exp\left[-\frac{\left(\frac{n_s}{|\mathcal{A}_s|} - 1\right)\Delta_i^2}{4\sigma_{\max}^2}\right] \,.

This completes the proof.

Appendix C Proof of Theorem 5

This proof has the same steps as that in Appendix A. The main difference is that N_{s,i} and N_{s,1} in Lemma 6 are replaced with their lower bounds, based on the following lemma.

Lemma 8.

Fix stage s and arm i ∈ π’œ_s in SHAdaVar. Then

N_{s,i} \geq \frac{\sigma_i^2}{\sigma_{\max}^2}\,\alpha(|\mathcal{A}_s|, n_s, \delta)\left(\frac{n_s}{|\mathcal{A}_s|} - 1\right) \,,

where Οƒ_max = max_{iβˆˆπ’œ} Οƒ_i is the maximum reward noise, n_s is the budget in stage s, and

\alpha(k, n, \delta) = \frac{1 - 2\sqrt{\frac{\log(1/\delta)}{n/k - 2}}}{1 + 2\sqrt{\frac{\log(1/\delta)}{n/k - 2}} + \frac{2\log(1/\delta)}{n/k - 2}}

is an arm-independent constant.

Proof.

Let J be the most pulled arm in stage s and β„“ ∈ [n_s] be the round where arm J is pulled for the last time. By the design of SHAdaVar, since arm J is pulled in round β„“,

\frac{U_{s,\ell,J}}{N_{s,\ell,J}} \geq \frac{U_{s,\ell,i}}{N_{s,\ell,i}}

holds for any arm i ∈ π’œ_s. Analogously to (8), this inequality can be rearranged and loosened as

N_{s,i} \geq \frac{U_{s,\ell,i}}{U_{s,\ell,J}}\left(\frac{n_s}{|\mathcal{A}_s|} - 1\right) \,. \qquad (9)

We bound U_{s,β„“,i} from below using the fact that U_{s,β„“,i} β‰₯ Οƒ_i^2 holds with probability at least 1 βˆ’ Ξ΄, based on the first claim in Lemma 2. To bound U_{s,β„“,J}, we apply the second claim in Lemma 2 to bound ΟƒΜ‚_{s,β„“,J}^2 in U_{s,β„“,J}, and get that

U_{s,\ell,J} \leq \sigma_J^2\,\frac{1 + 2\sqrt{\frac{\log(1/\delta)}{N_{s,\ell,J}-1}} + \frac{2\log(1/\delta)}{N_{s,\ell,J}-1}}{1 - 2\sqrt{\frac{\log(1/\delta)}{N_{s,\ell,J}-1}}}

holds with probability at least 1 βˆ’ Ξ΄. Finally, we plug both bounds into (9) and get

N_{s,i} \geq \frac{\sigma_i^2}{\sigma_J^2}\,\frac{1 - 2\sqrt{\frac{\log(1/\delta)}{N_{s,\ell,J}-1}}}{1 + 2\sqrt{\frac{\log(1/\delta)}{N_{s,\ell,J}-1}} + \frac{2\log(1/\delta)}{N_{s,\ell,J}-1}}\left(\frac{n_s}{|\mathcal{A}_s|} - 1\right) \,.

To eliminate the dependence on the random arm J, we use that Οƒ_J ≀ Οƒ_max and N_{s,β„“,J} β‰₯ n_s/|π’œ_s| βˆ’ 1. This yields our claim and concludes the proof of Lemma 8. ∎

Similarly to Lemma 7, this bound is asymptotically tight when all reward variances are identical. Also Ξ±(|π’œ_s|, n_s, Ξ΄) β†’ 1 as n_s β†’ ∞. Therefore, the bound has the same shape as that in Lemma 7.

The application of Lemma 8 requires more care. Specifically, it relies on the high-probability confidence intervals derived in Lemma 2, which need N_{s,t,i} > 4 log(1/Ξ΄) + 1. This is guaranteed whenever n β‰₯ K log_2 K (4 log(1/Ξ΄) + 1). Moreover, since the confidence intervals need to hold in any stage s and round t, and for any arm i, we need a union bound over Kn events. This leads to the following claim.

Suppose that n β‰₯ K log_2 K (4 log(1/Ξ΄) + 1). Then, when Lemma 8 is plugged into Lemma 6, we get that

\mathbb{P}(\hat{\mu}_{s,i} \geq \hat{\mu}_{s,1}) \leq \exp\left[-\frac{\Delta_i^2}{2\left(\frac{\sigma_i^2}{N_{s,i}} + \frac{\sigma_1^2}{N_{s,1}}\right)}\right] \leq \exp\left[-\frac{\alpha(|\mathcal{A}_s|, n_s, Kn\delta)\left(\frac{n_s}{|\mathcal{A}_s|} - 1\right)\Delta_i^2}{4\sigma_{\max}^2}\right] \,.

This completes the proof.