
Understanding the Role of Feedback in Online Learning with Switching Costs

Duo Cheng    Xingyu Zhou    Bo Ji
Abstract

In this paper, we study the role of feedback in online learning with switching costs. It has been shown that the minimax regret is $\widetilde{\Theta}(T^{2/3})$ under bandit feedback and improves to $\widetilde{\Theta}(\sqrt{T})$ under full-information feedback, where $T$ is the length of the time horizon. However, it remains largely unknown how the amount and type of feedback generally impact regret. To this end, we first consider the setting of bandit learning with extra observations; that is, in addition to the typical bandit feedback, the learner can freely make a total of $B_{\mathrm{ex}}$ extra observations. We fully characterize the minimax regret in this setting, which exhibits an interesting phase-transition phenomenon: when $B_{\mathrm{ex}} = O(T^{2/3})$, the regret remains $\widetilde{\Theta}(T^{2/3})$, but when $B_{\mathrm{ex}} = \Omega(T^{2/3})$, it becomes $\widetilde{\Theta}(T/\sqrt{B_{\mathrm{ex}}})$, which improves as the budget $B_{\mathrm{ex}}$ increases. To design algorithms that can achieve the minimax regret, it is instructive to consider a more general setting where the learner has a budget of $B$ total observations. We fully characterize the minimax regret in this setting as well and show that it is $\widetilde{\Theta}(T/\sqrt{B})$, which scales smoothly with the total budget $B$. Furthermore, we propose a generic algorithmic framework, which enables us to design different learning algorithms that can achieve matching upper bounds for both settings based on the amount and type of feedback. One interesting finding is that while bandit feedback can still guarantee optimal regret when the budget is relatively limited, it no longer suffices to achieve optimal regret when the budget is relatively large.


1 Introduction

Online learning over a finite set of actions is a classical problem in machine learning research. It can be formulated as a $T$-round repeated game between a learner and an adversary: at each round, the learner chooses one of the $K$ actions and suffers the loss of this chosen action, where the loss is determined by the adversary. At the end of each round, the learner receives some feedback and uses it to update her policy at the next round. The goal of the learner is to minimize the regret, defined as the difference between her cumulative loss and that of the best fixed action in hindsight.

In terms of the type of feedback, two important settings have been extensively studied in the literature: bandit and full information. At each round, if the learner observes only the loss of the chosen action, then it is called bandit feedback, and the game is called adversarial multi-armed bandits (MAB) or non-stochastic bandits with adversarial losses (Auer et al., 2002b). On the other hand, if the losses of all $K$ actions are revealed to the learner, then it is called full-information feedback, and the game becomes prediction with expert advice (Cesa-Bianchi & Lugosi, 2006).

The regret in these two settings has been well understood. Specifically, the minimax regret is $\Theta(\sqrt{TK})$ under bandit feedback (Auer et al., 2002b; Audibert & Bubeck, 2009) and is $\Theta(\sqrt{T\ln K})$ under full information (Cesa-Bianchi & Lugosi, 2006, Theorems 2.2 and 3.7; Hazan, 2016; Orabona, 2019, Section 6.8). (We use standard big-$O$ notation, e.g., $O$, $\Omega$, and $\Theta$; the versions with a tilde, e.g., $\widetilde{O}$, $\widetilde{\Omega}$, and $\widetilde{\Theta}$, hide poly-logarithmic factors.) These results imply that learning under bandit feedback is slightly harder than under full information, in the sense that the dependency on $K$ is worse ($\Theta(\sqrt{K})$ vs. $\Theta(\sqrt{\ln K})$). However, the scaling with respect to $T$ remains the same (i.e., $\Theta(\sqrt{T})$).

In the above standard settings, the learner is allowed to arbitrarily switch actions at two consecutive rounds. However, in many real-world decision-making problems, switching actions may incur a cost (e.g., due to system reconfiguration and resource reallocation) (Zhang et al., 2005; Kaplan, 2011). Motivated by this practical consideration, a new setting called online learning with switching costs has also been extensively studied (Arora et al., 2012; Cesa-Bianchi et al., 2013). In this setting, the learner needs to pay an additional unit loss whenever she switches actions.

Interestingly, it has been shown that in this new setting, learning under bandit feedback is significantly harder than under full information. Under full-information feedback, even with switching costs, the minimax regret remains $\Theta(\sqrt{T\ln K})$, which can be achieved by several algorithms such as Shrinking Dartboard (SD) (Geulen et al., 2010) and Follow-the-Perturbed-Leader (FTPL) (Devroye et al., 2013). On the other hand, Dekel et al. (2013) shows a (worse) lower bound of $\widetilde{\Omega}(K^{1/3}T^{2/3})$ for the bandit setting, which can be matched (up to poly-logarithmic factors) by the batched EXP3 algorithm (Arora et al., 2012). These results reveal that introducing switching costs makes bandit problems strictly harder than expert problems due to the worse dependency on $T$ (i.e., $\widetilde{\Theta}(T^{2/3})$ vs. $\widetilde{\Theta}(\sqrt{T})$).

Our Contributions. While these two special cases have been well studied, it remains largely unknown how feedback impacts regret in general. To close this important gap, we aim to fundamentally understand the role of feedback (in terms of both amount and type) in online learning with switching costs. Our main contributions are as follows.

(i) We first consider the setting of bandit learning with extra observations, where in addition to the typical bandit feedback, the learner can freely make a total of $B_{\mathrm{ex}}$ extra observations in an arbitrary form (Section 3). We present a tight characterization of the minimax regret, which exhibits an interesting phase-transition phenomenon (see Fig. 1(a)). Specifically, when $B_{\mathrm{ex}} = O(T^{2/3})$, the regret remains $\widetilde{\Theta}(T^{2/3})$, but when $B_{\mathrm{ex}} = \Omega(T^{2/3})$, it becomes $\widetilde{\Theta}(T/\sqrt{B_{\mathrm{ex}}})$, which improves as the budget $B_{\mathrm{ex}}$ increases.

(ii) To understand this phenomenon and design algorithms that can achieve the minimax regret, it is instructive to consider a more general setting where the learner has a budget of $B$ total observations (Section 4). We fully characterize the minimax regret in this setting as well and show that it is $\widetilde{\Theta}(T/\sqrt{B})$, which scales smoothly with the total budget $B$ (see Fig. 1(b)). Furthermore, we propose a generic algorithmic framework, which enables us to design different learning algorithms that can achieve matching upper bounds for both settings based on the amount and type of feedback.

(iii) Our findings highlight the crucial impact of feedback type (bandit vs. others) in the second setting (see Table 1). In particular, while both bandit and other types of feedback can achieve optimal regret when the budget is relatively limited, pure bandit feedback is no longer sufficient to guarantee optimal regret when the budget is relatively large. However, in the standard setting without switching costs, all three types of feedback we consider can achieve optimal regret in the full range of $B$. This reveals that the impact of feedback type is (partly) due to switching costs.

Figure 1: An illustration of the minimax regret vs. observation budget in log-log plots: (a) the learner receives bandit feedback plus no more than $B_{\mathrm{ex}}$ extra observations (Theorem 1); (b) the learner can make no more than $B$ total observations (Theorem 2).
Table 1: The minimax regret under different types of feedback in the setting of online learning under a total observation budget $B$: with (w/) vs. without (w/o) switching costs (SC). A formal description of "Flexible" feedback can be found in Section 4.2.

Feedback Type                              Minimax Regret w/ SC                    Minimax Regret w/o SC
Full-information                           $\widetilde{\Theta}(T\sqrt{K/B})$       $\widetilde{\Theta}(T\sqrt{K/B})$
Flexible                                   $\widetilde{\Theta}(T\sqrt{K/B})$       $\widetilde{\Theta}(T\sqrt{K/B})$
Bandit ($B=O(K^{1/3}T^{2/3})$)             $\widetilde{\Theta}(T\sqrt{K/B})$       $\widetilde{\Theta}(T\sqrt{K/B})$
Bandit ($B=\Omega(K^{1/3}T^{2/3})$)        $\widetilde{\Theta}(K^{1/3}T^{2/3})$    $\widetilde{\Theta}(T\sqrt{K/B})$

2 Problem Setup

In this section, we introduce basic notations and present the problem setup. For any positive integer $n$, let $[n] := \{1,\dots,n\}$, and let $\ell_{1:n}$ be the loss sequence $\ell_{1},\dots,\ell_{n}$. We use $\mathbb{I}_{\{\mathcal{E}\}}$ to denote the indicator function of event $\mathcal{E}$: $\mathbb{I}_{\{\mathcal{E}\}}=1$ if event $\mathcal{E}$ happens, and $\mathbb{I}_{\{\mathcal{E}\}}=0$ otherwise.

The learning problem can be viewed as a repeated game between a learner and an adversary. Assume that there are $K>1$ actions the learner can choose from. Let $T\geq K$ be the length of the time horizon, which is fixed at the beginning of the game and is known to the learner. At each round $t\in[T]$, the adversary assigns a loss in $[0,1]$ to each action in $[K]$, and the learner samples an action $X_{t}$ from a probability distribution $w_{t}$ (also determined by the learner) over the action set $[K]$. After taking action $X_{t}$, the learner suffers the loss of the chosen action, i.e., $\ell_{t}[X_{t}]$. By the end of each round, the learner observes the losses of some actions (the specific types of such feedback will be discussed later) and updates the probability distribution $w_{t+1}$ that will be used at the next round. Each time the learner takes an action different from that at the previous round, one unit of switching cost is incurred. The regret under a learning algorithm $\pi$ over a loss sequence $\ell_{1:T}$, denoted by $R^{\pi}_{T}(\ell_{1:T})$, is defined as the difference between the cumulative loss (including the switching costs incurred) under algorithm $\pi$ and that of the optimal (best fixed) action in hindsight:

$$R^{\pi}_{T}(\ell_{1:T}) := \sum_{t=1}^{T}\left(\ell_{t}[X_{t}] + \mathbb{I}_{\{X_{t}\neq X_{t-1}\}}\right) - \min_{k\in[K]}\sum_{t=1}^{T}\ell_{t}[k]. \qquad (1)$$
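As a concrete illustration of Eq. (1), the following minimal Python sketch (our own, not part of the paper; the array-based representation of the loss sequence is an assumption) computes the realized regret of a given action sequence, including switching costs:

import numpy as np

def regret_with_switching_costs(losses, actions):
    """Realized regret of Eq. (1).

    losses:  array of shape (T, K) with entries in [0, 1] (the loss sequence ell_{1:T})
    actions: array of shape (T,) with entries in {0, ..., K-1} (the actions X_1, ..., X_T)
    """
    T = len(actions)
    played_loss = losses[np.arange(T), actions].sum()
    switches = np.count_nonzero(actions[1:] != actions[:-1])   # the first action is free
    best_fixed_loss = losses.sum(axis=0).min()
    return played_loss + switches - best_fixed_loss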

For a randomized algorithm, we consider the expected regret (or simply regret), denoted by $\mathbb{E}\left[R^{\pi}_{T}(\ell_{1:T})\right]$, where the expectation is taken over the randomness of the algorithm. Without loss of generality, let $\mathbb{I}_{\{X_{1}\neq X_{0}\}}=0$, i.e., the first action does not incur any switching cost. The adversary is assumed to be oblivious, in the sense that the whole loss sequence is determined by the adversary before the game begins. In this paper, for any given algorithm $\pi$, we are interested in the worst-case (expected) regret over all possible loss sequences (i.e., instance-independent), denoted by $R_{T}^{\pi}$:

$$R_{T}^{\pi} := \sup_{\ell_{1:T}\in[0,1]^{KT}}\mathbb{E}\left[R^{\pi}_{T}(\ell_{1:T})\right]. \qquad (2)$$

Let $\Pi$ be the set of all feasible learning algorithms following the specified learning protocol. We define the minimax (or optimal) regret, denoted by $R_{T}^{*}(\Pi)$, as the minimum worst-case regret under all feasible learning algorithms in $\Pi$:

$$R_{T}^{*}(\Pi) := \inf_{\pi\in\Pi} R_{T}^{\pi}. \qquad (3)$$

For notational ease, we may drop $\Pi$ in $R_{T}^{*}(\Pi)$ and simply use $R_{T}^{*}$ whenever there is no ambiguity.

To understand the role of feedback in online learning with switching costs, we will consider two different settings with an observation budget: (i) in addition to the typical bandit feedback, the learner can freely make a total of $B_{\mathrm{ex}}$ extra observations (Section 3); (ii) the learner can freely make $B$ total observations (Section 4). Due to space limitations, motivating examples for these settings with an observation budget are provided in Appendix A.

3 Bandit Learning with Switching Costs under Extra Observation Budget

Observing the gap in the optimal regret bound under bandit and full-information feedback ($\widetilde{\Theta}(T^{2/3})$ vs. $\widetilde{\Theta}(\sqrt{T})$), it is natural to ask: How much can one improve upon the $\widetilde{\Theta}(T^{2/3})$ regret if the learner is allowed to make some extra observations in addition to the typical bandit feedback?

Motivated by this question, we consider the setting of bandit learning with switching costs under an extra observation budget. We consider the learning protocol specified in Section 2, and in addition to the typical bandit feedback, the learner is allowed to freely use at most $B_{\mathrm{ex}}$ extra observations of the loss of other action(s) throughout the game, where $B_{\mathrm{ex}}$ is an integer in $[0,(K-1)T]$. At the two endpoints of $0$ and $(K-1)T$, this new setting recovers the bandit and full-information cases, respectively. In this section, by slightly abusing the notation, we also use $\Pi$ to denote the set of all learning algorithms using typical bandit feedback plus $B_{\mathrm{ex}}$ extra observations, and we are interested in the minimax regret $R^{*}_{T}$ for $B_{\mathrm{ex}}\in[0,(K-1)T]$.

3.1 Minimax Regret

We first present our main result on the minimax regret $R_{T}^{*}$ in this setting, which is formally stated in Theorem 1.

Theorem 1.

In the setting of bandit learning with switching costs under an extra observation budget $B_{\mathrm{ex}}\in[0,(K-1)T]$, the minimax regret is given by

$$R_{T}^{*}=\begin{cases}\widetilde{\Theta}(K^{1/3}T^{2/3}), & B_{\mathrm{ex}}=O(K^{1/3}T^{2/3}),\\ \widetilde{\Theta}\big(T\sqrt{K/B_{\mathrm{ex}}}\big), & B_{\mathrm{ex}}=\Omega(K^{1/3}T^{2/3}).\end{cases}$$
Remark 1.

Interestingly, this minimax regret exhibits a phase-transition phenomenon (see also Fig. 1(a)): when the number of extra observations is relatively small (i.e., $B_{\mathrm{ex}}=O(K^{1/3}T^{2/3})$), they are insufficient for improving the regret, which remains $\widetilde{\Theta}(K^{1/3}T^{2/3})$; however, when the budget is large enough (i.e., $B_{\mathrm{ex}}=\Omega(K^{1/3}T^{2/3})$), the regret decreases smoothly as the budget $B_{\mathrm{ex}}$ increases.

3.2 Lower Bound

To establish Theorem 1, we will first show a fundamental lower bound, which is formally stated in Proposition 1.

Proposition 1.

For any learning algorithm $\pi$ that can use a total of $B_{\mathrm{ex}}$ extra observations in addition to the typical bandit feedback, there exists a loss sequence $\ell_{1:T}$ (which may depend on both $\pi$ and $B_{\mathrm{ex}}$) such that

$$\mathbb{E}\left[R_{T}^{\pi}(\ell_{1:T})\right]=\begin{cases}\widetilde{\Omega}(K^{1/3}T^{2/3}), & B_{\mathrm{ex}}=O(K^{1/3}T^{2/3}),\\ \widetilde{\Omega}\big(T\sqrt{K/B_{\mathrm{ex}}}\big), & B_{\mathrm{ex}}=\Omega(K^{1/3}T^{2/3}).\end{cases}$$

We provide a detailed proof of the above lower bound in Appendix B. Here, we present a proof sketch that focuses on the key steps of the lower-bound analysis, along with the necessary explanations. The proof sketch reveals useful insights that not only help explain the interesting phase-transition phenomenon but also shed light on the design of algorithms that can achieve this lower bound.

Proof Sketch of Proposition 1.

We first give an overview of the construction of hard loss sequences in our setting and the main ideas behind the construction.

Generally speaking, the difficulty of bandit problems lies in the exploitation-exploration tradeoff. On the one hand, the learner wants to pull empirically good actions in order to enjoy a low instantaneous loss (i.e., exploitation); on the other hand, she may also want to pull other actions and gain useful information to distinguish the optimal (best fixed) action from suboptimal actions (i.e., exploration).

In the presence of switching costs, Dekel et al. (2013) proposes hard instances (i.e., loss sequences) based on a multi-scale random walk such that useful information toward distinguishability (between the optimal action and suboptimal actions) can only be obtained when the learner switches actions, which, however, incurs switching costs. Using carefully constructed instances, they show that switching costs increase the intrinsic difficulty of bandit learning and result in a regret lower bound of $\widetilde{\Omega}(K^{1/3}T^{2/3})$.

However, the hard instances in Dekel et al. (2013) work for pure bandit feedback only. That is, if the learner can obtain full-information feedback at any of the $T$ rounds, she would immediately identify the optimal action and suffer no regret in the rest of the game. The reason is that the optimal action has the (unique) lowest loss at all $T$ rounds.

To make it still hard to learn even when the learner has some extra feedback, we will borrow an idea from Shi et al. (2022) to modify the original hard instance in Dekel et al. (2013): at each round, an additional layer of action-dependent noise is added to the loss of each action. As a result, the optimal action no longer has the lowest loss at all rounds and therefore cannot be trivially identified even when the learner can make extra observations.

In the rest of the proof sketch, we present three key steps of the proof and provide high-level explanations.

Step 1: Establishing the relationship between two regrets. As in Dekel et al. (2013), each loss value in the initial loss sequence we construct, denoted by $\ell_{1:T}^{\mathrm{init}}$, may not be bounded in $[0,1]$; through truncation, we construct the actual loss sequence $\ell_{1:T}$ by simply projecting each initial loss value onto $[0,1]$. For notational ease, we use $R_{T}^{\mathrm{init}}$ and $R_{T}$ to denote the regret over loss sequences $\ell_{1:T}^{\mathrm{init}}$ and $\ell_{1:T}$, respectively. Recall that the goal is to obtain a lower bound on $\mathbb{E}\left[R_{T}\right]$, which, however, is hard to analyze directly due to the truncation. Instead, we show that it suffices to obtain a lower bound on $\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]$ (i.e., the regret under the untruncated loss sequence), due to the following relationship:

$$\mathbb{E}\left[R_{T}\right]\geq\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]-\frac{\epsilon T}{6}, \qquad (4)$$

where $\epsilon>0$ is the gap between the instantaneous losses of the optimal action and a suboptimal action. The value of $\epsilon$ will be determined later.

Step 2: Obtaining a lower bound on $\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]$. Let $S$ be the expected total number of action switches. Through careful information-theoretic analysis, we obtain the following (informal) lower bound on $\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]$ in terms of the number of switches $S$ and the extra observation budget $B_{\mathrm{ex}}$:

$$\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]\geq\underbrace{\frac{\epsilon T}{2}}_{\mathbf{A.1}}-\underbrace{C\frac{\epsilon^{2}T}{\sqrt{K}}\left(\sqrt{S}+\sqrt{B_{\mathrm{ex}}}\right)}_{\mathbf{A.2}}+\underbrace{S}_{\mathbf{A.3}}, \qquad (5)$$

where $C$ is a positive term that contains some constants and poly-logarithmic factors of $T$.

We now explain each term in Eq. (5). Term $\mathbf{A.1}$ reflects that without any useful information toward distinguishability, the learner may be stuck with a suboptimal action throughout the game, thus suffering $\Theta(\epsilon T)$ regret. Term $\mathbf{A.2}$ roughly represents the amount of useful information for gaining distinguishability and thus reducing the regret: better distinguishability leads to a larger $\mathbf{A.2}$ and thus a lower regret. Term $\mathbf{A.3}$ is simply the switching costs incurred.

Step 3: Choosing a proper value of $\epsilon$. Note that the lower bound in Eq. (5) is a quadratic function of $\sqrt{S}$. By finding the minimizer of this quadratic function, denoted by $S^{*}$, we can further obtain the following lower bound:

$$\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]\geq\underbrace{\frac{\epsilon T}{2}}_{\mathbf{B.1}}-\underbrace{\frac{C^{2}}{4}\cdot\frac{\epsilon^{4}T^{2}}{K}}_{\mathbf{B.2}}-\underbrace{C\frac{\epsilon^{2}T\sqrt{B_{\mathrm{ex}}}}{\sqrt{K}}}_{\mathbf{B.3}}. \qquad (6)$$

It now remains to choose a proper value of $\epsilon$ based on $B_{\mathrm{ex}}$. By considering two different cases ($B_{\mathrm{ex}}=\Omega(K^{1/3}T^{2/3})$ and $B_{\mathrm{ex}}=O(K^{1/3}T^{2/3})$) and choosing $\epsilon$ accordingly, we show that one of $\mathbf{B.2}$ and $\mathbf{B.3}$ dominates the other. Then, we can obtain the desired lower bound by combining these two cases. This completes the proof sketch. ∎

Remark 2.

While we use the same instance construction method as in Shi et al. (2022), the problem they study is very different from ours. In particular, their learning protocol and their definition of switching costs are different, and they do not consider an observation budget as we do. We present a detailed discussion of the key differences in Section 5.

3.3 Insights from Lower Bound Analysis

Next, we give some useful observations and important insights that can be obtained from the above proof sketch, in particular, from Eq. (5), which provides a unified view of the lower bound in online learning with bandit feedback and flexible extra observations within a budget.

As a warm-up, we begin with the standard bandit case (i.e., $B_{\mathrm{ex}}=0$), which has been extensively studied (Dekel et al., 2013). Recall that under the current instance construction, bandit feedback provides useful information only when the learner switches actions. From Eq. (5), one can observe that there is a tradeoff between exploration and switching costs: on the one hand, in order to better explore and enjoy a lower regret, the learner has to switch frequently (i.e., a larger $S$) so as to gain more information (i.e., a larger $\mathbf{A.2}$); on the other hand, however, since the learner has to pay one unit of switching cost for each switch (contributing to $\mathbf{A.3}$), she should not switch too often. To strike the balance between the two, the best the learner can do is to switch $S^{*}:=\Theta(K^{1/3}T^{2/3})$ times; otherwise, the regret can only be worse because $S^{*}$ is the minimizer of the lower bound in Eq. (5). Finally, choosing $\epsilon$ to be $\widetilde{\Theta}(K^{1/3}T^{-1/3})$ in Eq. (6) yields the $\widetilde{\Omega}(K^{1/3}T^{2/3})$ bound for the bandit case.
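For concreteness, the balancing step described above can be written out as follows (a sketch consistent with Eqs. (5) and (6) for $B_{\mathrm{ex}}=0$; constants and the poly-logarithmic factors hidden in $C$ are not tracked carefully). Minimizing the right-hand side of Eq. (5) over $S$ gives

$$\sqrt{S^{*}}=\frac{C\epsilon^{2}T}{2\sqrt{K}} \quad\Longrightarrow\quad \mathbb{E}\left[R_{T}^{\mathrm{init}}\right]\geq\frac{\epsilon T}{2}-\frac{C^{2}}{4}\cdot\frac{\epsilon^{4}T^{2}}{K},$$

and choosing $\epsilon=\widetilde{\Theta}(K^{1/3}T^{-1/3})$ small enough that $\frac{C^{2}}{4}\cdot\frac{\epsilon^{4}T^{2}}{K}\leq\frac{\epsilon T}{4}$ yields $\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]\geq\frac{\epsilon T}{4}=\widetilde{\Omega}(K^{1/3}T^{2/3})$, with the corresponding minimizer $S^{*}=\frac{C^{2}\epsilon^{4}T^{2}}{4K}=\widetilde{\Theta}(K^{1/3}T^{2/3})$.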

Remark 3.

The above discussion indicates that with switching costs, the worst-case hard instance restrains the learner from obtaining distinguishability from more than $\Theta(K^{1/3}T^{2/3})$ rounds (i.e., rounds associated with action switches) rather than $T$ rounds as in the standard bandit learning setting (without switching costs). This is also the key reason why the minimax regret is worse in bandit learning with switching costs.

Next, we consider the first case: $B_{\mathrm{ex}}=O(K^{1/3}T^{2/3})$. In this case, one might hope to obtain a smaller regret (compared to the bandit case) with the help of additional feedback. However, we will show that, unfortunately, the gain from those additional observations is negligible for improving the regret order-wise, and hence, the previous $\widetilde{\Omega}(K^{1/3}T^{2/3})$ bound remains. To see this, let $\epsilon$ take the same value as in the bandit case (i.e., $\epsilon=\widetilde{\Theta}(K^{1/3}T^{-1/3})$) in Eq. (6); although $\mathbf{B.3}$ now becomes positive instead of zero (as in the bandit case), it is still dominated by $\mathbf{B.2}$, which results in the same $\widetilde{\Omega}(K^{1/3}T^{2/3})$ bound as in the bandit case.

We now turn to the second case: $B_{\mathrm{ex}}=\Omega(K^{1/3}T^{2/3})$. In contrast to the previous case, due to a relatively large budget, the distinguishability provided by those extra observations (which do not contribute to switching costs) is no longer negligible. This leads to a smaller regret. In particular, by choosing $\epsilon=\widetilde{\Theta}(\sqrt{K/B_{\mathrm{ex}}})$, we have $\mathbf{B.3}$ dominate $\mathbf{B.2}$ and obtain the desired lower bound. In other words, one can reduce the regret through free exploration enabled by such extra observations without incurring switching costs.
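To see why this choice works, one can check the dominance directly (again a sketch, suppressing constants and the poly-logarithmic factors hidden in $C$). With $\epsilon=\widetilde{\Theta}(\sqrt{K/B_{\mathrm{ex}}})$,

$$\frac{\mathbf{B.2}}{\mathbf{B.3}}=\frac{C\epsilon^{2}T}{4\sqrt{K B_{\mathrm{ex}}}}=\widetilde{O}\!\left(\frac{\sqrt{K}\,T}{B_{\mathrm{ex}}^{3/2}}\right)=\widetilde{O}(1)\ \text{ since } B_{\mathrm{ex}}=\Omega(K^{1/3}T^{2/3}),\qquad \mathbf{B.3}=C\frac{\epsilon^{2}T\sqrt{B_{\mathrm{ex}}}}{\sqrt{K}}=\widetilde{\Theta}\!\left(T\sqrt{\frac{K}{B_{\mathrm{ex}}}}\right),$$

so with the hidden constant in $\epsilon$ taken small enough, both $\mathbf{B.2}$ and $\mathbf{B.3}$ stay below a constant fraction of $\mathbf{B.1}=\epsilon T/2$, and Eq. (6) gives $\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]=\widetilde{\Omega}(T\sqrt{K/B_{\mathrm{ex}}})$.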

3.4 Fundamental Questions about Algorithm Design

The insights we gain from the lower-bound analysis above also shed light on algorithm design. In fact, they motivate us to ask several fundamental questions, not only about how to achieve the optimal regret but also about the role of feedback in online learning with switching costs, in terms of both the amount and the type of feedback.

On the one hand, it is straightforward to achieve a matching upper bound when $B_{\mathrm{ex}}=O(K^{1/3}T^{2/3})$. Specifically, one can simply ignore all the extra observations and use bandit feedback only, e.g., batched EXP3 (Arora et al., 2012), which enjoys a $\widetilde{\Theta}(K^{1/3}T^{2/3})$ regret. Although the bounds match, only $\Theta(T^{2/3})$ of the bandit feedback from the $T$ rounds contributes to distinguishability due to the tradeoff introduced by switching costs (see Remark 3). Given this observation, it is natural to ask: (Q1) Can one still achieve the same regret of $\widetilde{\Theta}(K^{1/3}T^{2/3})$ while using bandit feedback from $\Theta(K^{1/3}T^{2/3})$ rounds only? Moreover, how would the regret scale with the amount of available feedback if the (bandit) feedback is even more limited (e.g., $O(K^{1/3}T^{2/3})$)?

On the other hand, it remains largely unknown how to match the $\widetilde{\Omega}(T\sqrt{K/B_{\mathrm{ex}}})$ bound when $B_{\mathrm{ex}}=\Omega(K^{1/3}T^{2/3})$. Note that in the derivation of the lower bound, we optimistically view that all $B_{\mathrm{ex}}$ extra observations contribute useful information toward distinguishability (see term $\mathbf{A.2}$ in Eq. (5)). To achieve this, however, one needs to answer an important question: (Q2) How can one design a learning algorithm that properly uses these extra observations to indeed gain sufficient useful information toward distinguishability and match the lower bound? Moreover, since $B_{\mathrm{ex}}$ now dominates $S^{*}$ (order-wise), can one still match the lower bound of $\widetilde{\Omega}(T\sqrt{K/B_{\mathrm{ex}}})$ using $B_{\mathrm{ex}}$ extra observations only (i.e., without using any bandit feedback)?

To address these fundamental questions, it turns out that it would be more instructive to consider a general setting where the learner has a budget for total observations (see Section 4) rather than extra observations. We will show that the results obtained for this general setting will naturally answer the aforementioned questions. In particular, we show that there exist learning algorithms that can match the lower bound (up to poly-logarithmic factors), hence concluding the minimax regret stated in Theorem 1.

4 Online Learning with Switching Costs under Total Observation Budget

In this section, we consider a more general setting of online learning with switching costs under a total observation budget. Specifically, at each round, the learner can freely choose to observe the losses of up to $K$ actions (which may not necessarily include the action played), as long as the total number of observations over the $T$ rounds does not exceed the budget $B$. Without loss of generality, we assume $B\geq K$, i.e., $B$ is an integer in $[K,KT]$. We aim to understand the role of feedback in this general setting by studying the following fundamental question: (Q3) How does the minimax regret scale with the amount of available feedback in general? What is the impact of different types of feedback (bandit, full-information, etc.)?

To proceed, we need some additional notations for this section. Let $\mathcal{O}_{t}\subseteq[K]$ be the observation set, i.e., the set of actions whose losses the learner chooses to observe at round $t\in[T]$, and let $N_{\mathrm{ob}}$ be the total number of observations, i.e., $N_{\mathrm{ob}}:=\sum_{t=1}^{T}|\mathcal{O}_{t}|$. Naturally, we have $N_{\mathrm{ob}}\leq B\leq KT$. For example, bandit feedback is a special case with $\mathcal{O}_{t}=\{X_{t}\},\forall t\in[T]$ and $N_{\mathrm{ob}}=B=T$; full-information feedback is another special case with $\mathcal{O}_{t}=[K],\forall t\in[T]$ and $N_{\mathrm{ob}}=B=KT$. By slightly abusing the notation in this section, we also use $R^{*}_{T}$ to denote the minimax regret over the set of all learning algorithms that satisfy the learning protocol specified in Section 2 and do not exceed the total observation budget $B$.

4.1 Minimax Regret

We first present the main result of this section and fully characterize the minimax regret for this general setting.

Theorem 2.

In the setting of online learning with switching costs under a total observation budget $B\in[K,KT]$, the minimax regret is given by $R_{T}^{*}=\widetilde{\Theta}(T\sqrt{K/B})$.

Remark 4.

This result answers the first part of question (Q3): the minimax regret has a universal $\Theta(1/\sqrt{B})$ scaling across the full range of the total budget $B$ (see Fig. 1(b)), in contrast to the phase transition in Section 3 (see Fig. 1(a)).

To establish this result, we need to obtain both a lower bound and a matching upper bound. For the lower bound, it turns out that it suffices to use an existing lower bound, which was originally derived for standard online learning without switching costs. We restate this lower bound in Lemma 1.

Lemma 1.

(Seldin et al., 2014, Theorem 2) In the setting of online learning (without switching costs) under a total observation budget $B\in[K,KT]$, the minimax regret is lower bounded by $R_{T}^{*}=\Omega(T\sqrt{K/B})$.

Naturally, this serves as a valid lower bound for the setting with switching costs we consider. In fact, we will show that this lower bound is tight (up to poly-logarithmic factors), which in turn offers the following important message.

Remark 5.

If the learner can freely make observations over the $T$ rounds within the budget, introducing switching costs does not increase the intrinsic difficulty of the online learning problem in terms of the minimax regret.

Now, it only remains to show that there exist algorithms that can achieve a matching upper bound (up to poly-logarithmic factors), which will be the main focus of the next subsection.

4.2 Learning Algorithms and Upper Bounds

In this subsection, we show that there indeed exist algorithms that can achieve the lower bound in Lemma 1, which further implies the tight bound in Theorem 2. Instead of focusing on one particular algorithm, we first propose a generic algorithmic framework, which not only enables us to design various optimal learning algorithms in a unified way but also facilitates a fundamental understanding of the problem by distilling its key components.

Our generic framework builds upon the classic Online Mirror Descent (OMD) framework with negative entropy regularizer (also called the Hedge algorithm) (Littlestone & Warmuth, 1989) and incorporates the following three key components to tackle both switching costs and observation budget in a synergistic manner.

Batching Technique. The batching technique was originally proposed for addressing adaptive adversaries (Arora et al., 2012), but naturally provides low-switching guarantees. We divide the $T$ rounds into batches and judiciously distribute the available observations across batches. That is, instead of consuming observations at every round as in standard online learning (which could even be infeasible when the observation budget $B$ is relatively small), we use observations only at a single round randomly sampled from each batch. One key step to obtain the desired regret guarantee is to feed the (unbiased estimate of the) batch-average loss to the learning algorithm at the end of each batch. While this technique is borrowed from Shi et al. (2022), the problem setup we consider is very different (see Section 5).

Shrinking Dartboard (SD). SD is a calibrated technique for controlling the number of action switches in online learning under a lazy version of Hedge. That is, with a carefully crafted probability distribution, the action tends to remain unchanged across two consecutive rounds (Geulen et al., 2010) while preserving the same marginal distribution as in Hedge. In our algorithmic framework, we generalize this idea to the batching case with general feedback: the same action can be played across two consecutive batches (instead of across rounds), and it is no longer required to use only full-information feedback as in Geulen et al. (2010).

Feedback Type. Recall that the learner is allowed to freely request feedback within the total budget. Hence, our last component lies in the feedback type. That is, the learner has the flexibility to choose the observation set $\mathcal{O}_{u_{b}}$ (not limited to bandit or full-information feedback only). In order to achieve a matching upper bound, however, the choice of the observation set (i.e., the type of feedback) is crucial in some cases. We will elaborate on this in Section 4.3.

Putting these three components together, we arrive at our unified algorithmic framework, which is presented in Algorithm 1. Given the input $T$, $K$, and $B$ of the problem, we need to determine the following input of the algorithm: the number of batches $N$, batch size $\tau$, learning rate $\eta$, and indicator $I_{\mathrm{SD}}$ (Line 1), along with the initialization of some variables (Line 2). Throughout the game, we maintain a positive weight $W_{b}[k]$ for each action $k\in[K]$ in each batch $b\in[N]$. Both the weights and the action for each batch may be updated only between two consecutive batches. Hence, in each batch $b$, we keep playing the chosen action $A_{b}$ until the end of the batch (Line 4); we sample a round $u_{b}$ uniformly at random from the current batch (Line 5) and choose an observation set $\mathcal{O}_{u_{b}}$ in a certain way (to be specified later) such that the loss of each action in $\mathcal{O}_{u_{b}}$ will be observed at round $u_{b}$ (Line 6). We then construct an unbiased estimate (Line 7), denoted by $\widehat{\ell}_{b}=(\widehat{\ell}_{b}[1],\dots,\widehat{\ell}_{b}[K])$, of the batch-average loss $\sum_{t=(b-1)\tau+1}^{b\tau}\ell_{t}/\tau$ (which depends on the choice of $\mathcal{O}_{u_{b}}$ and will be specified later) and then update the weight and sampling probability of each action accordingly: $W_{b+1}[k]=W_{b}[k]\cdot\exp(-\eta\cdot\widehat{\ell}_{b}[k])$ and $w_{b+1}[k]:=W_{b+1}[k]/\sum_{i=1}^{K}W_{b+1}[i]$ (Line 8). Finally, we determine action $A_{b+1}$ for the next batch (Line 9). Specifically, if the SD indicator $I_{\mathrm{SD}}=0$, the probability $I_{\mathrm{SD}}\cdot\exp(-\eta\cdot\widehat{\ell}_{b}[A_{b}])$ is always zero, and hence, action $A_{b+1}$ is sampled using fresh randomness with probability proportional to the action weights, as normally done in Hedge: sample $A_{b+1}$ following distribution $w_{b+1}=(w_{b+1}[1],\dots,w_{b+1}[K])$. If the SD indicator $I_{\mathrm{SD}}=1$, with probability $\exp(-\eta\cdot\widehat{\ell}_{b}[A_{b}])$, we keep the current action for the next batch (i.e., $A_{b+1}=A_{b}$); otherwise, we sample a new action $A_{b+1}$ following distribution $w_{b+1}$.

Algorithm 1 Batched Online Mirror Descent with (Optional) Shrinking Dartboard
1:  Input: length of time horizon $T$, number of actions $K$, and observation budget $B$; determine the following based on $T$, $K$, and $B$: number of batches $N$, batch size $\tau=T/N$, learning rate $\eta$, and SD indicator $I_{\mathrm{SD}}$
2:  Initialization: action weight $W_{1}[k]=1$ and action sampling distribution $w_{1}[k]=1/K,\forall k\in[K]$; $\mathcal{O}_{t}=\emptyset,\forall t\in[T]$; choose $A_{1}\in[K]$ uniformly at random
3:  for batch $b=1:N$ do
4:     Play action $A_{b}$ throughout the current batch $b$, i.e., $X_{t}=A_{b},\forall t=(b-1)\tau+1,\dots,b\tau$
5:     Sample a round index $u_{b}$ uniformly at random from integers in $[(b-1)\tau+1,b\tau]$
6:     Choose an observation set $\mathcal{O}_{u_{b}}\subseteq[K]$ (to be specified later) and observe the loss of each action in $\mathcal{O}_{u_{b}}$ at round $u_{b}$: $\{\ell_{u_{b}}[i]:i\in\mathcal{O}_{u_{b}}\}$
7:     Construct unbiased estimate $\widehat{\ell}_{b}$ (to be specified later) of the batch-average loss $\sum_{t=(b-1)\tau+1}^{b\tau}\ell_{t}/\tau$
8:     Run OMD update: update the weight of each action: $W_{b+1}[k]=W_{b}[k]\cdot\exp(-\eta\cdot\widehat{\ell}_{b}[k])$ and the sampling probability: $w_{b+1}[k]=\frac{W_{b+1}[k]}{\sum_{i=1}^{K}W_{b+1}[i]},\forall k\in[K]$
9:     With probability $I_{\mathrm{SD}}\cdot\exp(-\eta\cdot\widehat{\ell}_{b}[A_{b}])$, keep action $A_{b+1}=A_{b}$; otherwise, sample action $A_{b+1}\sim w_{b+1}$
10:  end for
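To make the framework concrete, below is a minimal Python sketch of Algorithm 1 (a sketch, not the authors' code; the environment interface get_losses, the plug-in functions, and the reading of Line 9 as keeping the current action with probability $I_{\mathrm{SD}}\cdot\exp(-\eta\,\widehat{\ell}_{b}[A_{b}])$ are our own assumptions). The observation set and the loss estimator are left as plug-ins, matching the instantiations discussed next.

import numpy as np

def batched_omd_sd(T, K, N, eta, use_sd, choose_obs_set, loss_estimate, get_losses, rng):
    """Minimal sketch of Algorithm 1: batched OMD (Hedge) with an optional
    Shrinking Dartboard step; observation set and loss estimator are plug-ins."""
    tau = T // N                                   # batch size (assumed integer, as in the text)
    W = np.ones(K)                                 # action weights W_b
    w = W / W.sum()                                # sampling distribution w_b
    A = int(rng.integers(K))                       # A_1 chosen uniformly at random
    total_loss, switches = 0.0, 0

    for b in range(N):
        start = b * tau
        # Line 4: play A_b throughout the batch.
        for t in range(start, start + tau):
            total_loss += get_losses(t)[A]
        # Lines 5-6: sample one round of the batch and observe the chosen set there.
        u = int(rng.integers(start, start + tau))
        O = choose_obs_set(b, A, rng)
        obs = {k: get_losses(u)[k] for k in O}
        # Lines 7-8: unbiased estimate of the batch-average loss, then OMD update.
        ell_hat = loss_estimate(obs, O, w)
        W = W * np.exp(-eta * ell_hat)
        w_next = W / W.sum()
        # Line 9 (our reading): keep A_b with probability I_SD * exp(-eta * ell_hat[A_b]).
        keep_prob = np.exp(-eta * ell_hat[A]) if use_sd else 0.0
        if rng.random() >= keep_prob:
            A_next = int(rng.choice(K, p=w_next))
            switches += int(A_next != A)
            A = A_next
        w = w_next
    return total_loss, switches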

With Algorithm 1 in hand, we are ready to introduce several specific instantiations and study their regret guarantees. In particular, for each instantiation we will specify the choice of the following parameters: number of batches $N$, batch size $\tau$, learning rate $\eta$, SD indicator $I_{\mathrm{SD}}$, and observation set $\mathcal{O}_{u_{b}}$. In the following, we first demonstrate one simple instantiation that uses full-information feedback only. Then, we show how to generalize this instantiation using more flexible feedback (i.e., not limited to full information only) while achieving the same performance guarantee.

Instantiation via Full-information Feedback. In this instantiation of Algorithm 1, we receive full-information feedback at a randomly selected round $u_{b}$ in each batch $b$ (i.e., $\mathcal{O}_{u_{b}}=[K]$ and $\widehat{\ell}_{b}=\ell_{u_{b}}$), and SD is turned on (i.e., $I_{\mathrm{SD}}=1$). At a high level, this can be viewed as a batched generalization of the original SD algorithm (Geulen et al., 2010) with $N=B/K$ batches (since we have $K$ observations in each batch), and hence, the corresponding batch size is $\tau=T/N=KT/B$. For ease of exposition, we assume that $N$ and $\tau$ are integers. Specifically, we have $N=B/K$, $\tau=KT/B$, $\eta=\sqrt{\frac{2\ln K}{3B}}$, $I_{\mathrm{SD}}=1$, $\mathcal{O}_{u_{b}}=[K]$, and $\widehat{\ell}_{b}=\ell_{u_{b}}$. We use $\pi_{\mathrm{full}}$ to denote this instantiation and present its regret upper bound in Proposition 2. The proof is provided in Appendix C.
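Under the same assumptions as the sketch above, the instantiation $\pi_{\mathrm{full}}$ corresponds to the following hypothetical plug-ins (mirroring $N=B/K$, $\eta=\sqrt{2\ln K/(3B)}$, $I_{\mathrm{SD}}=1$, $\mathcal{O}_{u_{b}}=[K]$, and $\widehat{\ell}_{b}=\ell_{u_{b}}$):

import numpy as np

def make_pi_full(T, K, B):
    # pi_full: full information at one round per batch, SD turned on.
    N = B // K                                      # number of batches (assumed to divide evenly)
    eta = np.sqrt(2 * np.log(K) / (3 * B))

    def choose_obs_set(b, A_b, rng):
        return list(range(K))                       # observe all K actions at round u_b

    def loss_estimate(obs, O, w):
        return np.array([obs[k] for k in range(K)])  # ell_hat_b = ell_{u_b}

    return N, eta, True, choose_obs_set, loss_estimate

For example, one could run N, eta, use_sd, obs, est = make_pi_full(T, K, B) and pass these into batched_omd_sd together with an environment get_losses and an rng.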

Proposition 2.

The worst-case regret under algorithm $\pi_{\mathrm{full}}$ is upper bounded by $R_{T}^{\pi_{\mathrm{full}}}=O(T\sqrt{K\ln K/B})$.

Remark 6.

This result immediately implies an upper bound on the minimax regret: $R^{*}_{T}=O(T\sqrt{K\ln K/B})$, which, along with the lower bound in Lemma 1, further implies the tight bound in Theorem 2. Note that there is an additional $\sqrt{\ln K}$ factor in the upper bound. This shares the same pattern as in the setting even without switching costs (see Seldin et al. (2014, Theorem 1)), where the achieved upper bound also has an additional $\sqrt{\ln K}$ factor.

Remark 7.

For the previous setting considered in Section 3, the above result also implies an upper bound on the minimax regret, $\widetilde{O}(T\sqrt{K/B_{\mathrm{ex}}})$ when $B_{\mathrm{ex}}=\Omega(K^{1/3}T^{2/3})$, obtained by simply ignoring all bandit feedback (i.e., $B=B_{\mathrm{ex}}$). On the other hand, as discussed in Section 3.4, when $B_{\mathrm{ex}}=O(K^{1/3}T^{2/3})$, one can simply ignore the extra observations and use pure bandit feedback only (e.g., batched EXP3 (Arora et al., 2012)) to achieve an $\widetilde{O}(K^{1/3}T^{2/3})$ regret. Combining these results, along with the lower bound in Proposition 1, implies the tight bound in Theorem 1. Moreover, this also answers question (Q2) raised in Section 3.

The result of our first instantiation shows that the optimal regret can indeed be achieved (up to a $\sqrt{\ln K}$ factor) when full-information feedback is employed. However, we can also show that the use of full-information feedback is not essential. In fact, it suffices to have an observation set chosen uniformly at random from all subsets of $[K]$ with the same cardinality, which leads to a more flexible instantiation of Algorithm 1, presented below.

Instantiation via Flexible Feedback. In this instantiation, instead of having $|\mathcal{O}_{u_{b}}|=K$ as under full-information feedback, we allow $|\mathcal{O}_{u_{b}}|=M\leq K$. The key to this flexibility is a careful construction of an unbiased estimate of the batch-average loss (i.e., $\widehat{\ell}_{b}$). Specifically, let $M$ be any integer that satisfies $M\in[K]$ if $B<T$ and $M\in[\lceil B/T\rceil,K]$ if $B\geq T$ (to fully use the budget, $M$ cannot be too small when $B\geq T$). Then, we have $N=B/M$, $\tau=T/N=MT/B$, $\eta=M\sqrt{\frac{2\ln K}{3KB}}$, $I_{\mathrm{SD}}=1$, $\mathcal{O}_{u_{b}}$ chosen uniformly at random from $\{U\in 2^{[K]}:|U|=M\}$, and $\widehat{\ell}_{b}[k]=\mathbb{I}_{\{k\in\mathcal{O}_{u_{b}}\}}\cdot\frac{\ell_{u_{b}}[k]}{M/K}$ for all $k\in[K]$. We use $\pi_{\mathrm{flex}}$ to denote this instantiation and present its regret upper bound in Proposition 3. The proof is provided in Appendix D.
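As a quick sanity check (consistent with Line 7 of Algorithm 1), $\widehat{\ell}_{b}$ is indeed an unbiased estimate of the batch-average loss: since $\mathcal{O}_{u_{b}}$ is uniform over size-$M$ subsets, $\Pr(k\in\mathcal{O}_{u_{b}})=M/K$, and $u_{b}$ is uniform over the rounds of batch $b$, so

$$\mathbb{E}\big[\widehat{\ell}_{b}[k]\big]=\mathbb{E}_{u_{b}}\!\left[\Pr\big(k\in\mathcal{O}_{u_{b}}\big)\cdot\frac{\ell_{u_{b}}[k]}{M/K}\right]=\mathbb{E}_{u_{b}}\big[\ell_{u_{b}}[k]\big]=\frac{1}{\tau}\sum_{t=(b-1)\tau+1}^{b\tau}\ell_{t}[k].$$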

Proposition 3.

The worst-case regret under algorithm $\pi_{\mathrm{flex}}$ is upper bounded by $R_{T}^{\pi_{\mathrm{flex}}}=O(T\sqrt{K\ln K/B})$.

An astute reader may already notice that in the above flexible instantiation, while the number of observations can be one (i.e., $|\mathcal{O}_{u_{b}}|=1$), it is not the same as standard bandit feedback. This is because here, $\mathcal{O}_{u_{b}}$ needs to be chosen uniformly at random rather than simply being the action played in that batch (i.e., $\mathcal{O}_{u_{b}}=\{A_{b}\}$) as in the standard bandit setting (with a batch size of one). Motivated by this subtle difference, we will devote the next subsection to studying the impact of feedback type.

4.3 Impact of Feedback Type

In this subsection, we study the impact of feedback type by presenting another instantiation of Algorithm 1 via pure bandit feedback only. In this case, we naturally have $B\leq T$.

Instantiation via Bandit Feedback. This instantiation is a generalized version of batched EXP3 (Arora et al., 2012) with a flexible batch size. Specifically, we have $N=B$, $\tau=T/B$, $\eta=\sqrt{\frac{2\ln K}{BK}}$, $I_{\mathrm{SD}}=0$, $\mathcal{O}_{u_{b}}=\{A_{b}\}$, and $\widehat{\ell}_{b}[k]=\mathbb{I}_{\{k\in\mathcal{O}_{u_{b}}\}}\cdot\frac{\ell_{u_{b}}[k]}{w_{b}[k]}$ for all $k\in[K]$. We use $\pi_{\mathrm{b}}$ to denote this instantiation. When $B=O(K^{1/3}T^{2/3})$, we obtain a regret upper bound for $\pi_{\mathrm{b}}$ and state it in Proposition 4. The proof is provided in Appendix E.
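For comparison, under the same assumptions as the Algorithm 1 sketch in Section 4.2, $\pi_{\mathrm{b}}$ would correspond to the following hypothetical plug-ins ($N=B$, $\eta=\sqrt{2\ln K/(BK)}$, $I_{\mathrm{SD}}=0$, $\mathcal{O}_{u_{b}}=\{A_{b}\}$, importance-weighted estimate):

import numpy as np

def make_pi_bandit(T, K, B):
    # pi_b: observe only the played action, SD turned off, importance weighting by w_b[A_b].
    N = B                                           # one observation per batch of size T/B
    eta = np.sqrt(2 * np.log(K) / (B * K))

    def choose_obs_set(b, A_b, rng):
        return [A_b]

    def loss_estimate(obs, O, w):
        ell_hat = np.zeros(len(w))
        k = O[0]
        ell_hat[k] = obs[k] / w[k]                  # unbiased since A_b ~ w_b when SD is off
        return ell_hat

    return N, eta, False, choose_obs_set, loss_estimate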

Proposition 4.

When $B=O(K^{1/3}T^{2/3})$, the worst-case regret under algorithm $\pi_{\mathrm{b}}$ is upper bounded by $R_{T}^{\pi_{\mathrm{b}}}=O(T\sqrt{K\ln K/B})$.

Remark 8.

This result is encouraging, in the sense that when $B=O(K^{1/3}T^{2/3})$, even using pure bandit feedback can achieve the optimal minimax regret of $\widetilde{\Theta}(T\sqrt{K/B})$. This result also answers question (Q1) raised in Section 3. First, it captures the regret scaling with respect to the amount of bandit feedback (i.e., still $\Theta(1/\sqrt{B})$) when $B$ is relatively small. Second, it implies that to achieve a regret of $\widetilde{\Theta}(K^{1/3}T^{2/3})$, it suffices to use bandit feedback from only $B=\Theta(K^{1/3}T^{2/3})$ rounds rather than all $T$ rounds as in the classic algorithms (Arora et al., 2012). The same minimax regret at these two endpoints ($B=\Theta(K^{1/3}T^{2/3})$ and $B=T$) further implies that if only bandit feedback is allowed, the minimax regret is also $\widetilde{\Theta}(K^{1/3}T^{2/3})$ when $B=\Omega(K^{1/3}T^{2/3})$ (i.e., in between the two endpoints). In this case, bandit feedback is no longer sufficient to achieve the optimal minimax regret of $\widetilde{\Theta}(T\sqrt{K/B})$, although full-information and flexible feedback can still achieve this optimal minimax regret (see Propositions 2 and 3). Clearly, this shows the crucial impact of different types of feedback (when the total budget $B$ is large), which answers the second part of question (Q3). On the other hand, however, a straightforward result (Proposition 5 in Appendix F), along with Propositions 2 and 3 and Lemma 1, shows that in the standard setting without switching costs, all three types of feedback can achieve optimal regret in the full range of $B$. This reveals that the impact of feedback type is partly due to switching costs. We also summarize these results in Table 1.

Remark 9.

Under bandit feedback, adopting a different regularizer called Tsallis entropy (Audibert & Bubeck, 2009) in the OMD framework could further remove the $\sqrt{\ln K}$ factor in the upper bound from Proposition 4 and exactly match the lower bound (order-wise) presented in Lemma 1.

5 Related Work

In this section, we present detailed discussions on several lines of research that are most relevant to ours. We omit the discussion on bandit and expert problems with switching costs as we have discussed this line of work in Section 1.

Online Learning with Total Observation Budget. In this line of research, the focus is on regret minimization when feedback is not always available and is hence "limited" within a total budget. For example, in the so-called "label efficient (bandit) game" (Cesa-Bianchi et al., 2004; Audibert & Bubeck, 2010), the learner can ask for full-information/bandit feedback from no more than $m\in[1,T]$ round(s). It is shown that the tight optimal regrets are $\Theta(T\sqrt{\ln K/m})$ and $\Theta(T\sqrt{K/m})$ under full-information and bandit feedback, respectively. Seldin et al. (2014) also considers a total observation budget in online learning, where the learner can freely request feedback, as long as the total number of observed losses does not exceed the given total budget $B$. They establish a tight characterization of the minimax regret in their setting (i.e., $\widetilde{\Theta}(T\sqrt{K/B})$). However, they consider neither switching costs nor, in their algorithm design, the case where the total observation budget is smaller than $T$. Interestingly, we show that introducing switching costs does not increase the intrinsic difficulty of online learning in the sense that the minimax regret remains $\widetilde{\Theta}(T\sqrt{K/B})$, but the feedback type becomes crucial.

Bandits with Additional Observations. Yun et al. (2018) considers the bandit setting with additional observations, where the learner can freely make $n\in[0,K-1]$ observations at each round in addition to the bandit feedback. Hence, this can be viewed as a special case of online learning with a total observation budget (Seldin et al., 2014). That is, a total of $(n+1)T$ observations are used in a particular way (i.e., bandit plus extra observations). They present a tight characterization of the scaling of the minimax regret with respect to $K$, $T$, and $n$. Similar to Seldin et al. (2014), however, switching costs are not considered.

Online Learning with Switching Costs and Feedback Graphs. Arora et al. (2019) considers online learning with switching costs and feedback graphs, where given a feedback graph $G$, the learner observes the losses associated with the neighboring action(s) of the chosen action (including itself). However, the feedback graph is given, and hence the additional feedback is not of the learner's choice. Arora et al. (2019) shows that in this setting, the minimax regret is $\widetilde{\Theta}(\gamma(G)^{1/3}T^{2/3})$, where $\gamma(G)$ is the domination number of the feedback graph $G$. Hence, the dependency on $T$ remains the same as in the standard bandit setting without additional observations (i.e., $\widetilde{\Theta}(T^{2/3})$). On the contrary, in the setting we consider, the learner can freely decide which actions' losses to observe, which leads to different (and more interesting) regret bounds.

Online Learning with Limited Switches. Altschuler & Talwar (2018) considers online learning with limited switches. In contrast to the settings with switching costs, here the learner does not pay additional losses for switching actions; instead, the total number of switches allowed is capped at $S$. Compared to our setting, a key difference is that switching is a constraint rather than a penalty added to the loss/cost function. They show that in the bandit setting, the minimax regret is $\widetilde{\Theta}(T\sqrt{K/S})$, i.e., the regret improves as the switching budget increases; in the expert setting, however, there is a phase-transition phenomenon: while the minimax regret is $\widetilde{\Theta}(T\ln K/S)$ when $S=O(\sqrt{T\ln K})$, it remains $\widetilde{\Theta}(\sqrt{T\ln K})$ when $S=\Omega(\sqrt{T\ln K})$.

Online Learning against Adaptive Adversaries. Online learning with switching costs can also be viewed as a special case of learning against adaptive adversaries, where the losses at round $t$ are adapted to the actions taken at both rounds $t$ and $t-1$ (in contrast to the oblivious adversaries we consider). Such adversaries have a bounded memory (of size one), in the sense that they can adapt only to the most recent action, instead of any history in the earlier rounds (Cesa-Bianchi et al., 2013). Adopting the multi-scale random walk argument in Dekel et al. (2013), it has been shown that against adaptive adversaries with a memory of size one, the minimax policy regret is $\widetilde{\Theta}(T^{2/3})$ under both bandit feedback (Cesa-Bianchi et al., 2013) and full-information feedback (Feng & Loh, 2018). This is fundamentally different from the special case with switching costs, where the minimax regret differs under bandit feedback and full-information feedback ($\widetilde{\Theta}(T^{2/3})$ vs. $\widetilde{\Theta}(\sqrt{T})$).

Stochastic Bandits and the Best of Both Worlds.

Note that the above discussions have been focused on the adversarial setting. There is another body of work focused on the stochastic setting (see, e.g., Auer et al. (2002a); Auer (2003); Simchi-Levi & Xu (2019)), where the loss/reward follows some fixed distribution rather than being generated arbitrarily by an adversary. Hence, it is very different from the adversarial setting we consider. An interesting line of work has been focused on designing algorithms that can perform well in both adversarial and stochastic settings, thus achieving the best of both worlds (see, e.g., Bubeck & Slivkins (2012); Zimmert et al. (2019)).

Other Related Work.

In Shi et al. (2022), a novel bandit setting with switching costs and additional feedback has been considered. Specifically, the learner maintains an "action buffer" for each round, which is a subset of actions with fixed cardinality $m\in[K]$, and the learner can only take an action from this buffer set. Their switching cost can be roughly viewed as how much change is made to this buffer set throughout the game: replacing an action in the buffer set incurs a constant cost. While the learner can observe the losses of all the actions in this buffer set for free, the learner can also choose to receive full-information feedback (i.e., observing the losses of all actions rather than just the actions in the buffer set) by paying another (larger) constant cost. Although we draw inspiration from their work for deriving the lower bound and designing algorithms, both their problem setup and regret definition are very different from ours, and more importantly, they do not consider an observation budget.

6 Conclusion

Our work is motivated by a well-known gap in the minimax regret under bandit feedback and full-information feedback in online learning with switching costs. We attempted to fundamentally understand the role of feedback by studying two cases of observation budget: (i) bandit feedback plus an extra observation budget and (ii) a total observation budget. Our findings reveal that both the amount and type of feedback play crucial roles when there are switching costs.

One interesting future direction is to consider stronger high-probability regret guarantees (Neu, 2015). Another direction is to achieve best-of-both-worlds guarantees for regret with switching costs (Rouyer et al., 2021; Amir et al., 2022).

Acknowledgments

We thank the anonymous paper reviewers for their insightful feedback. This work is supported in part by the NSF grants under CNS-2112694 and CNS-2153220.

References

  • Altschuler & Talwar (2018) Altschuler, J. M. and Talwar, K. Online learning over a finite action set with limited switching. ArXiv, abs/1803.01548, 2018.
  • Amir et al. (2022) Amir, I., Azov, G., Koren, T., and Livni, R. Better best of both worlds bounds for bandits with switching costs. In Advances in Neural Information Processing Systems, volume 35, pp.  15800–15810, 2022.
  • Arora et al. (2012) Arora, R., Dekel, O., and Tewari, A. Online bandit learning against an adaptive adversary: from regret to policy regret. In Proceedings of the 29th International Coference on International Conference on Machine Learning, pp.  1747–1754, 2012.
  • Arora et al. (2019) Arora, R., Marinov, T. V., and Mohri, M. Bandits with feedback graphs and switching costs. Advances in Neural Information Processing Systems, 32, 2019.
  • Audibert & Bubeck (2009) Audibert, J.-Y. and Bubeck, S. Minimax policies for adversarial and stochastic bandits. In Annual Conference Computational Learning Theory, 2009.
  • Audibert & Bubeck (2010) Audibert, J.-Y. and Bubeck, S. Regret bounds and minimax policies under partial monitoring. J. Mach. Learn. Res., 11:2785–2836, 2010.
  • Auer (2003) Auer, P. Using confidence bounds for exploitation-exploration trade-offs. J. Mach. Learn. Res., 3:397–422, 2003.
  • Auer et al. (2002a) Auer, P., Cesa-Bianchi, N., and Fischer, P. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, 2002a.
  • Auer et al. (2002b) Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32:48–77, 2002b.
  • Bubeck & Slivkins (2012) Bubeck, S. and Slivkins, A. The best of both worlds: Stochastic and adversarial bandits. In Conference on Learning Theory, pp.  42–1. JMLR Workshop and Conference Proceedings, 2012.
  • Cesa-Bianchi & Lugosi (2006) Cesa-Bianchi, N. and Lugosi, G. Prediction, learning, and games. Cambridge university press, 2006.
  • Cesa-Bianchi et al. (2004) Cesa-Bianchi, N., Lugosi, G., and Stoltz, G. Minimizing regret with label efficient prediction. IEEE Transactions on Information Theory, 51:2152–2162, 2004.
  • Cesa-Bianchi et al. (2013) Cesa-Bianchi, N., Dekel, O., and Shamir, O. Online learning with switching costs and other adaptive adversaries. In Advances in Neural Information Processing Systems, volume 26, 2013.
  • Dekel et al. (2013) Dekel, O., Ding, J., Koren, T., and Peres, Y. Bandits with switching costs: $T^{2/3}$ regret. arXiv preprint arXiv:1310.2997, 2013.
  • Devroye et al. (2013) Devroye, L., Lugosi, G., and Neu, G. Prediction by random-walk perturbation. In Annual Conference Computational Learning Theory, 2013.
  • Feng & Loh (2018) Feng, Z. and Loh, P.-L. Online learning with graph-structured feedback against adaptive adversaries. 2018 IEEE International Symposium on Information Theory (ISIT), pp.  931–935, 2018.
  • Geulen et al. (2010) Geulen, S., Vöcking, B., and Winkler, M. Regret minimization for online buffering problems using the weighted majority algorithm. Electron. Colloquium Comput. Complex., TR10, 2010.
  • Hazan (2016) Hazan, E. Introduction to online convex optimization. Found. Trends Optim., 2:157–325, 2016.
  • Kaplan (2011) Kaplan, S. Power plants: characteristics and costs. DIANE Publishing, 2011.
  • Littlestone & Warmuth (1989) Littlestone, N. and Warmuth, M. K. The weighted majority algorithm. 30th Annual Symposium on Foundations of Computer Science, pp. 256–261, 1989.
  • Neu (2015) Neu, G. Explore no more: Improved high-probability regret bounds for non-stochastic bandits. In NIPS, 2015.
  • Orabona (2019) Orabona, F. A modern introduction to online learning. ArXiv, abs/1912.13213, 2019.
  • Rouyer et al. (2021) Rouyer, C., Seldin, Y., and Cesa-Bianchi, N. An algorithm for stochastic and adversarial bandits with switching costs. In International Conference on Machine Learning, 2021.
  • Seldin et al. (2014) Seldin, Y., Bartlett, P. L., Crammer, K., and Abbasi-Yadkori, Y. Prediction with limited advice and multiarmed bandits with paid observations. In International Conference on Machine Learning, 2014.
  • Shi et al. (2022) Shi, M., Lin, X., and Jiao, L. Power-of-2-arms for bandit learning with switching costs. In Proceedings of the Twenty-Third International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, pp.  131–140, 2022.
  • Simchi-Levi & Xu (2019) Simchi-Levi, D. and Xu, Y. Phase transitions in bandits with switching constraints. ERN: Other Econometrics: Mathematical Methods & Programming (Topic), 2019.
  • Yao (1977) Yao, A. C.-C. Probabilistic computations: Toward a unified measure of complexity. 18th Annual Symposium on Foundations of Computer Science (sfcs 1977), pp.  222–227, 1977.
  • Yun et al. (2018) Yun, D., Proutière, A., Ahn, S., Shin, J., and Yi, Y. Multi-armed bandit with additional observations. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 2:1 – 22, 2018.
  • Zhang et al. (2005) Zhang, Y., Murata, M., Takagi, H., and Ji, Y. Traffic-based reconfiguration for logical topologies in large-scale wdm optical networks. Journal of Lightwave Technology, 23:2854–2867, 2005.
  • Zimmert et al. (2019) Zimmert, J., Luo, H., and Wei, C.-Y. Beating stochastic and adversarial semi-bandits optimally and simultaneously. In International Conference on Machine Learning, pp. 7683–7692. PMLR, 2019.

Appendix A Motivating Examples for Online Learning with Switching Costs and Observation Budget

Consider a retail company that uses online learning to improve its website user interface (UI) design in order to attract more users. In this case, actions correspond to different UI designs. First, switching costs should be taken into account, since frequently changing the website interface may annoy users. To evaluate other actions (i.e., other UI designs), the company can run an A/B test and display different interfaces to separate, relatively small groups of users, so that feedback on these actions is also available (in addition to the feedback on the design displayed to the main, larger population). However, each additional website variant requires extra resources to deploy and maintain; hence, one may want to impose a total observation budget.

Another example is Machine Learning as a Service (MLaaS). Consider a company that uses large ML models for jobs like prediction, chatbots, etc. The company may train several different models and dynamically choose the best one via online learning. Changing the deployed ML model is not free: the new model needs to be loaded (which could be costly, especially nowadays when the number of parameters is quite large), and other components in the pipeline may also need to be adjusted accordingly. As a result, it is natural that redeploying or updating model components incurs a cost. While the performance of the deployed model is easily accessible, the company can also run these jobs using other models that are not deployed in the production system in order to receive additional feedback. However, running these jobs consumes additional resources (e.g., computing and energy), which are not free either. Therefore, one may want to impose a budget on the number of additional observations (i.e., evaluations).

Appendix B Proof of Proposition 1

Proof of Proposition 1.

Our proof is built on Yao’s minimax principle (Yao, 1977). That is, we will establish a lower bound on the expected regret for any deterministic algorithm over a class of randomly generated loss sequences, which further implies the same lower bound for any randomized algorithm over some loss sequence in this class.

To begin with, we give some high-level ideas about the proof. Note that while the loss sequence generation method we adopt is the same as Algorithm 1 in Shi et al. (2022), we need a different analysis to establish the lower bound because we consider a different setting. Specifically, in the original loss sequence construction based on the multi-scale random walk (Dekel et al., 2013), the optimal action kk^{*} has the lowest loss at all TT rounds. With bandit feedback, useful information toward distinguishability (between the optimal action and suboptimal actions) is gained only when the learner switches between actions. With full-information feedback, however, the learner can identify the optimal action immediately, even from a single round. Therefore, to construct a hard instance (i.e., loss sequence) for the setting where the learner is equipped with additional observations beyond the bandit feedback, Shi et al. (2022) introduced an action-dependent noise on top of the original loss (which is called the hidden loss). Now, the learner’s information comes from two parts. On the one hand, the learner still gains distinguishability from switches (which is related to the hidden losses). On the other hand, conditioned on the hidden losses, the extra observations also provide additional information. Combining the two parts, we obtain a lower bound related to both the number of switches and the number of extra observations. For convenience, we restate this loss sequence generation method in Algorithm 2. Specifically, we first generate the sequence {G(t)}t\{G(t)\}_{t} according to the random walk design (Line 4). Next, we determine the loss before truncation, i.e., tinit[k]\ell_{t}^{\text{init}}[k] (Line 5). We first add an action-dependent noise γk(t)\gamma_{k}(t) (an i.i.d. Gaussian random variable) to G(t)G(t) for each action k[K]k\in[K]. Then, for the optimal action kk^{*} only, we further subtract ϵ\epsilon (which is determined at the very beginning as an input to the algorithm) from the value obtained after adding γk(t)\gamma_{k}(t). Finally, we truncate each tinit[k]\ell_{t}^{\text{init}}[k] onto the range [0,1][0,1] and obtain t[k]\ell_{t}[k] (Line 6).

Algorithm 2 Loss Sequence Generation Method (Shi et al., 2022)
1:  Input: suboptimality gap ϵ\epsilon and noise variance σ2\sigma^{2}
2:  Initialization: choose the identity of optimal action kk^{*} uniformly at random from [K][K]; initialize Gaussian process G(t)=0,t0G(t)=0,\forall t\geq 0; define functions δ(t):=max{i0:2i divides t}\delta(t):=\max\{i\geq 0:2^{i}\text{~{}divides~{}}t\} and ρ(t):=t2δ(t),t0\rho(t):=t-2^{\delta(t)},\forall t\geq 0
3:  for time t=1:Tt=1:T do
4:     G(t)=G(ρ(t))+ξ(t)G(t)=G(\rho(t))+\xi(t), where each ξ(t)\xi(t) is an i.i.d. sample from 𝒩(0,σ2)\mathcal{N}(0,\sigma^{2})
5:     tinit[k]=G(t)ϵ𝕀{k=k}+γk(t),k[K]\ell_{t}^{\mathrm{init}}[k]=G(t)-\epsilon\cdot\mathbb{I}_{\{k=k^{*}\}}+\gamma_{k}(t),\forall k\in[K], where γk(t)\gamma_{k}(t) is an i.i.d. sample from Gaussian distribution 𝒩(0,σ2)\mathcal{N}(0,\sigma^{2})
6:     t[k]=argminz[0,1]|ztinit[k]|,k[K]\ell_{t}[k]=\arg\min_{z\in[0,1]}|z-\ell_{t}^{\mathrm{init}}[k]|,\forall k\in[K]
7:  end for
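
For readers who prefer code, the following is a minimal NumPy sketch of this construction; the function name generate_losses and its arguments are hypothetical and only serve to illustrate Lines 4–6 of Algorithm 2.

import numpy as np

def generate_losses(T, K, eps, sigma, rng=None):
    # Minimal sketch of the loss-sequence construction in Algorithm 2 (Shi et al., 2022);
    # all names here are illustrative. Row t of the returned array holds the truncated
    # losses ell_t[k]; row 0 is unused so that indices match the round numbers in the text.
    rng = np.random.default_rng() if rng is None else rng
    k_star = rng.integers(K)                          # optimal action chosen uniformly at random
    G = np.zeros(T + 1)                               # multi-scale random walk with G(0) = 0
    losses = np.zeros((T + 1, K))
    for t in range(1, T + 1):
        delta = (t & -t).bit_length() - 1             # delta(t) = max{i >= 0 : 2^i divides t}
        rho = t - 2 ** delta                          # rho(t) = t - 2^{delta(t)}
        G[t] = G[rho] + rng.normal(0.0, sigma)        # Line 4: random-walk increment xi(t)
        init = G[t] + rng.normal(0.0, sigma, size=K)  # Line 5: action-dependent noise gamma_k(t)
        init[k_star] -= eps                           # Line 5: subtract the gap for the optimal action
        losses[t] = np.clip(init, 0.0, 1.0)           # Line 6: truncate onto [0, 1]
    return losses, k_star

Sampling a loss sequence in this way (for the chosen eps and sigma) is exactly what the lower-bound argument below feeds to an arbitrary deterministic algorithm.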

Next, we give some additional notations needed for this proof. For any k[K]k\in[K], let k\mathbb{P}_{k} denote the conditional probability measure given the special (i.e., optimal) action k=kk^{*}=k, i.e., k():=(|k=k)\mathbb{P}_{k}(\cdot):=\mathbb{P}(\cdot|k^{*}=k). As a special case, when k=0k^{*}=0, 0\mathbb{P}_{0} denotes the conditional probability measure where all the actions are identical and there is no special action. Let 𝔼k[]:=𝔼[|k=k]\mathbb{E}_{k}[\cdot]:=\mathbb{E}[\cdot|k^{*}=k] denote the conditional expectation under measure k\mathbb{P}_{k}, and let 𝔼[]:=1Kk=1K𝔼k[]\mathbb{E}[\cdot]:=\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}_{k}[\cdot]. Let 1:Tob\ell^{\mathrm{ob}}_{1:T} denote the observed loss sequence throughout the game. For two probability distributions 𝒫\mathcal{P} and 𝒬\mathcal{Q} on the same space, let DKL(𝒫𝒬):=𝔼x𝒫[log(d𝒫(x)/d𝒬(x))]D_{\mathrm{KL}}(\mathcal{P}\|\mathcal{Q}):=\mathbb{E}_{x\sim\mathcal{P}}[\log\left(d\mathcal{P}(x)/d\mathcal{Q}(x)\right)] denote the Kullback-Leibler (KL) divergence (i.e., relative entropy) between 𝒫\mathcal{P} and 𝒬\mathcal{Q}, and let DTV(𝒫𝒬):=sup{𝒫(A)𝒬(A):A measurable}D_{\mathrm{TV}}(\mathcal{P}\|\mathcal{Q}):=\sup\{\mathcal{P}(A)-\mathcal{Q}(A):A\text{~{}measurable}\} denote the total variation distance between 𝒫\mathcal{P} and 𝒬\mathcal{Q}.

Let Stk:=𝕀{Xρ(t)=k,Xtk}+𝕀{Xρ(t)k,Xt=k}S_{t}^{k}:=\mathbb{I}_{\{X_{\rho(t)}=k,X_{t}\neq k\}}+\mathbb{I}_{\{X_{\rho(t)}\neq k,X_{t}=k\}} be the indicator of whether it is switched from or to action kk between rounds ρ(t)\rho(t) and tt, let S¯k:=t=1TStk\bar{S}^{k}:=\sum_{t=1}^{T}S_{t}^{k} be the total number of action switches from or to action kk, let S¯\bar{S} be the total number of action switches, i.e., S¯:=t=1T𝕀{XtXt1}=k=1KS¯k/2\bar{S}:=\sum_{t=1}^{T}\mathbb{I}_{\{X_{t}\neq X_{t-1}\}}=\sum_{k=1}^{K}\bar{S}^{k}/2, and let NextN_{\mathrm{ex}}^{t} be the number of extra observations made at round tt in addition to the bandit feedback. Then, we naturally have Next[0,K1]N_{\mathrm{ex}}^{t}\in[0,K-1] and t=1TNextBex\sum_{t=1}^{T}N_{\mathrm{ex}}^{t}\leq B_{\mathrm{ex}} since the learning algorithm makes no more than BexB_{\mathrm{ex}} extra observations in total. Let RTR_{T} be the regret of the deterministic learning algorithm interacting with the loss sequence 1:T\ell_{1:T}, and let RTinitR_{T}^{\mathrm{init}} be the (hypothetical) regret on the same action sequence with respect to loss sequence 1:Tinit\ell_{1:T}^{\text{init}}.

In the following proof, we need Lemmas A.1 and A.4 of Shi et al. (2022) and Lemma 2 of Dekel et al. (2013). We restate these three results as Lemmas 2, 3, and 4, respectively.

Lemma 2.

(Shi et al., 2022, Lemma A.1 (restated)) The KL divergence between 0(1:Tob)\mathbb{P}_{0}\left(\ell^{\mathrm{ob}}_{1:T}\right) and k(1:Tob)\mathbb{P}_{k}\left(\ell^{\mathrm{ob}}_{1:T}\right) can be upper bounded as follows:

DKL(0(1:Tob)k(1:Tob))\displaystyle D_{\mathrm{KL}}\left(\mathbb{P}_{0}\left(\ell^{\mathrm{ob}}_{1:T}\right)\|\mathbb{P}_{k}\left(\ell^{\mathrm{ob}}_{1:T}\right)\right)
t=1T[0(Next=0,Stk=1)ϵ22σ2+j=1K10(Next=j,Stk=0,Xtk)ϵ22σ2\displaystyle\leq\sum_{t=1}^{T}\left[\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=0,S_{t}^{k}=1\right)\cdot\frac{\epsilon^{2}}{2\sigma^{2}}+\sum_{j=1}^{K-1}\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=j,S_{t}^{k}=0,X_{t}\neq k\right)\cdot\frac{\epsilon^{2}}{2\sigma^{2}}\right.
+j=1K10(Next=j,Stk=0,Xt=k)jϵ22σ2+j=1K10(Next=j,Stk=1,Xtk)2ϵ22σ2\displaystyle\quad+\sum_{j=1}^{K-1}\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=j,S_{t}^{k}=0,X_{t}=k\right)\cdot\frac{j\epsilon^{2}}{2\sigma^{2}}+\sum_{j=1}^{K-1}\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=j,S_{t}^{k}=1,X_{t}\neq k\right)\cdot\frac{2\epsilon^{2}}{2\sigma^{2}}
+j=1K10(Next=j,Stk=1,Xt=k)(j+1)ϵ22σ2].\displaystyle\quad\left.+~{}\sum_{j=1}^{K-1}\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=j,S_{t}^{k}=1,X_{t}=k\right)\cdot\frac{(j+1)\epsilon^{2}}{2\sigma^{2}}\right].

Lemma 2 is obtained by considering five disjoint cases (which correspond to the five terms on the right-hand side (RHS)) in terms of different values of NextN_{\text{ex}}^{t}, StkS_{t}^{k}, and XtX_{t}. This lemma reveals how the KL divergence relates to both the number of switches and the number of extra observations, and it will be used for deriving Eq. (7).

Lemma 3.

(Shi et al., 2022, Lemma A.4 (restated)) Consider the instance construction in Algorithm 2. Suppose that we have ϵ1/6\epsilon\leq 1/6 and σ=1/(9log2T)\sigma=1/(9\log_{2}T). Then, the difference between 𝔼[RT]\mathbb{E}\left[R_{T}\right] and 𝔼[RTinit]\mathbb{E}\left[R_{T}^{\mathrm{init}}\right] can be bounded as follows:

𝔼[RTinit]𝔼[RT]ϵT6.\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]-\mathbb{E}\left[R_{T}\right]\leq\frac{\epsilon T}{6}.

Although the multi-scale random walk serves as a powerful and convenient tool for obtaining the desired lower bound, one issue is that the losses could lie outside the range [0,1][0,1], which violates our problem setup. That is, based on the random walk, we can derive a lower bound on 𝔼[RTinit]\mathbb{E}\left[R_{T}^{\text{init}}\right], which is with respect to a possibly unbounded loss sequence 1:Tinit\ell_{1:T}^{\text{init}}. However, our goal is to obtain a lower bound with respect to the bounded losses. To get around this issue, Lemma 3 presents a useful result: if ϵ\epsilon and σ\sigma satisfy certain conditions, then the difference between 𝔼[RTinit]\mathbb{E}\left[R_{T}^{\text{init}}\right] and 𝔼[RT]\mathbb{E}\left[R_{T}\right] is not too large, which suffices to give us the desired result.

Lemma 4.

(Dekel et al., 2013, Lemma 2 (restated)) Under the instance construction in Algorithm 2, the following is satisfied:

t=1T0(Stk=1)=𝔼[t=1TStk](log2T+1)𝔼0[S¯k].\sum_{t=1}^{T}\mathbb{P}_{0}\left(S_{t}^{k}=1\right)=\mathbb{E}\left[\sum_{t=1}^{T}S_{t}^{k}\right]\leq(\lfloor\log_{2}T\rfloor+1)\cdot\mathbb{E}_{0}\left[\bar{S}^{k}\right].

Furthermore, it can be bounded by 2log2T𝔼0[S¯k]2\log_{2}T\cdot\mathbb{E}_{0}\left[\bar{S}^{k}\right] for a sufficiently large TT.

Remark 10.

Lemma 4 holds regardless of whether the action-dependent noise is added or not. Therefore, it is true under the instance constructions of both Dekel et al. (2013) and Shi et al. (2022).

Lemma 4 relies on the careful design of the random walk. We refer interested readers to (Dekel et al., 2013, Section 3) for technical details. In the following proof, we will first bound the KL divergence in part by 𝔼[t=1TStk]\mathbb{E}[\sum_{t=1}^{T}S_{t}^{k}]. This term is different from the switching costs we consider, since StkS_{t}^{k} roughly indicates a switch between rounds ρ(t)\rho(t) and tt, rather than between two consecutive rounds. To handle this difference, Lemma 4 builds a connection between the two and will be used for deriving Eq. (7).

With the above three restated lemmas, we are now ready to derive an upper bound on the KL divergence between 0(1:Tob)\mathbb{P}_{0}\left(\ell^{\mathrm{ob}}_{1:T}\right) and k(1:Tob)\mathbb{P}_{k}\left(\ell^{\mathrm{ob}}_{1:T}\right). In particular, for any k[K]k\in[K], we have

DKL(0(1:Tob)k(1:Tob))\displaystyle D_{\mathrm{KL}}\left(\mathbb{P}_{0}\left(\ell^{\mathrm{ob}}_{1:T}\right)\|\mathbb{P}_{k}\left(\ell^{\mathrm{ob}}_{1:T}\right)\right)
(a)t=1T[0(Next=0,Stk=1)ϵ22σ2+j=1K10(Next=j,Stk=0,Xtk)ϵ22σ2\displaystyle\overset{\text{(a)}}{\leq}\sum_{t=1}^{T}\left[\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=0,S_{t}^{k}=1\right)\cdot\frac{\epsilon^{2}}{2\sigma^{2}}+\sum_{j=1}^{K-1}\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=j,S_{t}^{k}=0,X_{t}\neq k\right)\cdot\frac{\epsilon^{2}}{2\sigma^{2}}\right.
+j=1K10(Next=j,Stk=0,Xt=k)jϵ22σ2+j=1K10(Next=j,Stk=1,Xtk)2ϵ22σ2\displaystyle\quad+\sum_{j=1}^{K-1}\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=j,S_{t}^{k}=0,X_{t}=k\right)\cdot\frac{j\epsilon^{2}}{2\sigma^{2}}+\sum_{j=1}^{K-1}\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=j,S_{t}^{k}=1,X_{t}\neq k\right)\cdot\frac{2\epsilon^{2}}{2\sigma^{2}}
+j=1K10(Next=j,Stk=1,Xt=k)(j+1)ϵ22σ2]\displaystyle\quad\left.+~{}\sum_{j=1}^{K-1}\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=j,S_{t}^{k}=1,X_{t}=k\right)\cdot\frac{(j+1)\epsilon^{2}}{2\sigma^{2}}\right]
(b)t=1T[0(Next=0,Stk=1)ϵ22σ2+j=1K1jϵ2σ2(0(Next=j,Stk=0,Xtk)+0(Next=j,Stk=0,Xt=k)\displaystyle\overset{\text{(b)}}{\leq}\sum_{t=1}^{T}\left[\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=0,S_{t}^{k}=1\right)\cdot\frac{\epsilon^{2}}{2\sigma^{2}}+\sum_{j=1}^{K-1}\frac{j\epsilon^{2}}{\sigma^{2}}\cdot\left(\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=j,S_{t}^{k}=0,X_{t}\neq k\right)+\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=j,S_{t}^{k}=0,X_{t}=k\right)\right.\right.
+0(Next=j,Stk=1,Xtk)+0(Next=j,Stk=1,Xt=k))]\displaystyle\quad\left.\left.+\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=j,S_{t}^{k}=1,X_{t}\neq k\right)+\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=j,S_{t}^{k}=1,X_{t}=k\right)\right)\right]
(c)t=1T[0(Stk=1)ϵ22σ2+j=1K10(Next=j)jϵ2σ2]\displaystyle\overset{\text{(c)}}{\leq}\sum_{t=1}^{T}\left[\mathbb{P}_{0}\left(S_{t}^{k}=1\right)\cdot\frac{\epsilon^{2}}{2\sigma^{2}}+\sum_{j=1}^{K-1}\mathbb{P}_{0}\left(N_{\mathrm{ex}}^{t}=j\right)\cdot\frac{j\epsilon^{2}}{\sigma^{2}}\right]
(d)2log2Tϵ22σ2(𝔼0[S¯k]+2Bex),\displaystyle\overset{\text{(d)}}{\leq}2\log_{2}T\cdot\frac{\epsilon^{2}}{2\sigma^{2}}\cdot\left(\mathbb{E}_{0}\left[\bar{S}^{k}\right]+2B_{\mathrm{ex}}\right), (7)

where (a) is from Lemma 2, (b) is obtained by enlarging the last four terms using the fact that 2j+12j,j12\leq j+1\leq 2j,\forall j\geq 1, (c) is obtained by applying the monotonicity property of probability to the first term and merging the last four disjoint events, and (d) is from Lemma 4 and the fact that t=1Tj=1K10(Next=j)j=t=1T𝔼0[Next]Bex\sum_{t=1}^{T}\sum_{j=1}^{K-1}\mathbb{P}_{0}(N_{\mathrm{ex}}^{t}=j)\cdot j=\sum_{t=1}^{T}\mathbb{E}_{0}\left[N_{\mathrm{ex}}^{t}\right]\leq B_{\mathrm{ex}}. Note that Eq. (7) indicates that the KL divergence (which can be viewed as the information obtained by the learner) is related to both the number of switches and the amount of extra feedback. With the upper bound on the KL divergence in Eq. (7), we can also bound the total variation. Specifically, we have

1Kk=1KDTV(0(1:Tob)k(1:Tob))\displaystyle\frac{1}{K}\sum_{k=1}^{K}D_{\mathrm{TV}}\left(\mathbb{P}_{0}\left(\ell^{\mathrm{ob}}_{1:T}\right)\|\mathbb{P}_{k}\left(\ell^{\mathrm{ob}}_{1:T}\right)\right)
(a)1Kk=1Kln22DKL(0(1:Tob)k(1:Tob))\displaystyle\overset{\text{(a)}}{\leq}\frac{1}{K}\sum_{k=1}^{K}\sqrt{\frac{\ln 2}{2}}\sqrt{D_{\mathrm{KL}}\left(\mathbb{P}_{0}\left(\ell^{\mathrm{ob}}_{1:T}\right)\|\mathbb{P}_{k}\left(\ell^{\mathrm{ob}}_{1:T}\right)\right)}
(b)ln222log2Tϵ22σ21Kk=1K𝔼0[S¯k]+2Bex\displaystyle\overset{\text{(b)}}{\leq}\sqrt{\frac{\ln 2}{2}}\sqrt{2\log_{2}T\cdot\frac{\epsilon^{2}}{2\sigma^{2}}}\cdot\frac{1}{K}\sum_{k=1}^{K}\sqrt{\mathbb{E}_{0}\left[\bar{S}^{k}\right]+2B_{\mathrm{ex}}}
(c)ln222log2Tϵ22σ21Kk=1K(𝔼0[S¯k]+2Bex)\displaystyle\overset{\text{(c)}}{\leq}\sqrt{\frac{\ln 2}{2}}\sqrt{2\log_{2}T\cdot\frac{\epsilon^{2}}{2\sigma^{2}}}\cdot\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\mathbb{E}_{0}\left[\bar{S}^{k}\right]+2B_{\mathrm{ex}}\right)}
=(d)ϵσln2log2TK𝔼0[S¯]+Bex,\displaystyle\overset{\text{(d)}}{=}\frac{\epsilon}{\sigma}\sqrt{\frac{\ln 2\cdot\log_{2}T}{K}}\sqrt{\mathbb{E}_{0}\left[\bar{S}\right]+B_{\mathrm{ex}}}, (8)

where (a) is from Pinsker’s inequality, (b) is from Eq. (7), (c) is from Jensen’s inequality, and (d) is from k=1K𝔼0[S¯k]=2𝔼0[S¯]\sum_{k=1}^{K}\mathbb{E}_{0}\left[\bar{S}^{k}\right]=2\mathbb{E}_{0}\left[\bar{S}\right].

With all the above results, we are ready to derive a lower bound on 𝔼[RTinit]\mathbb{E}\left[R_{T}^{\mathrm{init}}\right] after showing two intermediate steps. Let NkN_{k} be the number of times action k[K]k\in[K] is played up to round TT (which is a random variable). We first assume that the deterministic learning algorithm makes at most ϵT\epsilon T switches on any loss sequence, which will be used for deriving Eq. (9) below, and we will later relax this assumption. Under this assumption, we have

𝔼0[S¯]𝔼[S¯]\displaystyle\mathbb{E}_{0}\left[\bar{S}\right]-\mathbb{E}\left[\bar{S}\right] =(a)1Kk=1K(𝔼0[S¯]𝔼k[S¯])\displaystyle\overset{\text{(a)}}{=}\frac{1}{K}\sum_{k=1}^{K}\left(\mathbb{E}_{0}\left[\bar{S}\right]-\mathbb{E}_{k}\left[\bar{S}\right]\right)
=(b)1Kk=1Kz=1T(0(S¯z)k(S¯z))\displaystyle\overset{\text{(b)}}{=}\frac{1}{K}\sum_{k=1}^{K}\sum_{z=1}^{T}\left(\mathbb{P}_{0}(\bar{S}\geq z)-\mathbb{P}_{k}(\bar{S}\geq z)\right)
=(c)1Kk=1Kz=1ϵT(0(S¯z)k(S¯z))\displaystyle\overset{\text{(c)}}{=}\frac{1}{K}\sum_{k=1}^{K}\sum_{z=1}^{\epsilon T}\left(\mathbb{P}_{0}(\bar{S}\geq z)-\mathbb{P}_{k}(\bar{S}\geq z)\right)
(d)ϵTKk=1KDTV(0(1:Tob)k(1:Tob)),\displaystyle\overset{\text{(d)}}{\leq}\frac{\epsilon T}{K}\sum_{k=1}^{K}D_{\mathrm{TV}}\left(\mathbb{P}_{0}\left(\ell^{\mathrm{ob}}_{1:T}\right)\|\mathbb{P}_{k}\left(\ell^{\mathrm{ob}}_{1:T}\right)\right), (9)

where (a) is from the definition that 𝔼[]:=1Kk=1K𝔼k[]\mathbb{E}[\cdot]:=\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}_{k}[\cdot], (b) is from rewriting the expectations, (c) is from the assumption of no more than ϵT\epsilon T switches, and (d) is from the definition of the total variation. Also, we have

k=1K𝔼k[Nk]T\displaystyle\sum_{k=1}^{K}\mathbb{E}_{k}[N_{k}]-T =(a)k=1K(𝔼k[Nk]𝔼0[Nk])\displaystyle\overset{\text{(a)}}{=}\sum_{k=1}^{K}\left(\mathbb{E}_{k}[N_{k}]-\mathbb{E}_{0}[N_{k}]\right)
=(b)k=1Kz=1T(k(Nkz)0(Nkz))\displaystyle\overset{\text{(b)}}{=}\sum_{k=1}^{K}\sum_{z=1}^{T}\left(\mathbb{P}_{k}(N_{k}\geq z)-\mathbb{P}_{0}(N_{k}\geq z)\right)
(c)Tk=1KDTV(0(1:Tob)k(1:Tob)),\displaystyle\overset{\text{(c)}}{\leq}T\sum_{k=1}^{K}D_{\mathrm{TV}}\left(\mathbb{P}_{0}\left(\ell^{\mathrm{ob}}_{1:T}\right)\|\mathbb{P}_{k}\left(\ell^{\mathrm{ob}}_{1:T}\right)\right), (10)

where (a) is from k=1K𝔼0[Nk]=T\sum_{k=1}^{K}\mathbb{E}_{0}[N_{k}]=T, (b) is from rewriting the expectations, and (c) is from the definition of the total variation.

We now lower bound the expected value of RTinitR_{T}^{\mathrm{init}} as follows:

𝔼[RTinit]\displaystyle\mathbb{E}\left[R_{T}^{\mathrm{init}}\right] =(a)1Kk=1K𝔼[ϵ(TNk)+S¯|k=k]\displaystyle\overset{\text{(a)}}{=}\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}\left[\epsilon(T-N_{k})+\bar{S}|k^{*}=k\right]
=ϵTϵKk=1K𝔼k[Nk]+𝔼[S¯]\displaystyle=\epsilon T-\frac{\epsilon}{K}\sum_{k=1}^{K}\mathbb{E}_{k}\left[{N}_{k}\right]+\mathbb{E}\left[\bar{S}\right]
(b)ϵTϵTK2ϵTKk=1KDTV(0(1:Tob)k(1:Tob))+𝔼0[S¯]\displaystyle\overset{\text{(b)}}{\geq}\epsilon T-\frac{\epsilon T}{K}-\frac{2\epsilon T}{K}\sum_{k=1}^{K}D_{\mathrm{TV}}\left(\mathbb{P}_{0}\left(\ell^{\mathrm{ob}}_{1:T}\right)\|\mathbb{P}_{k}\left(\ell^{\mathrm{ob}}_{1:T}\right)\right)+\mathbb{E}_{0}[\bar{S}]
(c)ϵT22ln2ϵ2Tlog2TσK𝔼0[S¯]+Bex+𝔼0[S¯]\displaystyle\overset{\text{(c)}}{\geq}\frac{\epsilon T}{2}-\frac{2\sqrt{\ln 2}\cdot\epsilon^{2}T\sqrt{\log_{2}T}}{\sigma\sqrt{K}}\sqrt{\mathbb{E}_{0}\left[\bar{S}\right]+B_{\mathrm{ex}}}+\mathbb{E}_{0}\left[\bar{S}\right]
(d)ϵT22ln2ϵ2Tlog2TσK(𝔼0[S¯]+Bex)+𝔼0[S¯]\displaystyle\overset{\text{(d)}}{\geq}\frac{\epsilon T}{2}-\frac{2\sqrt{\ln 2}\cdot\epsilon^{2}T\sqrt{\log_{2}T}}{\sigma\sqrt{K}}\left(\sqrt{\mathbb{E}_{0}\left[\bar{S}\right]}+\sqrt{B_{\mathrm{ex}}}\right)+\mathbb{E}_{0}\left[\bar{S}\right]
(e)ϵT2ln2ϵ4T2log2TKσ22ln2ϵ2Tlog2TσKBex,\displaystyle\overset{\text{(e)}}{\geq}\frac{\epsilon T}{2}-\frac{\ln 2\cdot\epsilon^{4}T^{2}\log_{2}T}{K\sigma^{2}}-\frac{2\sqrt{\ln 2}\cdot\epsilon^{2}T\sqrt{\log_{2}T}}{\sigma\sqrt{K}}\sqrt{B_{\mathrm{ex}}}, (11)

where (a) is from the regret definition (i.e., Eq. (1)), (b) is from Eqs. (9) and (10), (c) is from Eq. (8), (d) is from an elementary inequality: x+yx+y,x,y0\sqrt{x}+\sqrt{y}\geq\sqrt{x+y},\forall x,y\geq 0, and (e) is obtained by minimizing the quadratic function of 𝔼0[S¯]\sqrt{\mathbb{E}_{0}[\bar{S}]}.
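
For completeness, the optimization behind step (e) is a one-line side calculation; here x and c are shorthands introduced only for this display, with x standing for the square root of the expected number of switches under the measure with no special action:

\min_{x\geq 0}\left(x^{2}-cx\right)=-\frac{c^{2}}{4}\quad\text{at }x=\frac{c}{2},\qquad c:=\frac{2\sqrt{\ln 2}\cdot\epsilon^{2}T\sqrt{\log_{2}T}}{\sigma\sqrt{K}},\qquad\frac{c^{2}}{4}=\frac{\ln 2\cdot\epsilon^{4}T^{2}\log_{2}T}{K\sigma^{2}},

which is exactly the middle term appearing in Eq. (11).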

Now, we turn to lower bound 𝔼[RT]\mathbb{E}\left[R_{T}\right]. By Lemma 3, if it holds that ϵ1/6\epsilon\leq 1/6 and σ=1/(9log2T)\sigma=1/(9\log_{2}T), then we have 𝔼[RTinit]𝔼[RT]ϵT/6\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]-\mathbb{E}\left[R_{T}\right]\leq\epsilon T/6. We first assume ϵ1/6\epsilon\leq 1/6 and later show that the selected ϵ\epsilon satisfies this condition. Then, we have

𝔼[RT]\displaystyle\mathbb{E}\left[R_{T}\right] 𝔼[RTinit]ϵT/6\displaystyle\geq\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]-\epsilon T/6
ϵT381ln2ϵ4T2(log2T)3K18ln2ϵ2T(log2T)3/2KBex,\displaystyle\geq\frac{\epsilon T}{3}-\frac{81\ln 2\cdot\epsilon^{4}T^{2}(\log_{2}T)^{3}}{K}-\frac{18\sqrt{\ln 2}\cdot\epsilon^{2}T(\log_{2}T)^{3/2}}{\sqrt{K}}\sqrt{B_{\mathrm{ex}}}, (12)

where the last step is from Eq. (11) and choosing σ=1/(9log2T)\sigma=1/(9\log_{2}T).

We now consider two cases for BexB_{\mathrm{ex}}: Bexc1K1/6T1/3\sqrt{B_{\mathrm{ex}}}\leq c_{1}{K^{1/6}T^{1/3}} and Bex>c1K1/6T1/3\sqrt{B_{\mathrm{ex}}}>c_{1}{K^{1/6}T^{1/3}}, for some c1>0c_{1}>0.

In the first case of Bexc1K1/6T1/3\sqrt{B_{\mathrm{ex}}}\leq c_{1}{K^{1/6}T^{1/3}}, we have

𝔼[RT]\displaystyle\mathbb{E}\left[R_{T}\right] (a)ϵT381ln2ϵ4T2(log2T)3K18ln2c1ϵ2T(log2T)3/2KK1/6T1/3\displaystyle\overset{\text{(a)}}{\geq}\frac{\epsilon T}{3}-\frac{81\ln 2\cdot\epsilon^{4}T^{2}(\log_{2}T)^{3}}{K}-\frac{18\sqrt{\ln 2}\cdot c_{1}\epsilon^{2}T(\log_{2}T)^{3/2}}{\sqrt{K}}\cdot{K^{1/6}T^{1/3}}
=(b)c2K1/3T2/33(log2T)3/281ln2c24K1/3T2/3(log2T)318ln2c1c22K1/3T2/3(log2T)3/2\displaystyle\overset{\text{(b)}}{=}\frac{c_{2}K^{1/3}T^{2/3}}{3(\log_{2}T)^{3/2}}-\frac{81\ln 2\cdot c_{2}^{4}K^{1/3}T^{2/3}}{(\log_{2}T)^{3}}-\frac{18\sqrt{\ln 2}\cdot c_{1}c_{2}^{2}K^{1/3}T^{2/3}}{(\log_{2}T)^{3/2}}
(c)(c2381ln2c2418ln2c1c22)K1/3T2/318(log2T)3\displaystyle\overset{\text{(c)}}{\geq}\left(\frac{c_{2}}{3}-81\ln 2\cdot c_{2}^{4}-18\sqrt{\ln 2}\cdot c_{1}c_{2}^{2}\right)\frac{K^{1/3}T^{2/3}}{18(\log_{2}T)^{3}}
=Ω~(K1/3T2/3),\displaystyle=\widetilde{\Omega}(K^{1/3}T^{2/3}), (13)

where (a) is from Bexc1K1/6T1/3\sqrt{B_{\mathrm{ex}}}\leq c_{1}{K^{1/6}T^{1/3}}, (b) is obtained by choosing ϵ=c2K1/3T1/3(log2T)3/2\epsilon=c_{2}\frac{K^{1/3}}{T^{1/3}(\log_{2}T)^{3/2}} (where c2>0c_{2}>0 satisfies 1381ln2c2318ln2c1c2>0\frac{1}{3}-81\ln 2\cdot c_{2}^{3}-18\sqrt{\ln 2}\cdot c_{1}c_{2}>0), and (c) is simply due to (log2T)3(log2T)3/2(\log_{2}T)^{3}\geq(\log_{2}T)^{3/2} for a sufficiently large TT. Since KTK\leq T, we have ϵ1/6\epsilon\leq 1/6 when TT is sufficiently large.

In the second case of Bex>c1K1/6T1/3\sqrt{B_{\mathrm{ex}}}>c_{1}K^{1/6}T^{1/3}, we have

𝔼[RT]\displaystyle\mathbb{E}\left[R_{T}\right] (a)ϵT381ln2ϵ4T2(log2T)3KBexc1K1/6T1/318ln2ϵ2T(log2T)3/2KBex\displaystyle\overset{\text{(a)}}{\geq}\frac{\epsilon T}{3}-\frac{81\ln 2\cdot\epsilon^{4}T^{2}(\log_{2}T)^{3}}{K}\cdot\frac{\sqrt{B_{\mathrm{ex}}}}{c_{1}K^{1/6}T^{1/3}}-\frac{18\sqrt{\ln 2}\cdot\epsilon^{2}T(\log_{2}T)^{3/2}}{\sqrt{K}}\sqrt{B_{\mathrm{ex}}}
=ϵT381ln2ϵ4T5/3(log2T)3c1K7/6Bex18ln2ϵ2T(log2T)3/2KBex\displaystyle=\frac{\epsilon T}{3}-\frac{81\ln 2\cdot\epsilon^{4}T^{5/3}(\log_{2}T)^{3}}{c_{1}K^{7/6}}\sqrt{B_{\mathrm{ex}}}-\frac{18\sqrt{\ln 2}\cdot\epsilon^{2}T(\log_{2}T)^{3/2}}{\sqrt{K}}\sqrt{B_{\mathrm{ex}}}
=(b)c3TK3(log2T)3/2Bex81ln2c34K5/6T5/3c1(log2T)3(Bex)3/218ln2c32KT(log2T)3/2Bex\displaystyle\overset{\text{(b)}}{=}\frac{c_{3}T\sqrt{K}}{3(\log_{2}T)^{3/2}\cdot\sqrt{B_{\mathrm{ex}}}}-\frac{81\ln 2\cdot c_{3}^{4}K^{5/6}T^{5/3}}{c_{1}(\log_{2}T)^{3}\cdot(B_{\mathrm{ex}})^{3/2}}-\frac{18\sqrt{\ln 2}\cdot c_{3}^{2}\sqrt{K}T}{(\log_{2}T)^{3/2}\cdot\sqrt{B_{\mathrm{ex}}}}
(c)c3TK3(log2T)3/2Bex81ln2c34KTc13(log2T)3Bex18ln2c32KT(log2T)3/2Bex\displaystyle\overset{\text{(c)}}{\geq}\frac{c_{3}T\sqrt{K}}{3(\log_{2}T)^{3/2}\cdot\sqrt{B_{\mathrm{ex}}}}-\frac{81\ln 2\cdot c_{3}^{4}\sqrt{K}T}{c_{1}^{3}(\log_{2}T)^{3}\cdot\sqrt{B_{\mathrm{ex}}}}-\frac{18\sqrt{\ln 2}\cdot c_{3}^{2}\sqrt{K}T}{(\log_{2}T)^{3/2}\cdot\sqrt{B_{\mathrm{ex}}}}
(c3381ln2c34c1318ln2c32)KT(log2T)3Bex\displaystyle\geq\left(\frac{c_{3}}{3}-\frac{81\ln 2\cdot c_{3}^{4}}{c_{1}^{3}}-18\sqrt{\ln 2}\cdot c_{3}^{2}\right)\frac{\sqrt{K}T}{(\log_{2}T)^{3}\cdot\sqrt{B_{\mathrm{ex}}}}
=Ω~(TK/Bex),\displaystyle=\widetilde{\Omega}\left(T\sqrt{K/B_{\mathrm{ex}}}\right), (14)

where (a) is due to Bexc1K1/6T1/3>1\frac{\sqrt{B_{\mathrm{ex}}}}{c_{1}K^{1/6}T^{1/3}}>1, (b) is obtained by choosing ϵ=c3K(log2T)3/2Bex\epsilon=\frac{c_{3}\sqrt{K}}{(\log_{2}T)^{3/2}\sqrt{B_{\mathrm{ex}}}} (where c3>0c_{3}>0 satisfies 1/381ln2c33/c1318ln2c3>0{1}/{3}-81\ln 2\cdot{c_{3}^{3}}/c_{1}^{3}-18\sqrt{\ln 2}\cdot c_{3}>0), and (c) again is due to Bexc1K1/6T1/3>1\frac{\sqrt{B_{\mathrm{ex}}}}{c_{1}K^{1/6}T^{1/3}}>1 (applied to the second term). Since KTK\leq T, we have ϵ=c3K(log2T)3/2Bexc3K1/3c1T1/3(log2T)3/21/6\epsilon=\frac{c_{3}\sqrt{K}}{(\log_{2}T)^{3/2}\sqrt{B_{\mathrm{ex}}}}\leq\frac{c_{3}K^{1/3}}{c_{1}T^{1/3}(\log_{2}T)^{3/2}}\leq 1/6 when TT is sufficiently large.

Now, we want to relax the assumption that the deterministic learning algorithm makes no more than ϵT\epsilon T switches. Similar to the proof of Theorem 2 in Dekel et al. (2013), we consider the following: If the learning algorithm makes more than ϵT\epsilon T switches, then we halt the algorithm at the point when there are exactly ϵT\epsilon T switches and repeat its action after the last switch throughout the rest of the game. We use RThaltR_{T}^{\text{halt}} to denote the regret of this halted algorithm over the same loss sequence as in RTR_{T}.

We consider two cases for the number of switches made by the original learning algorithm: S¯ϵT\bar{S}\leq\epsilon T and S¯>ϵT\bar{S}>\epsilon T.

In the first case of S¯ϵT\bar{S}\leq\epsilon T, halting does not happen, and trivially, we have RThalt=RTR_{T}^{\text{halt}}=R_{T}.

In the second case of S¯>ϵT\bar{S}>\epsilon T, the original learning algorithm makes more than ϵT\epsilon T switches. We use TT^{\prime} to denote the round index at which the ϵT\epsilon T-th switch happens. Clearly, we have ϵTT\epsilon T\leq T^{\prime}. As a result, the halted algorithm keeps playing action XTX_{T^{\prime}} from round T+1T^{\prime}+1 to the end of the game. We now rewrite RThaltR_{T}^{\text{halt}} and RTR_{T} as follows:

RT\displaystyle R_{T} =t=1T(t[Xt]t[k])+ϵT+t=T+1T(t[Xt]t[k])+(S¯ϵT),\displaystyle=\sum_{t=1}^{T^{\prime}}\left(\ell_{t}[X_{t}]-\ell_{t}[k^{*}]\right)+\epsilon T+\sum_{t=T^{\prime}+1}^{T}\left(\ell_{t}[X_{t}]-\ell_{t}[k^{*}]\right)+(\bar{S}-\epsilon T),
RThalt\displaystyle R_{T}^{\text{halt}} =t=1T(t[Xt]t[k])+ϵT+t=T+1T(t[XT]t[k]).\displaystyle=\sum_{t=1}^{T^{\prime}}\left(\ell_{t}[X_{t}]-\ell_{t}[k^{*}]\right)+\epsilon T+\sum_{t=T^{\prime}+1}^{T}\left(\ell_{t}[X_{T^{\prime}}]-\ell_{t}[k^{*}]\right).

By taking the difference between RThaltR_{T}^{\text{halt}} and RTR_{T} and then taking the expectation with respect to the randomness from loss generation, we have

𝔼[RThaltRT]=𝔼[t=T+1T(t[XT]t[Xt])]ϵT+𝔼[ϵTS¯]<0ϵT.\mathbb{E}\left[R_{T}^{\text{halt}}-R_{T}\right]=\underbrace{\mathbb{E}\left[\sum_{t=T^{\prime}+1}^{T}\left(\ell_{t}[X_{T^{\prime}}]-\ell_{t}[X_{t}]\right)\right]}_{\leq\epsilon T}+\underbrace{\mathbb{E}\left[\epsilon T-\bar{S}\right]}_{<0}\leq\epsilon T.

To see this, we first observe that at each round, the loss gap between actions is either ϵ\epsilon or 0 in expectation (because the Gaussian noise we add has zero mean). Therefore, the first term 𝔼[t=T+1T(t[XT]t[Xt])]\mathbb{E}\left[\sum_{t=T^{\prime}+1}^{T}\left(\ell_{t}[X_{T^{\prime}}]-\ell_{t}[X_{t}]\right)\right] can be bounded by ϵT\epsilon T (i.e., the gap at each round multiplied by the time horizon). Since the original learning algorithm makes more than ϵT\epsilon T switches, we have 𝔼[RT]ϵT\mathbb{E}\left[R_{T}\right]\geq\epsilon T, which implies 𝔼[RThalt]2𝔼[RT]\mathbb{E}\left[R_{T}^{\text{halt}}\right]\leq 2\mathbb{E}\left[R_{T}\right].

Combining the above two cases yields 𝔼[RThalt]2𝔼[RT]\mathbb{E}\left[R_{T}^{\text{halt}}\right]\leq 2\mathbb{E}\left[R_{T}\right]. Since we have already obtained a lower bound for 𝔼[RThalt]\mathbb{E}\left[R_{T}^{\text{halt}}\right] (i.e., Eqs. (13) and (14)), the same lower bound also holds for 𝔼[RT]\mathbb{E}\left[R_{T}\right] (within a constant factor of 2).

Finally, we complete the proof by applying Yao’s principle. ∎

We also give two remarks about technical details in the proof of Proposition 1.

Remark 11.

To conclude that 𝔼[RThalt]2𝔼[RT]\mathbb{E}\left[R_{T}^{\text{halt}}\right]\leq 2\mathbb{E}\left[R_{T}\right], Dekel et al. (2013) show that RThaltRT+ϵT2RTR_{T}^{\text{halt}}\leq R_{T}+\epsilon T\leq 2R_{T}, which is stronger than what we need, since our argument only has to hold in expectation. This stronger result relies on the fact that the loss gap between actions is either ϵ\epsilon or 0 at each round. However, this may no longer be true after introducing an additional action-dependent noise as in Shi et al. (2022). Despite this difference, one can still show 𝔼[RThalt]𝔼[RT]+ϵT2𝔼[RT]\mathbb{E}\left[R_{T}^{\text{halt}}\right]\leq\mathbb{E}\left[R_{T}\right]+\epsilon T\leq 2\mathbb{E}\left[R_{T}\right], which is what we use to prove Proposition 1.

Remark 12.

Some readers may ask: if the deterministic algorithm switches more than ϵT\epsilon T times, should the switching costs alone not already imply the desired lower bound on the regret? Why is it necessary to show a reduction from switch-limited algorithms to arbitrary algorithms? To see this, we note that the lower bound of Ω~(T2/3)\widetilde{\Omega}(T^{2/3}) is obtained after selecting ϵ\epsilon to be of order Θ~(T1/3)\widetilde{\Theta}(T^{-1/3}), and this selection is based on the preceding analysis, which in turn relies on the assumption that the algorithm makes no more than ϵT\epsilon T switches. Therefore, the reduction from switch-limited algorithms to arbitrary algorithms is necessary.

Appendix C Proof of Proposition 2

Before proving Proposition 2, we first present two lemmas on the properties of SD. These are straightforward extensions of Lemmas 1 and 2 in Geulen et al. (2010) to their batched versions; the key difference is that we consider batches instead of rounds. While the proofs follow the same line of analysis, we provide them below for completeness.

Lemma 5.

For the instantiations πfull\pi_{\text{full}} and πflex\pi_{\text{flex}} of Algorithm 1, over any loss sequence, we have

(Ab=k)=𝔼[wb[k]],k[K],b[N].\mathbb{P}\left(A_{b}=k\right)=\mathbb{E}[w_{b}[k]],~{}\forall k\in[K],\forall b\in[N].
Proof of Lemma 5.

We first note that in these two instantiations of Algorithm 1, the feedback depends on the randomly chosen ubu_{b} only, and is thus independent of what actions are taken by the learner throughout the game. As a result, the whole feedback sequence ^1:N\widehat{\ell}_{1:N} can be fixed even before the learner’s actions are determined.

In the following, we will show by induction that conditioning on any random feedback sequence ^1:N\widehat{\ell}_{1:N}, it holds (almost surely) that (Ab=k|^1:N)=wb[k],k[K],b[N]\mathbb{P}\left(A_{b}=k|\widehat{\ell}_{1:N}\right)=w_{b}[k],\forall k\in[K],\forall b\in[N].

The base case of b=1b=1 is trivial due to the algorithm design (specifically, the uniform action weight initialization). In the following, we move forward to the induction step, i.e., we show that for every b2b\geq 2, if it holds that (Ab1=k|^1:N)=wb1[k],k[K]\mathbb{P}\left(A_{b-1}=k|\widehat{\ell}_{1:N}\right)=w_{b-1}[k],~{}\forall k\in[K], then it also holds that (Ab=k|^1:N)=wb[k],k[K]\mathbb{P}\left(A_{b}=k|\widehat{\ell}_{1:N}\right)=w_{b}[k],~{}\forall k\in[K], as follows:

(Ab=k|^1:N)\displaystyle\mathbb{P}\left(A_{b}=k|\widehat{\ell}_{1:N}\right) =(a)Wb[k]/Wb1[k](Ab1=k|^1:N)\displaystyle\overset{\text{(a)}}{=}W_{b}[k]/W_{b-1}[k]\cdot\mathbb{P}\left(A_{b-1}=k|\widehat{\ell}_{1:N}\right)
+i=1K(1Wb[i]/Wb1[i])wb[k](Ab1=i|^1:N)\displaystyle\quad+\sum_{i=1}^{K}\left(1-W_{b}[i]/W_{b-1}[i]\right)\cdot w_{b}[k]\cdot\mathbb{P}\left(A_{b-1}=i|\widehat{\ell}_{1:N}\right)
=(b)Wb[k]/Wb1[k]wb1[k]+wb[k]i=1K(1Wb[i]/Wb1[i])wb1[i]\displaystyle\overset{\text{(b)}}{=}W_{b}[k]/W_{b-1}[k]\cdot w_{b-1}[k]+w_{b}[k]\cdot\sum_{i=1}^{K}\left(1-W_{b}[i]/W_{b-1}[i]\right)\cdot w_{b-1}[i]
=Wb[k]Wb1[k]Wb1[k]i=1KWb1[i]+wb[k]i=1K(Wb1[i]Wb[i]Wb1[i])Wb1[i]j=1KWb1[j]\displaystyle=\frac{W_{b}[k]}{W_{b-1}[k]}\cdot\frac{W_{b-1}[k]}{\sum_{i=1}^{K}{W_{b-1}[i]}}+w_{b}[k]\cdot\sum_{i=1}^{K}\left(\frac{W_{b-1}[i]-W_{b}[i]}{W_{b-1}[i]}\right)\cdot\frac{W_{b-1}[i]}{\sum_{j=1}^{K}{W_{b-1}[j]}}
=Wb[k]i=1KWb1[i]+wb[k]i=1KWb1[i]i=1KWb[i]i=1KWb1[i]\displaystyle=\frac{W_{b}[k]}{\sum_{i=1}^{K}{W_{b-1}[i]}}+w_{b}[k]\cdot\frac{\sum_{i=1}^{K}{W_{b-1}[i]}-\sum_{i=1}^{K}{W_{b}[i]}}{\sum_{i=1}^{K}{W_{b-1}[i]}}
=wb[k]i=1KWb[i]i=1KWb1[i]+wb[k](1i=1KWb[i]i=1KWb1[i])\displaystyle=w_{b}[k]\cdot\frac{\sum_{i=1}^{K}{W_{b}[i]}}{\sum_{i=1}^{K}{W_{b-1}[i]}}+w_{b}[k]\cdot\left(1-\frac{\sum_{i=1}^{K}{W_{b}[i]}}{\sum_{i=1}^{K}{W_{b-1}[i]}}\right)
=wb[k],\displaystyle=w_{b}[k],

where (a) is due to Line 9 in Algorithm 1 (specifically, there are two disjoint cases for action kk to be played in batch bb: (i) action kk was played in batch b1b-1, and it stays unchanged with probability exp(η^b1[k])=Wb[k]/Wb1[k]\exp(-\eta\cdot\widehat{\ell}_{b-1}[k])={W_{b}[k]}/{W_{b-1}[k]}; (ii) action kk is selected based on fresh randomness (i.e., the previous action ii does not stay unchanged, which happens with probability (1Wb[i]/Wb1[i])(1-{W_{b}[i]}/{W_{b-1}[i]})), regardless of which action was played in batch b1b-1), and (b) is from the inductive hypothesis that (Ab1=i|^1:N)=wb1[i],i[K]\mathbb{P}\left(A_{b-1}=i|\widehat{\ell}_{1:N}\right)=w_{b-1}[i],\forall i\in[K].

Finally, taking the expectation on both sides of the above completes the proof. ∎

Lemma 5 implies that SD does not change the marginal distribution of action selection. Hence, one still enjoys the same regret guarantee as that of standard OMD (i.e., without SD).
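
To make the action-selection mechanism concrete, the following is a minimal Python sketch of one batch-to-batch transition under SD; the function name sd_transition and its argument names are hypothetical and only illustrate Line 9 of Algorithm 1 as it is used in the proofs of Lemmas 5 and 6.

import numpy as np

def sd_transition(prev_action, prev_loss_est, new_weights, eta, rng):
    # Hypothetical sketch of Line 9 of Algorithm 1 (shrinking dartboard): keep the
    # previous action with probability exp(-eta * lhat_{b-1}[A_{b-1}]), which equals
    # W_b[A_{b-1}] / W_{b-1}[A_{b-1}]; otherwise redraw a fresh action from the new
    # OMD distribution w_b.
    w_b = new_weights / new_weights.sum()       # w_b[k] = W_b[k] / sum_i W_b[i]
    keep_prob = np.exp(-eta * prev_loss_est[prev_action])
    if rng.random() < keep_prob:
        return prev_action                      # no switch between consecutive batches
    return int(rng.choice(len(w_b), p=w_b))     # fresh draw from w_b

By the induction in the proof of Lemma 5, the marginal distribution of the returned action is w_b, while a switch occurs with probability 1 - exp(-eta * lhat_{b-1}[A_{b-1}]) <= eta * lhat_{b-1}[A_{b-1}], which is exactly the quantity bounded in Lemma 6 below.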

Lemma 6.

For the instantiations πfull\pi_{\text{full}} and πflex\pi_{\text{flex}} of Algorithm 1, over any loss sequence, the expected number of switches satisfies the following:

𝔼[t=1T𝕀{XtXt1}]b=2Nη𝔼[^b1[Ab1]].\mathbb{E}\left[\sum_{t=1}^{T}\mathbb{I}\{X_{t}\neq X_{t-1}\}\right]\leq\sum_{b=2}^{N}\eta\cdot\mathbb{E}\left[\widehat{\ell}_{b-1}[A_{b-1}]\right].
Proof of Lemma 6.

Based on the definition of switching costs, we have

𝔼[t=1T𝕀{XtXt1}]\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}\mathbb{I}_{\{X_{t}\neq X_{t-1}\}}\right] =(a)𝔼[b=2N𝕀{AbAb1}]\displaystyle\overset{\text{(a)}}{=}\mathbb{E}\left[\sum_{b=2}^{N}\mathbb{I}_{\{A_{b}\neq A_{b-1}\}}\right]
=b=2N𝔼[𝔼[𝕀{AbAb1}|^1:b1,Ab1]]\displaystyle=\sum_{b=2}^{N}\mathbb{E}\left[\mathbb{E}\left[\mathbb{I}_{\{A_{b}\neq A_{b-1}\}}|\widehat{\ell}_{1:b-1},A_{b-1}\right]\right]
(b)b=2N𝔼[1exp(η^b1[Ab1])]\displaystyle\overset{\text{(b)}}{\leq}\sum_{b=2}^{N}\mathbb{E}\left[1-\exp(-\eta\cdot\widehat{\ell}_{b-1}[A_{b-1}])\right]
(c)b=2Nη𝔼[^b1[Ab1]],\displaystyle\overset{\text{(c)}}{\leq}\sum_{b=2}^{N}\eta\cdot\mathbb{E}\left[\widehat{\ell}_{b-1}[A_{b-1}]\right],

where (a) is because switches can happen only between two consecutive batches, (b) is due to the action selection rule of Algorithm 1 (Line 9), and (c) is from the elementary inequality 1exp(x)x,x>01-\exp(-x)\leq x,\forall x>0. ∎

Proof of Proposition 2.

Our goal here is to establish an upper bound on the expected regret RTπfullR_{T}^{\pi_{\text{full}}} of algorithm πfull\pi_{\text{full}}, which consists of two parts (cf. Eqs. (2) and (1)): (i) the standard regret in terms of loss and (ii) the switching costs. We will bound each part separately and then combine the two bounds to obtain the final result in the proposition.

To start with, let us establish an upper bound on the standard regret in terms of loss. To this end, we will build upon the classic analysis of OMD (cf. (Orabona, 2019, Section 6.6)). That is, for any (random) sequence ^1:N[0,)K\widehat{\ell}_{1:N}\in[0,\infty)^{K}, learning rate η>0\eta>0, and vector uu such that its YuY_{u}-th coordinate is 1 and all the others are 0, it holds almost surely that

b=1Nwbu,^b\displaystyle\sum_{b=1}^{N}\left\langle w_{b}-u,\widehat{\ell}_{b}\right\rangle lnKη+η2b=1Nk=1K(^b[k])2wb[k],\displaystyle\leq\frac{\ln K}{\eta}+\frac{\eta}{2}\sum_{b=1}^{N}\sum_{k=1}^{K}(\widehat{\ell}_{b}[k])^{2}\cdot w_{b}[k],

where wb[k]=Wb[k]i=1KWb[i]w_{b}[k]=\frac{W_{b}[k]}{\sum_{i=1}^{K}W_{b}[i]} and Wb[k]=Wb1[k]exp(η^b1[k]),k[K]W_{b}[k]=W_{b-1}[k]\cdot\exp(-\eta\cdot\widehat{\ell}_{b-1}[k]),\forall k\in[K], i.e., line 8 in Algorithm 1. Taking the expectation on both sides yields that

𝔼[b=1Nwbu,^b]\displaystyle\mathbb{E}\left[\sum_{b=1}^{N}\left\langle w_{b}-u,\widehat{\ell}_{b}\right\rangle\right] lnKη+η2b=1Nk=1K𝔼[(^b[k])2wb[k]]\displaystyle\leq\frac{\ln K}{\eta}+\frac{\eta}{2}\sum_{b=1}^{N}\sum_{k=1}^{K}\mathbb{E}\left[(\widehat{\ell}_{b}[k])^{2}\cdot w_{b}[k]\right]
=lnKη+η2b=1Nk=1K𝔼[wb[k]𝔼[(^b[k])2|^1:b1]]\displaystyle=\frac{\ln K}{\eta}+\frac{\eta}{2}\sum_{b=1}^{N}\sum_{k=1}^{K}\mathbb{E}\left[w_{b}[k]\cdot\mathbb{E}\left[(\widehat{\ell}_{b}[k])^{2}|\widehat{\ell}_{1:b-1}\right]\right]
=(a)lnKη+η2b=1Nk=1K𝔼[wb[k](ub[k])2]\displaystyle\overset{\text{(a)}}{=}\frac{\ln K}{\eta}+\frac{\eta}{2}\sum_{b=1}^{N}\sum_{k=1}^{K}\mathbb{E}\left[w_{b}[k]\cdot\left(\ell_{u_{b}}[k]\right)^{2}\right]
(b)lnKη+ηB2K,\displaystyle\overset{\text{(b)}}{\leq}\frac{\ln K}{\eta}+\frac{\eta B}{2K}, (15)

where (a) follows from the algorithm design of πfull\pi_{\text{full}}, i.e., ^b=ub\widehat{\ell}_{b}=\ell_{u_{b}} and ubu_{b} is a randomly selected time slot within batch bb, and (b) comes from the boundedness of losses, the fact that k=1Kwb[k]=1,b[N]\sum_{k=1}^{K}w_{b}[k]=1,\forall b\in\left[N\right], and noting that N=B/KN=B/K.

We can further rewrite the left-hand-side (LHS) of the above inequality as follows:

𝔼[b=1Nwbu,^b]\displaystyle\mathbb{E}\left[\sum_{b=1}^{N}\left\langle w_{b}-u,\widehat{\ell}_{b}\right\rangle\right] =b=1N𝔼[𝔼[wbu,^b|^1:b1]]\displaystyle=\sum_{b=1}^{N}\mathbb{E}\left[\mathbb{E}\left[\left\langle w_{b}-u,\widehat{\ell}_{b}\right\rangle|\widehat{\ell}_{1:b-1}\right]\right]
=b=1N𝔼[wbu,𝔼[^b|^1:b1]]\displaystyle=\sum_{b=1}^{N}\mathbb{E}\left[\left\langle w_{b}-u,\mathbb{E}\left[\widehat{\ell}_{b}|\widehat{\ell}_{1:b-1}\right]\right\rangle\right]
=(a)b=1N𝔼[wbu,t=(b1)τ+1bτtτ]\displaystyle\overset{\text{(a)}}{=}\sum_{b=1}^{N}\mathbb{E}\left[\left\langle w_{b}-u,\frac{\sum_{t=(b-1)\tau+1}^{b\tau}\ell_{t}}{\tau}\right\rangle\right]
=1τb=1Nt=(b1)τ+1bτ𝔼[t[Ab]t[Yu]]\displaystyle=\frac{1}{\tau}\sum_{b=1}^{N}\sum_{t=(b-1)\tau+1}^{b\tau}\mathbb{E}\left[\ell_{t}[A_{b}]-\ell_{t}[Y_{u}]\right]
=1τt=1T𝔼[t[Xt]t[Yu]],\displaystyle=\frac{1}{\tau}\sum_{t=1}^{T}\mathbb{E}\left[\ell_{t}[X_{t}]-\ell_{t}[Y_{u}]\right], (16)

where (a) holds since for any k[K]k\in[K], we have 𝔼[^b[k]|^1:b1]=t=(b1)τ+1bτt[k]/τ\mathbb{E}\left[\widehat{\ell}_{b}[k]|\widehat{\ell}_{1:b-1}\right]=\sum_{t=(b-1)\tau+1}^{b\tau}\ell_{t}[k]/\tau. This is true since under πfull\pi_{\text{full}}, the choice of ubu_{b} for constructing ^b\widehat{\ell}_{b} is a round index chosen uniformly at random from the current batch bb.

Now, replacing the LHS of Eq. (15) with Eq. (16) yields

t=1T𝔼[t[Xt]t[Yu]]\displaystyle\sum_{t=1}^{T}\mathbb{E}\left[\ell_{t}[X_{t}]-\ell_{t}[Y_{u}]\right] τ(lnKη+ηB2K)\displaystyle\leq\tau\cdot\left(\frac{\ln K}{\eta}+\frac{\eta B}{2K}\right)
=KTlnKηB+ηT2,\displaystyle=\frac{KT\ln K}{\eta B}+\frac{\eta T}{2}, (17)

where the last step is due to τ=KTB\tau=\frac{KT}{B}.

After obtaining the above upper bound on the standard regret (i.e., without switching costs), we now turn to bound the switching costs under πfull\pi_{\text{full}}. To this end, we directly leverage Lemma 6 along with the loss estimate ^b=ub\widehat{\ell}_{b}=\ell_{u_{b}} to obtain that

𝔼[t=1T𝕀{XtXt1}]b=2Nη𝔼[^b1[Ab1]]=b=2B/Kη𝔼[^b1[Ab1]]ηBK.\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}\mathbb{I}_{\{X_{t}\neq X_{t-1}\}}\right]\leq\sum_{b=2}^{N}\eta\cdot\mathbb{E}\left[\widehat{\ell}_{b-1}[A_{b-1}]\right]=\sum_{b=2}^{B/K}\eta\cdot\mathbb{E}\left[\widehat{\ell}_{b-1}[A_{b-1}]\right]\leq\frac{\eta B}{K}. (18)

Finally, combining Eqs. (17) and (18), we can bound the total regret as follows:

RTπfull\displaystyle R_{T}^{\pi_{\mathrm{full}}} KTlnKηB+ηT2+ηBK\displaystyle\leq\frac{KT\ln K}{\eta B}+\frac{\eta T}{2}+\frac{\eta B}{K}
(a)KTlnKηB+3ηT2\displaystyle\overset{\text{(a)}}{\leq}\frac{KT\ln K}{\eta B}+\frac{3\eta T}{2}
=(b)T6KlnKB,\displaystyle\overset{\text{(b)}}{=}T\sqrt{\frac{6K\ln K}{B}},

where (a) is from BKTB\leq KT, and (b) is obtained by choosing η=2KlnK3B\eta=\sqrt{\frac{2K\ln K}{3B}}. Hence, we have completed the proof of Proposition 2. ∎
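
As a quick sanity check of step (b) above, the choice of η balances the two remaining terms via the AM–GM inequality; a and c below are shorthands introduced only for this display:

\frac{a}{\eta}+c\eta\geq 2\sqrt{ac}\quad\text{with equality at }\eta=\sqrt{a/c},\qquad a=\frac{KT\ln K}{B},\quad c=\frac{3T}{2}\;\Longrightarrow\;\eta=\sqrt{\frac{2K\ln K}{3B}},\quad 2\sqrt{ac}=T\sqrt{\frac{6K\ln K}{B}}.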

Appendix D Proof of Proposition 3

Proof of Proposition 3.

The organization of this proof is the same as that of the proof of Proposition 2, and the only difference lies in the loss estimate used in this instantiation.

We first consider the case where B<TB<T, i.e., MM can be any integer from [K][K]. Recall that BB is the total observation budget and MM is the number of observations made in each batch. Similarly, for any (random) sequence ^1,,^N[0,)K\widehat{\ell}_{1},\dots,\widehat{\ell}_{N}\in[0,\infty)^{K}, learning rate η>0\eta>0, and vector uu whose YuY_{u}-th coordinate is 1 and all the others are 0, it holds almost surely that

b=1Nwbu,^b\displaystyle\sum_{b=1}^{N}\left\langle w_{b}-u,\widehat{\ell}_{b}\right\rangle lnKη+η2b=1Nk=1K(^b[k])2wb[k].\displaystyle\leq\frac{\ln K}{\eta}+\frac{\eta}{2}\sum_{b=1}^{N}\sum_{k=1}^{K}(\widehat{\ell}_{b}[k])^{2}\cdot w_{b}[k].

Taking the expectation on both sides yields that

𝔼[b=1Nwbu,^b]\displaystyle\mathbb{E}\left[\sum_{b=1}^{N}\left\langle w_{b}-u,\widehat{\ell}_{b}\right\rangle\right] lnKη+η2b=1Nk=1K𝔼[(^b[k])2wb[k]]\displaystyle\leq\frac{\ln K}{\eta}+\frac{\eta}{2}\sum_{b=1}^{N}\sum_{k=1}^{K}\mathbb{E}\left[(\widehat{\ell}_{b}[k])^{2}\cdot w_{b}[k]\right]
=lnKη+η2b=1Nk=1K𝔼[𝔼[(^b[k])2wb[k]|^1:b1]]\displaystyle=\frac{\ln K}{\eta}+\frac{\eta}{2}\sum_{b=1}^{N}\sum_{k=1}^{K}\mathbb{E}\left[\mathbb{E}\left[(\widehat{\ell}_{b}[k])^{2}\cdot w_{b}[k]\bigg{|}\widehat{\ell}_{1:b-1}\right]\right]
=(a)lnKη+η2b=1Nk=1K𝔼[(ub[k]M/K)2wb[k]MK+0(1MK)]\displaystyle\overset{\text{(a)}}{=}\frac{\ln K}{\eta}+\frac{\eta}{2}\sum_{b=1}^{N}\sum_{k=1}^{K}\mathbb{E}\left[\left(\frac{\ell_{u_{b}}[k]}{M/K}\right)^{2}\cdot w_{b}[k]\cdot\frac{M}{K}+0\cdot(1-\frac{M}{K})\right]
(b)lnKη+ηKB2M2,\displaystyle\overset{\text{(b)}}{\leq}\frac{\ln K}{\eta}+\frac{\eta KB}{2M^{2}}, (19)

where (a) follows from the algorithm design of πflex\pi_{\text{flex}}, i.e., the loss estimate ^b[k]=𝕀{k𝒪ub}ub[k]M/K\widehat{\ell}_{b}[k]=\mathbb{I}\{k\in\mathcal{O}_{u_{b}}\}\cdot\frac{\ell_{u_{b}}[k]}{M/K} and ubu_{b} is a randomly selected time slot within batch bb, and (b) comes from the boundedness of losses, the fact that k=1Kwb[k]=1,b[N]\sum_{k=1}^{K}w_{b}[k]=1,\forall b\in\left[N\right], and noting that N=B/MN=B/M.

We can further rewrite the LHS of the above inequality as follows:

𝔼[b=1Nwbu,^b]\displaystyle\mathbb{E}\left[\sum_{b=1}^{N}\left\langle w_{b}-u,\widehat{\ell}_{b}\right\rangle\right] =b=1N𝔼[𝔼[wbu,^b|^1:b1]]\displaystyle=\sum_{b=1}^{N}\mathbb{E}\left[\mathbb{E}\left[\left\langle w_{b}-u,\widehat{\ell}_{b}\right\rangle\bigg{|}\widehat{\ell}_{1:b-1}\right]\right]
=b=1N𝔼[wbu,𝔼[^b|^1:b1]]\displaystyle=\sum_{b=1}^{N}\mathbb{E}\left[\left\langle w_{b}-u,\mathbb{E}\left[\widehat{\ell}_{b}\bigg{|}\widehat{\ell}_{1:b-1}\right]\right\rangle\right]
=(a)b=1N𝔼[wbu,t=(b1)τ+1bτtτ]\displaystyle\overset{\text{(a)}}{=}\sum_{b=1}^{N}\mathbb{E}\left[\left\langle w_{b}-u,\frac{\sum_{t=(b-1)\tau+1}^{b\tau}\ell_{t}}{\tau}\right\rangle\right]
=1τb=1Nt=(b1)τ+1bτ𝔼[t[Ab]t[Yu]]\displaystyle=\frac{1}{\tau}\sum_{b=1}^{N}\sum_{t=(b-1)\tau+1}^{b\tau}\mathbb{E}\left[\ell_{t}[A_{b}]-\ell_{t}[Y_{u}]\right]
=1τt=1T𝔼[t[Xt]t[Yu]],\displaystyle=\frac{1}{\tau}\sum_{t=1}^{T}\mathbb{E}\left[\ell_{t}[X_{t}]-\ell_{t}[Y_{u}]\right], (20)

where (a) holds since for any k[K]k\in[K], we have 𝔼[^b[k]|^1:b1]=(1MK)0+MKt=(b1)τ+1bτ1τt[k]M/K=t=(b1)τ+1bτt[k]/τ\mathbb{E}\left[\widehat{\ell}_{b}[k]\bigg{|}\widehat{\ell}_{1:b-1}\right]=(1-\frac{M}{K})\cdot 0+\frac{M}{K}\cdot\sum_{t=(b-1)\tau+1}^{b\tau}\frac{1}{\tau}\cdot\frac{\ell_{t}[k]}{M/K}={\sum_{t=(b-1)\tau+1}^{b\tau}\ell_{t}[k]}/{\tau}. This is true since under πflex\pi_{\text{flex}}, the choices of both ubu_{b} and 𝒪t\mathcal{O}_{t} for constructing ^b\widehat{\ell}_{b} are uniformly random and independent of each other.
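
A minimal Python sketch of this estimator is given below; the function name flex_loss_estimate is ours, and the assumption that the observed set is a uniformly random subset of M actions (so that each action is observed with probability M/K, independently of the played action) is made only to illustrate the unbiasedness argument above.

import numpy as np

def flex_loss_estimate(batch_losses, M, rng):
    # Hypothetical sketch of the loss estimate used by pi_flex (names are ours):
    # pick one round u_b uniformly at random from the batch, observe a uniformly
    # random subset of M actions at that round, and scale each observed loss by K/M.
    # Since each action is observed with probability M/K, the estimate is unbiased
    # for the batch-average loss vector, matching the display in (a) above.
    tau, K = batch_losses.shape                      # batch_losses[s, k] = loss of action k at round s
    u = rng.integers(tau)                            # uniformly chosen round within the batch
    observed = rng.choice(K, size=M, replace=False)  # observed subset of M actions
    estimate = np.zeros(K)
    estimate[observed] = batch_losses[u, observed] * (K / M)
    return estimate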

Now, replacing the LHS of Eq. (19) with Eq. (20) yields

t=1T𝔼[t[Xt]t[Yu]]\displaystyle\sum_{t=1}^{T}\mathbb{E}\left[\ell_{t}[X_{t}]-\ell_{t}[Y_{u}]\right] τ(lnKη+ηKB2M2)\displaystyle\leq\tau\cdot\left(\frac{\ln K}{\eta}+\frac{\eta KB}{2M^{2}}\right)
=MTlnKηB+ηTK2M,\displaystyle=\frac{MT\ln K}{\eta B}+\frac{\eta TK}{2M}, (21)

where in the equality, we replace τ\tau with MTB\frac{MT}{B}.

After obtaining the above upper bound on the standard regret (i.e., without switching cost), we now turn to bound the switching costs under πflex\pi_{\text{flex}}. To this end, we directly leverage Lemma 6 along with the loss estimate ^b[k]=𝕀{k𝒪ub}ub[k]M/K\widehat{\ell}_{b}[k]=\mathbb{I}\{k\in\mathcal{O}_{u_{b}}\}\cdot\frac{\ell_{u_{b}}[k]}{M/K} to obtain that

𝔼[t=1T𝕀{XtXt1}]b=2Nη𝔼[^b1[Ab1]]=b=2B/Mη𝔼[^b1[Ab1]]ηBMKM=ηBKM2.\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}\mathbb{I}_{\{X_{t}\neq X_{t-1}\}}\right]\leq\sum_{b=2}^{N}\eta\cdot\mathbb{E}\left[\widehat{\ell}_{b-1}[A_{b-1}]\right]=\sum_{b=2}^{B/M}\eta\cdot\mathbb{E}\left[\widehat{\ell}_{b-1}[A_{b-1}]\right]\leq\eta\frac{B}{M}\cdot\frac{K}{M}=\frac{\eta BK}{M^{2}}. (22)

Finally, combining Eqs. (21) and (22), we can bound the total regret as follows:

RTπflex\displaystyle R_{T}^{\pi_{\mathrm{flex}}} MTlnKηB+ηTK2M+ηBKM2\displaystyle\leq\frac{MT\ln K}{\eta B}+\frac{\eta TK}{2M}+\frac{\eta BK}{M^{2}}
(a)MTlnKηB+3ηKT2M\displaystyle\overset{\text{(a)}}{\leq}\frac{MT\ln K}{\eta B}+\frac{3\eta KT}{2M}
=(b)T6KlnKB,\displaystyle\overset{\text{(b)}}{=}T\sqrt{\frac{6K\ln K}{B}},

where (a) is from B<MTB<MT (recall that now we have B<TB<T and M1M\geq 1), and (b) is obtained by choosing η=M2lnK3BK\eta=M\sqrt{\frac{2\ln K}{3BK}}. Hence, we have completed the proof of Proposition 3.

The proof for the case of BTB\geq T is exactly the same, except for the (implicit) fact that we need batch size τ\tau to be well-defined, i.e., τ1\tau\geq 1. That is why in this case MM is less flexible: now MM needs to be sufficiently large to fully exploit the total budget. ∎

Appendix E Proof of Proposition 4

Proof of Proposition 4.

The organization of this proof is the same as that of the proof of Proposition 2, and the main difference lies in the commonly used importance-weighted estimator for bandit feedback. In addition, we note that it now suffices to directly bound the switching costs by the number of batches.

We still start with the same fundamental conclusion in OMD analysis. Specifically, for any (random) sequence ^1,,^N[0,)K\widehat{\ell}_{1},\dots,\widehat{\ell}_{N}\in[0,\infty)^{K}, learning rate η>0\eta>0, and vector uu such that its YuY_{u}-th coordinate is 1 and all the others are 0, it holds almost surely that

b=1Nwbu,^blnKη+η2b=1Nk=1K(^b[k])2wb[k].\sum_{b=1}^{N}\left\langle w_{b}-u,\widehat{\ell}_{b}\right\rangle\leq\frac{\ln K}{\eta}+\frac{\eta}{2}\sum_{b=1}^{N}\sum_{k=1}^{K}(\widehat{\ell}_{b}[k])^{2}w_{b}[k].

Taking the expectation on both sides, we have

𝔼[b=1Nwbu,^b]\displaystyle\mathbb{E}\left[\sum_{b=1}^{N}\left\langle w_{b}-u,\widehat{\ell}_{b}\right\rangle\right] lnKη+η2b=1N𝔼[k=1K(^b[k])2wb[k]]\displaystyle\leq\frac{\ln K}{\eta}+\frac{\eta}{2}\sum_{b=1}^{N}\mathbb{E}\left[\sum_{k=1}^{K}(\widehat{\ell}_{b}[k])^{2}w_{b}[k]\right]
=lnKη+η2b=1N𝔼[𝔼[k=1K(^b[k])2wb[k]|^1:b1]]\displaystyle=\frac{\ln K}{\eta}+\frac{\eta}{2}\sum_{b=1}^{N}\mathbb{E}\left[\mathbb{E}\left[\sum_{k=1}^{K}(\widehat{\ell}_{b}[k])^{2}w_{b}[k]\bigg{|}\widehat{\ell}_{1:b-1}\right]\right]
=(a)lnKη+η2b=1N𝔼[k=1Kwb[k]1τt=(b1)τ+1bτ(t[k])2(wb[k])2wb[k]]\displaystyle\overset{\text{(a)}}{=}\frac{\ln K}{\eta}+\frac{\eta}{2}\sum_{b=1}^{N}\mathbb{E}\left[\sum_{k=1}^{K}w_{b}[k]\cdot\frac{1}{\tau}\cdot\sum_{t=(b-1)\tau+1}^{b\tau}\frac{(\ell_{t}[k])^{2}}{(w_{b}[k])^{2}}\cdot w_{b}[k]\right]
(b)lnKη+η2NK\displaystyle\overset{\text{(b)}}{\leq}\frac{\ln K}{\eta}+\frac{\eta}{2}NK
=(c)2NKlnK,\displaystyle\overset{\text{(c)}}{=}\sqrt{2NK\ln K}, (23)

where (a) follows from the algorithm design of πb\pi_{\text{b}}, i.e., the loss estimate ^b[k]=𝕀{k𝒪ub}ub[k]wb[k]\widehat{\ell}_{b}[k]=\mathbb{I}\{k\in\mathcal{O}_{u_{b}}\}\cdot\frac{\ell_{u_{b}}[k]}{w_{b}[k]} for all k[K]k\in[K], (b) follows from the assumption that all losses are bounded by one and the fact that k=1Kwb[k]=1,b[N]\sum_{k=1}^{K}w_{b}[k]=1,\forall b\in[N], and (c) is obtained by choosing η=2lnKNK\eta=\sqrt{\frac{2\ln K}{NK}}.

We can further rewrite the LHS of the above inequality as follows:

𝔼[b=1Nwbu,^b]\displaystyle\mathbb{E}\left[\sum_{b=1}^{N}\left\langle w_{b}-u,\widehat{\ell}_{b}\right\rangle\right] =b=1N𝔼[𝔼[wbu,^b|^1:b1]]\displaystyle=\sum_{b=1}^{N}\mathbb{E}\left[\mathbb{E}\left[\left\langle w_{b}-u,\widehat{\ell}_{b}\right\rangle\bigg{|}\widehat{\ell}_{1:b-1}\right]\right]
=b=1N𝔼[wbu,𝔼[^b|^1:b1]]\displaystyle=\sum_{b=1}^{N}\mathbb{E}\left[\left\langle w_{b}-u,\mathbb{E}\left[\widehat{\ell}_{b}\bigg{|}\widehat{\ell}_{1:b-1}\right]\right\rangle\right]
=(a)b=1N𝔼[wbu,t=(b1)τ+1bτtτ]\displaystyle\overset{\text{(a)}}{=}\sum_{b=1}^{N}\mathbb{E}\left[\left\langle w_{b}-u,\frac{\sum_{t=(b-1)\tau+1}^{b\tau}\ell_{t}}{\tau}\right\rangle\right]
=1τb=1Nt=(b1)τ+1bτ𝔼[t[Ab]t[Yu]]\displaystyle=\frac{1}{\tau}\sum_{b=1}^{N}\sum_{t=(b-1)\tau+1}^{b\tau}\mathbb{E}\left[\ell_{t}[A_{b}]-\ell_{t}[Y_{u}]\right]
=1τt=1T𝔼[t[Xt]t[Yu]],\displaystyle=\frac{1}{\tau}\sum_{t=1}^{T}\mathbb{E}\left[\ell_{t}[X_{t}]-\ell_{t}[Y_{u}]\right], (24)

where (a) holds since for any k[K]k\in[K], we have 𝔼[^b[k]|^1:b1]=(1wb[k])0+t=(b1)τ+1bτ1τwb[k]t[k]wb[k]=t=(b1)τ+1bτt[k]/τ\mathbb{E}\left[\widehat{\ell}_{b}[k]\bigg{|}\widehat{\ell}_{1:b-1}\right]=(1-w_{b}[k])\cdot 0+\sum_{t=(b-1)\tau+1}^{b\tau}\frac{1}{\tau}\cdot w_{b}[k]\cdot\frac{\ell_{t}[k]}{w_{b}[k]}={\sum_{t=(b-1)\tau+1}^{b\tau}\ell_{t}[k]}/{\tau}.

Now, replacing the LHS of Eq. (23) with Eq. (24) yields

t=1T𝔼[t[Xt]t[Yu]]\displaystyle\sum_{t=1}^{T}\mathbb{E}\left[\ell_{t}[X_{t}]-\ell_{t}[Y_{u}]\right] τ2NKlnK\displaystyle\leq\tau\sqrt{2NK\ln K}
=TB2BKlnK\displaystyle=\frac{T}{B}\cdot\sqrt{2BK\ln K}
=O(TKlnKB).\displaystyle=O\left(T\sqrt{\frac{K\ln K}{B}}\right). (25)

When B=O(K1/3T2/3)B=O(K^{1/3}T^{2/3}), we have t=1T𝕀{XtXt1}BO(TKlnKB)\sum_{t=1}^{T}\mathbb{I}_{\{X_{t}\neq X_{t-1}\}}\leq B\leq O\left(T\sqrt{\frac{K\ln K}{B}}\right). Combining it with Eq. (25), we can conclude that both the standard regret and switching costs are of order O(TKlnKB)O\left(T\sqrt{\frac{K\ln K}{B}}\right), which gives us the desired result. ∎
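
Note that the comparison used in the last step above can be verified directly:

B\leq T\sqrt{\frac{K\ln K}{B}}\iff B^{3/2}\leq T\sqrt{K\ln K}\iff B\leq(K\ln K)^{1/3}T^{2/3},

which indeed holds (up to constant factors) whenever B=O(K^{1/3}T^{2/3}).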

Appendix F An Auxiliary Technical Result

In this section, we show that in the standard setting without switching costs, only using bandit feedback (e.g., algorithm πb\pi_{\mathrm{b}}) can also achieve optimal regret (i.e., matching the lower bound of Ω(TK/B)\Omega(T\sqrt{K/B}), up to poly-logarithmic factors) in the full range of B[K,T]B\in[K,T]. We state this result in Proposition 5.

Proposition 5.

In the standard setting without switching costs, for any B[K,T]B\in[K,T], the worst-case regret under algorithm πb\pi_{\mathrm{b}} is upper bounded by RTπb=O(TKlnK/B)R_{T}^{\pi_{\mathrm{b}}}=O(T\sqrt{K\ln K/B}).

Proof of Proposition 5.

The proof follows the same line of analysis as that in the proof of Proposition 4, except that we only require BTB\leq T (instead of B=O(K1/3T2/3)B=O(K^{1/3}T^{2/3})) and do not consider switching costs. Therefore, Eq. (25) implies the following upper bound on the regret:

t=1T𝔼[t[Xt]t[Yu]]=O(TKlnK/B).\sum_{t=1}^{T}\mathbb{E}\left[\ell_{t}[X_{t}]-\ell_{t}[Y_{u}]\right]=O\left(T\sqrt{K\ln K/B}\right).

Note that YuY_{u} can be any fixed action, including the best fixed action. This completes the proof. ∎