Learning for Bandits under Action Erasures
Abstract
We consider a novel multi-armed bandit (MAB) setup, where a learner needs to communicate the actions to distributed agents over erasure channels, while the rewards for the actions are directly available to the learner through external sensors. In our model, while the distributed agents know if an action is erased, the central learner does not (there is no feedback), and thus does not know whether the observed reward resulted from the desired action or not. We propose a scheme that can work on top of any (existing or future) MAB algorithm and make it robust to action erasures. Our scheme results in a worst-case regret over action-erasure channels that is at most a factor of away from the no-erasure worst-case regret of the underlying MAB algorithm, where $\epsilon$ is the erasure probability. We also propose a modification of the successive arm elimination algorithm and prove that its worst-case regret is , which we prove is optimal by providing a matching lower bound.
I Introduction
Multi-armed bandit (MAB) problems have gained popularity in a wide range of applications that include recommendation systems, clinical trials, advertising, and distributed robotics [1, 2]. MABs are sequential decision problems where a learner, at each round, selects an action from a set of available actions (arms) and receives an associated reward; the aim is to maximize the total reward over $T$ rounds. In this paper, we develop a theoretical framework that explores a new setup: MAB systems with action erasures.
In particular, we consider a MAB system where the learner needs to communicate the actions to be taken to distributed agents over wireless channels subject to erasures with some probability $\epsilon$, while the rewards for the actions taken are directly available to the learner through external sensors. For example, the learner may be regulating drone traffic in a slowly varying environment (weather conditions, obstacles), where passing-by drones receive instructions on which path to take or which maneuvers to perform, and the learner monitors the outcome through video cameras, magnetic sensors, and other sensors. Similarly, since online learning has been used in medical micro-robots [3, 4], our setup can help with problems such as directing micro-robots inside veins on how to move or how much of a medical substance to release, while observing the results through patient medical imaging and vital-sign measurements. In our model, while the agent knows if an action is erased, the learner does not (there is no feedback); therefore, the learner at each round receives a reward that may be the result of playing the requested action or another action (if an erasure occurs).
Note that at any given round, although the requested action may be erased and not received by the agent, the agent still needs to play an action: the micro-robot needs to release some amount (perhaps zero) of substance and the drone needs to continue moving in some way. We make the assumption that if at a round the agent does not receive an action, the agent continues to play the most recently received action, until a new action is successfully received to replace it. This is, we believe, a reasonable assumption. Indeed, during exploitation, we would like the agents to persistently play the identified optimal action, which this strategy achieves. In contrast, as we discuss in Section III, playing a random or a fixed action results in suboptimal performance, while using more sophisticated strategies (e.g., keeping track of all previously played actions and optimizing accordingly) may not be possible in distributed systems, where multiple agents may be playing different actions and where agents may not even be able to observe the rewards.
To the best of our knowledge, this setup has not been systematically studied, although there exists extensive MAB literature.
Indeed, existing work has examined MAB systems under a multitude of setups and constraints such as stochastic bandits [5, 6, 7], adversarial bandits [8, 9, 10], or contextual bandits [10, 11]; and over a number of distributed setups as well [12, 13, 14, 15]. In particular, a number of works examine the case where rewards are corrupted adversarially or stochastically [16, 17, 18, 19]. However, all the works we know of assume that the requested actions can be sent through perfect communication channels. This assumption makes sense for delay-tolerant applications, where we can leverage feedback and error-correcting codes to ensure perfect communication of the desired action; yet as the applicability of the MAB framework expands to agents, such as micro-robots, that are communication-limited, we believe that our proposed model can be of interest, both in theory and in practice.
The main questions we ask (and answer) in this paper are: for our action erasure model (i) what is an optimal strategy? and (ii) if a MAB algorithm is already deployed, can we couple it, at low regret cost, with a generic layer, akin to erasure coding, that makes the MAB algorithm operation robust to erasures?
Our main contributions include:
• We propose a scheme that can work on top of any MAB algorithm and make it robust to action erasures. Our scheme results in a worst-case regret over the erasure channel that is at most a factor of away from the no-erasure worst-case regret of the underlying MAB algorithm, where $\epsilon$ is the erasure probability.
• We propose a modification of the successive arm elimination algorithm and prove that its worst-case regret is .
• We prove a matching regret lower bound.
I-A Related Work
MAB Algorithms. Over the years, several stochastic MAB algorithms that achieve optimal or near-optimal regret bounds have been proposed under different assumptions for the environment or model parameters.
For instance, over a horizon of length $T$, Thompson sampling [6] and UCB [5, 7] achieve a gap-dependent regret bound of $O\big(\sum_{k:\Delta_k>0}\frac{\log T}{\Delta_k}\big)$ and a worst-case regret bound of $\tilde{O}(\sqrt{KT})$, where $\tilde{O}$ hides logarithmic factors. These algorithms achieve nearly optimal regret; a lower bound of $\Omega\big(\sum_{k:\Delta_k>0}\frac{\log T}{\Delta_k}\big)$ on the gap-dependent regret was proved in [20], while a lower bound of $\Omega(\sqrt{KT})$ on the worst-case regret was proved in [21]. However, such algorithms are not robust to action erasures.
MAB with Adversarial Corruption.
Recently, [16] introduced stochastic multi-armed bandits with adversarial corruption, where the rewards are corrupted by an adversary with a corruption budget , an upper bound on the accumulated corruption magnitudes over rounds. [16] proposes a simple elimination algorithm that achieves regret bound in the worst-case. Later, [17] improved the dependency on to be additive by achieving a worst-case regret bound of while also providing a regret lower bound of . The work in [19] further improves the worst-case regret bound to when the optimal arm is unique. The work in [18] studies a similar problem where rewards are corrupted in each round independently with a fixed probability. [18] achieves a regret of , where is an upper bound on the expected corruption, and proves a lower bound for their model.
While our model can be reduced to MABs with reward corruption, the implied corruption budget is on the order of $\epsilon T$. This causes the best-known algorithms for MABs with adversarial corruption to suffer a linear regret, given the increased exploration required to handle such a large amount of corruption. In contrast, by exploiting the fact that the corruptions are in the actions, we achieve a gap-dependent regret bound of and a worst-case regret bound of .
I-B Paper Organization
The remainder of the paper is organized as follows. Section II formulates the problem; Section III motivates our choice of agent operation and discusses the technical challenges; Section IV presents the Repeat-the-Instruction module; Section V presents and analyzes the L-SAE algorithm; and Section VI provides the matching lower bound.
II Problem Formulation
Standard MAB Setup. We consider a MAB problem over a horizon of length $T$, where a learner interacts with an environment by pulling arms from a set $[K] = \{1, \dots, K\}$ of $K$ arms (or actions). That is, we assume operation in discrete times, where at each time $t$, the learner sends the index $a_t$ of an arm to an agent; the agent pulls the arm, resulting in a reward $r_t$ that can be observed by the learner. The learner selects the arm $a_t$ based on the history of pulled arms and previously seen rewards. The reward $r_t$ is sampled from an unknown distribution associated with the pulled arm, with unknown mean $\mu_{a_t}$. The set of distributions for all arm rewards is referred to as a bandit instance. We use $\Delta_k$ to denote the gap between the mean value of arm $k$ and the best (highest mean) arm. For simplicity, we follow the standard assumption that rewards are supported on $[0,1]$; the analysis directly extends to sub-Gaussian reward distributions.
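For concreteness, the following is a minimal sketch of this interaction loop, assuming Bernoulli rewards; the `BernoulliBandit` class and the `select_arm`/`update` learner interface are illustrative names and are not part of the paper's formulation.

```python
import numpy as np

class BernoulliBandit:
    """K-armed stochastic bandit instance with rewards supported on [0, 1]."""
    def __init__(self, means, seed=None):
        self.means = np.asarray(means)            # unknown to the learner
        self.rng = np.random.default_rng(seed)

    def pull(self, arm):
        # Reward sampled from the (unknown) distribution of the pulled arm.
        return float(self.rng.random() < self.means[arm])

def run(learner, bandit, T):
    """Standard loop: at each round the learner picks an arm based on its history
    and observes the resulting reward."""
    total = 0.0
    for t in range(T):
        arm = learner.select_arm(t)               # assumed learner interface
        reward = bandit.pull(arm)
        learner.update(arm, reward)               # assumed learner interface
        total += reward
    return total
```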
MAB with Action Erasures. We assume that the learner is connected to the agent over an erasure channel, where the action index $a_t$ sent by the learner to the agent at time $t$ can be erased with probability $\epsilon$. We follow the standard erasure channel model where erasures at different times are independent. We assume no feedback on the channel; in particular, while the agent knows when the action is erased (it does not receive anything at time $t$), the learner does not. Moreover, we assume that the learner can directly observe the reward $r_t$ of the action $\tilde{a}_t$ actually played.
Agent Operation. Recall that at each time $t$, the learner transmits an action index $a_t$. The agent plays the most recent action she received: in particular, at time $t$, the agent plays $\tilde{a}_t = a_t$ if no erasure occurs, while if an erasure occurs, $\tilde{a}_t = \tilde{a}_{t-1}$. We assume that $\tilde{a}_0$ is initialized uniformly at random, i.e., if the first action is erased, the agent chooses an arm uniformly at random. In the example described in Table I, erasures occur at rounds 3 and 4, and hence $\tilde{a}_3 = \tilde{a}_4 = a_2 = 3$. Section III motivates why we select this particular agent operation, by arguing that alternative simple strategies can significantly deteriorate the performance.
TABLE I: Example of the agent operation under erasures.
Round ($t$) | 1 | 2 | 3 | 4 | 5 | …
Learner ($a_t$) | 1 | 3 | 2 | 4 | 2 | …
Agent ($\tilde{a}_t$) | 1 | 3 | 3 | 3 | 2 | …
Erasure |  |  | x | x |  | …
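For concreteness, the following sketch simulates the channel and the agent rule described above (erased instructions are replaced by the most recently received one); the function names are illustrative, and the deterministic replay at the end reproduces the Table I example with erasures at rounds 3 and 4.

```python
import numpy as np

def agent_play(requested, eps, K, seed=None):
    """Simulate the agent: on an erasure, replay the most recently received action.
    `requested` is the sequence a_1, ..., a_T sent by the learner (0-indexed arms)."""
    rng = np.random.default_rng(seed)
    last_received = int(rng.integers(K))   # fallback if the very first action is erased
    played = []
    for a in requested:
        if rng.random() >= eps:            # transmission received
            last_received = a
        played.append(last_received)
    return played

def replay(requested, erased, first_fallback=0):
    """Deterministic version used to check a given erasure pattern."""
    last = first_fallback
    out = []
    for a, e in zip(requested, erased):
        if not e:
            last = a
        out.append(last)
    return out

# Table I example: erasures at rounds 3 and 4.
print(replay([1, 3, 2, 4, 2], [False, False, True, True, False]))  # [1, 3, 3, 3, 2]
```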
Performance Metric. The objective is to minimize the regret
$R_T = \sum_{t=1}^{T} \big(\mu^\star - \mu_{\tilde{a}_t}\big), \quad \text{where } \mu^\star = \max_{k \in [K]} \mu_k.$   (1)
III Motivation and Challenges
In this paper we consider a specific agent operation, namely, the agent simply plays the most recently received action. In this section, we argue that this is a reasonable choice by considering alternative simple strategies and explaining why they do not work well. We also discuss the technical challenges of our approach.
Random Action. A very simple strategy could be, when an erasure occurs, for the agent to select uniformly at random one of the $K$ actions to play. The rewards seen by the learner will then follow a new distribution whose mean is a scaled and shifted version of the pulled arm's mean; in particular,
$\mathbb{E}[r_t \mid a_t] = \mathbb{P}(E_t^c)\,\mu_{a_t} + \mathbb{P}(E_t)\cdot\frac{1}{K}\sum_{k=1}^{K}\mu_k = (1-\epsilon)\,\mu_{a_t} + \frac{\epsilon}{K}\sum_{k=1}^{K}\mu_k,$
where $E_t$ is the event that an erasure occurs at time $t$. Hence, the best arm does not change and the ordering of the arm means is preserved (the gaps are only scaled by $1-\epsilon$), so the learner can identify the best arm and provide a good policy. However, as actions are erased in roughly $\epsilon T$ of the iterations, the regret becomes linear in $T$.
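To make the last claim concrete, here is a sketch of the regret calculation under this strategy, assuming (as in the model of Section II) that erasures are independent of the bandit instance and of the fallback choice, and writing $\bar{\Delta} = \frac{1}{K}\sum_{k=1}^{K}\Delta_k$ for the average gap: each erased round contributes an expected instantaneous regret of $\bar{\Delta}$, so
$$\mathbb{E}[R_T] \;\ge\; \sum_{t=1}^{T}\mathbb{P}(E_t)\cdot\frac{1}{K}\sum_{k=1}^{K}\Delta_k \;=\; \epsilon\,T\,\bar{\Delta},$$
which is linear in $T$ whenever the arm means are not all equal.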
Fixed Action. Very similar arguments hold if, when erasures occur, the agent always plays a fixed (predetermined) action; unless this fixed action happens to be the optimal one, the experienced regret will be linear in $T$.
Last Received Action. To see why the previous strategies fail, observe that although the learner may have identified the best policy, this will not be consistently played. In this paper, we make the assumption that if an action is erased, the agent plays the last successfully received action. Thus if the optimal action is identified, it will be consistently played.
We note that our selected agent strategy introduces memory, and with it a challenge in the analysis, since it creates a more complicated dependency between the played actions and the erasures. For example, any of the actions previously sent by the learner has a nonzero probability of being played at the current iteration, where this probability changes with time and with the sequence of previous actions. Under the previous two strategies, the learner still observes rewards with fixed means, and the interaction can be treated as a valid MAB instance (although the optimal action might be different from the ground truth). The last-received-action strategy, unfortunately, changes the reward distribution over time, and standard MAB analysis no longer holds.
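To quantify this memory effect under the i.i.d. erasure model, note that the instruction received most recently by time $t$ is the one transmitted at round $s \le t$ exactly when the transmission at round $s$ is received and the following $t-s$ transmissions are erased, which happens with probability
$$(1-\epsilon)\,\epsilon^{\,t-s}.$$
Thus every previously transmitted action retains a geometrically decaying, but nonzero, probability of being the action actually played at time $t$, and the observed reward distribution depends on the entire transmission history rather than on the current request alone.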
IV Repeat-the-Instruction Module
In this section we propose a scheme that works on top of any MAB algorithm to make it robust to erasures. Our scheme adds a form of repetition coding layer on top of the MAB algorithm operation. In particular, if the underlying MAB algorithm selects an action to play, our scheme sends the action chosen by the algorithm repeatedly, for a fixed number of consecutive rounds, to the agent, and only associates the last received reward with the chosen action (hence the name Repeat-the-Instruction). Since the agent plays the last received action, one successful reception within these rounds is sufficient to make the last reward a sample from the distribution of the chosen arm. Thus our algorithm, summarized in Algorithm 1, effectively partitions the $T$ rounds into groups of equal size, where each group of rounds is treated by the underlying bandit algorithm as one time slot. The next theorem characterizes the resulting performance.
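To make the module concrete, the following Python sketch shows the wrapper logic. The particular repetition length `reps` (chosen so that, by a union bound, every run contains a non-erased slot with high probability), the assumption $0 \le \epsilon < 1$, and the `select_arm`/`update`/`send_and_observe` interface names are illustrative choices and are not meant to reproduce the exact constants of Algorithm 1.

```python
import math

def repeat_the_instruction(alg, send_and_observe, T, eps):
    """Wrapper: repeat each chosen action `reps` times over the erasure channel and
    feed only the reward of the last repetition back to the underlying algorithm.
    `send_and_observe(arm)` transmits the arm and returns the observed reward;
    `alg` exposes select_arm()/update() (assumed interface)."""
    if eps <= 0:
        reps = 1
    else:  # assumes 0 < eps < 1
        # Illustrative choice: all ~T/reps runs contain a non-erased slot
        # with probability at least 1 - 1/T (union bound).
        reps = max(1, math.ceil(2 * math.log(max(T, 2)) / math.log(1.0 / eps)))
    t = 0
    while t + reps <= T:
        arm = alg.select_arm()
        last_reward = None
        for _ in range(reps):                  # one "run" of identical instructions
            last_reward = send_and_observe(arm)
            t += 1
        alg.update(arm, last_reward)           # only the last reward is associated with arm
    if t < T:                                  # spend leftover rounds on one more choice
        arm = alg.select_arm()
        while t < T:
            send_and_observe(arm)
            t += 1
```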
Theorem 1
Let ALG be a MAB algorithm with expected regret upper bounded by for any instance with gaps . For , using Repeat-the-Instruction on top of ALG achieves an expected regret
(2)
where the expectation is over the randomness in the MAB instance, erasures, and algorithm.
Proof. We use “run” to refer to the rounds that repeat each action dictated by ALG. Let be the event that all runs contain at least one round with no erasure. Conditioned on this event, the maximum number of consecutive plays of an action due to erasures is (across two runs, when the action transmitted at the first round of the first run is not erased, while the first rounds of the next run are erased). Hence, we have that
(3)
where is the optimal arm, and the second term bounds the regret for the first iterations. Note that, conditioned on , if with , we have that , that is, the reward is sampled from arm . Indeed in Algorithm 1 is the reward associated with arm for . We observe that, as erasures occur independently of the bandit environment and the learner's actions, for any , any set , and action , we have that (note that this probability depends only on the erasures). In particular, conditioned on , the algorithm ALG receives rewards generated from a bandit instance with the same reward distributions as the original instance, hence the same gaps, and time horizon . Hence, its regret is upper bounded by . Substituting in (3), we get that
(4)
We finally upper bound the probability of the event . Let denote the event that run contains at least one slot with no erasure. The probability of can be bounded as
(5)
where follows by the union bound. From (4), we get that
Corollary 1
For , if is the worst-case expected regret of ALG, then
(6)
This follows from Theorem 1 by observing that , and that the worst-case regret for any algorithm is . The next corollary follows by substituting the regret bound of the UCB algorithm [5, 7] in Theorem 1.
Corollary 2
Observation. Note that our proposed module achieves the optimal dependency of the regret on $K$ and $T$. However, we show next that the multiplicative dependency on the erasure-related factor is suboptimal, by providing an algorithm whose regret bound has only an additive dependency on it. This effect is small for small values of $\epsilon$ but becomes significant as $\epsilon$ approaches $1$.
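To see where the multiplicative factor comes from, consider a back-of-the-envelope sketch under the assumptions that each instruction is repeated $\beta$ times and that the underlying algorithm has worst-case regret at most $C\sqrt{KT'}$ over $T'$ effective rounds. Each effective round of the underlying algorithm is stretched over $\beta$ actual rounds, so, up to lower-order terms,
$$R_T \;\lesssim\; \beta\cdot C\sqrt{K\cdot\tfrac{T}{\beta}} \;=\; C\sqrt{\beta}\,\sqrt{KT},$$
and the repetition length $\beta$ needed to guarantee a reception per run grows as $\epsilon \to 1$. This is exactly the multiplicative blow-up that the algorithm of the next section avoids.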
V The Lingering SAE (L-SAE) Algorithm
The intuition behind the Repeat-the-Instruction algorithm is that we do not frequently switch between arms, and thus playing the last successfully received action often coincides with playing the desired action. In this section, we propose “Lingering Successive Arm Elimination” (L-SAE), an algorithm that by design does not frequently switch arms, and evaluate its performance; in the next section, we prove a lower bound establishing that L-SAE is order optimal.
L-SAE builds on the Successive Arm Elimination (SAE) algorithm [22]. SAE works in batches of exponentially growing length, where at the end of each batch the arms that appear to be suboptimal are eliminated; in batch $m$, each of the surviving arms is pulled a prescribed number of times. We add two modifications to SAE to make it robust to erasures. First, we avoid the frequent arm switches of the first SAE batches, where the number of pulls per arm is small. Instead, in the first batch, the algorithm pulls each arm a sufficiently large number of times; the number of pulls for surviving arms in a batch then grows by a constant factor over that of the previous batch. Second, we ignore the first half of the samples for each arm. This ensures that all the kept rewards come from the desired arm with high probability, as the probability that half the samples are erased is small. Given these modifications, we also update the arm elimination criterion to account for the higher variance of the mean estimates. In particular, the algorithm starts with all $K$ arms in the good arms set, and at the end of batch $m$ it eliminates the arms whose empirical mean is more than a threshold below the empirical mean of the arm that appears to be best. The pseudo-code of the algorithm is provided in Algorithm 2.
• Initialize: set of good arms $\mathcal{A}_1 = [K]$, batch index $m = 1$.
• For each batch $m = 1, 2, \dots$:
– Pull each arm in $\mathcal{A}_m$ the prescribed number of times $n_m$, and record for each arm the rewards from the second half of its pulls.
– Update means: compute the empirical mean of each arm in $\mathcal{A}_m$ from its recorded rewards.
– Eliminate arms: keep in $\mathcal{A}_{m+1}$ only the arms whose empirical mean is within the elimination threshold of the largest empirical mean.
– Increment the batch index: $m \leftarrow m+1$.
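To make the batch structure concrete, here is a minimal Python sketch of the L-SAE idea. The first-batch length, the growth factor, the elimination threshold, and the failure parameter `delta` are illustrative choices that do not reproduce the exact constants of Algorithm 2, and `send_and_observe(arm)` is an assumed interface that transmits an arm index over the erasure channel and returns the observed reward.

```python
import math
import numpy as np

def l_sae(send_and_observe, K, T, eps, delta=0.01, growth=2):
    """Sketch of Lingering Successive Arm Elimination (illustrative constants).
    Assumes 0 <= eps < 1 and rewards in [0, 1]."""
    active = list(range(K))
    # First-batch pulls per arm: long enough that the first half of a run is fully
    # erased only with small probability (assumed schedule).
    n = max(4, math.ceil(4 * math.log(K * T / delta) / (1.0 - eps)))
    t = 0
    while t < T and len(active) > 1:
        means = {}
        for arm in active:
            rewards = []
            for _ in range(n):
                if t >= T:
                    break
                rewards.append(send_and_observe(arm))   # the arm is re-sent every round
                t += 1
            kept = rewards[len(rewards) // 2:]           # ignore the first half of the samples
            means[arm] = float(np.mean(kept)) if kept else -np.inf
        best = max(means.values())
        # Widened confidence radius to account for keeping only half of the samples.
        radius = math.sqrt(4 * math.log(K * T / delta) / max(1, n // 2))
        active = [a for a in active if means[a] >= best - 2 * radius]
        n *= growth                                      # batches grow geometrically
    best_arm = active[0]
    while t < T:                                         # keep playing the surviving arm
        send_and_observe(best_arm)
        t += 1
    return best_arm
```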
Theorem 2
For , Algorithm 2 achieves a regret that is bounded by
with probability at least , and for a constant .
Proof. Let be the good event that, in all batches, the second half of each arm's pulls come from the correct (desired) arm. Note that this event can fail only when there is a batch and an arm such that the entire first half of that arm's pulls in the batch coincides with erasures. Since in any batch each arm is pulled at least $n_1$ times, we have that
(7)
We notice that the erasures are independent of the bandit environment and the learner's actions, and that depends only on the erasures. Hence, conditioned on and the picked arm , the second half of the rewards are samples from arm and follow the original reward distribution of arm . Hence, as the rewards are supported on $[0,1]$, we have that conditioned on , the reward is sub-Gaussian with mean . By the concentration of sub-Gaussian random variables, conditioned on , we also have that the following event
where $n_m$ is the number of pulls for surviving arms in batch $m$, occurs with probability at least . Hence, we have that
In the remaining part of the proof we condition on the event . By the elimination criterion in Algorithm 2, and assuming , the best arm will not be eliminated. This is because the elimination criterion will not hold for the best arm as
(8)
Now consider an arm with gap and let be the smallest integer for which . Then, we have that
(9)
Hence, arm will be eliminated before the start of batch . We also notice that from , the value of can be bounded as
(10)
By the exponential increase in the number of pulls of each arm, we get that until eliminated, arm will be pulled by the learner at most
for some absolute constant . We also notice that, on the event , the agent will pull arm at most times. This results in a regret that is at most . Summing the regret over all arms, we get the claimed regret bound conditioned on .
The proof is concluded by noticing that .
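For concreteness, here is a sketch of the union-bound calculation behind (7), under the assumptions (consistent with the description above) that each surviving arm is pulled $n_m$ times in batch $m$ and that only the second half of its samples is kept; write $\mathcal{G}$ for the good event and $M$ for the number of batches (illustrative notation). The kept samples of an arm in a batch can come from the wrong arm only if all of the first $n_m/2$ transmissions of that arm in the batch are erased, so
$$\mathbb{P}(\bar{\mathcal{G}}) \;\le\; \sum_{m=1}^{M}\sum_{k\in\mathcal{A}_m}\epsilon^{\,n_m/2} \;\le\; K\,M\,\epsilon^{\,n_1/2},$$
which is polynomially small in $T$ once $n_1$ is of order $\frac{\log(KT)}{\log(1/\epsilon)}$, and hence of order at most $\frac{\log(KT)}{1-\epsilon}$.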
The previous theorem directly implies a corresponding worst-case regret bound.
VI Lower Bound
We here prove a lower bound that matches the upper bound in Theorem 2 up to logarithmic factors. A gap-dependent regret lower bound of $\Omega\big(\sum_{k:\Delta_k>0}\frac{\log T}{\Delta_k}\big)$ is already provided in [20], and a lower bound of $\Omega(\sqrt{KT})$ on the worst-case regret is provided in [21]. Thus, it suffices to prove the remaining, erasure-dependent lower bound, which we provide next in Theorem 3.
Theorem 3
Let , and . Assuming the agent operation in Section II, for any policy , there exists a $K$-armed bandit instance such that
(11)
where is the expected regret of the policy over the instance and is a universal constant.
Proof. We consider bandit instances, each with arms, where in instance the means of the reward distributions are
(12)
with the indicator function. We assume a noiseless setting where in instance picking arm results in reward almost surely. We consider two events:
indicates that the first pulls of arm are erased;
indicates that the agent in the first round, if the first transmitted arm is erased, does not select to pull arm (recall that if an erasure occurs at the first round the agent randomly selects an action according to some distribution).
We note that are independent events since erasures are independent of the agent operation.
We will show that there is no policy that can make this probability small for all instances. We first note that there are at least arms with (we assume for simplicity that the divisions involved are exact; the proof easily generalizes by taking floors, which only affects the constants). Pick a set with , and .
Now define the event : arm is picked no more than times by the learner in the first rounds.
We next consider the minimum worst case probability of conditioned on under the distribution induced by instance ; namely,
(13)
Let the set denote the indices of arms for which holds, i.e., , and let . We have that are disjoint and their union is the set . Moreover, the quantity in (13) can be rewritten as
(14)
We make two observations: (i) after rounds, there exist at least arms in for which holds (hence, and ); and (ii)
if for instance , conditioned on , arm is picked less than times by the learner, then all the rewards received by the learner will be zeros, thus no information will be provided to the policy during the first rounds.
From observation (ii) and the fact that the result of whether or not, does not change after the first reward feedback that is , it follows that the optimal value for (13) does not change if the learner does not observe rewards.
Hence, the learner can proceed assuming that all previous rewards are zeros. As a result, the learner can decide on the arms to pull in the first slots ahead of time (i.e., at , since no feedback is required); equivalently, the learner can divide the set into two sets and (possibly in a random way) ahead of time with (from observation ). As the learner decides on ahead of time, the probability of does not depend on the instance, the reward outcomes, and , hence, we can simply denote it as . This shows that the problem of minimizing is equivalent to partitioning the set into two sets and (with containing the indices where holds), with the goal of minimizing .
It is easy to see that the minimum value for , and similarly, , is at least as .
Hence, for any policy , there is an instance such that, conditioned on , arm is picked no more than times by the learner in the first rounds with probability at least . But for instance , whenever arm is not pulled we incur a regret of value ; hence, there is such that . Then, we have that . By the non-negativity of the regret, we have that
(15)
where follows from the fact that are independent, and follows since and thus . Moreover, and thus . The proof is concluded by observing that for .
References
- [1] P. Matikainen, P. M. Furlong, R. Sukthankar, and M. Hebert, “Multi-armed recommendation bandits for selecting state machine policies for robotic systems,” in 2013 IEEE International Conference on Robotics and Automation, 2013, pp. 4545–4551.
- [2] D. Bouneffouf, I. Rish, and C. Aggarwal, “Survey on applications of multi-armed and contextual bandits,” in 2020 IEEE Congress on Evolutionary Computation (CEC), 2020, pp. 1–8.
- [3] Y. Yang, M. A. Bevan, and B. Li, “Hierarchical planning with deep reinforcement learning for 3d navigation of microrobots in blood vessels,” Advanced Intelligent Systems, vol. 4, no. 11, p. 2200168, 2022. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/aisy.202200168
- [4] Z. Zou, Y. Liu, Y. Young, O. S. Pak, and A. C. H. Tsang, “Gait switching and targeted navigation of microswimmers via deep reinforcement learning,” Commun Phys, vol. 5, no. 158, 2022.
- [5] P. Auer, N. Cesa-Bianchi, and P. Fischer, “Finite-time analysis of the multiarmed bandit problem,” Machine Learning, vol. 47, pp. 235–256, 2002.
- [6] W. R. Thompson, “On the likelihood that one unknown probability exceeds another in view of the evidence of two samples,” Biometrika, vol. 25, pp. 285–294, 1933.
- [7] T. L. Lai, “Adaptive treatment allocation and the multi-armed bandit problem,” Annals of Statistics, vol. 15, pp. 1091–1114, 1987.
- [8] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire, “Gambling in a rigged casino: The adversarial multi-armed bandit problem,” in Proceedings of IEEE 36th Annual Symposium on Foundations of Computer Science, 1995, pp. 322–331.
- [9] S. Bubeck and A. Slivkins, “The best of both worlds: Stochastic and adversarial bandits,” in Proceedings of the 25th Annual Conference on Learning Theory, ser. Proceedings of Machine Learning Research, S. Mannor, N. Srebro, and R. C. Williamson, Eds., vol. 23. Edinburgh, Scotland: PMLR, 2012, pp. 42.1–42.23. [Online]. Available: https://proceedings.mlr.press/v23/bubeck12b.html
- [10] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire, “The nonstochastic multiarmed bandit problem,” SIAM Journal on Computing, vol. 32, no. 1, pp. 48–77, 2002. [Online]. Available: https://doi.org/10.1137/S0097539701398375
- [11] L. Li, W. Chu, J. Langford, and R. E. Schapire, “A contextual-bandit approach to personalized news article recommendation,” in Proceedings of the 19th International Conference on World Wide Web, ser. WWW ’10. New York, NY, USA: Association for Computing Machinery, 2010, pp. 661–670. [Online]. Available: https://doi.org/10.1145/1772690.1772758
- [12] S. Shahrampour, A. Rakhlin, and A. Jadbabaie, “Multi-armed bandits in multi-agent networks,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 2786–2790.
- [13] D. Kalathil, N. Nayyar, and R. Jain, “Decentralized learning for multiplayer multiarmed bandits,” IEEE Transactions on Information Theory, vol. 60, no. 4, pp. 2331–2345, 2014.
- [14] P. Landgren, V. Srivastava, and N. E. Leonard, “Distributed cooperative decision making in multi-agent multi-armed bandits,” Automatica, vol. 125, p. 109445, 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0005109820306439
- [15] P. Landgren, “Distributed multi-agent multi-armed bandits,” Ph.D. dissertation, Princeton University, 2019.
- [16] T. Lykouris, V. Mirrokni, and R. Leme, “Stochastic bandits robust to adversarial corruptions,” in Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC), 2018, pp. 114–122.
- [17] A. Gupta, T. Koren, and K. Talwar, “Better algorithms for stochastic bandits with adversarial corruptions,” in Proceedings of the Thirty-Second Conference on Learning Theory, ser. Proceedings of Machine Learning Research, A. Beygelzimer and D. Hsu, Eds., vol. 99. PMLR, 2019, pp. 1562–1578. [Online]. Available: https://proceedings.mlr.press/v99/gupta19a.html
- [18] S. Kapoor, K. K. Patel, and P. Kar, “Corruption-tolerant bandit learning,” Machine Learning, vol. 108, no. 4, pp. 687–715, 2019. [Online]. Available: https://doi.org/10.1007/s10994-018-5758-5
- [19] I. Amir, I. Attias, T. Koren, Y. Mansour, and R. Livni, “Prediction with corrupted expert advice,” Advances in Neural Information Processing Systems, vol. 33, pp. 14315–14325, 2020.
- [20] T. L. Lai and H. Robbins, “Asymptotically efficient adaptive allocation rules,” Advances in Applied Mathematics, vol. 6, no. 1, pp. 4–22, 1985.
- [21] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire, “Gambling in a rigged casino: The adversarial multi-armed bandit problem,” in Proceedings of IEEE 36th Annual Symposium on Foundations of Computer Science, 1995, pp. 322–331.
- [22] P. Auer and R. Ortner, “Ucb revisited: Improved regret bounds for the stochastic multi-armed bandit problem,” Periodica Mathematica Hungarica, vol. 61, no. 1-2, pp. 55–65, 2010.