When Are Linear Stochastic Bandits Attackable?
Abstract
We study adversarial attacks on linear stochastic bandits: by manipulating the rewards, an adversary aims to control the behavior of the bandit algorithm. Perhaps surprisingly, we first show that some attack goals can never be achieved. This is in sharp contrast to context-free stochastic bandits, and is intrinsically due to the correlation among arms in linear stochastic bandits. Motivated by this finding, this paper studies the attackability of a $k$-armed linear bandit environment. We provide a complete necessary and sufficient characterization of attackability based on the geometry of the arms' context vectors. We then propose a two-stage attack method against LinUCB and Robust Phase Elimination. The method first asserts whether the given environment is attackable; and if yes, it poisons the rewards to force the algorithm to pull a target arm a linear number of times using only a sublinear cost. Numerical experiments further validate the effectiveness and cost-efficiency of the proposed attack method.
1 Introduction
In a contextual bandit problem, a learner takes sequential actions to interact with an environment in order to maximize its cumulative reward. As a natural and important variant, linear stochastic bandits (Auer, 2002; Li et al., 2010; Abbasi-yadkori et al., 2011) assume the expected reward of an arm is a linear function of its feature vector and an unknown bandit parameter $\theta^*$. A linear bandit algorithm thus adaptively improves its estimate of $\theta^*$ based on the reward feedback from its pulled arms. Thanks to their sound theoretical guarantees and promising empirical performance, linear stochastic bandits have become a reference solution to many real-world problems, such as content recommendation and online advertisement (Li et al., 2010; Chapelle and Li, 2011; Durand et al., 2018).
Since bandit algorithms adapt their behavior according to their received feedback, they are susceptible to adversarial attacks, especially poisoning attacks. Under such an attack, a malicious adversary observes the pulled arm and its reward feedback, and then modifies the reward to misguide the bandit algorithm into pulling a target arm that is of interest to the adversary. Due to the wide applicability of bandit algorithms in practice, understanding the robustness of such algorithms under poisoning attacks becomes increasingly important (Jun et al., 2018; Liu and Shroff, 2019; Garcelon et al., 2020).
Most existing studies on adversarial attacks in bandits focus on the context-free stochastic multi-armed bandit (MAB) setting. Jun et al. (2018) and Liu and Shroff (2019) showed that an adversary can force any MAB algorithm to pull a target arm a linear number of times using only a logarithmic cost. Garcelon et al. (2020) showed the vulnerability of $k$-armed linear contextual bandits under poisoning attacks. Linear stochastic bandits are closely related to context-free stochastic bandits and linear contextual bandits. Interestingly, however, there is no known result about attacks on linear stochastic bandits until now. This paper provides a formal explanation for this gap: the analysis of attacks on linear stochastic bandits turns out to be significantly more challenging due to the correlation among arms; in fact, some learning environments are provably unattackable.
Specifically, we fill the aforementioned gap by studying poisoning attacks on $k$-armed linear stochastic bandits, where an adversary modifies the rewards using a sublinear attack cost to misguide the bandit algorithm to pull a target arm a linear number of times. We first show that a linear stochastic bandit environment is not always efficiently attackable (throughout this paper, "efficient attack" means fooling the bandit algorithm to pull the target arm a linear number of times with a sublinear attack cost; we use attackable and efficiently attackable interchangeably, as the adversary normally has a limited budget and needs a cost-efficient strategy). Its attackability is governed by the feasibility of finding a parameter vector $\tilde\theta$ under which the rewards of all non-target arms are smaller than the reward of the target arm $\tilde a$, while the reward of $\tilde a$ remains the same as in the original environment specified by $\theta^*$. Intuitively, to promote the target arm $\tilde a$, an adversary needs to lower the rewards of non-target arms using a perturbation that lies in the null space of $x_{\tilde a}$, which might not always be feasible. We prove the feasibility of the resulting convex quadratic program is both sufficient and necessary for attacking a linear stochastic bandit environment.
Inspired by our attackability analysis, we propose a two-stage attack framework against linear stochastic bandit algorithms and demonstrate its application against LinUCB (Li et al., 2010) and Robust Phase Elimination (Bogunovic et al., 2021): the former is one of the most widely used linear contextual bandit algorithms, and the latter is a robust version designed for settings with adversarial corruptions. In the first stage, our method collects a carefully calibrated amount of rewards on the target arm to assess whether the given environment is attackable. The decision is based on an "empirical" version of our feasibility characterization. If attackable, i.e., if there exists a feasible solution $\tilde\theta$, then in the second stage the method depresses the rewards the bandit algorithm receives on non-target arms based on $\tilde\theta$, to fool the bandit algorithm into recognizing the target arm as optimal. We prove that in an attackable environment, both algorithms can be successfully manipulated with only a sublinear cost.
Our main contributions can be summarized as follows:
• We characterize the sufficient and necessary conditions for when a stochastic linear bandit environment is attackable, as the feasibility of a convex quadratic program. En route to proving the sufficiency, we also provide an oracle attack method that can attack any no-regret learning algorithm given knowledge of the ground-truth bandit parameter $\theta^*$. If the environment is unattackable, i.e., the program is infeasible, our necessity proof implies that even the vanilla LinUCB algorithm cannot be efficiently attacked. A direct corollary of our characterization is that context-free stochastic MAB is always attackable, echoing the findings in (Jun et al., 2018; Liu and Shroff, 2019).
• We propose a two-stage attack method that works without knowledge of the ground-truth bandit parameter. In the first stage, the method detects the attackability of the environment and estimates the model parameter. In the second stage, it solves for a working attack parameter and attacks accordingly. Our theoretical analysis shows this attack method is effective against LinUCB (Li et al., 2010) and Robust Phase Elimination (Bogunovic et al., 2021), i.e., it fools them into pulling the target arm $T - o(T)$ times using only $o(T)$ attack cost when the environment is attackable.
2 Preliminaries
Linear stochastic bandit. We study poisoning attacks on the fundamental $k$-armed linear stochastic bandit problem (Auer, 2002), where a bandit algorithm sequentially interacts with an environment for $T$ rounds. In each round $t$, the algorithm pulls an arm $a_t$ from a set $\mathcal{A}$ of $k$ arms and receives reward $r_t$ from the environment. Each arm $a$ is associated with a $d$-dimensional context vector $x_a \in \mathbb{R}^d$, and the observed reward follows the linear mapping $r_t = x_{a_t}^\top \theta^* + \eta_t$, where $\theta^*$ is a common unknown bandit parameter vector and $\eta_t$ is a $\sigma$-sub-Gaussian noise term. We assume context vectors and parameters are all bounded; for convenience and without loss of generality, we assume $\|x_a\|_2 \le 1$ for all $a$ and $\|\theta^*\|_2 \le 1$. The performance of a bandit algorithm is evaluated by its pseudo-regret, defined as $R_T = \sum_{t=1}^{T}\big(x_{a^*}^\top\theta^* - x_{a_t}^\top\theta^*\big)$, where $a^* = \arg\max_{a \in \mathcal{A}} x_a^\top\theta^*$ is the best arm according to $\theta^*$.
LinUCB.
LinUCB (Li et al., 2010; Abbasi-yadkori et al., 2011) is a classical algorithm for linear stochastic bandits. It estimates the bandit model parameter using ridge regression, i.e., $\hat\theta_t = A_t^{-1}\sum_{s<t} x_{a_s} r_s$ with $A_t = \lambda I + \sum_{s<t} x_{a_s} x_{a_s}^\top$, where $\lambda > 0$ is the L2-regularization coefficient. We use $\|x\|_M = \sqrt{x^\top M x}$ to denote the matrix norm of vector $x$. The confidence bound of the reward estimate on arm $a$ is defined as $CB_t(a) = \alpha_t \|x_a\|_{A_t^{-1}}$, where $\alpha_t$ is a high-probability bound of $\|\hat\theta_t - \theta^*\|_{A_t}$. In each round $t$, LinUCB pulls the arm with the highest upper confidence bound, i.e., $a_t = \arg\max_{a \in \mathcal{A}}\big(x_a^\top\hat\theta_t + CB_t(a)\big)$, to balance the exploration-exploitation trade-off. LinUCB achieves a sublinear regret upper bound of $O(d\sqrt{T})$ ignoring logarithmic terms.
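For concreteness, the following is a minimal sketch of LinUCB under the above definitions; the class and variable names are ours, and the confidence multiplier is kept as a fixed constant `alpha` for simplicity, whereas the theory uses a slowly growing $\alpha_t$.

```python
import numpy as np

class LinUCB:
    """Minimal LinUCB sketch: ridge-regression estimate plus a UCB exploration bonus."""

    def __init__(self, arm_features, lam=1.0, alpha=1.0):
        self.X = np.asarray(arm_features, dtype=float)   # (k, d): one context vector per arm
        d = self.X.shape[1]
        self.A = lam * np.eye(d)        # A_t = lam * I + sum_s x_{a_s} x_{a_s}^T
        self.b = np.zeros(d)            # sum_s x_{a_s} * r_s
        self.alpha = alpha              # confidence-width multiplier (fixed here for simplicity)

    def select_arm(self):
        theta_hat = np.linalg.solve(self.A, self.b)                              # ridge-regression estimate
        A_inv = np.linalg.inv(self.A)
        widths = np.sqrt(np.einsum("ij,jk,ik->i", self.X, A_inv, self.X))        # ||x_a||_{A_t^{-1}}
        return int(np.argmax(self.X @ theta_hat + self.alpha * widths))          # highest UCB

    def update(self, arm, reward):
        x = self.X[arm]
        self.A += np.outer(x, x)
        self.b += reward * x
```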
Threat model.
The goal of an adversary is to fool a linear stochastic bandit algorithm into pulling the target arm $\tilde a$ a linear number of times, i.e., $T - o(T)$ times. Like most recent works in this space (Jun et al., 2018; Liu and Shroff, 2019; Garcelon et al., 2020; Zhang et al., 2020), we consider the widely studied poisoning attack on the rewards: after arm $a_t$ is pulled by the bandit algorithm, the adversary modifies the realized reward $r_t$ from the environment into $\tilde r_t$ and feeds the manipulated reward to the algorithm. Naturally, the adversary aims to achieve its attack goal with minimum attack cost, defined as $C(T) = \sum_{t=1}^{T} |\tilde r_t - r_t|$. By convention, an attack strategy is said to be efficient if it uses only a sublinear total cost, i.e., $C(T) = o(T)$.
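To make the threat model concrete, the sketch below shows one way to simulate the interaction loop with a reward-poisoning adversary in it; the `attacker` interface (a `perturb(t, arm, reward)` method and a `target_arm` attribute) is our own illustrative convention rather than anything specified in the paper.

```python
import numpy as np

def run_with_poisoning(bandit, attacker, arm_features, theta_star, T, sigma=0.1, seed=0):
    """Reward-poisoning interaction loop: pull, observe the true reward, let the adversary
    perturb it, then feed the perturbed reward back to the bandit algorithm."""
    rng = np.random.default_rng(seed)
    X = np.asarray(arm_features, dtype=float)
    total_cost, target_pulls = 0.0, 0
    for t in range(T):
        arm = bandit.select_arm()
        reward = X[arm] @ theta_star + sigma * rng.standard_normal()   # unattacked reward r_t
        fed = attacker.perturb(t, arm, reward)                         # attacked reward fed to the algorithm
        total_cost += abs(fed - reward)                                # attack cost accumulates |r_tilde - r|
        target_pulls += int(arm == attacker.target_arm)
        bandit.update(arm, fed)
    return total_cost, target_pulls
```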
We conclude the preliminaries with an important remark about a key difference between attacking linear stochastic bandits, as studied in this paper, and attacking the $k$-armed linear contextual bandit setting recently studied in (Garcelon et al., 2020). In linear contextual bandits, all arms share a context vector at each round, but each arm has its own (to-be-estimated) bandit parameter. Therefore, the reward manipulation at a round only affects the parameter estimation of the pulled arm $a_t$, but not that of any other arm. This "isolates" the attack's impact across arms. In contrast, in linear stochastic bandits, all arms share the same bandit parameter $\theta^*$ but have different context vectors; thus manipulating the reward of any arm alters the shared parameter estimate, which in turn affects the reward estimation of all arms due to the correlation among their context vectors. Such a coupled effect of adversarial manipulation from the pulled arm to all other arms is unique to linear stochastic bandits, and it makes the analysis of attacks much more challenging. This is also the fundamental reason that some linear stochastic bandit environments may not be attackable (see our illustration in Example 1).
3 The Attackability of A Linear Stochastic Bandit Environment
We study the attackability of a linear stochastic bandit environment. At first glance, one might wonder whether attackability is a property of a bandit algorithm rather than a property of the environment, since if an algorithm can be attacked, we should have "blamed" the algorithm for not being robust. A key finding of this work is that attackability is also a property of the learning environment; in other words, not all environments are attackable.
Definition 1 (Attackability of a $k$-Armed Linear Stochastic Bandit Environment).
A $k$-armed linear stochastic bandit environment is attackable with respect to a target arm $\tilde a$ if, for any no-regret learning algorithm, there exists an attack method that uses $o(T)$ attack cost and fools the algorithm to pull arm $\tilde a$ at least $T - o(T)$ times with high probability (typically, with probability $1-\delta$ for an arbitrarily small $\delta$; see the later theorems for rigorous statements) for any $T$ large enough, i.e., larger than a constant $T_0$.
We make a few remarks about the above definition of attackability. First, this definition is all about the bandit environment and the target arm $\tilde a$, and is independent of any specific bandit algorithm. Second, if an attack method can only fool a bandit algorithm to pull the target arm under a few different time horizons $T$, it does not count as a successful attack; it has to succeed for infinitely many time horizons. Third, by reversing the order of quantifiers, we obtain the assertion that a bandit environment is not attackable w.r.t. the target arm $\tilde a$ if there exists some no-regret learning algorithm such that no attack method can use $o(T)$ attack cost to fool the algorithm to pull arm $\tilde a$ at least $T - o(T)$ times with high probability for any large enough $T$.
The following simple yet insightful example illustrates that there are indeed linear stochastic bandit environment setups where some no-regret learning algorithm cannot be attacked.
Example 1 (An unattackable environment).
Figure 1 shows a three-arm environment with context vectors $x_1, x_2, x_3$. Suppose the target arm is $x_2$ and the ground-truth bandit parameter is $\theta^*$ (one can re-scale all vectors to have norms smaller than 1, e.g., divide each dimension by 10, without changing the conclusion that the environment is unattackable). Under the true expected rewards $x_i^\top\theta^*$, arm $x_1$ is the best arm in this environment.
Based on Definition 1, we need to identify a no-regret learning algorithm that cannot be attacked in this environment, and we argue that LinUCB is such an algorithm. Suppose, for the sake of contradiction, that there exists an efficient attack which fools LinUCB to pull $x_2$ for $T - o(T)$ times. LinUCB must then estimate $\theta^*$ in $x_2$'s direction almost accurately as $T$ becomes large, since the amount of true reward feedback in this direction will ultimately dominate the adversarial manipulations. This suggests that the estimated parameter will be close to $\theta_\parallel + \theta_\perp$ for some $\theta_\perp$ orthogonal to $x_2$. Due to the geometry of the three arms, any such $\theta_\perp$ makes either $x_1$ or $x_3$ have an estimated reward larger than $x_2^\top\theta_\parallel$ (i.e., strictly larger than $x_2$'s estimated reward). This shows that for large $T$, $x_2$ cannot be the arm with the highest UCB during the execution of LinUCB, which causes a contradiction. Therefore, this environment cannot be efficiently attacked with $o(T)$ cost. Here we give an intuitive argument for why this environment with target arm $x_2$ is not attackable; its formal proof is an instantiation of our Theorem 1.

Note that when arm $x_3$ is removed, the environment becomes attackable: as shown in Figure 1, a feasible attack strategy is to perturb rewards according to $\tilde\theta = \theta_\parallel + \theta_\perp$. The key idea is that $\theta_\perp$, which lies in the null space of $x_2$, reduces the reward of $x_1$ to make $x_2$ the best arm without changing the actual reward of $x_2$ from the environment. The presence of arm $x_3$ prevents the existence of such a $\theta_\perp$ (and therefore such a $\tilde\theta$) and makes the environment unattackable.
The above example motivates us to study when a linear stochastic bandit environment is attackable. After all, only when facing an unattackable environment can we ensure the existence of a no-regret algorithm that is robust to any poisoning attack.
Next we show that there indeed exists a complete characterization of when a linear stochastic bandit environment is attackable. As Example 1 shows, the attackability of a bandit environment depends on the arm set $\mathcal{A}$, the target arm $\tilde a$, and the underlying bandit parameter $\theta^*$. For clarity of presentation, in this section we shall always assume that the adversary knows the ground-truth bandit parameter $\theta^*$ exactly, and thus the true expected reward of each arm. This is also called the oracle attack in previous works (Jun et al., 2018; Rakhsha et al., 2020). In the next section, we will show that this assumption is not necessary: when the bandit environment is indeed attackable, we can design provably successful attacks even if the adversary does not know the underlying bandit parameter $\theta^*$.
We need the following convenient notation to state our results. Let
$$\theta_\parallel \;=\; \frac{x_{\tilde a}^\top \theta^*}{\|x_{\tilde a}\|_2^2}\, x_{\tilde a} \qquad (1)$$
denote the projection of the ground-truth bandit parameter $\theta^*$ onto the target arm's direction $x_{\tilde a}$. Since attackability depends on the target arm as well, we shall include the target arm as part of the bandit environment, denoted $(\mathcal{A}, \theta^*, \tilde a)$. The following theorem provides a clean characterization of the attackability of a linear stochastic bandit environment.
Theorem 1 (Characterization of Attackability).
A bandit environment $(\mathcal{A}, \theta^*, \tilde a)$ is attackable if and only if the optimal objective $\epsilon^*$ of the following convex quadratic program (CQP) satisfies $\epsilon^* > 0$:

$$\begin{aligned}
\epsilon^* \;=\; \max_{\theta_\perp,\ \epsilon}\ \ & \epsilon \\
\text{s.t.}\ \ & x_a^\top(\theta_\parallel + \theta_\perp) \;\le\; x_{\tilde a}^\top \theta_\parallel - \epsilon, \quad \forall a \ne \tilde a, \\
& x_{\tilde a}^\top \theta_\perp = 0, \\
& \|\theta_\parallel + \theta_\perp\|_2 \le 1,
\end{aligned} \qquad (2)$$

where $\theta_\perp$ and $\epsilon$ are the variables.
Since the conceptual message of Theorem 1 significantly differs from previous studies on adversarial attacks in bandit algorithms, we would like to elaborate on its implications.
First of all, we, for the first time, point out that some learning environments are intrinsically robust. Even the vanilla LinUCB algorithm, as we will analyze in the proof of Theorem 1, cannot be efficiently attacked when the optimal objective of CQP (2) is not positive. Notably, although almost all previous works have focused on the vulnerability of bandit algorithms, e.g., by designing attacks against UCB, $\epsilon$-Greedy (Jun et al., 2018), and LinUCB (Garcelon et al., 2020), it just so happens that they were all studied under attackable environments (see our Corollary 1). To the best of our knowledge, the intrinsic robustness of a linear bandit environment has not been studied before and can be viewed as a complement to these previous works. Second, as we will show next, our proof techniques are also significantly different from existing ones, since what is central to our proof is to demonstrate that when the condition of Theorem 1 fails, there exists a robust algorithm that cannot be efficiently attacked by any adversary. This can be viewed as analyzing the robustness of certain bandit algorithms when $\epsilon^* \le 0$ in CQP (2).
Since CQP (2) and its solutions will show up very often in our later analysis, we provide the following definition for reference convenience.
Definition 2 (Attackability Index and Certificate).
We call the optimal objective $\epsilon^*$ of CQP (2) the attackability index of the bandit environment, and call an optimal solution $\theta_\perp$ an attackability certificate. We should note that both the index and the certificate are intrinsic to the bandit environment $(\mathcal{A}, \theta^*, \tilde a)$ and are irrelevant to the bandit algorithm used. As we will see in the next section, when designing attack algorithms without the knowledge of $\theta^*$, the index determines how difficult it is to attack the environment.
Proof of Theorem 1.
Proof of sufficiency. This direction is relatively straightforward. We show that there exists an efficient attack strategy whenever the optimal objective of CQP (2) is strictly positive.

Suppose the attackability index satisfies $\epsilon^* > 0$ and let $\theta_\perp$ be a certificate. In Algorithm 1, we design the oracle null space attack based on the knowledge of the bandit parameter $\theta^*$. Let $\tilde\theta = \theta_\parallel + \theta_\perp$, where $\theta_\parallel$ is defined in Eq (1). The adversary perturbs the reward of any non-target arm $a$ so that its expectation becomes $x_a^\top\tilde\theta$, whereas the reward of the target arm is not touched. In other words, the adversary misleads the algorithm to believe that $\tilde\theta$ is the ground-truth parameter. We should note that both $\theta^*$ and $\tilde\theta$ generate the same expected reward on the target arm, i.e., $x_{\tilde a}^\top\theta^* = x_{\tilde a}^\top\tilde\theta$. To make the attack appear less "suspicious", a sub-Gaussian noise term is added to the perturbed reward to make it stochastic. The key idea is that the attacker does not need to perturb the reward of the target arm, because its original expected reward is the same as the perturbed one. Instead, the attacker only perturbs rewards in the null space of $x_{\tilde a}$ according to $\theta_\perp$, which guarantees the cost-efficiency of the attack.

Since the perturbed rewards observed by the bandit algorithm are generated by $\tilde\theta$, the target arm is the optimal arm in this fake environment because the attackability index is strictly positive. By definition, any no-regret bandit algorithm will only pull suboptimal arms, i.e., the non-target arms, $o(T)$ times, and will pull the target arm $T - o(T)$ times with high probability. Thus the attack is successful. Moreover, the cost of the oracle attack is $o(T)$ because the attacker only perturbs rewards on the non-target arms, which are pulled $o(T)$ times, and the cost of each attack is bounded by a constant (because of the bounded norms of the context vectors, $\theta^*$ and $\tilde\theta$). Importantly, we note that this argument only relies on the definition of "no regret" but does not depend on what the algorithm is. This is crucial for proving the sufficiency of attackability.
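For illustration, a minimal sketch of the oracle null space attack described above is given below; the class name, the plug-in projection, and the specific Gaussian noise model are our assumptions, and the certificate $\theta_\perp$ is assumed to be provided (e.g., by solving CQP (2)).

```python
import numpy as np

class OracleNullSpaceAttacker:
    """Sketch of the oracle null-space attack (Algorithm 1), assuming theta_star is known.

    Rewards of non-target arms are replaced by noisy samples from the fake model
    theta_tilde = theta_par + theta_perp; the target arm is never touched, since
    theta_tilde and theta_star agree on its expected reward."""

    def __init__(self, arm_features, target_arm, theta_star, theta_perp, sigma=0.1, seed=0):
        self.X = np.asarray(arm_features, dtype=float)
        self.target_arm, self.sigma = target_arm, sigma
        self.rng = np.random.default_rng(seed)
        x_t = self.X[target_arm]
        theta_par = (x_t @ theta_star) / (x_t @ x_t) * x_t   # projection of theta_star onto x_target (Eq. 1)
        self.theta_tilde = theta_par + theta_perp            # x_t @ theta_tilde == x_t @ theta_star

    def perturb(self, t, arm, reward):
        if arm == self.target_arm:
            return reward                                    # target arm's reward is left unattacked
        # re-draw the reward around the fake expected reward x_a^T theta_tilde
        return self.X[arm] @ self.theta_tilde + self.sigma * self.rng.standard_normal()
```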
Proof of necessity. This is the more difficult direction. Due to the space limit, we only provide the proof sketch here and leave the involved technical arguments to Appendix B. We shall prove that if $\epsilon^* \le 0$, the bandit environment is not attackable. To do so, we need to identify at least one no-regret bandit algorithm such that no attack strategy can successfully attack it. We argue that even the vanilla LinUCB is already robust to any attack strategy with $o(T)$ cost when $\epsilon^* \le 0$. Recall that LinUCB maintains an estimate $\hat\theta_t$ at round $t$ using the attacked rewards. We consider LinUCB with a choice of the L2-regularization parameter $\lambda$ that guarantees $\|\hat\theta_t\|_2 \le 1$, in order to satisfy the last constraint in CQP (2). Consider the decomposition $\hat\theta_t = \hat\theta_{t,\parallel} + \hat\theta_{t,\perp}$, where $\hat\theta_{t,\parallel}$ is the projection of $\hat\theta_t$ onto $x_{\tilde a}$ and $\hat\theta_{t,\perp}$ is orthogonal to $x_{\tilde a}$.
Suppose, for the sake of contradiction, that LinUCB is attackable when $\epsilon^* \le 0$. According to Definition 1, the target arm will be pulled $T - o(T)$ times with high probability for infinitely many different time horizons $T$. Fix any large $T$; we know that $\tilde a$ must have the largest UCB score whenever it is pulled at some round $t$, or formally, for any non-target arm $a$ we must have the following:

$$x_{\tilde a}^\top \hat\theta_t + CB_t(\tilde a) \;\ge\; x_a^\top \hat\theta_t + CB_t(a). \qquad (3)$$
By attackability, we know that the above inequality will hold for infinitely many $t$'s. Our main idea for constructing the proof is that as $t$ grows, the target arm is pulled $t - o(t)$ times while the total manipulation remains $o(t)$; hence the estimate $\hat\theta_{t,\parallel}$ will converge to $\theta_\parallel$. The key challenge is to show that $x_{\tilde a}^\top\hat\theta_t - x_a^\top\hat\theta_t$, due to Inequality (3), is strictly greater than $0$ for all large $t$. To do so, we prove a lower bound for the confidence bound $CB_t(a)$ of non-target arms (Lemma 3 in Appendix B) and an upper bound for $CB_t(\tilde a)$ (Lemma 2). The main technical barrier we overcome is the lower bound proof for the confidence bound term, which employs non-standard arguments since most (if not all) bandit algorithm analyses only need upper bounds on the confidence bound terms. For this reason, we believe this technical proof is of independent interest, particularly for the analysis of robustness properties of linear bandit algorithms.
By letting $t$ grow large, we obtain the following condition for all sufficiently large $t$:

$$x_{\tilde a}^\top \theta_\parallel \;-\; x_a^\top\big(\theta_\parallel + \hat\theta_{t,\perp}\big) \;>\; 0, \qquad \forall a \ne \tilde a. \qquad (4)$$
This implies that for any sufficiently large $t$, there must exist a $\theta_\perp$ (namely $\hat\theta_{t,\perp}$) that satisfies the constraints of CQP (2) and makes its optimal objective positive. But this contradicts the starting assumption of $\epsilon^* \le 0$; hence, the bandit environment is not attackable. ∎
We now provide an intuitive explanation of Theorem 1. CQP (2) finds a $\theta_\perp$ such that: 1) it is orthogonal to $x_{\tilde a}$ (hence its subscript); and 2) it maximizes the gap between $x_{\tilde a}^\top\theta_\parallel$ and the largest $x_a^\top(\theta_\parallel + \theta_\perp)$ among all non-target arms $a$. Theorem 1 states that the bandit environment is attackable if and only if such a gap is strictly larger than $0$, i.e., when the geometry of the context vectors allows the adversary to lower the non-target arms' rewards in the null space of $x_{\tilde a}$. The constraint $\|\theta_\parallel + \theta_\perp\|_2 \le 1$ ensures the attacked rewards are on the same scale as the unattacked rewards.
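For illustration, the following sketch performs the attackability test by solving our reading of CQP (2) with CVXPY; the function name and the exact formulation are ours and should be treated as one consistent instantiation of the program described above, not a verbatim transcription.

```python
import numpy as np
import cvxpy as cp

def attackability_index(arm_features, target_arm, theta_par):
    """Attackability test: maximize the reward gap eps between the target arm and every
    non-target arm over theta_perp orthogonal to the target arm's context, subject to a
    unit-norm constraint on theta_par + theta_perp.  Returns (index, certificate); the
    environment is deemed attackable iff the index is strictly positive."""
    X = np.asarray(arm_features, dtype=float)
    x_t = X[target_arm]
    others = np.delete(X, target_arm, axis=0)

    theta_perp = cp.Variable(X.shape[1])
    eps = cp.Variable()
    constraints = [
        others @ (theta_par + theta_perp) <= x_t @ theta_par - eps,  # non-target rewards stay below the target's
        x_t @ theta_perp == 0,                                       # perturbation lies in the null space of x_t
        cp.norm(theta_par + theta_perp, 2) <= 1,                     # keep rewards on the original scale
    ]
    problem = cp.Problem(cp.Maximize(eps), constraints)
    problem.solve()
    return eps.value, theta_perp.value
```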
Recent works have shown that any no-regret algorithm for the context-free $k$-armed setting (where the arm set is orthonormal) can always be attacked (Liu and Shroff, 2019). This finding turns out to be a corollary of Theorem 1.
Corollary 1.
For the standard stochastic multi-armed bandit setting where the arm set is orthonormal, the environment is attackable for any target arm.
Proof.
The intuition behind this corollary is that arms in context-free stochastic multi-armed bandits are independent, and an adversary can arbitrarily lower the rewards of non-target arms to make the target arm optimal. This is also the attack strategy in (Jun et al., 2018; Liu and Shroff, 2019). Garcelon et al. (2020) showed that a similar idea works for $k$-armed linear contextual bandits, where each arm is associated with its own unknown bandit parameter and the reward estimations are independent across arms.
We should point out an important distinction between poisoning attacks on $k$-armed bandits and another line of research on stochastic bandits under adversarial corruption initiated by Lykouris et al. (2018). For poisoning attacks considered in this paper, the adversary manipulates the realized rewards after the algorithm selects an action, whereas in (Lykouris et al., 2018), the adversary manipulates the entire reward vector before the algorithm selects any action. Obviously, the latter threat model is strictly weaker and has led to various bandit algorithms that achieve sublinear regret so long as the total manipulation is sublinear in $T$ (Lykouris et al., 2018; Zimmert and Seldin, 2021).
4 Effective Attacks without Knowledge of True Model Parameters
In the previous section, we characterized the attackability of a linear stochastic bandit environment assuming knowledge of the ground-truth bandit parameter $\theta^*$. We now show that such prior knowledge is not needed when designing practical attacks. Towards this end, we propose provably effective attacks against two representative bandit algorithms: the well-known LinUCB and a state-of-the-art robust linear stochastic bandit algorithm, Robust Phase Elimination (Bogunovic et al., 2021). We remark that the optimal attacks against these algorithms depend on the characteristics of the algorithms themselves and are generally different, due to their different levels of robustness. This also echoes the important message mentioned in the introduction, i.e., the attackability analysis often goes hand-in-hand with understanding the robustness of different algorithms, as reflected in various parts of our analysis. However, we point out that it remains an intriguing open question whether there is a single attack strategy that can manipulate any no-regret algorithm in an attackable environment.
Two-stage Null Space Attack.
Our proposed attack method is presented in Algorithm 2. The method spends the first $T_1$ rounds as the first stage, attacking the rewards on all arms by imitating a bandit environment in which the target arm $\tilde a$ is the best arm, so that $\tilde a$ will be pulled most often by the bandit algorithm. This stage lets the adversary observe the true rewards of $\tilde a$ from the environment, which helps it estimate the parameter $\theta_\parallel$. At round $T_1$, the method uses the estimate of $\theta_\parallel$, denoted $\hat\theta_\parallel$, to perform the "attackability test" by solving CQP (2) with $\hat\theta_\parallel$ in place of $\theta_\parallel$, obtaining an estimated index $\hat\epsilon$ and certificate $\hat\theta_\perp$. If $\hat\epsilon > 0$, the method asserts the environment is attackable and starts the second stage of the attack. From round $T_1+1$ to $T$, the method perturbs the rewards of non-target arms according to $\hat\theta = \hat\theta_\parallel + \hat\theta_\perp$ (just like the oracle attack, but using the estimated parameter). When the bandit algorithm pulls the target arm for the first time in the second stage, the method compensates the reward of $\tilde a$ as shown in line 20, based on the number of times the target arm was pulled before $T_1$. The goal is to correct the rewards on $\tilde a$ collected in the first stage to follow $\hat\theta$ instead of the first-stage imitation parameter. Note that a larger $T_1$ brings in more observations of $\tilde a$ to improve the estimate of $\theta_\parallel$, but it also means a higher attack cost. The optimal choice of $T_1$ depends on the "robustness" of the bandit algorithm to be attacked. Consequently, it also leads to different attack costs for different algorithms; for example, as we will show next, the attack on Robust Phase Elimination is more costly than the attack on LinUCB.
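As a rough illustration of how the two stages fit together, the sketch below implements a simplified version of Algorithm 2; it reuses the `attackability_index` helper from Section 3, omits the compensation step of line 20, and uses the normalized target direction as the first-stage imitation parameter, all of which are simplifying assumptions rather than the paper's exact pseudo-code.

```python
import numpy as np

class TwoStageNullSpaceAttacker:
    """Simplified skeleton of the two-stage null-space attack (Algorithm 2).

    Stage 1 (t < T1): feed rewards from an imitation model that makes the target arm
    look best, while recording the target arm's true rewards to estimate theta_par.
    At t = T1: run the attackability test on the estimate; if the index is positive,
    Stage 2 perturbs only the non-target arms using the estimated parameter."""

    def __init__(self, arm_features, target_arm, T1, sigma=0.1, seed=0):
        self.X = np.asarray(arm_features, dtype=float)
        self.target_arm, self.T1, self.sigma = target_arm, T1, sigma
        self.rng = np.random.default_rng(seed)
        self.true_target_rewards = []     # unattacked target-arm rewards collected in stage 1
        self.theta_hat = None             # estimated attack parameter, set at t = T1 if attackable

    def perturb(self, t, arm, reward):
        x_t = self.X[self.target_arm]
        if t < self.T1:
            if arm == self.target_arm:
                self.true_target_rewards.append(reward)
            # stage-1 imitation: pretend the parameter is the normalized target direction,
            # so the target arm looks (weakly) best and gets pulled often
            fake = self.X[arm] @ (x_t / np.linalg.norm(x_t))
            return fake + self.sigma * self.rng.standard_normal()
        if t == self.T1 and self.true_target_rewards:
            r_bar = float(np.mean(self.true_target_rewards))          # estimate x_t^T theta* by the average
            theta_par_hat = r_bar / (x_t @ x_t) * x_t                 # plug-in estimate of theta_par
            eps_hat, theta_perp_hat = attackability_index(self.X, self.target_arm, theta_par_hat)
            if eps_hat is not None and eps_hat > 0:                   # attackability test passed
                self.theta_hat = theta_par_hat + theta_perp_hat
        if self.theta_hat is None or arm == self.target_arm:
            return reward                                             # asserted unattackable, or target arm
        return self.X[arm] @ self.theta_hat + self.sigma * self.rng.standard_normal()
```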
Note that our attackability test might make both false positive and false negative assertions due to the estimation error in $\hat\theta_\parallel$. But as $T_1$ becomes larger, the estimate gets more accurate and the assertion is correct with high probability.
Remark 1.
We acknowledge that the rewards from the two stages follow different reward distributions and could be detected, e.g., using some homogeneity test (Li et al., 2021). Thus a bandit player could realize the attack if equipped with a change detector. However, attacking such a cautious bandit algorithm is beyond the scope of this paper. Moreover, it is very difficult (if not impossible) to attack with a stationary reward distribution or undetectable perturbations (e.g., using a fixed attack parameter $\tilde\theta$ throughout). We can easily find cases where the adversary's parameter is limited to extremely few choices, so that it is almost impossible to directly start the attack with a valid $\tilde\theta$ without knowing $\theta^*$. For example, if we slightly perturb the third arm in Example 1, the valid attack parameters shrink to a small neighborhood of a single point; to attack with a stationary reward distribution, the adversary would need to start from somewhere very close to that point, which we believe is extremely difficult without knowing $\theta^*$. Overall, we think designing an attack strategy against a bandit algorithm with a reward change detector, or showing the impossibility of attacking such cautious algorithms, is an important direction for future work.
Attack against LinUCB.
We now show how LinUCB algorithm can be attacked by Algorithm 2.
Theorem 2.
For a large enough $T_1$, the attack strategy in Algorithm 2 will correctly assert the attackability with probability at least $1-\delta$. Moreover, when the environment is attackable, with high probability the attack strategy will fool LinUCB into pulling non-target arms only a sublinear (in $T$) number of rounds, and with high probability the adversary's cost is also sublinear in $T$; both bounds depend on the choice of $T_1$. Specifically, choosing $T_1$ to balance the cost of the two stages gives the minimum attack cost, under which the non-target arms are pulled only a sublinear number of rounds.
Proof Sketch.
To prove that the assertion is correct with high probability, the key idea is that the estimate $\hat\theta_\parallel$ is close to the true parameter $\theta_\parallel$. We first note that in the first stage, the bandit algorithm will pull the target arm $T_1 - o(T_1)$ times, since the target arm is the best arm according to the first-stage imitation parameter. According to Hoeffding's inequality, the estimation error of $\hat\theta_\parallel$ shrinks at a rate of order $1/\sqrt{T_1}$. Therefore, with a large enough $T_1$, the error in the target arm's reward estimate is smaller than a constant fraction of the attackability index. Thus, solving CQP (2) with $\hat\theta_\parallel$ in place of $\theta_\parallel$, we can correctly assert attackability with a positive estimated index whenever the environment is attackable with index $\epsilon^* > 0$.
To prove the success and the cost of the attack, we need to analyze the behavior of LinUCB under the reward discrepancy between the two stages. Our proof crucially hinges on the following robustness result of LinUCB.
Lemma 1.
Consider LinUCB with ridge regression under a poisoning attack. Let $C(t)$ denote the total corruption on non-target arms until time $t$, and assume every corruption on the target arm is bounded by a constant. For any $\delta \in (0,1)$, with probability at least $1-\delta$ we have
(5) |
where .
Based on this lemma, we can derive the regret of LinUCB with $\hat\theta$ treated as the true parameter. The total corruption on non-target arms is of order $T_1$, because their first-stage rewards are generated by the imitation parameter rather than $\hat\theta$ (the rewards of the target arm from the first stage are compensated in line 20). Because the target arm's rewards are not attacked in the second stage and follow $\theta^*$, whose expected reward on the target arm matches $\hat\theta$ up to the estimation error, every corruption on the target arm is bounded. Since the non-target arms are pulled only a sublinear number of rounds, substituting the total corruption back yields the resulting bound.
The attack cost has two sources: attacks in the first stage, which last $T_1$ rounds, and attacks on the non-target arms in the second stage. The second source has the same order as the number of rounds in which the non-target arms are pulled by LinUCB. Each attack's cost can be decomposed into 1) the change of the mean reward, and 2) the injected sub-Gaussian noise, whose sum increases linearly in the number of attacked rounds with high probability. Choosing $T_1$ to balance the two sources, the total cost is sublinear in $T$. ∎
Remark 2.
Lemma 1 shows that LinUCB still enjoys sublinear regret for any corruption amount below a certain sublinear threshold. This tolerance of attack turns out to be the same as that of the recently proposed robust linear contextual bandit algorithm based on phase elimination in (Bogunovic et al., 2021) (which we examine next). However, the corruption-dependent regret term of LinUCB has a worse dependence on the corruption amount within this regime compared to the corresponding dependence of the robust algorithm in (Bogunovic et al., 2021).
Attack against Robust Phase Elimination.
We now show that Robust Phase Elimination (RobustPhE) (Bogunovic et al., 2021) can also be attacked by Algorithm 2. Compared to attacking LinUCB, the robustness of RobustPhE makes the first-stage attack more challenging, as the attack cost is more sensitive to the length of this stage.
Corollary 2.
For any large enough $T_1$, the attack strategy in Algorithm 2 will correctly assert the attackability with high probability. Moreover, when the environment is attackable, with high probability the attack strategy will fool RobustPhE into pulling non-target arms only a sublinear number of rounds, and the adversary's cost is also sublinear in $T$. Specifically, an appropriate choice of $T_1$ gives the minimum attack cost, and the non-target arms are pulled only a sublinear number of rounds; both orders are larger than the corresponding orders for attacking LinUCB in Theorem 2.
RobustPhE has an additional regret term that scales with the total corruption (assuming the corruption level is unknown to the bandit algorithm). If we view the second-stage attack model as the underlying bandit model, the rewards generated in the first stage should be considered corrupted rewards. The first stage thus contributes an amount of corruption proportional to its length, which leads to an additional term compared with the results in Theorem 2. Hence, the adversary can only run the first stage for fewer iterations, yet spends more attack cost there. On the other hand, this also simplifies the design of the attack, as lines 19-20 in Algorithm 2 are not necessary: the corruption incurred in the first stage can be absorbed by the robustness of RobustPhE. The unattacked rewards of the target arm in the second stage are viewed as a misspecification of the attack parameter with a small error, which leads to the second term in the bound. Our success in attacking RobustPhE does not violate the robustness claim in the original paper (Bogunovic et al., 2021): RobustPhE only tolerates a limited (sublinear) amount of corruption, and our attack cost exceeds that tolerance.
5 Experiments
Figure 2: (a) Total cost of attack. (b) Target arm pulls.
We use simulation-based experiments to validate the effectiveness and cost-efficiency of our proposed attack methods. In our simulations, we generate a size-$k$ arm pool $\mathcal{A}$, in which each arm $a$ is associated with a context vector $x_a \in \mathbb{R}^d$. Each dimension of $x_a$ is drawn from a zero-mean Gaussian distribution whose variance is sampled from a uniform distribution, and each $x_a$ is then normalized to unit $L_2$ norm. The bandit model parameter $\theta^*$ is sampled in the same way and normalized to unit $L_2$ norm. We set the dimension $d$ to 10, the standard deviation of the Gaussian noise to 0.1, and the arm pool size $k$ to 30 in our simulations. We run each experiment for $T$ iterations. To create an attackable environment, we re-sample the environment until it is attackable, following Theorem 1.
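A sketch of this environment-generation procedure is shown below; choosing the worst arm under $\theta^*$ as the target and the specific rejection-sampling loop are our own illustrative choices (the paper only states that environments are re-sampled until attackable, following Theorem 1), and the code relies on the `attackability_index` sketch from Section 3.

```python
import numpy as np

def sample_attackable_environment(k=30, d=10, seed=0):
    """Re-sample random environments until the attackability condition of Theorem 1 holds."""
    rng = np.random.default_rng(seed)
    while True:
        variances = rng.uniform(size=d)
        X = rng.normal(scale=np.sqrt(variances), size=(k, d))    # per-dimension Gaussian features
        X /= np.linalg.norm(X, axis=1, keepdims=True)            # normalize each arm to unit L2 norm
        theta_star = rng.normal(size=d)
        theta_star /= np.linalg.norm(theta_star)
        target = int(np.argmin(X @ theta_star))                  # e.g., pick the worst arm as the target
        x_t = X[target]
        theta_par = (x_t @ theta_star) / (x_t @ x_t) * x_t
        index, _ = attackability_index(X, target, theta_par)
        if index is not None and index > 1e-6:
            return X, theta_star, target
```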
We compare the two proposed attack methods, the oracle null space attack and the two-stage null space attack, against LinUCB (Li et al., 2010) and Robust Phase Elimination (RobustPhE) (Bogunovic et al., 2021). We report average results over 10 runs, where in each run we randomly sample an attackable environment. Both the oracle attack and the two-stage attack effectively fool the two bandit algorithms to pull the target arm a linear number of times, as shown in Figure 2(b), and the total cost of the attack is shown in Figure 2(a). We observe that both attack methods are cost-efficient with a sublinear total cost, while the two-stage attack requires a higher attack cost when attacking the same bandit algorithm. Specifically, we notice that the adversary spends almost linear cost in the first stage. This is because in the first stage the adversary attacks according to the imitation parameter, which leads to an almost constant cost at every round; this helps the adversary estimate the bandit model parameter in order to construct the attack parameter $\hat\theta$. In the second stage, the cost grows much more slowly, since the adversary only attacks the non-target arms. We also notice that for the same attack method, attacking RobustPhE requires a higher cost and yields fewer target arm pulls compared with attacking LinUCB, due to the robustness of the algorithm.
6 Related Work
Adversarial attacks on bandit algorithms were first studied in the stochastic multi-armed bandit setting (Jun et al., 2018; Liu and Shroff, 2019) and recently in linear contextual bandits (Garcelon et al., 2020). These works share a similar attack idea: lowering the rewards of non-target arms while not modifying the reward of the target arm. However, as our attackability analysis reveals, this idea can fail in a linear stochastic bandit environment, where one cannot always lower the rewards of non-target arms without modifying the reward of the target arm, due to their correlation. This insight is a key reason that gives rise to unattackable environments. Ma et al. (2018) also considered the attackability issue of linear bandits, but under the setting of offline data poisoning attacks, where the adversary has the power to modify the rewards in history. There are also several recent works on reward poisoning attacks against reinforcement learning (Yu and Sra, 2019; Zhang et al., 2020; Rakhsha et al., 2021, 2020), but with a quite different focus from ours. Besides reward poisoning attacks, recent works have also studied other threat models such as action poisoning attacks (Liu and Lai, 2020, 2021).
A parallel line of work focuses on improving the robustness of bandit algorithms. Lykouris et al. (2018) proposed a robust MAB algorithm, and Gupta et al. (2019) further improved the solution with an additive regret dependency on the attack cost. Zimmert and Seldin (2021) and Masoudian and Seldin (2021) proposed best-of-both-worlds solutions for stochastic and adversarial bandits, which also solve stochastic bandits with adversarial corruption. Ito (2021) further proposed an optimally robust algorithm against adversarial corruption. These works assume a weaker, oblivious adversary who determines the manipulation before the bandit algorithm pulls an arm. Hajiesmaili et al. (2020) studied robust adversarial bandit algorithms. Guan et al. (2020) proposed a median-based robust bandit algorithm for probabilistic unbounded attacks. Bogunovic et al. (2021) proposed the Robust Phase Elimination algorithm for linear stochastic bandits under a stronger adversary (same as ours), which can tolerate a certain amount of corruption even when the total corruption is unknown to the algorithm; we showed that our two-stage null space attack can effectively attack this algorithm with sublinear cost. Recently, Zhao et al. (2021) proposed an OFUL-style robust algorithm that can handle an infinite action set, but it tolerates a more limited amount of corruption.
7 Conclusion
In this paper, we studied the problem of poisoning attacks in $k$-armed linear stochastic bandits. Different from context-free stochastic bandits and $k$-armed linear contextual bandits, where the environment is always attackable, we showed that some linear stochastic bandit environments are not attackable due to the correlation among arms. We characterized the attackability condition as the feasibility of a CQP based on the geometry of the arms' context vectors. Our key insight is that given the ground-truth parameter $\theta^*$, the adversary should perform the oracle attack that lowers the rewards of non-target arms in the null space of the target arm's context vector $x_{\tilde a}$. Based on this insight, we proposed a two-stage null space attack that works without the knowledge of $\theta^*$ and applied it against LinUCB and Robust Phase Elimination. We showed that the proposed attack methods are effective and cost-efficient, both theoretically and empirically.
As future work, it is interesting to study the lower bound of the attack cost in linear stochastic bandits and to design a cost-optimal attack method with a matching upper bound. One idea is to design a multi-stage attack method following the doubling trick, which is briefly discussed in Appendix C.3. Although the analysis could be much more challenging than for our two-stage attack, it may lead to a lower attack cost as well as handle an unknown time horizon $T$. Another intriguing direction is to study an algorithm-oblivious choice of the length of the first stage, or more generally algorithm-oblivious attack strategies, that can achieve sublinear cost for arbitrary no-regret algorithms without knowledge of the specific algorithm being attacked.
Acknowledgements
This work was supported in part by National Science Foundation Grant IIS-2128019, IIS-1618948, IIS-1553568, Bloomberg Data Science Ph.D. Fellowship, and a Google Faculty Research Award.
References
- Abbasi-yadkori et al., (2011) Abbasi-yadkori, Y., Pál, D., and Szepesvári, C. (2011). Improved algorithms for linear stochastic bandits. In NIPS, pages 2312–2320.
- Auer, (2002) Auer, P. (2002). Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3:397–422.
- Bogunovic et al., (2021) Bogunovic, I., Losalka, A., Krause, A., and Scarlett, J. (2021). Stochastic linear bandits robust to adversarial attacks. In International Conference on Artificial Intelligence and Statistics, pages 991–999. PMLR.
- Chapelle and Li, (2011) Chapelle, O. and Li, L. (2011). An empirical evaluation of thompson sampling. In Advances in neural information processing systems, pages 2249–2257.
- Durand et al., (2018) Durand, A., Achilleos, C., Iacovides, D., Strati, K., Mitsis, G. D., and Pineau, J. (2018). Contextual bandits for adapting treatment in a mouse model of de novo carcinogenesis. In Machine Learning for Healthcare Conference, pages 67–82.
- Garcelon et al., (2020) Garcelon, E., Roziere, B., Meunier, L., Tarbouriech, J., Teytaud, O., Lazaric, A., and Pirotta, M. (2020). Adversarial attacks on linear contextual bandits. Advances in Neural Information Processing Systems, 33.
- Guan et al., (2020) Guan, Z., Ji, K., Bucci Jr, D. J., Hu, T. Y., Palombo, J., Liston, M., and Liang, Y. (2020). Robust stochastic bandit algorithms under probabilistic unbounded adversarial attack. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 4036–4043.
- Gupta et al., (2019) Gupta, A., Koren, T., and Talwar, K. (2019). Better algorithms for stochastic bandits with adversarial corruptions. In Conference on Learning Theory, pages 1562–1578. PMLR.
- Hajiesmaili et al., (2020) Hajiesmaili, M., Talebi, M. S., Lui, J., Wong, W. S., et al. (2020). Adversarial bandits with corruptions: Regret lower bound and no-regret algorithm. Advances in Neural Information Processing Systems, 33.
- Ito, (2021) Ito, S. (2021). On optimal robustness to adversarial corruption in online decision problems. Advances in Neural Information Processing Systems, 34:7409–7420.
- Jun et al., (2018) Jun, K.-S., Li, L., Ma, Y., and Zhu, J. (2018). Adversarial attacks on stochastic bandits. In Advances in Neural Information Processing Systems, pages 3640–3649.
- Lattimore et al., (2020) Lattimore, T., Szepesvari, C., and Weisz, G. (2020). Learning with good feature representations in bandits and in rl with a generative model. In International Conference on Machine Learning, pages 5662–5670. PMLR.
- Li et al., (2021) Li, C., Wu, Q., and Wang, H. (2021). Unifying clustered and non-stationary bandits. In International Conference on Artificial Intelligence and Statistics, pages 1063–1071. PMLR.
- Li et al., (2010) Li, L., Chu, W., Langford, J., and Schapire, R. E. (2010). A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web, pages 661–670. ACM.
- Liu and Shroff, (2019) Liu, F. and Shroff, N. (2019). Data poisoning attacks on stochastic bandits. In International Conference on Machine Learning, pages 4042–4050.
- Liu and Lai, (2020) Liu, G. and Lai, L. (2020). Action-manipulation attacks on stochastic bandits. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3112–3116. IEEE.
- Liu and Lai, (2021) Liu, G. and Lai, L. (2021). Provably efficient black-box action poisoning attacks against reinforcement learning. Advances in Neural Information Processing Systems, 34.
- Lykouris et al., (2018) Lykouris, T., Mirrokni, V., and Paes Leme, R. (2018). Stochastic bandits robust to adversarial corruptions. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 114–122. ACM.
- Ma et al., (2018) Ma, Y., Jun, K.-S., Li, L., and Zhu, X. (2018). Data poisoning attacks in contextual bandits. In International Conference on Decision and Game Theory for Security, pages 186–204. Springer.
- Masoudian and Seldin, (2021) Masoudian, S. and Seldin, Y. (2021). Improved analysis of the tsallis-inf algorithm in stochastically constrained adversarial bandits and stochastic bandits with adversarial corruptions. In Conference on Learning Theory, pages 3330–3350. PMLR.
- Rakhsha et al., (2020) Rakhsha, A., Radanovic, G., Devidze, R., Zhu, X., and Singla, A. (2020). Policy teaching via environment poisoning: Training-time adversarial attacks against reinforcement learning. In International Conference on Machine Learning, pages 7974–7984. PMLR.
- Rakhsha et al., (2021) Rakhsha, A., Zhang, X., Zhu, X., and Singla, A. (2021). Reward poisoning in reinforcement learning: Attacks against unknown learners in unknown environments. arXiv preprint arXiv:2102.08492.
- Vershynin, (2018) Vershynin, R. (2018). High-dimensional probability: An introduction with applications in data science, volume 47. Cambridge university press.
- Yu and Sra, (2019) Yu, T. and Sra, S. (2019). Efficient policy learning for non-stationary mdps under adversarial manipulation. arXiv preprint arXiv:1907.09350.
- Zhang et al., (2020) Zhang, X., Ma, Y., Singla, A., and Zhu, X. (2020). Adaptive reward-poisoning attacks against reinforcement learning. arXiv preprint arXiv:2003.12613.
- Zhao et al., (2021) Zhao, H., Zhou, D., and Gu, Q. (2021). Linear contextual bandits with adversarial corruptions. arXiv preprint arXiv:2110.12615.
- Zimmert and Seldin, (2021) Zimmert, J. and Seldin, Y. (2021). Tsallis-inf: An optimal algorithm for stochastic and adversarial bandits. J. Mach. Learn. Res., 22:28–1.
Appendix A Notations
For clarity, we collect the notations used in the paper below.
$x_{\tilde a}$ | Context vector of the target arm
$x_a$ | Context vector of arm $a$
$\theta^*$ | Unknown bandit model parameter of the environment
$\theta_\parallel$ | Projection of $\theta^*$ onto $x_{\tilde a}$
$r_t$ | Unattacked reward feedback at time $t$
$\eta_t$ | Sub-Gaussian noise in the reward, i.e., $r_t = x_{a_t}^\top\theta^* + \eta_t$
$\tilde r_t$ | Attacked reward
$\hat\theta_t$ | Parameter estimated by the victim bandit algorithm with attacked rewards
$\hat\theta_\parallel$ | Parameter parallel to $x_{\tilde a}$, estimated by the adversary with unattacked rewards
$\tilde\theta$ | Parameter of the adversary's attack strategy
$\theta_\perp$ | Attackability certificate, the parameter orthogonal to $x_{\tilde a}$, solved from CQP (2)
$\epsilon^*$ | Attackability index, the optimal objective of CQP (2)
$\hat\epsilon$ | Estimated index, the optimal objective of CQP (2) with $\hat\theta_\parallel$ replacing $\theta_\parallel$
Appendix B Details on Attackability of Linear Stochastic Bandits
B.1 Necessity Proof of Theorem 1
To prove its necessity, we will rely on the following results.
Lemma 2.
Suppose arm is pulled times till round by LinUCB. Its confidence bound in LinUCB satisfies
(6) |
with probability at least , where . Furthermore, we have
(7) |
with probability at least .
Proof.
In Abbasi-yadkori et al., (2011), the exploration bonus term is computed as . Denote . Since , we have . We can thus bound by
(8) |
Claim 1.
Target arm is pulled times till round . According to Lemma 2, we have
(10) |
Lemma 3.
Suppose arm is pulled times till round by LinUCB, and other arms are pulled times in total. Confidence bound of any arm that is not parallel to satisfies
(11) |
with probability at least , where and constant . Furthermore, we have
(12) |
with probability at least
Proof.
Since , we need to show a lower bound of . Since , we decompose , where . By the reverse triangle inequality we have
(13) |
First we analyze the term . Decompose and let . Since , we have
There are at most terms in the summation of , thus
Then we have
(14) |
Claim 2.
Non-target arms are pulled times till round . According to Lemma 3, any arm satisfies
(17) |
with probability at least .
Lemma 4.
Suppose the non-target arms are pulled times, the arm is pulled times, and the total manipulation is . With probability at least , reward estimation error satisfies
(18) |
Proof.
Note that . In the last step, the first term is because there are number of in and , and is bounded by total manipulation . Similarly, in the second term we have , and is the self-normalized bound for vector-valued martingales following Theorem 1 in Abbasi-yadkori et al., (2011). ∎
Now we are ready to prove that the index in CQP (2) being positive is the necessary condition of an attackable environment.
Proof of Necessity of Theorem 1.
We prove that if $\epsilon^* \le 0$, the bandit environment is not attackable. To prove this, we show that there exists some no-regret bandit algorithm (LinUCB in particular) such that no attack strategy can succeed. In particular, we will show that LinUCB (with a specific choice of its L2-regularization parameter $\lambda$) is robust under any attack strategy with $o(T)$ cost when $\epsilon^* \le 0$. We prove it by contradiction: assume LinUCB is attackable with $o(T)$ cost when $\epsilon^* \le 0$. According to Definition 1, the target arm will be pulled $T - o(T)$ times for infinitely many different time horizons $T$, and the following inequalities hold when arm $\tilde a$ is pulled by LinUCB:
(19) |
where $\hat\theta_t$ is LinUCB's estimated parameter at round $t$ based on the attacked rewards. We decompose $\hat\theta_t = \hat\theta_{t,\parallel} + \hat\theta_{t,\perp}$, where $\hat\theta_{t,\parallel}$ is parallel to $x_{\tilde a}$ and $\hat\theta_{t,\perp}$ is orthogonal to $x_{\tilde a}$. We will now show that the above inequalities lead to a contradiction with $\epsilon^* \le 0$ when $t$ is large.
By Lemma 4 we have
Substitute them back and we have that with probability at least ,
(20) |
Let us first consider the case of . Substitute Claim 1 and Claim 2 back and we have with probability at least
Taking $t$ large and noticing that the last three terms diminish to $0$ faster than the third term, there must exist a threshold such that for any sufficiently large $t$,
(21) |
satisfies when .
Now we consider the special case that some and show that the above inequality is still strict. Let . If , we have . If , since is pulled linear times for any large with sublinear cost, then ; otherwise the cost would be linear. We directly have . This leads to (inequality (21)) since .
Combining the two cases, we know there must exist a that satisfies inequality (21) (the first constraint of CQP (2)), (the second constraint of CQP (2)), and makes the objective of CQP (2) larger than . To satisfy the last constraint, we consider LinUCB with the choice of L2-regularization parameter that guarantees given the data for large enough and any . Note that ridge regression is equivalent to minimizing square loss under some constraint , and there always exists a one-to-one correspondence between and (one simple way to show the correspondence is using KKT conditions). Therefore, we are guaranteed to find a that gives us where is an arbitrarily small constant. Then we know that satisfies . We prove the last constraint by the fact that and is arbitrarily small for large enough according to Lemma 4.
Now we proved that there exists a that satisfies all the constraints in CQP (2) with positive objective, which means the optimal objective must also be positive. This however contradicts our assumption , implying that such LinUCB is not attackable by any attack strategy. ∎
Appendix C Details on Effective Attacks Without Knowledge of Model Parameters
We now prove the theorems of using Two-stage Null Space Attack (Algorithm 2) against LinUCB and Robust Phase Elimination.
C.1 Proof of Theorem 2
Proof of Theorem 2.
We first prove that for a large enough $T_1$, Algorithm 2 will correctly assert the attackability with probability at least $1-\delta$. We rely on the following lemma to show that the estimate $\hat\theta_\parallel$ obtained in step 11 of Algorithm 2 is close to the true parameter $\theta_\parallel$.
Lemma 5 (Estimation error of ).
Algorithm 2 estimates by
(22) |
With probability at least , the estimation error is bounded by
(23) |
where the rewards have -sub-Gaussian noise.
Proof.
is the projected vector of onto , which is
as defined in Eq (1). Though we need to estimate the vector , we actually only need to estimate the scale of it by , since the direction is known to be . Based on Hoeffding’s inequality, the estimation error of averaged rewards on is bounded by
(24) |
where . Considering and we finish the proof. ∎
In the first stage, the bandit algorithm will pull the target arm $T_1 - o(T_1)$ times, since the target arm is the best arm according to the first-stage imitation parameter. According to Lemma 5, with probability at least $1-\delta$ the estimation error is bounded as in Eq (23).
As a result, with a large enough $T_1$, the error of the target arm's reward estimate is smaller than a constant fraction of the attackability index.
Thus, solving CQP (2) with $\hat\theta_\parallel$ replacing $\theta_\parallel$, we can correctly assert attackability with a positive estimated index when the environment is attackable with index $\epsilon^* > 0$.
Remark 3.
From the analysis above, we notice that the adversary requires a sufficiently large $T_1$ to collect enough rewards on the target arm in order to make a correct attackability assertion. When $T_1$ is not large enough, the attackability test may reach false positive or false negative conclusions. We empirically test the error rate and report the results in Appendix D.
We now prove the success and the total cost of the proposed attack. The analysis relies on the "robustness" property of LinUCB stated in Lemma 1, which is restated and proved below.
Lemma 6 (Robustness of ridge regression).
Consider LinUCB with ridge regression under a poisoning attack. Let $C(t)$ denote the total corruption on non-target arms until time $t$, and assume every corruption on the target arm is bounded by a constant. Then for any $\delta \in (0,1)$, with probability at least $1-\delta$ we have
(25) |
where .
Proof.
Based on the closed form solution of ridge regression, we have
Therefore, using ideas from Abbasi-yadkori et al., (2011), we can have
with probability at least , where is the times target arm has been pulled. The second step is based on the definition of and introduces the high probability bound, the fourth step is because we have if ; the fifth step is because of ; and the second last inequality follows Eq (8). Finally, notice that and we finish the proof. ∎
Let us first analyze the attack in the first stage. Denote as the regret of LinUCB until round , where is the ground-truth parameter. We know from Abbasi-yadkori et al., (2011) that if the rewards are all generated by then with probability at least we have
(26) |
where . Then the attack in the first rounds based on should make the bandit algorithm pull at least times. According to Lemma 5, with probability at least parameter estimation error is bounded by
(27) |
for large enough . This means we have
(28) |
Now we prove the attack is successful with high probability. Consider the regret of LinUCB against $\hat\theta$ as the ground-truth parameter. The estimation error in $\hat\theta_t$ has three sources: the sub-Gaussian noise, the rewards on non-target arms in the first stage generated by the imitation parameter (the rewards on the target arm are corrected in steps 19-20 of Algorithm 2), and the unattacked rewards on the target arm in the second stage, which are generated by $\theta^*$. According to Lemma 1, with probability at least $1-\delta$, we have
To show the number of rounds pulling non-target arms, we first look at the regret against , i.e., .
holds with probability at least . And LinUCB will pull non-target arms at most times, which can be bounded by
and is in the order of
(29) |
The term is due to the “corrupted” rewards of non-target arms observed in the first stage. Setting gives us the minimum number of rounds pulling non-target arms in according to Eq (29).
Now we bound the total cost. Note that in order to make the attack "stealthy", we inject sub-Gaussian noise into the perturbed reward to make it stochastic. We separate the total cost into the cost of changing the mean reward and the cost of the injected sub-Gaussian noise as
(30) |
where the two terms correspond to the mean-reward change and the injected noise, respectively. Let $N$ denote the total number of attacked rounds. Since the number of rounds attacking non-target arms is bounded by Eq (29) and the target arm is attacked at most $T_1$ times, we have with probability at least $1-\delta$,
(31) |
when setting .
Notice that since is -sub-Gaussian, its absolute value is also -sub-Gaussian, and for some constant following Proposition 2.5.2 in Vershynin, (2018). According to the general Hoeffding’s inequality, with probability at least ,
(32) |
Thus the second term of Eq (30) has the same order as the number of attacked rounds. The first term of Eq (30) is bounded similarly, because each attack changes the mean reward by at most 2 except for the compensation step in line 20, and the reward compensation on the target arm can be bounded because the target arm is pulled at most $T_1$ times in the first stage. Overall, with probability at least
(33) |
when setting .
∎
C.2 Proof Sketch of Corollary 2
The proof is similar to the proof of Theorem 2, and thus we only explain the difference here.
Instead of using Lemma 1 to analyze the impact of the perturbed first-stage rewards, we use the fact that RobustPhE has an additional regret term that scales with the total corruption $C$ (assuming $C$ is unknown to the bandit algorithm). The bandit algorithm observes $T_1$ such rewards in the first stage, so the additional regret due to the first stage scales with $T_1$. For the unattacked rewards on the target arm in the second stage generated by $\theta^*$, we view them as rewards generated by $\hat\theta$ with a small misspecification error. Proposition 5.1 in Lattimore et al. (2020) shows that the phase elimination algorithm with misspecification incurs additional regret proportional to the misspecification level. Combining this with the estimation error bound in Eq (28), the total regret is
with probability at least . Therefore, we have with probability at least , the attack strategy will fool RobustPhE to pull non-target arms at most rounds.
Similar to Eq (31), we bound the total rounds of attacking RobustPhE by
(34) |
From Eq (33), we know the total cost is in the same order as the rounds of attack. So with probability at least the total cost is
Setting gives us the minimum attack cost , and the non-target arms are pulled at most rounds.
Remark 4.
Note that we account for the first-stage rewards as part of the total corruption, which means the adversary does not need to compensate the rewards on the target arm as in line 20 of Algorithm 2. The robustness of RobustPhE allows us to carry over the rewards from the first stage, while LinUCB does not.
C.3 Attack under unknown
Our two-stage null space attack algorithm requires that $T$ is known, for the convenience of analysis. Here we discuss a promising idea of leveraging the doubling trick to extend the attack to the case of unknown $T$, which leads to a multi-stage attack that repeatedly adjusts the attack parameter at each stage. More concretely, to attack LinUCB without knowing $T$, the adversary can start with a pre-specified horizon and run the two-stage attack. Once the actual horizon reaches the pre-specified value, the adversary doubles the horizon (i.e., the "doubling" trick), views all previous rounds as the new first stage, re-calculates the attack parameter using the rewards from those rounds, and runs the new attack strategy until the new horizon. Using this exponential horizon sequence, we obtain a multi-stage attack on LinUCB with an unknown time horizon. Similarly, to attack RobustPhE, we would need to adjust the horizon sequence, which leads to a similar multi-stage attack on RobustPhE. We believe this direction is feasible based on our current analysis, and a more thorough and complete analysis is left for future work.
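A tiny helper sketching such an exponential horizon schedule is shown below; the doubling factor, the restart rule, and the function name are illustrative assumptions rather than a specification from the paper.

```python
def doubling_horizons(T1, T_max):
    """Exponential horizon schedule for the multi-stage (doubling-trick) attack sketch:
    each stage ends when the current pre-specified horizon is reached, at which point the
    horizon is doubled and the attack parameter is re-estimated from all past rounds."""
    horizons, h = [], T1
    while h < T_max:
        horizons.append(h)
        h *= 2
    horizons.append(T_max)
    return horizons
```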
Appendix D Additional Experiments

In Figure 3, we study the false negative rate of the attackability test in Algorithm 2, i.e., how often the adversary mistakenly asserts that an attackable environment is not attackable. As we explained in the proof of Theorem 2, a wrong assertion is caused by using the estimate $\hat\theta_\parallel$ instead of the ground-truth bandit parameter. In this experiment, we consider a challenging attackable three-arm environment whose attackability index is small (the third arm is introduced to constrain the feasible certificates). We test the two-stage null space attack against LinUCB, where the adversary tests the attackability after the first $T_1$ rounds; we vary $T_1$ to see how many iterations are sufficient for the attackability test and report averaged results over 100 runs. We also vary the standard deviation of the Gaussian noise from 0.1 to 0.3. In Figure 3, we can see that the false negative rate drops to almost zero once $T_1$ is large enough; under the smallest noise level, the adversary only needs around 10 rounds to make a correct assertion. We also notice that the false negative rate becomes higher under a larger noise scale. As suggested by Lemma 5, the error in estimating $\theta_\parallel$ is larger if the noise scale is larger or the number of observed target-arm rewards is smaller, which depends heavily on $T_1$. A larger error means CQP (2) with $\hat\theta_\parallel$ is more likely to be infeasible and to give a false negative assertion. However, a moderate $T_1$ is still enough for the attackability test even under the larger noise levels.