Online Minimax Multiobjective Optimization:
Multicalibeating and Other Applications
Abstract
We introduce a simple but general online learning framework in which a learner plays against an adversary in a vector-valued game that changes every round. Even though the learner’s objective is not convex-concave (and so the minimax theorem does not apply), we give a simple algorithm that can compete with the setting in which the adversary must announce their action first, with optimally diminishing regret. We demonstrate the power of our framework by using it to (re)derive optimal bounds and efficient algorithms across a variety of domains, ranging from multicalibration to a large set of no regret algorithms, to a variant of Blackwell’s approachability theorem for polytopes with fast convergence rates. As a new application, we show how to “(multi)calibeat” an arbitrary collection of forecasters — achieving an exponentially improved dependence on the number of models we are competing against, compared to prior work.
1 Introduction
We introduce and study a simple but powerful framework for online adversarial multiobjective minimax optimization. At each round $t$, an adaptive adversary chooses an environment for the learner to play in, defined by a convex compact action set $\mathcal{X}^t$ for the learner, a convex compact action set $\mathcal{Y}^t$ for the adversary, and a $d$-dimensional continuous loss function $\ell^t : \mathcal{X}^t \times \mathcal{Y}^t \to [-C, C]^d$ that, in each coordinate, is convex in the learner’s action and concave in the adversary’s action. The learner then chooses an action, or distribution over actions, $x^t \in \mathcal{X}^t$, and the adversary responds with an action $y^t \in \mathcal{Y}^t$. This results in a loss vector $\ell^t(x^t, y^t)$, which accumulates over time. The learner’s goal is to minimize the maximum accumulated loss over each of the $d$ dimensions: $\max_{j \in [d]} \sum_{t=1}^{T} \ell^t_j(x^t, y^t)$.
One may view the environment chosen at each round as defining a zero-sum game in which the learner wishes to minimize the maximum coordinate of the resulting loss vector. The objective of the learner in the stage game in isolation can be written as
$w^t_L := \inf_{x \in \mathcal{X}^t}\ \max_{y \in \mathcal{Y}^t}\ \max_{j \in [d]}\ \ell^t_j(x, y).$
(A brief aside about the “inf max max” structure of $w^t_L$: since each $\ell^t_j$ is continuous, so is $\max_{j \in [d]} \ell^t_j$, and hence the inner maximum is attained on the compact set $\mathcal{Y}^t$. However, $\max_{y \in \mathcal{Y}^t} \max_{j \in [d]} \ell^t_j(x, y)$ may not be a continuous function of $x$, and therefore the infimum over $\mathcal{X}^t$ need not be attained.)
Unfortunately, although $\ell^t$ is convex-concave in each coordinate, the maximum over coordinates does not preserve concavity for the adversary. Thus the minimax theorem does not hold, and the value of the game in which the learner moves first (defined above) can be larger than the value of the game in which the adversary moves first — that is, $w^t_L \geq w^t_A$, where $w^t_A$ is defined as:
$w^t_A := \sup_{y \in \mathcal{Y}^t}\ \inf_{x \in \mathcal{X}^t}\ \max_{j \in [d]}\ \ell^t_j(x, y).$
Nevertheless, fixing a series of environments chosen by the adversary, this defines in hindsight an aspirational quantity $\sum_{t=1}^{T} w^t_A$, summing the adversary-moves-first values of the constituent zero sum games. Despite the fact that these values are not individually obtainable in the stage games, we show that they are approachable on average over a sequence of rounds, i.e., there is an algorithm for the learner that guarantees that against any adversary,
$\max_{j \in [d]} \sum_{t=1}^{T} \ell^t_j(x^t, y^t) - \sum_{t=1}^{T} w^t_A \leq O\!\left(C\sqrt{T \log d}\right).$
Our derivation is elementary and based on a minimax argument, and is a development of a game-theoretic argument from the calibration literature due to Hart (2020) and Fudenberg and Levine (1999). (This argument was extended in Gupta et al. (2022) to obtain fast rates and explicit algorithms for multicalibration and multivalidity.) The generic algorithm plays actions at every round $t$ according to a minimax equilibrium strategy in a surrogate game that is derived both from the environment chosen by the adversary at round $t$, as well as from the history of play on the previous rounds $1, \ldots, t-1$. The loss in the surrogate game is convex-concave (and so we may apply minimax arguments), and can be used to upper bound the loss in the original games.
We then show that this simple framework can be instantiated to derive a wide array of optimal bounds, and that the corresponding algorithms can be derived in closed form by solving for the minimax equilibrium of the corresponding surrogate game. Despite its simplicity, our framework has a number of applications to online learning— we sketch these below.
“Multi-Calibeating”:
Foster and Hart (2021) recently introduced the notion of “calibeating” an arbitrary online forecaster: making online calibrated predictions about an adversarially chosen sequence of inputs that are guaranteed to have lower squared error than an arbitrary predictor $f$, where the improvement in error approaches $f$’s calibration error in hindsight. Foster and Hart give two methods for calibeating an arbitrary collection of predictors simultaneously, but these methods have, respectively, an exponential and a polynomial dependence of their convergence bounds on the number of predictors being competed with.
Using our framework, we can derive optimal bounds for online multicalibration (Hébert-Johnson et al., 2018; Gupta et al., 2022), and as an application, obtain bounds for calibeating an arbitrary collection of models with only a logarithmic dependence on the number of models. Our algorithm naturally extends to the more general problem of online “multi-calibeating” — i.e. combining the goals of online multicalibration and calibeating. Namely, we give an algorithm for making real-valued predictions given contexts from some context space $\Theta$. The algorithm is parameterized by (i) a collection $\mathcal{G}$ of (arbitrary, potentially intersecting) subsets of $\Theta$ that we might envision to represent e.g. different demographic groups in a setting in which we are making predictions about people; and (ii) an arbitrary collection $\mathcal{F}$ of predictors. We promise that our predictions are calibrated not just overall, but simultaneously within each group $G \in \mathcal{G}$ — and moreover, that we calibeat each predictor $f \in \mathcal{F}$ not just overall, but simultaneously within each group $G \in \mathcal{G}$. We do this by proving an online analogue of what Hébert-Johnson et al. (2018) call a “do no harm” property in the batch setting using a similar technique: multicalibrating with respect to the level sets of the predictors.
Fast Polytope Blackwell Approachability:
We give a variant of Blackwell’s Approachability Theorem (Blackwell, 1956) for approaching a polytope. Standard methods approach a set in Euclidean distance, at a rate polynomial in the payoff dimension. In contrast, we give a dimension-independent approachability guarantee: we approximately satisfy all halfspace constraints defining the polytope after logarithmically many rounds in the number of constraints, a significant improvement over a polynomial dimensional dependence in many settings. Our guarantee is equivalent to the results of Perchet (2015), which show that the negative orthant is approachable in the $\ell_\infty$ metric with a $\sqrt{\ln d}$ dependence on the dimension in the convergence rate. This result follows immediately from a specialization of our framework that does not require changing the environment at each round, highlighting the connection between our framework and approachability. We remark that approachability has been extended in a number of ways in recent years (Mannor et al., 2014a, b; Perchet and Mannor, 2013). However, most of our other applications take advantage of the flexibility of our framework to play a different game at each round (which can be defined by context) with potentially different action sets, and so do not directly follow from Blackwell approachability. Therefore, while many of our regret bounds could be derived from approachability to the negative orthant by enlarging the action space exponentially to simulate aspects of our framework, this approach would not easily lead to efficient algorithms.
Recovering Expert Learning Bounds:
Algorithms and optimal bounds for various expert learning problems fall naturally out of our framework as corollaries. This includes external regret (Vovk, 1990; Littlestone and Warmuth, 1994), internal and swap regret (Foster and Vohra, 1998; Hart and Mas-Colell, 2000; Blum and Mansour, 2007), adaptive regret (Littlestone and Warmuth, 1994; Hazan and Seshadhri, 2009; Adamskiy et al., 2012), sleeping experts (Freund et al., 1997; Blum, 1997; Blum and Mansour, 2007; Kleinberg et al., 2010), and the recently introduced multi-group regret (Blum and Lykouris, 2020; Rothblum and Yona, 2021). Multi-group regret refers to a contextual prediction problem in which the learner gets a context from a context space before each round. It is parameterized by a collection of groups $\mathcal{G}$: e.g., if the predictions concern people, $\mathcal{G}$ may represent an arbitrary, intersecting set of demographic groups. Here the “experts” are different models that make predictions on each instance; the goal is to attain no regret not just overall, but also on the subset of rounds corresponding to contexts from each $G \in \mathcal{G}$. Multi-group regret, like multicalibration, is one of the few solution concepts in the algorithmic fairness literature known not to involve tradeoffs with overall accuracy (Globus-Harris et al., 2022). Blum and Lykouris (2020) derived their algorithm for online multigroup regret via a reduction to sleeping experts, and Gupta et al. (2022) derived their algorithm for online multicalibration via a direct argument. Here we derive online algorithms for both multicalibration and multigroup regret as corollaries of the same fundamental framework.
1.1 Additional Related Work
Papers by Azar et al. (2014) and Kesselheim and Singla (2020) study a related problem: an online setting with vector-valued losses, where the goal is to minimize the $\ell_\infty$ norm of the accumulated loss vector (they also consider other $\ell_p$-norms). However, they study an incomparable benchmark that in our notation would be written as $\min_{x \in \mathcal{X}} \max_{j \in [d]} \sum_{t=1}^{T} \ell_j(x, y^t)$ (which is well-defined in their setting, where loss functions and action sets are fixed throughout the interaction). On the one hand, this benchmark is stronger than ours in the sense that the maximum over coordinates is taken outside the sum over time, whereas our benchmark considers a “greedy” per-round maximum. On the other hand, in our setting the game can be different at every round, so our benchmark allows a comparison to a different action at each round rather than a single fixed action. In the setting of Kesselheim and Singla (2020), it is impossible to give any regret bound to their benchmark, so they derive an algorithm obtaining a competitive ratio to this benchmark. In contrast, our benchmark admits a regret bound. Hence, our results are quite different in kind despite the outward similarity of the settings: none of our applications follow from their theorems (since in all of our applications, we derive regret bounds).
A different line of work (Rakhlin et al., 2010, 2011) takes a very general minimax approach towards deriving bounds in online learning, including regret minimization, calibration, and approachability. Their approach is substantially more powerful than the framework we introduce here (e.g. it can be used to derive bounds for infinite dimensional problems, and characterizes online learnability in the sense that it can also be used to prove lower bounds). However, it is also correspondingly more complex, and requires analyzing the continuation value of a $T$-round dynamic program. Such analyses are generally technically challenging; as an example, a recent line of work by Drenska and Kohn (2020) and Kobzar et al. (2020) considers a Rakhlin et al.-style minimax formulation of the standard experts problem, and shows how to find nonlinear PDE-based minimax solutions for the Learner and the Adversary that can be optimal not just asymptotically in the number of experts (dimensions) $d$, but also nonasymptotically for small $d$ (such as $d = 2$ or $d = 3$); their PDE approach is also conducive to bounding not just the maximum regret across dimensions, but also more general functions of the individual dimensions’ losses.
Overall, results derived from the Rakhlin et al. framework (with some notable exceptions, including Rakhlin et al. (2012)) are generically nonconstructive, whereas our framework is simple and inherently constructive, in that the algorithm derives from repeatedly solving a one-round stage zero-sum game. Relative to this literature, we view our framework as a “user-friendly” power tool that can be used to derive a wide variety of algorithms and bounds without much additional work — at the cost of not being universally expressive.
2 General Framework
2.1 The Setting
A Learner (she) plays against an Adversary (he) over rounds $t \in [T]$. Over these rounds, she accumulates a $d$-dimensional loss vector ($d \geq 1$), where each round’s loss vector lies in $[-C, C]^d$ for some $C > 0$. At each round $t$, the Learner and the Adversary interact as follows:
1. Before round $t$, the Adversary selects and reveals to the Learner an environment comprising:
(a) The Learner’s and Adversary’s respective convex compact action sets $\mathcal{X}^t, \mathcal{Y}^t$, embedded into a finite-dimensional Euclidean space;
(b) A continuous vector loss function $\ell^t : \mathcal{X}^t \times \mathcal{Y}^t \to [-C, C]^d$, with each $\ell^t_j$ (for $j \in [d]$) convex in the 1st and concave in the 2nd argument.
2. The Learner selects some $x^t \in \mathcal{X}^t$.
3. The Adversary observes the Learner’s selection $x^t$, and responds with some $y^t \in \mathcal{Y}^t$.
4. The Learner suffers (and observes) the loss vector $\ell^t(x^t, y^t)$.
The Learner’s objective is to minimize the value of the maximum dimension of the accumulated loss vector after $T$ rounds—in other words, to minimize:
$\max_{j \in [d]}\ \sum_{t=1}^{T} \ell^t_j(x^t, y^t).$
To benchmark the Learner’s performance, we consider the following quantity at each round $t$:
Definition 2.1 (The Adversary-Moves-First (AMF) Value at Round $t$).
The Adversary-Moves-First value of the game defined by the environment at round $t$ is:
$w^t_A := \sup_{y \in \mathcal{Y}^t}\ \min_{x \in \mathcal{X}^t}\ \max_{j \in [d]}\ \ell^t_j(x, y).$
If the Adversary had to reveal $y^t$ first and the Learner could best respond, $w^t_A$ would be the smallest value of the maximum coordinate of $\ell^t(x^t, y^t)$ that she could guarantee. However, the function $\max_{j \in [d]} \ell^t_j(\cdot, \cdot)$ is not convex-concave (as the maximum over coordinates does not preserve concavity in the Adversary’s action); hence the minimax theorem does not apply, making this value unobtainable for the Learner, who is in fact obligated to reveal $x^t$ first. However, we can define regret to a benchmark given by the cumulative AMF values of the games:
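As a simple illustration of this gap (an example we give for concreteness, not part of the formal development), take $d = 2$, $\mathcal{X}^t = \mathcal{Y}^t = [0,1]$, and the linear (hence convex-concave) losses $\ell^t_1(x, y) = x - y$ and $\ell^t_2(x, y) = y - x$. Then
$\min_{x \in [0,1]} \max_{y \in [0,1]} \max\{x - y,\ y - x\} = \min_{x \in [0,1]} \max\{x,\ 1 - x\} = \tfrac{1}{2}, \qquad \sup_{y \in [0,1]} \min_{x \in [0,1]} \max\{x - y,\ y - x\} = 0,$
so the Learner moving first can only guarantee $\tfrac{1}{2}$ in the stage game, while the AMF value is $0$: the benchmark below only asks her to match the sum of these smaller values over time.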
Definition 2.2 (Adversary-Moves-First (AMF) Regret).
On transcript $\pi^t$, we define the Learner’s Adversary Moves First (AMF) Regret for dimension $j \in [d]$ at time $t$ to be:
$R^t_j := \sum_{s=1}^{t} \ell^s_j(x^s, y^s) - \sum_{s=1}^{t} w^s_A.$
The overall AMF Regret is then defined as $R^t := \max_{j \in [d]} R^t_j$. (We will generally elide the dependence on the transcript and simply write $R^t_j$ and $R^t$.)
Again, the game played at each round is not convex-concave, so we cannot hope to guarantee $R^T \leq 0$. Instead, we will aim to obtain sublinear AMF regret, worst-case over adaptive adversaries: $R^T = o(T)$.
2.2 General Algorithm
Our algorithmic framework will be based on a natural idea: instead of directly grappling with the maximum coordinate of the cumulative vector valued loss, we upper bound the AMF regret with a one-dimensional “soft-max” surrogate loss function, which the algorithm will then aim to minimize.
Definition 2.3 (Surrogate loss).
Fixing a parameter $\eta > 0$, we define our surrogate loss function after round $t$ (it implicitly depends on the transcript through the respective round $t$) as:
$L^t := \sum_{j \in [d]} \exp\left(\eta\, R^t_j\right).$
This surrogate loss tightly bounds the AMF regret $R^t$:
Lemma 2.1.
The Learner’s AMF Regret is upper bounded using the surrogate loss as:
$R^t \leq \frac{\ln L^t}{\eta} \quad \text{for all } t \in [T].$
Next we observe a simple but important bound on the per-round increase in the surrogate loss.
Lemma 2.2.
For any $\eta \in \left(0, \frac{1}{2C}\right]$, any transcript through round $t-1$, and any $(x^t, y^t) \in \mathcal{X}^t \times \mathcal{Y}^t$, it holds that:
$L^t \leq L^{t-1}\left(1 + 4\eta^2 C^2\right) + \eta \sum_{j \in [d]} \exp\left(\eta R^{t-1}_j\right)\left(\ell^t_j(x^t, y^t) - w^t_A\right).$
The proof is very simple (see Appendix A.1): we write out the quantity $L^t$, use the definition of the AMF regret $R^t_j$, and then bound $\exp\left(\eta\left(\ell^t_j(x^t, y^t) - w^t_A\right)\right)$ via the inequality $e^x \leq 1 + x + x^2$ for $|x| \leq 1$.
We now exploit Lemma 2.2 to bound the final surrogate loss and obtain a game-theoretic algorithm for the Learner that attains this bound. While the above steps should remind the reader of a standard derivation of the celebrated Exponential Weights algorithm via bounding a log-sum-exp potential function, the next lemma is the novel ingredient that makes our framework significantly more general by relying on Sion’s powerful generalization of the Minimax Theorem to convex-concave games.
Lemma 2.3.
For any $\eta \in \left(0, \frac{1}{2C}\right]$, the Learner can ensure that the final surrogate loss is bounded as:
$L^T \leq d\, \exp\left(4\eta^2 C^2 T\right).$
Proof sketch; see Appendix A.1.
Define, for $t \in [T]$, continuous convex-concave functions $u^t : \mathcal{X}^t \times \mathcal{Y}^t \to \mathbb{R}$ by: $u^t(x, y) := \sum_{j \in [d]} \exp\left(\eta R^{t-1}_j\right)\left(\ell^t_j(x, y) - w^t_A\right)$. If the Learner can ensure $u^t(x^t, y^t) \leq 0$ on all rounds $t$ regardless of the Adversary’s play, then Lemma 2.2 implies $L^t \leq L^{t-1}\left(1 + 4\eta^2 C^2\right)$ for all $t$, leading to the desired bound on $L^T$. Due to the continuous convex-concave nature of each $u^t$ (inherited from the loss coordinates $\ell^t_j$), we can apply Sion’s Minimax Theorem to conclude that:
$\min_{x \in \mathcal{X}^t} \max_{y \in \mathcal{Y}^t} u^t(x, y) = \max_{y \in \mathcal{Y}^t} \min_{x \in \mathcal{X}^t} u^t(x, y).$
In words, the Learner has a so-called minimax-optimal strategy $x^t \in \mathcal{X}^t$ that achieves (worst-case over all $y \in \mathcal{Y}^t$) value as low as if the Adversary moved first and the Learner could best-respond. But in the latter counterfactual scenario, using the definitions of $u^t$ and the Adversary-moves-first value $w^t_A$, we can easily see that by best-responding to the Adversary, the Learner would always guarantee herself value at most $0$: that is, $\max_{y \in \mathcal{Y}^t} \min_{x \in \mathcal{X}^t} u^t(x, y) \leq 0$. Thus, $\min_{x \in \mathcal{X}^t} \max_{y \in \mathcal{Y}^t} u^t(x, y) \leq 0$, and so by playing minimax-optimally at every round $t$, the Learner will guarantee $u^t(x^t, y^t) \leq 0$ for all $t$, leading to the desired regret bound. ∎
In fact, via a simple algebraic transformation (see Appendix A.1), taking advantage of the values $w^t_A$ being independent of the actions $(x, y)$, we can explicitly express the Learner’s minimax optimal strategies at all rounds $t$ as:
$x^t \in \arg\min_{x \in \mathcal{X}^t} \max_{y \in \mathcal{Y}^t} \sum_{j \in [d]} \chi^{t-1}_j\, \ell^t_j(x, y), \qquad \text{where} \quad \chi^{t-1}_j := \frac{\exp\left(\eta R^{t-1}_j\right)}{\sum_{k \in [d]} \exp\left(\eta R^{t-1}_k\right)}.$
Together with the proof of Lemma 2.3, this immediately gives the following algorithm for the Learner that achieves the desired bound on $L^T$ (and thus, as we will show, on the AMF regret $R^T$).
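To make the resulting procedure concrete, here is a minimal Python sketch of this per-round rule. It is not the paper’s pseudocode for Algorithm 1: the interface for supplying each round’s environment and the helper names `solve_weighted_minimax` and `respond` are our own assumptions, standing in for a user-supplied convex-concave solver and the Adversary’s reply.

```python
import numpy as np

def amf_learner(environments, solve_weighted_minimax, eta, d):
    """Sketch of the generic Learner: maintain per-coordinate AMF regrets R_j,
    form exponential weights chi_j over coordinates, and play a minimax-optimal
    action of the weighted surrogate game at each round.

    environments: iterable of (losses, w_A, respond) triples, one per round, where
      losses(x, y) -> length-d loss vector, w_A is the round's AMF value, and
      respond(x) is the Adversary's (adaptive) reply to the Learner's action.
    solve_weighted_minimax(losses, chi): user-supplied solver returning an
      (approximate) minimizer x of max_y sum_j chi[j] * losses(x, y)[j].
    """
    R = np.zeros(d)                           # cumulative AMF regrets R^{t-1}_j
    for losses, w_A, respond in environments:
        logits = eta * (R - R.max())          # shift for numerical stability
        chi = np.exp(logits)
        chi /= chi.sum()                      # weights chi^{t-1}_j over coordinates
        x = solve_weighted_minimax(losses, chi)
        y = respond(x)                        # Adversary moves after seeing x
        R += np.asarray(losses(x, y)) - w_A   # update per-coordinate AMF regrets
        yield x
```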
Theorem 2.1 (AMF Regret guarantee of Algorithm 1).
For any $T \geq \ln d$, Algorithm 1 with learning rate $\eta = \sqrt{\frac{\ln d}{4C^2 T}}$ obtains, against any Adversary, AMF regret bounded by:
$R^T \leq 4C\sqrt{T \ln d}.$
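Concretely, the bound follows by combining Lemma 2.1 with Lemma 2.3 and optimizing the learning rate:
$R^T \leq \frac{\ln L^T}{\eta} \leq \frac{\ln d}{\eta} + 4\eta C^2 T, \qquad \text{which at } \eta = \sqrt{\frac{\ln d}{4C^2 T}} \text{ equals } 4C\sqrt{T \ln d}$
(the constraint $\eta \leq \frac{1}{2C}$ is what requires $T \geq \ln d$).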
Remark 2.1.
Our framework is easy to adapt to the setting where the Learner randomizes, at each round, amongst a finite set of actions (i.e. $\mathcal{X}^t = \Delta \mathcal{A}^t$ for a finite pure action set $\mathcal{A}^t$), and wishes to obtain in-expectation and high-probability AMF regret bounds. This is useful in all our applications below. Additionally, our AMF regret bounds are robust to the Learner playing only an approximate (rather than exact) minimax strategy at each round: we use this to derive our simple multicalibration algorithm below. See Appendix A.2 for both these extensions.
3 Deriving No-X-Regret Algorithms from Our Framework
The core of our framework — the Adversary-Moves-First regret — is strictly more general than a very large variety of known regret notions including: external, internal, swap, adaptive, sleeping-experts, multigroup, and wide-range ($\Phi$) regret. Specifically, in Appendix E, we use our framework to derive simple no-regret algorithms for what we call subsequence regret, which encapsulates all these regret forms. In each of these cases, our generic algorithm is efficient, and often specializes (by computing a minimax equilibrium strategy in closed form) to simple combinatorial algorithms that had been derived from first principles in prior work. We note that in any problem that involves context or changing action spaces (as the sleeping experts problem does), we are taking advantage of the flexibility of our framework to present a different environment at every round, which distinguishes our framework from more standard Blackwell approachability arguments. In fact, as we will see in Section 5 below, our framework recovers fast Blackwell approachability as a special case.
For our general subsequence regret algorithms, please see Appendix E. Now, as a warm-up application of our framework, we directly instantiate it for the simplest case of obtaining external regret.
Simple Learning From Expert Advice: External Regret
In the classical experts learning setting of Littlestone and Warmuth (1994), the Learner has a set of pure actions (“experts”) $\mathcal{A} = [n]$. At the outset of each round $t$, the Learner chooses a distribution over experts $x^t \in \Delta[n]$. The Adversary then comes up with a vector of losses $y^t \in [0,1]^n$ corresponding to each expert. Next, the Learner samples $a^t \sim x^t$, and experiences loss corresponding to the expert she chose: $y^t_{a^t}$. The Learner also gets to observe the entire vector of losses $y^t$ for that round. The goal of the Learner is to achieve sublinear external regret — that is, to ensure that the difference between her cumulative loss and the loss of the best fixed expert in hindsight grows sublinearly with $T$:
$\sum_{t=1}^{T} y^t_{a^t} - \min_{j \in [n]} \sum_{t=1}^{T} y^t_j = o(T).$
Theorem 3.1.
Fix a finite pure action set $\mathcal{A} = [n]$ for the Learner and a time horizon $T \geq \ln n$. Then, an instantiation of our framework’s Algorithm 2 lets the Learner achieve the following regret bounds: expected external regret at most $4\sqrt{T \ln n}$, and, for any $\delta \in (0,1)$, external regret at most $4\sqrt{T \ln n} + 8\sqrt{T \ln(1/\delta)}$ with probability at least $1 - \delta$.
Proof.
We instantiate (the probabilistic version of) our framework (see Section A.2.1).
At all rounds, the Learner’s pure action set is $\mathcal{A} = [n]$, and the Adversary’s strategy space is the convex and compact set $[0,1]^n$, from which each round’s collection of all actions’ losses $y^t = (y^t_1, \ldots, y^t_n)$ is selected. Next, we define an $n$-dimensional loss function $\ell^t$, where each coordinate loss $\ell^t_j$ expresses the regret of the Learner’s chosen action relative to action $j$:
$\ell^t_j(a, y) := y_a - y_j \quad \text{for } a \in [n],\ y \in [0,1]^n.$
By Theorem A.1, $\mathbb{E}\left[\max_{j \in [n]} \sum_{t=1}^{T} \ell^t_j(a^t, y^t)\right] \leq \sum_{t=1}^{T} w^t_A + 4\sqrt{T \ln n}$, where $w^t_A$ is the AMF value at round $t$. Using this AMF regret bound, we can bound the Learner’s external regret as:
$\mathbb{E}\left[\sum_{t=1}^{T} y^t_{a^t} - \min_{j \in [n]} \sum_{t=1}^{T} y^t_j\right] = \mathbb{E}\left[\max_{j \in [n]} \sum_{t=1}^{T} \ell^t_j(a^t, y^t)\right] \leq \sum_{t=1}^{T} w^t_A + 4\sqrt{T \ln n}.$
It thus remains to show that the AMF value satisfies $w^t_A \leq 0$ for all $t$. This holds, since if the Learner knew the Adversary’s choice of losses $y^t$ before round $t$, then picking the action with the smallest loss would get her nonpositive regret in that round. (Formally, for any vector of actions’ losses $y \in [0,1]^n$, define $a^*(y) \in \arg\min_{a \in [n]} y_a$ and notice that $\ell^t_j(a^*(y), y) = y_{a^*(y)} - y_j \leq 0$ for all $j \in [n]$. Hence, the AMF value is indeed nonpositive at each round: $w^t_A = \sup_{y \in [0,1]^n} \min_{x \in \Delta[n]} \max_{j \in [n]} \mathbb{E}_{a \sim x}\left[\ell^t_j(a, y)\right] \leq \sup_{y \in [0,1]^n} \max_{j \in [n]} \ell^t_j(a^*(y), y) \leq 0$.) This gives the in-expectation regret bound; the high-probability bound follows in the same way from Theorem A.2. ∎
A bound of $\Theta(\sqrt{T \ln n})$ is optimal for external regret in the experts learning setting, and so serves to witness the optimality of our framework’s general AMF regret bound in Theorem 2.1.
In fact, the above instantiation of Algorithm 2 yields the classical Exponential Weights algorithm of Littlestone and Warmuth (1994): at each round $t$, each action $a \in [n]$ is sampled with probability proportional to $\exp\left(-\eta \sum_{s < t} y^s_a\right)$. We denote this distribution by $x^t_{\mathrm{EW}}$.
Indeed, given the above defined loss $\ell^t$, the Learner solves the following problem at each round:
$x^t \in \arg\min_{x \in \Delta[n]}\ \max_{y \in [0,1]^n}\ \sum_{j \in [n]} \chi^{t-1}_j\ \mathbb{E}_{a \sim x}\left[y_a - y_j\right],$
where $\chi^{t-1}_j \propto \exp\left(\eta \sum_{s < t} \ell^s_j(a^s, y^s)\right) \propto \exp\left(-\eta \sum_{s < t} y^s_j\right)$. That is, the per-coordinate weights $\chi^{t-1}$ themselves form the Exponential Weights distribution with rate $\eta$.
For any choice of $y$ by the Adversary, the quantity inside the expectation, $y_a - y_j$, is antisymmetric in $a$ and $j$: that is, $y_a - y_j = -(y_j - y_a)$. Due to this antisymmetry, no matter which $y$ gets selected by the Adversary, by playing $x^t = \chi^{t-1}$ the Learner obtains objective value $0$, thus achieving the value of the game. It is also easy to see that $\chi^{t-1}$ is the unique choice of $x^t$ that guarantees nonpositive value, hence Algorithm 2, when specialized to the external regret setting, is equivalent to the Exponential Weights Algorithm 5.
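For concreteness, here is a minimal sketch of this specialization (the function and variable names are ours; the update is the standard Exponential Weights rule just described):

```python
import numpy as np

def exponential_weights(loss_stream, n, eta, seed=0):
    """Experts specialization of the framework: sample expert a with probability
    proportional to exp(-eta * cumulative loss of a), then observe the full loss vector."""
    rng = np.random.default_rng(seed)
    cum = np.zeros(n)                       # sum_{s<t} y^s_a for each expert a
    for y in loss_stream:                   # y: length-n vector of losses in [0, 1]
        logits = -eta * (cum - cum.min())   # shift for numerical stability
        x = np.exp(logits)
        x /= x.sum()
        a = int(rng.choice(n, p=x))         # Learner's sampled expert
        cum += np.asarray(y, dtype=float)   # full-information update
        yield a, x
```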
4 Multicalibration and Multicalibeating
We now apply our framework to derive an online contextual prediction algorithm which simultaneously satisfies a (potentially very large) family of strong adversarial accuracy and calibration conditions. Namely, given an arbitrarily complex collection $\mathcal{G}$ of subsets of the context space (we call them “groups”, a term from the fairness literature), the predictor will be both calibrated and accurate on each group $G \in \mathcal{G}$ (that is, over those online rounds when the context belongs to $G$).
The accuracy benchmark that we aim to satisfy was recently proposed by Foster and Hart (2021), who called it calibeating: given any collection of online forecasters, the goal is (intuitively) to “beat” the (squared) error of each forecaster $f$ by at least the calibration score of $f$.
In Section 4.1, we use our framework to rederive the online multigroup calibration (known as multicalibration) algorithm of Gupta et al. (2022). In Section 4.2, we show that by appropriately augmenting the original collection of groups $\mathcal{G}$, this algorithm will, in addition to multicalibration, calibeat any family of predictors $\mathcal{F}$ on every group $G \in \mathcal{G}$, which we call multicalibeating.
4.1 Multicalibration
Setting
There is a feature (or context) space $\Theta$ encoding the set of possible feature vectors representing individuals. There is also a label space $[0, 1]$. At every round $t$:
1. The Adversary announces a particular individual $\theta^t \in \Theta$, whose label is to be predicted;
2. The Learner predicts a label distribution $\tilde{x}^t$ over $[0, 1]$;
3. The Adversary observes $\tilde{x}^t$, and fixes the true label distribution $\tilde{y}^t$ over $[0, 1]$;
4. The (pure) guessed label $x^t \sim \tilde{x}^t$ and the (pure) true label $y^t \sim \tilde{y}^t$ are sampled.
Objective: Multicalibration
The Learner is initially given an arbitrary collection $\mathcal{G}$ of protected population groups $G \subseteq \Theta$. Her goal, multicalibration, is empirical calibration not just marginally over the whole population, but also conditionally on individual membership in each $G \in \mathcal{G}$. Formally, for any integer $n \geq 1$ we let the $n$-bucketing of the label interval $[0, 1]$ be its partition into $n$ subintervals $\left[0, \tfrac{1}{n}\right), \left[\tfrac{1}{n}, \tfrac{2}{n}\right), \ldots, \left[\tfrac{n-1}{n}, 1\right]$. The $i$-th of these intervals (buckets) is denoted $B_i(n)$, or simply $B_i$.
Definition 4.1 ($(\alpha, n)$-Multicalibration with respect to $\mathcal{G}$).
Fix a real $\alpha > 0$ and an integer $n \geq 1$. Given the transcript of the interaction, the Learner’s sequence of guessed labels $x^1, \ldots, x^T$ is $(\alpha, n)$-multicalibrated with respect to the collection of groups $\mathcal{G}$ if, for every $G \in \mathcal{G}$ and every bucket $i \in [n]$:
$\frac{1}{T}\left|\sum_{t:\ \theta^t \in G,\ x^t \in B_i(n)} \left(x^t - y^t\right)\right| \leq \alpha.$
Using our framework, we now derive the guarantee on $\alpha$ that matches that of Gupta et al. (2022).
Theorem 4.1 (Multicalibration).
For any collection $\mathcal{G}$, any bucketing parameter $n \geq 1$, and any discretization parameter $r \geq 1$, an instantiation of our framework’s Algorithm 2 is, in expectation, $(\alpha, n)$-multicalibrated with respect to $\mathcal{G}$ for $\alpha \leq \frac{1}{rn} + 4\sqrt{\frac{\ln\left(2|\mathcal{G}|n\right)}{T}}$, with a similar guarantee holding with high probability.
Sketch.
Setting up the game: The Adversary’s strategy space is the set of distributions over the label space $[0, 1]$. The Learner will randomize over the discretized grid $\mathcal{P}_{rn} := \left\{0, \frac{1}{rn}, \frac{2}{rn}, \ldots, 1\right\}$, for any choice of integer $r \geq 1$ (this will ensure continuity of the loss functions that we are about to define), i.e., her strategy space is $\Delta \mathcal{P}_{rn}$.
Loss functions: The definition of multicalibration consists of $2|\mathcal{G}|n$ constraints (one for each sign $\sigma \in \{-1, 1\}$, group $G \in \mathcal{G}$, and bucket $i \in [n]$) of the following form: $\frac{\sigma}{T}\sum_{t:\ \theta^t \in G,\ x^t \in B_i}\left(x^t - y^t\right) \leq \alpha$. Thus, we define (for each $\sigma$, $G$, $i$, and round $t$) a loss function over $\mathcal{P}_{rn} \times [0,1]$ as:
$\ell^t_{(G, i, \sigma)}(x, y) := \sigma\,\mathbb{1}\!\left[\theta^t \in G,\ x \in B_i\right](x - y).$
Now, defining a $2|\mathcal{G}|n$-dimensional loss vector $\ell^t := \left(\ell^t_{(G, i, \sigma)}\right)_{G, i, \sigma}$ for each round $t$ recasts multicalibration in our framework as requiring that $\frac{1}{T}\max_{(G, i, \sigma)} \sum_{t=1}^{T} \ell^t_{(G, i, \sigma)}(x^t, y^t) \leq \alpha$.
Bounding the AMF regret: To bound the Adversary-Moves-First value $w^t_A$ with these loss functions, suppose the Adversary announces his label distribution. Then, we easily see that by (deterministically) responding with a grid point of $\mathcal{P}_{rn}$ within $\frac{1}{rn}$ of the mean of that distribution, for all coordinates $(G, i, \sigma)$, the expected loss is at most $\frac{1}{rn}$. Hence, $w^t_A \leq \frac{1}{rn}$.
Now, for an appropriate choice of the learning rate $\eta$, the AMF regret $R^T$, by our framework’s guarantees, satisfies $\mathbb{E}\left[R^T\right] \leq 4\sqrt{T \ln\left(2|\mathcal{G}|n\right)}$ over the Learner’s randomness. Since $w^t_A \leq \frac{1}{rn}$ for every round $t$, we get $\mathbb{E}\left[\max_{(G, i, \sigma)} \sum_{t=1}^{T} \ell^t_{(G, i, \sigma)}(x^t, y^t)\right] \leq \frac{T}{rn} + 4\sqrt{T \ln\left(2|\mathcal{G}|n\right)}$.
This gives $(\alpha, n)$-multicalibration with $\alpha \leq \frac{1}{rn} + 4\sqrt{\frac{\ln\left(2|\mathcal{G}|n\right)}{T}}$ in expectation. The high-probability bound on $\alpha$ is obtained similarly.
Simplifying Learner’s algorithm: To attain the AMF value at each round, our framework has the Learner solve a linear program (that encodes her minimax strategy). However, she can obtain almost the optimal value without solving an LP: this observation gives Algorithm 3 (see Appendix B). The guarantees on $\alpha$ only differ from the optimal ones by a small additional term of order $\frac{1}{rn}$. ∎
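To convey the flavor of that simplification, here is a schematic sketch of the two-point randomization step — our own rendering, not the pseudocode of Algorithm 3. The per-grid-point quantities `scores`, whose exact exponential-weights form is spelled out in Appendix B, are taken as given, and the sign convention is purely illustrative.

```python
import numpy as np

def two_point_prediction(scores, grid, rng):
    """Schematic per-round choice: given one score per grid point (computed from
    the history; see Appendix B), either play an endpoint of the grid
    deterministically, or randomize over two adjacent grid points so that the
    Adversary's best-response value is driven to (approximately) zero."""
    c = np.asarray(scores, dtype=float)
    if np.all(c >= 0):
        return grid[0]            # one extreme prediction suffices
    if np.all(c <= 0):
        return grid[-1]           # the other extreme prediction suffices
    # otherwise, some adjacent pair of grid points has scores of opposite signs
    k = next(i for i in range(len(c) - 1)
             if c[i] >= 0 >= c[i + 1] or c[i] <= 0 <= c[i + 1])
    denom = abs(c[k]) + abs(c[k + 1])
    p = abs(c[k + 1]) / denom if denom > 0 else 1.0   # zeroes out p*c[k] + (1-p)*c[k+1]
    return grid[k] if rng.random() < p else grid[k + 1]
```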
4.2 Multicalibeating
We now give an approach to “beating” arbitrary collections of online forecasters via online multicalibration. The goal, called calibeating by Foster and Hart (2021) who introduce the problem, is to make calibrated forecasts that are more accurate than each of an arbitrary set of forecasters, by exactly the calibration error in hindsight of that forecaster. They achieve optimal calibeating bounds for a single forecaster, but their extension to calibeating multiple forecasters incurs at least a polynomial dependence on the number of forecasters. We achieve a logarithmic dependence on the number of forecasters. Additionally, we are able to simultaneously calibeat forecasters on all (big enough) subgroups in some collection $\mathcal{G}$, with still only a logarithmic dependence on $|\mathcal{G}|$ and the number of forecasters in the group-wise convergence bound. We call this multicalibeating. We now give an overview of our setting, results, and techniques. For full details, see Appendix C.
Setting
The Learner (predictor $x$) and the Adversary (true labels $y^t$) interact in the same way as in Section 4.1, but the Adversary additionally reveals to the Learner a finite set of forecasters $\mathcal{F}$, where each $f \in \mathcal{F}$ is a function mapping the current context (and, possibly, the history so far) to a forecast in $D_f$. Here $D_f \subset [0, 1]$ is assumed to be a finite set of all possible forecasts that $f$ makes: it will characterize the level sets of $f$. We often suppress the dependence on the transcript, denoting by $f^t$ the forecast at time $t$.
The Learner’s goal is to “improve on” the forecasts of all $f \in \mathcal{F}$, for some suitable scoring of the predictions. We measure the Learner’s and the forecasters’ accuracy via the squared error, alternatively known as the Brier score.
Definition 4.2 (Brier Score).
The Brier score of a forecaster $f$ over all $T$ rounds is defined as:
$\mathcal{B}_T(f) := \frac{1}{T}\sum_{t=1}^{T}\left(f^t - y^t\right)^2.$
The Brier score can be decomposed into so-called calibration and refinement parts. The former quantifies the extent to which the predictor is calibrated, while the latter expresses the average variance of the true labels within each level set (calibration bucket) of the forecaster.
To define this decomposition, we need some extra notation. We denote by $S_i$ the subsequence of days on which the Learner’s prediction is in bucket $B_i$. (Note that $S_i$ depends implicitly on the bucketing parameter $n$ and the transcript.) Similarly, $S^f_d$ (eliding $f$ when clear from context) denotes days on which forecaster $f$ predicts $d \in D_f$. We let $S_{i,d} := S_i \cap S^f_d$. Finally, we use bars to indicate averages over given subsequences. For instance, $\bar{x}_S := \frac{1}{|S|}\sum_{t \in S} x^t$ is the Learner’s average prediction over a given subsequence $S$, and $\bar{y}_S$ is the corresponding average true label.
Definition 4.3 (Calibration and Refinement).
The calibration score $\mathcal{K}_T(f)$ and refinement score $\mathcal{R}_T(f)$ of a forecaster $f$ over the full transcript are defined as:
$\mathcal{K}_T(f) := \frac{1}{T}\sum_{d \in D_f} |S_d|\left(d - \bar{y}_{S_d}\right)^2, \qquad \mathcal{R}_T(f) := \frac{1}{T}\sum_{d \in D_f}\sum_{t \in S_d}\left(y^t - \bar{y}_{S_d}\right)^2.$
Fact 1 (Calibration-Refinement Decomposition of Brier Score (DeGroot and Fienberg, 1983)).
$\mathcal{B}_T(f) = \mathcal{K}_T(f) + \mathcal{R}_T(f)$.
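A quick numerical check of this decomposition (our own illustration, using the average-over-rounds normalization of the definitions above):

```python
import numpy as np

def brier_decomposition(forecasts, outcomes):
    """Return (brier, calibration, refinement), each averaged over rounds;
    brier == calibration + refinement holds exactly (DeGroot-Fienberg)."""
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    T = len(forecasts)
    brier = np.mean((forecasts - outcomes) ** 2)
    calibration = refinement = 0.0
    for d in np.unique(forecasts):            # level sets S_d of the forecaster
        y_d = outcomes[forecasts == d]
        calibration += len(y_d) * (d - y_d.mean()) ** 2 / T
        refinement += np.sum((y_d - y_d.mean()) ** 2) / T
    return brier, calibration, refinement

# e.g. forecasts on a grid, binary outcomes:
b, k, r = brier_decomposition([0.2, 0.2, 0.7, 0.7, 0.7], [0, 1, 1, 1, 0])
assert np.isclose(b, k + r)
```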
The goal of calibeating is to beat the forecaster’s Brier score by an amount equal to its calibration score. Or equivalently, to attain a Brier score (almost) equal to the refinement score of the forecaster.
Definition 4.4 (Calibeating).
The Learner’s predictor $x$ is said to $\beta$-calibeat a forecaster $f$ if:
$\mathcal{B}_T(x) \leq \mathcal{R}_T(f) + \beta.$
We will now extend the definition of calibeating simultaneously along two natural directions. First, we will want to calibeat multiple forecasters at once. The second extension is that we will want to calibeat the forecasters not just overall, but also on each of the subsequences corresponding to each “population group” in a given family of subpopulations .
Definition 4.5 (Multicalibeating).
Given a family of forecasters $\mathcal{F}$, groups $\mathcal{G}$, and a mapping $\beta : \mathcal{G} \times \mathcal{F} \to \mathbb{R}_{\geq 0}$, the Learner’s predictor $x$ is an $(\mathcal{F}, \mathcal{G}, \beta)$-multicalibeater if for every $G \in \mathcal{G}$ and $f \in \mathcal{F}$, the Learner’s Brier score restricted to the rounds with $\theta^t \in G$ is at most the correspondingly restricted refinement score of $f$, plus $\beta(G, f)$.
Note that $(\{f\}, \{\Theta\}, \beta)$-multicalibeating is equivalent to $\beta(\Theta, f)$-calibeating a forecaster $f$.
We first show how to calibeat a single forecaster (Definition 4.4). The modularity of multicalibration will then let us easily extend this result to multiple forecasters and population subgroups.
The idea is to show that if our predictor is multicalibrated with respect to the level sets of $f$, then we achieve calibeating. Hébert-Johnson et al. (2018) give a similar bound in the batch setting. We denote the collection of level sets of $f$ as $\mathcal{G}_f$ (one group for each forecast value $d \in D_f$).
Theorem 4.2 (Calibeating One Forecaster).
Suppose that the Learner’s predictions are $(\alpha, n)$-multicalibrated on the collection of groups $\mathcal{G}_f \cup \{\Theta\}$. Then the Learner is $(\alpha, n)$-calibrated on $\Theta$, and she $\beta$-calibeats forecaster $f$, for a $\beta$ that scales as $O\!\left(\alpha\, n\, |D_f| + \frac{1}{n}\right)$.
Proof sketch.
We show that $x$ has a small calibration score, and a refinement score close to that of $f$.
Step 1: Replace $\mathcal{B}_T(x)$ with a surrogate Brier score $\tilde{\mathcal{B}}_T$. Consider a (pseudo-)predictor $\tilde{x}$ given by $\tilde{x}^t := \bar{x}_{S_i}$ for $t \in S_i$ (where $B_i$ is the bucket of $x^t$). That is, whenever $x^t \in B_i$, $\tilde{x}$ predicts the average of the Learner’s predictions over all rounds $s$ such that $x^s \in B_i$. This is a pseudo-predictor, as the bucket averages are unknown until after round $T$. Thus, $\tilde{x}$ has precisely $n$ level sets, unlike $x$. Now, we define $\tilde{\mathcal{B}}_T, \tilde{\mathcal{K}}_T, \tilde{\mathcal{R}}_T$ to be the Brier, calibration, and refinement scores of $\tilde{x}$. We can show $\mathcal{B}_T(x) \leq \tilde{\mathcal{B}}_T + \frac{2}{n}$, allowing us to switch to bounding the more manageable Brier loss $\tilde{\mathcal{B}}_T$.
Step 2: Bound the surrogate calibration score $\tilde{\mathcal{K}}_T$. Since the Learner is $(\alpha, n)$-calibrated on the domain $\Theta$, the calibration error per level set is at most $\alpha$. There are $n$ level sets, so $\tilde{\mathcal{K}}_T \leq \alpha n$.
Step 3: Bound the surrogate refinement score $\tilde{\mathcal{R}}_T$. We connect $\tilde{x}$ and $f$ via a joint refinement score $\mathcal{R}_T(\tilde{x} \vee f)$, which measures the average variance within the partition generated by all intersections of the level sets of $\tilde{x}$ and $f$. The finer the partition, the smaller the refinement score, so $\mathcal{R}_T(\tilde{x} \vee f) \leq \min\left\{\tilde{\mathcal{R}}_T,\ \mathcal{R}_T(f)\right\}$. Next, informally, multicalibration ensures that $\tilde{x}$ has already “captured” most of the variance explained by $f$. Therefore, refining $\tilde{x}$’s level sets by $f$ does little to reduce variance. More precisely, we show that $\tilde{\mathcal{R}}_T$ exceeds $\mathcal{R}_T(\tilde{x} \vee f)$ only by an error term that scales with $\alpha$, $n$, and $|D_f|$. Combining with our previous inequality, we have that $\tilde{\mathcal{R}}_T \leq \mathcal{R}_T(f)$ plus the same error term.
Combining the above, we get: $\mathcal{B}_T(x) \leq \tilde{\mathcal{K}}_T + \tilde{\mathcal{R}}_T + \frac{2}{n} \leq \mathcal{R}_T(f) + \beta$ for the claimed $\beta$. ∎
Calibeating many forecasters
Generalizing the above construction, we can easily calibeat any collection of forecasters $\mathcal{F}$ on the entire context space $\Theta$: it suffices to ask for multicalibration with respect to the level sets of all forecasters, i.e. with respect to $\bigcup_{f \in \mathcal{F}} \mathcal{G}_f$. Theorem 4.2 applies separately to each $f \in \mathcal{F}$; the only degradation in the guarantees will come in the form of a larger $\alpha$, since we are asking for multicalibration with respect to more groups than before. But this effect will be small, since $\alpha$ depends on the number of required groups only through the square root of its logarithm. See Corollary C.2.
However, to fully satisfy Definition 4.5 of multicalibeating, we need to calibeat all $f \in \mathcal{F}$ on all groups in a given collection $\mathcal{G}$. For that, we simply extend the above construction by requiring multicalibration with respect to all pairwise intersections of the forecasters’ level sets with the groups $G \in \mathcal{G}$. By further augmenting this collection with the protected groups $\mathcal{G}$ themselves, we finally achieve our ultimate goal: simultaneous multicalibeating and multicalibration.
Theorem 4.3 (Multicalibeating + Multicalibration).
Let $\mathcal{G}$ be a collection of groups, and $\mathcal{F}$ some set of forecasters. The multicalibration algorithm, run with parameters $(\alpha, n)$ on the augmented collection consisting of $\mathcal{G}$ together with all pairwise intersections of groups in $\mathcal{G}$ with the forecasters’ level sets, after $T$ rounds attains expected $(\mathcal{F}, \mathcal{G}, \beta)$-multicalibeating, where $\beta(G, f)$ degrades with $\alpha$, $n$, $|D_f|$, and the inverse empirical frequency $T / |S_G|$ of the group $G$. (Here $S_G$ denotes the subsequence of days on which a group $G$ occurs, suppressing dependence on the transcript.)
It does so while maintaining $(\alpha', n)$-multicalibration on $\mathcal{G}$, with $\alpha'$ of the same order as $\alpha$ (the enlarged number of groups enters only logarithmically).
In particular, for any group occurring on a constant fraction of the rounds, we asymptotically converge to $0$-calibeating as $T \to \infty$, thus combining the goals of online multicalibration and multigroup regret.
5 Polytope Blackwell Approachability
Consider a setting where the Learner and the Adversary are playing a repeated game with vector-valued payoffs, in which the Learner always goes first and aims to force the average payoff over the entire interaction to approach a given convex set. Blackwell’s Theorem (1956) states that a convex set is approachable if and only if it is response-satisfiable (roughly, for any choice of the Adversary, the Learner has a response forcing the one-round payoff inside the convex set). The rate of approachability typically depends on the dimension of the payoff vectors.
This is a specialization of our framework to a case in which the environment is fixed at every round. Thus our framework can be used to obtain a dimension-independent rate bound in the fundamental case where the approachable set is a convex polytope. Our bound is only logarithmic in the polytope’s number of facets, and is achievable via an efficient convex-programming based algorithm.
Let us formalize our setting. In rounds $t = 1, \ldots, T$, the Learner and the Adversary play a repeated game. Their respective pure strategy sets are $\mathcal{A}$ and $\mathcal{Y}$, where $\mathcal{A}$ is a finite set and $\mathcal{Y} \subseteq \mathbb{R}^m$ (for some integer $m$) is convex and compact. The game’s utility function is $d$-dimensional (for some integer $d$), continuous, concave in the second argument, and is denoted by $u : \mathcal{A} \times \mathcal{Y} \to \mathbb{R}^d$. At each round $t$, the Learner plays a mixed strategy $x^t \in \Delta \mathcal{A}$, the Adversary responds with some $y^t \in \mathcal{Y}$, and the Learner then samples a pure action $a^t \sim x^t$. This gives rise to the utility vector $u(a^t, y^t)$. The average play up to any round $t$ is then defined as $\bar{u}^t := \frac{1}{t}\sum_{s=1}^{t} u(a^s, y^s)$.
The target convex set that the Learner wants to approach is a polytope $P$, defined as the intersection of a finite collection of $K$ halfspaces $H_{(v_1, c_1)}, \ldots, H_{(v_K, c_K)}$, where for any given $v \in \mathbb{R}^d$ and $c \in \mathbb{R}$ we denote $H_{(v, c)} := \left\{z \in \mathbb{R}^d : \langle v, z\rangle \leq c\right\}$. Finally, by way of normalization, consider any two dual norms $\|\cdot\|$ and $\|\cdot\|_*$. We require, first, that $\|v_j\|_* \leq 1$ and $|c_j| \leq 1$ for each halfspace $j \in [K]$; and second, that the payoffs be in the $\|\cdot\|$-unit ball: $\|u(a, y)\| \leq 1$ for all $(a, y) \in \mathcal{A} \times \mathcal{Y}$.
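The reduction to the framework of Section 2 is direct; sketched here in the notation above, and assuming each halfspace payoff $\langle v_j, u(\cdot, \cdot)\rangle$ is concave in the Adversary’s action (as holds, e.g., when $u$ is affine in $y$), one introduces a loss coordinate per halfspace measuring its violation:
$\ell_j(a, y) := \langle v_j, u(a, y)\rangle - c_j, \qquad j \in [K].$
Response-satisfiability gives, for every $y$, a mixture $x_y$ with $\mathbb{E}_{a \sim x_y}\left[\ell_j(a, y)\right] = \langle v_j, \mathbb{E}_{a \sim x_y}[u(a, y)]\rangle - c_j \leq 0$ for all $j$, so the AMF value of every round is at most $0$; and since $\langle v_j, \bar{u}^t\rangle - c_j = \frac{1}{t}\sum_{s \leq t} \ell_j(a^s, y^s)$, the AMF regret guarantees of Theorems A.1 and A.2 translate directly into the margin bounds of Theorem 5.1 below, with only a $\sqrt{\ln K}$ dependence on the number of halfspaces.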
Theorem 5.1 (Polytope Blackwell Approachability).
Suppose the target convex polytope $P$ is response-satisfiable, in the sense that for any Adversary’s action $y \in \mathcal{Y}$, the Learner has a mixed response $x_y \in \Delta \mathcal{A}$ that places the expected payoff inside $P$: that is, $\mathbb{E}_{a \sim x_y}\left[u(a, y)\right] \in P$.
Then, $P$ is approachable, both in expectation and with high probability with respect to the transcript of the interaction. Namely, the Learner has an efficient convex programming based algorithm which guarantees both of the following conditions simultaneously:
1. For any margin $\epsilon > 0$, the average play up to any round $t = \Omega\!\left(\frac{\ln K}{\epsilon^2}\right)$ will satisfy, in expectation, every defining halfspace constraint of $P$ up to margin $\epsilon$: that is, $\langle v_j, \mathbb{E}\left[\bar{u}^t\right]\rangle \leq c_j + \epsilon$ for all $j \in [K]$.
2. For any $\epsilon > 0$ and $\delta \in (0, 1)$, with probability at least $1 - \delta$, the average play up to any round $t = \Omega\!\left(\frac{\ln(K/\delta)}{\epsilon^2}\right)$ will satisfy every defining halfspace constraint of $P$ up to margin $\epsilon$: that is, $\langle v_j, \bar{u}^t\rangle \leq c_j + \epsilon$ for all $j \in [K]$.
Acknowledgments
We thank Ira Globus-Harris, Chris Jung, and Kunal Talwar for helpful conversations at an early stage of this work. Supported in part by NSF grants AF-1763307, FAI-2147212, CCF-2217062, and CCF-1934876 and the Simons collaboration on algorithmic fairness.
References
- Adamskiy et al. [2012] Dmitry Adamskiy, Wouter M Koolen, Alexey Chernov, and Vladimir Vovk. A closer look at adaptive regret. In International Conference on Algorithmic Learning Theory, pages 290–304. Springer, 2012.
- Azar et al. [2014] Yossi Azar, Uriel Feige, Michal Feldman, and Moshe Tennenholtz. Sequential decision making with vector outcomes. In Proceedings of the 5th Conference on Innovations in Theoretical Computer Science, pages 195–206, 2014.
- Blackwell [1956] David Blackwell. An analog of the minimax theorem for vector payoffs. Pacific Journal of Mathematics, 6(1):1–8, 1956.
- Blum [1997] Avrim Blum. Empirical support for winnow and weighted-majority algorithms: Results on a calendar scheduling domain. Machine Learning, 26(1):5–23, 1997.
- Blum and Lykouris [2020] Avrim Blum and Thodoris Lykouris. Advancing subgroup fairness via sleeping experts. In Innovations in Theoretical Computer Science Conference (ITCS), volume 11, 2020.
- Blum and Mansour [2007] Avrim Blum and Yishay Mansour. From external to internal regret. Journal of Machine Learning Research, 8(6), 2007.
- DeGroot and Fienberg [1983] Morris H. DeGroot and Stephen E. Fienberg. The comparison and evaluation of forecasters. The Statistician, 32:12–22, 1983.
- Drenska and Kohn [2020] Nadejda Drenska and Robert V Kohn. Prediction with expert advice: A pde perspective. Journal of Nonlinear Science, 30(1):137–173, 2020.
- Dubhashi and Panconesi [2009] Devdatt P Dubhashi and Alessandro Panconesi. Concentration of measure for the analysis of randomized algorithms. Cambridge University Press, 2009.
- Foster and Hart [2021] Dean P Foster and Sergiu Hart. “calibeating”: Beating forecasters at their own game. https://www.ma.huji.ac.il/~hart/papers/calib-beat.pdf, 2021.
- Foster and Vohra [1998] Dean P Foster and Rakesh V Vohra. Asymptotic calibration. Biometrika, 85(2):379–390, 1998.
- Freund et al. [1997] Yoav Freund, Robert E Schapire, Yoram Singer, and Manfred K Warmuth. Using and combining predictors that specialize. In Proceedings of the twenty-ninth annual ACM symposium on Theory of computing, pages 334–343, 1997.
- Fudenberg and Levine [1999] Drew Fudenberg and David K Levine. An easier way to calibrate. Games and Economic Behavior, 29(1-2):131–137, 1999.
- Globus-Harris et al. [2022] Ira Globus-Harris, Michael Kearns, and Aaron Roth. Beyond the frontier: Fairness without accuracy loss. arXiv preprint arXiv:2201.10408, 2022.
- Greenwald and Jafari [2003] Amy Greenwald and Amir Jafari. A general class of no-regret learning algorithms and game-theoretic equilibria. In Learning theory and kernel machines, pages 2–12. Springer, 2003.
- Gupta et al. [2022] Varun Gupta, Christopher Jung, Georgy Noarov, Mallesh M. Pai, and Aaron Roth. Online Multivalid Learning: Means, Moments, and Prediction Intervals. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022), pages 82:1–82:24, 2022.
- Hart [2020] Sergiu Hart. Calibrated forecasts: The minimax proof, 2020. URL http://www.ma.huji.ac.il/~hart/papers/calib-minmax.pdf.
- Hart and Mas-Colell [2000] Sergiu Hart and Andreu Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127–1150, 2000.
- Hazan and Seshadhri [2009] Elad Hazan and Comandur Seshadhri. Efficient learning algorithms for changing environments. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 393–400, 2009.
- Hébert-Johnson et al. [2018] Ursula Hébert-Johnson, Michael Kim, Omer Reingold, and Guy Rothblum. Multicalibration: Calibration for the (computationally-identifiable) masses. In International Conference on Machine Learning, pages 1939–1948. PMLR, 2018.
- Kesselheim and Singla [2020] Thomas Kesselheim and Sahil Singla. Online learning with vector costs and bandits with knapsacks. In Conference on Learning Theory, pages 2286–2305. PMLR, 2020.
- Kleinberg et al. [2010] Robert Kleinberg, Alexandru Niculescu-Mizil, and Yogeshwer Sharma. Regret bounds for sleeping experts and bandits. Machine Learning, 80(2):245–272, 2010.
- Kobzar et al. [2020] Vladimir A Kobzar, Robert V Kohn, and Zhilei Wang. New potential-based bounds for prediction with expert advice. In Conference on Learning Theory, pages 2370–2405. PMLR, 2020.
- Lehrer [2003] Ehud Lehrer. A wide range no-regret theorem. Games and Economic Behavior, 42(1):101–115, 2003.
- Littlestone and Warmuth [1994] Nick Littlestone and Manfred K Warmuth. The weighted majority algorithm. Information and computation, 108(2):212–261, 1994.
- Mannor et al. [2014a] Shie Mannor, Vianney Perchet, and Gilles Stoltz. Approachability in unknown games: Online learning meets multi-objective optimization. In Conference on Learning Theory, pages 339–355. PMLR, 2014a.
- Mannor et al. [2014b] Shie Mannor, Vianney Perchet, and Gilles Stoltz. Set-valued approachability and online learning with partial monitoring. The Journal of Machine Learning Research, 15(1):3247–3295, 2014b.
- Perchet [2015] Vianney Perchet. Exponential weight approachability, applications to calibration and regret minimization. Dynamic Games and Applications, 5(1):136–153, 2015.
- Perchet and Mannor [2013] Vianney Perchet and Shie Mannor. Approachability, fast and slow. In Conference on Learning Theory, pages 474–488. PMLR, 2013.
- Raghavan [1994] T.E.S. Raghavan. Zero-sum two-person games. In R.J. Aumann and S. Hart, editors, Handbook of Game Theory with Economic Applications, volume 2 of Handbook of Game Theory with Economic Applications, chapter 20, pages 735–768. Elsevier, 1994. URL https://ideas.repec.org/h/eee/gamchp/2-20.html.
- Rakhlin et al. [2010] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Random averages, combinatorial parameters, and learnability. Advances in Neural Information Processing Systems, 23:1984–1992, 2010.
- Rakhlin et al. [2011] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Beyond regret. In Proceedings of the 24th Annual Conference on Learning Theory, pages 559–594. JMLR Workshop and Conference Proceedings, 2011.
- Rakhlin et al. [2012] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Relax and randomize: from value to algorithms. In Proceedings of the 25th International Conference on Neural Information Processing Systems-Volume 2, pages 2141–2149, 2012.
- Rothblum and Yona [2021] Guy N Rothblum and Gal Yona. Multi-group agnostic pac learnability. arXiv preprint arXiv:2105.09989, 2021.
- Vovk [1990] Volodimir G Vovk. Aggregating strategies. In Proceedings of the Third Annual Workshop on Computational Learning Theory, 1990.
Appendix A The General Framework with Extensions to Probabilistic and Approximate Learners: Full Proofs and Algorithms
A.1 Omitted Proofs from Section 2
Proof of Lemma 2.1.
After taking the log and dividing by $\eta$, this lemma follows from the following chain:
$\exp\left(\eta R^T\right) = \exp\left(\eta \max_{j \in [d]} R^T_j\right) = \max_{j \in [d]} \exp\left(\eta R^T_j\right) \leq \sum_{j \in [d]} \exp\left(\eta R^T_j\right) = L^T.$
∎
Proof of Lemma 2.2.
By definition of the surrogate loss, we have:
$L^t = \sum_{j \in [d]} \exp\left(\eta R^t_j\right) = \sum_{j \in [d]} \exp\left(\eta R^{t-1}_j\right) \exp\left(\eta\left(\ell^t_j(x^t, y^t) - w^t_A\right)\right).$
Using the fact that $e^x \leq 1 + x + x^2$ for $|x| \leq 1$, we have, for $\eta \leq \frac{1}{2C}$ (so that $\left|\eta\left(\ell^t_j(x^t, y^t) - w^t_A\right)\right| \leq 1$),
$L^t \leq \sum_{j \in [d]} \exp\left(\eta R^{t-1}_j\right)\left(1 + \eta\left(\ell^t_j(x^t, y^t) - w^t_A\right) + \eta^2\left(\ell^t_j(x^t, y^t) - w^t_A\right)^2\right) \leq L^{t-1}\left(1 + 4\eta^2 C^2\right) + \eta \sum_{j \in [d]} \exp\left(\eta R^{t-1}_j\right)\left(\ell^t_j(x^t, y^t) - w^t_A\right).$
∎
Proof of Lemma 2.3.
We begin by recalling that $L^0 = \sum_{j \in [d]} \exp(0) = d$. Thus, the desired bound on $L^T$ follows via Lemma 2.2 and a telescoping argument, if only we can show that for every round $t$ the Learner has an action $x^t \in \mathcal{X}^t$ which guarantees that for any $y^t \in \mathcal{Y}^t$,
$\sum_{j \in [d]} \exp\left(\eta R^{t-1}_j\right)\left(\ell^t_j(x^t, y^t) - w^t_A\right) \leq 0.$
To this end, we define a zero-sum game between the Learner and the Adversary, with action space $\mathcal{X}^t$ for the Learner and $\mathcal{Y}^t$ for the Adversary, and with the objective function (which the Adversary wants to maximize and the Learner wants to minimize):
$u^t(x, y) := \sum_{j \in [d]} \exp\left(\eta R^{t-1}_j\right)\left(\ell^t_j(x, y) - w^t_A\right).$
Recall from the definition of our framework that $\mathcal{X}^t, \mathcal{Y}^t$ are convex, compact and finite-dimensional, as well as that each $\ell^t_j$ is continuous, convex in the first argument, and concave in the second argument. Since $u^t$ is defined as an affine function of the individual coordinate functions $\ell^t_j$, it is also convex-concave and continuous. This means that we may invoke Sion’s Minimax Theorem:
Fact 2 (Sion’s Minimax Theorem).
Given finite-dimensional convex compact sets $\mathcal{X}, \mathcal{Y}$, and a continuous function $u : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ which is convex in the first argument and concave in the second argument, it holds that
$\min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} u(x, y) = \max_{y \in \mathcal{Y}} \min_{x \in \mathcal{X}} u(x, y).$
Using Sion’s Theorem to switch the order of play (so that the Adversary is compelled to move first), and then recalling the definition of $w^t_A$ (the value of the maximum coordinate of $\ell^t$ that the Learner can obtain when the Adversary is compelled to move first), we obtain:
$\min_{x \in \mathcal{X}^t} \max_{y \in \mathcal{Y}^t} u^t(x, y) = \max_{y \in \mathcal{Y}^t} \min_{x \in \mathcal{X}^t} u^t(x, y) \leq \sup_{y \in \mathcal{Y}^t} \sum_{j \in [d]} \exp\left(\eta R^{t-1}_j\right)\left(\min_{x \in \mathcal{X}^t} \max_{k \in [d]} \ell^t_k(x, y) - w^t_A\right) \leq \sup_{y \in \mathcal{Y}^t} \sum_{j \in [d]} \exp\left(\eta R^{t-1}_j\right)\left(w^t_A - w^t_A\right) = 0.$
(Note that in the third step, the maximum over $y$ turns into a supremum. This is because after each $\ell^t_j(x, y)$ is replaced with $\min_{x \in \mathcal{X}^t} \max_{k \in [d]} \ell^t_k(x, y)$, the maximum over $\mathcal{Y}^t$ generally becomes unachievable — recall Footnote 1.)
Thus, the Learner can ensure that $u^t(x^t, y^t) \leq 0$ against any response $y^t$ of the Adversary by playing, at every round $t$:
$x^t \in \arg\min_{x \in \mathcal{X}^t} \max_{y \in \mathcal{Y}^t} u^t(x, y).$
This concludes the proof. ∎
An equivalent description of Learner’s space of minimax optimal strategies at each round
We observe that the Learner’s optimal action at each round, derived in the proof, can be expressed without any reference to the quantities $w^t_A$:
$x^t \in \arg\min_{x \in \mathcal{X}^t} \max_{y \in \mathcal{Y}^t} \sum_{j \in [d]} \chi^{t-1}_j\, \ell^t_j(x, y), \qquad \text{where} \quad \chi^{t-1}_j := \frac{\exp\left(\eta R^{t-1}_j\right)}{\sum_{k \in [d]} \exp\left(\eta R^{t-1}_k\right)}.$
The weights placed on the loss coordinates in the final expression form a probability distribution which should remind the reader of the well known Exponential Weights distribution.
A.2 Extensions
Before presenting applications of our framework, we pause to discuss two natural extensions that are called for in some of our applications. Both extensions only require very minimal changes to the notation in Section 2.1 and to the general algorithmic framework in Section 2.2.
We begin by discussing, in Section A.2.1, how to adapt our framework to the setting where the Learner is allowed to randomize at each round amongst a finite set of actions, and wishes to obtain probabilistic guarantees for her AMF regret with respect to her randomness. This will be useful in all three of our applications.
We then proceed to show, in Section A.2.2, that our AMF regret bounds are robust to the case in which at each round, the Learner, who is playing according to the general Algorithm 1 given above, computes and plays according to an approximate (rather than exact) minimax strategy. This is useful for settings where it may be desirable (for computational or other reasons) to implement our algorithmic framework approximately, rather than exactly. In particular, in one of our applications — mean multicalibration, which is discussed in Section 4.1 — we will illustrate this point by deriving a multicalibration algorithm that has the Learner play only extremely (computationally and structurally) simple strategies, at the cost of adding an arbitrarily small term to the multicalibration bounds, compared to the Learner that plays the exact minimax equilibrium.
A.2.1 Performance Bounds for a Probabilistic Learner
So far, we have described the interaction between the Learner and the Adversary as deterministic. In many applications, however, the convex action space for the Learner is the simplex over some finite set of base actions, representing probability distributions over actions. In this case, the Adversary chooses his action in response to the probability distribution over base actions chosen by the Learner, at which point the Learner samples a single base action from her chosen distribution.
We will use the following notation. The Learner’s pure action set at time $t$ is denoted by $\mathcal{A}^t$. Before each round $t$, the Adversary reveals a vector valued loss function $\ell^t : \mathcal{A}^t \times \mathcal{Y}^t \to [-C, C]^d$. At the beginning of round $t$, the Learner chooses a probabilistic mixture over her action set $\mathcal{A}^t$, which we will usually denote as $x^t \in \Delta \mathcal{A}^t$; after the Adversary has made his move $y^t$, the Learner samples her pure action $a^t \sim x^t$ for the round, which is recorded into the transcript of the interaction.
The redefined vector valued losses now take as their first argument a pure action $a \in \mathcal{A}^t$. We extend this to mixtures $x \in \Delta \mathcal{A}^t$ as $\ell^t_j(x, y) := \mathbb{E}_{a \sim x}\left[\ell^t_j(a, y)\right]$ for any $j \in [d]$. In this notation, holding the second argument fixed, the loss function is linear (hence convex and continuous) in the mixture and has a convex, compact domain (the simplex $\Delta \mathcal{A}^t$). Using this extended notation, it is now easy to see how to define the probabilistic analog of the AMF value.
Definition A.1 (Probabilistic AMF Value).
$w^t_A := \sup_{y \in \mathcal{Y}^t}\ \min_{x \in \Delta \mathcal{A}^t}\ \max_{j \in [d]}\ \mathbb{E}_{a \sim x}\left[\ell^t_j(a, y)\right].$
For a more detailed discussion of the probabilistic setting, please refer to Appendix A.3.
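In code, the extension of a pure-action loss to mixtures is just an expectation over the mixture (a minimal sketch; the names are ours):

```python
import numpy as np

def mixture_loss(pure_loss, actions, x, y):
    """E_{a ~ x}[ pure_loss(a, y) ]: the mixture-extended vector loss,
    which is linear (hence convex and continuous) in the mixture x."""
    losses = np.stack([np.asarray(pure_loss(a, y), dtype=float) for a in actions])
    return np.asarray(x, dtype=float) @ losses
```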
Adapting the algorithm to the probabilistic Learner setting
Above, Algorithm 1 was given for the deterministic case of our framework. In the probabilistic setting, when computing the probability distribution for the current round, the Learner should take into account the realized losses from the past rounds. We present the modified algorithm below.
Probabilistic performance guarantees
Algorithm 2 provides two crucial blackbox guarantees to the probabilistic Learner. First, the guarantees on Algorithm 1 from Theorem 2.1 almost immediately translate into a bound on the expected AMF regret of the Learner who uses Algorithm 2, over the randomness in her actions. Second, a high-probability AMF regret bound, also over the Learner’s randomness, can be derived in a straightforward way.
Theorem A.1 (In-Expectation Bound).
Given $T \geq \ln d$, Algorithm 2 with learning rate $\eta = \sqrt{\frac{\ln d}{4C^2 T}}$ guarantees that ex-ante, with respect to the randomness in the Learner’s realized outcomes, the expected AMF regret is bounded as:
$\mathbb{E}\left[R^T\right] \leq 4C\sqrt{T \ln d}.$
Proof Sketch. The argument parallels the proofs of Lemmas 2.1–2.3 and Theorem 2.1, with expectations over the Learner’s randomization handled via Jensen’s inequality; see Appendix A.3. ∎
Theorem A.2 (High-Probability Bound).
Fix any $\delta \in (0, 1)$. Given $T \geq \ln d$, Algorithm 2 with learning rate $\eta = \sqrt{\frac{\ln d}{4C^2 T}}$ guarantees that the AMF regret will satisfy, with ex-ante probability at least $1 - \delta$ over the randomness in the Learner’s realized outcomes,
$R^T \leq 4C\sqrt{T \ln d} + 8C\sqrt{T \ln\left(\tfrac{1}{\delta}\right)}.$
Proof Sketch.
The proof proceeds by constructing a martingale with bounded increments that tracks the increase in the surrogate loss , and then using Azuma’s inequality to conclude that the final surrogate loss (and hence the AMF regret) is bounded above with high probability. For a detailed proof, see Appendix A.3. ∎
A.2.2 Performance Bounds for a Suboptimal Learner
Our general Algorithms 1 and 2 involve the Learner solving a convex program at each round in order to identify her minimax optimal strategy. However, in some applications of our framework it may be necessary or desirable for the Learner to restrict herself to playing approximately minimax optimal strategies instead of exactly optimal ones. This can happen for a variety of reasons:
-
1.
Computational efficiency. While the convex program that the Learner must solve at each round is polynomial-sized in the description of the environment, one may wish for a better running time dependence — e.g. in settings in which the action space for the Learner is exponential in some other relevant parameter of the problem. In such cases, we will want to trade off run-time for approximation error in the minimax equilibrium computation at each round.
-
2.
Structural simplicity of strategies. One may wish to restrict the Learner to only playing “simple” strategies (for example, distributions over actions with small support), or more generally, strategies belonging to a certain predefined strict subset of the Learner’s strategy space. This subset may only contain approximately optimal minimax strategies.
-
3.
Numerical precision. As the convex programs solved by the Learner at each round generally have irrational coefficients (due to the exponents), using finite-precision arithmetic to solve these programs will lead to a corresponding precision error in the solution, making the computed strategy only approximately minimax optimal for the Learner. This kind of approximation error can generally be driven to be arbitrarily small, but still necessitates being able to reason about approximate solutions.
Given a suboptimal instantiation of Algorithm 1 or 2, we thus want to know: how much worse will its achieved regret bound be, compared to the existential guarantee? We will now address this question for both the deterministic setting of Sections 2.1 and 2.2, and the probabilistic setting of Section A.2.1.
Recall that at each round $t$, both Algorithm 1 and Algorithm 2 (with the weights $\chi^{t-1}$ defined accordingly) have the Learner solve for the minimizer of the function $\psi^t : \mathcal{X}^t \to [-C, C]$ defined as:
$\psi^t(x) := \max_{y \in \mathcal{Y}^t}\ \sum_{j \in [d]} \chi^{t-1}_j\, \ell^t_j(x, y).$
The range of $\psi^t$ is as indicated, since it is a linear combination of loss coordinates $\ell^t_j(x, y) \in [-C, C]$, where the weights $\chi^{t-1}_j$ form a probability distribution over $[d]$.
Now suppose the Learner ends up playing actions which do not necessarily minimize the respective objectives . The following definition helps capture the degree of suboptimality in the Learner’s play at each round.
Definition A.2 (Achieved AMF Value Bound).
Consider any round $t$, and suppose the Learner plays action $x^t$ at round $t$. Then, any number $\tilde{w}^t \leq C$ satisfying
$\psi^t(x^t) = \max_{y \in \mathcal{Y}^t} \sum_{j \in [d]} \chi^{t-1}_j\, \ell^t_j(x^t, y) \leq \tilde{w}^t$
is called an achieved AMF value bound for round $t$.
This definition has two aspects. Most importantly, $\tilde{w}^t$ upper bounds the Learner’s achieved objective function value at round $t$. Furthermore, we restrict $\tilde{w}^t$ to be at most $C$ — otherwise it would be a meaningless bound, as the Learner gets objective value at most $C$ no matter what she plays.
We now formulate the desired bounds on the performance of a suboptimal Learner. The upshot is that for a suboptimal Learner, the bounds of Theorems 2.1, A.1, A.2 hold with each $w^t_A$ replaced with the corresponding achieved AMF value bound $\tilde{w}^t$.
Theorem A.3 (Bounds for a Suboptimal Learner).
Consider a Learner who does not necessarily play optimally at all rounds, and a sequence $\tilde{w}^1, \ldots, \tilde{w}^T$ of achieved AMF value bounds.
In the deterministic setting, the Learner achieves the following regret bound analogous to Theorem 2.1:
$\max_{j \in [d]}\left(\sum_{t=1}^{T} \ell^t_j(x^t, y^t) - \sum_{t=1}^{T} \tilde{w}^t\right) \leq 4C\sqrt{T \ln d}.$
Proof Sketch.
We use the deterministic case for illustration. The main idea is to redefine the Learner’s regret to be relative to her achieved AMF value bounds rather than the AMF values $w^t_A$. Namely, we let $\tilde{R}^t := \max_{j \in [d]} \tilde{R}^t_j$, where $\tilde{R}^t_j := \sum_{s=1}^{t} \ell^s_j(x^s, y^s) - \sum_{s=1}^{t} \tilde{w}^s$. The surrogate loss is defined in the same way as before, namely $\tilde{L}^t := \sum_{j \in [d]} \exp\left(\eta \tilde{R}^t_j\right)$.
First, Lemma 2.1 still holds: $\tilde{R}^T \leq \frac{\ln \tilde{L}^T}{\eta}$, with the same proof. Lemma 2.2 also holds after replacing each $w^t_A$ with $\tilde{w}^t$: namely, $\tilde{L}^t \leq \tilde{L}^{t-1}\left(1 + 4\eta^2 C^2\right) + \eta \sum_{j \in [d]} \exp\left(\eta \tilde{R}^{t-1}_j\right)\left(\ell^t_j(x^t, y^t) - \tilde{w}^t\right)$. The proof is almost the same: we formerly used $\left|\ell^t_j(x^t, y^t) - w^t_A\right| \leq 2C$, and now use that $\left|\ell^t_j(x^t, y^t) - \tilde{w}^t\right| \leq 2C$ by Definition A.2.
Now, following the proofs of Lemma 2.3 and Theorem 2.1, to obtain the declared regret bound it suffices to show, for each round $t$, that the Learner’s action $x^t$ guarantees $\sum_{j \in [d]} \exp\left(\eta \tilde{R}^{t-1}_j\right)\left(\ell^t_j(x^t, y^t) - \tilde{w}^t\right) \leq 0$, no matter what $y^t$ is played by the Adversary. For any $y^t \in \mathcal{Y}^t$, we can rewrite this objective as:
$\left(\sum_{k \in [d]} \exp\left(\eta \tilde{R}^{t-1}_k\right)\right)\left(\sum_{j \in [d]} \chi^{t-1}_j\, \ell^t_j(x^t, y^t) - \tilde{w}^t\right).$
It now follows that action $x^t$ achieves the desired inequality, from observing that:
$\sum_{j \in [d]} \chi^{t-1}_j\, \ell^t_j(x^t, y^t) \leq \max_{y \in \mathcal{Y}^t} \sum_{j \in [d]} \chi^{t-1}_j\, \ell^t_j(x^t, y) \leq \tilde{w}^t,$
where the final inequality holds since the Learner achieves AMF value bound $\tilde{w}^t$ at round $t$. ∎
A.3 Omitted Proofs and Details from Section A.2.1: Bounds for the Probabilistic Learner
First, we define our probabilistic setting, emphasizing the differences to the deterministic protocol. At each round $t$, the interaction between the Learner and the Adversary proceeds as follows:
1. At the beginning of each round $t$, the Adversary selects an environment consisting of the following, and reveals it to the Learner:
(a) The Learner’s simplex action set $\Delta \mathcal{A}^t$, where $\mathcal{A}^t$ is a finite set of pure actions;
(b) The Adversary’s convex compact action set $\mathcal{Y}^t$, embedded in a finite-dimensional Euclidean space;
(c) A vector valued loss function $\ell^t : \mathcal{A}^t \times \mathcal{Y}^t \to [-C, C]^d$. Every dimension $\ell^t_j$ (where $j \in [d]$) of the loss function is continuous and concave in the second argument.
2. The Learner selects some $x^t \in \Delta \mathcal{A}^t$;
3. The Adversary observes the Learner’s selection $x^t$, and chooses some action $y^t \in \mathcal{Y}^t$ in response;
4. The Learner’s action is interpreted as a mixture over the pure actions in $\mathcal{A}^t$, and an outcome is sampled from it; that is, $a^t \sim x^t$.
5. The Learner suffers (and observes) $\ell^t(a^t, y^t)$, the loss vector with respect to the outcome $a^t$.
Thus, the probabilistic setting is simply a specialization of our framework to the case of the Learner’s action set being a simplex at each round.
Unlike in the above deterministic setting, where the transcript through any round $t$ was defined as the sequence of per-round environments together with the played pairs $(x^s, y^s)_{s \leq t}$, in the present case we define the transcript through round $t$ as the sequence of environments together with the pairs $(a^s, y^s)_{s \leq t}$;
that is, the transcript now records the Learner’s realized outcomes $a^s$ rather than her chosen mixtures $x^s$ at all rounds. Furthermore, we will denote by $\Pi^t$ the set of transcripts through round $t$, for $t \in [T]$.
Now, let us fix any Adversary (that is, all of the Adversary’s decisions through round $T$). With respect to this fixed Adversary, any algorithm for the Learner (defined as the collection of the Learner’s decision mappings for all rounds) induces an ex-ante distribution over the set of final transcripts $\Pi^T$.
Now, we give two types of probabilistic guarantees on the performance of Algorithm 2, namely, an in-expectation bound and a high-probability bound. Both bounds hold for any choice of Adversary, and are ex-ante with respect to the algorithm-induced distribution over the final transcripts.
Theorem A.1 (restated from Section A.2.1).
As mentioned in Section A.2.1, the proof of Theorem A.1 is much the same as the proofs of Theorem 2.1 and the helper Lemmas 2.1, 2.2, 2.3, with the exception of using Jensen’s inequality to switch the order of taking expectations when necessary. We omit further details.
Theorem A.2 (restated from Section A.2.1).
Proof.
Throughout this proof, we put tildes over random variables to distinguish them from their realized values. For instance, $\tilde{\pi}^t$ is the random transcript through round $t$, while $\pi^t$ is a realization of $\tilde{\pi}^t$. Also, we explicitly specify the dependence of the surrogate loss on the (random or realized) transcript.
Consider the following random process $(Z_t)_{t=0}^{T}$, defined recursively for $t = 0, 1, \ldots, T$ and adapted to the sequence of random variables $\tilde{\pi}^0, \tilde{\pi}^1, \ldots, \tilde{\pi}^T$. We let $Z_0 = 0$ deterministically, and for $t \in [T]$ we let
$Z_t := Z_{t-1} + \ln\frac{L^t\left(\tilde{\pi}^t\right)}{L^{t-1}\left(\tilde{\pi}^{t-1}\right)} - \mathbb{E}\left[\ln\frac{L^t\left(\tilde{\pi}^t\right)}{L^{t-1}\left(\tilde{\pi}^{t-1}\right)}\,\middle|\,\tilde{\pi}^{t-1}\right].$
It is easy to see that for all $t \in [T]$, we have $\mathbb{E}\left[Z_t\,\middle|\,\tilde{\pi}^{t-1}\right] = Z_{t-1}$, and thus $(Z_t)$ is a martingale.
We next show that this martingale has bounded increments. In brief, this follows from $Z_t$ being defined in terms of the logarithm of the surrogate loss.
Lemma A.1.
The martingale $(Z_t)_{t=0}^{T}$ has bounded increments: $|Z_t - Z_{t-1}| \leq 4\eta C$ for all $t \in [T]$.
Proof.
It suffices to establish the bounded increments property for an arbitrary realization of the process. Towards this, fix the full transcript $\pi^T$ of the interaction, and consider any round $t \in [T]$.
Recall from the definition of the surrogate loss that
$\frac{L^t\left(\pi^t\right)}{L^{t-1}\left(\pi^{t-1}\right)} = \frac{\sum_{j \in [d]} \exp\left(\eta R^{t-1}_j\right)\exp\left(\eta\left(\ell^t_j(a^t, y^t) - w^t_A\right)\right)}{\sum_{j \in [d]} \exp\left(\eta R^{t-1}_j\right)}.$
Thus, noting that $\left|\ell^t_j(a^t, y^t) - w^t_A\right| \leq 2C$ for all $j \in [d]$, we have
$e^{-2\eta C} \leq \frac{L^t\left(\pi^t\right)}{L^{t-1}\left(\pi^{t-1}\right)} \leq e^{2\eta C}.$
Taking the logarithm yields
$\left|\ln L^t\left(\pi^t\right) - \ln L^{t-1}\left(\pi^{t-1}\right)\right| \leq 2\eta C.$
In fact, this argument shows that $\left|\ln L^t\left(\pi'\right) - \ln L^{t-1}\left(\pi^{t-1}\right)\right| \leq 2\eta C$ for any transcript $\pi' \in \Pi^t$ that equals $\pi^{t-1}$ on the first $t-1$ rounds. Hence, taking the expectation over $\tilde{\pi}^t$ conditioned on $\tilde{\pi}^{t-1} = \pi^{t-1}$, we obtain:
$\left|\mathbb{E}\left[\ln L^t\left(\tilde{\pi}^t\right)\,\middle|\,\tilde{\pi}^{t-1} = \pi^{t-1}\right] - \ln L^{t-1}\left(\pi^{t-1}\right)\right| \leq 2\eta C.$
To conclude the proof, it now suffices to observe that:
$\left|Z_t - Z_{t-1}\right| \leq \left|\ln L^t\left(\pi^t\right) - \ln L^{t-1}\left(\pi^{t-1}\right)\right| + \left|\mathbb{E}\left[\ln L^t\left(\tilde{\pi}^t\right)\,\middle|\,\tilde{\pi}^{t-1} = \pi^{t-1}\right] - \ln L^{t-1}\left(\pi^{t-1}\right)\right| \leq 4\eta C.$
∎
Having established that is a martingale with bounded increments, we can now apply the following concentration bound (see e.g. Dubhashi and Panconesi [2009]).
Fact 3 (Azuma’s Inequality).
Fix . For any martingale with for ,
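For reference, the standard one-sided form of this inequality, with generic names chosen here since the display above is omitted, reads: for a martingale $X_0, X_1, \dots, X_T$ with $|X_t - X_{t-1}| \le c$ for all $t \in [T]$, and any $\lambda > 0$,

\[
\Pr\big[X_T - X_0 \ge \lambda\big] \;\le\; \exp\!\left(-\frac{\lambda^2}{2\,T c^2}\right).
\]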
We instantiate this bound for our martingale with , , and , and obtain that for any ,
(1)
At this point, let us express as follows:
Now, with an eye toward bounding the latter sum, observe that for ,
Here, the first step is via Jensen’s inequality and the last step is via for . The second step holds since we can show (via reasoning similar to Lemma 2.3) that for any , at each round Algorithm 2 with learning rate achieves:
Combining the above observations with Bound 1 and recalling yields, with probability ,
Using the last inequality, with , and the fact that (which is easy to deduce via Lemma 2.1), we thus obtain the desired high-probability AMF regret bound. Specifically, with probability we have:
In the last line, we used that for . ∎
Appendix B Multicalibration: The Algorithm and Full Proofs
A simple and efficient algorithm for the Learner
As mentioned in the proof sketch of Theorem 4.1, in the setting of multicalibration, our framework’s general Algorithm 2 has a particularly simple approximate version (originally derived in Gupta et al. [2022]) that lets the Learner (almost) match the above bounds on the multicalibration constant . This approximate algorithm is very efficient and has “low” randomization: namely, at each round the Learner plays an explicitly given distribution which randomizes over at most two points in .
Proof.
Let us instantiate the generic probabilistic Algorithm 2 with our current set of loss functions. In parallel with the notation of Algorithm 2, for any bucket , group and , we define
where
In this notation, at each round , the Learner has to solve the following zero-sum game:
where we define
For any , let denote the unique bucket index such that . Substituting
we see that most terms in the summation disappear, and what remains is precisely
where is as defined in the pseudocode for Algorithm 3.
Crucially, for any distribution chosen by the Learner, her attained utility after the Adversary best-responds has a simple closed form. Namely, given any played by the Learner, we have
With this in mind, the Learner can easily achieve value in the following two cases. When for all , playing deterministically gives: . When for all , she can play deterministically, ensuring that .
In the final case, when there are nonpositive and nonnegative quantities among , note that there exists an intermediate index such that . Then, it is easy to check that , as defined in Algorithm 3, satisfies
Using this relation, we obtain that when the Learner plays with probability and with probability , she accomplishes value
and thus, recalling that , we obtain
where the last line is due to the quantities forming a probability distribution. ∎
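In code, the case analysis in this proof amounts to the following minimal sketch (in Python). It is not a verbatim transcription of Algorithm 3: the per-bucket quantities `c[i]`, and the convention for which extreme bucket is played when all quantities share a sign, are assumptions made for illustration.

```python
import numpy as np

def two_point_play(c, rng=None):
    """Return a bucket index sampled from a distribution supported on at most
    two adjacent buckets, chosen so that the round's game value is nonpositive.

    c : per-bucket quantities as in the proof above (ordering assumed).
    """
    rng = rng or np.random.default_rng()
    c = np.asarray(c, dtype=float)
    m = len(c)

    if np.all(c >= 0):
        return 0          # all quantities nonnegative: play one extreme bucket
    if np.all(c <= 0):
        return m - 1      # all quantities nonpositive: play the other extreme

    # Otherwise there is an intermediate index where the sign changes.
    i = next(j for j in range(m - 1) if c[j] * c[j + 1] <= 0)
    denom = abs(c[i]) + abs(c[i + 1])
    if denom == 0:
        return i          # both quantities are zero: either bucket achieves value 0
    # Mix so that q * c[i] + (1 - q) * c[i + 1] = 0.
    q = abs(c[i + 1]) / denom
    return i if rng.random() < q else i + 1
```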
Appendix C Multicalibeating: Full Statements and Proofs
C.1 Calibeating a Single Forecaster: Proof of Theorem 4.2
Proof of Theorem 4.2.
For the exposition of this full proof, we will employ some probabilistic notation that did not appear in the main body (Section 4.2). We briefly define it here.
For any subsequence of rounds, denotes a uniformly random round in . We denote the empirical distributions of the values of , , on by (or simply when ). In this notation, we have, e.g., .
Our quantity of interest, the Brier score of the Learner’s predictions , is inconvenient to handle: indeed, the calibration-refinement decomposition of is of little utility since the Learner’s predictions can take arbitrary real values (in particular, they might all be distinct, in which case the refinement score would be 0, and all of the Brier score would be contained in the calibration error). Instead, we define a convenient surrogate notion of bucketed Brier/calibration/refinement score.
The following lemma shows that as long as is large enough, the surrogate Brier score is a good estimate of the true Brier score of our predictions (i.e. our squared error).
Lemma C.1.
.
Proof.
We first compute that the original Brier score equals
The inner sum is the expectation, over the transcript, of conditioned on , so we can write:
We can decompose the expected value as:
By linearity of expectation, the expectation-squared term satisfies:
Meanwhile, the variance term can be upper bounded using the following fact:
Fact 4.
For any random variables :
where the inequality follows from an application of Cauchy–Schwarz.
Instantiating and , and upper bounding , we get:
Putting the above back together gives the desired bound on the difference of and :
∎
Having shown that the surrogate Brier score closely approximates the Learner’s original score , we can now focus on bounding the calibration and refinement scores associated with .
Calibration: Our multicalibration condition on implies that for . The calibration score bound then follows directly.
Refinement: We claim that the Learner’s surrogate refinement score relates to the refinement score of the forecaster as follows:
The proof proceeds in two steps, connecting and via a quantity we call .
Definition C.1 (Joint Refinement Score).
Recall that the refinement score, although we defined it for a forecaster, is really a property of a partition of the days. It is equally well defined if, instead of partitioning by the days on which a forecaster makes a certain forecast, we partition on, say, even and odd days, or sunny vs. cloudy vs. rainy vs. snowy days. Or, in the case of Definition C.1, the partition .
First, note that the joint refinement score of and is no worse than the refinement score of .
Observation 1.
Intuitively, this should make sense, since is a refinement of ’s level sets by ’s level sets. If is “useful”, then this inequality would be strict, as combining with would explain away more of the variance. Refining by cannot decrease the amount of variance captured by the partition.
Reversing our perspective, we can think of as a refinement of ’s level sets by ’s level sets. The key idea is to use multicalibration to show that refining by is not “useful”: multicalibration ensures that almost all of ’s explanatory power is already captured by .
Observation 2.
Observation 3.
The extra error term is small:
Combining these three observations will give us our desired refinement score bound:
We therefore now prove these observations one by one.
Proof of Observation 1.
We recall the following fact from probability:
Fact 5 (Law of Total Variance).
For any random variables in a probability space,
In particular, since variance is always non-negative:
For each fixed , we instantiate this fact with (equipped with the discrete -algebra and the uniform distribution) and , the unique s.t. . This gives us:
Since this is true for all , the inequality continues to hold in expectation over the ’s:
∎
Proof of Observation 2.
Recall the definition of bucketed refinement:
To relate this back to , we instantiate Fact 5 again, but flipping the roles of and : we take the underlying spaces to be the sequences defined by calibrated buckets, and let , the variable we condition on, be the level sets of .
For any fixed representing a level set of , Fact 5 tells us:
Like before, we take the expectation over all , giving us the desired result:
∎
Proof of Observation 3.
We have to bound the extra error term:
In words, this is the expected variance of the true averages on , conditioned on the buckets . Intuitively, if these true averages vary a lot, then the calibration error on the s must be large since the prediction on each of the s is (close to) ; in particular, they are almost constant across . Conversely, if multicalibration error is low, then the variance must be low as well. Formally,
The first line is just expanding out the definition. In the third line, we upper bound the square by the absolute value, since all values are at most 1. In the fourth line, we break the error term apart into the difference between our average prediction on and the true average (upper bounded by , by the calibration guarantees w.r.t. ), the difference between our prediction on and our average prediction on (which is upper bounded by , the size of our bucketing), and the difference between our average prediction on and the true average (upper bounded by ). ∎
C.2 Applying Theorem 4.2: Explicit Rates and Multiple Forecasters
First, we show how to instantiate Theorem 4.2 with our efficiently achievable multicalibration guarantees on of Theorem 4.1.
Corollary C.1.
When run with parameters on the collection , the multicalibration algorithm (Algorithm 3) -calibeats , where
and for any , with probability ,
The calibration error overall of the algorithm is bounded, for any , as:
Proof.
Using our online multicalibration guarantees, we get (by Theorem B.1):
and, for any , with probability :
Plugging this into the result from Theorem 4.2:
we obtain the desired in-expectation bound on :
We can do so similarly for the high probability bound, so that with probability :
Finally, the overall calibration error follows directly by plugging in for . ∎
The main utility of our approach to calibeating is that it easily extends to multicalibeating. As a warm-up, we start by deriving calibeating with respect to an ensemble of forecasters. The main result then combines this with calibeating on groups to attain the multicalibeating from Definition 4.5.
Calibeating an ensemble of forecasters
Since our result above is based on bounds on multicalibration, we can easily extend it to calibeating an ensemble of forecasters by asking for multicalibration with respect to the level sets of all forecasters. More formally, define the groups as:
Theorem 4.2 applies separately to each . The only degradation comes in the , since we’re asking for multicalibration with respect to more groups. But this effect is small, since the dependence on the number of groups is only .
Corollary C.2 (Ensemble Calibeating).
On groups , the multicalibration algorithm with parameters , after rounds attains -multicalibeating with
C.3 Multicalibeating + Multicalibration Theorem 4.3: Full Statement and Proof
Recall that for every group , we let denote the subsequence of days on which occurs, where the transcript is left implicit.
Theorem C.1 (Multicalibeating + Multicalibration: Full version with high-probability bounds).
Let , and some set of forecasters . The multicalibration algorithm on with parameters , after rounds, attains expected -multicalibeating, where:
while maintaining -multicalibration on the original collection , with:
We also have the corresponding high probability bounds. For any , with probability :
and on the original collection , the multicalibration constant satisfies, with probability ,
Proof.
We begin with a preliminary observation that translates our overall multicalibration assumptions into guarantees over the individual sequences , for .
Observation 4.
Let be -multicalibrated on groups over the entire time sequence . Then, for any , on the subsequence of days the predictor is -multicalibrated with respect to groups .
Proof.
Let be some particular group. Also, fix any and . Using multicalibration guarantees (Definition 4.1), we have that for every :
The first equality is by definition of ; in particular, . This concludes the proof of our observation. ∎
With this observation in hand, the proof is again a direct application of Theorem 4.2.
We can instantiate Theorem B.1 with groups to conclude that the multicalibration algorithm achieves -multicalibration, with (choosing any ):
where .
Now, fix any and . By our observation above, we are multicalibrated w.r.t. on the sequence of days on which occurs. Therefore, we can instantiate Theorem 4.2:
Inserting the above bounds on yields our in-expectation and high-probability bounds on .
Additionally, the theorem posits that the predictor is also -multicalibrated on the base collection of subgroups . Indeed, we have included the family into the collection , hence the predictor will be -multicalibrated on . ∎
Appendix D Blackwell Approachability: The Algorithm and Full Proofs
Proof of Theorem 5.1.
We instantiate our probabilistic framework of Section A.2.1. The Learner’s and Adversary’s action sets are inherited from the underlying Polytope Blackwell game.
Defining the loss functions.
For all , we consider the following losses:
where here and below the notational convention is that for , . The coordinates of the resulting vector loss correspond to the collection of the halfspaces that define the polytope. By Hölder’s inequality, each vector loss function is suitably bounded — this follows because we required that for some with , the family is -normalized, and the range of is contained in . In addition, each is continuous and convex-concave by virtue of being a linear function of the continuous and affine-concave function .
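Since the defining display is omitted above, one natural way to write these losses, consistent with the surrounding description (the halfspace parameters $a_i, b_i$ and the round-$t$ payoff $u^t$ are names assumed here), is

\[
\ell^t_i(x, y) \;=\; \langle a_i,\, u^t(x, y)\rangle \;-\; b_i, \qquad i \in [d],
\]

for the target polytope $\bigcap_{i \in [d]}\{z : \langle a_i, z\rangle \le b_i\}$; driving the maximum coordinate of the accumulated loss to zero then corresponds exactly to approaching the polytope.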
Bounding the Adversary-Moves-First value.
We observe that for , the AMF value . Indeed, if the Adversary moves first and selects any , then by the assumption of response satisfiability, the Learner has some guaranteeing that . The latter is equivalent to for all , letting us conclude that for any round ,
Applying AMF regret bounds.
Given this instantiation of our framework, Theorem A.1 implies that for any response satisfiable Polytope Blackwell game, the Learner can use Algorithm 2 (instantiated with the above loss functions) to ensure that after any round ,
where the expectation is with respect to the Learner’s randomness. Given this guarantee, we obtain, using the definition of , that
Using , we have that for every ,
This concludes the proof of our in-expectation guarantee for Polytope Blackwell games.
The high-probability statement follows directly from Theorem A.2, using . ∎
An LP based algorithm when the Adversary has a finite pure strategy space.
Algorithm 2, which achieves the guarantees of Theorem 5.1, generally involves solving a convex program at each round. It is worth pointing out that only a linear program will need to be solved at each round in the commonly studied special case of Blackwell approachability where both the Learner and the Adversary randomize between actions in their respective finite action sets and .
Formally, in the setting above, suppose additionally that the Adversary’s action space is , where is a finite set of pure actions for the Adversary. At each round , both the Learner and the Adversary randomize over their respective action sets. First, the Learner selects a mixture , and then the Adversary selects a mixture in response. Next, pure actions and are sampled from the chosen mixtures, and the vector valued utility in that round is set to .
In this fully probabilistic setting, at each round Algorithm 2 has the Learner solve a normal-form zero-sum game with pure action sets , where the utility to the Adversary (the max player) is
(2)
A standard LP-based approach to solving this zero-sum game (see e.g. Raghavan [1994]) is for the Learner to select among distributions with the goal of minimizing the maximum payoff to the Adversary over all pure responses . Writing this down as a linear program, we obtain the following algorithm:
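For concreteness, here is a minimal sketch (in Python) of the per-round linear program. The payoff matrix `U`, assembled from Equation (2), is assumed to be given; the formulation is the textbook LP for a zero-sum game rather than a transcription of the paper's pseudocode.

```python
import numpy as np
from scipy.optimize import linprog

def solve_learner_lp(U):
    """Minimax strategy for the Learner (the min player) in a zero-sum game.

    U is an (n x m) matrix: U[a, b] is the Adversary's payoff when the Learner
    plays pure action a and the Adversary plays pure action b. We minimize v
    subject to sum_a x[a] * U[a, b] <= v for every pure response b, with x a
    probability distribution over the Learner's pure actions.
    """
    n, m = U.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                                   # objective: minimize v
    A_ub = np.hstack([U.T, -np.ones((m, 1))])     # U^T x - v <= 0, one row per b
    b_ub = np.zeros(m)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
    b_eq = np.array([1.0])                        # x sums to 1
    bounds = [(0, None)] * n + [(None, None)]     # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]                   # (Learner's mixture, game value)
```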
Appendix E No-X-Regret: Definitions, Examples, Algorithms, and Proofs
As a warm-up, we begin by carefully demonstrating how to use our framework to derive bounds and algorithms for the fundamental external regret setting. We then derive the same types of existential guarantees in the much more general subsequence regret setting, and specialize these subsequence regret bounds into tight bounds for various existing notions of regret (such as internal, adaptive, sleeping experts, and multigroup regret). We conclude by deriving a general no-subsequence-regret algorithm, which in turn specializes to an efficient algorithm in all of our applications.
E.1 Simple Learning From Expert Advice: External Regret
In the classical experts learning setting Littlestone and Warmuth [1994], the Learner has a set of pure actions (“experts”) . At the outset of each round , the Learner chooses a distribution over experts . The Adversary then comes up with a vector of losses corresponding to each expert. Next, the Learner samples , and experiences loss corresponding to the expert she chose: . The Learner also gets to observe the entire vector of losses for that round. The goal of the Learner is to achieve sublinear external regret — that is, to ensure that the difference between her cumulative loss and the loss of the best fixed expert in hindsight grows sublinearly with :
Theorem E.1.
Fix a finite pure action set for the Learner and a time horizon . Then, Algorithm 2 can be instantiated to guarantee that the Learner’s expected external regret is bounded as
and furthermore that for any , with ex-ante probability over the Learner’s randomness,
Proof.
We instantiate our probabilistic framework (see Section A.2.1).
Defining the strategy spaces.
We define the Learner’s pure action set at each round to be the set , and the Adversary’s strategy space to be the convex and compact set , from which the Adversary chooses each round’s collection of all actions’ losses.
Defining the loss functions.
For , we define a -dimensional vector valued loss function , where for every action , the corresponding coordinate is given by
It is easy to see that is continuous and concave — in fact, linear — in the second argument for all and . Furthermore, its range is , for . This verifies the technical conditions imposed by our framework on the loss functions.
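Since the defining display above is omitted, one natural instantiation consistent with the external regret objective (the symbols are assumptions made here) is, for each coordinate $a \in \mathcal{A}$ and pure action $a' \in \mathcal{A}$,

\[
u^t_a(a', \ell^t) \;=\; \ell^t_{a'} - \ell^t_a,
\]

extended linearly to mixtures $x^t$ as $\mathbb{E}_{a' \sim x^t}[\ell^t_{a'}] - \ell^t_a$. Summing this coordinate over rounds and maximizing over $a$ recovers exactly the external regret.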
Applying AMF regret bounds.
We may now invoke Theorem A.1, which implies the following in-expectation AMF regret bound after round for the instantiation of Algorithm 2 with the just defined vector losses :
where recall that is the Adversary-Moves-First (AMF) value at round . Connecting the instantiated AMF regret to the Learner’s external regret, we get:
Bounding the Adversary-Moves-First value.
To obtain the claimed in-expectation external regret bound, it suffices to show that the AMF value at each round satisfies . Intuitively, this holds because if at some round the Learner knew the Adversary’s choice of losses in advance, then she could guarantee herself no added loss in that round by picking the action with the smallest loss .
Formally, for any vector of actions’ losses , define and notice that
The third step follows by definition of . Hence, the AMF value is indeed nonpositive at each round:
This completes the proof of the in-expectation external regret bound. The high-probability external regret bound follows in the same way from Theorem A.2 of Section A.2.1. ∎
A bound of is optimal for external regret in the experts learning setting, and so serves to witness the optimality of Theorem 2.1.
In fact, it is easy to demonstrate that in the external regret setting, the generic probabilistic Algorithm 2 amounts to the well-known Exponential Weights algorithm (Algorithm 5 below) of Littlestone and Warmuth [1994]. To see this, note that Algorithm 2, when instantiated with the above-defined loss functions, has the Learner solve the following problem at each round:
where we denote the exponential weights distribution by
For any choice of by the Adversary, the quantity inside the expectation, , is antisymmetric in and : that is, . Due to this antisymmetry, no matter which gets selected by the Adversary, by playing the Learner obtains
thus achieving the value of the game. It is also easy to see that is the unique choice of that guarantees nonnegative value, hence Algorithm 2, when specialized to the external regret setting, is equivalent to the Exponential Weights Algorithm 5.
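For completeness, here is a minimal sketch (in Python) of the resulting Exponential Weights update; the learning rate $\eta$ and losses scaled to $[0,1]$ are the usual assumptions.

```python
import numpy as np

class ExponentialWeights:
    """Standard Exponential Weights over n experts with per-round losses in [0, 1]."""

    def __init__(self, n, eta, seed=None):
        self.eta = eta
        self.cum_losses = np.zeros(n)            # cumulative per-expert losses
        self.rng = np.random.default_rng(seed)

    def distribution(self):
        # Weights proportional to exp(-eta * cumulative loss); subtracting the
        # minimum before exponentiating only improves numerical stability.
        w = np.exp(-self.eta * (self.cum_losses - self.cum_losses.min()))
        return w / w.sum()

    def play(self):
        p = self.distribution()
        return int(self.rng.choice(len(p), p=p)), p

    def update(self, losses):
        self.cum_losses += np.asarray(losses, dtype=float)
```

With $\eta \asymp \sqrt{\ln(n)/T}$, this sampling rule attains expected external regret $O(\sqrt{T \ln n})$, matching the order of the bound in Theorem E.1.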
E.2 Generalization to Subsequence Regret
Here, we present a generalization of the experts learning framework from which we will be able to derive our other applications to no-regret learning problems. There is again a Learner and an Adversary playing over the course of rounds . Initially, the Learner is endowed with a finite set of pure actions . At each round , the Adversary restricts the Learner’s set of available actions for that round to some subset . The Learner plays a mixture over the available actions. The Adversary responds by selecting a vector of losses associated with the Learner’s pure actions. Next, the Learner samples a pure action .
Unlike in the standard setting, the Learner’s regret will now be measured not just on the entire sequence of rounds , but more generally on an arbitrary collection of weighted subsequences . The understanding is that for any , the quantity is the “weight” with which round will be included in the subsequence if the Learner’s sampled action is at that round. The Learner does not need to know the subsequences ahead of time; instead the Adversary may announce the values to the Learner before the corresponding round .
Definition E.1 (Subsequence Regret).
Given a family of functions , where each is a mapping , chosen adaptively by the Adversary, and a set of finitely many pure actions for the Learner, consider a collection of action-subsequence pairs .
The Learner’s subsequence regret after round with respect to the collection is defined by
where is the transcript of the interaction.
For intuition, suppose , where satisfies for all . That is, the only relevant subsequence is the entire sequence of rounds . If we then set , subsequence regret specializes to the classical notion of (external) regret which was discussed above.
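Written out under the natural reading of Definition E.1 (the displays above are omitted, so the regret expression here is an assumption consistent with the prose), this specialization is

\[
\max_{a \in \mathcal{A}} \sum_{t=1}^{T} h(t, a_t)\,\big(\ell^t_{a_t} - \ell^t_a\big)
\;=\; \max_{a \in \mathcal{A}} \sum_{t=1}^{T} \big(\ell^t_{a_t} - \ell^t_a\big),
\]

since $h \equiv 1$, which is precisely the external regret.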
Moreover, we shall require the following condition on and the action sets , which simply asks that at each round, the Learner be responsible for regret only to currently available actions.
Definition E.2 (No regret to unavailable actions).
A collection of action-subsequence pairs , paired with action sets , satisfies the no-regret-to-unavailable-actions property if at each round , for every such that for some , it holds that for all .
It is worth noting that this condition is trivially satisfied whenever the Learner’s action set is invariant across rounds ( for all ).
Theorem E.2.
Consider a sequence of action sets for the Learner, a collection of action-subsequence pairs, and a time horizon . If and satisfy no-regret-to-unavailable-actions, then an appropriate instantiation of Algorithm 2 guarantees that the Learner’s expected subsequence regret is bounded as
and furthermore, for any , that with ex-ante probability over the Learner’s randomness,
Proof.
We instantiate our probabilistic framework of Section A.2.1.
Defining the strategy spaces.
At each round , the Learner’s pure strategy set will be , and the Adversary’s strategy space will be the convex and compact set .
Defining the loss functions.
For all action-subsequence pairs , we define the corresponding loss as
It is easy to see that for all and each , the function is continuous and concave — in fact, linear — in the second argument, as well as bounded within for . Therefore, the technical conditions imposed by our framework on the loss functions are met.
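Since the defining display above is omitted, one natural instantiation consistent with Definition E.1 (the symbols are assumptions made here) is, for a pure action $a' \in \mathcal{A}_t$ and loss vector $\ell^t$,

\[
u^t_{(a,h)}(a', \ell^t) \;=\; h(t, a')\,\big(\ell^t_{a'} - \ell^t_a\big),
\]

extended linearly to mixtures over $\mathcal{A}_t$; summing this coordinate over rounds yields exactly the $(a,h)$-term of the subsequence regret.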
Bounding the Adversary-Moves-First value.
At each round , the AMF value . Trivially, , as the Adversary can always set for all . Conversely, as an easy consequence of the no-regret-to-unavailable-actions property. To see this, for any vector of actions’ losses , define
and notice that
(no regret to unavailable actions)
We thus conclude that Theorems A.1 and A.2 apply (with and all ) to the subsequence regret setting, yielding the claimed in-expectation and high-probability regret bounds. ∎
We now instantiate subsequence regret with various choices of subsequence families, in order to get bounds and efficient algorithms for several standard notions of regret from the literature. For brevity, for each notion of regret considered below we only exhibit the existential in-expectation guarantee for that type of regret, and omit the corresponding high-probability bounds (which are all easily derivable from Theorem A.2). We also point out that all in-expectation bounds cited below are efficiently achievable by instantiating, with appropriate loss functions, the no-subsequence regret Algorithms 6 and 7 derived in the following Section E.3.
In all no-regret settings discussed below, except for Sleeping Experts, the Learner has the same finite pure action set at every round ; furthermore — as usual — the Adversary’s role at each round consists in selecting the vector of per-action losses .
Internal and Swap Regret
To introduce the notion of internal regret [Foster and Vohra, 1998], consider the following collection of mappings from the action set to itself. consists of the identity map (such that for all ), together with all maps that pair two particular actions: i.e., , and for . The Learner’s internal regret is then defined as
In other words, the Learner’s total loss is being compared to all possible counterfactual worlds, for , in which whenever the Learner played some action , it got replaced with action (and other actions remain fixed).
We can reduce the problem of obtaining no-internal-regret to the problem of obtaining no subsequence regret for a simple choice of subsequences. Let us define the following set of subsequences: , where each is defined to be the indicator of the subsequence where the Learner played action — that is, for all , we let . Then, we let . By the in-expectation no-subsequence-regret guarantee, we then have
since .
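Spelled out (the displays above are omitted, so the notation here is an assumption consistent with the prose), the construction is

\[
\mathcal{H} = \{h_a\}_{a \in \mathcal{A}}, \qquad
h_a(t, a') = \mathbb{1}[a' = a],
\]

with the collection of action-subsequence pairs taken to be $\{(b, h_a) : a, b \in \mathcal{A}\}$, so that coordinate $(b, h_a)$ tracks the Learner's regret to action $b$ over the rounds in which she actually played $a$.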
But observe that the Learner’s internal regret precisely coincides with the just defined instance of subsequence regret:
Therefore, we have established the following existential in-expectation internal regret bound:
which is optimal.
The notion of swap regret, introduced in Blum and Mansour [2007], is strictly more demanding than internal regret in that it considers strategy modification rules that can perform more than one action swap at a time. Consider the set of all swapping rules . The Learner’s swap regret is defined to be the maximum of her regret to all swapping rules:
The interpretation is that the Learner’s total loss is being compared to the total loss of any remapping of her action sequence.
An easy reduction shows that the swap regret is upper-bounded by times the internal regret. For completeness, we provide the details of this reduction in Appendix E.4. The reduction implies an in-expectation bound of on swap regret, which, compared to the optimal bound of (see Blum and Mansour [2007]), has suboptimal dependence on .
Adaptive Regret
In this setting, consider all contiguous time intervals within rounds , namely, all intervals , where are integers such that . The Learner’s regret on each interval is defined as her total loss over the rounds , minus the loss of the best action for that interval in hindsight. The Learner’s adaptive regret is then defined to be her maximum regret over all contiguous time intervals:
We observe that adaptive regret corresponds to subsequence regret with respect to , where is the collection of subinterval indicator subsequences — that is, for all and . Observe that , and therefore, the expected regret upper bound for subsequence regret specializes to the following expected adaptive regret bound:
Sleeping Experts
Following Blum and Mansour [2007], we define the sleeping experts setting as follows. Suppose that the Learner is initially given a set of pure actions , and before each round , the Adversary chooses a subset of pure actions available to the Learner at that round — these are known as the “awake experts”, and the rest of the experts are the “sleeping experts” at that round.
The Learner’s regret to each action is defined to be the excess total loss of the Learner during rounds where was “awake”, compared to the total loss of over those rounds. Formally, the Learner’s sleeping experts regret after round is defined to be
This is clearly an instance of subsequence regret — indeed, we may consider the family of subsequences , where for all , and let . It is easy to verify that the no-regret-to-unavailable-actions property holds, and thus the guarantees of the subsequence regret setting carry over to this sleeping experts setting. In particular, the following existential in-expectation sleeping experts regret bound holds:
which is also optimal in this setting.
Multi-Group Regret
We imagine that before each round, the Adversary selects and reveals to the Learner some context from an underlying feature space . The interpretation is that the Learner’s decision at round will pertain to an individual with features . Additionally, there is a fixed collection , where each is interpreted as a (demographic) group of individuals within the population . Here may be large and may consist of overlapping groups. The Learner’s goal is to minimize regret to each action not just over the entire population, but also separately for each population group . Explicitly, the Learner’s multi-group regret after round is defined to be
It is easy to see that multi-group regret corresponds to subsequence regret with , where is the collection of group indicator subsequences — that is, for all . Here we are taking advantage of the fact that the functions on which subsequences are defined need not be known to the algorithm ahead of time, and can be revealed sequentially by the Adversary, allowing us to model adversarially chosen contexts. Therefore, multi-group regret inherits subsequence regret guarantees, and in particular, we obtain the following existential in-expectation multi-group regret bound:
Observe that this bound scales only as with respect to the number of population groups, which we can therefore take to be exponentially large in the parameters of the problem.
E.3 Deriving No-Subsequence-Regret Algorithms
We now present a way to specialize Algorithm 2 to the setting of subsequence regret with no-regret-to-unavailable-actions. At each round, instead of solving a convex-concave problem, the specialized algorithm will only need to solve a polynomial-sized linear program.
Theorem E.3.
Proof.
When instantiated with our current set of loss functions, Algorithm 2 solves the following zero-sum game at round , where we denote :
By definition of the loss functions in the subsequence regret setting, the objective function is linear in the Adversary’s choice of . Thus, let us rewrite the objective as a linear combination of :
which, by the no-regret-to-unavailable-actions property,
and now, swapping and in the second summation,
Thus, the zero-sum game played at round has objective function where the coefficients do not depend on the Adversary’s action . Recall that this game has value at most . Hence, for any minimax optimal strategy for the Learner — since otherwise, if some , the Adversary would get value by setting and for . Conversely, by playing such that , the Learner gets value , as for all .
Therefore, the Learner’s choice of is minimax optimal if and only if for all ,
This recovers Algorithm 6, concluding the proof. ∎
Simplification for Action Independent Subsequences
The above Algorithm 6 requires solving a linear feasibility problem. This mirrors how existing algorithms for the special case of minimizing internal regret operate (Blum and Mansour [2007]); recall that internal regret corresponds to subsequence regret for a certain collection of subsequences that depend on the Learner’s action in the current round .
By contrast, if all of our subsequence indicators are action independent, that is, satisfy for all and , then it turns out that we can avoid solving a system of linear inequalities: our equilibrium has a closed form. In what follows, we abuse notation and simply write for the value of the subsequence at round .
Observe that if each is action independent, then we can rewrite our equilibrium characterization in Algorithm 6 as the requirement that the Learner’s chosen distribution must satisfy, for each (provided that for at least some ), the following inequality:
Here the equality follows because is a probability distribution.
We now observe that setting each to be its upper bound, for , yields a probability distribution over , which is consequently the unique feasible solution to the above system. Hence, for action independent subsequences, we have a closed-form implementation of Algorithm 6 that does not require solving a linear feasibility problem:
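As an illustration of the kind of closed form just described, here is a minimal sketch (in Python). It is written under assumptions rather than as a transcription of Algorithm 7: the per-coordinate exponential weights over cumulative subsequence regrets, and the normalization, are the natural guesses suggested by the derivation above.

```python
import numpy as np

def action_independent_play(pairs, h_t, cum_regret, eta, n_actions):
    """Round-t distribution over actions for action-independent subsequences.

    pairs       : list of (action a, subsequence j) in the collection of pairs
    h_t         : dict or array; h_t[j] in [0, 1] is subsequence j's value at round t
    cum_regret  : dict; cum_regret[(a, j)] is the cumulative regret of coordinate (a, j)
    eta         : learning rate
    n_actions   : number of pure actions

    Assumed closed form: the probability of action a is proportional to the total
    h_t-weighted exponential weight of the coordinates (a, j).
    """
    x = np.zeros(n_actions)
    for a, j in pairs:
        x[a] += h_t[j] * np.exp(eta * cum_regret[(a, j)])
    total = x.sum()
    if total == 0:
        return np.full(n_actions, 1.0 / n_actions)  # no active subsequence this round
    return x / total
```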
E.4 Omitted Reductions between Different Notions of Regret
Reducing swap regret to internal regret
We can upper bound the swap regret by reusing the instance of subsequence regret that we defined to capture internal regret. Recall that it was defined as follows. We let , where each is the indicator of the subsequence of rounds where the Learner played action — that is, for all , we let . Then, we let . We then obtained the in-expectation regret guarantee
Returning to swap regret, note that for any fixed swapping rule , we have
where in the last line we simply reparametrized the maximum over as the maximum over all . Since the above holds for any , we have
and therefore, we conclude that there exists an efficient algorithm that achieves expected swap regret
Wide-range regret and its connection to subsequence regret
The wide-range regret setting was first introduced in Lehrer [2003] and then studied, in particular, in Blum and Mansour [2007] and Greenwald and Jafari [2003]. It is quite general, and is in fact equivalent to the subsequence regret setting, up to a reparametrization.
Just as in the subsequence regret setting, imagine there is a finite family of subsequences , where each has the form . Moreover, suppose there is a finite family of modification rules. Each modification rule is defined as a mapping , which has the interpretation that if at time , the Learner plays action , then the modification rule modifies this action into another action . Now, consider a collection of modification rule-subsequence pairs . The Learner’s wide-range regret with respect to is defined as
It is evident that wide-range regret has subsequence regret (when the Learner’s action set for all ) as a special case, where each modification rule always outputs the same action: that is, for all , we have for some .
It is also not hard to establish the converse. Indeed, suppose we have an instance of no-wide-range-regret learning with , where is a family of modification rules and is a family of subsequences. Fix any pair . Then, let us define, for all , the subsequence
Now, let us instantiate our subsequence regret setting with
Observe in particular that .
Computing the subsequence regret of this family , we have
Now, we have the following upper bound on the wide-range regret:
Since our subsequence regret results imply the existence of an algorithm such that , we have the following expected wide-range regret bound: