
How Asymmetry Helps Buffer Management: Achieving Optimal Tail Size in Cup Games

William Kuszmaul
Abstract.

The cup game on n cups is a multi-step game with two players, a filler and an emptier. At each step, the filler distributes 1 unit of water among the cups, and then the emptier selects a single cup to remove (up to) 1 unit of water from.

There are several objective functions that the emptier might wish to minimize. One of the strongest guarantees would be to minimize tail size, which is defined to be the number of cups with fill 2 or greater. A simple lower-bound construction shows, however, that the optimal tail size for deterministic emptying algorithms is Θ(n).

We present a simple randomized emptying algorithm that achieves tail size Õ(log n) with high probability in n for poly(n) steps. Moreover, we show that this is tight up to doubly logarithmic factors. We also extend our results to the multi-processor cup game, achieving tail size Õ(log n + p) on p processors with high probability in n. We show that the dependence on p is near optimal for any emptying algorithm that achieves polynomially bounded backlog.

A natural question is whether our results can be extended to give unending guarantees, which apply to arbitrarily long games. We give a lower-bound construction showing that no monotone memoryless emptying algorithm can achieve an unending guarantee on either tail size or the related objective function of backlog. On the other hand, we show that even a very small (i.e., 1/poly(n)) amount of resource augmentation is sufficient to overcome this barrier.

MIT CSAIL. [email protected]. Supported by a Fannie & John Hertz Foundation Fellowship, an NSF GRFP Fellowship, and NSF grant CCF-1533644.

1. Introduction

At the start of the cup game on n cups, there are n empty cups. In each step of the game, a filler distributes 1 unit of water among the cups, and then an emptier removes (up to) 1 unit of water from a single cup of its choice. The emptier aims to minimize some measure of “behind-ness” for cups in the system (e.g., the height of the fullest cup, or the number of cups above a certain height). If the emptier’s algorithm is randomized, then the filler is an oblivious adversary, meaning it cannot adapt to the behavior of the emptier.
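To make the rules concrete, here is a minimal Python sketch (ours, not code from the paper) of the step dynamics, with a fixed-rate filler playing against the greedy emptier; the function names are our own.

```python
def play_step(fills, filler_moves, emptier):
    """One step of the single-processor cup game: the filler distributes
    (at most) 1 unit of water, then the emptier removes up to 1 unit
    from a single cup of its choice."""
    assert sum(filler_moves.values()) <= 1.0 + 1e-9
    for cup, amount in filler_moves.items():
        fills[cup] += amount
    chosen = emptier(fills)  # the emptier moves after the filler
    fills[chosen] = max(0.0, fills[chosen] - 1.0)
    return fills

def greedy_emptier(fills):
    """Deterministic greedy emptier: always target the fullest cup."""
    return max(range(len(fills)), key=lambda j: fills[j])

# Fixed-rate filler: 1/n units into every cup, every step.  In this
# regime constant backlog is achievable, as in the early work cited above.
n = 4
fills = [0.0] * n
for _ in range(100):
    fills = play_step(fills, {j: 1.0 / n for j in range(n)}, greedy_emptier)
```

In this particular fixed-rate run the backlog settles at a constant below 1, illustrating the constant-backlog regime; adversarial fillers are what force the logarithmic bounds discussed below.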

The cup game naturally arises in the study of processor scheduling, modeling a problem in which n tasks each receive work at varying rates, and a scheduler must pick one task to schedule at each time step [7, 23, 8, 32, 30, 36, 6, 26, 33, 34, 1, 31, 19]. The cup game has also found numerous applications to network-switch buffer management [24, 4, 38, 22], quality of service guarantees [7, 1, 31], and data structure deamortization [2, 19, 18, 3, 37, 25, 20, 27, 10].

Bounds on backlog

Much of the work on cup games has focused on bounding the backlog of the system, which is defined to be the amount of water in the fullest cup.

Research on bounding backlog has spanned five decades [7, 23, 8, 32, 30, 36, 6, 26, 33, 34, 19, 11, 28, 1, 18, 31]. Much of the early work focused on the fixed-rate version of the game, in which the filler places a fixed amount of water f_j into each cup j on every step [7, 23, 8, 32, 30, 36, 6, 26, 33, 34]; in this case constant backlog is achievable [33, 7]. For the full version of the game, without fixed rates, constant backlog is not possible. In this case, the optimal deterministic emptying algorithm is known to be the greedy emptying algorithm, which always empties from the fullest cup, and which achieves backlog O(log n) [1, 18]. If the emptier is permitted to use a randomized algorithm, then it can do much better, achieving an asymptotically optimal backlog of O(log log n) for poly(n) steps with high probability [19, 11, 28].

A strong guarantee: small tail size

The tail size of a cup game at each step is the number of cups containing at least some constant c amount of water. For the guarantees in this paper, c will be taken to be 2.

A guarantee of small tail size is particularly appealing for scheduling applications, where cups represent tasks and water represents work that arrives to the tasks over time. Whereas a bound of b on backlog guarantees that the furthest behind worker is only behind by at most b, it says nothing about the number of workers that are behind by b. In contrast, a small bound on tail size ensures that almost no workers are behind by more than O(1).

The main result in this paper is a randomized emptying algorithm that achieves tail size O(log n log log n). The algorithm also simultaneously optimizes backlog, keeping the maximum height at O(log log n). As a result, the total amount of water above height 2 in the system is Õ(log n) with high probability. In contrast, the best possible deterministic emptying algorithm allows for up to n^{1-ε} cups to all have fills Ω(log n) at once (see the lower-bound constructions discussed in [19] and [12]).

The problems of optimizing tail size and backlog are closely related to the problem of optimizing the c-shifted ℓ_p norm of the cup game. Formally, the c-shifted ℓ_p norm is given by

(Σ_{i=1}^{n} max(f_i - c, 0)^p)^{1/p},

where f_i is the fill of cup i. (When p ≠ ∞, in order for the c-shifted ℓ_p norm to be interesting, it is necessary to require that c ≥ Ω(1), since trivial filling strategies can achieve fill Ω(1) in Θ(n) cups deterministically.) The problem of bounding backlog corresponds to the problem of optimizing the ℓ_∞ norm of the cup game, and the problem of bounding tail size corresponds to bounding the 2-shifted ℓ_1 norm. By optimizing both metrics simultaneously, our algorithm has the desirable property that it also achieves a bound of Õ(log n) for the 2-shifted ℓ_p norm for any p ∈ O(1), which is optimal up to doubly logarithmic factors.
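As a quick illustration (our own sketch, not code from the paper), the c-shifted ℓ_p norm, the tail size, and the backlog can all be computed from a vector of fills:

```python
import math

def shifted_lp_norm(fills, c, p):
    """c-shifted l_p norm: only the water above height c in each cup counts."""
    if math.isinf(p):
        return max(max(f - c, 0.0) for f in fills)
    return sum(max(f - c, 0.0) ** p for f in fills) ** (1.0 / p)

def tail_size(fills, c=2):
    """Number of cups with fill at least c (c = 2 throughout the paper)."""
    return sum(1 for f in fills if f >= c)

fills = [3.5, 2.0, 1.9, 0.4]
norm1 = shifted_lp_norm(fills, c=2, p=1)            # only the 3.5-cup contributes
tail = tail_size(fills)                             # cups at 3.5 and 2.0
backlog = shifted_lp_norm(fills, c=0, p=float("inf"))  # the l_infinity norm
```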

Past work on tail size using beyond-worst-case analysis

To approach the combinatorial difficulty of analyzing cup games, past works have often turned to various forms of beyond-worst-case analysis (e.g., smoothed analysis [11], semi-clairvoyance [31], resource augmentation [11, 31, 19, 18]). The most successful of these approaches has arguably been resource augmentation. In the cup game with ε resource augmentation, the filler is permitted to distribute at most 1 - ε units of water into cups at each step (rather than 1 full unit), giving the emptier a small advantage. Resource augmentation can also be studied in a more extreme form by allowing the emptier to fully empty a cup on each step, rather than simply removing a single unit [19, 18]. Resource augmentation significantly simplifies the task of analyzing cup games: for example, there was nearly a 30-year gap between the first randomized bounds on backlog with resource augmentation [19] and the first bounds without resource augmentation [28].

Currently, the best known guarantees with resource augmentation are achieved by the algorithm of [11] which, using ε = 1/polylog(n), limits backlog to O(log log n) and tail size to polylog(n) (with high probability).

The algorithm, which is called the smoothed greedy algorithm, begins by randomly perturbing the starting state of the system, and then following a variant of the deterministic greedy emptying algorithm. Roughly speaking, the random perturbations at the beginning of the game allow for the number of cups X_t containing more than 1 unit of water after each step t to be modeled by a biased random walk X_1, X_2, …, where Pr[X_i = X_{i-1} + 1] = 1/2 - ε and Pr[X_i = max(X_{i-1} - 1, 0)] = 1/2 + ε. This is where the resource augmentation plays a critical role, since it introduces a bias to the random walk which pushes the walk near 0 at all times. In contrast, without resource augmentation, the random walk is unbiased.
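The random-walk picture is easy to simulate. The sketch below (ours; the parameters are illustrative, not from the paper) compares the peak of a reflected walk with bias ε = 0.1 against an unbiased one:

```python
import random

def peak_of_walk(steps, eps, seed=0):
    """Reflected-at-zero walk: up with prob. 1/2 - eps, down (floored at 0)
    with prob. 1/2 + eps; returns the maximum height reached."""
    rng = random.Random(seed)
    x = peak = 0
    for _ in range(steps):
        if rng.random() < 0.5 - eps:
            x += 1
        else:
            x = max(x - 1, 0)
        peak = max(peak, x)
    return peak

biased = peak_of_walk(10_000, eps=0.1)    # bias pushes the walk back toward 0
unbiased = peak_of_walk(10_000, eps=0.0)  # no bias: the walk wanders freely
```

With the bias, the peak stays logarithmically small; without it, the walk typically wanders to a height on the order of the square root of the number of steps.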

Subsequent work [28] showed that the algorithm’s guarantees on backlog continue to hold without resource augmentation (at least, for a polynomial number of steps). More generally, the analysis bounds the number of cups at a given height α by roughly n^{1/2^{Ω(α)}}, which in turn implies an arbitrarily small polynomial tail size. Whether or not a subpolynomial tail size can be achieved without resource augmentation has remained open.

This paper: the asymmetric smoothed greedy algorithm

We show that resource augmentation is not needed to bound tail size. We present a randomized emptying algorithm (called the asymmetric smoothed greedy algorithm) that achieves tail size Õ(log n) with high probability in n after each of the first poly(n) steps of a cup game. We prove that the algorithm is nearly optimal, in that any emptying algorithm must allow for a tail size of Ω̃(log n) with probability at least 1/poly(n).

The analysis of the algorithm takes an indirect approach to bounding tail size. Rather than examining tail size directly, we instead prove that the use of randomness by the algorithm makes the state of the cups at each step “difficult” for the filler to accurately predict. We call this the unpredictability guarantee. We then show that any greedy-like algorithm that satisfies the unpredictability guarantee is guaranteed to have tail size Õ(log n) and backlog O(log log n).

Algorithmically, the key to achieving the unpredictability guarantee is to add a small amount of asymmetry to the smoothed greedy emptying algorithm. When choosing between cups that have 2 or more units of water, the emptier follows the standard smoothed greedy algorithm; but when choosing between cups whose fills are between 1 and 2, the emptier ignores the specifics of how much water is in each cup, and instead chooses between the cups based on random priorities that are assigned to the cups at the beginning of the game.

Intuitively, the asymmetric treatment of cups ensures that there is a large collection (of size roughly n/2) of randomly selected cups that are “almost always” empty. The fact that the filler doesn’t know which cups these are then implies the unpredictability guarantee. Proving this intuition remains highly nontrivial, however, and requires several new combinatorial ideas.

Multi-processor guarantees

The cup game captures a scheduling problem in which a single processor must pick one of n tasks to make progress on in each time step. The multi-processor version of this scheduling problem is captured by the p-processor cup game [11, 28, 31, 1, 7, 33]. In each step of the p-processor cup game, the filler distributes p units of water among cups, and the emptier removes 1 unit of water from (up to) p different cups. Because the emptier can remove at most 1 unit of water from each cup at each step, an analogous constraint is also placed on the filler, requiring that it places at most 1 unit of water into each cup at each step.

A key feature of the p-processor cup game is that the emptier is required to remove water from p distinct cups in each step, even if the vast majority of water is contained in fewer than p cups.

Until recently, establishing any nontrivial bounds on backlog in the multi-processor cup game remained an open problem, even with the help of resource augmentation. Recent work by Bender et al. [11] (using resource augmentation) and then by Kuszmaul [28] (without resource augmentation) established bounds on backlog closely matching those for the single-processor game.

By extending our techniques to the multi-processor setting, we construct a randomized emptying algorithm that achieves tail size Õ(log n + p) with high probability in n after each of the first poly(n) steps of a p-processor cup game. Moreover, we show that the dependence on p is near optimal for any backlog-bounded algorithm (i.e., any algorithm that achieves backlog poly(n) or smaller).

Lower bounds against unending guarantees

In the presence of resource augmentation ε = 1/polylog(n), the smoothed greedy emptying algorithm is known to provide an unending guarantee [11], meaning that the high-probability bounds on backlog and tail size continue to hold even for arbitrarily large steps t.

A natural question is whether unending guarantees can also be achieved without the use of resource augmentation. It was previously shown that, when p ≥ 2, the smoothed greedy algorithm does not offer unending guarantees [28]. Analyzing the single-processor game has remained an open question, however.

We give a lower-bound construction showing that neither the smoothed greedy algorithm nor the asymmetric smoothed greedy algorithm offers unending guarantees for the single-processor cup game without the use of resource augmentation. Even though resource augmentation ε > 0 is needed for the algorithms to achieve unending guarantees, we show that the amount of resource augmentation required is very small. Namely, ε = 1/2^{polylog(n)} is both sufficient and necessary for the asymmetric smoothed greedy algorithm to offer unending guarantees on both tail size and backlog.

We generalize our lower-bound construction to work against any emptying algorithm that is both monotone and memoryless, including emptying algorithms that are equipped with a clock. We show that no such emptying algorithm can offer an unending guarantee of o(log n) backlog in the single-processor cup game, and that any unending guarantee of polylog(n) tail size must come at the cost of polynomial backlog.

We call the filling strategy in our lower bound construction the fuzzing algorithm. The fuzzing algorithm takes a very simple approach: it randomly places water into a pool of cups, and shrinks that pool of cups very slowly over time. The fact that gradually shrinking random noise represents a worst-case workload for cup games suggests that real-world applications of cup games (e.g., processor scheduling, network-switch buffer management, etc.) may be at risk of experiencing “aging” over time, with the performance of the system degrading due to the impossibility of strong unending guarantees.

Related work on other variants of cup games

Extensive work has also been performed on other variants of the cup game. Bar-Noy et al. [5] studied the backlog for a variant of the single-processor cup game in which the filler can place arbitrarily large integer amounts of water into cups at each step. Rather than directly bounding the backlog, which would be impossible, they show that the greedy emptying algorithm achieves competitive ratio O(log n), and that this is optimal for both deterministic and randomized online emptying algorithms. Subsequent work has also considered weaker adversaries [21, 17].

Several papers have also explored variants of cup games in which cups are connected by edges in a graph, and in which the emptier is constrained by the structure of the graph [13, 12, 14, 15]. This setting models multi-processor scheduling with conflicts between tasks [14, 15] and some problems in sensor radio networks [12].

Another recent line of work is that of Kuszmaul and Westover [29], which considers a variant of the p-processor cup game in which the filler is permitted to change the value of p over time. Remarkably, the optimal backlog in this game is significantly worse than in the standard game, and is Θ(n) for an (adaptive) filler.

Cup games have also been used to model memory-access heuristics in databases [9]. Here, the emptier is allowed to completely empty a cup at each step, but the water from that cup is then “recycled” among the cups according to some probability distribution. The emptier’s goal is to achieve a large recycling rate, which is the average amount of water recycled in each step.

Closely related to the study of cup games is the problem of load balancing, in which one must assign balls to bins in order to minimize the number of balls in the fullest bin. In the classic load balancing problem, n balls arrive over time, and each ball comes with a selection of d random bins (out of n bins) in which it can potentially be placed. The load balancing algorithm gets to select which of the d bins to place the ball in, and can, for example, always select the bin with the fewest balls. But what should the algorithm do when choosing between bins that have the same number of balls? In this case, Vöcking famously showed that the algorithm should always break ties in the same direction [40], and that this actually results in an asymptotically better bound on load than if one breaks ties arbitrarily. Interestingly, one can think of the asymmetry used in Vöcking’s algorithm for load balancing as being analogous to the asymmetry used in our algorithm for the cup game: in both cases, the algorithm always breaks ties in the same random direction, although in our result, the way that one should define a “tie” is slightly nonobvious. In the case of Vöcking’s result, the asymmetry is known to be necessary in order to get an optimal algorithm [40]; it remains an open question whether the same is true for the problem of bounding tail size in cup games.

Related work on the roles of backlog and tail size in data structures

Bounds on backlog have been used extensively in data-structure deamortization [2, 19, 18, 3, 37, 25, 20, 27], where the scheduling decisions by the emptier are used to decide how a data structure should distribute its work.

Until recently, the applications focused primarily on in-memory data structures, since external-memory data structures often cannot afford the cost of a buffer overflowing by an ω(1) factor. Recent work shows how to use bounds on tail size in order to solve this problem, and presents a new technique for applying cup games to external-memory data structures [10]. A key insight is that if a cup game has small tail size, then the water in “overflowed cups” (i.e., cups with fill more than O(1)) can be stored in a small in-memory cache. The result is that every cup consumes exactly Θ(1) blocks in external memory, meaning that each cup can be read/modified by the data structure in O(1) I/Os. This insight was recently applied to external-memory dictionaries in order to eliminate flushing cascades in write-optimized data structures [10].

Outline

The paper is structured as follows. Section 2 describes a new randomized algorithm that achieves small tail size without resource augmentation. Section 3 gives a technical overview of the algorithm’s analysis and of the other results in this paper. Section 4 then presents the full analysis of the algorithm and Section 5 presents (nearly) matching lower bounds. Finally, Section 6 gives lower bounds against unending guarantees and analyzes the amount of resource augmentation needed for such guarantees.

Conventions

Although in principle an arbitrary constraint height c can be used to determine which cups contribute to the tail size, all of the algorithms in this paper work with c = 2. Thus, throughout the rest of the paper, we define the tail size to be the number of cups with height 2 or greater.

As a convention, we say that an event occurs with high probability in n if the event occurs with probability at least 1 - 1/n^c for an arbitrarily large constant c of our choice. The constant c is allowed to affect other constants in the statement. For example, an algorithm that achieves tail size c log n with probability 1 - 1/n^c is said to achieve tail size O(log n) with high probability in n.

2. The Asymmetric Smoothed Greedy Algorithm

Past work on randomized emptying algorithms has focused on analyzing the smoothed greedy algorithm [11, 28]. The algorithm begins by randomly perturbing the starting state of the system: the emptier places a random offset r_j of water into each cup j, where the r_j’s are selected independently and uniformly from [0, 1). The emptier then follows a greedy emptying algorithm, removing water from the fullest cup at each step. If the fullest cup contains fill less than 1, however, then the emptier skips its turn. This ensures that the fractional amount of water in each cup j (i.e., the amount of water modulo 1) is permanently randomized by the initial offset r_j. The randomization of the fractional amounts of water in each cup has been critical to past randomized analyses [11, 28], and continues to play an important (although perhaps less central) role in this paper.

This paper introduces a new variant of the smoothed greedy algorithm that we call the asymmetric smoothed greedy algorithm. The algorithm assigns a random priority p_j ∈ [0, 1) to each cup j (at the beginning of the game) and uses these priorities to “break ties” when cups contain relatively small amounts of water. Interestingly, by always breaking these ties in the same direction, we change the dynamics of the game in a way that allows for new analysis techniques. We describe the algorithm in detail below.

Algorithm description

At the beginning of the game, the emptier selects random offsets r_j ∈ [0, 1) independently and uniformly at random for each cup j. Prior to the game beginning, r_j units of water are placed in each cup j. This water is for “bookkeeping” purposes only, and need not physically exist. During initialization, the emptier also assigns a random priority p_j ∈ [0, 1) independently and uniformly at random to each cup j.

After each step t, the emptier selects (up to) p different cups to remove 1 unit of water from as follows. If there are p or more cups containing 2 or more units of water, then the emptier selects the p fullest such cups. Otherwise, the emptier selects all of the cups containing 2 or more units of water, and then resorts to cups containing fill in [1, 2), choosing between these cups based on their priorities p_j (i.e., choosing cups with larger priorities over those with smaller priorities). The emptier never removes water from any cup containing less than 1 unit of water.
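The selection rule above can be sketched as follows (our own Python rendering of the description; the function name is ours):

```python
def select_cups(fills, priorities, p):
    """Asymmetric smoothed greedy selection: cups with fill >= 2 are
    served fullest-first; any remaining slots go to cups with fill in
    [1, 2), served in order of decreasing priority.  Cups with fill
    below 1 are never selected."""
    heavy = sorted((j for j, f in enumerate(fills) if f >= 2),
                   key=lambda j: -fills[j])
    light = sorted((j for j, f in enumerate(fills) if 1 <= f < 2),
                   key=lambda j: -priorities[j])
    return (heavy + light)[:p]

# With p = 3: the two heavy cups (fills 2.5 and 2.1) go first, then the
# light cup with the larger priority.
chosen = select_cups([2.5, 1.2, 1.8, 0.3, 2.1],
                     [0.1, 0.9, 0.4, 0.7, 0.2], p=3)
```

Note that the amount of water in a light cup plays no role in the choice; only its priority does.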

Threshold crossings and a threshold queue

When discussing the algorithm, several additional definitions and conventions are useful. We say that threshold (j, i) is crossed if cup j contains at least i units of water, for positive integer i. When i = 1, the threshold (j, i) is called a light threshold, and otherwise (j, i) is called a heavy threshold. One interpretation of the emptying algorithm is that there is a queue Q of thresholds (j, i) that are currently crossed. Whenever the filler places water into cups, this may add thresholds (j, i) to the queue. And whenever the emptier removes water from some cup j, this removes some threshold (j, i) from the queue. When selecting thresholds to remove from the queue, the emptier prioritizes heavy thresholds over light ones. Within the heavy thresholds, the emptier prioritizes based on cup height, and within the light thresholds the emptier prioritizes based on cup priorities p_j.

As a convention, we say that a cup j is queued if (j, 1) is in Q (or, equivalently, if (j, i) is in the queue for any i). The emptier is said to dequeue cup j whenever threshold (j, 1) is removed from the queue. The size of the queue Q refers to the number of thresholds in the queue (rather than the number of cups).
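The queue interpretation can be made concrete with a short sketch (ours, for illustration only):

```python
def queue_contents(fills):
    """All currently-crossed thresholds (j, i): cup j holds >= i units."""
    return [(j, i) for j, f in enumerate(fills) for i in range(1, int(f) + 1)]

def next_dequeue(fills, priorities):
    """Threshold removed next: heavy thresholds (i >= 2) come before light
    ones; heavy are ordered by cup height, light by cup priority."""
    heavy_cups = [j for j, f in enumerate(fills) if f >= 2]
    if heavy_cups:
        j = max(heavy_cups, key=lambda j: fills[j])
        return (j, int(fills[j]))   # topmost crossed threshold of that cup
    light_cups = [j for j, f in enumerate(fills) if 1 <= f < 2]
    if not light_cups:
        return None                 # the emptier skips its turn
    return (max(light_cups, key=lambda j: priorities[j]), 1)

# Cup 0 (fill 2.5) contributes thresholds (0,1) and (0,2); the heavy
# threshold (0,2) is removed before any light threshold.
first = next_dequeue([2.5, 1.2, 0.4], [0.1, 0.9, 0.5])
```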

3. Technical Overview

In this section we give an overview of the analysis techniques used in the paper. We begin by discussing the analysis of the asymmetric smoothed greedy algorithm. To start, we focus our analysis on the single-processor cup game, in which p = 1.

The unpredictability guarantee

At the heart of the analysis is what we call the unpredictability guarantee, which, roughly speaking, establishes that the filler cannot predict large sets of cups that will all be over-full at the same time as one another. We show that if an algorithm satisfies a certain version of the unpredictability guarantee, along with certain natural “greedy-like” properties, then the algorithm is guaranteed to exhibit a small tail size.

Formally, we say that an emptying algorithm satisfies R-unpredictability at a step t if, for any oblivious filling algorithm, and for any set of cups S whose size is a sufficiently large constant multiple of R, there is high probability in n that at least one cup in S has fill less than 1 after step t. In other words, for any polynomial f(n), there exists a constant c such that: for each set S ⊆ [n] of cR cups, the probability that every cup in S has height 1 or greater at step t is at most 1/f(n).

How R-unpredictability helps

Rather than proving that R-unpredictability causes the tail size to stay small, we instead show the contrapositive. Namely, we show that if there is a filling strategy that achieves a large tail size, the strategy can be adapted to instead violate R-unpredictability.

Suppose that the filler is able to achieve tail size cR at some step t, where c is a large constant. Then during each of the next cR steps, the emptier will remove water from cups containing fill 2 or more (here, we use the crucial fact that the emptier always prioritizes cups with fills 2 or greater over cups with fills smaller than 2). This means that, during steps t + 1, …, t + cR, the set of cups with fill 1 or greater is monotonically increasing. During these steps the filler can place 1 unit of water into each of the cups 1, 2, …, cR in order to ensure that these cups all contain fill 1 or greater after step t + cR. Thus the filler can transform the initial tail size of cR into a large set of cups S = {1, 2, …, cR} that all have fill 1 or greater. In other words, any filling strategy for achieving large tail size (at some step t) can be harnessed to violate R-unpredictability (at some later step t + cR).

The directness of the argument above may seem to suggest that, in order to prove R-unpredictability, one must first (at least implicitly) prove a bound on tail size. A key insight in this paper is that the use of priorities in the asymmetric smoothed greedy algorithm allows for R-unpredictability to be analyzed as its own entity.

Our algorithm analysis establishes (log n log log n)-unpredictability for the first poly(n) steps of any cup game, with high probability in n. This, in turn, implies a bound of O(log n log log n) on tail size.

Establishing unpredictability

We prove that, out of the roughly n/2 cups j with priorities p_j ≥ 1/2, at most O(log n log log n) of them are queued (i.e., contain fill 1 or greater) at a time, with high probability in n. Recall that the cups with priority p_j ≥ 1/2 are prioritized by the asymmetric smoothed greedy algorithm when the algorithm is choosing between cups with fills in the interval [1, 2). This preferential treatment does not extend to the case where there are cups containing fill ≥ 2, however. Remarkably, the limited preferential treatment exhibited by the algorithm is enough to ensure that the number of queued high-priority cups never exceeds O(log n log log n).

The bound of O(log n log log n) on the number of queued cups with priorities ≥ 1/2 implies (log n log log n)-unpredictability as follows. For any fixed set S of cups, the number of cups j in S with priority p_j ≥ 1/2 will be roughly |S|/2 with high probability in n. If |S|/2 is at least a sufficiently large constant multiple of log n log log n, then the number of cups with p_j ≥ 1/2 in S exceeds the total number of cups with p_j ≥ 1/2 that are queued. Thus S must contain at least one non-queued cup, as required for the unpredictability guarantee.

In order to bound the number of queued cups with priority p_j ≥ 1/2 by O(log n log log n), we partition the cups into Θ(log log n) priority levels based on their priorities p_j. Let q be a sufficiently large constant multiple of log log n. The priority level of a cup j is given by ⌊p_j · q⌋ + 1. (Note that the priority levels are only needed in the analysis, and the algorithm does not have to know q.) We show that, with high probability in n, there are never more than O(log n log log n) queued cups with priority level ≥ q/2. Note that, although we only care about bounding the number of queued cups whose priority levels are in the top fifty percentile, our analysis will take advantage of the fact that the priorities p_j are defined at a high granularity (rather than, for example, being boolean).
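For concreteness, the priority-level formula looks as follows in code (our own sketch; recall that q is an analysis-only parameter, and the example value q = 10 is ours):

```python
import math

def priority_level(p_j, q):
    """Level of a cup with priority p_j in [0, 1): one of 1, ..., q."""
    return math.floor(p_j * q) + 1

# With q = 10 levels, priorities in [0.5, 1) land in the top half.
levels = [priority_level(p, 10) for p in (0.04, 0.5, 0.73, 0.99)]
```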

The stalled emptier problem

Bounding the number of queued cups with priority level greater than ℓ directly is difficult for the following reason: over the course of a sequence of steps, the filler may cross many light thresholds of cups with priority level greater than ℓ, while the emptier only removes heavy thresholds from Q (i.e., the emptier empties exclusively from cups of height 2 or greater). This means that, in a given sequence of steps, the number of queued cups with priority level greater than ℓ could increase substantially. We call this the stalled emptier problem. Note that the stalled emptier problem is precisely what enables the connection between tail size and R-unpredictability above, allowing the filler to transform large tail size into a violation of R-unpredictability. As a result, any analysis that directly considers the stalled emptier problem must also first bound tail size, bringing us back to where we started.

Rather than bounding the number of queued cups with priority level greater than ℓ, we instead compare the number of queued cups at priority level greater than ℓ to the number at priority level ℓ. The idea is that, if the stalled-emptier problem allows for the number of queued cups with priority level greater than ℓ to grow large, then it will allow for the number of queued priority-level-ℓ cups to grow even larger. That is, without proving any absolute bound on the number of cups at a given priority level, we can still say something about the ratio of high-priority cups to low-priority cups in the queue.

To be precise, we prove that, whenever there are k queued cups at some priority level ℓ, there are at most O(√(qk log n) + log n) queued cups at priority level > ℓ (recall that q = Θ(log log n) is the number of priority levels). Since the number of cups with priority level at least 1 is deterministically anchored at n, this allows us to inductively bound the number of queued cups with large priority levels ℓ. In particular, the number of queued cups at priority level q/2 or greater never exceeds O(log n log log n).

Comparing the number of queued cups with priority level ℓ versus > ℓ

Suppose that after some step t, there is some large number k of queued cups with priority level ≥ ℓ. We wish to show that almost all of these k cups have priority level exactly ℓ.

Before describing our approach in detail (which we do in the following two subsections), we give an informal description of the approach. Let k_1 be the number of priority-level-ℓ queued cups, and let k_2 be the number of queued cups with priority level greater than ℓ. The only way that there can be a large number k_2 of cups with priority level greater than ℓ queued is if they have all entered the queue since the last time that a level-ℓ cup was dequeued. This means that the size of Q has increased by at least k_2 since the last time that a priority-level-ℓ cup was dequeued. On the other hand, we show that priority-level-ℓ cups accumulate in Q at a much faster rate than the size of Q varies. In particular, we show that both the rate at which priority-level-ℓ cups accumulate in Q and the rate at which Q’s size varies are controlled by what we call the “influence” of a time interval, and that the former is always much larger than the latter. This ensures that k_1 ≫ k_2.

Note that the analysis avoids arguing directly that the number of queued high-priority cups is small, which could be difficult due to the stalled-emptier problem. Intuitively, the analysis instead shows that low-priority cups do a good job of “pushing” the high-priority cups out of the queue, ensuring that the ratio of low-priority cups (i.e., cups with priority level \ell) to high-priority cups (i.e., cups with priority level >>\ell) is always very large.

Relating the number of high-priority queued cups to changes in |Q||Q|

Let t0t_{0} be the most recent step t0tt_{0}\leq t such that some set CC of at least k+1k+1 distinct cups with priority level \ell cross thresholds during steps t0,,tt_{0},\ldots,t. (Recall that kk is the number of queued cups with priority level \geq\ell after step tt.) One can think of the steps t0,,tt_{0},\ldots,t as representing a long period of time in which many cups with priority level \ell have the opportunity to accumulate in QQ. We will now show that the use of priorities in the asymmetric smoothed greedy algorithm causes the following property to hold: the number of queued cups with priority level >>\ell after step tt is bounded above by the amount that |Q||Q| varies during steps t0,,tt_{0},\ldots,t.

Because QQ contains only kk queued cups with priority level \geq\ell after step tt, at least one cup from CC must be dequeued during steps t0,,tt_{0},\ldots,t (otherwise, QQ would contain at least |C|=k+1|C|=k+1 level-\ell cups after step tt). Let tt^{*} be the final step in t0,,tt_{0},\ldots,t out of those that dequeue a cup with priority level \leq\ell, and let QtQ_{t^{*}} and QtQ_{t} denote the queue after steps tt^{*} and tt, respectively.

By design, the only way that the asymmetric smoothed greedy algorithm can dequeue a cup with priority level \leq\ell at step tt^{*} is if the queue QtQ_{t^{*}} consists exclusively of light thresholds (i.e., thresholds of the form (j,1)(j,1)) for cups jj with priority level \leq\ell. Moreover, the thresholds in QtQ_{t^{*}} must remain present in QtQ_{t}, since by the definition of tt^{*} no cups with priority level \leq\ell are dequeued during steps t+1,,tt^{*}+1,\ldots,t.

Since QtQtQ_{t^{*}}\subseteq Q_{t} and QtQ_{t^{*}} contains only thresholds for cups with priority level \leq\ell, the total number of thresholds in QtQ_{t} for cups with priority level >>\ell is at most |Qt||Qt||Q_{t}|-|Q_{t^{*}}|. In other words, the only way that a large number of cups with priority level >>\ell can be queued after step tt is if the size of QQ varies by a large amount during steps t0,,tt_{0},\ldots,t.

Although tt0t-t_{0} may be very large compared to kk (e.g., polyn\operatorname{poly}n), we show that the amount by which |Q||Q| varies during steps t0,,tt_{0},\ldots,t is guaranteed to be small as a function of kk, bounded above by O(kqlogn)O(\sqrt{kq\log n}). This means that, out of the kk cups with priority level \geq\ell in QtQ_{t}, at most O(kqlogn)O(\sqrt{kq\log n}) of them can have priority level +1\ell+1 or larger.

The influence property: bounding the rate at which |Q||Q| varies

Our main tool for analyzing the rate at which QQ’s size varies is to classify sequences of steps based on their influence. For a sequence of steps II, the influence of II is defined to be j=1nmin(1,cj(I))\sum_{j=1}^{n}\min(1,c_{j}(I)), where cj(I)c_{j}(I) is the amount of water poured into cup jj during interval II. We show that, for any priority level \ell and for any step interval II with influence 2rq2rq for some rr, either r=O(logn)r=O(\log n), or two important properties are guaranteed to hold with high probability:

  • Step interval II crosses thresholds in at least rr cups with priority level \ell. This is true of any interval II with influence at least 2qr2qr by a simple concentration-bound argument.

  • The size of QQ varies by at most O(qrlogn)O(\sqrt{qr\log n}) during step interval II.

    The key here is to show that, during each subinterval III^{\prime}\subseteq I, the number of thresholds crossed by the filler is within O(qrlogn)O(\sqrt{qr\log n}) of |I||I^{\prime}|. In order to do this, we take advantage of the initial random offsets rjr_{j} that are placed into each cup by the algorithm. If the filler puts some number cj(I)c_{j}(I^{\prime}) of units of water into a cup jj during II^{\prime}, then the cup jj will deterministically cross cj(I)\lfloor c_{j}(I^{\prime})\rfloor thresholds, and with probability cj(I)cj(I)c_{j}(I^{\prime})-\lfloor c_{j}(I^{\prime})\rfloor will cross one additional threshold (with the outcome depending on the random value rjr_{j}). Since the influence of II^{\prime} is at most 2rq2rq, we know that j(cj(I)cj(I))2rq\sum_{j}(c_{j}(I^{\prime})-\lfloor c_{j}(I^{\prime})\rfloor)\leq 2rq. That is, if we consider only the threshold crossings that are not certain, then the number of them is a sum of independent 0-1 random variables with mean at most 2rq2rq. By a Chernoff bound, this number varies from its mean by at most O(qrlogn)O(\sqrt{qr\log n}), with high probability in nn.

Combined, we call these two properties the influence property. By a union bound, the influence property holds with high probability for all subsequences of steps during the cup game and for all values of rr.

The influence property creates a link between the number of cups with priority level \ell that cross thresholds during a sequence of steps II, and the amount by which |Q||Q| varies during steps II. Applying this link with r=k+1r=k+1 to steps t0,,tt_{0},\ldots,t, as defined above, implies that |Q||Q| varies by at most O(qklogn)O(\sqrt{qk\log n}) during steps t0,,tt_{0},\ldots,t. This, in turn, bounds the number of queued cups with priority level +1\ell+1 or larger by O(qklogn)O(\sqrt{qk\log n}) after step tt, completing the analysis.

Extending the analysis to the multi-processor cup game

The primary difficulty in analyzing the multi-processor cup game (i.e., when p>1p>1) is that the emptier must remove water from pp different cups, even if almost all of the water in the system resides in fewer than pp cups. For example, the emptier may dequeue a cup jj even though there are up to p1p-1 other higher-priority cups that are still queued; furthermore, each of these higher-priority cups may contribute a large number of heavy thresholds to the queue QQ.

We solve this issue by leveraging recent bounds on backlog for the pp-processor cup game [28], which prove that the deterministic greedy emptying algorithm achieves backlog O(logn)O(\log n). This can be used to ensure that, for any p1p-1 cups that are queued, each of them can only contribute a relatively small number of thresholds to the queue QQ. These “misbehaving” thresholds can then be absorbed into the algorithm analysis.

Nearly matching lower bounds on tail size

Our lower-bound constructions extend the techniques used in past works for backlog [28, 11, 19] in order to apply similar ideas to tail size. One of the surprising features of our lower bounds is that they continue to be nearly tight even in the multi-processor case — the same is not known to be true for backlog. We defer further discussion of the lower bounds to Section 5.

Lower bounds against unending guarantees

Finally, we consider the question of whether the analysis of the asymmetric smoothed greedy algorithm can be extended to offer an unending guarantee, i.e., a guarantee that for any step tt, no matter how large, there is a high probability at step tt that the backlog and tail size are small.

We show that, without the use of resource augmentation, unending guarantees are not possible for the asymmetric smoothed greedy algorithm, or, more generally, for any monotone memoryless emptying algorithm. Lower bounds against unending guarantees have previously been shown for the multi-processor cup game [28], but remained open for the single-processor cup game.

The filling strategy, which we call the fuzzing algorithm, has a very simple structure: the filler spends a large number (i.e., nΘ~(n)n^{\tilde{\Theta}(n)}) of steps randomly placing water in multiples of 1/21/2 into cups 1,2,,n1,2,\ldots,n. The filler then disregards a random cup, which for convenience we will denote by nn, and spends a large number of steps randomly placing water into the remaining cups 1,2,,n11,2,\ldots,n-1. The filler then disregards another random cup, which we will call cup n1n-1, and spends a large number of steps randomly placing water into cups 1,2,,n21,2,\ldots,n-2, and so on. We call the portion of the algorithm during which the filler is focusing on cups 1,2,,i1,2,\ldots,i the ii-cup phase.
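As a structural illustration only (the paper's lower bound targets monotone memoryless emptiers and uses far longer phases; the tiny parameters and the plain greedy emptier below are our simplifications, not the paper's construction), the phase structure of the fuzzing filler can be sketched as:

```python
import random

def fuzzing_filler_vs_greedy(n=8, phase_len=200, seed=0):
    """Toy simulation of the fuzzing filler's phase structure: in the
    i-cup phase, the filler randomly distributes 1 unit per step (in
    multiples of 1/2) among the i cups it is still focusing on; a greedy
    emptier then removes up to 1 unit from the fullest cup."""
    rng = random.Random(seed)
    fills = [0.0] * n
    active = list(range(n))  # cups the filler is still focusing on
    for i in range(n, 1, -1):  # the i-cup phase, for i = n, n-1, ..., 2
        for _ in range(phase_len):
            # Filler's move: two half-units into random focused cups.
            for _ in range(2):
                fills[rng.choice(active[:i])] += 0.5
            # Emptier's move: greedily empty the fullest cup.
            j = max(range(n), key=lambda k: fills[k])
            fills[j] = max(0.0, fills[j] - 1.0)
        # Between phases, the filler disregards a random focused cup.
        drop = rng.randrange(i)
        active[drop], active[i - 1] = active[i - 1], active[drop]
    return max(fills)  # backlog at the end of the 2-cup phase

backlog = fuzzing_filler_vs_greedy()
```

The intuition described below is about expected fills accumulating across phases; this toy only exhibits the phase mechanics, not the nΘ~(n)n^{\tilde{\Theta}(n)} phase lengths needed for the actual argument.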

Rather than describe the analysis of the fuzzing algorithm (which is somewhat complicated), we instead give an intuition for why the algorithm works. For simplicity, suppose the emptier follows the (standard) smoothed greedy emptying algorithm.

Between the ii-cup phase and the (i1)(i-1)-cup phase, the filler disregards a random cup (that we subsequently call cup ii). Intuitively, at the time that cup ii is discarded, there is a roughly 50%50\% chance that cup ii has more fill than the average of cups 1,2,,i1,2,\ldots,i. Then, during the (i1)(i-1)-cup phase, there is a reasonably high probability that the filler at some point manages to make all of cups 1,2,,i11,2,\ldots,i-1 have almost equal fills to one another. At this point, the emptier will choose to empty out of cup ii instead of cups 1,2,,i11,2,\ldots,i-1. The fact that the emptier neglects cups 1,2,,i11,2,\ldots,i-1 during the step, even though the filler places 11 unit of water into them, causes their average fill to increase by 1/(i1)1/(i-1). Since this happens with constant probability in every phase, the result is that, by the beginning of the n\sqrt{n}-cup phase there are n\sqrt{n} cups each with expected fill roughly

Ω(1n+1n1++1n+1)=Ω(logn).\Omega\left(\frac{1}{n}+\frac{1}{n-1}+\cdots+\frac{1}{\sqrt{n}+1}\right)=\Omega(\log n).
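This harmonic sum can be sanity-checked numerically: for m = √n, the sum of 1/i over i from m+1 to n is approximately ln n − ln √n = (ln n)/2. A quick check (our own, for illustration):

```python
import math

# Sum of 1/i for i from sqrt(n)+1 up to n, which the construction shows
# lower-bounds (up to constants) the expected fill of each surviving cup.
n = 10**6
m = math.isqrt(n)
partial_harmonic = sum(1.0 / i for i in range(m + 1, n + 1))
expected = math.log(n) / 2  # ln(n) - ln(sqrt(n)) = (ln n)/2
```

For n = 10^6 the two quantities agree to within a small additive error, confirming the Ω(log n) scale of the bound.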

Formalizing this argument leads to several interesting technical problems. Most notably, the cups 1,2,,i11,2,\ldots,i-1 having almost equal fills (rather than exactly equal fills) may not be enough for cup ii to receive the emptier’s attention. Moreover, if we wish to analyze the asymmetric smoothed greedy algorithm or, more generally, the class of monotone memoryless algorithms, then cups are not necessarily judged by the emptying algorithm based on their fill heights, and may instead be selected based on an essentially arbitrary objective function that need not treat cups symmetrically. These issues are handled in Section 6 by replacing the notion of cups 1,2,,i11,2,\ldots,i-1 having almost equal fills to one another with the notion of cups 1,,i11,\ldots,i-1 reaching a certain type of specially designed equilibrium state that interacts well with the emptier.

4. Algorithm analysis

In this section, we give the full analysis of the pp-processor asymmetric smoothed greedy algorithm. The main result of the section is Theorem 4.10, which bounds the tail size of the game by O(lognloglogn+plogp)O(\log n\log\log n+p\log p) for the first polyn\operatorname{poly}n steps of the game with high probability in nn.

In addition to using the conventions from Section 2, we find it useful to introduce one additional piece of notation: for a sequence of steps II, define cj(I)c_{j}(I) to be the amount of water placed into cup jj during II. We also continue to use the convention from Section 3 that qq is a large constant multiple of loglogn\log\log n, and that each cup jj is assigned a priority level given by pjq+1\lfloor p_{j}\cdot q\rfloor+1.

Recall that a cup jj crosses a threshold (j,i)(j,i) whenever the fill of cup jj increases from some quantity f<if<i to some quantity fif^{\prime}\geq i for ii\in\mathbb{N}. A key property of the smoothed greedy algorithm, which was originally noted by Bender et al. [11], is that the number of threshold crossings across any sequence of steps can be expressed using a sum of independent 0-1 random variables. (Note that, when counting the number of threshold crossings across a sequence of steps, the same threshold (j,i)(j,i) may get crossed multiple times, and thus contribute more than 11 to the count.) This remains true for the asymmetric smoothed greedy algorithm, and is formalized in Lemma 4.1.

Lemma 4.1 (Counting threshold crossings).

For a sequence of steps II, and for a cup jj, the number of threshold crossings in cup jj is cj(I)+Xj\lfloor c_{j}(I)\rfloor+X_{j}, where XjX_{j} is a 0-1 random variable with mean cj(I)cj(I)c_{j}(I)-\lfloor c_{j}(I)\rfloor. Moreover, X1,X2,,XnX_{1},X_{2},\ldots,X_{n} are independent.

Proof.

Recall that the emptier only removes water from a cup jj if cup jj contains at least 11 unit. Moreover, the emptier always removes exactly 11 unit of water from cups. Since threshold crossings in each cup jj depend only on the fractional amount of water (i.e., the amount of water modulo 11) in the cup, the behavior of the emptier cannot affect when thresholds are crossed within each cup.

Let t0t_{0} be the final step prior to interval II. For each cup jj, the fractional amount of water in the cup at the beginning of interval II is

(1) rj+cj([1,t0])mod1.r_{j}+c_{j}([1,t_{0}])\bmod 1.

Since rjr_{j} is uniformly random in [0,1][0,1], it follows that (1) is as well. The first cj(I)cj(I)c_{j}(I)-\lfloor c_{j}(I)\rfloor units of water poured into cup jj during interval II will therefore cross a threshold with probability exactly cj(I)cj(I)c_{j}(I)-\lfloor c_{j}(I)\rfloor. The next cj(I)\lfloor c_{j}(I)\rfloor units of water placed into cup jj are then guaranteed to cause exactly cj(I)\lfloor c_{j}(I)\rfloor threshold crossings. The number of crossings in cup jj during the step sequence is therefore cj(I)+Xj\lfloor c_{j}(I)\rfloor+X_{j}, where XjX_{j} is a 0-1 random variable with mean cj(I)cj(I)c_{j}(I)-\lfloor c_{j}(I)\rfloor, and where the randomness in XjX_{j} is due to the random initial offset rjr_{j}. Because r1,r2,,rnr_{1},r_{2},\ldots,r_{n} are independent, so are X1,X2,,XnX_{1},X_{2},\ldots,X_{n}. ∎
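Lemma 4.1 lends itself to a quick empirical check. The sketch below (our own illustration; the pour amounts are arbitrary) counts integer-threshold crossings in a single cup with a uniformly random initial offset, and confirms that the count is the floor of the total pour plus a Bernoulli random variable whose mean is the fractional part:

```python
import math
import random

def count_crossings(offset, pours):
    """Count integer-threshold crossings as water is poured into a cup
    whose fractional fill starts at `offset`. (Emptier removals are whole
    units, so they never change the fractional part.)"""
    frac, crossings = offset, 0
    for amount in pours:
        frac += amount
        crossings += int(frac)  # integer thresholds passed by this pour
        frac -= int(frac)
    return crossings

rng = random.Random(42)
pours = [0.3, 0.45, 0.5, 0.45]      # c_j(I) = 1.7
base = math.floor(sum(pours))        # 1 guaranteed crossing
frac_part = sum(pours) - base        # one extra crossing w.p. 0.7
trials = 20000
extra = sum(count_crossings(rng.random(), pours) - base
            for _ in range(trials))
empirical_mean = extra / trials      # concentrates around frac_part
```

Over many independent offsets, the extra crossing occurs with frequency close to the fractional part 0.7, matching the lemma.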

One consequence of Lemma 4.1 is that, if a sequence of steps II has a large influence ss, then each priority level \ell will have at least Ω(s/q)\Omega(s/q) cups that cross thresholds during interval II (recall that qq is the number of priority levels).

Lemma 4.2 (The influence property, part 1).

Consider a sequence of steps II with influence ss. Let \ell be a priority level. With high probability in nn, at least s2qO(logn)\frac{s}{2q}-O(\log n) distinct cups with priority level \ell cross thresholds during the sequence of steps II.

Proof.

By Lemma 4.1, the probability that cup jj crosses at least one threshold during step sequence II is min(cj(I),1)\min(c_{j}(I),1), independently of other cups jj^{\prime}. The number XX of distinct cups that cross thresholds during interval II is therefore a sum of independent indicator random variables with mean ss, where ss is the influence of II. Since each cup has probability 1q\frac{1}{q} of having priority level \ell, the number YY of cups with priority level \ell that cross thresholds in interval II is a sum of independent indicator random variables with mean sq\frac{s}{q}.

If s/qO(logn)s/q\leq O(\log n), the number of distinct cups with priority level \ell to cross thresholds is at least 0s2qO(logn)0\geq\frac{s}{2q}-O(\log n) trivially. Suppose, on the other hand, that s/qclogns/q\geq c\log n for a sufficiently large constant cc. Then by a Chernoff bound,

Pr[Y<s2q]exp[s8q]1nc/8,\Pr\left[Y<\frac{s}{2q}\right]\leq\exp\Big{[}-\frac{s}{8q}\Big{]}\leq\frac{1}{n^{c/8}},

completing the proof. ∎
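For concreteness, the Chernoff step can be written out. It is the standard multiplicative lower-tail bound, instantiated with mean \mu = \mathbb{E}[Y] = s/q and deviation parameter \delta = 1/2:

```latex
\Pr\left[Y < \frac{s}{2q}\right]
  = \Pr\left[Y < \left(1 - \tfrac{1}{2}\right)\mu\right]
  \le \exp\left[-\frac{(1/2)^{2}\,\mu}{2}\right]
  = \exp\left[-\frac{s}{8q}\right]
  \le \frac{1}{n^{c/8}},
```

where the last inequality uses the assumption that s/q \ge c\log n.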

The proofs of the preceding lemmas have not needed to explicitly consider the effect of there being a potentially large number pp of processors. In subsequent proofs, the multi-processor case will complicate the analysis in two ways. First, the emptier may sometimes dequeue a cup, even when there are more than pp heavy thresholds in the queue (this can happen when the heavy thresholds all belong to a set of fewer than pp cups). Second, and similarly, the emptier may sometimes be unable to remove a full pp thresholds from the queue QQ, even though |Q|>p|Q|>p (this can happen if all of the thresholds in QQ belong to a set of fewer than pp cups). It turns out that both of these problems can be circumvented using the fact that the emptying algorithm achieves small backlog. In particular, this ensures that no single cup can ever contribute more than O(loglogn+logp)O(\log\log n+\log p) thresholds to QQ:

Lemma 4.3 (K. [28]).

In any multi-processor cup game of polyn\operatorname{poly}n length, the asymmetric smoothed greedy algorithm achieves backlog O(loglogn+logp)O(\log\log n+\log p) after each step, with high probability in nn.

Proof.

This follows from Theorem 5.1 of [28]. Although [28] considers the smoothed greedy algorithm (rather than the asymmetric smoothed greedy algorithm), the analysis applies without modification. ∎

Using Lemma 4.3 as a tool to help in the case of p>1p>1, we now return to the analysis approach outlined in Section 3.

Remark 4.4.

The proof of Lemma 4.3 given in [28] is highly nontrivial. We remark that, although Lemma 4.3 simplifies our analysis, there is also an alternative lighter-weight approach that one can use in place of the lemma. In particular, one can begin by analyzing the hh-truncated cup game for some sufficiently large hO(loglogn+logp)h\leq O(\log\log n+\log p). In this game, the height of each cup is deterministically bounded above by hh, and whenever the height of a cup exceeds hh, 11 unit of water is removed from the cup (and that unit does not count as part of the emptier’s turn). The hh-truncated cup game automatically satisfies the backlog property stated by Lemma 4.3, allowing it to be analyzed without requiring the lemma. The analysis can then be used to bound the backlog of the hh-truncated cup game by at most h/2h/2 with high probability (using the analysis by [28] of the greedy algorithm, applied to the O~(logn+p)\tilde{O}(\log n+p) cups in the tail). It follows that with high probability, the hh-truncated cup game and the (standard) cup game are indistinguishable. This means that the high-probability bounds on tail size for the hh-truncated cup game carry over directly to the standard cup game.

The next lemma shows that, even though many threshold crossings may occur in a sequence of steps II, the size of the queue QQ varies by only a small amount as a function of the influence of II.

Lemma 4.5 (The influence property, part 2).

Consider a sequence of steps II with influence at most ss during a game of length at most polyn\operatorname{poly}n. For each step tIt\in I, let QtQ_{t} denote the queue after step tt. With high probability in nn,

||Qt1||Qt2||O(slogn+logn+p(loglogn+logp))\Big{|}|Q_{t_{1}}|-|Q_{t_{2}}|\Big{|}\leq O(\sqrt{s\log n}+\log n+p(\log\log n+\log p))

for all subintervals [t1,t2]I[t_{1},t_{2}]\subseteq I.

Proof.

We begin with a simpler claim:

Claim 4.6.

For any subinterval III^{\prime}\subseteq I, the number of threshold crossings during II^{\prime} is within O(slogn+logn)O(\sqrt{s\log n}+\log n) of p|I|p|I^{\prime}| with high probability in nn.

Proof.

Because II has influence at most ss, so does II^{\prime}. Lemma 4.1 tells us that, during II^{\prime}, the number of threshold crossings XX that occur satisfies 𝔼[X]=p|I|\mathbb{E}[X]=p|I^{\prime}|. Moreover, XX satisfies X=A+j=1nXjX=A+\sum_{j=1}^{n}X_{j}, where AA is a fixed value and the XjX_{j}’s are independent 0-1 random variables, each taking value 11 with probability cj(I)cj(I)min(cj(I),1)c_{j}(I^{\prime})-\lfloor c_{j}(I^{\prime})\rfloor\leq\min(c_{j}(I^{\prime}),1). Note that II^{\prime} has influence jmin(cj(I),1)s\sum_{j}\min(c_{j}(I^{\prime}),1)\leq s, and thus 𝔼[j=1nXj]s\mathbb{E}\left[\sum_{j=1}^{n}X_{j}\right]\leq s. By a multiplicative Chernoff bound, it follows that for δ<1\delta<1,

Pr[|X𝔼[X]|δs]2exp[δ2s/3].\Pr[|X-\mathbb{E}[X]|\geq\delta s]\leq 2\exp\left[-\delta^{2}s/3\right].

Set δ=clogn/s\delta=c\sqrt{\log n}/\sqrt{s} for a sufficiently large constant cc. If δ>1\delta>1, then sO(logn)s\leq O(\log n) and |X𝔼[X]||X-\mathbb{E}[X]| is deterministically O(logn)O(\log n). Otherwise,

Pr[|X𝔼[X]|cslogn]2exp[c2logn/3]1polyn.\Pr[|X-\mathbb{E}[X]|\geq c\sqrt{s\log n}]\leq 2\exp\left[-c^{2}\log n/3\right]\leq\frac{1}{\operatorname{poly}n}.

Since 𝔼[X]=p|I|\mathbb{E}[X]=p|I^{\prime}|, it follows that the number of threshold crossings in interval II^{\prime} is within O(slogn+logn)O(\sqrt{s\log n}+\log n) of p|I|p|I^{\prime}| with high probability in nn. ∎

Applying a union bound to the polyn\operatorname{poly}n subintervals of steps III^{\prime}\subseteq I, Claim 4.6 tells us that every subinterval III^{\prime}\subseteq I contains p|I|±O(slogn+logn)p|I^{\prime}|\pm O(\sqrt{s\log n}+\log n) threshold crossings with high probability in nn.

To complete the proof, consider some subinterval III^{\prime}\subseteq I and let mm be the (absolute) amount by which QQ changes in size during II^{\prime}. We wish to show that mO(slogn+logn+p(loglogn+logp))m\leq O(\sqrt{s\log n}+\log n+p(\log\log n+\log p)).

Suppose that |Q||Q| shrinks by mm during II^{\prime}. Then the number of threshold crossings in subinterval II^{\prime} would have to be at most p|I|mp|I^{\prime}|-m, meaning that mO(slogn+logn)m\leq O(\sqrt{s\log n}+\log n), as desired.

Suppose, on the other hand, that |Q||Q| grows by mm during II^{\prime}. Call a step tIt\in I^{\prime} removal-friendly if the emptier removes pp full units of water during step tt (i.e., prior to the emptier removing water, there are at least pp cups with height 11 or greater). By Lemma 4.3, with high probability in nn, the size of QQ after any removal-unfriendly step is at most O(p(loglogn+logp))O(p(\log\log n+\log p)). If II^{\prime} consists exclusively of removal-friendly steps, then the filler must cause at least p|I|+mp|I^{\prime}|+m threshold crossings in order to increase |Q||Q| by mm; thus mO(slogn+logn)m\leq O(\sqrt{s\log n}+\log n). On the other hand, if II^{\prime} contains at least one removal-unfriendly step, then there must be some last such step tt in I=(t0,t1]I^{\prime}=(t_{0},t_{1}]. Since |Q|O(p(loglogn+logp))|Q|\leq O(p(\log\log n+\log p)) after step tt but |Q|m|Q|\geq m after step t1t_{1}, it must be that during the steps (t,t1](t,t_{1}] the size of QQ increases by at least mO(p(loglogn+logp))m-O(p(\log\log n+\log p)). Since the interval (t,t1](t,t_{1}] consists entirely of removal-friendly steps, we can apply the reasoning from the first case (i.e., the case of only removal-friendly steps) to deduce that mO(slogn+logn+p(loglogn+logp))m\leq O(\sqrt{s\log n}+\log n+p(\log\log n+\log p)), completing the proof. ∎

Combined, Lemmas 4.2 and 4.5 give the influence property discussed in Section 3. Using this property, we can now relate the number of queued cups with priority level \geq\ell to the number of queued cups with priority level +1\geq\ell+1 for a given priority level \ell\in\mathbb{N}.

Lemma 4.7 (Accumulation of low-priority cups).

Let tpolynt\leq\operatorname{poly}n, let KK be the number of queued cups with priority level \geq\ell after step tt, and let mm be the number of queued cups with priority level +1\geq\ell+1 after step tt. With high probability in nn,

(2) mO(qKlogn+logn+p(loglogn+logp)).m\leq O(\sqrt{qK\log n}+\log n+p(\log\log n+\log p)).
Proof.

For each k{1,2,,n}k\in\{1,2,\ldots,n\}, define IkI_{k} to be the smallest step-interval ending at step tt and with influence at least 2qk2qk (or define Ik=[1,t]I_{k}=[1,t] if the total influence of [1,t][1,t] is less than 2qk2qk). By Lemmas 4.2 and 4.5, each IkI_{k} satisfies the following two properties with high probability in nn:

  • The many-crossings property. Either IkI_{k} is all of [1,t][1,t], or the number of priority-level-\ell cups to cross thresholds during IkI_{k} is at least kO(logn)k-O(\log n).

  • The low-variance property. The size of QQ varies by at most Bk:=O(qklogn+logn+p(loglogn+logp))B_{k}:=O(\sqrt{qk\log n}+\log n+p(\log\log n+\log p)) during IkI_{k}. To see this, we use the fact that IkI_{k} has influence at most 2qk+p2qk+p, which by Lemma 4.5 limits the amount by which QQ varies to

    O((qk+p)logn+logn+p(loglogn+logp)).O(\sqrt{(qk+p)\log n}+\log n+p(\log\log n+\log p)).

    Since (qk+p)lognqklogn+plognqklogn+p+logn\sqrt{(qk+p)\log n}\leq\sqrt{qk\log n}+\sqrt{p\log n}\leq\sqrt{qk\log n}+p+\log n, it follows that the amount by which QQ varies during IkI_{k} is at most O(qklogn+logn+p(loglogn+logp))O(\sqrt{qk\log n}+\log n+p(\log\log n+\log p)), with high probability in nn.

Collectively, this pair of properties is called the influence property. By a union bound, the influence property holds for all k{1,2,,n}k\in\{1,2,\ldots,n\} with high probability in nn. It follows that the property also holds for k=Kk=K (recall KK is the number of queued cups with priority level \geq\ell after step tt).

If IK=[1,t]I_{K}=[1,t], then the total size of QQ can be at most BKB_{K} (since QQ begins at size 0 at the start of IKI_{K}). It follows that mBKm\leq B_{K}, meaning that (2) is immediate. In the rest of the proof, we focus on the case in which IK[1,t]I_{K}\neq[1,t].

If the emptier never dequeues any priority-level-\ell cups during IKI_{K}, then by the many-crossings property, there are at least KO(logn)K-O(\log n) priority-level-\ell cups queued after step tt. The number of queued cups with priority levels greater than \ell is therefore at most O(logn)O(\log n), as desired.

Suppose, on the other hand, that there is at least one step in IKI_{K} at which the emptier dequeues a priority-level-\ell (or smaller) cup, and let tt^{*} be the last such step. Let QtQ_{t^{*}} be the set of queued thresholds after tt^{*}, and let QtQ_{t} be the set of queued thresholds after step tt. Call a threshold in QtQ_{t^{*}} permanent if it is a light threshold for a cup with priority level \leq\ell. All permanent thresholds in QtQ_{t^{*}} are guaranteed to also be in QtQ_{t}, since tt^{*} is the final step in IKI_{K} during which the emptier dequeues such a threshold. The non-permanent thresholds in QtQ_{t^{*}} must reside in a set of fewer than pp cups, since the emptier would rather have dequeued one of them during step tt^{*} than to have dequeued a cup with priority level \leq\ell. By Lemma 4.3, the number of non-permanent thresholds in QtQ_{t^{*}} is therefore at most O(ploglogn+plogp)O(p\log\log n+p\log p), with high probability in nn.

By the low-variance property, the sizes of QtQ_{t} and QtQ_{t^{*}} differ by at most BKB_{K}. It follows that the permanent thresholds in QtQ_{t^{*}} make up all but

BK+O(ploglogn+plogp)O(BK)B_{K}+O(p\log\log n+p\log p)\leq O(B_{K})

of the thresholds in QtQ_{t}.

Recall that QtQ_{t} contains thresholds from KK different cups with priority level \geq\ell. It follows that QtQ_{t^{*}} contains permanent thresholds from at least KO(BK)K-O(B_{K}) different cups with priority level \geq\ell. The permanent thresholds in QtQ_{t^{*}} are all for cups with priority level \leq\ell, however. Thus there are at least KO(BK)K-O(B_{K}) cups with priority level \ell that are queued after step tt^{*} and remain queued after step tt. This bounds the number of queued cups after step tt with priority level greater than \ell by at most O(BK)O(B_{K}), completing the proof.

Since the number of queued cups with priority level 1\geq 1 can never exceed nn, Lemma 4.7 allows us to bound the number of queued cups with priority level \geq\ell inductively. We argue that if qq is a sufficiently large constant multiple of loglogn\log\log n, then the number of queued cups with priority level q/2\geq q/2 never exceeds O(lognloglogn+p(loglogn+logp))O(\log n\log\log n+p(\log\log n+\log p)), with high probability in nn. This can then be used to obtain (lognloglogn+plogp)(\log n\log\log n+p\log p)-unpredictability, as defined in Section 3.

Lemma 4.8 (The unpredictability guarantee).

Consider a cup game of length polyn\operatorname{poly}n. For any step tt, and for any set of cups SS whose size is a sufficiently large constant multiple of lognloglogn+plogp\log n\log\log n+p\log p, at least one cup in SS is not queued after step tt, with high probability in nn. In other words, each step tt in the game satisfies (lognloglogn+plogp)(\log n\log\log n+p\log p)-unpredictability.

Proof.

Suppose the number of priority levels qq is set to be a sufficiently large constant multiple of loglogn\log\log n. For {1,2,,q}\ell\in\{1,2,\ldots,q\}, let mm_{\ell} denote the maximum number of queued cups with priority level \geq\ell during the game. We claim that mq/2O(lognloglogn+plogp)m_{q/2}\leq O(\log n\log\log n+p\log p) with high probability in nn.

By Lemma 4.7, for any 1<q1\leq\ell<q,

m+1O(qmlogn+logn+ploglogn+plogp),m_{\ell+1}\leq O(\sqrt{qm_{\ell}\log n}+\log n+p\log\log n+p\log p),

with high probability in nn. If we define X=qlogn+logn+ploglogn+plogpX=q\log n+\log n+p\log\log n+p\log p, then, since qmlognXm\sqrt{qm_{\ell}\log n}\leq\sqrt{Xm_{\ell}}, it follows that

(3) m+1O(Xm+X).m_{\ell+1}\leq O(\sqrt{Xm_{\ell}}+X).

For each priority level \ell, let δ\delta_{\ell} be the ratio δ=mX\delta_{\ell}=\frac{m_{\ell}}{X}. By (3), for any 1<q1\leq\ell<q, either δ+1O(1)\delta_{\ell+1}\leq O(1) or

δ+1O(δ).\delta_{\ell+1}\leq O(\sqrt{\delta_{\ell}}).

It follows that, as long as qq is a sufficiently large constant multiple of loglogn\log\log n, we have δq/2O(1)\delta_{q/2}\leq O(1), and thus mq/2O(X)m_{q/2}\leq O(X).
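The role of q = \Theta(\log\log n) here can be made explicit by unrolling the recurrence. Since \delta_{1} \le n (as m_{1} \le n and X \ge 1), writing C for the hidden constant gives

```latex
\delta_{\ell+1} \le C\sqrt{\delta_{\ell}}
\quad\Longrightarrow\quad
\delta_{\ell+1}
  \le C^{\,1+\frac{1}{2}+\cdots+\frac{1}{2^{\ell-1}}}\,\delta_{1}^{1/2^{\ell}}
  \le C^{2}\, n^{1/2^{\ell}},
```

so after \ell = \lceil \log_{2}\log_{2} n \rceil levels we have n^{1/2^{\ell}} \le 2, and thus \delta_{\ell+1} \le 2C^{2} = O(1).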

Now consider a set of cups SS of size at least cXcX, where cc is a sufficiently large constant. By a Chernoff bound, the number of cups with priority level greater than q/2q/2 in SS is at least cX/4cX/4, with high probability in nn. Since cc is a sufficiently large constant, and since mq/2O(X)m_{q/2}\leq O(X), this implies that SS contains more than mq/2m_{q/2} cups with priority level q/2q/2 or greater. Thus the cups in SS cannot all be queued after step tt.

Note that XO(lognloglogn+plogp)X\leq O(\log n\log\log n+p\log p). Thus every set SS whose size is a sufficiently large constant multiple of lognloglogn+plogp\log n\log\log n+p\log p has high probability of containing at least one non-queued cup after step tt, completing the proof of (lognloglogn+plogp)(\log n\log\log n+p\log p)-unpredictability. ∎

To complete the analysis of the algorithm, we must formalize the connection between the unpredictability guarantee and tail size.

Lemma 4.9.

Suppose that a (randomized) emptying algorithm for the pp-processor cup game on nn cups satisfies RR-unpredictability in the steps of any game of polynomial length, and further satisfies the “greediness property” that whenever there is a cup of height at least 22, the algorithm empties out of such a cup. Then in any game of polynomial length, the tail size after each step tt is O(R+p)O(R+p) with high probability in nn.

Proof.

Consider a polynomial fpolynf\in\operatorname{poly}n, let cc be a large constant, and let tpolynt\leq\operatorname{poly}n. Suppose for contradiction that there is a filling strategy such that, at time tt, the tail size is at least cR+pcR+p with probability at least 1/f(t)1/f(t). If the tail size is at least cR+pcR+p at time tt, then during each of steps I={t+1,,t+cR/p}I=\{t+1,\ldots,t+\lceil cR/p\rceil\}, the emptier removes water exclusively from cups with fills at least 22. This means that the set of cups containing 11 or more units of water is monotonically increasing during steps t+1,,t+cR/pt+1,\ldots,t+\lceil cR/p\rceil. If the filler places 11 unit of water into each cup 1,2,,cR1,2,\ldots,cR during steps t+1,,t+cR/pt+1,\ldots,t+\lceil cR/p\rceil, then it follows that each of cups 1,2,,cR1,2,\ldots,cR has fill 11 or greater after step t+cR/pt+\lceil cR/p\rceil.

The preceding construction guarantees that, with probability at least 1/f(t)1/f(t), all of cups 1,2,,cR1,2,\ldots,cR contain at least 11 unit of water at step t+cR/pt+\lceil cR/p\rceil. If cc is a sufficiently large constant, this violates RR-unpredictability at step t+cR/pt+\lceil cR/p\rceil, a contradiction. ∎

Using the unpredictability guarantee, we can now complete the algorithm analysis:

Theorem 4.10.

Consider a pp-processor cup game that lasts for polyn\operatorname{poly}n steps and in which the emptier follows the asymmetric smoothed greedy algorithm. Then with high probability in nn, the number of cups containing 22 or more units of water never exceeds O(lognloglogn+plogp)O(\log n\log\log n+p\log p) and the backlog never exceeds O(loglogn+logp)O(\log\log n+\log p) during the game.

Proof.

By Lemma 4.8, (lognloglogn+plogp)(\log n\log\log n+p\log p)-unpredictability holds for each step in any game of length polyn\operatorname{poly}n. By Lemma 4.9, it follows that the tail size remains at most O(lognloglogn+plogp)O(\log n\log\log n+p\log p), with high probability in nn, during any game of length polyn\operatorname{poly}n.

Lemma 4.3 bounds the height of the fullest cup in each step by O(loglogn+logp)O(\log\log n+\log p) with high probability in nn. Alternatively, by considering a cup game consisting only of the cups that contain greater than 22 units of water, the analysis of the deterministic greedy emptying algorithm (see [1] for p=1p=1 and [28] for p>1p>1) on O(lognloglogn+plogp)O(\log n\log\log n+p\log p) cups implies that no cup ever contains more than O(loglogn+logp)O(\log\log n+\log p) water, with high probability in nn. ∎

5. Lower Bounds

In this section we prove that the asymmetric smoothed greedy algorithm achieves (near) optimal tail size within the class of backlog-bounded algorithms.

An emptying algorithm is backlog bounded if the algorithm guarantees that the backlog never exceeds f(n)f(n) for some polynomial ff. This is a weak requirement in that the greedy algorithm achieves a bound of O(logn)O(\log n) on backlog [1, 28]. The main result in this section states that any backlog-bounded emptying algorithm must allow for a tail size of Ω~(logn+p)\tilde{\Omega}(\log n+p) with probability 1polyn\frac{1}{\operatorname{poly}n}. The lower bound continues to hold even when the height requirement for a cup to be in the tail is increased to an arbitrarily large constant (rather than 22). When p=1p=1, the lower bound also applies to non-backlog-bounded emptying algorithms (Lemma 5.3).

Theorem 5.1.

Let c1c_{1} be a constant, and suppose np+c2n\geq p+c_{2} for sufficiently large constant c2c_{2}. For any backlog-bounded emptying strategy, there is a polyn\operatorname{poly}n-step oblivious randomized filling strategy that gives the following guarantee. After the final step of the filling strategy, there are at least Ω(logn/loglogn+p)\Omega(\log n/\log\log n+p) cups with fill c1c_{1} or greater, with probability at least 1polyn\frac{1}{\operatorname{poly}n}.

To prove Theorem 5.1, we begin by describing a simple lower-bound construction that we call the (p,k,c)(p,k,c)-filling strategy. The strategy is structurally similar to the lower-bound construction for backlog given by Bender et al. [11].

Lemma 5.2.

Let k,ck,c\in\mathbb{N} such that c2c\geq 2, knk\leq n, and kpec2\frac{k}{pe^{c}}\geq 2. Then there exists an O(k)O(k)-step oblivious randomized filling strategy for the pp-processor cup game on nn cups that causes Ω(k/ec)\Omega(k/e^{c}) cups to each have fill at least Ω(c)\Omega(c), with probability at least 1kk\frac{1}{k^{k}}. We call this strategy the (p,k,c)(p,k,c)-filling strategy.

Proof.

Define the (p,k,c)(p,k,c)-filling strategy for the pp-processor cup game on nkn\geq k cups as follows. In each step ii of the strategy, the filler places pkp(i1)\frac{p}{k-p(i-1)} units of water into each of kp(i1)k-p(i-1) cups. The sets of cups SiS_{i} used in each step ii are selected so that Si+1=Si{x1,x2,,xp}S_{i+1}=S_{i}\setminus\{x_{1},x_{2},\ldots,x_{p}\} for pp uniformly random distinct cups x1,x2,,xpSix_{1},x_{2},\ldots,x_{p}\in S_{i}. The (p,k,c)(p,k,c)-filling strategy completes after tt steps where t=kp(1ec)+1t=\lfloor\frac{k}{p}(1-e^{-c})\rfloor+1. Note that kpipk-pi\geq p for every step ii.

We say that the (p,k,c)(p,k,c)-filling strategy succeeds if at the beginning of each step ii none of the cups in SiS_{i} have been touched (i.e., emptied from) by the emptier. If the (p,k,c)(p,k,c)-filling strategy succeeds, then at the end of the ii-th step of the strategy there will be kpik-pi cups, each with fill

pk+pkp++pkp(i1)\displaystyle\frac{p}{k}+\frac{p}{k-p}+\cdots+\frac{p}{k-p(i-1)}
=Θ(1k+1k1++1kpi+1)\displaystyle=\Theta\left(\frac{1}{k}+\frac{1}{k-1}+\cdots+\frac{1}{k-pi+1}\right)
=Θ(logkkpi),\displaystyle=\Theta\left(\log\frac{k}{k-pi}\right),

where the first equality uses the fact that kpipk-pi\geq p.

Now consider the final step tt of a successful (p,k,c)(p,k,c)-filling strategy. By the requirement that kpec2\frac{k}{pe^{c}}\geq 2,

kpt=kpkp(11/ec)pkecpk2ec.k-pt=k-p\Big{\lfloor}\frac{k}{p}(1-1/e^{c})\Big{\rfloor}-p\geq\frac{k}{e^{c}}-p\geq\frac{k}{2e^{c}}.

It follows that, after step tt of a successful (p,k,c)(p,k,c)-filling strategy, there are at least Ω(k/ec)\Omega(k/e^{c}) cups, each with fill at least

Ω(logkk/(2ec))=Ω(c).\Omega\left(\log\frac{k}{k/(2e^{c})}\right)=\Omega(c).

Next we evaluate the probability that a (p,k,c)(p,k,c)-filling strategy succeeds. If the first ii steps of the (p,k,c)(p,k,c)-filling strategy all succeed, then the (i+1)(i+1)-th step has probability at least 1kp\frac{1}{k^{p}} of succeeding. In particular, the emptier may touch up to pp cups j1,,jpSij_{1},\ldots,j_{p}\in S_{i} during step ii, and the randomly chosen cups x1,,xpx_{1},\ldots,x_{p} that are removed from SiS_{i} to form Si+1S_{i+1} include all of the touched cups with probability at least 1kp\frac{1}{k^{p}}. Since there are at most k/pk/p steps, the (p,k,c)(p,k,c)-filling strategy succeeds with probability at least 1kk\frac{1}{k^{k}}. ∎
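To make the construction concrete, the filler's side of the (p,k,c)-filling strategy can be sketched in a few lines of Python. This is an illustration rather than part of the proof: the `emptier` callback, the 0-indexing, and the passive emptier in the usage example below are all assumptions of the sketch.

```python
import math
import random

def pkc_filling_strategy(n, p, k, c, emptier, rng=random):
    """Sketch of the (p,k,c)-filling strategy (filler's side only).

    `emptier` is a hypothetical callback: given the current fills, it
    returns the (up to p) cups that it removes one unit from this step.
    Returns the final surviving set of cups and the final fills."""
    assert c >= 2 and k <= n and k / (p * math.e ** c) >= 2
    fills = [0.0] * n
    survivors = set(range(k))  # S_1 = cups {1, ..., k}, here 0-indexed
    t = math.floor((k / p) * (1 - math.e ** (-c)))  # ~ (k/p)(1 - e^{-c}) steps
    for _ in range(t):
        share = p / len(survivors)  # spread p units evenly over S_i
        for j in survivors:
            fills[j] += share
        for j in emptier(fills):  # the emptier's (up to p) removals
            fills[j] = max(0.0, fills[j] - 1.0)
        # S_{i+1}: drop p random cups from S_i; with probability >= 1/k^p
        # this covers every cup in S_i that the emptier touched.
        survivors -= set(rng.sample(sorted(survivors), p))
    return survivors, fills
```

For example, against a passive emptier that always skips its turn (`lambda fills: []`), every surviving cup ends with fill roughly log(k/(k - pt)), i.e. Θ(c), matching the harmonic-sum calculation in the proof.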

If we assume that logn/loglogn\log n/\log\log n is a sufficiently large constant multiple of pp, then we can apply Lemma 5.2 directly to achieve a tail size of Ω~(logn)\tilde{\Omega}(\log n). (Furthermore, note that Lemma 5.3 does not have any requirement that the emptier be backlog-bounded.)

Lemma 5.3.

Let c1c_{1} be a positive constant, and suppose plognc2loglognp\leq\frac{\log n}{c_{2}\log\log n} for a sufficiently large constant c2c_{2} (where c2c_{2} is large relative to c1c_{1}). Then there is an O(logn/loglogn)O(\log n/\log\log n)-step oblivious randomized filling strategy for the pp-processor cup game on nn cups that causes Ω(logn/loglogn)\Omega(\log n/\log\log n) cups to all have height c1c_{1} or greater after some step tpolynt\leq\operatorname{poly}n with probability at least 1polyn\frac{1}{\operatorname{poly}n}.

Proof.

Let cc\in\mathbb{N} be a sufficiently large constant compared to c1c_{1}. By the assumption that c2c_{2} is sufficiently large in terms of c1c_{1}, we may also assume that

(4) logn/loglognpec2.\frac{\log n/\log\log n}{pe^{c}}\geq 2.

By (4), we can use Lemma 5.2 to analyze the (p,logn/loglogn,c)(p,\log n/\log\log n,c)-filling strategy. The strategy causes

Ω(lognloglognec)=Ω(logn/loglogn)\Omega\left(\frac{\log n}{\log\log n}\cdot e^{-c}\right)=\Omega(\log n/\log\log n)

cups to all have height at least c1c_{1} with probability at least

(logn/loglogn)logn/loglogn1polyn.\left(\log n/\log\log n\right)^{-\log n/\log\log n}\geq\frac{1}{\operatorname{poly}n}. ∎

The next lemma gives a filling strategy for achieving tail size Ω(p)\Omega(p) against any backlog-bounded emptying strategy. Remarkably, the construction in Lemma 5.4 succeeds with probability 1eΩ(p)1-e^{-\Omega(p)} (rather than with probability 1/polyn1/\operatorname{poly}n).

Lemma 5.4.

Let c1c_{1} be a constant, and suppose np+c2n\geq p+c_{2} for sufficiently large constant c2c_{2}. For any backlog-bounded emptying strategy, there is a polyn\operatorname{poly}n-step oblivious randomized filling strategy that gives the following guarantee. After the final step of the filling strategy, there are at least Ω(p)\Omega(p) cups with fill c1c_{1} or greater, with probability 1eΩ(p)1-e^{-\Omega(p)}.

Proof.

For the sake of simplicity, we allow the filler to sometimes swap two cups, meaning that the labels of the two cups are interchanged.

The basic building block of the algorithm is a mini-phase, which consists of O(1)O(1) steps. In each step of a mini-phase, the filler places 11 unit of water into each of cups 1,2,,p11,2,\ldots,p-1, and then strategically distributes 11 additional unit of water among cups p,p+1,,np,p+1,\ldots,n. Using the final unit of water, the filler follows a (1,cec,c)(1,ce^{c},c)-filling strategy on cups p,p+1,,np,p+1,\ldots,n, where cc is a sufficiently large constant relative to c1c_{1} satisfying np+cecn\geq p+ce^{c}. We say that a mini-phase succeeds if the emptier removes only 11 unit of water from cups {p,p+1,,n}\{p,p+1,\ldots,n\} during each step in the mini-phase, and the (1,cec,c)(1,ce^{c},c)-filling strategy succeeds within the mini-phase. By Lemma 5.2, any successful mini-phase will cause at least one cup jj to have fill at least c1c_{1} at the end of the mini-phase (and the filler will know jj).

Mini-phases are composed together by the filler to form phases. During the ii-th phase, the filler selects a random wi[1,f(n)n2]w_{i}\in[1,f(n)n^{2}] and performs wiw_{i} mini-phases (recall that f(n)f(n) is the polynomial such that the emptier achieves backlog f(n)f(n) or smaller). After the wiw_{i}-th mini-phase, the filler swaps cups ii and jj, where ii is the phase number and jj is the cup containing fill c1\geq c_{1} in the event that the most recent mini-phase succeeded. The full filling algorithm consists of p1p-1 phases.

We claim that each phase ii has constant probability of ending in a successful mini-phase (and thus swapping cup ii with a new cup jpj\geq p having fill c1\geq c_{1}). Using this claim, one can complete the analysis as follows. If the swap in phase ii is at the end of a successful mini-phase, then after the swap, the (new) cup ii will have fill c1\geq c_{1}, and will continue to have fill c1\geq c_{1} for the rest of the filling algorithm, since the filler puts 11 unit in cup ii during every remaining step. At the end of the algorithm, the number of cups with fill c1\geq c_{1} is therefore bounded below by a sum of p1p-1 independent 0-11 random variables with total mean Ω(p)\Omega(p). This means that the number of such cups with fill c1\geq c_{1} is at least Ω(p)\Omega(p) with probability 1eΩ(p)1-e^{-\Omega(p)}, as desired.

It remains to analyze the probability that a given phase ii ends with a successful mini-phase.

Call a mini-phase clean if the emptier removes 11 unit of water from each cup 1,2,,p11,2,\ldots,p-1 during each step of the mini-phase, and dirty otherwise. Because each dirty mini-phase increases the total amount of water in cups 1,2,,p11,2,\ldots,p-1 by at least 11, and because the emptying algorithm prevents backlog from ever exceeding f(n)f(n), there can be at most O(pf(n))O(pf(n)) dirty mini-phases during phase ii.

By Lemma 5.2, each mini-phase (independently) has at least a constant probability of either being dirty or of succeeding. Out of the f(n)n2f(n)n^{2} possible mini-phases in phase ii, at most O(f(n)p)o(f(n)n2)O(f(n)p)\leq o(f(n)n^{2}) can be dirty. It follows that, with probability 1eΩ(f(n)n2)1-e^{-\Omega(f(n)n^{2})}, at least a constant fraction of the possible mini-phase indices ss are such that the ss-th mini-phase succeeds (or would have succeeded had wiw_{i} been at least ss). Thus the wiw_{i}-th mini-phase succeeds with constant probability. ∎
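The concentration step at the end of this proof (a sum of p-1 independent 0-1 indicators, each with constant success probability, is Ω(p) with probability 1 - e^{-Ω(p)}) can be illustrated with a short Monte-Carlo sketch. The per-phase success probability `q` below is an arbitrary stand-in for the constant established in the proof.

```python
import random

def phase_success_fraction(p, q, trials, seed=0):
    """Fraction of trials in which at least q*(p-1)/2 of the p-1
    independent phases end in a successful mini-phase, where each phase
    succeeds independently with probability q (a stand-in for the
    constant probability from the proof)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        successes = sum(rng.random() < q for _ in range(p - 1))
        hits += successes >= q * (p - 1) / 2
    return hits / trials

# A Chernoff bound shows the failure probability decays like e^{-Omega(p)},
# so the fraction should approach 1 rapidly as p grows.
```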

Combining the preceding lemmas, we prove Theorem 5.1.

Proof of Theorem 5.1.

If pΩ(logn/loglogn)p\geq\Omega(\log n/\log\log n), then the theorem follows immediately from Lemma 5.4. On the other hand, if pp is a sufficiently large constant factor smaller than logn/loglogn\log n/\log\log n, then the theorem follows from Lemma 5.3. ∎

6. Lower Bounds Against Unending Guarantees

In this section, we prove upper and lower bounds for unending guarantees, which are probabilistic guarantees that hold for each step tt, even when tt is arbitrarily large. As a convention, we will use fj(t)f_{j}(t) to denote the fill of cup jj after step tt.

The main result of the section is a lower bound showing that no monotone stateless emptier can achieve an unending guarantee of o(logn)o(\log n) backlog.

Definition 6.1.

An emptier is said to be stateless if the emptier’s decision depends only on the state of the cups at each step. An emptier is said to be monotone if the following holds: given a state SS of the cups in which the emptier selects some cup jj to empty, if we define SS^{\prime} to be SS except that the amount of water in some cup iji\neq j has been reduced, then the emptier still selects cup jj in state SS^{\prime}. A monotone stateless emptier is any emptier that is both monotone and stateless.

The monotonicity and statelessness properties dictate only how the emptier selects a cup jj in each step. Once a cup jj is selected, the emptier is permitted to either (a) remove 11 full unit of water from that cup, or (b) skip their turn. This decision is allowed to be an arbitrary function of the state of the cups.

We begin in Section 6.1 by showing that all monotone stateless emptiers can be modeled as using a certain type of score function to make emptying decisions.

In Section 6.2, we give an oblivious filling strategy, called the fuzzing algorithm, that prevents monotone stateless emptiers from achieving unending probabilistic guarantees of o(logn)o(\log n) backlog (in fact, the filling strategy places an expected Θ(n2/3logn)\Theta(n^{2/3}\log n) water into Θ(n2/3)\Theta(n^{2/3}) cups, meaning that bounds on tail size are also not viable, unless backlog is allowed to be polynomially large). The fuzzing algorithm is named after what is known as the fuzzing technique [39] for detecting security vulnerabilities in computer systems – by barraging the system with random noise, one accidentally discovers and exploits the structural holes of the system.

In Section 6.3 we show that the fuzzing algorithm continues to prevent unending guarantees, even when the emptier is equipped with a global clock, allowing for the emptier to adapt to the number of steps that have occurred so far in the game.

Finally, in Sections 6.4 and 6.5, we determine the exact values of the resource-augmentation parameter ε\varepsilon for which the smoothed greedy and asymmetric smoothed greedy emptying algorithms achieve single-processor unending guarantees. In particular, we show that the minimum attainable value of ε\varepsilon is 2polylogn2^{-\operatorname{polylog}n}.

6.1. Score-Based Emptiers

In this section, we prove an equivalence between monotone stateless emptiers and what we call score-based emptiers. We then state several useful properties of score-based emptiers.

A score-based emptier has score functions σ1,σ2,,σn\sigma_{1},\sigma_{2},\ldots,\sigma_{n}. When selecting which cup to empty from, the emptier selects the cup jj whose fill fjf_{j} maximizes σj(fj)\sigma_{j}(f_{j}). The emptier can then select whether to either (a) remove 11 full unit of water from the cup, or (b) skip their turn; this decision is an arbitrary function of the state of the cups. The score functions are required to be monotonically increasing functions, meaning that σi(a)<σi(b)\sigma_{i}(a)<\sigma_{i}(b) whenever a<ba<b. Moreover, in order to break ties, all of the scores in the multiset {σi(j/2)i[n],j+}\{\sigma_{i}(j/2)\mid i\in[n],j\in\mathbb{Z}^{+}\} are required to be distinct. (We only consider fills of the form j/2j/2 because in our lower bound constructions all fills will be multiples of 1/21/2.)
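As a concrete illustration (not from the paper), the selection rule of a score-based emptier is a one-line argmax, and the familiar greedy emptier fits the framework; the tie-breaking constant `delta` below is an assumption chosen so that all attainable scores are distinct.

```python
def score_based_select(fills, score_fns):
    """Selection rule of a score-based emptier: pick the cup j whose fill
    f_j maximizes sigma_j(f_j).  Each score_fns[j] is strictly increasing,
    and all attainable scores are distinct, so the argmax is unique."""
    return max(range(len(fills)), key=lambda j: score_fns[j](fills[j]))

# Example: the greedy emptier ("empty the fullest cup, break ties toward
# the lower-indexed cup") is score-based via sigma_j(f) = f + (n - j) * delta.
# Distinctness holds for fills that are multiples of 1/2 whenever delta < 1/(2n).
n, delta = 4, 1e-3
greedy_scores = [(lambda f, j=j: f + (n - j) * delta) for j in range(n)]
```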

It is easy to see that any score-based emptier is also a monotone stateless emptier. The following theorem establishes that the other direction is true as well:

Theorem 6.2.

Consider cup games in which the filler always places water into cups in multiples of 1/21/2. For these cup games, every monotone stateless emptying algorithm is equivalent to some score-based emptying algorithm.

For a set of cups 1,2,,k1,2,\ldots,k, a state of the cups is a tuple S=S(1),S(2),,S(k)S=\langle S(1),S(2),\ldots,S(k)\rangle, where S(j)S(j) indicates the amount of water in cup jj. Throughout this section we will restrict ourselves to states where S(j)S(j) is a non-negative integer multiple of 1/21/2.

In order to prove Theorem 6.2, we first derive several natural properties of monotone stateless emptiers. We say that the pair (j1,r1)(j_{1},r_{1}) dominates the pair (j2,r2)(j_{2},r_{2}) if either (a) j1=j2j_{1}=j_{2} and r1>r2r_{1}>r_{2}, or if (b) j1j2j_{1}\neq j_{2} and in the cup state where the only two non-empty cups are j1j_{1} and j2j_{2} with r1r_{1} and r2r_{2} water, respectively, the emptier selects cup j1j_{1}. We say that a cup j1j_{1} dominates a cup j2j_{2} in a state SS if (j1,r1)(j_{1},r_{1}) dominates (j2,r2)(j_{2},r_{2}), where r1r_{1} and r2r_{2} are the amounts of water in cups j1j_{1} and j2j_{2}, respectively, in state SS.

The next lemma shows that the emptier’s decision in each step is determined by which cup dominates the other cups.

Lemma 6.3.

Let SS be any state of the cups 1,2,,n1,2,\ldots,n, and suppose the emptier is following a monotone stateless algorithm. Then the cup jj that the emptier selects from SS is the unique cup jj that dominates all other cups.

Proof.

It suffices to show that cup jj dominates all other cups, since only one cup can have this property. Consider a cup jjj^{\prime}\neq j, and let r1r_{1} and r2r_{2} be the amounts of water in cups jj and jj^{\prime}, respectively, in state SS. Let SS^{\prime} be the state in which the only non-empty cups are jj and jj^{\prime} with r1r_{1} and r2r_{2} units of water, respectively. By the monotonicity property of the emptier, it must be that the emptier selects cup jj over cup jj^{\prime} in state SS^{\prime}. Thus cup jj dominates cup jj^{\prime} in state SS, as desired. ∎

Next we show that domination is a transitive property.

Lemma 6.4.

Consider any monotone stateless emptying algorithm. If (j1,r1)(j_{1},r_{1}) dominates (j2,r2)(j_{2},r_{2}) and (j2,r2)(j_{2},r_{2}) dominates (j3,r3)(j_{3},r_{3}), then (j1,r1)(j_{1},r_{1}) dominates (j3,r3)(j_{3},r_{3}).

Proof.

We begin by considering the case where j1,j2,j3j_{1},j_{2},j_{3} are distinct. Consider the cup state SS in which the only three cups that contain water are j1,j2,j3j_{1},j_{2},j_{3}, and they contain r1,r2,r3r_{1},r_{2},r_{3} water, respectively. By Lemma 6.3, one of cups j1,j2,j3j_{1},j_{2},j_{3} must dominate the others. Since j2j_{2} is dominated by j1j_{1} and j3j_{3} is dominated by j2j_{2}, it must be that j1j_{1} is the cup that dominates. Thus (j1,r1)(j_{1},r_{1}) dominates (j3,r3)(j_{3},r_{3}), as desired.

Next we consider the case where j1=j2j_{1}=j_{2} and j2j3j_{2}\neq j_{3}. Suppose for contradiction that (j1,r1)(j_{1},r_{1}) does not dominate (j3,r3)(j_{3},r_{3}). Consider the cup state SS in which j1j_{1} and j3j_{3} are the only cups containing water, and they contain r1r_{1} and r3r_{3} units of water, respectively. In state SS, cup j3j_{3} dominates cup j1j_{1}. By monotonicity, it follows that if we decrease the fill of j2j_{2} to r2r_{2}, then cup j3j_{3} must still dominate cup j1=j2j_{1}=j_{2}. But this means that (j3,r3)(j_{3},r_{3}) dominates (j2,r2)(j_{2},r_{2}), a contradiction.

Next we consider the case where j1j2j_{1}\neq j_{2} and j2=j3j_{2}=j_{3}. Consider the cup state SS in which j1j_{1} and j2j_{2} are the only cups containing water, and they contain r1r_{1} and r2r_{2} units of water, respectively. In state SS, cup j1j_{1} dominates cup j2j_{2}. By monotonicity, it follows that if we decrease the fill of j2j_{2} to r3r_{3}, then cup j1j_{1} must still dominate cup j2=j3j_{2}=j_{3}. This means that (j1,r1)(j_{1},r_{1}) dominates (j3,r3)(j_{3},r_{3}), as desired.

Finally we consider the case where j1=j2=j3j_{1}=j_{2}=j_{3}. In this case it must be that r1>r2r_{1}>r_{2} and r2>r3r_{2}>r_{3}. Thus r1>r3r_{1}>r_{3}, meaning that (j1,r1)(j_{1},r_{1}) dominates (j3,r3)(j_{3},r_{3}), as desired. ∎

By exploiting the transitivity of the domination property, we can now prove Theorem 6.2.

Proof of Theorem 6.2.

Consider the set X={(j,r)j[n],r{0,0.5,1,1.5,}}X=\{(j,r)\mid j\in[n],r\in\{0,0.5,1,1.5,\ldots\}\}. For (j1,r1),(j2,r2)X(j_{1},r_{1}),(j_{2},r_{2})\in X, say that (j1,r1)<(j2,r2)(j_{1},r_{1})<(j_{2},r_{2}) if (j2,r2)(j_{2},r_{2}) dominates (j1,r1)(j_{1},r_{1}). By Lemma 6.4, the set XX is totally ordered by the << operation. Since XX is countable, and since every countable totally ordered set admits an order-preserving injection into the rational numbers [16], there is an injection σ:X\sigma:X\rightarrow\mathbb{Q} that preserves the << relationship.

Let σi:{0,0.5,1,1.5,}\sigma_{i}:\{0,0.5,1,1.5,\ldots\}\rightarrow\mathbb{Q} be the function σi(r)=σ((i,r))\sigma_{i}(r)=\sigma((i,r)). Each σi\sigma_{i} is strictly increasing (since (i,r2)(i,r_{2}) dominates (i,r1)(i,r_{1}) whenever r2>r1r_{2}>r_{1}), and all of the scores are distinct (since σ\sigma is injective). By Lemma 6.3, the emptier always selects the cup jj whose fill fjf_{j} maximizes σj(fj)\sigma_{j}(f_{j}). It follows that the emptier is a score-based emptier. ∎

We conclude the section by observing a useful property of score-based emptiers, namely the existence of what we call equilibrium states.

We say that a state SS on kk cups is an equilibrium state if for every pair of distinct cups i,j{1,2,,k}i,j\in\{1,2,\ldots,k\}, σi(S(i)+1/2)>σj(S(j))\sigma_{i}(S(i)+1/2)>\sigma_{j}(S(j)). That is, if 1/21/2 unit of water is added to any cup ii, then cup ii’s score will exceed the scores of all of the other cups {1,2,,k}{i}\{1,2,\ldots,k\}\setminus\{i\}.

Lemma 6.5.

Consider cups 1,2,,k1,2,\ldots,k, and suppose their total fill mm is a non-negative integer multiple of 1/21/2. For any set of score functions, σ1,,σk\sigma_{1},\ldots,\sigma_{k}, there is a unique equilibrium state for cups 1,2,,k1,2,\ldots,k in which the total amount of water in the cups is mm.

Proof.

Consider any state S=S(1),,S(k)S=\langle S(1),\ldots,S(k)\rangle for cups 1,2,,k1,2,\ldots,k in which the total fill of the cups is mm. Define the score severity of SS to be the vector of scores σ1(S(1)),,σk(S(k))\sigma_{1}(S(1)),\ldots,\sigma_{k}(S(k)), sorted in decreasing order; score severities are compared lexicographically, so, in particular, a state whose maximum score is larger has larger score severity. If SS is not an equilibrium state, then there are distinct cups i,ji,j with σi(S(i)+1/2)<σj(S(j))\sigma_{i}(S(i)+1/2)<\sigma_{j}(S(j)), and moving 1/21/2 unit of water from cup jj to cup ii replaces the scores σj(S(j))\sigma_{j}(S(j)) and σi(S(i))\sigma_{i}(S(i)) with two strictly smaller scores, thereby decreasing the score severity of SS.

Let 𝒮m\mathcal{S}_{m} be the set of states for cup 1,2,,k1,2,\ldots,k in which each cup contains a multiple of 1/21/2 units of water, and in which the total amount of water in cups is mm. Since 𝒮m\mathcal{S}_{m} is finite, there must be a state S𝒮mS\in\mathcal{S}_{m} with minimum score severity. By the preceding paragraph, it follows that SS is an equilibrium state.

Finally, we prove uniqueness. Suppose S,SS,S^{\prime} are distinct equilibrium states in 𝒮m\mathcal{S}_{m}. Then some cup ii in SS^{\prime} must have greater fill than the same cup ii in SS. But by the equilibrium property, adding 1/21/2 units of water to cup ii in SS increases the score function of cup ii to be larger than any other cup’s score function in SS. Thus SS^{\prime} must have a larger score severity than does SS. Likewise, SS must have a larger score severity than SS^{\prime}, a contradiction. ∎
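The equilibrium state in Lemma 6.5 can also be computed constructively (a sketch, not part of the paper's argument): repeatedly pour 1/2 unit into the cup whose next half-unit increment has the smallest score. Since each score function is strictly increasing, the final state uses exactly the globally smallest attainable scores, so every cup's next increment out-scores every cup's current score, which is exactly the equilibrium condition.

```python
import heapq

def equilibrium_state(m_halves, score_fns):
    """Compute the equilibrium state for cups 1..k with total fill
    m_halves/2, given strictly increasing score functions with distinct
    attainable scores.  Greedily pour 1/2 unit into the cup whose next
    increment has the smallest score."""
    k = len(score_fns)
    fills = [0.0] * k
    heap = [(score_fns[i](0.5), i) for i in range(k)]  # next-increment scores
    heapq.heapify(heap)
    for _ in range(m_halves):
        _, i = heapq.heappop(heap)
        fills[i] += 0.5
        heapq.heappush(heap, (score_fns[i](fills[i] + 0.5), i))
    return fills

def is_equilibrium(fills, score_fns):
    """Check the equilibrium condition: sigma_i(S(i)+1/2) > sigma_j(S(j))
    for every pair of distinct cups i, j."""
    k = len(fills)
    return all(
        score_fns[i](fills[i] + 0.5) > score_fns[j](fills[j])
        for i in range(k) for j in range(k) if i != j
    )
```

For instance, with the (hypothetical) score functions sigma_i(f) = 2f + 0.01i on three cups and total fill 3.5, the greedy procedure produces the fills [1.5, 1.0, 1.0], which satisfy the equilibrium condition.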

6.2. The Oblivious Fuzzing Filling Algorithm

In this section, we describe a simple filling algorithm that, when pitted against a score-based emptier, achieves backlog Ω(logn)\Omega(\log n) after nΘ(nlogn)n^{\Theta(n\log n)} steps with at least constant probability. Note that, throughout this section, we focus only on cup games that do not have resource augmentation.

The filling strategy, which we call the oblivious fuzzing algorithm has a very simple structure. At the beginning of the algorithm, the filler randomly permutes the labels 1,2,,n1,2,\ldots,n of the cups. The filler then begins their strategy by spending a large number (i.e., nΘ(nlogn)n^{\Theta(n\log n)}) of steps randomly placing water into cups 1,2,,n1,2,\ldots,n. The filler then disregards cup nn (note that cup nn is a random cup due to the random-permutation step!), and spends a large number of steps randomly placing water into cups 1,2,,n11,2,\ldots,n-1. The filler then disregards cup n1n-1 and spends a large number of steps randomly placing water into cups 1,2,,n21,2,\ldots,n-2, and so on.

Formally, the oblivious fuzzing algorithm works as follows. Let cc\in\mathbb{N} be a sufficiently large constant, and relabel the cups (from the filler’s perspective) with a random permutation of 1,2,,n1,2,\ldots,n. The filling strategy consists of nn phases of ncnlognn^{cn\log n} steps each. The ii-th phase is called the (ni+1)(n-i+1)-cup phase because it focuses on cups 1,2,,(ni+1)1,2,\ldots,(n-i+1). In each step of the ii-th phase, the filler selects random values x1,x2{1,2,,ni+1}x_{1},x_{2}\in\{1,2,\ldots,n-i+1\} uniformly and independently, and then places 12\frac{1}{2} water into each of cups x1,x2x_{1},x_{2}. If x1=x2x_{1}=x_{2}, then the cup x1x_{1} receives a full unit of water.
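The filler's side of this strategy can be sketched as a Python generator (an illustration only; `steps_per_phase` is a parameter because the per-phase step count in the analysis is far too large to simulate).

```python
import random

def oblivious_fuzzing_filler(n, steps_per_phase, rng=random):
    """Generator sketch of the oblivious fuzzing filling strategy: yields,
    for each step, the list of (cup, amount) placements.  The cups are
    relabeled by a random permutation up front, and the a-cup phase
    places water only into the first a cups of the permuted order."""
    perm = list(range(n))
    rng.shuffle(perm)  # random relabeling of the cups
    for a in range(n, 0, -1):  # the a-cup phase
        for _ in range(steps_per_phase):
            x1 = perm[rng.randrange(a)]
            x2 = perm[rng.randrange(a)]
            # 1/2 unit into each of x1 and x2 (a full unit if x1 == x2)
            yield [(x1, 0.5), (x2, 0.5)]
```

Note that the strategy is oblivious as required: the yielded placements never depend on the emptier's actions.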

One interesting characteristic of the oblivious fuzzing algorithm is that it represents a natural workload in the scheduling problem that the cup game models. One can think of the cups as representing nn tasks and water as representing work that needs to be scheduled. In this scheduling problem, the oblivious fuzzing filling algorithm simply assigns work to tasks at random, and selects one task every ncnlognn^{cn\log n} steps to stop receiving new work.

In this section, we prove the following theorem.

Theorem 6.6.

Consider a cup game on nn cups. Suppose that the emptier follows a score-based emptying algorithm, and that the filler follows the oblivious fuzzing filling algorithm. Then at the beginning of the n2/3n^{2/3}-cup phase, the average fill of cups 1,2,,n2/31,2,\ldots,n^{2/3} is Ω(logn)\Omega(\log n), in expectation.

For each {1,2,,n1}\ell\in\{1,2,\ldots,n-1\}, call a step tt in the \ell-cup phase emptier-wasted if the emptier fails to remove water from any of cups 1,2,,1,2,\ldots,\ell during step tt (either because the emptier skips their turn, or because the emptier selects a cup j>j>\ell). We show that for each {n2/3+1,n2/3+2,,n1}\ell\in\{n^{2/3}+1,n^{2/3}+2,\ldots,n-1\}, the \ell-cup phase has at least Ω(1)\Omega(1) emptier-wasted steps in expectation (or the average height of cups in that phase is already Ω(logn)\Omega(\log n)). During an emptier-wasted step tt, the total amount of water in cups 1,2,,1,2,\ldots,\ell increases by 11 (since the filler places water into the \ell cups, and the emptier does not remove water from them). It follows that, during the \ell-cup phase, the average amount of water in cups 1,2,,1,2,\ldots,\ell increases by Ω(1)\Omega\left(\frac{1}{\ell}\right) in expectation. Applying this logic to every phase gives Theorem 6.6. The key challenge is to show that, within the \ell-cup phase, the expected number of emptier-wasted steps is Ω(1)\Omega(1).

For each {1,2,,n1}\ell\in\{1,2,\ldots,n-1\}, define the initial water level mm_{\ell} of the \ell-cup phase to be the total amount of water in cups 1,2,,+11,2,\ldots,\ell+1 at the beginning of the phase. Define the equilibrium state E=E(1),,E(+1)E_{\ell}=\langle E_{\ell}(1),\ldots,E_{\ell}(\ell+1)\rangle for the \ell-cup phase to be the equilibrium state for cups 1,2,,+11,2,\ldots,\ell+1 in which the total amount of water is m+1m_{\ell}+1 (note that EE_{\ell} exists and is unique by Lemma 6.5). One can think of m+1m_{\ell}+1 as representing the total amount of water in cups 1,2,,+11,2,\ldots,\ell+1 after the filler places 11 unit of water into the cups at the beginning of the first step in the \ell-cup phase.

Define the bolus bb_{\ell} of the \ell-cup phase as follows. If rr is the amount of water in cup +1\ell+1 at the beginning of the \ell-cup phase, then b=max(0,rE(+1))b_{\ell}=\max(0,r-E_{\ell}(\ell+1)). That is, bb_{\ell} is the amount by which cup +1\ell+1 exceeds its equilibrium fill.

We begin by showing that, if mO(nlogn)m_{\ell}\leq O(n\log n), then the expected number of emptier-wasted steps in phase \ell is at least 𝔼[b/2]\mathbb{E}[b_{\ell}/2]. The basic idea is that, whenever fewer than bb_{\ell} emptier-wasted steps have occurred, the filler has some small probability of reaching a state in which all of cups 1,2,,1,2,\ldots,\ell have fills no greater than E(1),E(2),,E()E_{\ell}(1),E_{\ell}(2),\ldots,E_{\ell}(\ell), respectively. If this happens, then the score function of cup +1\ell+1 will exceed that of any of cups 1,2,,1,2,\ldots,\ell, and an emptier-wasted step occurs. Thus, whenever fewer than bb_{\ell} emptier-wasted steps have occurred, the filler has a small probability of incurring an emptier-wasted step (within the next O(nlogn)O(n\log n) steps). Since the \ell-cup phase is very long, the filler has many opportunities to induce an emptier-wasted step in this way. It follows that, with high probability, there will be at least bb_{\ell} emptier-wasted steps in the \ell-cup phase. Lemma 6.7 presents this argument in detail.

Lemma 6.7.

Let [n1]\ell\in[n-1], condition on mnlognm_{\ell}\leq n\log n, and condition on some value for bb_{\ell}. Under these conditions, the expected number of emptier-wasted steps in the \ell-cup phase is at least b/2b_{\ell}/2.

Proof.

Call a step in the \ell-cup phase equilibrium-converging if for each cup jj that the filler places water into, the fill xx of cup jj after the water is placed satisfies xE(j)x\leq E_{\ell}(j). One can think of an equilibrium-converging step as being a step in which the filler’s behavior pushes each cup jj towards its equilibrium state, without pushing any cups above their equilibrium state.

Call a step in the \ell-cup phase convergence-enabling if the total amount of water in cups 1,2,,1,2,\ldots,\ell is less than (i=1E(i))1\left(\sum_{i=1}^{\ell}E_{\ell}(i)\right)-1 at the beginning of the step.

Convergence-enabling steps have two important properties.
The Convergence Property: For any convergence-enabling step tt, there is some pair of cups j,kj,k (possibly j=kj=k) that the filler can place water into so that the step is equilibrium-converging. Thus, whenever a convergence-enabling step occurs, the step is equilibrium-converging with probability at least 1/21/\ell^{2}.
The Bolus Property: At the beginning of any convergence-enabling step, the amount of water in cup +1\ell+1 must be greater than E(+1)E_{\ell}(\ell+1). This is a consequence of the fact that the total amount of water in cups 1,,+11,\ldots,\ell+1 is at least m=(i=1+1E(i))1m_{\ell}=\left(\sum_{i=1}^{\ell+1}E_{\ell}(i)\right)-1.

Break the \ell-cup phase into sequences of steps L1,L2,L3,L_{1},L_{2},L_{3},\ldots, where each LiL_{i} consists of 2nlogn2n\log n steps. We begin by showing that, if the first step of LiL_{i} is convergence-enabling and all of the steps in LiL_{i} are equilibrium-converging, then LiL_{i} must contain at least one emptier-wasted step.

Claim 6.8.

Suppose mnlognm_{\ell}\leq n\log n. Suppose that the first step of LiL_{i} is convergence-enabling. If all of the steps in LiL_{i} are equilibrium converging, then at least one of the steps must be emptier-wasted.

Proof.

At the end of each step tt, let fj(t)f_{j}(t) denote the amount of water in cup jj. Define the potential function ϕ(t)\phi(t) to be

ϕ(t)=j=1{1+fj(t)E(j) if fj(t)>E(j),0 otherwise.\phi(t)=\sum_{j=1}^{\ell}\begin{cases}1+f_{j}(t)-E_{\ell}(j)&\text{ if }f_{j}(t)>E_{\ell}(j),\\ 0&\text{ otherwise.}\end{cases}

Since the first step of LiL_{i} is convergence-enabling, the total amount of water in the cups at the beginning of LiL_{i} is at most (i=1E(i))1mnlogn\left(\sum_{i=1}^{\ell}E_{\ell}(i)\right)-1\leq m_{\ell}\leq n\log n. It follows that, at the beginning of LiL_{i}, the potential function ϕ\phi is at most nlogn+n2nlogn1n\log n+n\leq 2n\log n-1.

Whenever a step tt is both equilibrium-converging and non-emptier-wasted, we have that either ϕ(t1)=0\phi(t-1)=0 or ϕ(t)ϕ(t1)1\phi(t)\leq\phi(t-1)-1. Since ϕ\phi is non-negative and is at most 2nlogn12n\log n-1 at the beginning of LiL_{i}, we cannot have ϕ(t)ϕ(t1)1\phi(t)\leq\phi(t-1)-1 for every step in LiL_{i}. Thus, if every step in LiL_{i} is equilibrium-converging, then there must be at least one step that is either emptier-wasted or that satisfies ϕ(t1)=0\phi(t-1)=0.

To complete the claim, we show that if there is at least one step tt in LiL_{i} for which ϕ(t1)=0\phi(t-1)=0, and step tt is equilibrium-converging, then there must also be at least one emptier-wasted step. Suppose ϕ(t1)=0\phi(t-1)=0, that step tt is equilibrium-converging, and that no steps in LiL_{i} are emptier-wasted. Since there are no emptier-wasted steps in LiL_{i}, every step in LiL_{i} must be convergence-enabling, and thus cup +1\ell+1 contains more than E(+1)E_{\ell}(\ell+1) water at the beginning of step tt (by the Bolus Property of convergence-enabling steps). Since ϕ(t1)=0\phi(t-1)=0 and step tt is equilibrium-converging, the cups 1,2,,1,2,\ldots,\ell contain fills at most E(1),E(2),,E()E_{\ell}(1),E_{\ell}(2),\ldots,E_{\ell}(\ell), respectively, after the filler places water in step tt. It follows that, during step tt, the emptier will choose cup +1\ell+1 over all of cups 1,2,,1,2,\ldots,\ell. Thus step tt is an emptier-wasted step, a contradiction. ∎

Next we use Claim 6.8 in order to show that, if mnlognm_{\ell}\leq n\log n and LiL_{i} contains a convergence-enabling step, then LiL_{i} has probability at least 1/n4nlogn1/n^{4n\log n} of containing an emptier-wasted step.

Claim 6.9.

Condition on the fact that the first step of LiL_{i} is convergence-enabling and that mnlognm_{\ell}\leq n\log n. Then LiL_{i} contains an emptier-wasted step with probability at least 1/n4nlogn1/n^{4n\log n}.

Proof.

Since the first step of LiL_{i} is convergence-enabling, either every step of LiL_{i} is convergence-enabling or there is at least one emptier-wasted step. Recall by the Convergence Property that each convergence-enabling step has probability at least 1/n21/n^{2} of being equilibrium-converging. Thus there is probability at least 1/n4nlogn1/n^{4n\log n} that every step of LiL_{i} (up until the first emptier-wasted step) is equilibrium-converging. By Claim 6.8, it follows that the probability of there being an emptier-wasted step is at least 1/n4nlogn1/n^{4n\log n}. ∎

We can now complete the proof of the lemma. For each LiL_{i}, if the number of emptier-wasted steps in L1,,Li1L_{1},\ldots,L_{i-1} is less than b\lceil b_{\ell}\rceil, then the first step of LiL_{i} is convergence-enabling. Since mnlognm_{\ell}\leq n\log n, it follows from Claim 6.9 that LiL_{i} has probability at least 1/n4nlogn1/n^{4n\log n} of containing an emptier-wasted step.

Now collect the LiL_{i}’s into collections of size n4nlogn+1n^{4n\log n+1}, so that the kk-th collection is given by

Ck=L(k1)n4nlogn+1+1,,Lkn4nlogn+1.C_{k}=\langle L_{(k-1)n^{4n\log n+1}+1},\ldots,L_{kn^{4n\log n+1}}\rangle.

Note that, as long as the constant cc used to define the fuzzing algorithm is sufficiently large, the \ell-cup phase is long enough that it contains at least bmnlogn\lceil b_{\ell}\rceil\leq m_{\ell}\leq n\log n collections C1,C2,,CbC_{1},C_{2},\ldots,C_{\lceil b_{\ell}\rceil}.

Say that a step collection CiC_{i} failed if, at the beginning of the step collection, the number of emptier-wasted steps that have occurred is less than b\lceil b_{\ell}\rceil, and CiC_{i} contains no emptier-wasted steps. The probability of a given CiC_{i} failing is at most,

(11/n4nlogn)n4nlogn+11/en.\left(1-1/n^{4n\log n}\right)^{n^{4n\log n+1}}\leq 1/e^{n}.

It follows that the probability of any of C1,,CbC_{1},\ldots,C_{\lceil b_{\ell}\rceil} failing is at most,

1(11/en)b1(11/en)nlogn1/2.1-(1-1/e^{n})^{\lceil b_{\ell}\rceil}\leq 1-(1-1/e^{n})^{n\log n}\leq 1/2.

If none of the collections C1,,CbC_{1},\ldots,C_{\lceil b_{\ell}\rceil} fail, then there must be at least b\lceil b_{\ell}\rceil emptier-wasted steps. Thus the expected number of emptier-wasted steps that occur during the phase is at least b/2b_{\ell}/2. ∎
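The two probability estimates used above can be checked numerically. In the following illustrative sketch (ours, not part of the proof), the unwieldy quantity n4nlognn^{4n\log n} is replaced by a generic large value MM, since only the relationship between the per-trial success probability 1/M1/M and the number of trials MnM\cdot n matters:

```python
import math

def no_success_prob(M, trials):
    """Probability that none of `trials` independent events, each occurring
    with probability 1/M, occurs."""
    return (1 - 1 / M) ** trials

for n in range(2, 8):
    M = 10 ** n                      # stand-in for n^{4n log n}
    # (1 - 1/M)^{M*n} <= 1/e^n, as in the bound on a collection failing
    assert no_success_prob(M, M * n) <= math.exp(-n)
    # the union bound over at most n*log(n) collections stays below 1/2
    assert 1 - (1 - math.exp(-n)) ** (n * math.log(n)) <= 1 / 2
```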

In order to show that the expected number of emptier-wasted steps in phase \ell is Ω(1)\Omega(1) (at least, whenever mnlognm_{\ell}\leq n\log n), it suffices to show that the expected bolus bb_{\ell} is Ω(1)\Omega(1) (conditioned on mnlognm_{\ell}\leq n\log n).

In order to prove a lower-bound on the bolus, we examine a related quantity that we call the variation. If t+1t+1 is the first step of the \ell-cup phase, then the variation vv_{\ell} of the \ell-cup phase is defined to be,

v=j=1+1|fj(t)E(j)|.v_{\ell}=\sum_{j=1}^{\ell+1}|f_{j}(t)-E_{\ell}(j)|.

The variation vv_{\ell} captures the degree to which the fills of cups 1,2,,+11,2,\ldots,\ell+1 differ from their equilibrium fills. The next lemma shows that, if the variation vv_{\ell} is large, then the bolus bb_{\ell} is also large in expectation.

Lemma 6.10.

Let {1,2,,n1}\ell\in\{1,2,\ldots,n-1\}. Fix some value of vv_{\ell} and of mm_{\ell}. Then

𝔼[b]=v2(+1).\mathbb{E}[b_{\ell}]=\frac{v_{\ell}}{2(\ell+1)}.
Proof.

Let t+1t+1 be the first step in the \ell-cup phase. By the definition of EE_{\ell}, we have that,

j=1+1fj(t)=j=1+1E(j).\sum_{j=1}^{\ell+1}f_{j}(t)=\sum_{j=1}^{\ell+1}E_{\ell}(j).

Thus,

j=1+1max(0,fj(t)E(j))=j=1+1max(0,E(j)fj(t)).\sum_{j=1}^{\ell+1}\max(0,f_{j}(t)-E_{\ell}(j))=\sum_{j=1}^{\ell+1}\max(0,E_{\ell}(j)-f_{j}(t)).

Hence,

j=1+1max(0,fj(t)E(j))=v/2.\sum_{j=1}^{\ell+1}\max(0,f_{j}(t)-E_{\ell}(j))=v_{\ell}/2.

Since the cups 1,2,,+11,2,\ldots,\ell+1 are randomly labeled, we have by symmetry that,

𝔼[max(0,f+1(t)E(+1))]=1+1𝔼[j=1+1max(0,fj(t)E(j))]=v2(+1).\mathbb{E}[\max(0,f_{\ell+1}(t)-E_{\ell}(\ell+1))]=\frac{1}{\ell+1}\mathbb{E}\left[\sum_{j=1}^{\ell+1}\max(0,f_{j}(t)-E_{\ell}(j))\right]=\frac{v_{\ell}}{2(\ell+1)}.

Since b=max(0,f+1(t)E(+1))b_{\ell}=\max(0,f_{\ell+1}(t)-E_{\ell}(\ell+1)), the proof of the lemma is complete. ∎
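The mass-balance step in this proof (the positive and negative deviations from equilibrium each summing to v/2v_{\ell}/2) can be checked mechanically. A small illustrative sketch, using hypothetical fills and equilibrium fills with equal totals:

```python
def variation_split(fills, E):
    """Return (mass above equilibrium, mass below equilibrium, variation)."""
    above = sum(max(0.0, f - e) for f, e in zip(fills, E))
    below = sum(max(0.0, e - f) for f, e in zip(fills, E))
    v = sum(abs(f - e) for f, e in zip(fills, E))
    return above, below, v

# When the fills and the equilibrium fills have the same total, the two
# deviation masses agree, and each equals half the variation.
fills = [2.0, 0.5, 0.5, 1.0]
E = [1.0, 1.0, 1.0, 1.0]      # same total (4.0) as fills
above, below, v = variation_split(fills, E)
assert above == below == v / 2
```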

By Lemma 6.10, if our goal is to show that 𝔼[bmnlogn]Ω(1)\mathbb{E}[b_{\ell}\mid m_{\ell}\leq n\log n]\geq\Omega(1), then it suffices to show that 𝔼[vmnlogn]Ω()\mathbb{E}[v_{\ell}\mid m_{\ell}\leq n\log n]\geq\Omega(\ell).

Lemma 6.11.

Let n>1n>1. For {n2/3+1,,n1}\ell\in\{n^{2/3}+1,\ldots,n-1\}, the variation vv_{\ell} satisfies, 𝔼[vmnlogn]Ω()\mathbb{E}[v_{\ell}\mid m_{\ell}\leq n\log n]\geq\Omega(\ell).

Proof.

Let t+1t+1 be the first step of the \ell-cup phase. Recall that

v=j=1+1|fj(t)E(j)|.v_{\ell}=\sum_{j=1}^{\ell+1}|f_{j}(t)-E_{\ell}(j)|.

Note that the equilibrium state EE_{\ell} depends on the amount of water mm_{\ell} in cups 1,2,,+11,2,\ldots,\ell+1 at the beginning of step tt. Let E(m)E^{(m)}_{\ell} denote the equilibrium state in the case where m=mm_{\ell}=m, and let v(m)v_{\ell}^{(m)} denote the variation of the cups at step tt from E(m)E^{(m)}_{\ell}, i.e.,

v(m)=j=1+1|fj(t)E(m)(j)|.v_{\ell}^{(m)}=\sum_{j=1}^{\ell+1}|f_{j}(t)-E^{(m)}_{\ell}(j)|.

Since we are conditioning on mnlognm_{\ell}\leq n\log n, it suffices to consider values of m{k/2k{0,1,2,,2nlogn}}m\in\{k/2\mid k\in\{0,1,2,\ldots,2n\log n\}\}. For each such value of mm, we will show that Pr[v(m)=Ω()]1O(1/n2)\Pr[v_{\ell}^{(m)}=\Omega(\ell)]\geq 1-O(1/n^{2}). By a union bound, it follows that, Pr[v=Ω()mnlogn]1O(logn/n)\Pr[v_{\ell}=\Omega(\ell)\mid m_{\ell}\leq n\log n]\geq 1-O(\log n/n). It follows that 𝔼[vmnlogn]Ω()\mathbb{E}[v_{\ell}\mid m_{\ell}\leq n\log n]\geq\Omega(\ell), as desired.

To complete the proof of the lemma, we must examine Pr[v(m)=Ω()]\Pr[v_{\ell}^{(m)}=\Omega(\ell)]. To do this, we break the water placed by the filler into two parts: Let aja_{j} denote the amount of water placed into each cup jj by the filler in the first nn steps of the first phase, and let bjb_{j} denote the amount of water placed into each cup jj by the filler in steps n+1,n+2,,t1n+1,n+2,\ldots,t-1. Finally, let cjc_{j} denote the total amount of water removed from cup jj by the emptier during steps 1,2,,t11,2,\ldots,t-1.

The role of aja_{j} will be similar to that of the random offsets in the smoothed greedy emptying algorithm. Interestingly, these random offsets now work in the filler’s favor, rather than the emptier’s.

Consider the quantity Xj=fj(t)E(m)(j)(mod1)X_{j}=f_{j}(t)-E^{(m)}_{\ell}(j)\pmod{1}. We can lower bound the variation v(m)v_{\ell}^{(m)} by,

(5) v(m)j=1+1Xj.v_{\ell}^{(m)}\geq\sum_{j=1}^{\ell+1}X_{j}.

Each XjX_{j} can be written as,

Xj=aj+bjcjE(m)(j)(mod1).X_{j}=a_{j}+b_{j}-c_{j}-E^{(m)}_{\ell}(j)\pmod{1}.

Since the emptier always removes water in chunks of size 11, cj0mod1c_{j}\equiv 0\mod 1. Thus,

(6) Xj=aj+bjE(m)(j)(mod1).X_{j}=a_{j}+b_{j}-E^{(m)}_{\ell}(j)\pmod{1}.

For the sake of analysis, fix the filler’s behavior in steps n+1,n+2,n+1,n+2,\ldots, meaning that the only randomness is in the aja_{j}’s and the bjb_{j}’s are fixed. Define dj=bjE(m)(j)d_{j}=b_{j}-E^{(m)}_{\ell}(j). By (5) and (6), our goal is to show that

(7) j=1{1/2 if ajdjmod10 otherwiseΩ(),\sum_{j=1}^{\ell}\begin{cases}1/2\text{ if }a_{j}\not\equiv d_{j}\mod 1\\ 0\text{ otherwise}\end{cases}\geq\Omega(\ell),

with probability at least 1O(1/n2)1-O(1/n^{2}).

We begin by showing that the left side of (7) has expected value Ω()\Omega(\ell). Note that, Pr[aj=0]=(11/n)2n1/16\Pr[a_{j}=0]=(1-1/n)^{2n}\geq 1/16 and Pr[aj=1/2]=(2n1)1/n(11/n)2n1=2(11/n)2n11/4\Pr[a_{j}=1/2]=\binom{2n}{1}\cdot 1/n\cdot(1-1/n)^{2n-1}=2(1-1/n)^{2n-1}\geq 1/4. Thus Pr[ajdjmod1]1/16\Pr[a_{j}\not\equiv d_{j}\mod 1]\geq 1/16, implying that the left side of (7) has expected value at least /16\ell/16.
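The two estimates Pr[aj=0]1/16\Pr[a_{j}=0]\geq 1/16 and Pr[aj=1/2]1/4\Pr[a_{j}=1/2]\geq 1/4 hold for every n2n\geq 2, with equality at n=2n=2; a quick numerical verification (illustrative only):

```python
# Verify (1 - 1/n)^{2n} >= 1/16 and 2(1 - 1/n)^{2n-1} >= 1/4 for small n.
# Both quantities increase with n (toward e^{-2} and 2e^{-2} respectively),
# so checking a range of small n illustrates the bounds.
for n in range(2, 1000):
    p_zero = (1 - 1 / n) ** (2 * n)            # Pr[a_j = 0]
    p_half = 2 * (1 - 1 / n) ** (2 * n - 1)    # Pr[a_j = 1/2]
    assert p_zero >= 1 / 16
    assert p_half >= 1 / 4
```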

In order to prove that (7) holds with probability at least 1O(1/n2)1-O(1/n^{2}), we show that the left side of (7) is tightly concentrated around its mean. If the XjX_{j}’s were independent of one another then we could achieve this with a Chernoff bound. Since the XjX_{j}’s are dependent, we will instead use McDiarmid’s inequality.

Theorem 6.12 (McDiarmid ’89 [35]).

Let A1,,AnA_{1},\ldots,A_{n} be independent random variables over an arbitrary probability space. Let FF be a function mapping (A1,,An)(A_{1},\ldots,A_{n}) to \mathbb{R}, and suppose FF satisfies,

supa1,a2,,an,ai¯|F(a1,a2,,ai1,ai,ai+1,,an)F(a1,a2,,ai1,ai¯,ai+1,,an)|R,\sup_{a_{1},a_{2},\ldots,a_{n},\overline{a_{i}}}|F(a_{1},a_{2},\ldots,a_{i-1},a_{i},a_{i+1},\ldots,a_{n})-F(a_{1},a_{2},\ldots,a_{i-1},\overline{a_{i}},a_{i+1},\ldots,a_{n})|\leq R,

for all 1in1\leq i\leq n. That is, if A1,A2,,Ai1,Ai+1,,AnA_{1},A_{2},\ldots,A_{i-1},A_{i+1},\ldots,A_{n} are fixed, then the value of AiA_{i} can affect the value of F(A1,,An)F(A_{1},\ldots,A_{n}) by at most RR; this is known as the Lipschitz condition. Then for all S>0S>0,

Pr[|F(A1,,An)𝔼[F(A1,,An)]|RS]2e2S2/n.\Pr[|F(A_{1},\ldots,A_{n})-\mathbb{E}[F(A_{1},\ldots,A_{n})]|\geq R\cdot S]\leq 2e^{-2S^{2}/n}.
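To illustrate the inequality (this demo is ours, not part of the proof), one can take FF to be half the sum of nn independent uniform bits, so that each variable satisfies the Lipschitz condition with R=1/2R=1/2, and compare the empirical tail to the stated bound:

```python
import math
import random

random.seed(0)
n, R, S, trials = 200, 0.5, 20, 20000
mean = R * n / 2                 # E[F] when each bit is uniform on {0, 1}
exceed = 0
for _ in range(trials):
    F = R * sum(random.randint(0, 1) for _ in range(n))
    if abs(F - mean) >= R * S:   # deviation of at least R*S from the mean
        exceed += 1
empirical = exceed / trials
bound = 2 * math.exp(-2 * S ** 2 / n)   # McDiarmid: Pr <= 2e^{-2S^2/n}
assert empirical <= bound
```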

We will apply McDiarmid’s inequality to the quantity F=i=1+1XiF=\sum_{i=1}^{\ell+1}X_{i} (i.e., the left side of (7)). Recall that the filler’s behavior in steps n+1,,t1n+1,\ldots,t-1 has been fixed, meaning that the value of FF is a function of the filler’s behavior in steps 1,2,,n1,2,\ldots,n. Thus FF is a function of 2n2n independent random variables A1,A2,,A2nA_{1},A_{2},\ldots,A_{2n}, where A2i1A_{2i-1} and A2iA_{2i} are the cups that the filler places water into during step ii. Moreover, the outcome of each AiA_{i} can only change the value of F(A1,,A2n)F(A_{1},\ldots,A_{2n}) by at most ±1/2\pm 1/2. Thus F(A1,,A2n)F(A_{1},\ldots,A_{2n}) satisfies the Lipschitz condition with R=1/2R=1/2.

We apply McDiarmid’s inequality to F(A1,,A2n)=i=1+1XiF(A_{1},\ldots,A_{2n})=\sum_{i=1}^{\ell+1}X_{i}, with R=1/2R=1/2 and S=/16S=\ell/16 in order to conclude that,

Pr[i=1+1Xi</32]2eΩ(2/n)eΩ(n1/3)O(1/n2).\Pr[\sum_{i=1}^{\ell+1}X_{i}<\ell/32]\leq 2e^{-\Omega(\ell^{2}/n)}\leq e^{-\Omega(n^{1/3})}\leq O(1/n^{2}).

Thus (7) holds with probability at least 1O(1/n2)1-O(1/n^{2}), as desired. ∎

Combining the preceding lemmas, we get that the expected number of emptier-wasted steps in each phase is Ω(1)\Omega(1).

Lemma 6.13.

Suppose n>1n>1. For {n2/3+1,,n1}\ell\in\{n^{2/3}+1,\ldots,n-1\}, if mnlognm_{\ell}\leq n\log n, then the expected number of emptier-wasted steps in the \ell-cup phase is Ω(1)\Omega(1).

Proof.

By Lemma 6.7, the expected number of emptier-wasted steps is at least 𝔼[b/2mnlogn]\mathbb{E}[b_{\ell}/2\mid m_{\ell}\leq n\log n]. By Lemma 6.10, 𝔼[bmnlogn]=𝔼[v/2(+1)mnlogn]\mathbb{E}[b_{\ell}\mid m_{\ell}\leq n\log n]=\mathbb{E}[v_{\ell}/2(\ell+1)\mid m_{\ell}\leq n\log n]. By Lemma 6.11, 𝔼[v/2(+1)mnlogn]Ω(1)\mathbb{E}[v_{\ell}/2(\ell+1)\mid m_{\ell}\leq n\log n]\geq\Omega(1). Thus the expected number of emptier-wasted steps in the \ell-cup phase is Ω(1)\Omega(1). ∎

We can now complete the proof of Theorem 6.6.

Proof of Theorem 6.6.

For each {n2/3+1,,n}\ell\in\{n^{2/3}+1,\ldots,n\}, let ϕ\phi_{\ell} denote the average fill of cups 1,2,,1,2,\ldots,\ell at the end of the \ell-cup phase. We will show that 𝔼[ϕn2/3+1]=Ω(logn)\mathbb{E}[\phi_{n^{2/3}+1}]=\Omega(\log n), completing the proof of the theorem.

Consider the \ell-cup phase, where {n2/3+1,,n}\ell\in\{n^{2/3}+1,\ldots,n\}. At the beginning of the phase, the average amount of water ψ\psi_{\ell} in cups 1,2,,1,2,\ldots,\ell has expected value 𝔼[ψ]=𝔼[ϕ+1]\mathbb{E}[\psi_{\ell}]=\mathbb{E}[\phi_{\ell+1}], since the cups 1,2,,+11,2,\ldots,\ell+1 are indistinguishable up until the beginning of the \ell-cup phase. If ww_{\ell} is the expected number of emptier-wasted steps in phase \ell, then, 𝔼[ϕ]=𝔼[ψ]+w/\mathbb{E}[\phi_{\ell}]=\mathbb{E}[\psi_{\ell}]+w_{\ell}/\ell. Hence,

𝔼[ϕn2/3+1]Ω(wnn+wn1n1++wn2/3+1n2/3+1).\mathbb{E}[\phi_{n^{2/3}+1}]\geq\Omega\left(\frac{w_{n}}{n}+\frac{w_{n-1}}{n-1}+\cdots+\frac{w_{n^{2/3}+1}}{n^{2/3}+1}\right).

Let XX denote the event that mnlognm_{\ell}\leq n\log n for all {n2/3+1,,n}\ell\in\{n^{2/3}+1,\ldots,n\}. If Pr[X]1/2\Pr[X]\geq 1/2, then it follows from Lemma 6.13 that wΩ(1)w_{\ell}\geq\Omega(1) for each {n2/3+1,,n}\ell\in\{n^{2/3}+1,\ldots,n\}. Thus,

𝔼[ϕn2/3+1]Ω(1n+1n1++1n2/3+1)Ω(logn),\mathbb{E}[\phi_{n^{2/3}+1}]\geq\Omega\left(\frac{1}{n}+\frac{1}{n-1}+\cdots+\frac{1}{n^{2/3}+1}\right)\geq\Omega(\log n),

as desired.

Now consider the case where Pr[X]<1/2\Pr[X]<1/2, and let X¯\overline{X} denote the complement of XX. That is, with probability greater than 1/21/2, there is some \ell for which mnlognm_{\ell}\geq n\log n. Conditioned on X¯\overline{X}, let \ell denote the largest such index. For this \ell, since cups 1,2,,+11,2,\ldots,\ell+1 contain at least nlognn\log n units of water at the beginning of the \ell-cup phase, their average fill at that point is at least logn\log n. Note that, for any phase rr and value kk, we have 𝔼[ϕr1ϕr=k]k\mathbb{E}[\phi_{r-1}\mid\phi_{r}=k]\geq k. It follows that 𝔼[ϕ],𝔼[ϕ1],\mathbb{E}[\phi_{\ell}],\mathbb{E}[\phi_{\ell-1}],\ldots are each at least logn\log n, and hence 𝔼[ϕn2/3+1X¯]logn\mathbb{E}[\phi_{n^{2/3}+1}\mid\overline{X}]\geq\log n. Since X¯\overline{X} occurs with probability greater than 1/21/2, we have that 𝔼[ϕn2/3+1]Ω(logn)\mathbb{E}[\phi_{n^{2/3}+1}]\geq\Omega(\log n), completing the proof. ∎

6.3. Giving the Emptier a Time Stamp

In this section, we show that unending guarantees continue to be impossible, even if the score-based emptier is permitted to change their algorithm based on a global time stamp.

A dynamic score-based emptying algorithm 𝒜\mathcal{A} is dictated by a sequence 𝒳1,𝒳2,𝒳3,\langle\mathcal{X}_{1},\mathcal{X}_{2},\mathcal{X}_{3},\ldots\rangle, where each 𝒳i\mathcal{X}_{i} is a score-based emptying algorithm. On step tt of the cup game, the algorithm 𝒜\mathcal{A} follows algorithm 𝒳t\mathcal{X}_{t}.

Define the extended oblivious fuzzing filling algorithm to be the oblivious fuzzing filling strategy, except that each phase’s length is increased to consist of T(n)T(n) steps, where TT is a sufficiently large function of nn (that we will choose later).

Theorem 6.14.

Consider a cup game on nn cups. Suppose the emptier is a dynamic score-based emptier. Suppose the filler follows the extended oblivious fuzzing filling algorithm. Then at the beginning of the n2/3n^{2/3}-cup phase, the average fill of cups 1,2,,n2/31,2,\ldots,n^{2/3} is Ω(logn)\Omega(\log n), in expectation.

Understanding when two score-based algorithms can be treated as “equivalent”

We say that the cups 1,2,,n1,2,\ldots,n are in a legal state if each cup contains an integer multiple of 1/21/2 water, and the total water in the cups is at most nlognn\log n. By the assumption that the emptier is backlog-bounded, and that the filler follows the extended oblivious fuzzing filling algorithm, we know that the cup game considered in Theorem 6.14 will always be in a legal state.

Let 𝐋\mathbf{L} denote the set of legal states. For each score-based emptying algorithm 𝒳\mathcal{X}, we define the behavior vector B(𝒳)B(\mathcal{X}) of 𝒳\mathcal{X} to be the set

B(𝒳)={(L,k)𝒳 empties from cup k when the cups are in state L𝐋}.B(\mathcal{X})=\{(L,k)\mid\mathcal{X}\text{ empties from cup }k\text{ when the cups are in state }L\in\mathbf{L}\}.

Note that, for some states L𝐋L\in\mathbf{L}, the emptier 𝒳\mathcal{X} may choose not to empty from any cup—in this case, LL does not appear in any pair in B(𝒳)B(\mathcal{X}).

The behavior vector B(𝒳)B(\mathcal{X}) captures 𝒳\mathcal{X}’s behavior on all legal states. If B(𝒳)=B(𝒳)B(\mathcal{X})=B(\mathcal{X}^{\prime}) for two score-based emptying algorithms 𝒳\mathcal{X} and 𝒳\mathcal{X}^{\prime}, then we treat the two emptying algorithms as being the same (since their behavior is indistinguishable on the cup games that we are analyzing). This means that the number of distinct score-based emptying algorithms is finite, bounded by (n+1)|𝐋|(n+1)^{|\mathbf{L}|}. We will use 𝒜\mathcal{A} to denote the set of distinct score-based emptying algorithms. Formally, each element of 𝒜\mathcal{A} is an equivalence class of algorithms, where each score-based algorithm is assigned to an equivalence class based on its behavior vector B(𝒳)B(\mathcal{X}).
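As a toy illustration of this equivalence (the state space, the scoring functions, and the representation below are all hypothetical simplifications of the paper's definitions), one can compute behavior vectors explicitly and compare them:

```python
from itertools import product

def behavior_vector(score, states):
    """Set of (state, cup) pairs recording which cup the algorithm empties
    in each legal state; states where it empties nothing are omitted."""
    bv = set()
    for state in states:
        if any(f > 0 for f in state):   # may choose not to empty at all
            k = max(range(len(state)), key=lambda j: score(state, j))
            bv.add((state, k))
    return frozenset(bv)

# Toy legal states: fills in multiples of 1/2 on two cups.
states = list(product((0.0, 0.5, 1.0), repeat=2))
greedy = lambda s, j: s[j]        # score = fill
scaled = lambda s, j: 10 * s[j]   # different scores, identical ordering
# The two algorithms induce the same behavior vector, so they land in the
# same equivalence class.
assert behavior_vector(greedy, states) == behavior_vector(scaled, states)
```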

Associating each phase with a score-based algorithm that it “focuses” on

In order to analyze the \ell-cup phase of the extended oblivious fuzzing filling algorithm, we break the phase into segments, where each segment consists of 2nlogn|𝒜|2n\log n\cdot|\mathcal{A}| steps. For each segment, there must be an algorithm A𝒜A\in\mathcal{A} that the emptier uses at least 2nlogn2n\log n times within the segment. We say that the segment focuses on emptying algorithm AA.

Let KK denote the number of segments in the \ell-cup phase. By the pigeon-hole principle, there must be some algorithm A𝒜A\in\mathcal{A} such that at least K/|𝒜|K/|\mathcal{A}| of the segments in the \ell-cup phase focus on AA. We say that the \ell-cup phase focuses on algorithm AA.

For each {1,,n}\ell\in\{1,\ldots,n\}, let AA_{\ell} denote the score-based emptying algorithm that the \ell-cup phase focuses on (if there are multiple such algorithms AA_{\ell}, select one arbitrarily).

Our proof of Theorem 6.14 will analyze the \ell-cup phase by focusing on how the phase interacts with algorithm AA_{\ell}. This is made difficult by the fact that, between every two steps in which the emptier uses algorithm AA_{\ell}, there may be many steps in which the emptier uses other score-based emptying algorithms.

Defining the equilibrium state and bolus of each phase

We define the equilibrium state EE_{\ell} and the bolus bb_{\ell} of the \ell-cup phase to each be with respect to the score-based emptying algorithm AA_{\ell}. That is, EE_{\ell} is the equilibrium state for algorithm AA_{\ell} for the cups 1,2,,+11,2,\ldots,\ell+1 in which the total amount of water (in those cups) is m+1m_{\ell}+1 (recall that mm_{\ell} is the amount of water in cups 1,2,,+11,2,\ldots,\ell+1 at the beginning of the \ell-cup phase). Using this definition of EE_{\ell}, the bolus is b=max(0,rE(+1))b_{\ell}=\max(0,r-E_{\ell}(\ell+1)), where rr is the amount of water in cup +1\ell+1 at the beginning of the \ell-cup phase.

The key to proving Theorem 6.14 is to show that, if mnlognm_{\ell}\leq n\log n, then the expected number of emptier-wasted steps in the \ell-cup phase is at least Ω(b)\Omega(b_{\ell}). That is, we wish to prove a result analogous to Lemma 6.7 from Section 6.2.

Lemma 6.15.

Let [n1]\ell\in[n-1], condition on mnlognm_{\ell}\leq n\log n, and condition on some value for bb_{\ell}. Under these conditions, the expected number of emptier-wasted steps in the \ell-cup phase is at least b/2b_{\ell}/2.

Proof.

Call a step tt in the \ell-cup phase equilibrium-converging if either:

  • The emptier uses algorithm AA_{\ell} during step tt, and for each cup jj that the filler places water into, the fill xx of cup jj after the water is placed satisfies xE(j)x\leq E_{\ell}(j).

  • The emptier uses an algorithm AAA\neq A_{\ell} during step tt, and the filler places all of their water (i.e., a full unit) into the cup jj whose score (as assigned by the score-based algorithm AA) is largest at the beginning of step tt.

The first case in the definition of equilibrium-converging steps is similar to that in the proof of Lemma 6.7. The second case, where the emptier uses an algorithm AAA\neq A_{\ell}, is different; in this case, the definition guarantees that the step is either emptier-wasted or is a no-op (meaning that the water removed by the emptier during the step is exactly the same as the water placed by the filler).

Call a step in the \ell-cup phase convergence-enabling if the total amount of water in cups 1,2,,1,2,\ldots,\ell is less than (i=1E(i))1\left(\sum_{i=1}^{\ell}E_{\ell}(i)\right)-1 at the beginning of the step.

Just as in the proof of Lemma 6.7, convergence-enabling steps have two important properties:
The Convergence Property: For any convergence-enabling step tt, there is some pair of cups j,kj,k (possibly j=kj=k) that the filler can place water into so that the step is equilibrium-converging. Thus, whenever a convergence-enabling step occurs, the step is equilibrium-converging with probability at least 1/21/\ell^{2}.
The Bolus Property: At the beginning of any convergence-enabling step, the amount of water in cup +1\ell+1 must be greater than E(+1)E_{\ell}(\ell+1). This is a consequence of the fact that the total amount of water in cups 1,,+11,\ldots,\ell+1 is at least m=(i=1+1E(i))1m_{\ell}=\left(\sum_{i=1}^{\ell+1}E_{\ell}(i)\right)-1.

We now prove a claim analogous to Claim 6.8.

Claim 6.16.

Suppose mnlognm_{\ell}\leq n\log n, and consider a segment SS in the \ell-cup phase that focuses on AA_{\ell}. If SS begins with a convergence-enabling step, and every step in SS is equilibrium converging, then SS must contain an emptier-wasted step.

Proof.

There are two types of steps in SS: (1) equilibrium-converging steps where the emptier uses algorithm AA_{\ell}, and (2) equilibrium-converging steps where the emptier does not use AA_{\ell}. All type (2) steps are either emptier-wasted or are no-ops (meaning that they do not change the state of the cup game). On the other hand, because segment SS focuses on AA_{\ell}, there must be at least 2nlogn2n\log n type (1) steps. If no type (2) steps are emptier-wasted, then the type (1) steps meet the conditions for Claim 6.8 (i.e., the type (1) steps meet the conditions that are placed on LiL_{i} in the claim). Thus, by Claim 6.8, at least one of the steps in SS is emptier-wasted, as desired. ∎

We can now complete the proof of the lemma. For each segment SS that focuses on AA_{\ell}, if the number of emptier-wasted steps prior to SS is less than b\lceil b_{\ell}\rceil, then the first step of SS is convergence-enabling (and any steps in SS up until the first emptier-wasted step in SS are also convergence enabling). By the Convergence Property and Claim 6.16, it follows that SS has probability at least

p:=1/2|S|=1/4nlogn|𝒜|p:=1/\ell^{2|S|}=1/\ell^{4n\log n|\mathcal{A}|}

of containing an emptier-wasted step.

If KK is the number of segments in a phase, then at least K=K/|𝒜|K^{\prime}=K/|\mathcal{A}| of the segments in phase \ell must focus on algorithm AA_{\ell}. Denote these segments by S1,,SKS_{1},\ldots,S_{K^{\prime}}. Break the \ell-cup phase into collections C1,C2,,C2nlognC_{1},C_{2},\ldots,C_{2n\log n} of time segments, where each CiC_{i} contains K/(2nlogn)K^{\prime}/(2n\log n) of the SiS_{i}’s. Say that a collection CiC_{i} fails if fewer than b\lceil b_{\ell}\rceil emptier-wasted steps occur prior to CiC_{i}, and no emptier-wasted step occurs during CiC_{i}. Since each CiC_{i} contains at least K/(2nlogn)K^{\prime}/(2n\log n) segments that focus on AA_{\ell}, the probability of CiC_{i} failing is at most,

(8) (1p)K/(2nlogn).(1-p)^{K^{\prime}/(2n\log n)}.

Assuming that KK is sufficiently large as a function of nn, the exponent in (8) is also sufficiently large as a function of nn, and thus (8) is at most 1/(4nlogn)1/(4n\log n). By a union bound over the CiC_{i}’s, it follows that the probability of any CiC_{i} failing is at most 1/21/2. On the other hand, if none of the CiC_{i}’s fail, then at least b\lceil b_{\ell}\rceil steps must be emptier-wasted (here, we are using the fact that the number of collections CiC_{i} is 2nlognmb2n\log n\geq m_{\ell}\geq\lceil b_{\ell}\rceil). Thus the expected number of emptier-wasted steps is at least b/2\lceil b_{\ell}\rceil/2. ∎

We can now prove Theorem 6.14.

Proof of Theorem 6.14.

The proof follows exactly as for Theorem 6.6, except that Lemma 6.7 is replaced with Lemma 6.15. ∎

6.4. Unending guarantees with small resource augmentation

In this section we show that, even though resource augmentation ε>0\varepsilon>0 is needed to achieve unending guarantees for the smoothed greedy (and asymmetric smoothed greedy) emptying algorithms, the amount of resource augmentation that is necessary is substantially smaller than was previously known. In particular, we prove unending guarantees when ε=2polylogn\varepsilon=2^{-\operatorname{polylog}n}.

Theorem 6.17 states an unending guarantee for the smoothed greedy emptying algorithm, using ε=2polylogn\varepsilon=2^{-\operatorname{polylog}n}.

Theorem 6.17.

Consider a single-processor cup game in which the emptier follows the smoothed greedy emptying algorithm, and the filler is an oblivious filler. If the game has resource augmentation parameter ε2polylogn\varepsilon\geq 2^{-\operatorname{polylog}n}, then each step tt achieves backlog O(loglogn)O(\log\log n) with probability 12polylogn1-2^{-\operatorname{polylog}n} (where the exponent in the polylog is a constant of our choice).

Theorem 6.18 states an unending guarantee for the asymmetric smoothed greedy emptying algorithm, using ε=2polylogn\varepsilon=2^{-\operatorname{polylog}n}.

Theorem 6.18.

Consider a single-processor cup game in which the emptier follows the asymmetric smoothed greedy emptying algorithm, and the filler is an oblivious filler. If the game has resource augmentation parameter ε2polylogn\varepsilon\geq 2^{-\operatorname{polylog}n}, then each step tt achieves tail size O(polylogn)O(\operatorname{polylog}n) and the backlog O(loglogn)O(\log\log n) with probability 12polylogn1-2^{-\operatorname{polylog}n} (where the exponent in the polylog in the probability is a constant of our choice).

We begin by proving Theorem 6.17.

Call a step tt a rest step if the emptier removes less than 11 unit of water during that step. The next lemma shows that rest steps are relatively common.

Lemma 6.19.

Consider a single-processor cup game in which the emptier follows either the smoothed greedy or the asymmetric smoothed greedy emptying algorithm. Any sequence of n/ε+1n/\varepsilon+1 steps must contain a rest step.

Proof.

Whenever a step is a rest step, it must be that every cup contains less than 11 unit of water, meaning that the total amount of water in the system is at most nn. On the other hand, during each non-rest step, the amount of water in the system decreases by at least ε\varepsilon. It follows that, if there are kk non-rest steps in a row, then the total amount of water in the system after those steps is at most nkεn-k\varepsilon. Thus the number of non-rest steps that can occur in a row is never more than n/εn/\varepsilon, as desired. ∎
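The argument can be watched in a toy simulation (ours, with an arbitrary random filler and a simple greedy emptier): every non-rest step drains a net ε\varepsilon from the system, so runs of non-rest steps can never exceed n/εn/\varepsilon.

```python
import random

random.seed(1)
n, eps = 4, 0.25
fills = [0.0] * n
longest_run = run = 0
for _ in range(500):
    # filler: distribute 1 - eps units across two random cups
    for _ in range(2):
        fills[random.randrange(n)] += (1 - eps) / 2
    # emptier: remove up to 1 unit from the fullest cup
    j = max(range(n), key=lambda i: fills[i])
    removed = min(1.0, fills[j])
    fills[j] -= removed
    if removed < 1.0:                # rest step
        run = 0
    else:
        run += 1
        longest_run = max(longest_run, run)
assert longest_run <= n / eps        # at most n/eps non-rest steps in a row
```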

In order to prove Theorem 6.17, we will exploit the following result of Bender et al. [11] which analyzes the smoothed greedy algorithm for ε=0\varepsilon=0:

Theorem 6.20 (Bender et al. [11]).

Consider a single-processor cup game in which the emptier follows the smoothed greedy emptying algorithm, and the filler is an oblivious filler. Moreover, suppose ε=0\varepsilon=0. For any positive constants cc and dd, and any t2logcnt\leq 2^{\log^{c}n}, step tt has backlog O(loglogn)O(\log\log n) with probability at least 12logdn1-2^{-\log^{d}n}.

Although Theorem 6.20 only applies to the first 2polylogn2^{\operatorname{polylog}n} steps of a game, we can use it to prove the following lemma. Define a fractional reset to be what happens if one reduces the fill fjf_{j} of each cup jj to fjfjf_{j}-\lfloor f_{j}\rfloor. That is, the fills of the cups are decreased by integer amounts to be in [0,1)[0,1). The next lemma shows that, if a cup game is fractionally reset after a given step jj, then the following steps j+1,j+2,,j+2polylognj+1,j+2,\ldots,j+2^{\operatorname{polylog}n} are guaranteed to have small backlog.
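Concretely, a fractional reset is the map fjfjfjf_{j}\mapsto f_{j}-\lfloor f_{j}\rfloor applied to every cup; and, since the emptier only ever removes integer amounts of water, a cup's fill modulo 11 is unaffected by the emptier. A minimal sketch of both facts:

```python
import math

def fractional_reset(fills):
    """Reduce each fill to its fractional part, placing it in [0, 1)."""
    return [f - math.floor(f) for f in fills]

# Fills here are multiples of 1/2, as in the games being analyzed.
assert fractional_reset([3.5, 2.0, 0.5]) == [0.5, 0.0, 0.5]
# Removing an integer amount of water never changes a fill modulo 1.
assert fractional_reset([3.5]) == fractional_reset([3.5 - 2.0])
```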

Lemma 6.21.

Consider a single-processor cup game in which the emptier follows the smoothed greedy emptying algorithm, and the filler is an oblivious filler. Consider a step t0t_{0}, and suppose that, after step t0t_{0}, the cup system is fractionally reset. Then for any positive constants cc and dd, and any tt0+2logcnt\leq t_{0}+2^{\log^{c}n}, step tt has backlog O(loglogn)O(\log\log n) with probability at least 122logdn1-2^{-2\log^{d}n}.

Proof.

For each cup jj, let rjr_{j} be the random initial offset placed into cup jj by the smoothed greedy emptying algorithm, and let cjc_{j} be the total amount of water placed into cup jj by the filler during steps 1,2,,t01,2,\ldots,t_{0}. Because the emptier always removes water in multiples of 11, they never change the fractional amount of water in any cup (i.e., the amount of water modulo 11). It follows that the fractional amount of water in each cup jj is given by,

qj:=cj+rj(mod1).q_{j}:=c_{j}+r_{j}\pmod{1}.

Since rjr_{j} is uniformly random in [0,1)[0,1), the value qjq_{j} is also uniformly random in [0,1)[0,1). Moreover, because the initial offsets rjr_{j} are independent of one another, so are the values qjq_{j}.

Because the values qjq_{j} are independent and uniformly random in [0,1)[0,1), they can be thought of as initial random offsets for the smoothed greedy emptying algorithm. Thus, if each cup jj is reset to have fill qjq_{j} after step t0t_{0}, then the following steps can be analyzed as the first steps of a cup game in which the emptier follows the smoothed greedy emptying algorithm. The claimed result therefore follows from Theorem 6.20. ∎

We now prove Theorem 6.17.

Proof of Theorem 6.17.

Consider a step tt, and let dd be a large positive constant. For each step t0{tn/ε,,t}t_{0}\in\{t-n/\varepsilon,\ldots,t\}, Lemma 6.21 tells us that if a fractional reset were to happen after step t0t_{0}, then step tt would have probability at least 12logdn1-2^{-\log^{d}n} of having backlog O(loglogn)O(\log\log n). By a union bound, it follows that if a fractional reset were to happen after any of steps tn/ε,,tt-n/\varepsilon,\ldots,t, then step tt would have probability at least 1(n/ε)2logdn1-(n/\varepsilon)2^{-\log^{d}n} of having backlog O(loglogn)O(\log\log n). Supposing that dd is a sufficiently large constant, this probability is at least 12logd/2n1-2^{-\log^{d/2}n}.

By Lemma 6.19, at least one step t0{tn/ε,,t}t_{0}\in\{t-n/\varepsilon,\ldots,t\} is a rest step. This means that, at the end of step t0t_{0}, every cup contains less than 11 unit of water. In other words, the state of the system after step t0t_{0} is the same as if the system were to be fractionally reset. It follows that, for any constant dd, the backlog after step tt is O(loglogn)O(\log\log n) with probability at least 12logd/2n1-2^{-\log^{d/2}n}. This completes the proof. ∎

The proof of Theorem 6.18 follows similarly to the proof of Theorem 6.17. Rather than using Theorem 6.20, we instead analyze the case of ε=0\varepsilon=0 using the following version of Theorem 4.10:

Theorem 6.22.

Consider a single-processor cup game that lasts for 2polylogn2^{\operatorname{polylog}n} steps and in which the emptier follows the asymmetric smoothed greedy algorithm. Then with high probability in nn, the number of cups containing 22 or more units of water never exceeds O(polylogn)O(\operatorname{polylog}n) and the backlog never exceeds O(loglogn)O(\log\log n) during the game.

We now prove Theorem 6.18.

Proof of Theorem 6.18.

The proof follows in exactly the same way as for Theorem 6.17, except that Theorem 6.22 is used in place of Theorem 6.20. ∎

6.5. Tight lower bounds on resource augmentation for smoothed greedy

Theorems 6.17 and 6.18 give unending guarantees for the smoothed greedy (and asymmetric smoothed greedy) emptying algorithms using resource augmentation ε=1/2polylogn\varepsilon=1/2^{\operatorname{polylog}n}. Theorem 6.23 shows that such guarantees cannot be achieved with smaller resource augmentation.

Theorem 6.23.

Consider a single-processor cup game on nn cups. Suppose ε=1/2logω(1)n\varepsilon=1/2^{\log^{\omega(1)}n}, and suppose the emptier follows either the smoothed greedy emptying algorithm or the asymmetric smoothed greedy emptying algorithm. Then there is an oblivious filling strategy that causes there to be a step tt at which the expected backlog is ω(loglogn)\omega(\log\log n).

To prove Theorem 6.23, we will have the filler follow the oblivious fuzzing filling algorithm on min(logε1,n)\min(\sqrt{\log\varepsilon^{-1}},n) cups. Rather than placing water in multiples of 1/21/2, however, the filler now places water in multiples of 1/2ε/21/2-\varepsilon/2 (so that the total water placed in each step is 1ε1-\varepsilon).
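The rescaling described above amounts to multiplying each step's placement by a factor of 1 − ε; the following sketch (the helper name rescale_placement is hypothetical) converts a placement given in multiples of 1/2 and totaling 1 unit into one given in multiples of 1/2 − ε/2 and totaling 1 − ε.

```python
def rescale_placement(placement, eps):
    """Scale a filler placement given in multiples of 1/2 (totaling 1 unit)
    by a factor of (1 - eps), so that water is placed in multiples of
    1/2 - eps/2 and the total placed in the step is 1 - eps."""
    assert abs(sum(placement) - 1.0) < 1e-9  # original placement totals 1 unit
    return [(1.0 - eps) * w for w in placement]
```

For example, a placement of half a unit into each of two cups becomes 1/2 − ε/2 into each, for a total of 1 − ε.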

If ε\varepsilon were 0, then Theorem 6.6 would guarantee an expected backlog of

(9) Ω(min{loglogε1,logn})ω(loglogn)\Omega\left(\min\left\{\log\sqrt{\log\varepsilon^{-1}},\log n\right\}\right)\geq\omega(\log\log n)

after some step t2O~(logε1)ε1/10t^{*}\leq 2^{\tilde{O}(\sqrt{\log\varepsilon^{-1}})}\leq\varepsilon^{-1/10}.

The fact that ε>0\varepsilon>0, however, means that we cannot directly apply Theorem 6.6. Thus, in order to prove Theorem 6.23, we must first show that the resource augmentation ε=1/2logω(1)n\varepsilon=1/2^{\log^{\omega(1)}n} is so small that, with high probability, it does not have a significant effect on the game by step tt^{*}.

In order to bound the impact of resource augmentation on the emptier, we exploit the random structure of the emptier’s algorithm, and use that random structure to show that the emptier’s behavior is robust to small “perturbations” due to resource augmentation.

Lemma 6.24.

Consider resource augmentation ε=1/2logω(1)n\varepsilon=1/2^{\log^{\omega(1)}n} and consider a step tε1/10t^{*}\leq\varepsilon^{-1/10}. With probability at least 1O(ε)1-O(\sqrt{\varepsilon}), the resource augmentation does not affect the emptier’s behavior during the first tt^{*} steps.

Proof.

We begin with a simple observation: the total amount of resource augmentation during the first tt^{*} steps is εtε9/10\varepsilon t^{*}\leq\varepsilon^{9/10}. Call this the Net-Augmentation Observation.

For each step tt, define StS_{t} to be the state of the cup game after the tt-th step without resource augmentation, and define StS^{\prime}_{t} to be the state of the cup game with resource augmentation ε=1/2logω(1)n\varepsilon=1/2^{\log^{\omega(1)}n} (note that, in both cases, the emptier follows the same variant of the smoothed greedy algorithm using the same random initial offsets). Let St(j)S_{t}(j) (resp. St(j)S^{\prime}_{t}(j)) denote the fill of cup jj, after the filler has placed water in the tt-th step, but before the emptier has removed water (note that, when discussing fill, we include the random initial offset placed by the emptier in each cup).

Suppose that, during steps 1,2,,t11,2,\ldots,t-1, the emptier’s behavior is unaffected by the resource augmentation. The only way that the emptier’s behavior in step tt can be affected by the resource augmentation is if either:

  • Case 1: St(j)St(j)\lfloor S_{t}(j)\rfloor\neq\lfloor S^{\prime}_{t}(j)\rfloor for some cup jj. By the Net-Augmentation Observation, it follows that (St(j)mod1)[ε9/10,ε9/10](S_{t}(j)\mod 1)\in[-\varepsilon^{9/10},\varepsilon^{9/10}].

  • Case 2: St(j)<St(k)S_{t}(j)<S_{t}(k) but St(k)<St(j)S^{\prime}_{t}(k)<S^{\prime}_{t}(j) for some cups jj and kk. By the Net-Augmentation Observation, it follows that St(k)St(j)<ε9/10S_{t}(k)-S_{t}(j)<\varepsilon^{9/10}.

We will show that the probability of either Case 1 or Case 2 happening is at most O(ε9/10n2)O(\varepsilon^{9/10}n^{2}). It follows that the probability of resource augmentation affecting the emptier’s behavior during any of the first tt^{*} steps is at most O(ε9/10n2t)O(ε)O(\varepsilon^{9/10}n^{2}t^{*})\leq O(\sqrt{\varepsilon}).
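The final inequality above uses t* ≤ ε^{−1/10} together with the fact that ε = 1/2^{log^{ω(1)} n} makes any polynomial in n negligible against a small power of ε^{−1}:

```latex
O(\varepsilon^{9/10} n^{2} t^{*})
  \le O(\varepsilon^{9/10} n^{2} \varepsilon^{-1/10})
  = O(n^{2}\varepsilon^{4/5})
  = O(\sqrt{\varepsilon})\cdot O(n^{2}\varepsilon^{3/10})
  \le O(\sqrt{\varepsilon}),
```

since ε^{3/10} ≤ 2^{−2 log n} = n^{−2} for sufficiently large n.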

Rather than directly bounding the probability of either Case 1 or Case 2 occurring on step tt, we can instead bound the probability that either,

(10) (St(j)mod1)[ε9/10,ε9/10](S_{t}(j)\mod 1)\in[-\varepsilon^{9/10},\varepsilon^{9/10}]

for some cup jj, or that

(11) St(k)St(j)<ε9/10S_{t}(k)-S_{t}(j)<\varepsilon^{9/10}

for some cups j,kj,k. Recall that the values St(1),St(2),,St(n)S_{t}(1),S_{t}(2),\ldots,S_{t}(n) modulo 11 are uniformly and independently random between 0 and 11; this is because the emptier initially places random offsets rj[0,1)r_{j}\in[0,1) into each cup jj, which permanently randomizes the fractional amount of water in that cup. Thus the probability that (St(j)mod1)[ε9/10,ε9/10](S_{t}(j)\mod 1)\in[-\varepsilon^{9/10},\varepsilon^{9/10}] for a given cup jj is O(ε9/10)O(\varepsilon^{9/10}), and the probability that St(k)St(j)<ε9/10S_{t}(k)-S_{t}(j)<\varepsilon^{9/10} for a given pair of cups j,kj,k is also O(ε9/10)O(\varepsilon^{9/10}). By union-bounding over all cups jj (for (10)) and over all pairs of cups j,kj,k (for (11)), we get that the probability of either (10) or (11) occurring is O(ε9/10n2)O(\varepsilon^{9/10}n^{2}), as desired. ∎
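The two per-cup probability bounds used above depend only on the fills being uniform modulo 1. A quick Monte Carlo sketch (the function names are illustrative) confirms that both probabilities are Θ(δ) for a margin δ, matching the O(ε^{9/10}) bounds in the proof.

```python
import random

def near_integer_prob(delta, trials=200_000, seed=1):
    """Estimate Pr[U is within delta of an integer] for U uniform on [0, 1).
    The exact value is 2*delta (the set [0, delta] union [1 - delta, 1))."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        u = rng.random()
        if min(u, 1.0 - u) <= delta:
            hits += 1
    return hits / trials

def near_collision_prob(delta, trials=200_000, seed=2):
    """Estimate Pr[|U - V| < delta] for independent U, V uniform on [0, 1).
    The exact value is 2*delta - delta**2."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if abs(rng.random() - rng.random()) < delta:
            hits += 1
    return hits / trials
```

Union-bounding the first estimate over all n cups and the second over all n² pairs of cups recovers the O(ε^{9/10} n²) per-step failure probability used in the proof.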

We can now complete the proof of Theorem 6.23.

Proof of Theorem 6.23.

For each step tt, define StS_{t} and StS^{\prime}_{t} as in Lemma 6.24.

By Theorem 6.6 and (9), there exists t2O~(logε1)ε1/10t^{*}\leq 2^{\tilde{O}(\sqrt{\log\varepsilon^{-1}})}\leq\varepsilon^{-1/10} for which the expected backlog in StS_{t^{*}} is ω(loglogn)\omega(\log\log n). By Lemma 6.24 and the Net-Augmentation Observation, it follows that the expected backlog in StS^{\prime}_{t^{*}} is at least ω(loglognε9/10)ω(loglogn)\omega(\log\log n-\varepsilon^{9/10})\geq\omega(\log\log n), which completes the proof. ∎
