How Asymmetry Helps Buffer Management: Achieving Optimal Tail Size in Cup Games
Abstract.
The cup game on $n$ cups is a multi-step game with two players, a filler and an emptier. At each step, the filler distributes $1$ unit of water among the cups, and then the emptier selects a single cup to remove (up to) $1$ unit of water from.
There are several objective functions that the emptier might wish to minimize. One of the strongest guarantees would be to minimize tail size, which is defined to be the number of cups with fill $2$ or greater. A simple lower-bound construction shows that the optimal tail size for deterministic emptying algorithms is $\Theta(n)$, however.
We present a simple randomized emptying algorithm that achieves tail size $O(\log n)$ with high probability in $n$ for $\operatorname{poly}(n)$ steps. Moreover, we show that this is tight up to doubly logarithmic factors. We also extend our results to the multi-processor cup game, achieving tail size $O(p + \log n)$ on $p$ processors with high probability in $n$. We show that the dependence on $p$ is near optimal for any emptying algorithm that achieves polynomially bounded backlog.
A natural question is whether our results can be extended to give unending guarantees, which apply to arbitrarily long games. We give a lower-bound construction showing that no monotone memoryless emptying algorithm can achieve an unending guarantee on either tail size or the related objective function of backlog. On the other hand, we show that even a very small (i.e., $1/\operatorname{poly}(n)$) amount of resource augmentation is sufficient to overcome this barrier.
1. Introduction
At the start of the cup game on $n$ cups, there are $n$ empty cups. In each step of the game, a filler distributes $1$ unit of water among the cups, and then an emptier removes (up to) $1$ unit of water from a single cup of its choice. The emptier aims to minimize some measure of “behind-ness” for cups in the system (e.g., the height of the fullest cup, or the number of cups above a certain height). If the emptier’s algorithm is randomized, then the filler is an oblivious adversary, meaning it cannot adapt to the behavior of the emptier.
The cup game naturally arises in the study of processor scheduling, modeling a problem in which tasks each receive work at varying rates, and a scheduler must pick one task to schedule at each time step [7, 23, 8, 32, 30, 36, 6, 26, 33, 34, 1, 31, 19]. The cup game has also found numerous applications to network-switch buffer management [24, 4, 38, 22], quality of service guarantees [7, 1, 31], and data structure deamortization [2, 19, 18, 3, 37, 25, 20, 27, 10].
Bounds on backlog
Much of the work on cup games has focused on bounding the backlog of the system, which is defined to be the amount of water in the fullest cup.
Research on bounding backlog has spanned five decades [7, 23, 8, 32, 30, 36, 6, 26, 33, 34, 19, 11, 28, 1, 18, 31]. Much of the early work focused on the fixed-rate version of the game, in which the filler places a fixed amount of water into each cup on every step [7, 23, 8, 32, 30, 36, 6, 26, 33, 34]; in this case constant backlog is achievable [33, 7]. For the full version of the game, without fixed rates, constant backlog is not possible. In this case, the optimal deterministic emptying algorithm is known to be the greedy emptying algorithm, which always empties from the fullest cup, and which achieves backlog $\Theta(\log n)$ [1, 18]. If the emptier is permitted to use a randomized algorithm, then it can do much better, achieving an asymptotically optimal backlog of $O(\log \log n)$ for $\operatorname{poly}(n)$ steps with high probability [19, 11, 28].
A strong guarantee: small tail size
The tail size of a cup game at each step is the number of cups containing at least some constant amount of water. For the guarantees in this paper, that constant will be taken to be $2$.
A guarantee of small tail size is particularly appealing for scheduling applications, where cups represent tasks and water represents work that arrives to the tasks over time. Whereas a bound of $O(\log \log n)$ on backlog guarantees that the furthest-behind worker is only behind by at most $O(\log \log n)$, it says nothing about the number of workers that are behind by a constant amount. In contrast, a small bound on tail size ensures that almost no workers are behind by more than a constant amount.
The main result in this paper is a randomized emptying algorithm that achieves tail size $O(\log n)$. The algorithm also simultaneously optimizes backlog, keeping the maximum height at $O(\log \log n)$. As a result, the total amount of water above height $2$ in the system is $O(\log n \cdot \log \log n)$ with high probability. In contrast, the best possible deterministic emptying algorithm allows for up to $\Theta(n)$ cups to simultaneously have fill $2$ or greater (see the lower-bound constructions discussed in [19] and [12]).
The problems of optimizing tail size and backlog are closely related to the problem of optimizing the shifted $\ell_p$ norms of the cup game, in which each cup is charged only for the portion of its fill that exceeds a fixed threshold.
The problem of bounding backlog corresponds to the problem of optimizing the $\ell_\infty$ norm of the cup game, and the problem of bounding tail size corresponds to bounding a shifted $\ell_0$ norm. (Some shift is necessary for the norm to be interesting, since trivial filling strategies can deterministically keep a constant amount of water in $\Theta(n)$ cups.) By optimizing both metrics simultaneously, our algorithm has the desirable property that it also achieves a bound for the shifted $\ell_p$ norm for every intermediate value of $p$ that is optimal up to doubly logarithmic factors.
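One natural way to formalize such a norm is the following; the notation $\|\cdot\|_{p,c}$ and the shift parameter $c$ are our own illustrative choices rather than a definition taken verbatim from this paper.
\[
  \|f\|_{p,c} \;=\; \Bigl(\sum_{i=1}^{n} \max\bigl(f_i - c,\, 0\bigr)^{p}\Bigr)^{1/p},
  \qquad \text{where } f_i \text{ denotes the fill of cup } i .
\]
With $p = \infty$ and $c = 0$ this recovers the backlog, and with $c = 2$ and $p = 0$ interpreted as counting the nonzero terms it recovers the tail size (up to boundary conventions).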
Past work on tail size using beyond-worst-case analysis
To approach the combinatorial difficulty of analyzing cup games, past works have often turned to various forms of beyond-worst-case analysis (e.g., smoothed analysis [11], semi-clairvoyance [31], resource augmentation [11, 31, 19, 18]). The most successful of these approaches has arguably been resource augmentation. In the cup game with resource augmentation, the filler is permitted to distribute at most $1 - \epsilon$ units of water among the cups at each step (rather than a full unit), giving the emptier a small advantage. Resource augmentation can also be studied in a more extreme form by allowing the emptier to fully empty a cup on each step, rather than simply removing a single unit [19, 18]. Resource augmentation significantly simplifies the task of analyzing cup games — for example, there was nearly a 30-year gap between the first randomized bounds on backlog with resource augmentation [19] and the first bounds without resource augmentation [28].
Currently, the best known guarantees with resource augmentation are achieved by the algorithm of [11], which uses a small amount of resource augmentation $\epsilon$ to bound both backlog and tail size with high probability.
The algorithm, which is called the smoothed greedy algorithm, begins by randomly perturbing the starting state of the system, and then follows a variant of the deterministic greedy emptying algorithm. Roughly speaking, the random perturbations at the beginning of the game allow for the number of cups containing more than $1$ unit of water after each step to be modeled by a biased random walk. This is where the resource augmentation plays a critical role, since it introduces a bias to the random walk which pushes the walk toward $0$ at all times. In contrast, without resource augmentation, the random walk is unbiased.
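As a toy illustration of why the bias matters (this is a simplified caricature, not the actual process analyzed in [11]), the following sketch compares a reflected $\pm 1$ random walk with downward bias $\epsilon$ against an unbiased one; the biased walk hovers near $0$, while the unbiased walk drifts to a height on the order of the square root of the number of steps.

import random

def reflected_walk(steps, eps):
    """Reflected-at-zero walk: step up w.p. (1 - eps)/2, down otherwise."""
    x, high = 0, 0
    for _ in range(steps):
        x = max(0, x + (1 if random.random() < (1 - eps) / 2 else -1))
        high = max(high, x)
    return high

random.seed(0)
print("max height, biased (eps = 0.1):", reflected_walk(100_000, 0.1))
print("max height, unbiased (eps = 0):", reflected_walk(100_000, 0.0))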
Subsequent work [28] showed that the algorithm’s guarantees on backlog continue to hold without resource augmentation (at least, for a polynomial number of steps). More generally, that analysis bounds the number of cups at or above any given height, which in turn implies an arbitrarily small polynomial tail size. Whether or not a subpolynomial tail size can be achieved without resource augmentation has remained open.
This paper: the asymmetric smoothed greedy algorithm
We show that resource augmentation is not needed to bound tail size. We present a randomized emptying algorithm (called the asymmetric smoothed greedy algorithm) that achieves tail size $O(\log n)$ with high probability in $n$ after each of the first $\operatorname{poly}(n)$ steps of a cup game. We prove that the algorithm is nearly optimal, in that no emptying algorithm can guarantee a tail size that is more than doubly-logarithmically smaller.
The analysis of the algorithm takes an indirect approach to bounding tail size. Rather than examining tail size directly, we instead prove that the use of randomness by the algorithm makes the state of the cups at each step “difficult” for the filler to accurately predict. We call this the unpredictability guarantee. We then show that any greedy-like algorithm that satisfies the unpredictability guarantee is guaranteed to have tail size $O(\log n)$ and backlog $O(\log \log n)$.
Algorithmically, the key to achieving the unpredictability guarantee is to add a small amount of asymmetry to the smoothed greedy emptying algorithm. When choosing between cups that have 2 or more units of water, the emptier follows the standard smoothed greedy algorithm; but when choosing between cups whose fills are between 1 and 2, the emptier ignores the specifics of how much water is in each cup, and instead chooses between the cups based on random priorities that are assigned to the cups at the beginning of the game.
Intuitively, the asymmetric treatment of cups ensures that there is a large collection (of size roughly $n/2$) of randomly selected cups that are “almost always” empty. The fact that the filler doesn’t know which cups these are then implies the unpredictability guarantee. Proving this intuition remains highly nontrivial, however, and requires several new combinatorial ideas.
Multi-processor guarantees
The cup game captures a scheduling problem in which a single processor must pick one of $n$ tasks to make progress on in each time step. The multi-processor version of this scheduling problem is captured by the $p$-processor cup game [11, 28, 31, 1, 7, 33]. In each step of the $p$-processor cup game, the filler distributes $p$ units of water among the $n$ cups, and the emptier removes $1$ unit of water from each of (up to) $p$ different cups. Because the emptier can remove at most $1$ unit of water from each cup at each step, an analogous constraint is also placed on the filler, requiring that it places at most $1$ unit of water into each cup at each step.
A key feature of the $p$-processor cup game is that the emptier is required to spread its removals across $p$ distinct cups in each step, even if the vast majority of water is contained in fewer than $p$ cups.
Until recently, establishing any nontrivial bounds on backlog in the multi-processor cup game remained an open problem, even with the help of resource augmentation. Recent work by Bender et al. [11] (using resource augmentation) and then by Kuszmaul [28] (without resource augmentation) established bounds on backlog closely matching those for the single-processor game.
By extending our techniques to the multi-processor setting, we construct a randomized emptying algorithm that achieves tail size $O(p + \log n)$ with high probability in $n$ after each of the first $\operatorname{poly}(n)$ steps of a $p$-processor cup game. Moreover, we show that the dependence on $p$ is near optimal for any backlog-bounded algorithm (i.e., any algorithm that achieves backlog $\operatorname{poly}(n)$ or smaller).
Lower bounds against unending guarantees
In the presence of resource augmentation, the smoothed greedy emptying algorithm is known to provide an unending guarantee [11], meaning that the high-probability bounds on backlog and tail size continue to hold even for arbitrarily large steps $t$.
A natural question is whether unending guarantees can also be achieved without the use of resource augmentation. It was previously shown that, in the multi-processor setting, the smoothed greedy algorithm does not offer unending guarantees [28]. Analyzing the single-processor game has remained an open question, however.
We give a lower-bound construction showing that neither the smoothed greedy algorithm nor the asymmetric smoothed greedy algorithm offers unending guarantees for the single-processor cup game without the use of resource augmentation. Even though resource augmentation is needed for the algorithms to achieve unending guarantees, we show that the amount of resource augmentation required is very small. Namely, an inverse-polynomial amount of resource augmentation is both sufficient and necessary for the asymmetric smoothed greedy algorithm to offer unending guarantees on both tail size and backlog.
We generalize our lower-bound construction to work against any emptying algorithm that is both monotone and memoryless, including emptying algorithms that are equipped with a clock. We show that no such emptying algorithm can offer an unending guarantee of subpolynomial backlog in the single-processor cup game, and that any unending guarantee of tail size must come at the cost of polynomial backlog.
We call the filling strategy in our lower bound construction the fuzzing algorithm. The fuzzing algorithm takes a very simple approach: it randomly places water into a pool of cups, and shrinks that pool of cups very slowly over time. The fact that gradually shrinking random noise represents a worst-case workload for cup games suggests that real-world applications of cup games (e.g., processor scheduling, network-switch buffer management, etc.) may be at risk of experiencing “aging” over time, with the performance of the system degrading due to the impossibility of strong unending guarantees.
Related work on other variants of cup games
Extensive work has also been performed on other variants of the cup game. Bar-Noy et al. [5] studied the backlog for a variant of the single-processor cup game in which the filler can place arbitrarily large integer amounts of water into cups at each step. Rather than directly bounding the backlog, which would be impossible, they show that the greedy emptying algorithm achieves competitive ratio , and that this is optimal for both deterministic and randomized online emptying algorithms. Subsequent work has also considered weaker adversaries [21, 17].
Several papers have also explored variants of cup games in which cups are connected by edges in a graph, and in which the emptier is constrained by the structure of the graph [13, 12, 14, 15]. This setting models multi-processor scheduling with conflicts between tasks [14, 15] and some problems in sensor radio networks [12].
Another recent line of work is that of Kuszmaul and Westover [29], which considers a variant of the $p$-processor cup game in which the filler is permitted to change the value of $p$ over time. Remarkably, the optimal backlog in this game is significantly worse than in the standard game, and is polynomial in $n$ for an (adaptive) filler.
Cup games have also been used to model memory-access heuristics in databases [9]. Here, the emptier is allowed to completely empty a cup at each step, but the water from that cup is then “recycled” among the cups according to some probability distribution. The emptier’s goal is to achieve a large recycling rate, which is the average amount of water recycled in each step.
Closely related to the study of cup games is the problem of load balancing, in which one must assign balls to bins in order to minimize the number of balls in the fullest bin. In the classic load balancing problem, balls arrive over time, and each ball comes with a selection of several random bins (out of the $n$ bins) in which it can potentially be placed. The load balancing algorithm gets to select which of these bins to place the ball in, and can, for example, always select the bin with the fewest balls. But what should the algorithm do when choosing between bins that have the same number of balls? In this case, Vöcking famously showed that the algorithm should always break ties in the same direction [40], and that this actually results in an asymptotically better bound on load than if one breaks ties arbitrarily. Interestingly, one can think of the asymmetry used in Vöcking’s algorithm for load balancing as being analogous to the asymmetry used in our algorithm for the cup game: in both cases, the algorithm always breaks ties in the same random direction, although in our result, the way that one should define a “tie” is slightly nonobvious. In the case of Vöcking’s result, the asymmetry is known to be necessary in order to get an optimal algorithm [40]; it remains an open question whether the same is true for the problem of bounding tail size in cup games.
Related work on the roles of backlog and tail size in data structures
Bounds on backlog have been used extensively in data-structure deamortization [2, 19, 18, 3, 37, 25, 20, 27], where the scheduling decisions by the emptier are used to decide how a data structure should distribute its work.
Until recently, the applications focused primarily on in-memory data structures, since external-memory data structures often cannot afford the cost of a buffer overflowing by a super-constant factor. Recent work shows how to use bounds on tail size in order to solve this problem, and presents a new technique for applying cup games to external-memory data structures [10]. A key insight is that if a cup game has small tail size, then the water in “overflowed cups” (i.e., cups with fill more than $2$) can be stored in a small in-memory cache. The result is that every cup consumes only a constant number of blocks in external memory, meaning that each cup can be read/modified by the data structure in $O(1)$ I/Os. This insight was recently applied to external-memory dictionaries in order to eliminate flushing cascades in write-optimized data structures [10].
Outline
The paper is structured as follows. Section 2 describes a new randomized algorithm that achieves small tail size without resource augmentation. Section 3 gives a technical overview of the algorithm’s analysis and of the other results in this paper. Section 4 then presents the full analysis of the algorithm and Section 5 presents (nearly) matching lower bounds. Finally, Section 6 gives lower bounds against unending guarantees and analyzes the amount of resource augmentation needed for such guarantees.
Conventions
Although in principle an arbitrary constant height can be used to determine which cups contribute to the tail size, all of the algorithms in this paper work with height $2$. Thus, throughout the rest of the paper, we define the tail size to be the number of cups with height $2$ or greater.
As a convention, we say that an event occurs with high probability in $n$ if the event occurs with probability at least $1 - 1/n^{c}$ for an arbitrarily large constant $c$ of our choice. The constant $c$ is allowed to affect other constants in the statement. For example, an algorithm whose tail-size bound holds with probability at least $1 - 1/n^{c}$, where the hidden constants in the bound may depend on $c$, is said to achieve that tail size with high probability in $n$.
2. The Asymmetric Smoothed Greedy Algorithm
Past work on randomized emptying algorithms has focused on analyzing the smoothed greedy algorithm [11, 28]. The algorithm begins by randomly perturbing the starting state of the system: the emptier places a random offset of water into each cup, where the offsets are selected independently and uniformly from $[0, 1)$. The emptier then follows a greedy emptying algorithm, removing water from the fullest cup at each step. If the fullest cup contains fill less than $1$, however, then the emptier skips its turn. This ensures that the fractional amount of water in each cup (i.e., the amount of water modulo $1$) is permanently randomized by the initial offset. The randomization of the fractional amounts of water in each cup has been critical to past randomized analyses [11, 28], and continues to play an important (although perhaps less central) role in this paper.
This paper introduces a new variant of the smoothed greedy algorithm that we call the asymmetric smoothed greedy algorithm. The algorithm assigns a random priority to each cup (at the beginning of the game) and uses these priorities to “break ties” when cups contain relatively small amounts of water. Interestingly, by always breaking these ties in the same direction, we change the dynamics of the game in a way that allows for new analysis techniques. We describe the algorithm in detail below.
Algorithm description
At the beginning of the game, the emptier selects a random offset $r_c \in [0, 1)$ independently and uniformly at random for each cup $c$. Prior to the game beginning, $r_c$ units of water are placed in each cup $c$. This water is for “bookkeeping” purposes only, and need not physically exist. During initialization, the emptier also assigns a random priority, chosen independently and uniformly at random from $[0, 1]$, to each cup.
After each step, the emptier selects (up to) $p$ different cups to remove $1$ unit of water from, as follows. If there are $p$ or more cups containing $2$ or more units of water, then the emptier selects the $p$ fullest such cups. Otherwise, the emptier selects all of the cups containing $2$ or more units of water, and then resorts to cups containing fill in $[1, 2)$, choosing between these cups based on their priorities (i.e., choosing cups with larger priorities over those with smaller priorities). The emptier never removes water from any cup containing less than $1$ unit of water.
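The following Python sketch illustrates one step of this selection rule; the data layout and function names are our own illustrative choices rather than part of the algorithm’s specification.

import random

def init_cups(n):
    """Each cup starts with a random bookkeeping offset in [0, 1) (counted as
    fill) and a random priority in [0, 1]."""
    return [{"fill": random.random(), "priority": random.random()}
            for _ in range(n)]

def emptier_step(cups, p=1):
    """One emptier step: remove 1 unit from up to p cups, preferring cups with
    fill >= 2 (fullest first), then cups with fill in [1, 2) (highest priority
    first).  Cups with fill below 1 are never emptied from."""
    heavy = sorted((i for i, c in enumerate(cups) if c["fill"] >= 2),
                   key=lambda i: -cups[i]["fill"])
    light = sorted((i for i, c in enumerate(cups) if 1 <= c["fill"] < 2),
                   key=lambda i: -cups[i]["priority"])
    for i in (heavy + light)[:p]:
        cups[i]["fill"] -= 1

Note that the priorities, unlike the fills, never change during the game; this is the source of the asymmetry.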
Threshold crossings and a threshold queue
When discussing the algorithm, several additional definitions and conventions are useful. We say that the $j$-th threshold of a cup is crossed if the cup contains at least $j$ units of water, for a positive integer $j$. When $j = 1$, the threshold is called a light threshold, and otherwise the threshold is called a heavy threshold. One interpretation of the emptying algorithm is that there is a queue of thresholds that are currently crossed. Whenever the filler places water into cups, this may add thresholds to the queue. And whenever the emptier removes $1$ unit of water from some cup, this removes one of that cup’s thresholds from the queue. When selecting thresholds to remove from the queue, the emptier prioritizes heavy thresholds over light ones. Within the heavy thresholds, the emptier prioritizes based on cup height, and within the light thresholds the emptier prioritizes based on cup priorities.
As a convention, we say that a cup is queued if its light threshold is in the queue (or, equivalently, if any of its thresholds is in the queue). The emptier is said to dequeue a cup whenever the cup’s light threshold is removed from the queue. The size of the queue refers to the number of thresholds in the queue (rather than the number of cups).
3. Technical Overview
In this section we give an overview of the analysis techniques used in the paper. We begin by discussing the analysis of the asymmetric smoothed greedy algorithm. To start, we focus our analysis on the single-processor cup game, in which $p = 1$.
The unpredictability guarantee
At the heart of the analysis is what we call the unpredictability guarantee, which, roughly speaking, establishes that the filler cannot predict large sets of cups that will all be over-full at the same time as one another. We show that if an algorithm satisfies a certain version of the unpredictability guarantee, along with certain natural “greedy-like” properties, then the algorithm is guaranteed to exhibit a small tail size.
Formally, we say that an emptying algorithm satisfies $k$-unpredictability at a step $t$ if, for any oblivious filling algorithm and for any set $S$ of cups whose size is a sufficiently large constant multiple of $k$, there is high probability in $n$ that at least one cup in $S$ has fill less than $1$ after step $t$. In other words, the probability that every cup in $S$ has height $1$ or greater at step $t$ can be made smaller than any desired inverse polynomial in $n$ by taking the constant multiple to be sufficiently large.
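Spelled out with explicit quantifiers (the constant $C_c$ below is our own notation for the “sufficiently large constant multiple”), the requirement reads:
\[
\forall c > 0 \;\; \exists C_c \;\; \forall S \text{ with } |S| \ge C_c\, k : \quad
\Pr\bigl[\text{every cup in } S \text{ has fill} \ge 1 \text{ after step } t\bigr] \;\le\; n^{-c},
\]
where the probability is over the emptier’s randomness and the guarantee must hold against every oblivious filling strategy.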
How $k$-unpredictability helps
Rather than proving that $k$-unpredictability causes the tail size to stay small, we instead show the contrapositive. Namely, we show that if there is a filling strategy that achieves a large tail size, then the strategy can be adapted to instead violate $k$-unpredictability.
Suppose that the filler is able to achieve a tail size that is a large constant multiple of $k$ at some step $t$. Then during each of roughly that many subsequent steps, the emptier will remove water only from cups containing fill $2$ or more (here, we use the crucial fact that the emptier always prioritizes cups with fills $2$ or greater over cups with fills smaller than $2$). This means that, during those steps, the set of cups with fill $1$ or greater is monotonically increasing. During these steps the filler can place its $1$ unit of water per step into fresh, pre-selected cups in order to ensure that those cups all contain fill $1$ or greater afterwards. Thus the filler can transform the initial large tail size into a large pre-determined set of cups that all have fill $1$ or greater. In other words, any filling strategy for achieving large tail size (at some step $t$) can be harnessed to violate $k$-unpredictability (at some later step).
The directness of the argument above may seem to suggest that, in order to prove $k$-unpredictability, one must first (at least implicitly) prove a bound on tail size. A key insight in this paper is that the use of priorities in the asymmetric smoothed greedy algorithm allows for $k$-unpredictability to be analyzed as its own entity.
Our algorithm analysis establishes $O(\log n)$-unpredictability for the first $\operatorname{poly}(n)$ steps of any cup game, with high probability in $n$. This, in turn, implies a bound of $O(\log n)$ on tail size.
Establishing unpredictability
We prove that, out of the roughly $n/2$ cups whose priorities lie in the top half, at most $O(\log n)$ are queued (i.e., contain fill $1$ or greater) at a time, with high probability in $n$. Recall that cups with larger priorities are favored by the asymmetric smoothed greedy algorithm when the algorithm is choosing between cups with fills in the interval $[1, 2)$. This preferential treatment does not extend to the case where there are cups containing fill $2$ or more, however. Remarkably, the limited preferential treatment exhibited by the algorithm is enough to ensure that the number of queued high-priority cups never exceeds $O(\log n)$.
The bound of $O(\log n)$ on the number of queued cups with priorities in the top half implies $O(\log n)$-unpredictability as follows. For any fixed set $S$ of cups whose size is at least a large constant multiple of $\log n$, roughly half of the cups in $S$ will have priority in the top half, with high probability in $n$. If $|S|$ is at least a sufficiently large constant multiple of $\log n$, then the number of such high-priority cups in $S$ exceeds the total number of high-priority cups that are queued. Thus $S$ must contain at least one non-queued cup, as required for the unpredictability guarantee.
In order to bound the number of queued high-priority cups by $O(\log n)$, we partition the cups into priority levels based on their priorities. Let $L$ be a sufficiently large constant multiple of $\log n$. The priority level of a cup is determined by which of the $L$ equal-width subintervals of $[0, 1]$ its priority falls into, with higher levels corresponding to higher priorities. (Note that the priority levels are only needed in the analysis, and the algorithm does not have to know $L$.) We show that, with high probability in $n$, the number of queued cups with priority level $j$ or greater decays rapidly as $j$ grows, and in particular is $O(\log n)$ once $j$ reaches the top half of levels. Note that, although we only care about bounding the number of queued cups whose priority levels are in the top fifty percent, our analysis will take advantage of the fact that the priorities are defined at a high granularity (rather than, for example, being boolean).
The stalled emptier problem
Bounding the number of queued cups with priority level greater than a given level $j$ directly is difficult for the following reason: over the course of a sequence of steps, the filler may cross many light thresholds in cups with priority level greater than $j$, while the emptier only removes heavy thresholds from the queue (i.e., the emptier empties exclusively from cups of height $2$ or greater). This means that, in a given sequence of steps, the number of queued cups with priority level greater than $j$ could increase substantially. We call this the stalled-emptier problem. Note that the stalled-emptier problem is precisely what enables the connection between tail size and $k$-unpredictability above, allowing the filler to transform large tail size into a violation of $k$-unpredictability. As a result, any analysis that directly considers the stalled-emptier problem must also first bound tail size, bringing us back to where we started.
Rather than bounding the number of queued cups with priority level greater than $j$ in absolute terms, we instead compare it to the number of queued cups at priority level $j$. The idea is that, if the stalled-emptier problem allows the number of queued cups with priority level greater than $j$ to grow large, then it will allow the number of queued priority-level-$j$ cups to grow even larger. That is, without proving any absolute bound on the number of cups at a given priority level, we can still say something about the ratio of high-priority cups to low-priority cups in the queue.
To be precise, we prove that the number of queued cups at priority level greater than $j$ is always at most roughly half the number of queued cups at priority level $j$ or greater, up to a small additive error (recall that $L$ is the number of priority levels). Since the number of cups with priority level at least $1$ is deterministically anchored at $n$, this allows us to inductively bound the number of queued cups with large priority levels. In particular, the number of queued cups at priority level $L/2$ or greater never exceeds $O(\log n)$.
Comparing the number of queued cups at priority level $j$ versus priority level greater than $j$
Suppose that after some step $t$, there is some large number of queued cups with priority level $j$ or greater. We wish to show that almost all of these cups have priority level exactly $j$.
Before describing our approach in detail (which we do in the following two subsections), we give an informal description of the approach. The only way that there can be a large number of queued cups with priority level greater than $j$ is if they have all entered the queue since the last time that a cup with priority level $j$ or smaller was dequeued; in particular, the size of the queue must have grown by at least that many thresholds since that dequeue. On the other hand, we show that priority-level-$j$ cups accumulate in the queue at a much faster rate than the size of the queue varies. In particular, we show that both the rate at which priority-level-$j$ cups accumulate in the queue and the rate at which the queue’s size varies are controlled by what we call the “influence” of a time interval, and that the former is always much larger than the latter. This ensures that the queued cups with priority level greater than $j$ are far outnumbered by the queued cups with priority level exactly $j$.
Note that the analysis avoids arguing directly that the number of queued high-priority cups is small, which could be difficult due to the stalled-emptier problem. Intuitively, the analysis instead shows that low-priority cups do a good job of “pushing” the high-priority cups out of the queue, ensuring that the ratio of low-priority cups (i.e., cups with priority level $j$) to high-priority cups (i.e., cups with priority level greater than $j$) in the queue is always very large.
Relating the number of high-priority queued cups to changes in the queue’s size
Let $t_0$ be the most recent step such that a large number of distinct cups with priority level $j$ cross thresholds during steps $t_0 + 1, \ldots, t$ (where “large” should be thought of as a sufficiently large multiple of the number of queued cups with priority level greater than $j$ after step $t$). One can think of the steps $t_0 + 1, \ldots, t$ as representing a long period of time in which many cups with priority level $j$ have the opportunity to accumulate in the queue. We will now show that the use of priorities in the asymmetric smoothed greedy algorithm causes the following property to hold: the number of queued cups with priority level greater than $j$ after step $t$ is bounded above by the amount that the queue’s size varies during steps $t_0 + 1, \ldots, t$.
Because many cups with priority level $j$ cross thresholds during steps $t_0 + 1, \ldots, t$, at least one cup with priority level $j$ or smaller must be dequeued during these steps (otherwise, the queue would already contain a very large number of level-$j$ cups after step $t$, and we would be done). Let $t'$ be the final such step, and consider the queue just after step $t'$ and just after step $t$.
By design, the only way that the asymmetric smoothed greedy algorithm can dequeue a cup with priority level $j$ or smaller at step $t'$ is if the queue at that point consists exclusively of light thresholds for cups with priority level $j$ or smaller. Moreover, those thresholds must all remain present in the queue after step $t$, since by the definition of $t'$ no cups with priority level $j$ or smaller are dequeued during steps $t' + 1, \ldots, t$.
Since the queue after step $t'$ contains only thresholds for cups with priority level $j$ or smaller, and since those thresholds survive to step $t$, the total number of thresholds in the queue after step $t$ belonging to cups with priority level greater than $j$ is at most the amount by which the queue grows between steps $t'$ and $t$. In other words, the only way that a large number of cups with priority level greater than $j$ can be queued after step $t$ is if the size of the queue varies by a large amount during steps $t' + 1, \ldots, t$.
Although the interval $t_0 + 1, \ldots, t$ may be very long, we show that the amount by which the queue’s size varies during steps $t' + 1, \ldots, t$ is small compared to the number of priority-level-$j$ cups that cross thresholds during the interval. This means that, out of the queued cups with priority level $j$ or greater after step $t$, only a small fraction can have priority level greater than $j$.
The influence property: bounding the rate at which the queue’s size varies
The main tool for analyzing the rate at which the queue’s size varies is to analyze sequences of steps based on their influence. For a sequence of steps $S$, the influence of $S$ is defined in terms of the amount of water poured into each individual cup during $S$; intuitively, it controls the amount of randomness in the number of threshold crossings that occur during $S$. We show that, for any priority level $j$ and for any step interval $S$, either the influence of $S$ is small, or two important properties are guaranteed to hold with high probability:
• Step interval $S$ crosses thresholds in many distinct cups with priority level $j$ (at least proportionally to the influence of $S$ divided by $L$). This is true of any interval with sufficiently large influence, by a simple concentration-bound argument.
• The size of the queue varies by only a small amount during step interval $S$ (roughly the square root of the influence, up to logarithmic factors).
The key here is to show that, during each subinterval of $S$, the number of thresholds crossed by the filler is close to the number of steps in the subinterval (which is what it is in expectation, since the filler pours exactly $1$ unit per step). In order to do this, we take advantage of the initial random offsets that are placed into each cup by the algorithm. If the filler puts $d$ units of water into a cup during the subinterval, then the cup will deterministically cross at least $\lfloor d \rfloor$ thresholds, and with probability $d - \lfloor d \rfloor$ will cross one additional threshold (with the outcome depending on the cup’s random offset). If we consider only the threshold crossings that are not certain, then their number is a sum of independent 0-1 random variables whose total mean is bounded by the influence of $S$. By a Chernoff bound, this number varies from its mean by at most roughly the square root of the influence (times a logarithmic factor), with high probability in $n$.
Combined, we call these two properties the influence property. By a union bound, the influence property holds, with high probability in $n$, for all subsequences of steps during the cup game and for all priority levels.
The influence property creates a link between the number of cups with priority level $j$ that cross thresholds during a sequence of steps, and the amount by which the queue’s size varies during those steps. Applying this link to the steps $t_0 + 1, \ldots, t$ defined above implies that the queue’s size varies by much less than the number of priority-level-$j$ threshold crossings during those steps. This, in turn, bounds the number of queued cups with priority level greater than $j$ after step $t$, completing the analysis.
Extending the analysis to the multi-processor cup game
The primary difficulty in analyzing the multi-processor cup game (i.e., the case $p > 1$) is that the emptier must remove water from $p$ different cups, even if almost all of the water in the system resides in fewer than $p$ cups. For example, the emptier may dequeue a cup even though there are other higher-priority cups that are still queued; furthermore, each of these higher-priority cups may contribute a large number of heavy thresholds to the queue.
We solve this issue by leveraging recent bounds on backlog for the $p$-processor cup game [28], which prove that the deterministic greedy emptying algorithm achieves backlog $O(\log n)$. This can be used to ensure that each queued cup can only contribute a relatively small number of thresholds to the queue. These “misbehaving” thresholds can then be absorbed into the algorithm analysis.
Nearly matching lower bounds on tail size
Our lower-bound constructions extend the techniques used in past works for backlog [28, 11, 19] in order to apply similar ideas to tail size. One of the surprising features of our lower bounds is that they continue to be nearly tight even in the multi-processor case — the same is not known to be true for backlog. We defer further discussion of the lower bounds to Section 5.
Lower bounds against unending guarantees
Finally, we consider the question of whether the analysis of the asymmetric smoothed greedy algorithm can be extended to offer an unending guarantee, i.e., a guarantee that for any step $t$, no matter how large, there is a high probability at step $t$ that the backlog and tail size are small.
We show that, without the use of resource augmentation, unending guarantees are not possible for the asymmetric smoothed greedy algorithm, or, more generally, for any monotone memoryless emptying algorithm. Lower bounds against unending guarantees have previously been shown for the multi-processor cup game [28], but remained open for the single-processor cup game.
The filling strategy, which we call the fuzzing algorithm, has a very simple structure: the filler spends a large number of steps randomly placing water (in small fixed increments) into cups $1, \ldots, n$. The filler then disregards a random cup, which for convenience we will denote by cup $n$, and spends a large number of steps randomly placing water into the remaining cups $1, \ldots, n-1$. The filler then disregards another random cup, which we will call cup $n-1$, and spends a large number of steps randomly placing water into cups $1, \ldots, n-2$, and so on. We call the portion of the algorithm during which the filler is focusing on cups $1, \ldots, k$ the $k$-cup phase.
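The following sketch shows the shape of the fuzzing filler; the increment size, phase length, and stopping condition are illustrative placeholders rather than the parameters used in the actual construction.

import random

def fuzzing_filler(n, phase_len, increment=0.5):
    """Sketch of the fuzzing filler: repeatedly place random increments of
    water into a pool of cups, shrinking the pool by one randomly chosen cup
    after each (long) phase.  Increment size and phase length are placeholders."""
    pool = list(range(n))
    while len(pool) >= 2:
        for _ in range(phase_len):           # the k-cup phase, k = len(pool)
            step, budget = [], 1.0           # the filler places 1 unit per step
            while budget > 1e-9:
                amt = min(increment, budget)
                step.append((random.choice(pool), amt))
                budget -= amt
            yield step                        # one step's (cup, amount) pours
        pool.remove(random.choice(pool))      # disregard a random cup

Each yielded list describes one step’s pours; a game driver would apply these pours and then invoke the emptier.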
Rather than describe the analysis of the fuzzing algorithm (which is somewhat complicated), we instead give an intuition for why the algorithm works. For simplicity, suppose the emptier follows the (standard) smoothed greedy emptying algorithm.
Between the $(k+1)$-cup phase and the $k$-cup phase, the filler disregards a random cup (that we subsequently call cup $k+1$). Intuitively, at the time that cup $k+1$ is discarded, there is roughly a $1/2$ chance that cup $k+1$ has more fill than the average of cups $1, \ldots, k$. Then, during the $k$-cup phase, there is a reasonably high probability that the filler at some point manages to make all of cups $1, \ldots, k$ have almost equal fills to one another. At this point, the emptier will choose to empty out of cup $k+1$ instead of cups $1, \ldots, k$. The fact that the emptier neglects cups $1, \ldots, k$ during that step, even though the filler places $1$ unit of water into them, causes their average fill to increase by $1/k$. Since this happens with constant probability in every phase, the result is that, by the time the game reaches phases with few cups, each remaining cup has an expected fill that has grown by roughly the sum of these per-phase increments.
Formalizing this argument leads to several interesting technical problems. Most notably, the cups having almost equal fills (rather than exactly equal fills) may not be enough for cup $k+1$ to receive the emptier’s attention. Moreover, if we wish to analyze the asymmetric smoothed greedy algorithm or, more generally, the class of monotone memoryless algorithms, then cups are not necessarily judged by the emptying algorithm based on their fill heights, and may instead be selected based on an essentially arbitrary objective function that need not treat cups symmetrically. These issues are handled in Section 6 by replacing the notion of cups having almost equal fills with the notion of cups reaching a certain type of specially designed equilibrium state that interacts well with the emptier.
4. Algorithm analysis
In this section, we give the full analysis of the $p$-processor asymmetric smoothed greedy algorithm. The main result of the section is Theorem 4.10, which bounds the tail size of the game by $O(p + \log n)$ for the first $\operatorname{poly}(n)$ steps of the game, with high probability in $n$.
In addition to using the conventions from Section 2, we find it useful to introduce one additional notation: for a sequence of steps $S$ and a cup $c$, define $w_c(S)$ to be the amount of water placed into cup $c$ during $S$. We also continue to use the convention from Section 3 that the number of priority levels $L$ is a large constant multiple of $\log n$, and that each cup’s priority level in $\{1, \ldots, L\}$ is determined by which of the $L$ equal-width subintervals of $[0, 1]$ its priority falls into.
Recall that a cup crosses a threshold whenever the fill of the cup increases from some quantity less than $j$ to some quantity at least $j$, for a positive integer $j$. A key property of the smoothed greedy algorithm, which was originally noted by Bender et al. [11], is that the number of threshold crossings across any sequence of steps can be expressed using a sum of independent 0-1 random variables. (Note that, when counting the number of threshold crossings across a sequence of steps, the same threshold may get crossed multiple times, and thus contribute more than once to the count.) This remains true for the asymmetric smoothed greedy algorithm, and is formalized in Lemma 4.1.
Lemma 4.1 (Counting threshold crossings).
For a sequence of steps $S$, and for a cup $c$, the number of threshold crossings in cup $c$ during $S$ is $\lfloor w_c(S) \rfloor + X_c$, where $X_c$ is a 0-1 random variable with mean $w_c(S) - \lfloor w_c(S) \rfloor$. Moreover, the random variables $X_1, X_2, \ldots, X_n$ are independent.
Proof.
Recall that the emptier only removes water from a cup if the cup contains at least $1$ unit. Moreover, when the emptier does remove water from a cup, it removes exactly $1$ unit. Since threshold crossings in each cup depend only on the fractional amount of water (i.e., the amount of water modulo $1$) in the cup, the behavior of the emptier cannot affect when thresholds are crossed within each cup.
Let $t_0$ be the final step prior to interval $S$. For each cup $c$, the fractional amount of water in the cup at the beginning of interval $S$ is

(1)  $\bigl(r_c + w_c(\{1, \ldots, t_0\})\bigr) \bmod 1 .$

Since $r_c$ is uniformly random in $[0, 1)$, it follows that (1) is as well. The first $w_c(S) - \lfloor w_c(S) \rfloor$ units of water poured into cup $c$ during interval $S$ will therefore cross a threshold with probability exactly $w_c(S) - \lfloor w_c(S) \rfloor$. The remaining $\lfloor w_c(S) \rfloor$ units of water placed into cup $c$ are then guaranteed to cause exactly $\lfloor w_c(S) \rfloor$ threshold crossings. The number of crossings in cup $c$ during the step sequence $S$ is therefore $\lfloor w_c(S) \rfloor + X_c$, where $X_c$ is a 0-1 random variable with mean $w_c(S) - \lfloor w_c(S) \rfloor$, and where the randomness in $X_c$ is due to the random initial offset $r_c$. Because the offsets $r_1, \ldots, r_n$ are independent, so are $X_1, \ldots, X_n$. ∎
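As a quick sanity check of Lemma 4.1 (purely illustrative; the pour amounts and trial count below are arbitrary), one can simulate pouring water into a single cup whose starting fractional fill is uniform and verify that the number of crossings behaves like $\lfloor d \rfloor$ plus a Bernoulli with mean $d - \lfloor d \rfloor$:

import random

def crossings(offset, pours):
    """Number of integer thresholds crossed when pouring `pours` into a cup
    whose (fractional) fill starts at `offset`."""
    fill, count = offset, 0
    for amount in pours:
        count += int(fill + amount) - int(fill)
        fill += amount
    return count

def check(trials=200_000):
    pours = [0.4] * 6 + [0.3]          # total poured: d = 2.7
    extra = sum(crossings(random.random(), pours) - 2 for _ in range(trials))
    print(f"empirical P[extra crossing] = {extra / trials:.3f}  (Lemma 4.1 predicts 0.7)")

check()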
One consequence of Lemma 4.1 is that, if a sequence of steps has a large influence, then each priority level will have many distinct cups that cross thresholds during the interval (roughly the influence divided by $L$, where $L$ is the number of priority levels).
Lemma 4.2 (The influence property, part 1).
Consider a sequence of steps $S$ with influence $I$, and let $j$ be a priority level. With high probability in $n$, the number of distinct cups with priority level $j$ that cross thresholds during the sequence of steps $S$ is at least $\Omega(I/L) - O(\log n)$.
Proof.
By Lemma 4.1, each cup independently crosses at least one threshold during the step sequence $S$ with probability governed by the amount of water that it receives during $S$. The number of distinct cups that cross thresholds during $S$ is therefore a sum of independent indicator random variables whose total mean is at least proportional to the influence $I$ of $S$. Since each cup independently has probability $1/L$ of having priority level $j$, the number of cups with priority level $j$ that cross thresholds during $S$ is a sum of independent indicator random variables whose total mean is at least proportional to $I/L$.
If $I/L$ is at most a constant multiple of $\log n$, then the claimed bound is nonpositive and holds trivially. Suppose, on the other hand, that $I/L \ge c \log n$ for a sufficiently large constant $c$. Then by a Chernoff bound, the number of such cups is at least a constant fraction of its mean with high probability in $n$,
completing the proof. ∎
The proofs of the preceding lemmas have not needed to explicitly consider the effect of there being a potentially large number of processors. In subsequent proofs, the multi-processor case will complicate the analysis in two ways. First, the emptier may sometimes dequeue a cup even when there are more than $p$ heavy thresholds in the queue (this can happen when the heavy thresholds all belong to a set of fewer than $p$ cups). Second, and similarly, the emptier may sometimes be unable to remove a full $p$ thresholds from the queue, even when the queue contains at least $p$ thresholds (this can happen if the thresholds in the queue belong to a set of fewer than $p$ cups). It turns out that both of these problems can be circumvented using the fact that the emptying algorithm achieves small backlog. In particular, this ensures that no single cup can ever contribute more than $O(\log n)$ thresholds to the queue:
Lemma 4.3 (Kuszmaul [28]).
In any multi-processor cup game of polynomial length, the asymmetric smoothed greedy algorithm achieves backlog $O(\log n)$ after each step, with high probability in $n$.
Using Lemma 4.3 as a tool to help in the case of $p > 1$, we now return to the analysis approach outlined in Section 3.
Remark 4.4.
The proof of Lemma 4.3 given in [28] is highly nontrivial. We remark that, although Lemma 4.3 simplifies our analysis, there is also an alternative, lighter-weight approach that one can use in place of the lemma. In particular, one can begin by analyzing the height-truncated cup game for some sufficiently large truncation height. In this game, the height of each cup is deterministically bounded above by the truncation height, and whenever the height of a cup would exceed that height, $1$ unit of water is removed from the cup (and that unit does not count as part of the emptier’s turn). The truncated cup game automatically satisfies the backlog property stated by Lemma 4.3, allowing for it to be analyzed without requiring the lemma. The analysis can then be used to bound the backlog of the truncated cup game with high probability (using the analysis by [28] of the greedy algorithm, applied to the cups in the tail). It follows that, with high probability, the truncated cup game and the (standard) cup game are indistinguishable. This means that the high-probability bounds on tail size for the truncated cup game carry over directly to the standard cup game.
The next lemma shows that, even though many threshold crossings may occur during a sequence of steps, the size of the queue varies by only a small amount as a function of the influence of that sequence.
Lemma 4.5 (The influence property, part 2).
Consider a sequence of steps $S$ with influence at most $I$, during a game of length at most $\operatorname{poly}(n)$. For each step $t$, let $Q_t$ denote the queue after step $t$. With high probability in $n$, the size of the queue changes by at most
$O\bigl(\sqrt{I \log n} + p \log n\bigr)$
over the course of $S'$, for all subintervals $S'$ of $S$.
Proof.
We begin with a simpler claim:
Claim 4.6.
For any subinterval $S'$ of $S$, the number of threshold crossings during $S'$ is within $O\bigl(\sqrt{I \log n} + \log n\bigr)$ of $p \cdot |S'|$, where $|S'|$ denotes the number of steps in $S'$, with high probability in $n$.
Proof.
Because $S$ has influence at most $I$, so does $S'$. Lemma 4.1 tells us that the number of threshold crossings that occur during $S'$ equals a deterministic quantity plus a sum of independent 0-1 random variables, and that the expectation of the total is exactly the amount of water poured during $S'$, namely $p \cdot |S'|$. Moreover, the 0-1 random variables have total mean at most the influence of $S'$, and thus at most $I$. By a multiplicative Chernoff bound, a sum of independent 0-1 random variables with total mean at most $I$ deviates from its mean by more than $c\bigl(\sqrt{I \log n} + \log n\bigr)$ with probability at most $1/\operatorname{poly}(n)$, where the polynomial can be made arbitrarily large by increasing the constant $c$. Since the deterministic part of the crossing count does not deviate at all, it follows that the number of threshold crossings in interval $S'$ is within $O\bigl(\sqrt{I \log n} + \log n\bigr)$ of $p \cdot |S'|$ with high probability in $n$. ∎
Applying a union bound over the (at most polynomially many) subintervals of $S$, Claim 4.6 tells us that, with high probability in $n$, every subinterval’s number of threshold crossings is within $O\bigl(\sqrt{I \log n} + \log n\bigr)$ of $p$ times its length.
To complete the proof, consider some subinterval $S'$ and let $\Delta$ be the (absolute) amount by which the queue changes in size during $S'$. We wish to show that $\Delta \le O\bigl(\sqrt{I \log n} + p \log n\bigr)$.
Suppose first that the queue shrinks by $\Delta$ during $S'$. The emptier removes at most $p$ thresholds per step, so the number of threshold crossings in subinterval $S'$ would have to be at most $p \cdot |S'| - \Delta$, meaning that $\Delta \le O\bigl(\sqrt{I \log n} + \log n\bigr)$, as desired.
Suppose, on the other hand, that the queue grows by $\Delta$ during $S'$. Call a step removal-friendly if the emptier removes a full $p$ units of water during the step (i.e., prior to the emptier removing water, there are at least $p$ cups with height $1$ or greater). By Lemma 4.3, with high probability in $n$, the size of the queue after any removal-unfriendly step is at most $O(p \log n)$. If $S'$ consists exclusively of removal-friendly steps, then the filler must cross at least $p \cdot |S'| + \Delta$ thresholds in order to increase the queue’s size by $\Delta$; thus $\Delta \le O\bigl(\sqrt{I \log n} + \log n\bigr)$. On the other hand, if $S'$ contains at least one removal-unfriendly step, then there must be some last such step in $S'$. Since the queue has size at most $O(p \log n)$ after that step, the queue must grow by at least $\Delta - O(p \log n)$ during the remaining steps of $S'$, which consist entirely of removal-friendly steps. Applying the reasoning from the first case (i.e., the case of only removal-friendly steps) to these remaining steps, we deduce that $\Delta \le O\bigl(\sqrt{I \log n} + p \log n\bigr)$, completing the proof. ∎
Combined, Lemmas 4.2 and 4.5 give the influence property discussed in Section 3. Using this property, we can now relate the number of queued cups with priority level greater than $j$ to the number of queued cups with priority level $j$ or greater, for a given priority level $j$.
Lemma 4.7 (Accumulation of low-priority cups).
Let $j$ be a priority level, let $m$ be the number of queued cups with priority level $j$ or greater after a step $t$, and let $m'$ be the number of queued cups with priority level strictly greater than $j$ after step $t$. With high probability in $n$, $m'$ is at most a constant fraction of $m$, plus a lower-order additive error term.
Proof.
For each sufficiently large influence threshold, define the corresponding interval to be the smallest step-interval ending at step $t$ whose influence is at least that threshold (or the entire game so far, if the total influence of the game is smaller than the threshold). By Lemmas 4.2 and 4.5, each such interval satisfies the following two properties with high probability in $n$:
• The many-crossings property. Either the interval is the entire game so far, or the number of priority-level-$j$ cups that cross thresholds during the interval is large (proportional to the influence threshold divided by $L$).
• The low-variance property. The size of the queue varies by only a small amount during the interval. To see this, we use the fact that the interval has influence not much larger than the threshold (by minimality), which by Lemma 4.5 limits the amount by which the queue’s size can vary to roughly the square root of the influence, up to logarithmic factors and an additive $O(p \log n)$ term.
Collectively, this pair of properties is called the influence property. By a union bound, the influence property holds for all of the relevant intervals with high probability in $n$. In what follows, let $S^\ast$ denote the interval from this family whose influence threshold is chosen large enough that the many-crossings property guarantees more than $m$ distinct priority-level-$j$ cups crossing thresholds during $S^\ast$, and let $t_0$ denote the step at which $S^\ast$ begins.
If $S^\ast$ is the entire game so far, then the queue has never had the chance to grow large: it begins empty at the start of the game, and the low-variance property (applied to the whole game) bounds its size throughout. In this case the bound of the lemma is immediate. In the rest of the proof, we focus on the case in which $S^\ast$ is a proper suffix of the game.
If the emptier never dequeues any cups with priority level $j$ or smaller during $S^\ast$, then by the many-crossings property, a large number of priority-level-$j$ cups are queued after step $t$ (every level-$j$ cup that crosses a threshold during $S^\ast$ is queued at that moment and is never subsequently dequeued). The number of queued cups with priority level greater than $j$ is therefore only a small fraction of $m$, as desired.
Suppose, on the other hand, that there is at least one step in $S^\ast$ at which the emptier dequeues a cup with priority level $j$ or smaller, and let $t'$ be the last such step. Let $Q_{t'}$ be the set of queued thresholds after step $t'$, and let $Q_t$ be the set of queued thresholds after step $t$. Call a threshold permanent if it is a light threshold for a cup with priority level $j$ or smaller. All permanent thresholds in $Q_{t'}$ are guaranteed to also be in $Q_t$, since $t'$ is the final step in $S^\ast$ during which the emptier dequeues such a threshold. The non-permanent thresholds in $Q_{t'}$ must reside in a set of fewer than $p$ cups, since otherwise the emptier would rather have dequeued one of them during step $t'$ than have dequeued a cup with priority level $j$ or smaller. By Lemma 4.3, the number of non-permanent thresholds in $Q_{t'}$ is therefore at most $O(p \log n)$, with high probability in $n$.
By the low-variance property, the sizes of $Q_{t'}$ and $Q_t$ differ by only a small amount. It follows that the permanent thresholds make up all but a small number of the thresholds in $Q_t$.
Since every permanent threshold belongs to a cup with priority level $j$ or smaller, the cups with priority level greater than $j$ that are queued after step $t$ must account for the (small number of) non-permanent thresholds in $Q_t$. This bounds the number of queued cups after step $t$ with priority level greater than $j$, completing the proof.
∎
∎
Since the number of queued cups with priority level $1$ or greater can never exceed $n$, Lemma 4.7 allows us to bound the number of queued cups at higher priority levels inductively. We argue that, if $L$ is a sufficiently large constant multiple of $\log n$, then the number of queued cups with priority level in the top half of levels never exceeds $O(p + \log n)$, with high probability in $n$. This can then be used to obtain $O(p + \log n)$-unpredictability, as defined in Section 3.
Lemma 4.8 (The unpredictability guarantee).
Consider a cup game of length $\operatorname{poly}(n)$. For any step $t$, and for any set $S$ of cups whose size is a sufficiently large constant multiple of $p + \log n$, at least one cup in $S$ is not queued after step $t$, with high probability in $n$. In other words, each step in the game satisfies $O(p + \log n)$-unpredictability.
Proof.
Suppose the number of priority levels $L$ is set to be a sufficiently large constant multiple of $\log n$. For each priority level $j$, let $m_j$ denote the maximum, over the course of the game, of the number of queued cups with priority level $j$ or greater. We claim that $m_j = O(p + \log n)$ for every $j \ge L/2$, with high probability in $n$.
To see this, apply Lemma 4.7 at every level and at every step: for each $j$, either $m_j$ is already small, or $m_{j+1}$ is at most a constant fraction of $m_j$ (plus a lower-order additive term). Since $m_1 \le n$, it follows that, as long as $L$ is a sufficiently large constant multiple of $\log n$, the sequence $m_1, m_2, \ldots$ decays below the additive term by level $L/2$, and thus $m_{L/2} = O(p + \log n)$.
Now consider a set $S$ of cups of size at least $C(p + \log n)$, where $C$ is a sufficiently large constant. By a Chernoff bound, the number of cups in $S$ with priority level greater than $L/2$ is at least $|S|/4$, with high probability in $n$. Since $C$ is a sufficiently large constant, this means that $S$ contains more than $m_{L/2}$ cups with priority level greater than $L/2$. Thus the cups in $S$ with priority level greater than $L/2$ cannot all be queued after step $t$.
It follows that every set $S$ whose size is a sufficiently large constant multiple of $p + \log n$ has, with high probability in $n$, at least one non-queued cup after step $t$, completing the proof of $O(p + \log n)$-unpredictability. ∎
To complete the analysis of the algorithm, we must formalize the connection between the unpredictability guarantee and tail size.
Lemma 4.9.
Suppose that a (randomized) emptying algorithm for the $p$-processor cup game on $n$ cups satisfies $k$-unpredictability at every step of any game of polynomial length, and further satisfies the “greediness property” that, whenever there is a cup of height at least $2$, the algorithm empties out of such a cup. Then in any game of polynomial length, the tail size after each step is $O(k)$ with high probability in $n$.
Proof.
Consider a polynomial $q(n)$, let $C$ be a large constant, and let $R = C k$. Suppose for contradiction that there is a filling strategy such that, at some time $t_0$, the tail size is at least $R$ with probability at least $1/q(n)$. Since the tail size is at least $R$ at time $t_0$, during each of roughly the next $R/p$ steps the emptier removes water exclusively from cups with fills at least $2$. This means that the set of cups containing $1$ or more units of water is monotonically increasing during those steps. If the filler places $1$ unit of water into each of roughly $R$ pre-selected cups during those steps, then each of those cups has fill $1$ or greater afterwards.
The preceding construction guarantees that, with probability at least $1/q(n)$, all of the pre-selected cups contain at least $1$ unit of water at some step. If $C$ is a sufficiently large constant, this violates $k$-unpredictability at that step, a contradiction. ∎
Using the unpredictability guarantee, we can now complete the algorithm analysis:
Theorem 4.10.
Consider a $p$-processor cup game that lasts for $\operatorname{poly}(n)$ steps and in which the emptier follows the asymmetric smoothed greedy algorithm. Then with high probability in $n$, the number of cups containing $2$ or more units of water never exceeds $O(p + \log n)$ and the backlog never exceeds $O(\log(p + \log n))$ during the game.
Proof.
By Lemma 4.8, $O(p + \log n)$-unpredictability holds at each step in any game of length $\operatorname{poly}(n)$. By Lemma 4.9, it follows that the tail size remains at most $O(p + \log n)$, with high probability in $n$, during any game of length $\operatorname{poly}(n)$.
Lemma 4.3 bounds the height of the fullest cup in each step by $O(\log n)$ with high probability in $n$. Alternatively, by considering a cup game consisting only of the cups that contain more than $1$ unit of water, the analysis of the deterministic greedy emptying algorithm (see [1] for $p = 1$ and [28] for $p > 1$) on $O(p + \log n)$ cups implies that no cup ever contains more than $O(\log(p + \log n))$ water, with high probability in $n$. ∎
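As a usage illustration of the Section 2 sketch (it assumes the hypothetical `init_cups` and `emptier_step` helpers defined there), the following driver plays a game against a simple random filler and records the worst tail size and backlog observed; a random filler is, of course, far weaker than the adversarial fillers considered in the lower bounds.

import random
# assumes init_cups and emptier_step from the sketch in Section 2

def random_filler_step(cups, p=1):
    """Place 1 unit into each of p distinct random cups (at most 1 per cup)."""
    for i in random.sample(range(len(cups)), p):
        cups[i]["fill"] += 1.0

def run_game(n=1000, steps=20_000, p=1):
    cups = init_cups(n)
    worst_tail, worst_backlog = 0, 0.0
    for _ in range(steps):
        random_filler_step(cups, p)
        emptier_step(cups, p)
        worst_tail = max(worst_tail, sum(c["fill"] >= 2 for c in cups))
        worst_backlog = max(worst_backlog, max(c["fill"] for c in cups))
    print(f"worst tail size: {worst_tail}, worst backlog: {worst_backlog:.2f}")

run_game()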
5. Lower Bounds
In this section we prove that the asymmetric smoothed greedy algorithm achieves (near) optimal tail size within the class of backlog-bounded algorithms.
An emptying algorithm is backlog bounded if the algorithm guarantees that the backlog never exceeds $q(n)$ for some polynomial $q$. This is a weak requirement, in that even the deterministic greedy algorithm achieves a bound of $O(\log n)$ on backlog [1, 28]. The main result in this section states that any backlog-bounded emptying algorithm must allow, with reasonable probability, a tail size nearly matching the upper bound achieved by the asymmetric smoothed greedy algorithm. The lower bound continues to hold even when the height requirement for a cup to be in the tail is increased to an arbitrarily large constant (rather than $2$). Part of the lower bound (Lemma 5.3) applies even to emptying algorithms that are not backlog-bounded.
Theorem 5.1.
Let $h$ be a constant, and suppose $n$ is at least a sufficiently large constant (large relative to $h$). For any backlog-bounded emptying strategy, there is a $\operatorname{poly}(n)$-step oblivious randomized filling strategy that gives the following guarantee: after the final step of the filling strategy, the number of cups with fill $h$ or greater nearly matches the tail-size upper bound of Theorem 4.10, with probability close to $1$.
To prove Theorem 5.1, we begin by describing a simple parameterized lower-bound construction, given in Lemma 5.2 below. The strategy is structurally similar to the lower-bound construction for backlog given by Bender et al. [11].
Lemma 5.2.
For suitable settings of its parameters, there exists an oblivious randomized filling strategy for the $p$-processor cup game on $n$ cups that causes a prescribed number of cups to each reach a prescribed fill, with probability bounded below in terms of the parameters. We refer to this construction as the filling strategy of Lemma 5.2.
Proof.
Define the filling strategy for the $p$-processor cup game on $n$ cups as follows. The strategy maintains a shrinking set of active cups. In each step, the filler splits its water evenly among the active cups. After each step, the filler removes a few cups from the active set, chosen uniformly at random. The strategy completes once the active set has shrunk to its target size.
We say that the filling strategy succeeds if, at the beginning of each step, none of the currently active cups have been touched (i.e., emptied from) by the emptier. If the strategy succeeds, then at the end of each step every active cup has gained fill equal to the reciprocal of the number of active cups at that step, so by the end of the strategy the surviving cups have each accumulated a fill equal to the corresponding sum of reciprocals.
In particular, after the final step of a successful run, the cups in the final active set each have fill at least the prescribed amount, provided the active set shrinks by a sufficiently large factor overall.
Next we evaluate the probability that the filling strategy succeeds. If the first several steps of the strategy all succeed, then the next step also succeeds with probability bounded below: the emptier may touch up to $p$ cups during the step, and the randomly chosen cups that the filler removes from the active set have probability bounded below of forming a superset of those touched cups. Multiplying these per-step success probabilities over the steps of the strategy gives the claimed bound on the overall success probability. ∎
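For intuition, here is a toy single-processor instantiation of this kind of strategy; the shrink rate, stopping size, and pour granularity are arbitrary choices for illustration, not the parameters used in the lemma.

import random

def oblivious_shrinking_filler(n, shrink=2):
    """Toy oblivious filler: pour 1/|A| into every cup of an active set A,
    then blindly discard `shrink` random cups from A, hoping to discard
    whichever cup the emptier just touched.  Cups that survive to the end
    accumulate a harmonic-like sum of fills if the filler gets lucky."""
    active = list(range(n))
    while len(active) > shrink:
        amount = 1.0 / len(active)
        yield [(c, amount) for c in active]   # one step's pours
        for _ in range(shrink):
            active.remove(random.choice(active))

Each yielded list is one step’s worth of pours; success corresponds to the emptier never having touched a cup that remains active.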
If we assume that $n$ is a sufficiently large constant multiple of the target tail size, then we can apply Lemma 5.2 directly to achieve that tail size. (Furthermore, note that Lemma 5.3 does not have any requirement that the emptier be backlog-bounded.)
Lemma 5.3.
Let be a positive constant, and suppose for a sufficiently large constant (where is large relative to ). Then there is an -step oblivious randomized filling strategy for the -processor cup game on cups that causes cups to all have height or greater after some step with probability at least .
Proof.
Let be a sufficiently large constant compared to . By the assumption that is sufficiently large in terms of , we may also assume that
(4)
The next lemma gives a filling strategy for achieving tail size against any backlog-bounded emptying strategy. Remarkably, the construction in Lemma 5.4 succeeds with probability (rather than with probability ).
Lemma 5.4.
Let be a constant, and suppose for sufficiently large constant . For any backlog-bounded emptying strategy, there is a -step oblivious randomized filling strategy that gives the following guarantee. After the final step of the filling strategy, there are at least cups with fill or greater, with probability .
Proof.
For the sake of simplicity, we allow for the filler to sometimes swap two cups, meaning that the labels of the cups are interchanged.
The basic building block of the algorithm is a mini-phase, which consists of steps. In each step of a mini-phase the filler places unit of water into each of cups , and then strategically distributes additional unit of water among cups . Using the final unit of water, the filler follows a -filling strategy on cups , where is a sufficiently large constant relative to satisfying . We say that a mini-phase succeeds if the emptier removes only unit of water from cups during each step in the mini-phase, and the -filling strategy succeeds within the mini-phase. By Lemma 5.2, any successful mini-phase will cause at least one cup to have fill at least at the end of the mini-phase (and the filler will know ).
Mini-phases are composed together by the filler to get phases. During each -th phase, the filler selects a random and performs mini-phases (recall that is the polynomial such that the emptier achieves backlog or smaller). After the -th mini-phase, the filler swaps cups and , where is the phase number and is the cup containing fill in the event that the most recent mini-phase succeeded. The full filling algorithm consists of phases.
We claim that each phase has constant probability of ending in a successful mini-phase (and thus swapping cup with a new cup having fill ). Using this claim, one can complete the analysis as follows. If the swap in phase is at the end of a successful mini-phase, then after the swap, the (new) cup will have fill , and will continue to have fill for the rest of the filling algorithm, since the filler puts unit in cup during every remaining step. At the end of the algorithm, the number of cups with fill is therefore bounded below by a sum of independent - random variables with total mean . This means that the number of such cups with fill is at least with probability , as desired.
It remains to analyze the probability that a given phase ends with a successful mini-phase.
Call a mini-phase clean if the emptier removes unit of water from each cup during each step of the mini-phase, and dirty otherwise. Because each dirty mini-phase increases the total amount of water in cups by at least , and because the emptying algorithm prevents backlog from ever exceeding , there can be at most dirty mini-phases during phase .
By Lemma 5.2, each mini-phase (independently) has at least a constant probability of either being dirty or of succeeding. Out of the possible mini-phases in phase , there can only be dirty mini-phases. It follows that, with probability , at least a constant fraction of the possible mini-phases succeed (or would have succeeded in the event that were at least as large as ). Thus the -th mini-phase succeeds with constant probability. ∎
Combining the preceding lemmas, we prove Theorem 5.1.
6. Lower Bounds Against Unending Guarantees
In this section, we prove upper and lower bounds for unending guarantees, which are probabilistic guarantees that hold for each step , even when is arbitrarily large. As a convention, we will use to denote the fill of cup after step .
The main result of the section is a lower bound showing that no monotone stateless emptier can achieve an unending guarantee of backlog.
Definition 6.1.
An emptier is said to be stateless if the emptier’s decision depends only on the state of the cups at each step. An emptier is said to be monotone if the following holds: given a state of the cups in which the emptier selects some cup to empty, if we define to be except that the amount of water in some cup has been reduced, then the emptier still selects cup in state . A monotone stateless emptier is any emptier that is both monotone and stateless.
The monotonicity and statelessness properties dictate only how the emptier selects a cup in each step. Once a cup is selected, the emptier is permitted to either (a) remove full unit of water from that cup, or (b) skip their turn. This decision is allowed to be an arbitrary function of the state of the cups.
We begin in Section 6.1 by showing that all monotone stateless emptiers can be modeled as using a certain type of score function to make emptying decisions.
In Section 6.2, we give an oblivious filling strategy, called the fuzzing algorithm, that prevents monotone stateless emptiers from achieving unending probabilistic guarantees of backlog (in fact, the filling strategy places an expected water into cups, meaning that bounds on tail size are also not viable, unless backlog is allowed to be polynomially large). The fuzzing algorithm is named after what is known as the fuzzing technique [39] for detecting security vulnerabilities in computer systems – by barraging the system with random noise, one accidentally discovers and exploits the structural holes of the system.
In Section 6.3 we show that the fuzzing algorithm continues to prevent unending guarantees, even when the emptier is equipped with a global clock, allowing for the emptier to adapt to the number of steps that have occurred so far in the game.
Finally, in Sections 6.4 and 6.5, we determine the exact values of the resource-augmentation parameter for which the smoothed greedy and asymmetric smoothed greedy emptying algorithms achieve single-processor unending guarantees. In particular, we show that the minimum attainable value of is .
6.1. Score-Based Emptiers
In this section, we prove an equivalence between monotone stateless emptiers, and what we call score-based emptiers. We then state several useful properties of score-based emptiers.
A score-based emptier has score functions . When selecting which cup to empty from, the emptier selects the cup whose fill maximizes . The emptier can then select whether to either (a) remove full unit of water from the cup, or (b) skip their turn; this decision is an arbitrary function of the state of the cups. The score functions are required to be monotonically increasing functions, meaning that whenever . Moreover, in order to break ties, all of the scores in the multiset are required to be distinct. (We only consider fills of the form because in our lower bound constructions all fills will be multiples of .)
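To make the definition concrete, the following Python sketch (not taken from the paper) implements a score-based emptier: each cup has a monotonically increasing score function, the emptier selects the cup whose score of its current fill is largest, and then either removes a unit of water or skips. The particular score functions and the removal rule in the example are placeholders of our own choosing.

```python
from typing import Callable, List, Optional

class ScoreBasedEmptier:
    """Selects the cup whose score f_i(fill_i) is maximal.  The score
    functions are assumed (not checked) to be monotonically increasing
    and to produce distinct values, so that ties never arise."""

    def __init__(self, score_fns: List[Callable[[float], float]]):
        self.score_fns = score_fns

    def select(self, fills: List[float]) -> int:
        return max(range(len(fills)),
                   key=lambda i: self.score_fns[i](fills[i]))

    def step(self, fills: List[float], unit: float = 1.0) -> Optional[int]:
        i = self.select(fills)
        # Placeholder removal rule: empty unless the chosen cup is empty.
        # In the paper, this decision may be any function of the cup state.
        if fills[i] > 0:
            fills[i] = max(0.0, fills[i] - unit)
            return i
        return None

# Example: scores f_i(x) = x + i/(10n) mimic a greedy emptier with
# deterministic tie-breaking by cup index.
n = 4
emptier = ScoreBasedEmptier([(lambda x, i=i: x + i / (10 * n)) for i in range(n)])
fills = [0.3, 0.9, 0.9, 0.1]
print(emptier.step(fills), fills)
```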
It is easy to see that any score-based emptier is also a monotone stateless emptier. The following theorem establishes that the other direction is true as well:
Theorem 6.2.
Consider cup games in which the filler always places water into cups in multiples of . For these cup games, every monotone stateless emptying algorithm is equivalent to some score-based emptying algorithm.
For a set of cups , a state of the cups is a tuple , where indicates the amount of water in cup . Throughout this section we will restrict ourselves to states where is a non-negative integer multiple of .
In order to prove Theorem 6.2, we first derive several natural properties of monotone stateless emptiers. We say that the pair dominates the pair if either (a) and , or if (b) and in the cup state where the only two non-empty cups are and with and water, respectively, the emptier selects cup . We say that a cup dominates a cup in a state if dominates , where and are the amounts of water in cups and , respectively, in state .
The next lemma shows that the emptier's decision in each step is determined by which cup dominates the other cups.
Lemma 6.3.
Let be any state of the cups , and suppose the emptier is following a monotone stateless algorithm. Then the cup that the emptier selects from is the unique cup that dominates all other cups.
Proof.
It suffices to show that cup dominates all other cups, since only one cup can have this property. Consider a cup , and let and be the amounts of water in cups and , respectively, in state . Let be the state in which the only non-empty cups are and with and units of water, respectively. By the monotonicity property of the emptier, it must be that the emptier selects cup over cup in state . Thus cup dominates cup in state , as desired. ∎
Next we show that domination is a transitive property.
Lemma 6.4.
Consider any monotone stateless emptying algorithm. If dominates and dominates , then dominates .
Proof.
We begin by considering the case where are distinct. Consider the cup state in which the only three cups that contain water are , and they contain water, respectively. By Lemma 6.3, one of cups must dominate the others. Since is dominated by and is dominated by , it must be that is the cup that dominates. Thus dominates , as desired.
Next we consider the case where and . Suppose for contradiction that does not dominate . Consider the cup state in which and are the only cups containing water, and they contain and units of water, respectively. In state , cup dominates cup . By monotonicity, it follows that if we decrease the fill of to , then cup must still dominate cup . But this means that dominates , a contradiction.
Next we consider the case where and . Consider the cup state in which and are the only cups containing water, and they contain and units of water, respectively. In state , cup dominates cup . By monotonicity, it follows that if we decrease the fill of to , then cup must still dominate cup . This means that dominates , as desired.
Finally we consider the case where . In this case it must be that and . Thus , meaning that dominates , as desired. ∎
By exploiting the transitivity of the domination property, we can now prove Theorem 6.2.
Proof of Theorem 6.2.
Consider the set . For , say that if dominates . By Lemma 6.4, the set is totally ordered by the operation. Since every totally ordered set is also well ordered, and since every countably-infinite well ordered set is order-isomorphic to the natural numbers [16], it follows that is order-isomorphic to the natural numbers. That is, there is a bijection that preserves the relationship.
Let be the function . By Lemma 6.3, the emptier always selects the cup whose fill maximizes . It follows that the emptier is a score-based emptier. ∎
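The proof is constructive in spirit: once the domination relation is known to be a total order (Lemma 6.4), scores can be obtained by ranking (cup, fill) pairs within that order. The sketch below (ours) illustrates this for a finite set of fills, assuming access to a dominates(p, q) predicate describing the monotone stateless emptier; the finite truncation is an illustration only, since the paper's argument handles the full countably infinite set.

```python
from functools import cmp_to_key

def build_score_table(n_cups, fills, dominates):
    """Rank every (cup, fill) pair by the domination order; the rank of
    (i, x) serves as the score f_i(x).  `dominates(p, q)` must return
    True iff pair p dominates pair q (a total order, by Lemma 6.4)."""
    pairs = [(i, x) for i in range(n_cups) for x in fills]

    def cmp(p, q):
        if p == q:
            return 0
        return 1 if dominates(p, q) else -1   # dominating pairs sort last

    ordered = sorted(pairs, key=cmp_to_key(cmp))
    return {pair: rank for rank, pair in enumerate(ordered)}

# Toy domination order: (i, x) dominates (j, y) iff (x, i) > (y, j)
# lexicographically, i.e. a greedy emptier with index tie-breaking.
score = build_score_table(3, [0.0, 0.5, 1.0],
                          lambda p, q: (p[1], p[0]) > (q[1], q[0]))
print(score[(2, 1.0)] > score[(0, 1.0)])   # True
```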
We conclude the section by observing a useful property of score-based emptiers, namely the existence of what we call equilibrium states.
We say that a state on cups is an equilibrium state if for every pair of distinct cups , . That is, for any cup , if unit of water is added to any cup , then that cup’s score function will exceed the score function of all other cups .
Lemma 6.5.
Consider cups , and suppose their total fill is a non-negative integer multiple of . For any set of score functions, , there is a unique equilibrium state for cups in which the total amount of water in the cups is .
Proof.
Consider any state for cups in which the total fill of the cups is . Define the score severity of to be . If is not an equilibrium state, then we can move units of water from some cup to some other cup in a way that decreases the score severity of .
Let be the set of states for cup in which each cup contains a multiple of units of water, and in which the total amount of water in cups is . Since is finite, there must be a state with minimum score severity. By the preceding paragraph, it follows that is an equilibrium state.
Finally, we prove uniqueness. Suppose are distinct equilibrium states in . Then some cup in must have greater fill than the same cup in . But by the equilibrium property, adding units of water to cup in increases the score function of cup to be larger than any other cup’s score function in . Thus must have a larger score severity than does . Likewise, must have a larger score severity than , a contradiction. ∎
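The existence argument in Lemma 6.5 suggests a simple computational picture: discretize the water into units and repeatedly move one unit between cups while the equilibrium condition is violated. The sketch below (ours) implements that local search under two assumptions that are not from the paper: the removal granularity is taken to be one unit, and the equilibrium condition is read as "adding one unit to any cup makes its score exceed every other cup's current score".

```python
def is_equilibrium(fills, score_fns, unit):
    """Assumed equilibrium condition: adding one unit to any cup makes
    its score exceed every other cup's current score."""
    n = len(fills)
    return all(score_fns[i](fills[i] + unit) > score_fns[j](fills[j])
               for i in range(n) for j in range(n) if i != j)

def find_equilibrium(total_units, score_fns, unit=1.0, max_iters=10_000):
    """Local-search sketch: start with all water in cup 0 and repeatedly
    move one unit from an over-full cup to an under-full cup until the
    equilibrium condition holds (or the iteration cap is hit)."""
    n = len(score_fns)
    fills = [0.0] * n
    fills[0] = total_units * unit
    for _ in range(max_iters):
        if is_equilibrium(fills, score_fns, unit):
            return fills
        # Find a violating pair (i, j): cup j scores at least as high as
        # cup i would after receiving one more unit; shift a unit to i.
        for i in range(n):
            for j in range(n):
                if i != j and fills[j] >= unit and \
                        score_fns[i](fills[i] + unit) <= score_fns[j](fills[j]):
                    fills[j] -= unit
                    fills[i] += unit
                    break
            else:
                continue
            break
    return fills

# With greedy-like scores f_i(x) = x + i * 1e-6, the equilibrium spreads
# the water as evenly as the unit granularity allows: [3.0, 2.0, 2.0].
print(find_equilibrium(7, [(lambda x, i=i: x + i * 1e-6) for i in range(3)]))
```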
6.2. The Oblivious Fuzzing Filling Algorithm
In this section, we describe a simple filling algorithm that, when pitted against a score-based emptier, achieves backlog after steps with at least constant probability. Note that, throughout this section, we focus only on cup games that do not have resource augmentation.
The filling strategy, which we call the oblivious fuzzing algorithm, has a very simple structure. At the beginning of the algorithm, the filler randomly permutes the labels of the cups. The filler then begins their strategy by spending a large number (i.e., ) of steps randomly placing water into cups . The filler then disregards cup (note that cup is a random cup due to the random-permutation step!), and spends a large number of steps randomly placing water into cups . The filler then disregards cup and spends a large number of steps randomly placing water into cups , and so on.
Formally, the oblivious fuzzing algorithm works as follows. Let be a sufficiently large constant, and relabel the cups (from the filler’s perspective) with a random permutation of . The filling strategy consists of phases of steps. The -th phase is called the -cup phase because it focuses on cups . In each step of the -th phase, the filler selects random values uniformly and independently, and then places water into each of cups . If , then the cup receives a full unit of water.
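The following Python sketch renders our reading of this description; the phase length, the half-unit split of the water, and the greedy placeholder emptier are assumptions for illustration, since the paper's exact constants are elided above.

```python
import random

def oblivious_fuzzing_filler(n, steps_per_phase, emptier_step, seed=0):
    """Sketch of the oblivious fuzzing filler.  Cups are relabeled by a
    random permutation; during the j-cup phase the filler picks two cups
    uniformly from the first j (relabeled) cups and gives each half a
    unit of water (a full unit if the two picks coincide).
    `steps_per_phase` stands in for the paper's (elided) phase length;
    `emptier_step(fills)` is any emptier callback that mutates `fills`."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)                      # filler's private relabeling
    fills = [0.0] * n
    for j in range(n, 1, -1):              # the j-cup phase
        for _ in range(steps_per_phase):
            a = perm[rng.randrange(j)]
            b = perm[rng.randrange(j)]
            fills[a] += 0.5                # exactly 1 unit placed per step
            fills[b] += 0.5
            emptier_step(fills)
    return fills

def greedy_emptier(fills):
    """Placeholder emptier: remove (up to) one unit from the fullest cup."""
    j = max(range(len(fills)), key=lambda c: fills[c])
    fills[j] = max(0.0, fills[j] - 1.0)

print(round(max(oblivious_fuzzing_filler(16, 200, greedy_emptier)), 3))
```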
One interesting characteristic of the oblivious fuzzing algorithm is that it represents a natural workload in the scheduling problem that the cup game models. One can think of the cups as representing tasks and water as representing work that needs to be scheduled. In this scheduling problem, the oblivious fuzzing filling algorithm simply assigns work to tasks at random, and selects one task every steps to stop receiving new work.
In this section, we prove the following theorem.
Theorem 6.6.
Consider a cup game on cups. Suppose that the emptier follows a score-based emptying algorithm, and that the filler follows the oblivious fuzzing filling algorithm. Then at the beginning of the -cup phase, the average fill of cups is , in expectation.
For each , call a step in the -cup phase emptier-wasted if the emptier fails to remove water from any of cups during step (either because the emptier skips their turn, or because the emptier selects a cup ). We show that for each , the -cup phase has at least emptier-wasted steps in expectation (or the average height of cups in that phase is already ). During an emptier-wasted step , the total amount of water in cups increases by (since the filler places water into the cups, and the emptier does not remove water from them). It follows that, during the -cup phase, the average amount of water in cups increases by in expectation. Applying this logic to every phase gives Theorem 6.6. The key challenge is to show that, within the -cup phase, the expected number of emptier-wasted steps is .
For each , define the initial water level of the -cup phase to be the total amount of water in cups at the beginning of the phase. Define the equilibrium state for the -cup phase to be the equilibrium state for cups in which the total amount of water is (note that exists and is unique by Lemma 6.5). One can think of as representing the total amount of water in cups after the filler places unit of water into the cups at the beginning of the first step in the -cup phase.
Define the bolus of the -cup phase as follows. If is the amount of water in cup at the beginning of the -cup phase, then . That is, is the amount by which cup exceeds its equilibrium fill.
We begin by showing that, if , then the expected number of emptier-wasted steps in phase is at least . The basic idea is that, whenever fewer than emptier-wasted steps have occurred, the filler has some small probability of reaching a state in which all of cups have fills no greater than , respectively. If this happens, then the score function of cup will exceed that of any of cups , and an emptier-wasted step occurs. Thus, whenever fewer than emptier-wasted steps have occurred, the filler has a small probability of incurring an emptier-wasted step (within the next steps). Since the -cup phase is very long, the filler has many opportunities to induce an emptier-wasted step in this way. It follows that, with high probability, there will be at least emptier-wasted steps in the -cup phase. Lemma 6.7 presents this argument in detail.
Lemma 6.7.
Let , condition on , and condition on some value for . Under these conditions, the expected number of emptier-wasted steps in the -cup phase is at least .
Proof.
Call a step in the -cup phase equilibrium-converging if for each cup that the filler places water into, the fill of cup after the water is placed satisfies . One can think of an equilibrium-converging step as being a step in which the filler’s behavior pushes each cup towards its equilibrium state, without pushing any cups above their equilibrium state.
Call a step in the -cup phase convergence-enabling if the total amount of water in cups is less than at the beginning of the step.
Convergence-enabling steps have two important properties.
The Convergence Property: For any convergence-enabling step , there is some pair of cups (possibly ) that the filler can place water into so that the step is equilibrium-converging. Thus, whenever a convergence-enabling step occurs, there is probability at least that the step is equilibrium-converging.
The Bolus Property: At the beginning of any convergence-enabling step, the amount of water in cup must be greater than . This is a consequence of the fact that the total amount of water in cups is at least .
Break the -cup phase into sequences of steps , where each is steps. We begin by showing that, if contains a convergence-enabling step and consists of only equilibrium-converging steps, then must also contain at least one emptier-wasted step.
Claim 6.8.
Suppose . Suppose that the first step of is convergence-enabling. If all of the steps in are equilibrium converging, then at least one of the steps must be emptier-wasted.
Proof.
At the end of each step , let denote the amount of water in each cup . Define the potential function to be
Since the first step of is convergence-enabling, the total amount of water in the cups at the beginning of is at most . It follows that, at the beginning of , the potential function is at most .
Whenever a step is both equilibrium-converging and non-emptier-wasted, we have that either or . Since is at most at the beginning of , we cannot have for every step in . Thus, if every step in is equilibrium converging, then there must be at least one step that is either emptier-wasted or that satisfies .
To complete the claim, we show that if there is at least one step in for which , and step is equilibrium-converging, then there must also be at least one emptier-wasted step. Suppose , that step is equilibrium-converging, and that no steps in are emptier-wasted. Since there are no emptier-wasted steps in , every step in must be convergence-enabling, and thus cup contains more than water at the beginning of step (by the Bolus Property of convergence-enabling steps). Since and step is equilibrium-converging, the cups contain fills at most , respectively, after the filler places water in step . It follows that, during step , the emptier will choose cup over all of cups . Thus step is an emptier-wasted step, a contradiction. ∎
Next we use Claim 6.8 in order to show that, if and contains a convergence-enabling step, then has probability at least of containing an emptier-wasted step.
Claim 6.9.
Condition on the fact that the first step of is convergence-enabling and that . Then contains an emptier-wasted step with probability at least .
Proof.
Since the first step of is convergence-enabling, either every step of is convergence-enabling or there is at least one emptier-wasted step. Recall by the Convergence Property that each convergence-enabling step has probability at least of being equilibrium-converging. Thus there is probability at least that every step of (up until the first emptier-wasted step) is equilibrium-converging. By Claim 6.8, it follows that the probability of there being an emptier-wasted step is at least . ∎
We can now complete the proof of the lemma. For each , if the number of emptier-wasted steps in is less than , then the first step of is convergence-enabling. Since , it follows from Claim 6.9 that has probability at least of containing an emptier-wasted step.
Now collect the ’s into collections of size , so that the -th collection is given by
Note that, as long as the constant used to define the fuzzing algorithm is sufficiently large, the -cup phase is long enough that it contains at least collections .
Say that a step collection fails if, at the beginning of the step collection, the number of emptier-wasted steps that have occurred is less than , and contains no emptier-wasted steps. The probability of a given failing is at most,
It follows that the probability of any of failing is at most,
If none of the collections fail, then there must be at least emptier-wasted steps. Thus the expected number of emptier-wasted steps that occur during the phase is at least . ∎
In order to show that the expected number of emptier-wasted steps in phase is (at least, whenever ), it suffices to show that the expected bolus is (conditioned on ).
In order to prove a lower-bound on the bolus, we examine a related quantity that we call the variation. If is the first step of the -cup phase, then the variation of the -cup phase is defined to be,
The variation captures the degree to which the fills of cups differ from their equilibrium fills. The next lemma shows that, if the variation is large, then so will be the bolus in expectation.
Lemma 6.10.
Let . Fix some value of and of . Then
Proof.
Let be the first step in the -cup phase. By the definition of , we have that,
Thus,
Hence,
Since the cups are randomly labeled, we have by symmetry that,
Since , the proof of the lemma is complete. ∎
By Lemma 6.10, if our goal is to show that , then it suffices to show that .
Lemma 6.11.
Let . For , the variation satisfies, .
Proof.
Let be the first step of the -cup phase. Recall that
Note that the equilibrium state depends on the amount of water in cups at the beginning of step . Let denote the equilibrium state in the case where , and let denote the variation of the cups at step from , i.e.,
Since we are conditioning on , it suffices to consider values of . For each such value of , we will show that . By a union bound, we then have . Hence , as desired.
To complete the proof of the lemma, we must examine . To do this, we break the water placed by the filler into two parts: Let denote the amount of water placed into each cup by the filler in the first steps of the first phase, and let denote the amount of water placed into each cup by the filler in steps . Finally, let denote the total amount of water removed from cup by the emptier during steps .
The role of will be similar to that of the random offsets in the smoothed greedy emptying algorithm. Interestingly, these random offsets now work in the filler’s favor, rather than the emptier’s.
Consider the quantity . We can lower bound the variation by,
(5)
Each can be written as,
Since the emptier always removes water in chunks of size , . Thus,
(6)
For the sake of analysis, fix the filler’s behavior in steps , meaning that the only randomness is in the ’s and the ’s are fixed. Define . By (5) and (6), our goal is to show that
(7)
with probability at least .
We begin by showing that the left side of (7) has expected value . Note that, and . Thus , implying that the left side of (7) has expected value at least .
In order to prove that (7) holds with probability at least , we show that the left side of (7) is tightly concentrated around its mean. If the ’s were independent of one another then we could achieve this with a Chernoff bound. Since the ’s are dependent, we will instead use McDiarmid’s inequality.
Theorem 6.12 (McDiarmid ’89 [35]).
Let be independent random variables over an arbitrary probability space. Let be a function mapping to , and suppose satisfies,
for all . That is, if are fixed, then the value of can affect the value of by at most ; this is known as the Lipschitz condition. Then for all ,
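For reference, a standard textbook form of McDiarmid's bounded-differences inequality (the constants here may differ from the paper's elided display) is:

```latex
\Pr\Bigl[\,\bigl|f(X_1,\dots,X_m) - \mathbb{E}\,f(X_1,\dots,X_m)\bigr| \ge t\,\Bigr]
  \;\le\; 2\exp\!\Bigl(-\frac{2t^2}{\sum_{i=1}^{m} c_i^2}\Bigr),
```

where the $X_i$ are independent and changing the $i$-th coordinate alone changes the value of $f$ by at most $c_i$.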
We will apply McDiarmid’s inequality to the quantity (i.e., the left side of (7)). Recall that the filler’s behavior in steps has been fixed, meaning that the value of is a function of the filler’s behavior in steps . Thus is a function of independent random variables , where and are the cups that the filler places water into during step . Moreover, the outcome of each can only change the value of by at most . Thus satisfies the Lipschitz condition with .
We apply McDiarmid’s inequality to , with and in order to conclude that,
Thus (7) holds with probability at least , as desired. ∎
Combining the preceding lemmas, we get that the expected number of emptier-wasted steps in each phase is .
Lemma 6.13.
Suppose . For , if , then the expected number of emptier-wasted steps in the -cup phase is .
Proof.
By Lemma 6.11, the variation is large with high probability, so by Lemma 6.10 the expected bolus is large as well; combining this with Lemma 6.7 yields the claimed bound on the expected number of emptier-wasted steps in the -cup phase. ∎
We can now complete the proof of Theorem 6.6.
Proof of Theorem 6.6.
For each , let denote the average fill of cups at the end of the -cup phase. We will show that , completing the proof of the theorem.
Consider the -cup phase, where . At the beginning of the phase, the average amount of water in cups has expected value , since the cups are indistinguishable up until the beginning of the -cup phase. If is the expected number of emptier-wasted steps in phase , then, . Hence,
Let denote the event that for all . If , then it follows from Lemma 6.13 that for each . Thus,
as desired.
Now consider the case where . That is, with probability at least , there is some for which . Let be the largest for which . Then the average fill of cups at the beginning of the -cup phase is at least . Note that, for any phase and value , we have . Given that , it follows that . This means that . Since occurs with probability at least , we have that , completing the proof. ∎
6.3. Giving the Emptier a Time Stamp
In this section, we show that unending guarantees continue to be impossible, even if the score-based emptier is permitted to change their algorithm based on a global time stamp.
A dynamic score-based emptying algorithm is dictated by a sequence , where each is a score-based emptying algorithm. On step of the cup game, the algorithm follows algorithm .
Define the extended oblivious fuzzing filling algorithm to be the oblivious fuzzing filling strategy, except that each phase’s length is increased to consist of steps, where is a sufficiently large function of (that we will choose later).
Theorem 6.14.
Consider a cup game on cups. Suppose the emptier is a dynamic score-based emptier. Suppose the filler follows the extended oblivious fuzzing filling algorithm. Then at the beginning of the -cup phase, the average fill of cups is , in expectation.
Understanding when two score-based algorithms can be treated as “equivalent”
We say that the cups are in a legal state if each cup contains an integer multiple of water, and the total water in the cups is at most . By the assumption that the emptier is backlog-bounded, and that the filler follows the extended oblivious fuzzing filling algorithm, we know that the cup game considered in Theorem 6.14 will always be in a legal state.
Let denote the set of legal states. For each score-based emptying algorithm , we define the behavior vector of to be the set
Note that, for some states , the emptier may choose not to empty from any cup—in this case, does not appear in any pair in .
The behavior vector captures ’s behavior on all legal states. If for two score-based emptying algorithms and , then we treat the two emptying algorithms as being the same (since their behavior is indistinguishable on the cup games that we are analyzing). This means that the number of distinct score-based emptying algorithms is finite, bounded by . We will use to denote the set of distinct score-based emptying algorithms. Formally, each element of is an equivalence class of algorithms, where each score-based algorithm is assigned to an equivalence class based on its behavior vector .
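As a concrete (toy) illustration of this equivalence-class bookkeeping, the sketch below (ours, with an assumed granularity and water bound) enumerates all legal states for a tiny game and records each emptier's selection; two emptiers with identical behavior vectors are then treated as the same algorithm.

```python
from itertools import product

def behavior_vector(select, n, k, max_units):
    """Enumerate the legal states (fills that are multiples of 1/k with
    total at most max_units) and record which cup `select` picks in
    each.  The granularity 1/k and the bound max_units are assumptions
    standing in for the paper's elided parameters."""
    states = [s for s in product(range(max_units * k + 1), repeat=n)
              if sum(s) <= max_units * k]
    return {tuple(x / k for x in s): select([x / k for x in s])
            for s in states}

# Two toy emptiers: greedy with low-index vs. high-index tie-breaking.
# (For simplicity these toy emptiers always empty some cup.)
greedy_lo = lambda fills: max(range(len(fills)), key=lambda i: (fills[i], -i))
greedy_hi = lambda fills: max(range(len(fills)), key=lambda i: (fills[i], i))
bv_lo = behavior_vector(greedy_lo, n=3, k=2, max_units=2)
bv_hi = behavior_vector(greedy_hi, n=3, k=2, max_units=2)
print(bv_lo == bv_hi)   # False: they disagree on tied states
```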
Associating each phase with a score-based algorithm that it “focuses” on
In order to analyze the -cup phase of the extended oblivious fuzzing filling algorithm, we break the phase into segments, where each segment consists of steps. For each segment, there must be an algorithm that the emptier uses at least times within the segment. We say that the segment focuses on emptying algorithm .
Let denote the number of segments in the -cup phase. By the pigeon-hole principle, there must be some algorithm such that at least of the segments in the -cup phase focus on . We say that the -cup phase focuses on algorithm .
For each , let denote the score-based emptying algorithm that the -cup phase focuses on (if there are multiple such algorithms , select one arbitrarily).
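The following small sketch (ours) makes the pigeonhole bookkeeping explicit: given the per-step indices of the algorithms a dynamic emptier used during a phase, it computes which algorithm each segment focuses on and which algorithm the phase as a whole focuses on.

```python
from collections import Counter

def focus_of_segments(algo_per_step, segment_len):
    """Split the phase's steps into segments of `segment_len` steps and
    return, for each segment, the algorithm used most often in it (the
    algorithm the segment focuses on)."""
    segments = [algo_per_step[s:s + segment_len]
                for s in range(0, len(algo_per_step), segment_len)]
    return [Counter(seg).most_common(1)[0][0] for seg in segments]

def focus_of_phase(algo_per_step, segment_len):
    """The phase focuses on whichever algorithm the largest number of
    its segments focus on (by pigeonhole, at least a 1/|A| fraction)."""
    seg_focus = focus_of_segments(algo_per_step, segment_len)
    return Counter(seg_focus).most_common(1)[0][0]

# Toy trace of per-step algorithm indices used by a dynamic emptier.
trace = [0, 0, 1, 0, 2, 2, 2, 1, 1, 1, 1, 0]
print(focus_of_segments(trace, 4), focus_of_phase(trace, 4))
```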
Our proof of Theorem 6.14 will analyze the -cup phase by focusing on how the phase interacts with algorithm . This is made difficult by the fact that, between every two steps in which the emptier uses algorithm , there may be many steps in which the emptier uses other score-based emptying algorithms.
Defining the equilibrium state and bolus of each phase
We define the equilibrium state and the bolus of the -cup phase to each be with respect to the score-based emptying algorithm . That is, is the equilibrium state for algorithm for the cups in which the total amount of water (in those cups) is (recall that is the amount of water in cups at the beginning of the -cup phase). Using this definition of , the bolus is , where is the amount of water in cup at the beginning of the -cup phase.
The key to proving Theorem 6.14 is to show that, if , then the expected number of emptier-wasted steps in the -cup phase is at least . That is, we wish to prove a result analogous to Lemma 6.7 from Section 6.2.
Lemma 6.15.
Let , condition on , and condition on some value for . Under these conditions, the expected number of emptier-wasted steps in the -cup phase is at least .
Proof.
Call a step in the -cup phase equilibrium-converging if either:
• The emptier uses algorithm during step , and for each cup that the filler places water into, the fill of cup after the water is placed satisfies .
• The emptier uses an algorithm during step , and the filler places all of their water (i.e., a full unit) into the cup whose score (as assigned by the score-based algorithm ) is largest at the beginning of step .
The first case in the definition of equilibrium-converging steps is similar to that in the proof of Lemma 6.7. The second case, where the emptier uses an algorithm is different; in this case, the definition guarantees that the step is either emptier-wasted or is a no-op (meaning that the water removed by the emptier during the step is exactly the same as the water placed by the filler).
Call a step in the -cup phase convergence-enabling if the total amount of water in cups is less than at the beginning of the step.
Just as in the proof of Lemma 6.7, convergence-enabling steps have two important properties:
The Convergence Property: For any convergence-enabling step , there is some pair of cups (possibly ) that the filler can place water into so that the step is equilibrium-converging. Thus, whenever a convergence-enabling step occurs, there is probability at least that the step is equilibrium-converging.
The Bolus Property: At the beginning of any convergence-enabling step, the amount of water in cup must be greater than . This is a consequence of the fact that the total amount of water in cups is at least .
We now prove a claim analogous to Claim 6.8.
Claim 6.16.
Suppose , and consider a segment in the -cup phase that focuses on . If begins with a convergence-enabling step, and every step in is equilibrium converging, then must contain an emptier-wasted step.
Proof.
There are two types of steps in : (1) equilibrium-converging steps where the emptier uses algorithm , and (2) equilibrium-converging steps where the emptier does not use . All type (2) steps are either emptier-wasted or are no-ops (meaning that they do not change the state of the cup game). On the other hand, because segment focuses on , there must be at least type (1) steps. If no type (2) steps are emptier-wasted, then the type (1) steps meet the conditions of Claim 6.8 (i.e., the type (1) steps meet the conditions that are placed on in the claim). Thus, by Claim 6.8, at least one of the steps in is emptier-wasted, as desired. ∎
We can now complete the proof of the lemma. For each segment that focuses on , if the number of emptier-wasted steps prior to is less than , then the first step of is convergence-enabling (and any steps in up until the first emptier-wasted step in are also convergence enabling). By the Convergence Property and Claim 6.16, it follows that has probability at least
of containing an emptier-wasted step.
If is the number of segments in a phase, then at least of the segments in phase must focus on algorithm . Denote these segments by . Break the -cup phase into collections of time segments, where each contains of the ’s. Say that a collection fails if fewer than emptier-wasted steps occur prior to , and no emptier-wasted step occurs during . Since each contains at least segments that focus on , the probability of failing is at most,
(8)
Assuming the is sufficiently large as a function of , then the exponent in (8) is also sufficiently large as a function of , and thus (8) is at most . By a union bound over the ’s, it follows that the probability of any failing is at most . On the other hand, if none of the ’s fail, then at least steps must be emptier-wasted (here, we are using the fact that the number of collections is ). Thus the expected number of emptier-wasted steps is at least . ∎
We can now prove Theorem 6.14.
6.4. Unending guarantees with small resource augmentation
In this section we show that, even though resource augmentation is needed to achieve unending guarantees for the smoothed greedy (and asymmetric smoothed greedy) emptying algorithms, the amount of resource augmentation that is necessary is substantially smaller than was previously known. In particular, we prove unending guarantees when .
Theorem 6.17 states an unending guarantee for the smoothed greedy emptying algorithm, using .
Theorem 6.17.
Consider a single-processor cup game in which the emptier follows the smoothed greedy emptying algorithm, and the filler is an oblivious filler. If the game has resource augmentation parameter , then each step achieves backlog with probability (where the exponent in the polylog is a constant of our choice).
Theorem 6.18 states an unending guarantee for the asymmetric smoothed greedy emptying algorithm, using .
Theorem 6.18.
Consider a single-processor cup game in which the emptier follows the asymmetric smoothed greedy emptying algorithm, and the filler is an oblivious filler. If the game has resource augmentation parameter , then each step achieves tail size and the backlog with probability (where the exponent in the polylog in the probability is a constant of our choice).
We begin by proving Theorem 6.17.
Call a step a rest step if the emptier removes less than unit of water during that step. The next lemma shows that rest steps are relatively common.
Lemma 6.19.
Consider a single-processor cup game in which the emptier follows either the smoothed greedy or the asymmetric smoothed greedy emptying algorithm. Any sequence of steps must contain a rest step.
Proof.
Whenever a step is a rest step, it must be that every cup contains less than unit of water, meaning that the total amount of water in the system is at most . On the other hand, during each non-rest step, the amount of water in the system decreases by at least . It follows that, if there are non-rest steps in a row, then the total amount of water in the system after those steps is at most . Thus the number of non-rest steps that can occur in a row is never more than , as desired. ∎
In order to prove Theorem 6.17, we will exploit the following result of Bender et al. [11] which analyzes the smoothed greedy algorithm for :
Theorem 6.20 (Bender et al. [11]).
Consider a single-processor cup game in which the emptier follows the smoothed greedy emptying algorithm, and the filler is an oblivious filler. Moreover, suppose . For any positive constants and , and any , step has backlog with probability at least .
Although Theorem 6.20 only applies to the first steps of a game, we can use it to prove the following lemma. Define a fractional reset to be what happens if one reduces the fill of each cup to . That is, the fills of the cups are decreased by integer amounts to be in . The next lemma shows that, if a cup game is fractionally reset after a given step , then the following steps are guaranteed to have small backlog.
Lemma 6.21.
Consider a single-processor cup game in which the emptier follows the smoothed greedy emptying algorithm, and the filler is an oblivious filler. Consider a step , and suppose that, after step , the cup system is fractionally reset. Then for any positive constants and , and any , step has backlog with probability at least .
Proof.
For each cup , let be the random initial offset placed into cup by the smoothed greedy emptying algorithm, and let be the total amount of water placed into cup by the filler during steps . Because the emptier always removes water in multiples of , they never change the fractional amount of water in any cup (i.e., the amount of water modulo ). It follows that the fractional amount of water in each cup is given by,
Since is uniformly random in , the value is also uniformly random in . Moreover, because the initial offsets are independent of one another, so are the values .
Because the values are independent and uniformly random in , they can be thought of as initial random offsets for the smoothed greedy emptying algorithm. Thus, if each cup is reset to have fill after step , then the following steps can be analyzed as the first steps of a cup game in which the emptier follows the smoothed greedy emptying algorithm. The claimed result therefore follows from Theorem 6.20. ∎
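The modular-arithmetic step in this proof is easy to check directly: removals in whole units never change a cup's fill modulo one unit, so the fractional part of the fill always equals the initial offset plus the water placed, modulo the unit. A minimal sketch, assuming a unit of 1 (the paper's granularity is elided above):

```python
import random

def fractional_fill_is_invariant(steps=1000, seed=0):
    """Track one cup: a random initial offset in [0, 1), arbitrary filler
    additions, and emptier removals in whole units.  The fill modulo 1
    should always equal (offset + water added) modulo 1."""
    rng = random.Random(seed)
    offset = rng.random()
    fill, added = offset, 0.0
    for _ in range(steps):
        w = rng.random()                 # filler adds an arbitrary amount
        fill += w
        added += w
        if rng.random() < 0.5 and fill >= 1.0:
            fill -= 1.0                  # emptier removes a whole unit
    d = (fill - offset - added) % 1.0    # should be (numerically) an integer
    return min(d, 1.0 - d) < 1e-9

print(fractional_fill_is_invariant())
```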
We now prove Theorem 6.17.
Proof of Theorem 6.17.
Consider a step , and let be a large positive constant. For each step , Lemma 6.21 tells us that if a fractional reset were to happen after step , then step would have probability at least of having backlog . By a union bound, it follows that if a fractional reset were to happen after any of steps , then step would have probability at least of having backlog . Supposing that is a sufficiently large constant, this probability is at least .
By Lemma 6.19, at least one step is a rest step. This means that, at the end of step , every cup contains less than unit of water. In other words, the state of the system after step is the same as if the system were to be fractionally reset. It follows that, for any constant , the backlog after step is with probability at least . This completes the proof. ∎
The proof of Theorem 6.18 follows similarly to the proof of Theorem 6.17. Rather than using Theorem 6.20, we instead analyze the case of using the following version of Theorem 4.10:
Theorem 6.22.
Consider a single-processor cup game that lasts for steps and in which the emptier follows the asymmetric smoothed greedy algorithm. Then with high probability in , the number of cups containing or more units of water never exceeds and the backlog never exceeds during the game.
We now prove Theorem 6.18.
6.5. Tight lower bounds on resource augmentation for smoothed greedy
Theorems 6.17 and 6.18 give unending guarantees for the smoothed greedy (and asymmetric smoothed greedy) emptying algorithms using resource augmentation . Theorem 6.23 shows that such guarantees cannot be achieved with smaller resource augmentation.
Theorem 6.23.
Consider a single-processor cup game on cups. Suppose , and suppose the emptier follows either the smoothed greedy emptying algorithm or the asymmetric smoothed greedy emptying algorithm. Then there is an oblivious filling strategy that causes there to be a step at which the expected backlog is .
To prove Theorem 6.23, we will have the filler follow the oblivious fuzzing filling algorithm on cups. Rather than placing water in multiples of , however, the filler now places water in multiples of (so that the total water placed in each step is ).
The fact that , however, means that we cannot directly apply Theorem 6.6. Thus, in order to prove Theorem 6.23, we must first prove that the resource augmentation is so small that, with high probability, it does not have a significant effect on the game by step .
In order to bound the impact of resource augmentation on the emptier, we exploit the random structure of the emptier’s algorithm, and use that random structure to show that the emptier’s behavior is robust to small “perturbations” due to resource augmentation.
Lemma 6.24.
Consider resource augmentation and consider a step . With probability at least , the resource augmentation does not affect the emptier’s behavior during the first steps.
Proof.
We begin with a simple observation: the total amount of resource augmentation during the first steps is . Call this the Net-Augmentation Observation.
For each step , define to be the state of the cup game after the -th step without resource augmentation, and define to be the state of the cup game with resource augmentation (note that, in both cases, the emptier follows the same variant of the smoothed greedy algorithm using the same random initial offsets). Let (resp. ) denote the fill of cup , after the filler has placed water in the -th step, but before the emptier has removed water (note that, when discussing fill, we include the random initial offset placed by the emptier in each cup).
Suppose that, during steps , the emptier’s behavior is unaffected by the resource augmentation. The only way that the emptier’s behavior in step can be affected by the resource augmentation is if either:
• Case 1: for some cup . By the Net-Augmentation Observation, it follows that .
• Case 2: but for some cups and . By the Net-Augmentation Observation, it follows that .
We will show that the probability of either Case 1 or Case 2 happening is at most . It follows that the probability of resource augmentation affecting the emptier’s behavior during any of the first steps is at most .
Rather than directly bounding the probability of either Case 1 or Case 2 occurring on step , we can instead bound the probability that either,
(10)
for some cup , or that
(11)
for some cups . Recall that the values modulo are uniformly and independently random between and ; this is because the emptier initially places random offsets into each cup , which permanently randomizes the fractional amount of water in that cup. Thus the probability that for a given cup is , and the probability that for a given pair of cups is also . By union-bounding over all cups (for (10)) and over all pairs of cups (for (11)), we get that the probability of either (10) or (11) occurring is , as desired. ∎
We can now complete the proof of Theorem 6.23.
References
- [1] M. Adler, P. Berenbrink, T. Friedetzky, L. A. Goldberg, P. Goldberg, and M. Paterson. A proportionate fair scheduling rule with good worst-case performance. In Proceedings of the Fifteenth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA), pages 101–108, 2003.
- [2] A. Amir, M. Farach, R. M. Idury, J. A. L. Poutré, and A. A. Schäffer. Improved dynamic dictionary matching. Inf. Comput., 119(2):258–282, 1995.
- [3] A. Amir, G. Franceschini, R. Grossi, T. Kopelowitz, M. Lewenstein, and N. Lewenstein. Managing unbounded-length keys in comparison-driven data structures with applications to online indexing. SIAM J. Comput., 43(4):1396–1416, 2014.
- [4] Y. Azar and A. Litichevskey. Maximizing throughput in multi-queue switches. Algorithmica, 45(1):69–90, 2006.
- [5] A. Bar-Noy, A. Freund, S. Landa, and J. Naor. Competitive on-line switching policies. Algorithmica, 36(3):225–247, 2003.
- [6] A. Bar-Noy, A. Nisgav, and B. Patt-Shamir. Nearly optimal perfectly periodic schedules. Distributed Comput., 15(4):207–220, 2002.
- [7] S. K. Baruah, N. K. Cohen, C. G. Plaxton, and D. A. Varvel. Proportionate progress: A notion of fairness in resource allocation. Algorithmica, 15(6):600–625, Jun 1996.
- [8] S. K. Baruah, J. Gehrke, and C. G. Plaxton. Fast scheduling of periodic tasks on multiple resources. In Proceedings of IPPS ’95, The 9th International Parallel Processing Symposium, April 25-28, 1995, Santa Barbara, California, USA, pages 280–288. IEEE Computer Society, 1995.
- [9] M. A. Bender, J. Christensen, A. Conway, M. Farach-Colton, R. Johnson, and M. Tsai. Optimal ball recycling. In T. M. Chan, editor, Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 2527–2546. SIAM, 2019.
- [10] M. A. Bender, R. Das, M. Farach-Colton, R. Johnson, and W. Kuszmaul. Flushing without cascades. In S. Chawla, editor, Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA, January 5-8, 2020, pages 650–669. SIAM, 2020.
- [11] M. A. Bender, M. Farach-Colton, and W. Kuszmaul. Achieving optimal backlog in multi-processor cup games. In M. Charikar and E. Cohen, editors, Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26, 2019, pages 1148–1157. ACM, 2019.
- [12] M. A. Bender, S. P. Fekete, A. Kröller, V. Liberatore, J. S. B. Mitchell, V. Polishchuk, and J. Suomela. The minimum backlog problem. Theor. Comput. Sci., 605:51–61, 2015.
- [13] M. H. L. Bodlaender, C. A. J. Hurkens, V. J. J. Kusters, F. Staals, G. J. Woeginger, and H. Zantema. Cinderella versus the wicked stepmother. In IFIP International Conference on Theoretical Computer Science, pages 57–71, 2012.
- [14] M. H. L. Bodlaender, C. A. J. Hurkens, and G. J. Woeginger. The cinderella game on holes and anti-holes. In Proceedings of the 37th International Conference on Graph-Theoretic Concepts in Computer Science (WG), pages 71–82, 2011.
- [15] M. Chrobak, J. Csirik, C. Imreh, J. Noga, J. Sgall, and G. J. Woeginger. The buffer minimization problem for multiprocessor scheduling with conflicts. In Proceedings of the 28th International Colloquium on Automata, Languages and Programming (ICALP), volume 2076, pages 862–874, 2001.
- [16] K. Ciesielski et al. Set theory for the working mathematician, volume 39. Cambridge University Press, 1997.
- [17] P. Damaschke and Z. Zhou. On queuing lengths in on-line switching. Theor. Comput. Sci., 339(2-3):333–343, 2005.
- [18] P. Dietz and D. Sleator. Two algorithms for maintaining order in a list. In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing (STOC), pages 365–372, 1987.
- [19] P. F. Dietz and R. Raman. Persistence, amortization and randomization. In Proceedings of the Second Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 78–88, 1991.
- [20] J. Fischer and P. Gawrychowski. Alphabet-dependent string searching with wexponential search trees. In F. Cicalese, E. Porat, and U. Vaccaro, editors, Combinatorial Pattern Matching - 26th Annual Symposium, CPM 2015, Ischia Island, Italy, June 29 - July 1, 2015, Proceedings, volume 9133 of Lecture Notes in Computer Science, pages 160–171. Springer, 2015.
- [21] R. Fleischer and H. Koga. Balanced scheduling toward loss-free packet queuing and delay fairness. Algorithmica, 38(2):363–376, Feb 2004.
- [22] H. R. Gail, G. A. Grover, R. Guérin, S. L. Hantler, Z. Rosberg, and M. Sidi. Buffer size requirements under longest queue first. Perform. Evaluation, 18(2):133–140, 1993.
- [23] L. Gasieniec, R. Klasing, C. Levcopoulos, A. Lingas, J. Min, and T. Radzik. Bamboo garden trimming problem (perpetual maintenance of machines with different attendance urgency factors). In B. Steffen, C. Baier, M. van den Brand, J. Eder, M. Hinchey, and T. Margaria, editors, SOFSEM 2017: Theory and Practice of Computer Science - 43rd International Conference on Current Trends in Theory and Practice of Computer Science, Limerick, Ireland, January 16-20, 2017, Proceedings, volume 10139 of Lecture Notes in Computer Science, pages 229–240. Springer, 2017.
- [24] M. H. Goldwasser. A survey of buffer management policies for packet switches. SIGACT News, 41(1):100–128, 2010.
- [25] M. T. Goodrich and P. Pszona. Streamed graph drawing and the file maintenance problem. In S. K. Wismath and A. Wolff, editors, Graph Drawing - 21st International Symposium, GD 2013, Bordeaux, France, September 23-25, 2013, Revised Selected Papers, volume 8242 of Lecture Notes in Computer Science, pages 256–267. Springer, 2013.
- [26] N. Guan and W. Yi. Fixed-priority multiprocessor scheduling: Critical instant, response time and utilization bound. In 26th IEEE International Parallel and Distributed Processing Symposium Workshops & PhD Forum, IPDPS 2012, Shanghai, China, May 21-25, 2012, pages 2470–2473. IEEE Computer Society, 2012.
- [27] T. Kopelowitz. On-line indexing for general alphabets via predecessor queries on subsets of an ordered list. In 53rd Annual IEEE Symposium on Foundations of Computer Science, FOCS 2012, New Brunswick, NJ, USA, October 20-23, 2012, pages 283–292. IEEE Computer Society, 2012.
- [28] W. Kuszmaul. Achieving optimal backlog in the vanilla multi-processor cup game. In S. Chawla, editor, Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA, January 5-8, 2020, pages 1558–1577. SIAM, 2020.
- [29] W. Kuszmaul and A. Westover. The variable-processor cup game. In J. R. Lee, editor, 12th Innovations in Theoretical Computer Science Conference, ITCS 2021, January 6-8, 2021, Virtual Conference, volume 185 of LIPIcs, pages 16:1–16:20. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021.
- [30] A. Litman and S. Moran-Schein. On distributed smooth scheduling. In P. B. Gibbons and P. G. Spirakis, editors, SPAA 2005: Proceedings of the 17th Annual ACM Symposium on Parallelism in Algorithms and Architectures, July 18-20, 2005, Las Vegas, Nevada, USA, pages 76–85. ACM, 2005.
- [31] A. Litman and S. Moran-Schein. Smooth scheduling under variable rates or the analog-digital confinement game. Theor. Comp. Sys., 45(2):325–354, June 2009.
- [32] A. Litman and S. Moran-Schein. On centralized smooth scheduling. Algorithmica, 60(2):464–480, 2011.
- [33] C. L. Liu. Scheduling algorithms for multiprocessors in a hard real-time environment. JPL Space Programs Summary, 1969, 1969.
- [34] C. L. Liu and J. W. Layland. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the ACM (JACM), 20(1):46–61, 1973.
- [35] C. McDiarmid. On the method of bounded differences. Surveys in combinatorics, 141(1):148–188, 1989.
- [36] M. Moir and S. Ramamurthy. Pfair scheduling of fixed and migrating periodic tasks on multiple resources. In Proceedings of the 20th IEEE Real-Time Systems Symposium, Phoenix, AZ, USA, December 1-3, 1999, pages 294–303. IEEE Computer Society, 1999.
- [37] C. W. Mortensen. Fully-dynamic two dimensional orthogonal range and line segment intersection reporting in logarithmic time. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, January 12-14, 2003, Baltimore, Maryland, USA, pages 618–627. ACM/SIAM, 2003.
- [38] M. Rosenblum, M. X. Goemans, and V. Tarokh. Universal bounds on buffer size for packetizing fluid policies in input queued, crossbar switches. In Proceedings IEEE INFOCOM 2004, The 23rd Annual Joint Conference of the IEEE Computer and Communications Societies, Hong Kong, China, March 7-11, 2004, pages 1126–1134. IEEE, 2004.
- [39] M. Sutton, A. Greene, and P. Amini. Fuzzing: brute force vulnerability discovery. Pearson Education, 2007.
- [40] B. Vöcking. How asymmetry helps load balancing. J. ACM, 50(4):568–589, 2003.