WOR and $p$'s:
Sketches for $\ell_p$-Sampling Without Replacement
Abstract
Weighted sampling is a fundamental tool in data analysis and machine learning pipelines. Samples are used for efficient estimation of statistics or as sparse representations of the data. When weight distributions are skewed, as is often the case in practice, without-replacement (WOR) sampling is much more effective than with-replacement (WR) sampling: it provides a broader representation and higher accuracy for the same number of samples. We design novel composable sketches for WOR $\ell_p$ sampling, that is, weighted sampling of keys according to a power $p$ of their frequency (or, for signed data, of their sum of updates). Our sketches have size that grows only linearly with the sample size. Our design is simple and practical, despite an intricate analysis, and is based on off-the-shelf use of widely implemented heavy hitters sketches such as CountSketch. Our method is the first to provide WOR sampling in the important regime of $p>1$ and the first to handle signed updates for $p \neq 1$.
1 Introduction
Weighted random sampling is a fundamental tool that is pervasive in machine learning and data analysis pipelines. A sample serves as a sparse summary of the data and provides efficient estimation of statistics and aggregates.
We consider data presented as elements in the form of key-value pairs. We operate with respect to the aggregated form of the data: keys and their frequencies, where the frequency $f_x$ of a key $x$ is the sum of the values of elements with key $x$. Examples of such data sets are stochastic gradient updates (keys are parameters, element values are signed, and the aggregated form is the combined gradient), search (keys are queries, elements have unit values, and the aggregated form consists of query-frequency pairs), or training examples for language models (keys are co-occurring terms).
The data is commonly distributed across servers or devices, or is streamed, and the number of distinct keys is very large. In this scenario it is beneficial to perform computations without explicitly producing a table of key-frequency pairs, as this requires storage or communication that grows linearly with the number of keys. Instead, we use composable sketches, which are data structures that support (i) processing a new element $e$: computing a sketch of $S \cup \{e\}$ from a sketch of $S$, (ii) merging: computing a sketch of $S_1 \cup S_2$ from sketches of $S_1$ and of $S_2$, and (iii) producing the desired output from the sketch. Composability facilitates parallel, distributed, or streaming computation. We aim to design sketches of small size, because the sketch size determines the storage and communication requirements. For sampling, we aim for the sketch size to be not much larger than the desired sample size.
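To make the composability requirements concrete, the following is a minimal Python sketch of such an interface; the class and method names are illustrative and not from the paper.

```python
from abc import ABC, abstractmethod
from typing import Hashable


class ComposableSketch(ABC):
    """Illustrative interface for a composable sketch over key-value elements."""

    @abstractmethod
    def process(self, key: Hashable, value: float) -> None:
        """Update the sketch with a single data element (key, value)."""

    @abstractmethod
    def merge(self, other: "ComposableSketch") -> None:
        """Merge in a sketch of another dataset (built with the same
        parameters and the same internal randomization)."""

    @abstractmethod
    def output(self):
        """Produce the desired output (e.g., a weighted sample) from the sketch."""
```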
The case for $p$'s:
Aggregation and statistics of functions of the frequencies are essential for resource allocation, planning, and management of large scale systems across application areas. The need for efficiency prompted rich theoretical and applied work on streaming and sketching methods that spanned decades [60, 41, 4, 38, 43, 36, 35, 55, 54]. We study $\ell_p$ sampling: weighted sampling of keys with respect to the $p$-th power of their frequency, $f_x^p$. These samples support estimates of frequency statistics of the general form $\sum_x L(f_x)$ for functions $L$ of the frequency and constitute sparse representations of the data. Low powers ($p < 1$) are used to mitigate frequent keys and obtain a better resolution of the tail, whereas higher powers ($p > 1$) emphasize more frequent keys. Moreover, recent work suggests that on realistic distributions, $\ell_p$ samples for $p \in [0,2]$ provide accurate estimates for a surprisingly broad set of tasks [34].
Sampling is at the heart of stochastic optimization. When training data is distributed [57], sampling can facilitate efficient example selection for training and efficient communication of gradient updates of model parameters. Training examples are commonly weighted by a function of their frequency: language models [59, 66] use low powers of frequency to mitigate the impact of frequent examples. More generally, the function of frequency can be adjusted in the course of training to shift focus to rarer and harder examples as training progresses [9]. A sample of examples can be used to produce stochastic gradients or to evaluate loss on domains of examples (expressed as frequency statistics). In distributed learning, the communication of dense gradient updates can be a bottleneck, prompting the development of methods that sparsify communication while retaining accuracy [57, 2, 71, 47]. Weighted sampling by the $p$-th powers of magnitudes complements existing methods that sparsify using heavy hitters (or other methods, e.g., random sparsification), provides adjustable emphasis to larger magnitudes, and retains sparsity as updates are composed.
The case for WOR:
Weighted sampling is classically considered with (WR) or without (WOR) replacement. We study here the WOR setting. The benefits of WOR sampling were noted in very early work [44, 42, 72] and are becoming more apparent with modern applications and the typically skewed distributions of massive datasets. WOR sampling provides a broader representation and more accurate estimates, with tail norms replacing full norms in error bounds. Figure 1 illustrates these benefits of WOR for Zipfian distributions with $\ell_1$ sampling (weighted by frequencies) and $\ell_2$ sampling (weighted by the squares of frequencies). We can see that WR samples have a smaller effective sample size than WOR (due to high multiplicity of heavy keys) and that while both WR and WOR well-approximate the frequency distribution on heavy keys, WOR provides a much better approximation of the tail.
Related work.
The sampling literature offers many WOR sampling schemes for aggregated data: [68, 72, 14, 69, 64, 37, 25, 26, 23]. A particularly appealing technique is bottom-$k$ (order) sampling, where weights are scaled by independent random variables and the sample is the set of keys with the top-$k$ transformed values [69, 64, 37, 25, 26]. There is also a large body of work on sketches for sampling unaggregated data by functions of frequency. There are two primary approaches. The first approach transforms data elements so that a bottom-$k$ sample by a function of frequency is converted to the easier problem of finding the top-$k$ keys sorted according to the maximum value of an element with the key. This approach yields WOR distinct ($\ell_0$) sampling [53], $\ell_1$ sampling [41, 22], and sampling with respect to any concave sublinear function of frequency (including $\ell_p$ sampling for $p \le 1$) [20, 24]. These sketches work with non-negative element values but provide only limited support for negative updates [40, 22]. The second approach performs WR $\ell_p$ sampling for $p \in [0,2]$ using sketches that are random projections [45, 39, 5, 51, 61, 6, 52, 50]. These methods support signed updates but were not adapted to WOR sampling. For $p > 2$, a classic lower bound [4, 7] establishes that sketches of size polynomial in the number of distinct keys are required for worst-case frequency distributions. The task has also been studied in distributed settings [16, 49]; [49] observes the importance of WOR in that setting but does not allow for updates to element values.
Contributions:
We present WORp: a method for WOR $\ell_p$ sampling for $p \le 2$ via composable sketches of size that grows linearly with the sample size. WORp is simple and practical and uses a bottom-$k$ transform to reduce sampling to a (residual) heavy hitters (rHH) computation on the transformed data. The technical heart of the paper is establishing that, for any set of input frequencies, the keys with the top-$k$ transformed frequencies are indeed rHH. In terms of implementation, WORp only requires an off-the-shelf use of popular (and widely-implemented) HH sketches [60, 56, 15, 35, 58, 10]. WORp is the first WOR sampling method (that uses sample-sized sketches) for the regime $p > 1$ and the first to fully support negative updates for $p \neq 1$. As a bonus, we include practical optimizations (that preserve the theoretical guarantees) and perform experiments that demonstrate both the practicality and accuracy of WORp. (Code for the experiments is provided in the following Colab notebook: https://colab.research.google.com/drive/1JcNokk_KdQz1gfUarKUcxhnoo9CUL-fZ?usp=sharing)
In addition to the above, we show that, perhaps surprisingly, it is possible to obtain a WOR $\ell_p$-sample of $k$ indices, for any $p \le 2$, with variation distance at most $1/\mathrm{poly}(n)$ to a true WOR $\ell_p$-sample, using memory that grows only linearly with $k$ (up to polylogarithmic factors in $n$). Our variation distance is extremely small and cannot be detected by any polynomial time algorithm. This makes it applicable in settings for which privacy may be a concern; indeed, it shows that no polynomial time algorithm can learn anything from the sampled output other than what follows from a simulator who outputs a WOR $\ell_p$-sample from the actual (variation distance $0$) distribution. Finally, we show that the memory of our algorithm is optimal up to small multiplicative factors.
2 Preliminaries
A dataset consists of data elements that are key-value pairs. The frequency of a key $x$, denoted $f_x$, is the sum of the values of elements with key $x$. We use the notation $f$ for the vector of frequencies of all keys.
For a function $g$ and vector $f$, we denote by $g(f)$ the vector with entries $g(f_x)$. In particular, $f^p$ is the vector whose entries are the $p$-th powers of the entries of $f$. For a vector $f$ and index $i$, we denote by $f_{(i)}$ the value of the entry with the $i$-th largest magnitude in $f$. We denote by $\pi_f$ the permutation of the indices that corresponds to a decreasing order of entries by magnitude. For $\ell \ge 1$, we denote by $\mathrm{tail}_\ell(f)$ the vector $f$ with the $\ell$ entries of largest magnitude removed (or replaced with $0$).
In the remainder of the section we review the ingredients that we build on: bottom-$k$ sampling, implementing a bottom-$k$ transform on unaggregated data, and composable sketch structures for residual heavy hitters (rHH).
2.1 Bottom-$k$ sampling (ppswor and priority)
Bottom-$k$ sampling (also called order sampling [69]) is a family of without-replacement weighted sampling schemes of a set of key and weight pairs $\{(x, w_x)\}$. The weights are transformed via independent random maps $w_x \mapsto w_x/u_x$, where the $u_x$ are i.i.d. from some distribution $D$, and the sample includes the pairs $(x, w_x)$ for the $k$ keys that are top-$k$ by transformed magnitude [63, 69, 37, 17, 13, 25, 27] (historically, the term bottom-$k$ is due to an analogous use of $u_x/w_x$, but here we find it more convenient to work with "top-$k$"). For estimation tasks, we also include a threshold $\tau$, the $(k+1)$-st largest magnitude of the transformed weights. Bottom-$k$ schemes differ by the choice of distribution $D$. Two popular choices are Probability Proportional to Size WithOut Replacement (ppswor) [69, 17, 27], via the exponential distribution $u_x \sim \mathrm{Exp}(1)$, and Priority (Sequential Poisson) sampling [63, 37], via the uniform distribution $u_x \sim U[0,1]$. Ppswor is equivalent to a weighted sampling process [68] where keys are drawn successively (without replacement) with probability proportional to their weight. Priority sampling mimics pure Probability Proportional to Size (pps) sampling, where sampling probabilities are proportional to weights (but truncated to be at most $1$).
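As an illustration, the following Python sketch draws a bottom-$k$ (here, top-$k$ by transformed value) sample from an aggregated dictionary of positive weights; the function name and structure are ours, a minimal sketch rather than the paper's implementation.

```python
import heapq
import random


def bottom_k_sample(weights, k, scheme="ppswor", rng=random):
    """Return ([(key, weight)] for the top-k keys by transformed weight, threshold).

    weights: dict mapping key -> positive weight.
    scheme:  "ppswor" uses u ~ Exp(1); "priority" uses u ~ U(0, 1].
    The threshold is the (k+1)-st largest transformed value (None if fewer keys).
    """
    transformed = []
    for key, w in weights.items():
        u = rng.expovariate(1.0) if scheme == "ppswor" else (rng.random() or 1e-12)
        transformed.append((w / u, key, w))           # transformed value w / u
    top = heapq.nlargest(k + 1, transformed)          # top-(k+1) by transformed value
    sample = [(key, w) for _, key, w in top[:k]]
    threshold = top[k][0] if len(top) > k else None   # (k+1)-st largest transformed value
    return sample, threshold
```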
Estimating statistics from a bottom-$k$ sample
Bottom-$k$ samples provide us with unbiased inverse-probability [44] per-key estimates of $L(w_x)$, where $L$ is a function applied to the weight [3, 26, 21, 19]:
(1)  $\widehat{L(w_x)} = \dfrac{L(w_x)}{\Pr_{u \sim D}[\,w_x/u > \tau\,]}$ if $x$ is in the sample, and $\widehat{L(w_x)} = 0$ otherwise.
These estimates can be used to sparsify the vector $L(w)$ to $k$ entries or to estimate sum statistics of the general form
(2)  $\displaystyle \sum_{x \in H} L(w_x)$, for a selected domain (subset) $H$ of keys,
using the unbiased estimate $\sum_{x \in H} \widehat{L(w_x)}$ (which involves only the sampled keys of $H$).
The quality of the estimates depends on $L$ and on the weights used for sampling. We measure the quality of these unbiased estimates by the sum over keys of the per-key variance. With both ppswor and priority sampling and $L(w)=w$, this sum is bounded by the respective sum for pps sampling with replacement. The per-key variance bound is
(3)  $\mathrm{Var}[\widehat{w_x}] \le \dfrac{1}{k}\, w_x \, \|w\|_1$,
and the respective bound on the sum is $\frac{1}{k}\|w\|_1^2$. For WOR sampling this can be tightened by replacing the full norm with the tail norm, giving a per-key bound of $\frac{1}{k} w_x \|\mathrm{tail}_k(w)\|_1$ and a respective bound of $\frac{1}{k}\|w\|_1 \|\mathrm{tail}_k(w)\|_1$ on the sum. For skewed distributions $\|\mathrm{tail}_k(w)\|_1 \ll \|w\|_1$, and hence WOR sampling is beneficial. Conveniently, bottom-$k$ estimates for different keys have non-positive correlations, so the variance of sum estimates is at most the respective weighted sum of per-key variances. Note that the per-key variance for a function $L$ of the weight scales with $L(w_x)^2$; WOR (and WR) estimates are more accurate (in terms of normalized variance sum) when the sampling is weighted by $L$ itself.
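To illustrate the estimator (1) and the sum estimate, here is a Python sketch of the standard conditioned inverse-probability estimates for a ppswor bottom-$k$ sample. It assumes the ppswor inclusion probability $1 - e^{-w/\tau}$ conditioned on the threshold $\tau$; the function names are ours and the priority-sampling variant would use a different inclusion probability.

```python
import math


def ppswor_estimate(sample, threshold, L=lambda w: w):
    """Per-key estimates L(w_x) / Pr[x sampled | threshold] for a ppswor
    bottom-k sample; keys outside the sample implicitly estimate to 0."""
    est = {}
    for key, w in sample:
        p_inclusion = 1.0 - math.exp(-w / threshold)   # Pr[w / Exp(1) > threshold]
        est[key] = L(w) / p_inclusion
    return est


def estimate_sum(sample, threshold, L=lambda w: w, predicate=lambda key: True):
    """Estimate sum_{x in H} L(w_x) over the key domain H given by `predicate`."""
    est = ppswor_estimate(sample, threshold, L)
    return sum(v for key, v in est.items() if predicate(key))
```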
2.2 Bottom-$k$ sampling by a power of the frequency
To perform bottom-$k$ sampling of $f^p$ with distribution $D$, we draw $u_x \sim D$ for each key, transform the weights to $f_x^p/u_x$, and return the top-$k$ keys by transformed magnitude. This is equivalent to bottom-$k$ sampling the vector $f$ itself, that is, drawing $u_x \sim D$ and transforming the weights to
(4)  $f'_x := \dfrac{f_x}{u_x^{1/p}}$,
and returning the top-$k$ keys according to $|f'_x|$. The equivalence holds because $|f'_x| = (f_x^p/u_x)^{1/p}$ and $z \mapsto z^{1/p}$ is monotone increasing, so the order of keys is the same. We denote the distribution of $f'$ obtained from the bottom-$k$ transform (4) as the $p$-transformed distribution, and specifically speak of $p$-ppswor when $D = \mathrm{Exp}(1)$ and $p$-priority when $D = U[0,1]$. We use the term $p$-ppswor sample for a bottom-$k$ sample by $f^p$ obtained this way.
The linear transform (4) can be efficiently performed over unaggregated data by using a random hash to represent $u_x$ for keys $x$ and then locally generating an output element for each input element:
(5)  $e = (x, v) \;\longmapsto\; e' = \left(x,\ \dfrac{v}{u_x^{1/p}}\right)$.
The task of sampling by the $p$-th power of frequency on the input is replaced by the task of finding the top-$k$ keys by frequency on the respective output dataset, noting that the output frequency of key $x$ is $f'_x = f_x/u_x^{1/p}$. Therefore, the top-$k$ keys of the output are a bottom-$k$ sample of the input according to $f^p$. Note that we can recover an approximation of the input frequency of a key from an approximate output frequency using the hash, and that the relative error is preserved:
(6)  $\hat{f}_x := u_x^{1/p}\, \hat{f}'_x \quad\Longrightarrow\quad \dfrac{|\hat{f}_x - f_x|}{|f_x|} = \dfrac{|\hat{f}'_x - f'_x|}{|f'_x|}$.
This per-element scaling was proposed in the precision sampling framework of Andoni et al. [6] and was inspired by a technique for frequency moment estimation using stable distributions [45].
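A sketch of the per-element scaling (5), assuming a hash-derived $\mathrm{Exp}(1)$ variable per key so that all elements of the same key share the same $u_x$; the helper names and the use of blake2b are illustrative choices, not the paper's.

```python
import hashlib
import math


def key_exponential(key, seed=0):
    """Deterministic Exp(1) variable u_x derived from a hash of the key."""
    h = hashlib.blake2b(f"{seed}:{key}".encode(), digest_size=8).digest()
    r = (int.from_bytes(h, "big") + 0.5) / 2**64   # uniform in (0, 1)
    return -math.log(r)                             # inverse-CDF of Exp(1)


def transform_element(key, value, p, seed=0):
    """Map an input element (key, value) to the output element (key, value / u_x^{1/p})."""
    u = key_exponential(key, seed)
    return key, value / u ** (1.0 / p)
```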
Generally, finding the top-$k$ frequencies exactly is a task that requires sketch structures of size linear in the number of keys. However, [6] showed that when the output frequencies are obtained from a priority-type transform (applied to arbitrary input frequencies) with $p \le 2$, the top value is likely to be a heavy hitter. Here we refine the analysis and use the more subtle notion of residual heavy hitters [10]. We show that the top-$k$ output frequencies of a $p$-ppswor transform are very likely to be residual heavy hitters (when $p \le 2$) and thus can be found with a sketch whose size grows only linearly with $k$.
2.3 Residual heavy hitters (rHH)
An entry $x$ in a weight vector $v$ is called an $\epsilon$-heavy hitter if $|v_x| \ge \epsilon \|v\|_1$. A heavy hitter with respect to a function $g$ is defined as a key with $g(|v_x|) \ge \epsilon \|g(|v|)\|_1$. When $g(z)=z^2$, we refer to the key as an $\ell_2$ heavy hitter. For $\epsilon$ and $k$, a vector $v$ has $(\epsilon,k)$-residual heavy hitters [10] when each of the top-$k$ keys is "heavy" with respect to the tail of the frequencies starting at the $(k+1)$-st most frequent key, that is, $|v_{(i)}| \ge \epsilon \|\mathrm{tail}_k(v)\|_1$ for all $i \le k$. This is equivalent to $|v_{(k)}| \ge \epsilon \|\mathrm{tail}_k(v)\|_1$. We say that $v$ has rHH with respect to a function $g$ if $g(|v|)$ has rHH. In particular, $v$ has $\ell_2$ rHH if
(7)  $v_{(k)}^2 \ \ge\ \epsilon \sum_{i > k} v_{(i)}^2$.
Popular composable HH sketches have been adapted to rHH and include (see Table 1): (i) sketches designed for positive data elements: a deterministic counter-based variety [60, 56, 58] with an rHH adaptation [10], and the randomized CountMin sketch [35]; (ii) sketches designed for signed data elements, notably CountSketch [15], with an rHH analysis in [52]. With these designs, an $(\epsilon,k)$-rHH sketch provides estimates $\hat{v}_x$ for all keys with error bound
(8)  $|\hat{v}_x - v_x| \ \le\ \epsilon\,\|\mathrm{tail}_k(v)\|$,
where the norm is $\ell_1$ for counter-based sketches and CountMin and $\ell_2$ for CountSketch.
With randomized sketches, the error bound (8) is guaranteed only with some probability $1-\delta$. CountSketch has the advantages of capturing top keys that are $\ell_2$ but not $\ell_1$ heavy hitters and of supporting signed data, but otherwise provides lower accuracy than $\ell_1$ sketches for the same sketch size. Methods also vary in the supported key domains: counters natively work with key strings, whereas randomized sketches work with keys from an integer domain $[n]$ (see Appendix A for further discussion). We use these sketches off-the-shelf through the following operations (a minimal instantiation is sketched after the list):
- Initialize: initialize a sketch structure (with specified parameters and internal randomization).
- Merge: merge two sketches that have the same parameters and internal randomization.
- Process(e): process a data element $e$.
- Estimate(x): return an estimate of the frequency of a key $x$.
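For concreteness, here is a minimal CountSketch-style structure supporting the four operations above. This is a standard construction sketched under our own parameter choices; note that Python's built-in `hash` is process-dependent for strings, so a real implementation would use an explicit seeded hash to keep sketches mergeable across machines.

```python
class CountSketch:
    """Minimal CountSketch: `rows` independent (bucket, sign) hashes over `width` buckets."""

    def __init__(self, rows=5, width=1024, seed=0):
        self.rows, self.width = rows, width
        self.table = [[0.0] * width for _ in range(rows)]
        self.seeds = [(seed, r) for r in range(rows)]

    def _bucket_sign(self, r, key):
        # Illustration only: Python's hash() is process-dependent for strings.
        h = hash((self.seeds[r], key))
        return h % self.width, 1 if (h >> 17) & 1 else -1

    def process(self, key, value):
        for r in range(self.rows):
            b, s = self._bucket_sign(r, key)
            self.table[r][b] += s * value

    def merge(self, other):
        assert (self.rows, self.width, self.seeds) == (other.rows, other.width, other.seeds)
        for r in range(self.rows):
            for b in range(self.width):
                self.table[r][b] += other.table[r][b]

    def estimate(self, key):
        vals = sorted(s * self.table[r][b]
                      for r in range(self.rows)
                      for b, s in [self._bucket_sign(r, key)])
        return vals[len(vals) // 2]   # median over rows
```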
3 WORp Overview
Our WORp methods apply a $p$-ppswor transform (5) to the data elements and (for $p \le 2$) compute an rHH sketch of the output elements. The rHH sketch is then used to produce a sample of $k$ keys.
We would like the rHH parameter $\epsilon$ to be low enough so that, for any input frequencies $f$, the top-$k$ keys by transformed frequency are rHH (satisfy condition (7)) with probability at least $1-\delta$. We refine this desideratum to be conditioned on the permutation of keys in the transformed vector. This conditioning turns out not to further constrain $\epsilon$, and it allows us to provide the success probability uniformly for any potential $k$-sample. Since our sketch size grows inversely with $\epsilon$ (see Table 1), we want to use the maximum value of $\epsilon$ that works. We will be guided by the following quantity:
(9)  $q^* = q^*(k, n, \delta, p)$: the largest $\epsilon$ such that, for every frequency vector $f$ and every permutation $\pi \in \Pi_n$, conditioned on the order of the transformed keys being $\pi$, the top-$k$ transformed frequencies satisfy the rHH condition (7) with parameter $\epsilon$ with probability at least $1-\delta$,
where $\Pi_n$ denotes the set of permutations of $[n]$. If we set the rHH sketch parameter to $q^*$, then using (8), with probability at least $1-\delta$,
(10)  the estimates of the transformed frequencies of all keys have absolute error at most a small constant fraction of $|f'_{(k)}|$, the $k$-th largest transformed magnitude.
We establish the following lower bounds on $q^*$:
Theorem 3.1.
There is a universal constant $c > 0$ such that, for all $k$, $n$, and $\delta \in (0,1)$:
For $p < 2$: $q^*$ is at least $c/k$ times a factor that depends only on $p$ and $\delta$. (11)
For $p = 2$: $q^*$ is at least $c/(k \log n)$ times a factor that depends only on $\delta$. (12)
To set sketch parameters in implementations, we approximate $q^*$ using simulations of what we establish to be a "worst-case" frequency distribution. For this we use the specification of a "worst-case" set of frequencies from the proof of Theorem 3.1 (see Appendix B.1). From the simulations we obtain that very small sketch parameters suffice for practical values of $k$, $\delta$, and $p$.
We analyse a few WORp variants. The first variant returns an exact $p$-ppswor sample, including exact frequencies of the sampled keys, using two passes. We then consider a variant that returns an approximate $p$-ppswor sample in a single pass. The two-pass method uses smaller rHH sketches and efficiently works with keys that are arbitrary strings.
We also provide another rHH-based technique that gives a guaranteed very small variation distance on $k$-tuples in a single pass.
4 Two-pass WORp
Table 2: Composable sketch sizes of the WORp variants, for the four combinations of rHH sketch type (CountSketch, which supports signed values; counters, which require positive values) and $p < 2$ vs. $p = 2$. Sizes are given in words, plus stored key strings where applicable.
We design a two-pass method for ppswor sampling according to $f^p$ for $p \le 2$ (equivalently, a $p$-ppswor sample of $f$); a code sketch of the full pipeline follows the list below:
- Pass I: We compute an rHH sketch $H$ of the transformed data elements
  (13)  $e = (x, v) \mapsto e' = \big(h(x),\ v/u_x^{1/p}\big)$,
  where a hash $h$ is applied to the key when keys have a large or non-integer domain, to facilitate use of CountSketch or to reduce storage with counters. We set the rHH parameter according to Theorem 3.1.
- Pass II: We collect key strings (if needed) and the corresponding exact frequencies for the $c\,k$ keys with the largest estimates $|\hat{f}'_x|$, where $c$ is a constant (see below) and the $\hat{f}'_x$ are the estimates provided by $H$. For this purpose we use a composable top-$(c k)$ sketch structure $T$. The size of $T$ is dominated by storing key strings.
- Producing a $p$-ppswor sample from $T$: We compute the exact transformed frequencies $f'_x$ for the stored keys. Our sample is the set of key-frequency pairs for the top-$k$ stored keys by $|f'_x|$. The threshold $\tau$ is the $(k+1)$-st largest $|f'_x|$ over the stored keys.
- Estimation: We compute per-key estimates as in (1). Plugging in the ppswor inclusion probability, the estimate is $L(f_x)/\big(1 - \exp(-f_x^p/\tau^p)\big)$ for a sampled key $x$ and $0$ for keys that are not sampled.
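A high-level Python sketch of the two-pass pipeline, reusing the helpers sketched earlier (`transform_element`, `key_exponential`, `CountSketch`). The glue code, parameter values, and the explicit key set are illustrative; a streaming implementation would maintain a composable top-$(ck)$ structure instead of materializing all keys.

```python
import heapq


def worp_two_pass(elements, k, p, c=4, seed=0, rows=5, width=4096):
    """Two-pass WORp sketch: pass 1 builds an rHH sketch of the transformed
    elements; pass 2 keeps exact transformed frequencies of the c*k keys with
    the largest estimates; the top-k of those form the p-ppswor sample."""
    # Pass I: rHH sketch of the transformed data.
    sketch = CountSketch(rows=rows, width=width, seed=seed)
    keys = set()
    for key, value in elements:
        tkey, tvalue = transform_element(key, value, p, seed)
        sketch.process(tkey, tvalue)
        keys.add(key)   # illustration only; see the caveat in the lead-in

    # Pass II: exact transformed frequencies of the c*k keys with largest estimates.
    candidates = heapq.nlargest(c * k, keys, key=lambda x: abs(sketch.estimate(x)))
    exact = {key: 0.0 for key in candidates}
    for key, value in elements:          # second pass over the same elements
        if key in exact:
            exact[key] += transform_element(key, value, p, seed)[1]

    # Sample: top-k stored keys by exact transformed magnitude; threshold is the (k+1)-st.
    ranked = sorted(exact.items(), key=lambda kv: -abs(kv[1]))
    sample = [(key, tf * key_exponential(key, seed) ** (1.0 / p))  # recover input frequency
              for key, tf in ranked[:k]]
    threshold = abs(ranked[k][1]) if len(ranked) > k else None
    return sample, threshold
```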
We establish that the method returns the $p$-ppswor sample with probability at least $1-\delta$, propose practical optimizations, and analyze the sketch size:
Theorem 4.1.
The 2-pass method returns a $p$-ppswor sample of size $k$ according to $f^p$ with success probability at least $1-\delta$ and composable sketch sizes as detailed in Table 2. The success probability is defined to be that of returning the exact top-$k$ keys by transformed frequency. The bound applies even when conditioned on the top-$k$ being a particular $k$-tuple.
Proof.
From (10), the estimates $\hat{f}'_x$ of the transformed frequencies are such that, for all keys,
(14)  $|\hat{f}'_x - f'_x| \le \Delta$, where the uniform error bound $\Delta$ is at most a small constant fraction of $|f'_{(k)}|$.
We set the constant $c$ in the second pass so that the following holds:
(15)  The top-$k$ keys by $|f'_x|$ are a subset of the top-$(c k)$ keys by the estimates $|\hat{f}'_x|$.
Note that for any frequency distribution with rHH, it suffices to store sufficiently many keys to have (15). We establish that for our particular transformed distributions, a constant $c$ suffices. For that we use the following:
Lemma 4.1.
A sufficient condition for property (15) is that $|f'_{(ck)}| + 2\Delta \le |f'_{(k)}|$.
Proof.
For keys that are top-$k$ by $|f'|$ we have $|\hat{f}'_x| \ge |f'_{(k)}| - \Delta$, and for keys that are not among the top-$(ck)$ by $|f'|$ we have $|\hat{f}'_x| \le |f'_{(ck)}| + \Delta \le |f'_{(k)}| - \Delta$. Hence every key that is top-$k$ by $|f'|$ has an estimate at least as large as that of any key outside the top-$(ck)$, and therefore all keys that are top-$k$ by $|f'|$ are among the top-$(ck)$ keys by the estimates. ∎
We then use Lemma E.1 to express a "worst-case" distribution for the ratio between the $(ck)$-th and the $k$-th largest transformed magnitudes, and use it (together with Corollary D.2) to show that setting the rHH parameter according to our proof of Theorem 3.1 (Appendices B-D) implies that the conditions guaranteeing the rHH property also bound this ratio away from $1$ for a constant $c$.
Correctness of the final sample follows from property (15): we store the top-$k$ keys in the data according to $|f'|$. To conclude the proof of Theorem 4.1 we need to specify the rHH sketch structure we use. From Theorem 3.1 we obtain a lower bound on $q^*$ and we use it to set the sketch parameter. For our rHH sketch we use CountSketch (which supports signed values) or counters (which require positive values). The top two lines in Table 2 are for CountSketch and the next two lines are for counters. The rHH sketch sizes follow from the parameter setting and Table 1. ∎
4.1 Practical optimizations
We propose practical optimizations that improve efficiency without compromising quality guarantees.
The first optimization reduces the number of keys stored in the second pass: we always store the top-$k$ keys by the estimates $|\hat{f}'_x|$, but beyond that we only store a key if it satisfies
(16)  $|\hat{f}'_x| \ \ge\ |\hat{f}'_{(k)}| - 2\Delta$,
where $|\hat{f}'_{(k)}|$ (the $k$-th largest estimate) and the error bound $\Delta$ are with respect to the data elements processed by the current sketch. We establish correctness:
Lemma 4.2. The modified second-pass structure is a valid composable sketch, and it stores all of the top-$k$ keys by $|f'|$.
Proof.
(i) The structure is a slight modification of a top-$k$ structure. Since $|\hat{f}'_{(k)}|$ can only increase as more distinct keys are processed, the condition (16) only becomes more stringent as we merge sketches and process elements. Therefore, if a key satisfies the condition at some point, it would have satisfied the condition when elements with that key were previously processed, and therefore we can collect exact frequencies.
(ii) From the uniform approximation (14), every estimate is within $\Delta$ of the corresponding exact transformed frequency. Let $y$ be the $k$-th key by $|f'|$. Its estimated transformed frequency is at least $|f'_{(k)}| - \Delta \ge |\hat{f}'_{(k)}| - 2\Delta$. Therefore, if we store all keys satisfying (16), we store the top-$k$ keys by $|f'|$. ∎
A second optimization allows us to extract a larger effective sample from the set of stored keys. This can be done when we can certify that, for some $k' > k$, the top-$k'$ keys by $|f'|$ in the transformed data are stored in the sketch $T$. Using a larger sample is helpful as it can only improve (in expectation) the estimation quality (see, e.g., [28, 31]). To this end, we compute the uniform error bound $\Delta$ (available because the top-$k$ keys by the estimates are stored). Any key in the dataset whose estimate certifies, via $\Delta$, that its exact transformed frequency exceeds the largest possible transformed frequency of an unstored key must itself be stored in $T$. Our final sample contains these keys, with the minimum transformed frequency in this set used as the threshold $\tau$.
5 One-pass WORp
Our 1-pass WORp yields a sample of size $k$ that approximates a $p$-ppswor sample of the same size and provides similar guarantees on estimation quality; a code sketch follows the description below.
- Sketch: For $p \le 2$, compute an rHH sketch of the transformed data elements (5), where the rHH parameter is set according to Theorem 3.1.
- Produce a sample: Our sample includes the $k$ keys with the top-$k$ estimated transformed frequencies $|\hat{f}'_x|$. For each sampled key we include the pair $(x, \hat{f}_x)$, where the approximate frequency $\hat{f}_x$ is computed from $\hat{f}'_x$ according to (6). We include with the sample the threshold $\hat{\tau}$, the $(k+1)$-st largest estimated transformed magnitude.
- Estimation: We treat the sample as a $p$-ppswor sample and compute per-key estimates as in (1), substituting the approximate frequencies for the actual frequencies of the sampled keys and the 1-pass threshold $\hat{\tau}$ for the exact $\tau$. The estimate is $0$ for keys that are not sampled, and for a sampled key $x$ it is
  (17)  $\widehat{L(f_x)} = \dfrac{L(\hat{f}_x)}{1 - \exp\!\big(-|\hat{f}_x|^{p}/\hat{\tau}^{\,p}\big)}$.
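Correspondingly, a Python sketch of the one-pass variant, again reusing the earlier helpers; here both the sample membership and the reported frequencies come from sketch estimates, and the same caveat about the explicit key set applies.

```python
import heapq
import math


def worp_one_pass(elements, k, p, seed=0, rows=7, width=16384):
    """One-pass WORp: sample = top-k keys by estimated transformed frequency;
    the frequencies and the threshold are themselves estimates."""
    sketch = CountSketch(rows=rows, width=width, seed=seed)
    keys = set()
    for key, value in elements:
        _, tvalue = transform_element(key, value, p, seed)
        sketch.process(key, tvalue)
        keys.add(key)   # illustration only; see the two-pass caveat

    ranked = heapq.nlargest(k + 1, keys, key=lambda x: abs(sketch.estimate(x)))
    threshold = abs(sketch.estimate(ranked[k])) if len(ranked) > k else None
    sample = []
    for key in ranked[:k]:
        tf_hat = sketch.estimate(key)                               # estimate of f'_x
        f_hat = tf_hat * key_exponential(key, seed) ** (1.0 / p)    # estimate of f_x via (6)
        sample.append((key, f_hat))
    return sample, threshold


def one_pass_estimates(sample, threshold, p, L=lambda f: f):
    """Per-key estimates as in (17), treating the output as a p-ppswor sample."""
    return {key: L(f_hat) / (1.0 - math.exp(-abs(f_hat) ** p / threshold ** p))
            for key, f_hat in sample}
```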
We relate the quality of the estimates to those of a perfect $p$-ppswor sample. Since our 1-pass estimates are biased (unlike the unbiased perfect $p$-ppswor estimates), we consider both bias and variance. The proof is provided in Appendix G.
Theorem 5.1.
Let $L$ be such that small relative perturbations of its argument result in correspondingly small relative perturbations of its value. Let $\widehat{L(f_x)}$ be the per-key estimates obtained with a one-pass WORp sample and let $\widetilde{L(f_x)}$ be the respective estimates obtained with a (perfect) $p$-ppswor sample. Then the one-pass estimates have small relative bias, and their mean squared error exceeds that of the perfect estimates by at most a small relative factor (both controlled by the accuracy of the rHH sketch).
Note that the assumption on $L$ holds for power functions $L(z) = z^q$ with constant $q$. Also note that the bias bound implies a corresponding contribution to the relative error of all sum estimates.
6 One-pass Total Variation Distance Guarantee
We provide another 1-pass method, based on the combined use of rHH and known perfect WR sampling sketches [50], to select a $k$-tuple with polynomially small total variation (TV) distance from the $k$-tuple distribution of a perfect $p$-ppswor sample. The method uses multiple perfect samplers (each providing a single WR sample) and an rHH sketch; the number of samplers is chosen so that the variation distance is polynomially small. The perfect samplers are processed in sequence, with prior selections "subtracted" from the linear sketch (using approximate frequencies provided by the rHH sketch) to uncover fresh samples. As with WORp, exact frequencies of sampled keys can be obtained in a second pass or approximated using larger sketches in a single pass. Details are provided in Appendix F.
Theorem 6.1.
There is a 1-pass method, via composable sketches whose size grows linearly with the sample size $k$ (up to factors polylogarithmic in $n$), that returns a $k$-tuple of keys such that the total variation distance from the $k$-tuples produced by a perfect $p$-ppswor sample is at most $n^{-c}$ for an arbitrarily large constant $c$. The method applies to keys from a domain $[n]$ and to signed values with magnitudes and intermediate frequencies that are polynomial in $n$.
7 Experiments
We implemented 2-pass and 1-pass WORp in Python using CountSketch as our rHH sketch. Figure 2 reports estimates of the rank-frequency distribution obtained with 1-pass and 2-pass WORp and with perfect WOR ($p$-ppswor) and perfect WR samples (shown for reference). For best comparison, all WOR methods use the same randomization of the $p$-ppswor transform. Table 3 reports normalized root averaged squared errors (NRMSE) on example statistics. As expected, 2-pass WORp and perfect $p$-ppswor are similar, and WR samples are less accurate with larger skew or on the tail. Note that current state-of-the-art sketching methods are not more efficient for WR sampling than for WOR sampling, and estimation quality with perfect methods is only shown for reference. We can also see that 1-pass WORp performs well, although it requires more accuracy (larger sketch size) since it works with estimated weights (reported results are with a fixed CountSketch size for all methods).
| | perfect WR | perfect WOR | 1-pass WORp | 2-pass WORp |
|---|---|---|---|---|
| | 1.16e-04 | 2.09e-11 | 1.06e-03 | 2.08e-11 |
| | 7.96e-05 | 1.26e-07 | 1.14e-02 | 1.25e-07 |
| | 9.51e-03 | 1.60e-03 | 2.79e-02 | 1.60e-03 |
| | 3.59e-01 | 5.73e-03 | 5.14e-03 | 5.72e-03 |
| | 3.45e-04 | 7.34e-10 | 5.11e-05 | 7.38e-10 |
Conclusion
We present novel composable sketches for without-replacement (WOR) sampling, based on "reducing" the sampling problem to a heavy hitters (HH) problem. The reduction, which is simple and practical, allows us to use existing implementations of HH sketches to perform WOR sampling. Moreover, streaming HH sketches that support time decay (for example, sliding windows [8]) provide a respective time-decay variant of sampling. We present two approaches: WORp, based on a bottom-$k$ transform, and another technique based on "perfect" with-replacement sampling sketches, which provides 1-pass WOR samples with negligible variation distance to a true sample. Our methods open the door to a wide range of future applications. Our WORp schemes produce bottom-$k$ samples (aka bottom-$k$ sketches) with respect to a specified randomization $u_x$ over the key domain (with 1-pass WORp we obtain approximate bottom-$k$ samples). Samples of different datasets, different values of $p$, or different time-decay functions that are generated with the same randomization are coordinated [12, 70, 65, 17, 32, 13]. Coordination is a desirable and powerful property: samples are locality sensitive (LSH) and change little with small changes in the dataset [12, 70, 46, 33, 19]. This LSH property allows for a compact representation of multiple samples, efficient updating, and sketch-based similarity searches. Moreover, coordinated samples (sketches) facilitate powerful estimators for multi-set statistics and similarity measures such as weighted Jaccard similarity, min or max sums, and one-sided distance norms [17, 13, 28, 31, 29, 30, 18].
Acknowledgments:
D. Woodruff would like to thank partial support from the Office of Naval Research (ONR) grant N00014-18-1-2562, and the National Science Foundation (NSF) under Grant No. CCF-1815840.
References
- [1] Pankaj K. Agarwal, Graham Cormode, Zengfeng Huang, Jeff M. Phillips, Zhewei Wei, and Ke Yi. Mergeable summaries. ACM Trans. Database Syst., 38(4):26:1–26:28, 2013.
- [2] Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. Qsgd: Communication-efficient sgd via gradient quantization and encoding. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 1709–1720. Curran Associates, Inc., 2017.
- [3] N. Alon, N. Duffield, M. Thorup, and C. Lund. Estimating arbitrary subset sums with few probes. In Proceedings of the 24th ACM Symposium on Principles of Database Systems, pages 317–325, 2005.
- [4] N. Alon, Y. Matias, and M. Szegedy. The space complexity of approximating the frequency moments. J. Comput. System Sci., 58:137–147, 1999.
- [5] Alexandr Andoni, Khanh Do Ba, Piotr Indyk, and David P. Woodruff. Efficient sketches for earth-mover distance, with applications. In 50th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2009, October 25-27, 2009, Atlanta, Georgia, USA, pages 324–330. IEEE Computer Society, 2009.
- [6] Alexandr Andoni, Robert Krauthgamer, and Krzysztof Onak. Streaming algorithms via precision sampling. In IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS 2011, Palm Springs, CA, USA, October 22-25, 2011, 2011.
- [7] Ziv Bar-Yossef, Thathachar S Jayram, Ravi Kumar, and D Sivakumar. An information statistics approach to data stream and communication complexity. Journal of Computer and System Sciences, 68(4):702–732, 2004.
- [8] R. Ben-Basat, G. Einziger, R. Friedman, and Y. Kassner. Heavy hitters in streams and sliding windows. In IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications, 2016.
- [9] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In ICML, 2009.
- [10] Radu Berinde, Graham Cormode, Piotr Indyk, and Martin J. Strauss. Space-optimal heavy hitters with strong error bounds. In Proceedings of the Twenty-Eighth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, PODS ’09. Association for Computing Machinery, 2009.
- [11] Vladimir Braverman, Stephen R. Chestnut, Nikita Ivkin, Jelani Nelson, Zhengyu Wang, and David P. Woodruff. Bptree: An l2 heavy hitters algorithm using constant memory. In Proceedings of the 36th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS 2017, Chicago, IL, USA, May 14-19, 2017, 2017.
- [12] K. R. W. Brewer, L. J. Early, and S. F. Joyce. Selecting several samples from a single population. Australian Journal of Statistics, 14(3):231–239, 1972.
- [13] A. Z. Broder. On the resemblance and containment of documents. In Proceedings of the Compression and Complexity of Sequences, pages 21–29. IEEE, 1997.
- [14] M. T. Chao. A general purpose unequal probability sampling plan. Biometrika, 69(3):653–656, 1982.
- [15] M. Charikar, K. Chen, and M. Farach-Colton. Finding frequent items in data streams. In Proceedings of the International Colloquium on Automata, Languages and Programming (ICALP), pages 693–703, 2002.
- [16] Yung-Yu Chung, Srikanta Tirthapura, and David P. Woodruff. A simple message-optimal algorithm for random sampling from a distributed stream. IEEE Trans. Knowl. Data Eng., 28(6):1356–1368, 2016.
- [17] E. Cohen. Size-estimation framework with applications to transitive closure and reachability. J. Comput. System Sci., 55:441–453, 1997.
- [18] E. Cohen. Distance queries from sampled data: Accurate and efficient. In KDD. ACM, 2014. full version: http://arxiv.org/abs/1203.4903.
- [19] E. Cohen. Multi-objective weighted sampling. In HotWeb. IEEE, 2015. full version: http://arxiv.org/abs/1509.07445.
- [20] E. Cohen. Stream sampling for frequency cap statistics. In KDD. ACM, 2015. full version: http://arxiv.org/abs/1502.05955.
- [21] E. Cohen. Stream sampling framework and application for frequency cap statistics. ACM Trans. Algorithms, 14(4):52:1–52:40, 2018. preliminary version published in KDD 2015. arXiv:http://arxiv.org/abs/1502.05955.
- [22] E. Cohen, G. Cormode, and N. Duffield. Don’t let the negatives bring you down: Sampling from streams of signed updates. In Proc. ACM SIGMETRICS/Performance, 2012.
- [23] E. Cohen, N. Duffield, C. Lund, M. Thorup, and H. Kaplan. Efficient stream sampling for variance-optimal estimation of subset sums. SIAM J. Comput., 40(5), 2011.
- [24] E. Cohen and O. Geri. Sampling sketches for concave sublinear functions of frequencies. In NeurIPS, 2019.
- [25] E. Cohen and H. Kaplan. Summarizing data using bottom-k sketches. In ACM PODC, 2007.
- [26] E. Cohen and H. Kaplan. Sketch-based estimation of subpopulation-weight. Technical Report 802.3448, CORR, 2008.
- [27] E. Cohen and H. Kaplan. Tighter estimation using bottom-k sketches. In Proceedings of the 34th VLDB Conference, 2008.
- [28] E. Cohen and H. Kaplan. Leveraging discarded samples for tighter estimation of multiple-set aggregates. In ACM SIGMETRICS, 2009.
- [29] E. Cohen and H. Kaplan. Get the most out of your sample: Optimal unbiased estimators using partial information. In Proc. of the 2011 ACM Symp. on Principles of Database Systems (PODS 2011). ACM, 2011. full version: http://arxiv.org/abs/1203.4903.
- [30] E. Cohen and H. Kaplan. What you can do with coordinated samples. In The 17th. International Workshop on Randomization and Computation (RANDOM), 2013. full version: http://arxiv.org/abs/1206.5637.
- [31] E. Cohen, H. Kaplan, and S. Sen. Coordinated weighted sampling for estimating aggregates over multiple weight assignments. VLDB, 2, 2009. full version: http://arxiv.org/abs/0906.4560.
- [32] E. Cohen, Y.-M. Wang, and G. Suri. When piecewise determinism is almost true. In Proc. Pacific Rim International Symposium on Fault-Tolerant Systems, pages 66–71, December 1995.
- [33] Edith Cohen, Graham Cormode, Nick Duffield, and Carsten Lund. On the tradeoff between stability and fit. ACM Trans. Algorithms, 13(1), 2016.
- [34] Edith Cohen, Ofir Geri, and Rasmus Pagh. Composable sketches for functions of frequencies: Beyond the worst case. In ICML, 2020.
- [35] G. Cormode and S. Muthukrishnan. An improved data stream summary: The count-min sketch and its applications. J. Algorithms, 55(1), 2005.
- [36] N. Duffield, C. Lund, and M. Thorup. Estimating flow distributions from sampled flow statistics. In Proceedings of the ACM SIGCOMM’03 Conference, pages 325–336, 2003.
- [37] N. Duffield, M. Thorup, and C. Lund. Priority sampling for estimating arbitrary subset sums. J. Assoc. Comput. Mach., 54(6), 2007.
- [38] C. Estan and G. Varghese. New directions in traffic measurement and accounting. In SIGCOMM. ACM, 2002.
- [39] Gereon Frahling, Piotr Indyk, and Christian Sohler. Sampling in dynamic data streams and applications. Int. J. Comput. Geom. Appl., 18(1/2):3–28, 2008.
- [40] R. Gemulla, W. Lehner, and P. J. Haas. A dip in the reservoir: Maintaining sample synopses of evolving datasets. In VLDB, 2006.
- [41] P. Gibbons and Y. Matias. New sampling-based summary statistics for improving approximate query answers. In SIGMOD. ACM, 1998.
- [42] H. O. Hartley and J. N. K. Rao. Sampling with unequal probabilities and without replacement. Annals of Mathematical Statistics, 33(2), 1962.
- [43] N. Hohn and D. Veitch. Inverting sampled traffic. In Proceedings of the 3rd ACM SIGCOMM conference on Internet measurement, pages 222–233, 2003.
- [44] D. G. Horvitz and D. J. Thompson. A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association, 47(260):663–685, 1952.
- [45] P. Indyk. Stable distributions, pseudorandom generators, embeddings and data stream computation. In Proc. 41st IEEE Annual Symposium on Foundations of Computer Science, pages 189–197. IEEE, 2001.
- [46] P. Indyk and R. Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In Proc. 30th Annual ACM Symposium on Theory of Computing, pages 604–613. ACM, 1998.
- [47] Nikita Ivkin, Daniel Rothchild, Enayat Ullah, Vladimir Braverman, Ion Stoica, and Raman Arora. Communication-efficient distributed SGD with sketching. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, 2019.
- [48] Svante Janson. Tail bounds for sums of geometric and exponential variables. http://www2.math.uu.se/~svante/papers/sj328.pdf, 2017.
- [49] Rajesh Jayaram, Gokarna Sharma, Srikanta Tirthapura, and David P. Woodruff. Weighted reservoir sampling from distributed streams. In Dan Suciu, Sebastian Skritek, and Christoph Koch, editors, Proceedings of the 38th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS 2019, Amsterdam, The Netherlands, June 30 - July 5, 2019, pages 218–235. ACM, 2019.
- [50] Rajesh Jayaram and David P. Woodruff. Perfect lp sampling in a data stream. In FOCS, 2018.
- [51] T. S. Jayram and David P. Woodruff. The data stream space complexity of cascaded norms. In 50th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2009, October 25-27, 2009, Atlanta, Georgia, USA, pages 765–774. IEEE Computer Society, 2009.
- [52] H. Jowhari, M. Saglam, and G. Tardos. Tight bounds for Lp samplers, finding duplicates in streams, and related problems. In PODS, 2011.
- [53] D. E. Knuth. The Art of Computer Programming, Vol 2, Seminumerical Algorithms. Addison-Wesley, 1st edition, 1968.
- [54] Zaoxing Liu, Ran Ben-Basat, Gil Einziger, Yaron Kassner, Vladimir Braverman, Roy Friedman, and Vyas Sekar. Nitrosketch: robust and general sketch-based monitoring in software switches. In Proceedings of the ACM Special Interest Group on Data Communication, SIGCOMM 2019, Beijing, China, August 19-23, 2019, pages 334–350, 2019.
- [55] Zaoxing Liu, Antonis Manousis, Gregory Vorsanger, Vyas Sekar, and Vladimir Braverman. One sketch to rule them all: Rethinking network flow monitoring with univmon. In SIGCOMM, 2016.
- [56] G. Manku and R. Motwani. Approximate frequency counts over data streams. In International Conference on Very Large Databases (VLDB), pages 346–357, 2002.
- [57] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. Aguera y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research. PMLR, 2017.
- [58] A. Metwally, D. Agrawal, and A. El Abbadi. Efficient computation of frequent and top-k elements in data streams. In ICDT, 2005.
- [59] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.
- [60] J. Misra and D. Gries. Finding repeated elements. Technical report, Cornell University, 1982.
- [61] M. Monemizadeh and D. P. Woodruff. 1-pass relative-error $\ell_p$-sampling with applications. In Proc. 21st ACM-SIAM Symposium on Discrete Algorithms. ACM-SIAM, 2010.
- [62] Rajeev Motwani and Prabhakar Raghavan. Randomized algorithms. In Algorithms and Theory of Computation Handbook. 1999.
- [63] E. Ohlsson. Sequential poisson sampling from a business register and its application to the swedish consumer price index. Technical Report 6, Statistics Sweden, 1990.
- [64] E. Ohlsson. Sequential poisson sampling. J. Official Statistics, 14(2):149–162, 1998.
- [65] E. Ohlsson. Coordination of pps samples over time. In The 2nd International Conference on Establishment Surveys, pages 255–264. American Statistical Association, 2000.
- [66] J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. In EMNLP, 2014.
- [67] Eric Price. Efficient sketches for the set query problem. In Dana Randall, editor, Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco, California, USA, January 23-25, 2011, pages 41–56. SIAM, 2011.
- [68] B. Rosén. Asymptotic theory for successive sampling with varying probabilities without replacement, I. The Annals of Mathematical Statistics, 43(2):373–397, 1972.
- [69] B. Rosén. Asymptotic theory for order sampling. J. Statistical Planning and Inference, 62(2):135–158, 1997.
- [70] P. J. Saavedra. Fixed sample size pps approximations with a permanent random number. In Proc. of the Section on Survey Research Methods, pages 697–700, Alexandria, VA, 1995. American Statistical Association.
- [71] Sebastian U. Stich, Jean-Baptiste Cordonnier, and Martin Jaggi. Sparsified sgd with memory. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS’18. Curran Associates Inc., 2018.
- [72] Y. Tillé. Sampling Algorithms. Springer-Verlag, New York, 2006.
Appendix A Properties of rHH sketches
Sketches for heavy hitters on datasets with positive values:
These include the deterministic counter-based Misra-Gries [60, 1], Lossy Counting [56], and Space Saving [58] sketches and the randomized Count-Min sketch [35]. A sketch of size $O(1/\epsilon)$ provides frequency estimates with absolute error at most $\epsilon \|v\|_1$. Berinde et al. [10] provide a counter-based sketch of size $O(k/\epsilon)$ that provides absolute error at most $(\epsilon/k)\,\|\mathrm{tail}_k(v)\|_1$.
Sketches for heavy hitters on datasets with signed values:
Pioneered by CountSketch [15]: a sketch with $O(\epsilon^{-1}\log(n/\delta))$ counters provides, with confidence $1-\delta$, estimates whose squared error is at most $\epsilon \|v\|_2^2$. For rHH, a CountSketch with $O((k/\epsilon)\log(n/\delta))$ counters provides estimates for all keys whose squared error is at most $(\epsilon/k)\,\|\mathrm{tail}_k(v)\|_2^2$. These bounds were further refined for rHH in [52]. The dependence on $\log n$ was improved in [11] for insertion-only streams. Unlike the case for counter-based sketches, the estimates produced by CountSketch are unbiased, a helpful property for estimation tasks.
Obtaining the rHH keys
Keys can be arbitrary strings (search queries, URLs, terms) or integers from a domain $[n]$ (parameters in an ML model). Keys in the form of strings can be handled by hashing them to an integer domain, but we also discuss applications that require the sketch to return rHH keys in their string form. Counter-based sketches store keys explicitly, and the stored keys can be arbitrary strings; the estimates are positive for stored keys and $0$ for other keys, and the rHH keys are contained in the stored keys. The randomized rHH sketches (CountSketch and CountMin) are designed for keys that are integers in $[n]$. The bare sketch does not explicitly store keys. The rHH keys can be recovered by enumerating over $[n]$ and retaining the keys with the largest estimates. Alternatively, when streaming (performing one sequential pass over elements) we can maintain an auxiliary structure that holds the key strings with the current top-$k$ estimates [15].
With general composable sketches, key strings can be handled using an auxiliary structure that increases the sketch size by a factor linear in string length. This is inefficient compared with sketches that natively store the string. Alternatively, a two-pass method can be used with the first pass computing an rHH sketch for a hashed numeric representation and a second pass is used to obtain the key strings of hashed representations with high estimates.
Recovering (approximate) frequencies of rHH keys
For our application, we need approximate or exact frequencies of the rHH keys. The estimates provided by an rHH sketch come with absolute error (statistical) guarantees (see Table 1). One approach is to recover exact frequencies in a second pass. We can also obtain more accurate estimates (with small relative error) by using a finer rHH sketch with a smaller error parameter.
Testing for failure
Recall that the dataset may not have rHH. We can test whether one of the $k$ largest estimated frequencies, raised to the relevant power, is below the specified error bound of the sketch. If so, we declare "failure."
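A sketch of this test, under the assumption that the caller supplies the top-$k$ estimates and the sketch's specified error bound (the function name and signature are hypothetical):

```python
def rhh_failure(top_k_estimates, error_bound, power=1.0):
    """Declare failure if the smallest of the k largest estimated frequencies,
    raised to `power`, does not exceed the sketch's specified error bound."""
    return min(abs(v) ** power for v in top_k_estimates) <= error_bound
```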
Appendix B Overview of the proof of Theorem 3.1
For a vector $f$, a permutation $\pi$, and $p$, let the random vector $f'$ be a $p$-ppswor transform (4) of $f$ conditioned on the event that the order of the keys by transformed magnitude is $\pi$. For $k$ and $n$, we define the distribution (18) of the ratio between the residual tail of the (appropriately powered) transformed magnitudes and the corresponding power of the $(k+1)$-st largest transformed magnitude. Note (19) that, for any $f$ and $\pi$, the rHH condition (7) holds with parameter $\epsilon$ whenever this ratio is at most $1/\epsilon$. Therefore, tail bounds on the ratio that hold for any $f$ and $\pi$ can be used to establish the claim.
We now define another distribution that does not depend on $f$ and $\pi$:
Definition B.1.
For $k$ and $n$ we define the distribution $R$ as follows:
$R \ :=\ \displaystyle\sum_{j=k+1}^{n} \left( \frac{\sum_{i=1}^{k+1} E_i}{\sum_{i=1}^{j} E_i} \right)^{\alpha}$,
where $E_1, \ldots, E_n$ are i.i.d. $\mathrm{Exp}(1)$ and the exponent $\alpha \ge 1$ is determined by $p$ and by the norm used by the rHH sketch.
The proof of Theorem 3.1 will follow using the following two components:
- (i) We show (Section C) that for any $f$ and permutation $\pi$, the ratio distribution (18) is statistically dominated by $R$.
- (ii) We establish (Section D) tail bounds on $R$.
Because of the domination, the tail bounds on $R$ provide corresponding tail bounds on the ratio (18) for any $f$ and $\pi$. Together with (19), we use the tail bounds to conclude the proof of Theorem 3.1.
Moreover, the domination relation is tight in the sense that for some $f$ and $\pi$ the ratio distribution is very close to $R$: for example, for distributions with a few dominant keys that are placed first by $\pi$, or for uniform distributions (as $n$ grows).
The tail bounds (and hence the claim of Theorem 3.1) also hold without the conditioning on the permutation.
Lemma B.2.
The tail bounds also hold for the unconditional ratio distribution.
Proof.
The unconditional distribution is a convex combination of the conditional distributions: for each permutation $\pi$, let $\rho_\pi$ be the probability that we obtain this permutation by successive weighted sampling without replacement; then the unconditional distribution is the $\rho_\pi$-weighted mixture of the conditional ones (20).
Since tail bounds hold for each term, they hold for the combination. ∎
B.1 Approximating $q^*$ by simulations
The quantity $q^*$ is the solution of equation (21), which equates the upper tail of the worst-case ratio distribution at level $1/q^*$ with the allowed failure probability $\delta$.
We can approximate $q^*$ by drawing many i.i.d. copies of the worst-case ratio, taking the $(1-\delta)$-quantile of the empirical distribution, and returning its inverse.
From simulations we obtain that small values of the sketch parameter suffice for the sample sizes and parameters used in practice.
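A Monte Carlo sketch of this approximation. The quantile/inversion step is generic; the ratio statistic shown uses a uniform frequency vector as a stand-in for the worst case and an $\ell_1$-style ratio, so the power/norm and the choice of frequency vector should be adjusted to match the actual rHH sketch and the worst-case specification in the proof.

```python
import numpy as np


def approx_qstar(ratio_sampler, delta, trials=2000, seed=0):
    """Monte Carlo approximation of q*: draw i.i.d. copies of the worst-case
    ratio, take the (1 - delta) empirical quantile, and return its inverse.
    Increase `trials` for more accuracy."""
    rng = np.random.default_rng(seed)
    ratios = np.array([ratio_sampler(rng) for _ in range(trials)])
    return 1.0 / np.quantile(ratios, 1.0 - delta)


def uniform_ratio(rng, n=10_000, k=16, p=2.0):
    """Ratio of the residual tail to the k-th largest transformed value for a
    uniform frequency vector (all frequencies 1) under the p-ppswor transform."""
    t = np.sort(1.0 / rng.exponential(1.0, size=n) ** (1.0 / p))[::-1]
    return t[k:].sum() / t[k - 1]


# Example usage (illustrative parameters):
# eps = approx_qstar(uniform_ratio, delta=0.1)
```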
Appendix C Domination of the ratio distribution
Lemma C.1 (Domination).
For any permutation $\pi$, frequency vector $f$, $p \le 2$, $k$, and $n$, the ratio distribution (18) is dominated by $R$. That is, for all $t$, the probability that the ratio exceeds $t$ is at most $\Pr[R \ge t]$ (22).
Proof.
Assume without loss of generality that $\pi$ is the identity, that is, the entries of $f^p$ are indexed in decreasing order of magnitude. Let $W_i := \sum_{j \ge i} f^p_{(j)}$ denote the suffix weights. The quantities $f'^{\,p}_{(1)} \ge f'^{\,p}_{(2)} \ge \cdots$ are the transformed weights of a ppswor sample of $f^p$ conditioned on the order $\pi$. We use properties of the exponential distribution (see a similar use in [26]) to express their joint distribution via the following set of independent random variables: $E_i \sim \mathrm{Exp}(W_i)$, independent across $i$.
We have:
(23)  $\dfrac{1}{f'^{\,p}_{(i)}} \ =\ \displaystyle\sum_{j=1}^{i} E_j$.
To see this, recall that $f'^{\,p}_{(1)}$ is the inverse of the minimum of independent exponential random variables with parameters $f^p_{(j)}$, and thus is the inverse of an exponential random variable with parameter equal to their sum $W_1$. Therefore, $1/f'^{\,p}_{(1)} \sim \mathrm{Exp}(W_1)$. From memorylessness, the difference between the $(i+1)$-st smallest inverse and the $i$-th smallest is an exponential random variable with distribution $\mathrm{Exp}(W_{i+1})$. Therefore, the $i$-th smallest inverse has the claimed distribution (23).
We are now ready to express the random variable that is the ratio (18) in terms of the independent random variables $E_j$:
(24)  each term of the residual sum, divided by the corresponding power of the $(k+1)$-st largest entry, equals $\left(\dfrac{\sum_{i=1}^{k+1} E_i}{\sum_{i=1}^{j} E_i}\right)^{\alpha}$ for the respective $j > k$.
We rewrite this using i.i.d. random variables $\tilde{E}_i \sim \mathrm{Exp}(1)$, recalling that for any $\lambda > 0$, $\mathrm{Exp}(\lambda)$ has the same distribution as $\tilde{E}/\lambda$. Then $E_i = \tilde{E}_i / W_i$.
We next provide a simpler distribution that dominates the distribution of the ratio. Since the suffix weights $W_i$ are non-increasing, we have $E_i \le \tilde{E}_i / W_{k+1}$ for $i \le k+1$ and $E_i \ge \tilde{E}_i / W_{k+1}$ for $i \ge k+1$. Thus, each term of (24) is dominated by the corresponding term with the $E_i$ replaced by the i.i.d. $\tilde{E}_i$ (25). This holds in particular for each term in the RHS of (24). Therefore we obtain that the ratio is dominated by $R$.
∎
Appendix D Tail bounds on $R$
We establish the following upper tail bounds on the distribution $R$:
Theorem D.1 (Concentration of $R$).
There is a constant $c$ such that for any $k$, $n$, and $\delta \in (0,1)$:
For $p < 2$: with probability at least $1-\delta$, $R = O(k)$, with the hidden factor depending only on $p$ and $\delta$. (26)
For $p = 2$: with probability at least $1-\delta$, $R = O(k \log n)$, with the hidden factor depending only on $\delta$. (27)
We start with a "back of the envelope" calculation to provide intuition: replace the exponential random variables in $R$ (see Definition B.1) by their expectation $1$, so that the $j$-th term becomes $((k+1)/j)^{\alpha}$. For $p < 2$ (exponent $\alpha > 1$) the resulting sum is $O(k)$, while for $p = 2$ (exponent $\alpha = 1$) it is the harmonic-type sum $\approx k \ln(n/k)$. We will see that we can expect the sums not to deviate too far from these values.
The sum of $m$ i.i.d. $\mathrm{Exp}(1)$ random variables has an Erlang distribution (shape $m$, rate $1$), with expectation $m$ and variance $m$. We will use the following standard Erlang tail bounds [48]:
Lemma D.1.
Let $S$ be the sum of $m$ i.i.d. $\mathrm{Exp}(1)$ random variables. For $\lambda \ge 1$, $\Pr[S \ge \lambda m] \le e^{-m(\lambda - 1 - \ln \lambda)}$, and for $0 < \lambda \le 1$, $\Pr[S \le \lambda m] \le e^{-m(\lambda - 1 - \ln \lambda)}$.
Proof of Theorem D.1.
We bound the probability of a “bad event" which we define as the numerator being “too large” and denominators being too “small.” More formally, the numerator is the sum and we define a bad event as . Substituting and in the upper tail bounds from Lemma D.1, we have that the probability of this bad event is bounded by
(28) |
The denominators are prefix sums of of the sequence of random variables. We consider a partition the sequence to consecutive stretches of size
We denote by the sum of stretch . Note that are independent random variables. We define a bad event as the probability that for some , . From the lower tail bound of Lemma D.1, we have
(29) |
The combined probability of the union of these bad events (for the numerator and all stretches) is at most .
We are now ready to compute probabilistic upper bound on the ratios when there is no bad event
For we have . For , we have . ∎
From the proof of Theorem D.1 we obtain:
Corollary D.2.
There is a constant $c$ such that, when there are no "bad events" in the sense of the proof of Theorem D.1, the $(ck)$-th largest transformed magnitude is at most half of the $k$-th largest.
Proof.
With no bad events, the prefix sums $\sum_{i \le k} E_i$ and $\sum_{i \le ck} E_i$ are within constant factors of their expectations $k$ and $ck$. By (23), the ratio of the $(ck)$-th to the $k$-th largest transformed magnitude is then at most a constant times $(1/c)^{1/p}$. Solving for $c$ (a sufficiently large constant depending on $p$) we obtain a ratio of at most $1/2$. ∎
Appendix E Ratio of magnitudes of transformed weights
For $c > 1$ we consider the distribution of the ratio between the $(ck)$-th and the $k$-th largest transformed magnitudes:
Lemma E.1.
For any $f$, $\pi$, $p$, and $c$, the distribution of $|f'_{(ck)}| / |f'_{(k)}|$ is dominated by the distribution of
(30)  $\left(\dfrac{\sum_{i=1}^{k} E_i}{\sum_{i=1}^{ck} E_i}\right)^{1/p}$,
where $E_1, E_2, \ldots$ are i.i.d. $\mathrm{Exp}(1)$.
Proof.
Using the representation (23) and writing $E_i = \tilde{E}_i / W_i$ with the $\tilde{E}_i$ i.i.d. $\mathrm{Exp}(1)$, note that $1/W_i \le 1/W_k$ for $i \le k$ and $1/W_i \ge 1/W_k$ for $i \ge k$. Therefore, the ratio of the prefix sums, and hence the ratio of the transformed magnitudes, is dominated by the corresponding ratio with the i.i.d. variables $\tilde{E}_i$. ∎
Appendix F 1-pass with total variation distance on the sample $k$-tuple: upper and lower bounds
Perfect ppswor returns each ordered $k$-tuple of keys with a certain probability. Recall that the ppswor distribution is equivalent to successive weighted sampling without replacement. It is also equivalent to successive sampling with replacement if we "skip" repetitions until we obtain $k$ distinct keys.
With $p$-ppswor and unaggregated data, the weights are the $p$-th powers of the frequencies. The WORp 1-pass method returns an approximate $p$-ppswor sample in terms of estimation quality and per-key inclusion probability, but the TV distance on $k$-tuples can be large.
We present here another 1-pass method that returns a $k$-tuple with a polynomially small TV distance from $p$-ppswor.
Theorem F.1.
Let $p \le 2$. There is a $1$-pass turnstile streaming algorithm, using a number of bits of memory that grows linearly with $k$ (up to factors polylogarithmic in $n$), which, given a stream of updates to an underlying vector with polynomially bounded entries, outputs a set $S$ of size $k$ such that the distribution of $S$ has variation distance at most $n^{-c}$ from the distribution of a size-$k$ sample without replacement from the distribution that selects key $x$ with probability proportional to $|f_x|^p$, where $c$ is an arbitrarily large constant.
Proof.
The algorithm is $1$-pass and works in a turnstile stream, given an rHH method and perfect $\ell_p$ single-sampler methods that are themselves $1$-pass turnstile algorithms. We use the rHH method of [52] and the perfect $\ell_p$ single-sampler method of [50]; each sampler uses a number of bits of memory that is polylogarithmic in $n$ (with a larger polylogarithmic factor for $p = 2$ than for $p < 2$). The perfect single sampler of [50] can output Fail with constant probability, but we can repeat it and output the first sample found, so that it outputs Fail with probability at most $n^{-c}$ for an arbitrarily large constant $c$; consequently, we can assume it never outputs Fail (by, say, always outputting a fixed index when Fail occurs). This gives us the claimed total memory.
We next state the properties of these subroutines. The rHH method we use satisfies: with probability $1 - n^{-c}$ for an arbitrarily large constant $c$, simultaneously for all keys, it outputs an estimate whose error is within the rHH bound. We assume this event occurs and add $n^{-c}$ to our overall variation distance.
The next property concerns the perfect $\ell_p$ single samplers we use. Each returns an index whose distribution has variation distance at most $n^{-c}$ from the distribution that selects $x$ with probability proportional to $|f_x|^p$. Here $c$ is an arbitrarily large constant of our choice.
We next analyze our procedure for producing a sample. Consider the joint distribution of the outputs of the successive samplers. The samplers use independent randomness; however, the input to a later sampler may depend on the randomness of the earlier ones (through the subtracted selections). By definition, conditioned on a later sampler not outputting any previously selected index, its output is independent of the earlier outputs, and moreover its distribution has variation distance $n^{-c}$ from the distribution of a sample from the residual distribution (the original distribution conditioned on avoiding the previously selected indices), for an arbitrarily large constant $c$.
Let $\mathcal{E}$ be the event that we sample $k$ distinct indices, i.e., do not output Fail in our overall algorithm. We show below that $\Pr[\mathcal{E}] \ge 1 - n^{-c}$ for an arbitrarily large constant $c$. Consequently, our output distribution has variation distance $n^{-c}$ from that of an idealized algorithm that samples until it has $k$ distinct values.
Consider the probability of outputting a particular ordered tuple of $k$ distinct indices in the idealized algorithm that samples until it has $k$ distinct values. By the above, this probability is within a polynomially small relative error of the corresponding successive-without-replacement probability. Summing over all orderings, we obtain that the probability of obtaining the sample is within a $(1 \pm n^{-c'})$ factor of its probability of being sampled in a size-$k$ sample without replacement, where $c'$ is a sufficiently large constant.
It remains to show $\Pr[\mathcal{E}] \ge 1 - n^{-c}$ for an arbitrarily large constant $c$. Here we use that the total probability mass of the already-sampled items is bounded away from $1$. Let $T_i$ be the number of trials until (and including the time) we sample the $i$-th distinct item, given that we have just sampled $i-1$ distinct items. Since the probability of re-sampling an already-sampled item is at most a constant less than $1$, $T_i$ is stochastically dominated by a geometric random variable with constant success probability. Thus, the total number of trials in the algorithm is stochastically dominated by a sum of $k$ independent such geometric random variables. This sum is a negative binomial random variable, and by standard tail bounds relating a sum of independent geometric random variables to binomial random variables [62] (see also, e.g., https://math.stackexchange.com/questions/1565559/tail-bound-for-sum-of-geometric-random-variables), it is $O(k + \log n)$ with probability $1 - n^{-c}$ for an arbitrarily large constant $c$.
This completes the proof. ∎
We now analyze the memory in Theorem F.1 more precisely. Algorithm 1 runs independent perfect $\ell_p$-sampling algorithms of [50]. The number of samplers is chosen to ensure that the variation distance is at most $n^{-c}$; with fewer samplers, the same argument as in the proof of Theorem F.1 gives a correspondingly larger (but still polynomially small) variation distance. Each $\ell_p$-sampler of [50] uses a polylogarithmic number of bits of memory, with a larger polylogarithmic factor for $p = 2$ than for $p < 2$. We also do not need to repeat each sampler many times to create a high probability of not outputting Fail; indeed, if each sampler avoids Fail with only constant probability, we will still obtain $k$ distinct samples with small failure probability provided we have a large enough number of such samplers.
Algorithm 1 also runs an rHH method, which uses a comparable amount of memory [52]. Consequently, to achieve polynomially small variation distance, Algorithm 1 uses memory that grows linearly with $k$ up to polylogarithmic factors in $n$, for both $p < 2$ and $p = 2$.
We now show that the memory used by Algorithm 1 is best possible for any algorithm, up to small multiplicative factors (with the factor depending on whether $p < 2$ or $p = 2$). Further, our lower bound holds even for any algorithm with the much weaker requirement of achieving a small constant variation distance, as opposed to the polynomially small variation distance that we achieve.
Theorem F.2.
Any $1$-pass turnstile streaming algorithm which outputs a set $S$ of size $k$ such that the distribution of $S$ has variation distance at most a small constant from the distribution of a size-$k$ sample without replacement from the distribution that selects $x$ with probability proportional to $|f_x|^p$ requires memory matching that of Algorithm 1 up to a small multiplicative factor, provided $k \le n^{1-c}$ for a certain absolute constant $c > 0$.
Proof.
We use the fact that such a sample can be used to find a constant fraction of the heavy hitters in a data stream. Here we do not need to achieve residual error for our lower bound, and can instead define the relevant indices to be those $x$ with $|f_x|^p \ge \frac{1}{k}\|f\|_p^p$. Notice that there are at most $k$ such indices, and any sample (with or without replacement) with variation distance at most a small constant from a true sample has at least constant probability of containing any particular such index. By repeating our algorithm a constant number of times, we obtain a superset of size $O(k)$ which contains any particular such index with arbitrarily large constant probability, and these repetitions only increase our memory by a constant factor.
It is also well-known that there exists a point-query data structure, in particular the CountSketch data structure [15, 67], which only needs $O(\log k)$ rows (rather than $O(\log n)$), such that given all the indices in a set $S$ of size $O(k)$, one can return all items of $S$ whose frequency is above the heaviness threshold and no items whose frequency is well below it. Here we avoid the need for $O(\log n)$ rows since we only need to union bound over correct estimates for the indices in the set $S$.
In short, the above procedure allows us, with arbitrarily large constant probability, to return a set containing a constant fraction of the heavy indices and containing no index that is far from heavy.
We now use an existing memory lower bound, stated for finding all the heavy hitters [52], to obtain a lower bound for the above task. This is in fact immediate from the proof of Theorem 9 of [52], which is a reduction from the one-way communication complexity of the Augmented Indexing problem and only requires any particular heavy hitter index to be output with constant probability. In particular, our algorithm, combined with the side point-query data structure of [15] described above, achieves this.
Consequently, the memory required of any $1$-pass streaming algorithm for the sampling problem is at least the lower bound of [52] minus the memory of the side data structure, which gives us the claimed lower bound provided $k \le n^{1-c}$ for an absolute constant $c$, completing the proof. ∎
Appendix G Estimates of one-pass WORp
We first review the setup. Our one-pass WORp method returns the top-$k$ keys by estimated transformed frequency $|\hat{f}'_x|$ as our sample and returns the $(k+1)$-st largest estimate $\hat{\tau}$ as the threshold. The estimate of $L(f_x)$ is $0$ for keys that are not sampled, and for a sampled key it is given by (17), restated as (31).
We assume that $L$ is such that small relative perturbations of its argument yield correspondingly small relative perturbations of its value (32).
We need to establish the bias and mean squared error bounds of Theorem 5.1, where the comparison estimates are those obtained with a (perfect) $p$-ppswor sample.
Proof of Theorem 5.1.
From (10), the rHH sketch has the property that for all keys in the dataset the estimate of the transformed frequency has absolute error at most a small fraction of the $k$-th largest transformed magnitude (33).
For sampled keys the transformed magnitude is at least this large, and hence the relative error of their transformed-frequency estimates is small. Using (6), we obtain that the same relative error bound holds for the estimated input frequencies.
From our assumption (32), the relative error carries over to $L$ applied to the estimated frequencies.
We consider the inclusion probability and frequency estimate of a particular key $x$, conditioned on fixed randomization of all other keys. The key is included in the sample if its estimated transformed frequency exceeds the threshold determined by the other keys. We consider this event as a function of $u_x$. The estimated transformed frequency has the form of the exact transformed frequency plus an error term that is small relative to the threshold. The conditioned inclusion probability thus falls in a narrow range around the corresponding perfect $p$-ppswor inclusion probability.
We estimate the conditioned inclusion probability by plugging the estimated frequency and threshold into the ppswor formula (34). This estimate has a small relative error, due to the small relative errors of the estimated frequency and of the threshold.
We first consider the bias. Instead of using the unbiased inverse-probability estimate when $x$ is sampled, our estimator (17) approximates both the numerator and the denominator. In the numerator, we replace $L(f_x)$ by its small-relative-error approximation $L(\hat{f}_x)$. Therefore, overall, we use a small-relative-error estimate of the actual inverse-probability estimate whenever it is nonzero, which translates to a correspondingly small relative bias.
We next bound the mean squared error (MSE) of the estimator (17). We express the variance contribution of an exact $p$-ppswor estimate conditioned on the same randomization of all other keys, and compare it with the MSE contribution (35) of our estimator. We observe that the approximate threshold (which determines the inclusion probability) approximates the perfect $p$-ppswor threshold with small relative error, so (35) approximates the exact variance contribution with small relative error. When the inclusion probability is close to $1$, the contribution is dominated by the (small) squared bias.
∎
We remark that our analysis of the error only assumed the rHH error bound (33), which holds for all sketch types, including counters. The bias analysis can be tightened for CountSketch, which returns unbiased estimates of the frequency.