CONQ: CONtinuous Quantile Treatment Effects for Large-Scale Online Controlled Experiments
Abstract.
In many industry settings, online controlled experimentation (A/B testing) has been broadly adopted as the gold standard to measure product or feature impacts. Most research has focused on user engagement type metrics, specifically measuring treatment effects at the mean (average treatment effects, ATE), and only a few have focused on performance metrics (e.g. latency), where treatment effects are measured at quantiles. Measuring quantile treatment effects (QTE) is challenging due to myriad difficulties such as dependency introduced by clustered samples, scalability issues, density bandwidth choices, etc. In addition, previous literature has mainly focused on QTE at some pre-defined locations, such as P50 or P90, which doesn't always convey the full picture. In this paper, we propose a novel scalable non-parametric solution, which can provide a continuous range of QTE with point-wise confidence intervals while circumventing density estimation altogether. Numerical results show high consistency with traditional methods utilizing asymptotic normality. An end-to-end pipeline has been implemented at Snap Inc., providing daily insights on key performance metrics at a distributional level.
1. Introduction
Large-scale online experiments (George et al., 2005; Gerber and Green, 2012) are routinely conducted at various technology companies (Google, Netflix, Microsoft, LinkedIn, etc.) (Xie and Aurisset, 2016; Kohavi et al., 2009; Tang et al., 2010; Bakshy et al., 2014; Xu et al., 2015) to help gauge quantifiable impacts of potential feature and product launches. At Snap Inc., hundreds of experiments are conducted on a daily basis, aiming to improve user engagement or app performance. To accurately measure these changes, thousands of metrics are further pumped through our in-house A/B platform, where routine statistical tests are conducted to judge whether the treatment effects are indeed statistically significant. These metrics can be roughly divided into two types: engagement vs. performance. For engagement-type metrics, typically two-sample $t$-tests are utilized to determine whether the average treatment effects (ATE) are truly nonzero (Imbens and Rubin, 2015). However, for performance metrics (e.g., camera transcoding latency, page load latency, etc.), evaluating treatment effects at the mean is typically ill-advised (Liu et al., 2019), and the industry standard is to measure QTE (quantile treatment effects), most notably at P50 and P90, where P50 serves as a summary statistic for overall performance and P90 for tail-area performance.
However, it is not straightforward to obtain such QTE with valid $p$-values and confidence intervals, as events can no longer be considered independent (each randomization unit, typically a user, can contribute multiple events, the count of which is itself another random variable measuring engagement); therefore delta methods and user-level correlation calculations are required (Liu et al., 2019). In addition, while the method proposed in (Liu et al., 2019) is statistically valid, it lacks a principled and data-driven way of choosing the bandwidth for its kernel density estimation, which affects the result greatly since the square of the density appears in the denominator of the variance term. Furthermore, QTE at the median and the 90th percentile are sometimes not enough to give experimenters the whole picture, especially when the significance or directions do not agree (example in Figure 1), or when heterogeneous treatment effects (HTE) are present for different devices with high and low overall performance. This motivates us to come up with a robust, statistically valid, and scalable method which can provide continuous QTE with point-wise $p$-values and confidence intervals.

In this paper, we propose a novel method named CONQ (CONtinuous Quantile treatment effects), which utilizes the Bag of Little Bootstraps (BLB) (Kleiner et al., 2014) and the Bahadur representation (Wu et al., 2005) to construct point-wise asymptotically valid confidence intervals and $p$-values, consistent with the normal approximation method in (Liu et al., 2019). To further improve Monte Carlo efficiency, a balanced bootstrap is implemented for variance reduction. We also show the added benefit of log-transforming the original metric when calculating percentage-change QTE, due to quantiles' invariance to monotone transformations. Extensive evaluation on Snapchat data shows high consistency of our method with the method in (Liu et al., 2019) at P50 and P90, and stable performance across a range of quantile locations benchmarked by known A/A tests. A daily pipeline has been implemented using Spark (Zaharia et al., 2010) and Airflow, generating continuous QTE on core performance metrics for all A/B tests at Snap Inc.
The key contributions of our paper include:
- A scalable and theoretically sound method that can provide QTE with percentage-change $p$-values and confidence intervals at an arbitrary range of quantile locations simultaneously.
- A construction that circumvents the issue of density estimation and bandwidth choice altogether, and improves the accuracy of the delta method via log-transformation for percentage changes.
- Validation of the method on real experiments at Snap Inc., showing consistency with existing P50 and P90 results and stable performance across various quantile locations.
In Section 2, we describe the background and literature on QTE in the A/B setting and statistical inference for quantiles. In Section 3, we detail the motivation and theoretical justification of the CONQ procedure and its implementation. In Section 4, we evaluate CONQ using a large set of A/B experiments at Snap Inc., showing its comparable performance with the delta-method based approach in (Liu et al., 2019; Deng et al., 2018), and demonstrating its stable performance using a set of known A/A tests as a validation corpus. In Section 5, we discuss possible extensions and summarize the paper.
2. Background and Related Work
2.1. Quantile Treatment Effects
Although in many tech firms the vast majority of metrics measure user engagement activities, another important category of metrics aims to gauge app or web performance, i.e., performance metrics. For such performance metrics, not only do we care about the average user's experience, we are also interested in the cohort of users suffering from the worst app or web performance. Bearing these concerns in mind, it is important to evaluate these metrics at multiple quantile locations in A/B tests, especially in the tail area. At Snap Inc., P50 and P90 are the typical objectives various teams optimize for in A/B experiments. (Liu et al., 2019) proposed a statistically valid and scalable method that can calculate QTE with $p$-values and confidence intervals by appealing to the quantile estimator's asymptotic normality (Cramér, 1999); however, the bandwidth choice mentioned there is rather ad hoc and lacks theoretical guarantees. (Deng et al., 2018) introduced an approach that appeals to the delta method and the outer CI, circumventing density estimation, though it still involves user-level correlation calculations for the variance term. Furthermore, these approaches still focus on treatment effects at a number of pre-determined quantile locations (e.g., P50 or P90), and generalizations to a continuous range of quantile locations cannot be easily obtained without point-wise recalculation.
QTE at various quantile locations has been receiving increased attention in the industry, as it provides a fuller picture of distributional changes between control and treatment, and better captures heterogeneity (Lux, [n.d.]). At Uber, the adopted approach is quantile regression, citing the advantage of relying on the existing abundant literature (Koenker and Hallock, 2001; Hunter and Lange, 2000). However, that methodology still requires an optimization algorithm, which is not suitable at scale for the myriad performance metrics and experiments at Snap Inc.
2.2. Statistical Inference for Quantiles
In the Statistics literature, the relationship between quantiles and the CDF (cumulative distribution function) has been investigated extensively. Most notably, for $X_1, \dots, X_n$ that are i.i.d. random variables from an unknown non-parametric CDF $F$ with density $f$, for $q \in (0, 1)$, denote by $\xi_q = F^{-1}(q)$ the $q$-th quantile of $F$ and by $\hat{\xi}_q$ the $q$-th sample quantile of $X_1, \dots, X_n$; (Bahadur, 1966) coined the famed Bahadur representation:

$$\hat{\xi}_q = \xi_q + \frac{q - \hat{F}_n(\xi_q)}{f(\xi_q)} + R_n, \qquad R_n = o\left(n^{-1/2}\right) \text{ a.s.},$$

where $\hat{F}_n$ denotes the empirical CDF.
Put simply, it states that an asymptotic relationship between the sample quantile and the empirical CDF can be established through the density function $f$. Refinements of Bahadur's result in the i.i.d. setting were further proposed by Kiefer (Kiefer, 1967, 1970a, 1970b). (Wu et al., 2005) further extended such representations to a wide range of dependent cases. (Woodruff, 1952) proposed the Woodruff confidence interval, which essentially establishes a one-to-one relationship between point-wise confidence intervals of the CDF and of quantiles.
Another line of research focuses on the $\ell_\infty$-norm deviation of the empirical CDF from the truth (a uniform bound), instead of deviations at a specific quantile location (point-wise). The celebrated Dvoretzky-Kiefer-Wolfowitz-Massart (DKWM) inequality (Dvoretzky et al., 1956; Massart, 1990) provides a tight bound on the CDF for i.i.d. samples:

$$P\left(\sup_{x \in \mathbb{R}} \left|\hat{F}_n(x) - F(x)\right| > \epsilon\right) \le 2 e^{-2 n \epsilon^2},$$

which, by its expression, is indifferent to quantile locations. (Howard and Ramdas, 2019) further proposed two methods for point-wise quantile confidence intervals and two methods for $\ell_\infty$-norm type confidence bounds, which are proven to be adaptive in a sequential setting, resonating with the always-valid $p$-values line of research (Johari et al., 2015, 2017). However, from a QTE perspective, we are still most interested in point-wise statistical inference, and the methods of (Howard and Ramdas, 2019) require either parameter tuning or numeric root finding, and generalization to the dependent case is non-trivial.
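To make the distribution-free nature of the DKWM band concrete, here is a minimal Python sketch (our illustration, not from the original paper): the band half-width depends only on the sample size $n$ and level $\alpha$, never on where the quantiles fall.

```python
import numpy as np

def dkwm_band(sample, alpha=0.05):
    """Distribution-free (1 - alpha) uniform confidence band for the CDF of
    i.i.d. samples via the DKWM inequality with Massart's tight constant."""
    n = len(sample)
    # Solve 2 * exp(-2 * n * eps^2) = alpha for the band half-width eps.
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))
    xs = np.sort(sample)
    ecdf = np.arange(1, n + 1) / n            # empirical CDF at sorted points
    return xs, np.clip(ecdf - eps, 0, 1), np.clip(ecdf + eps, 0, 1)

xs, lo, hi = dkwm_band(np.random.exponential(size=10_000))
```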
In this paper, we focus on the fixed end-of-horizon setting, which is the common case at Snap Inc., instead of the sequential test setting considered by (Howard and Ramdas, 2019). We propose an easy-to-interpret and easy-to-implement data-driven method for percentage-change treatment effects over a continuous range of quantile levels.
3. Continuous Point-wise Statistical Inference on QTE
3.1. Sample Quantiles and Bahadur Representation
At Snap Inc., hundreds of metrics are evaluated simultaneously in A/B experiments, aiming to gauge not only user telemetry data but also app performance. As a social network company with a camera focus, myriad facets of in-app performance are direct optimization goals for many teams. There is almost always a dual performance metric for an engagement metric: for example, as we measure the number of app opens, we also measure app open latency.

This brings about another layer of complication in A/B tests. As a typical practice, randomization units are users; for engagement-type metrics, each user contributes exactly one data-point, and all data-points can be considered i.i.d. observations, hence statistical inference can be by and large accomplished by appealing to the Central Limit Theorem. However, the dual performance metrics are not as straightforward: each user contributes multiple data-points (exactly the engagement-count number of data-points), so we have a clustered set of observations which are no longer independent. Using the same notation as in Section 2.2, for i.i.d. samples $X_1, \dots, X_n$, the sample $q$-th quantile is asymptotically normal:

$$\sqrt{n}\left(\hat{\xi}_q - \xi_q\right) \Rightarrow N\left(0, \frac{q(1-q)}{f(\xi_q)^2}\right).$$
With performance metrics, we typically need the delta method to approximate the numerator of the variance term (Deng et al., 2018; Liu et al., 2019). However, this is only for a fixed quantile location $q$, and it is hard to generalize to other locations without recalculation. Furthermore, kernel density estimation is typically required for $f(\xi_q)$, for which the bandwidth is hard to choose. Although some ad-hoc rules can work well in practice (Liu et al., 2019), small deviations in the bandwidth can affect the result greatly.
To demonstrate this issue, here is an example on a typical performance metric measuring the latency of starting the Discover section inside the Snapchat app. The study reported an improvement in this latency at P50 using the method in (Liu et al., 2019) with its recommended bandwidth choice. However, if we vary the bandwidth over the choices below (with the normal reference rule (Silverman, 1986) chosen as $0.9 \min\left(\hat{\sigma}, \mathrm{IQR}/1.34\right) n^{-1/5}$, where IQR is the inter-quartile range, $\hat{\sigma}$ is the sample standard deviation and $n$ is the sample size), we get the drastically different $p$-values shown in Table 1.
Table 1. $p$-values for the P50 QTE under varying kernel density bandwidth choices.

| kernel density estimator's bandwidth | $p$-value |
| --- | --- |
| 0.002 | 1.08e-34 |
| 0.01 | 0.0125 |
| 0.02 | 0.049 |
| normal reference rule | 0.352 |
Using 0.05 as a threshold, we can see that the $p$-values vary from highly significant to not significant at all as the bandwidth varies. Yet we lack a robust, data-driven approach to choosing such a bandwidth that can guarantee good estimation quality for inference across all quantiles.
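To see mechanically how the bandwidth enters the test, consider the following illustrative Python sketch (synthetic data, not Snap's; a standard Gaussian-kernel estimator and $z$-test, not the paper's exact implementation, and assuming i.i.d. samples rather than clustered ones):

```python
import numpy as np
from scipy.stats import norm

def p50_pvalue(ctrl, trt, h):
    """Two-sided p-value for the P50 difference, using asymptotic normality of
    the sample median with a Gaussian-kernel density estimate at bandwidth h."""
    q = 0.5
    est, var = [], []
    for x in (ctrl, trt):
        n, xi = len(x), np.quantile(x, q)
        f_hat = norm.pdf((xi - x) / h).mean() / h   # KDE of the density at the median
        est.append(xi)
        var.append(q * (1 - q) / (n * f_hat ** 2))  # asymptotic variance of the quantile
    z = (est[1] - est[0]) / np.sqrt(sum(var))
    return 2 * (1 - norm.cdf(abs(z)))

rng = np.random.default_rng(0)
ctrl = rng.lognormal(0.0, 1.0, 100_000)
trt = rng.lognormal(-0.01, 1.0, 100_000)
for h in (0.002, 0.01, 0.02, 0.1):
    print(h, p50_pvalue(ctrl, trt, h))              # p-value moves with h
```

Since $f(\xi_q)^2$ sits in the denominator of the variance, even modest changes in $h$ propagate directly into the test statistic, reproducing the sensitivity seen in Table 1.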
As noted in Section 2.2, the famed Bahadur representation establishes the asymptotic relationship between sample quantiles and the empirical CDF, and holds in various dependent cases as well (Wu et al., 2005).
Such a Bahadur representation has a nice geometric interpretation, as illustrated in Figure 2. In the neighborhood of the point $(\xi_q, q)$, to convert a confidence interval for the empirical CDF into one for the quantile, we can simply divide by the slope of the tangent to the curve $F$, which is $f(\xi_q)$. Therefore, for i.i.d. samples, we have

$$\sqrt{n}\left(\hat{F}_n(\xi_q) - q\right) \Rightarrow N\left(0,\; q(1-q)\right),$$

which no longer involves the density estimation. For the clustered sample's case, simply replacing the variance term $q(1-q)$ with a cluster-adjusted variance $\sigma_q^2$ would suffice, which can be empirically estimated using the Bag of Little Bootstrap (BLB) technique in the next section. Taking one step further, we have the result that

$$\left[\hat{F}_n^{-1}\left(q - z_{1-\alpha/2}\sqrt{\frac{q(1-q)}{n}}\right),\; \hat{F}_n^{-1}\left(q + z_{1-\alpha/2}\sqrt{\frac{q(1-q)}{n}}\right)\right]$$

is an asymptotically valid $(1-\alpha)$-level confidence interval for $\xi_q$.
Woodruff (Woodruff, 1952) first proposed this interval empirically; it ingeniously circumvents density estimation altogether while still providing valid statistical inference results.
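A minimal sketch of the Woodruff construction for i.i.d. samples (illustrative; the clustered case would replace the binomial variance $q(1-q)/n$ with a bootstrapped one):

```python
import numpy as np
from scipy.stats import norm

def woodruff_ci(sample, q, alpha=0.05):
    """Woodruff (1952) CI for the q-th quantile: build a normal CI for the CDF
    at level q, then map both endpoints back through the empirical inverse
    CDF -- no density estimation required."""
    n = len(sample)
    se = np.sqrt(q * (1 - q) / n)     # i.i.d. SE of the empirical CDF at the quantile
    z = norm.ppf(1 - alpha / 2)
    lo_q = max(q - z * se, 0.0)
    hi_q = min(q + z * se, 1.0)
    return np.quantile(sample, lo_q), np.quantile(sample, hi_q)

x = np.random.default_rng(1).exponential(size=50_000)
print(woodruff_ci(x, 0.9))            # CI for P90
```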
Below we discuss how we extend this CI to a continuous setting and to the clustered sample's case, suitable for the large-scale A/B testing needs at Snap Inc.
3.2. Balanced Bag-of-Little-Bootstrap for CDF
In the previous section, we discussed the relationship between the empirical CDF and quantiles through the Bahadur representation. In this section, we discuss how the variance term can be efficiently calculated at a continuous range of quantile locations using one set of bootstrap results, and how Woodruff-type confidence intervals can then be applied.
As mentioned in (Deng et al., 2018), the 'ntiles' operation (querying a quantile from a set of observations) is expensive and should be avoided if possible. Furthermore, bearing in mind the one-to-one relationship between CIs for quantiles and CIs for the CDF, in order to simultaneously obtain quantile-level inference at all possible locations, we can first tackle the variance of the CDF, and then appeal to the Woodruff CI to transform back to quantiles. In this paper, we use the Bag of Little Bootstrap of (Kleiner et al., 2014) with the balanced method (Gleason, 1988) for resampling to further reduce Monte Carlo variance, achieving higher bootstrap efficiency.
Suppose there are $n$ users; for user $i$, there are a total of $K_i$ events, denoted as $X_{i,1}, \dots, X_{i,K_i}$, with $N = \sum_{i=1}^{n} K_i$ the total number of events. We first take the logarithm of these metrics, then preserve a pre-defined $d$ digits of precision (we chose 2): $Y_{i,j} = \mathrm{round}\left(\log X_{i,j},\, d\right)$. Note this step is to reduce the total number of unique log-scaled values for efficiency. Furthermore, due to quantiles' invariance to monotone transformations (Berger, 2013), we have

$$\xi_q\left(\log X\right) = \log \xi_q\left(X\right).$$
This ensures we can get back the QTE on the original metric by simply taking the exponent. Furthermore, by the delta method, we have

$$\mathrm{Var}\left(\frac{\hat{\xi}_q^{(t)}}{\hat{\xi}_q^{(c)}}\right) \approx \left(\frac{\hat{\xi}_q^{(t)}}{\hat{\xi}_q^{(c)}}\right)^{2} \left[\mathrm{Var}\left(\log \hat{\xi}_q^{(t)}\right) + \mathrm{Var}\left(\log \hat{\xi}_q^{(c)}\right)\right],$$

where the variance terms in the last expression can be bootstrapped on the log-transformed data separately for control and treatment. Applying the delta method this way is empirically more accurate than applying it on the original-scale metric, especially when the percentage change is large or when the sample size is small.
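For concreteness, a hedged sketch of this log-scale computation (variable names are ours, not the paper's; the log-scale quantile estimates and SEs are assumed to come from the bootstrap described below):

```python
import numpy as np
from scipy.stats import norm

def pct_change_ci(log_q_t, se_t, log_q_c, se_c, alpha=0.05):
    """Percentage-change QTE with CI and p-value, given log-scale quantile
    estimates and their standard errors for treatment (t) and control (c).
    Delta method: Var(exp(d) - 1) ~= exp(2d) * Var(d), d = log_q_t - log_q_c."""
    d = log_q_t - log_q_c
    se_d = np.sqrt(se_t ** 2 + se_c ** 2)   # the two groups are independent
    pct = np.exp(d) - 1
    se_pct = np.exp(d) * se_d
    z = norm.ppf(1 - alpha / 2)
    p_value = 2 * (1 - norm.cdf(abs(d) / se_d))
    return pct, (pct - z * se_pct, pct + z * se_pct), p_value
```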
Then we split the users, together with their corresponding events, into $s$ subsets ($s = 100$ for Snap) for control and treatment separately; all events for any given user are preserved in the same subset. For each subset $j$, we generate a data sketch over the unique rounded log-scaled metric values (denoted by $y_1 < \cdots < y_m$, where $m$ is the total number of unique values in the original dataset) with their corresponding counts $c_{k,j}$:

| | bucket 1 | bucket 2 | $\cdots$ | bucket $s$ |
| --- | --- | --- | --- | --- |
| count of $y_k$ | $c_{k,1}$ | $c_{k,2}$ | $\cdots$ | $c_{k,s}$ |
Let $B$ be the number of bootstraps. To achieve balanced bootstrapping, i.e. where all buckets appear exactly $B$ times across all samples, we randomly permute the long vector $(1, \dots, s, \dots, 1, \dots, s)$ of length $sB$ and then split it into $B$ vectors of length $s$, treating each vector as one bootstrap sample on the $s$ buckets. For each bootstrap sample vector $b$, we can get a counter of how many times each bucket index appears, and weight the corresponding counts $c_{k,j}$ by how many times the $j$-th bucket appears in $b$. For each bootstrap sample, we can then evaluate the empirical CDF at all unique log-scaled values $y_k$. Using all $B$ bootstrap samples, we can get an estimate of the standard deviation of the empirical CDF at every $y_k$ (denoted $\widehat{\mathrm{se}}(\hat{F}(y_k))$), which approximates the standard error of $\hat{F}$ at $y_k$:

$$\widehat{\mathrm{se}}\left(\hat{F}(y_k)\right) = \sqrt{\frac{1}{B-1} \sum_{b=1}^{B} \left(\hat{F}^{(b)}(y_k) - \frac{1}{B} \sum_{b'=1}^{B} \hat{F}^{(b')}(y_k)\right)^{2}}.$$
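To make the balanced resampling concrete, here is a minimal NumPy sketch (our illustration, not production code; `counts[k, j]` holds the count of the $k$-th unique log-scaled value in user bucket $j$, following the notation above):

```python
import numpy as np

def blb_cdf_se(counts, B=200, seed=0):
    """Balanced Bag-of-Little-Bootstrap SE of the empirical CDF, evaluated at
    every unique (sorted) log-scaled value. counts: (m, s) array of per-bucket
    counts. Each bucket index appears exactly B times across all B samples."""
    m, s = counts.shape
    rng = np.random.default_rng(seed)
    # Balanced resampling: permute s*B bucket indices, split into B samples of size s.
    idx = rng.permutation(np.repeat(np.arange(s), B)).reshape(B, s)
    cdfs = np.empty((B, m))
    for b in range(B):
        w = np.bincount(idx[b], minlength=s)   # times each bucket is drawn
        weighted = counts @ w                  # re-weighted counts per unique value
        cdfs[b] = np.cumsum(weighted) / weighted.sum()   # ECDF via weighted cumsum
    return cdfs.mean(axis=0), cdfs.std(axis=0, ddof=1)   # mean ECDF, bootstrap SE
```

Note that each bootstrap sample only re-weights the pre-aggregated sketch; no raw events are touched, which is what makes the procedure cheap at scale.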
Bootstrapping on the empirical CDF avoids the 'ntiles' operations, and enables us to use one set of bootstrap samples to obtain all QTE of potential interest by interpolating the estimated variance under a general continuity assumption. A typical example of the interpolated bootstrapped standard error across all quantiles is shown in Figure 3.

Since the original Woodruff method does not always produce a symmetric CI, empirically, we propose the following conservative SE estimate for each unique log-scaled metric value $y_k$, for control and treatment separately:

$$U_k = \hat{F}^{-1}\left(\hat{F}(y_k) + z_{1-\alpha/2}\, \widehat{\mathrm{se}}\left(\hat{F}(y_k)\right)\right) - y_k, \quad (1)$$

$$L_k = y_k - \hat{F}^{-1}\left(\hat{F}(y_k) - z_{1-\alpha/2}\, \widehat{\mathrm{se}}\left(\hat{F}(y_k)\right)\right), \quad (2)$$

$$\widehat{\mathrm{se}}(y_k) = \frac{\max\left(U_k, L_k\right)}{z_{1-\alpha/2}}. \quad (3)$$
Algorithm 1 (CONQ) uses the following notation and inputs; superscripts $(c)$ and $(t)$ denote control and treatment respectively:

- Users and events: $n^{(c)}$ users and $N^{(c)}$ events for control; $n^{(t)}$ users and $N^{(t)}$ events for treatment.
- Digits of precision for log-scaling: $d$; 2 at Snap.
- Unique log-scaled observations: $y^{(c)}_1 < \cdots < y^{(c)}_{m^{(c)}}$; $y^{(t)}_1 < \cdots < y^{(t)}_{m^{(t)}}$. Further let $c^{(c)}_k$; $c^{(t)}_k$ be their corresponding counts in the dataset.
- Number of user buckets: $s$; 100 at Snap.
- Number of bootstrap iterations: $B$; 200 at Snap.
- Quantile grid locations for QTE evaluation: $Q = \{q_1, \dots, q_G\}$; e.g., P20 to P99 with a fixed step size chosen for Snap.
- Confidence level: $1 - \alpha$.

Outputs:

- QTE $\hat{\Delta}_q$ and $p$-value evaluated at the grid $Q$.
- Confidence interval at level $1 - \alpha$ evaluated at the grid $Q$.
Steps (the per-group steps are run for control and treatment separately; the final QTE steps combine the two):

- Evaluate the overall empirical CDF $\hat{F}$ at every unique log-scaled value $y_k$.
- Split users into $s$ buckets of equal size; for all users in bucket $j$, put all of their events in bucket $j$, and obtain the corresponding log-scaled event counts $c_{k,j}$.
- Randomly permute the long vector $(1, \dots, s, \dots, 1, \dots, s)$ of length $sB$ and split it into $B$ small vectors of equal length $s$.
  - For each bootstrap sample $b = 1, \dots, B$, define $w^{(b)}_j$ as the number of times bucket $j$ appears in the $b$-th vector;
  - evaluate $\hat{F}^{(b)}$ at every $y_k$:
    $$\hat{F}^{(b)}(y_k) = \frac{\sum_{j=1}^{s} w^{(b)}_j \sum_{k' \le k} c_{k',j}}{\sum_{j=1}^{s} w^{(b)}_j \sum_{k'=1}^{m} c_{k',j}}.$$
- For each $y_k$, estimate
  $$\widehat{\mathrm{se}}\left(\hat{F}(y_k)\right) = \sqrt{\frac{1}{B-1} \sum_{b=1}^{B} \left(\hat{F}^{(b)}(y_k) - \frac{1}{B} \sum_{b'=1}^{B} \hat{F}^{(b')}(y_k)\right)^{2}}.$$
- For each $y_k$, compute $U_k$ and $L_k$ as in (1) and (2), then let $\widehat{\mathrm{se}}(y_k) = \max(U_k, L_k) / z_{1-\alpha/2}$ as in (3).
- Use linear interpolation on $\{(\hat{F}(y_k),\, y_k)\}$ and $\{(\hat{F}(y_k),\, \widehat{\mathrm{se}}(y_k))\}$, and evaluate on the grid $Q$ to obtain the log-scaled quantiles $\hat{\xi}^{\log}_q$ and their standard errors $\widehat{\mathrm{se}}_q$ for each $q \in Q$.
- QTE (percentage change) at the grid: $\hat{\Delta}_q = \exp\left(\hat{\xi}^{(t), \log}_q - \hat{\xi}^{(c), \log}_q\right) - 1$.
- SE for the QTE at the grid (by the delta method): $\widehat{\mathrm{se}}\left(\hat{\Delta}_q\right) = \left(1 + \hat{\Delta}_q\right) \sqrt{\left(\widehat{\mathrm{se}}^{(t)}_q\right)^2 + \left(\widehat{\mathrm{se}}^{(c)}_q\right)^2}$.
- $p$-value at the grid: $p_q = 2\left(1 - \Phi\left(\left|\hat{\xi}^{(t), \log}_q - \hat{\xi}^{(c), \log}_q\right| \Big/ \sqrt{\left(\widehat{\mathrm{se}}^{(t)}_q\right)^2 + \left(\widehat{\mathrm{se}}^{(c)}_q\right)^2}\right)\right)$.
- Confidence interval at level $1 - \alpha$ at the grid: $\hat{\Delta}_q \pm z_{1-\alpha/2}\, \widehat{\mathrm{se}}\left(\hat{\Delta}_q\right)$.
Furthermore, using these results joined with the original observations' empirical CDF, we can linearly interpolate the results onto a fixed grid of quantiles (ensuring common quantile locations between control and treatment for QTE evaluation on such a grid). At Snap, we chose the range P20 to P99. The same example referenced in Figure 3 has its interpolated standard error shown in Figure 4.

Once we have the common quantile locations and the estimated log-scaled metric SEs for control and treatment, we have all the ingredients for traditional point-wise $z$-test based statistical inference, including percentage-change $p$-values, confidence intervals, etc.
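Concretely, the grid interpolation can be sketched with `np.interp` (illustrative only; `F_hat`, `y`, and `se_y` are the per-group outputs of the bootstrap step, and `pct_change_ci` refers to the earlier hedged sketch):

```python
import numpy as np

def interp_to_grid(grid, F_hat, y, se_y):
    """Linearly interpolate log-scale quantiles and their SEs from per-value
    results (F_hat(y_k), y_k, se(y_k)) onto a common quantile grid, so control
    and treatment can be compared at identical quantile locations."""
    return np.interp(grid, F_hat, y), np.interp(grid, F_hat, se_y)

grid = np.arange(0.20, 0.995, 0.005)   # e.g., P20 to P99
# log_q_c, se_c = interp_to_grid(grid, F_c, y_c, se_c_raw)   # control
# log_q_t, se_t = interp_to_grid(grid, F_t, y_t, se_t_raw)   # treatment
# pct, ci, p = pct_change_ci(log_q_t, se_t, log_q_c, se_c)   # point-wise QTE
```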
The following section provides a detailed summary of the aforementioned steps, constituting the algorithm named CONQ (Continuous QTE for large-scale experiments).
3.3. CONQ Algorithm and Implementations
Here we summarize the end-to-end CONQ algorithm in Algorithm 1; adopting the earlier notation, superscripts $(c)$ and $(t)$ denote data associated with the control and treatment group respectively, $N$ is the total number of events, and $n$ is the total number of users. As can be seen, it only requires a few hyper-parameters: $d$ for log-scaling precision, $s$ for the number of buckets, $B$ for the number of balanced bootstrap iterations, a pre-determined grid $Q$ of quantile locations for evaluation, and the confidence level $1 - \alpha$. The Bag of Little Bootstrap part can simply be construed as re-weighting observations in buckets based on samples of bucket indices. This greatly reduces the computational cost in comparison to the method in (Deng et al., 2018), as the number of unique observations is reduced from $N$ to the number of unique log-scaled values $m$, and $m$ can be further adjusted via the log-scaling precision $d$.

One key step that enables a continuous range of QTE evaluations is evaluating $\hat{F}^{(b)}$ at all unique values $y_k$ for each bootstrap sample $b$. This is straightforward after a simple sort on the $y_k$ and a weighted cumulative sum on the re-weighted counts. This avoids the 'ntiles' operation and improves efficiency.
We implemented the CONQ algorithm in Snap's A/B experimentation pipeline; the flowchart in Figure 5 illustrates the overall workflow, orchestrated by Spark (Zaharia et al., 2016) and Airflow.
Recall our earlier example in Figure 1, where we saw significant but directionally different results for the P50 and P90 QTE. Using CONQ (Figure 6), we can see that the regression mainly happens from P20 to P80, i.e., lower quantiles, while improvements mainly happen after P90, indicating the presence of heterogeneous treatment effects, especially across devices with varying performance. Breaking the results down further by device performance clusters (2 to 7, with lower numbers indicating devices with generally poorer performance and higher numbers indicating higher performance) in Figure 7, the results corroborate our speculation that regressions mainly occur for high-end devices, whereas improvements occur for low-end devices.


4. Evaluation
In this section, we evaluate CONQ using real A/B experiments measuring various performance-type metrics at Snap. Specifically, we aim to address the following questions:
- (1) Does CONQ produce results consistent with the existing delta-method based approach (Liu et al., 2019; Deng et al., 2018) at commonly monitored quantile locations such as P50 and P90?
- (2) Under a known A/A test setting (null case), is CONQ robust across all quantile locations when it comes to false positives after multiple-testing adjustment, thus indicating stability?
4.1. Evaluation on Snap’s A/B Experiments
For the comparison on real experiments, we use the method in (Deng et al., 2018) (denoted DELTA) with the kernel density bandwidth chosen by the ad-hoc rule proposed in (Liu et al., 2019), and compare its $p$-value results at P50 and P90 with CONQ's. A total of 15 performance metrics on 696 treatment and control pairs are evaluated (a total of 10440 $p$-values for P50 and P90). The performance metrics included measure a wide array of latencies and crashes for the Snapchat app, which should be representative of industry A/B experiments.
We first compare the resulting $p$-values from DELTA and CONQ using a scatter-plot. As can be seen in Figure 8, the two methods align quite well and mostly adhere to a linear pattern as expected, indicating stable performance of CONQ across various settings and $p$-value regions. We then compare the proportions of significant discoveries under static $p$-value thresholds ranging from 0.0001 to 0.2; DELTA and CONQ also perform quite similarly at both P50 and P90, with CONQ yielding slightly more discoveries across all thresholds.




4.2. Evaluation on Snap’s A/A Tests
In this section, we evaluate CONQ on a subset of known A/A tests at Snap, where no QTE is expected at any quantile location. For any given quantile location, the set of $p$-values associated with the QTE across different treatment-control pairs and various performance metrics should then yield no significant discoveries after multiple-testing adjustment using the Benjamini-Hochberg (BH) procedure (Benjamini and Hochberg, 1995); a sketch of the BH step appears after Table 3. The reason for using BH here, instead of testing for uniformly distributed $p$-values, is the positive dependency among many performance metrics and the robustness of BH under such settings. We evaluate a total of 15 performance metrics across 54 treatment and control pair combinations. The percentile grid for evaluation is chosen as P20 to P95 with a 5% increment, and BH is applied conditioning on a fixed percentile, at nominal FDR levels 0.05, 0.1 and 0.2. Since we know that all scenarios considered are A/A tests, any discoveries made by BH are false discoveries. Table 3 summarizes, for each percentile, the corresponding count of false discoveries and its percentage (out of a total of 810 $p$-values per quantile). As can be seen, CONQ performs stably across quantile locations under a realistic nominal FDR threshold of either 0.05 or 0.1. With a larger threshold of 0.2, CONQ still performs robustly, with a slightly elevated number of false positives in the tail area.
Table 3. Count (and percentage) of false discoveries by BH at each percentile, under nominal FDR levels 0.05, 0.1 and 0.2 (810 $p$-values per quantile).

| percentile | 0.05 | 0.1 | 0.2 |
| --- | --- | --- | --- |
| P20 | 0 (0%) | 0 (0%) | 0 (0%) |
| P25 | 0 (0%) | 0 (0%) | 0 (0%) |
| P30 | 0 (0%) | 0 (0%) | 0 (0%) |
| P35 | 0 (0%) | 0 (0%) | 0 (0%) |
| P40 | 0 (0%) | 0 (0%) | 0 (0%) |
| P45 | 0 (0%) | 0 (0%) | 0 (0%) |
| P50 | 0 (0%) | 0 (0%) | 0 (0%) |
| P55 | 0 (0%) | 0 (0%) | 0 (0%) |
| P60 | 0 (0%) | 0 (0%) | 0 (0%) |
| P65 | 0 (0%) | 0 (0%) | 0 (0%) |
| P70 | 0 (0%) | 0 (0%) | 0 (0%) |
| P75 | 0 (0%) | 0 (0%) | 0 (0%) |
| P80 | 0 (0%) | 0 (0%) | 1 (0.12%) |
| P85 | 1 (0.12%) | 2 (0.25%) | 3 (0.37%) |
| P90 | 0 (0%) | 0 (0%) | 5 (0.62%) |
| P95 | 2 (0.25%) | 2 (0.25%) | 8 (0.99%) |
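For reference, a minimal sketch of the standard BH step-up procedure applied to one quantile location's $p$-values (the textbook procedure, not Snap's production code):

```python
import numpy as np

def benjamini_hochberg(pvals, fdr=0.05):
    """Benjamini-Hochberg step-up: reject the k smallest p-values, where k is
    the largest index with p_(k) <= (k / m) * fdr. Returns a boolean mask."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = fdr * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index passing the threshold
        reject[order[: k + 1]] = True
    return reject

# e.g., one call per percentile over the 810 A/A p-values at that location.
```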
5. Concluding Remarks and Possible Extension
In this paper, we propose a non-parametric method for point-wise statistical inference on a continuous range of QTE in A/B experiments, which is density-estimation free and scales to large industry settings. It achieves stable performance across the whole quantile range and performs similarly to the existing delta-method based approaches in (Liu et al., 2019; Deng et al., 2018).
Note that the confidence interval provided is made conservative to ensure symmetry and to enable $p$-value calculation; for tail areas, a direct application of Woodruff-type confidence intervals may have better coverage rates, as investigated in (Sitter and Wu, 2001). Furthermore, we can indirectly use the estimated SEs of the quantiles and the empirical CDF to obtain an estimate of the density, which can be used for other tasks while avoiding the kernel bandwidth choice.
We mainly focus on point-wise statistical inference for QTE; namely, for any given $q$, we produce a confidence interval $\mathrm{CI}_q$ s.t. $P\left(\xi_q \in \mathrm{CI}_q\right) \ge 1 - \alpha$. We could also be interested in a confidence band covering all quantiles simultaneously, namely $P\left(\xi_q \in \mathrm{CI}_q,\ \forall q\right) \ge 1 - \alpha$, which is in line with the DKWM inequality mentioned above. The CONQ method described in this paper can be further extended to produce $\ell_\infty$-norm type bounds as well: we can use the bootstrapped variance to get an estimate of the so-called "effective degrees of freedom", and use it to substitute for the sample size $n$ in the DKWM inequality or the inequality produced in (Howard and Ramdas, 2019). This approach is justified by central limit theory for clustered samples, specifically (Hansen and Lee, 2019). We leave this for future research.
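As a rough illustration of this extension (our sketch under stated assumptions, not a validated procedure; `dkwm_band_clustered` and its inputs are hypothetical):

```python
import numpy as np

def dkwm_band_clustered(q_grid, se_cdf, alpha=0.05):
    """Hypothetical l-infinity band half-width for clustered data: back out an
    'effective' sample size n_eff from the bootstrapped CDF standard errors
    (for i.i.d. data, Var(F_hat(xi_q)) = q(1-q)/n), take the conservative
    minimum over the grid, and substitute n_eff into the DKWM half-width."""
    n_eff = np.min(q_grid * (1 - q_grid) / se_cdf ** 2)
    return np.sqrt(np.log(2.0 / alpha) / (2.0 * n_eff))
```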
References
- Bahadur (1966) R Raj Bahadur. 1966. A note on quantiles in large samples. The Annals of Mathematical Statistics 37, 3 (1966), 577–580.
- Bakshy et al. (2014) Eytan Bakshy, Dean Eckles, and Michael S Bernstein. 2014. Designing and deploying online field experiments. In Proceedings of the 23rd international conference on World wide web. 283–292.
- Benjamini and Hochberg (1995) Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal statistical society: series B (Methodological) 57, 1 (1995), 289–300.
- Berger (2013) James O Berger. 2013. Statistical decision theory and Bayesian analysis. Springer Science & Business Media.
- Cramér (1999) Harald Cramér. 1999. Mathematical methods of statistics. Vol. 43. Princeton university press.
- Deng et al. (2018) Alex Deng, Ulf Knoblich, and Jiannan Lu. 2018. Applying the Delta method in metric analytics: A practical guide with novel ideas. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 233–242.
- Dvoretzky et al. (1956) Aryeh Dvoretzky, Jack Kiefer, and Jacob Wolfowitz. 1956. Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. The Annals of Mathematical Statistics (1956), 642–669.
- George et al. (2005) EP George, J Stuart Hunter, William Gordon Hunter, Roma Bins, Kay Kirlin IV, and Destiny Carroll. 2005. Statistics for experimenters: design, innovation, and discovery. Wiley New York.
- Gerber and Green (2012) Alan S Gerber and Donald P Green. 2012. Field experiments: Design, analysis, and interpretation. WW Norton.
- Gleason (1988) John R Gleason. 1988. Algorithms for balanced bootstrap simulations. The American Statistician 42, 4 (1988), 263–266.
- Hansen and Lee (2019) Bruce E Hansen and Seojeong Lee. 2019. Asymptotic theory for clustered samples. Journal of econometrics 210, 2 (2019), 268–290.
- Howard and Ramdas (2019) Steven R Howard and Aaditya Ramdas. 2019. Sequential estimation of quantiles with applications to A/B-testing and best-arm identification. arXiv preprint arXiv:1906.09712 (2019).
- Hunter and Lange (2000) David R Hunter and Kenneth Lange. 2000. Quantile regression via an MM algorithm. Journal of Computational and Graphical Statistics 9, 1 (2000), 60–77.
- Imbens and Rubin (2015) Guido W Imbens and Donald B Rubin. 2015. Causal inference in statistics, social, and biomedical sciences. Cambridge University Press.
- Johari et al. (2017) Ramesh Johari, Pete Koomen, Leonid Pekelis, and David Walsh. 2017. Peeking at a/b tests: Why it matters, and what to do about it. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1517–1525.
- Johari et al. (2015) Ramesh Johari, Leo Pekelis, and David J Walsh. 2015. Always valid inference: Bringing sequential analysis to A/B testing. arXiv preprint arXiv:1512.04922 (2015).
- Kiefer (1967) J Kiefer. 1967. On Bahadur’s representation of sample quantiles. The Annals of Mathematical Statistics 38, 5 (1967), 1323–1342.
- Kiefer (1970a) J Kiefer. 1970a. Deviations between the sample quantile process and the sample df. Nonparametric Techniques in Statistical Inference (1970), 299–319.
- Kiefer (1970b) J Kiefer. 1970b. Old and new methods for studying order statistics and sample quantiles. Nonparametric Techniques in Statistical Inference (1970), 349–357.
- Kleiner et al. (2014) Ariel Kleiner, Ameet Talwalkar, Purnamrita Sarkar, and Michael I Jordan. 2014. A scalable bootstrap for massive data. Journal of the Royal Statistical Society: Series B: Statistical Methodology (2014), 795–816.
- Koenker and Hallock (2001) Roger Koenker and Kevin F Hallock. 2001. Quantile regression. Journal of economic perspectives 15, 4 (2001), 143–156.
- Kohavi et al. (2009) Ronny Kohavi, Thomas Crook, and Roger Longbotham. 2009. Online experimentation at Microsoft. Data Mining Case Studies 11, 2009 (2009), 39.
- Liu et al. (2019) Min Liu, Xiaohui Sun, Maneesh Varshney, and Ya Xu. 2019. Large-Scale Online Experimentation with Quantile Metrics. arXiv preprint arXiv:1903.08762 (2019).
- Lux ([n.d.]) Matthias Lux. [n.d.]. Analyzing Experiment Outcomes: Beyond Average Treatment Effects. ([n. d.]).
- Massart (1990) Pascal Massart. 1990. The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. The annals of Probability (1990), 1269–1283.
- Silverman (1986) Bernard W Silverman. 1986. Density estimation for statistics and data analysis. Vol. 26. CRC press.
- Sitter and Wu (2001) Randy R Sitter and Changbao Wu. 2001. A note on Woodruff confidence intervals for quantiles. Statistics & probability letters 52, 4 (2001), 353–358.
- Tang et al. (2010) Diane Tang, Ashish Agarwal, Deirdre O’Brien, and Mike Meyer. 2010. Overlapping experiment infrastructure: More, better, faster experimentation. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining. 17–26.
- Woodruff (1952) Ralph S Woodruff. 1952. Confidence intervals for medians and other position measures. J. Amer. Statist. Assoc. 47, 260 (1952), 635–646.
- Wu (2005) Wei Biao Wu. 2005. On the Bahadur representation of sample quantiles for dependent sequences. The Annals of Statistics 33, 4 (2005), 1934–1963.
- Xie and Aurisset (2016) Huizhi Xie and Juliette Aurisset. 2016. Improving the sensitivity of online controlled experiments: Case studies at netflix. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 645–654.
- Xu et al. (2015) Ya Xu, Nanyu Chen, Addrian Fernandez, Omar Sinno, and Anmol Bhasin. 2015. From infrastructure to culture: A/B testing challenges in large scale social networks. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2227–2236.
- Zaharia et al. (2010) Matei Zaharia, Mosharaf Chowdhury, Michael J Franklin, Scott Shenker, and Ion Stoica. 2010. Spark: Cluster computing with working sets. In 2nd USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 10).
- Zaharia et al. (2016) Matei Zaharia, Reynold S Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman, Michael J Franklin, et al. 2016. Apache spark: a unified engine for big data processing. Commun. ACM 59, 11 (2016), 56–65.