
CONQ: CONtinuous Quantile Treatment Effects for Large-Scale Online Controlled Experiments

Weinan Wang [email protected] Snap Inc.2850 Ocean Park Blvd.Santa MonicaCaliforniaUSA  and  Xi Zhang [email protected] Snap Inc.2850 Ocean Park Blvd.Santa MonicaCaliforniaUSA
(2020)
Abstract.

In many industry settings, online controlled experimentation (A/B testing) has been broadly adopted as the gold standard for measuring product or feature impacts. Most research has focused on user-engagement metrics, measuring treatment effects at the mean (average treatment effects, ATE); only a few works address performance metrics (e.g., latency), where treatment effects are measured at quantiles. Measuring quantile treatment effects (QTE) is challenging due to difficulties such as the dependency introduced by clustered samples, scalability issues, and density bandwidth choices. In addition, previous literature has mainly considered QTE at pre-defined locations, such as P50 or P90, which does not always convey the full picture. In this paper, we propose a novel scalable non-parametric solution that provides a continuous range of QTE with point-wise confidence intervals while circumventing density estimation altogether. Numerical results show high consistency with traditional methods utilizing asymptotic normality. An end-to-end pipeline has been implemented at Snap Inc., providing daily insights on key performance metrics at a distributional level.

controlled experiments, quantile treatment effect, scalability, non-parametric estimation, Bahadur representation, continuous.

1. Introduction

Large-scale online experiments (George et al., 2005; Gerber and Green, 2012) are routinely conducted at technology companies (Google, Netflix, Microsoft, LinkedIn, etc.) (Xie and Aurisset, 2016; Kohavi et al., 2009; Tang et al., 2010; Bakshy et al., 2014; Xu et al., 2015) to gauge the quantifiable impact of potential feature and product launches. At Snap Inc., hundreds of experiments are conducted on a daily basis, aiming to improve user engagement or app performance. To accurately measure these changes, thousands of metrics are pumped through our in-house A/B platform, where routine statistical tests judge whether treatment effects are statistically significant. These metrics can be roughly divided into two types: engagement and performance. For engagement-type metrics, two-sample $t$-tests are typically used to determine whether the average treatment effects (ATE) are truly nonzero (Imbens and Rubin, 2015). For performance metrics (e.g., camera transcoding latency, page load latency), however, evaluating treatment effects at the mean is typically ill-advised (Liu et al., 2019); the industry standard is to measure QTE (quantile treatment effects), most notably at P50 and P90, where P50 serves as a summary statistic for overall performance and P90 captures tail-area performance.

However, it is not straightforward to obtain such QTE with valid $p$-values and confidence intervals, as events can no longer be considered independent (each randomization unit, typically a user, can contribute multiple events, the number of which is itself a random variable measuring engagement); therefore delta methods and user-level correlation calculations are required (Liu et al., 2019). In addition, while the method proposed in (Liu et al., 2019) is statistically valid, it lacks a principled, data-driven way of choosing the bandwidth for its kernel density estimation, which greatly affects the result since the squared density appears in the denominator of the variance term. Furthermore, QTE at the median and the 90th quantile are sometimes not enough to give experimenters the whole picture, especially when the significance or directions do not agree (see the example in Figure 1), or when heterogeneous treatment effects (HTE) are present across devices with high and low overall performance. This motivates us to develop a robust, statistically valid, and scalable method that provides continuous QTE with point-wise $p$-values and confidence intervals.

Figure 1. A sample experiment where a performance metric has statistically significant QTE at both P50 and P90, but in opposite directions.

In this paper, we propose a novel method named CONQ (CONtinuous Quantile treatment effect), which utilizes the Bag of Little Bootstrap (BLB) (Kleiner et al., 2014) and the Bahadur representation (Wu et al., 2005) to construct point-wise asymptotically valid confidence intervals and $p$-values, consistent with the normal-approximation method in (Liu et al., 2019). To further improve Monte Carlo efficiency, a balanced bootstrap is implemented for variance reduction. We also show the added benefit of log-transforming the original metric when calculating percentage-change QTE, due to quantiles' invariance to monotone transformations. Extensive evaluation on Snapchat data shows high consistency of our method with the method in (Liu et al., 2019) at P50 and P90, and stable performance across a range of quantile locations benchmarked on known A/A tests. A daily pipeline has been implemented using Spark (Zaharia et al., [n.d.]) and Airflow, generating continuous QTE on core performance metrics for all A/B tests at Snap Inc.

The key contributions of our paper include:

  • A scalable and theoretically sound method that provides QTE with percentage-change $p$-values and confidence intervals at an arbitrary range of quantile locations simultaneously.

  • Circumvention of density estimation and bandwidth choice altogether, and improved accuracy of the delta method via log-transformation for percentage changes.

  • Validation of the method on real experiments at Snap Inc., showing consistency with existing P50 and P90 results and stable performance across various quantile locations.

In Section 2, we describe the background and literature on QTE in the A/B setting and on statistical inference for quantiles. In Section 3, we detail the motivation and theoretical justification of the CONQ procedure and its implementation. In Section 4, we evaluate CONQ using a large set of A/B experiments at Snap Inc., showing performance comparable with the delta-method based approaches in (Liu et al., 2019; Deng et al., 2018) and stable performance on a set of known A/A tests used as a validation corpus. In Section 5, we discuss possible extensions and summarize the paper.

2. Background and Related Work

2.1. Quantile Treatment Effects

Although the vast majority of metrics at many tech firms measure user-engagement activity, another important category gauges app or web performance, i.e., performance metrics. For such metrics, we care not only about the average user's experience but also about the cohort of users suffering the worst app or web performance. With these concerns in mind, it is important to evaluate performance metrics at multiple quantile locations in A/B tests, especially in the tail area. At Snap Inc., P50 and P90 are the typical objectives teams optimize for in A/B experiments. (Liu et al., 2019) proposed a statistically valid and scalable method that calculates QTE with $p$-values and confidence intervals by appealing to the quantile estimator's asymptotic normality (Cramér, 1999); however, the bandwidth choice it mentions is rather ad hoc and lacks theoretical guarantees. (Deng et al., 2018) introduced an approach appealing to the delta method and the outer CI, circumventing density estimation, though it still involves user-level correlation calculations for the variance term. Furthermore, these approaches focus on treatment effects at a few pre-determined quantile locations (e.g., P50 or P90), and generalization to a continuous range of quantile locations cannot be obtained without point-wise recalculation.

QTE at various quantile locations has been receiving increased attention in industry, as it provides a fuller picture of distributional changes between control and treatment and better captures heterogeneity (Lux, [n.d.]). At Uber, the adopted approach is quantile regression, citing the advantage of an abundant existing literature (Koenker and Hallock, 2001; Hunter and Lange, 2000). However, that methodology still requires an optimization algorithm, which does not suit the myriad performance metrics and experiments at Snap Inc. at scale.

2.2. Statistical Inference for Quantiles

In the statistics literature, the relationship between quantiles and the CDF (cumulative distribution function) has been investigated extensively. Most notably, for $X_1, X_2, \cdots, X_n$ i.i.d. random variables from an unknown non-parametric $F$ with density $f$, and $0 < p < 1$, denote by $\xi_p = \inf\{x : F(x) \geq p\}$ the $p$-th quantile of $F$ and by $\xi_{n,p}$ the $p$-th sample quantile of $F_n$; (Bahadur, 1966) coined the famed Bahadur representation:

$$\xi_{n,p} = \xi_p + \frac{p - F_n(\xi_p)}{f(\xi_p)} + O_{a.s.}\left[n^{-3/4}(\log n)^{1/2}(\log\log n)^{1/4}\right].$$

Put simply, it states that an asymptotic relationship between the quantile and the CDF can be established through the density function $f$. Refinements of Bahadur's result in the i.i.d. setting were proposed by Kiefer (Kiefer, 1967, 1970a, 1970b), and (Wu et al., 2005) further extended the representation to a wide range of dependent cases. (Woodruff, 1952) proposed the Woodruff confidence interval, which essentially establishes a one-to-one relationship between point-wise confidence intervals for the CDF and for quantiles.

Another line of research focuses on the $\mathcal{L}_{\infty}$-norm deviation of the empirical CDF from the truth (a uniform bound), instead of a specific quantile location (point-wise). The celebrated Dvoretzky-Kiefer-Wolfowitz-Massart (DKWM) inequality (Dvoretzky et al., 1956; Massart, 1990) provides a tight bound on the CDF for i.i.d. samples:

$$\mathbb{P}\left(\sup_{x\in\mathbb{R}}\left(F_n(x)-F(x)\right)>\epsilon\right)\leq \exp\left(-2n\epsilon^2\right), \quad \forall\, \epsilon \geq \sqrt{\frac{1}{2n}\ln 2},$$

which, by its expression, is indifferent to quantile locations. (Howard and Ramdas, 2019) further proposed two methods for point-wise quantile confidence intervals and two for $\mathcal{L}_{\infty}$-norm type confidence bounds, proven adaptive in a sequential setting, resonating with the always-valid $p$-values line of research (Johari et al., 2015, 2017). However, from a QTE perspective we are still most interested in point-wise statistical inference, and the method of (Howard and Ramdas, 2019) requires either parameter tuning or numeric root finding, and its generalization to the dependent case is non-trivial.

In this paper, we focus on the fixed-horizon setting, which is the common case at Snap Inc., instead of the sequential setting considered by (Howard and Ramdas, 2019). We propose an easy-to-interpret, easy-to-implement, data-driven method for percentage-change treatment effects over a range of quantile levels.

3. Continuous Point-wise Statistical Inference on QTE

3.1. Sample Quantiles and Bahadur Representation

At Snap Inc., hundreds of metrics are evaluated simultaneously in A/B experiments, gauging not only user telemetry but also app performance. As a social network company with a camera focus, myriad facets of in-app performance are direct optimization goals for many teams, and there is almost always a dual performance metric for an engagement metric: for example, as we measure the number of app opens, we also measure app-open latency.

Figure 2. Geometric interpretation of the Bahadur representation between the empirical CDF and metric quantiles.

This brings another layer of complication to A/B tests. As is typical practice, randomization units are users; for engagement-type metrics, each user contributes exactly one data point, all data points can be considered i.i.d. observations, and statistical inference can by and large be accomplished by appealing to the central limit theorem. The dual performance metrics are not as straightforward: each user contributes multiple data points (exactly as many as the corresponding engagement count), so we have clustered observations that are no longer independent. Using the same notation as in Section 2.2, for i.i.d. samples $X_1, \cdots, X_n$, the sample $p$-th quantile $\xi_{n,p}$ is asymptotically normal:

$$\sqrt{n}\left(\xi_{n,p} - \xi_p\right) \rightarrow \mathcal{N}\left(0, \frac{p(1-p)}{f(\xi_p)^2}\right).$$

With performance metrics, the delta method is typically needed to approximate the numerator of the variance term (Deng et al., 2018; Liu et al., 2019). However, this holds only for a fixed quantile location $p$ and is hard to generalize to other locations without recalculation. Furthermore, kernel density estimation is typically required for $\hat{f}(\xi_p)$, and its bandwidth is hard to choose: although some ad-hoc rules work well in practice (Liu et al., 2019), small deviations in the bandwidth affect the result greatly.

To demonstrate the issue, consider a typical performance metric measuring the latency of starting the Discover section inside the Snapchat app. The study improved this latency at P50 from 911.991 to 908.173, with a $p$-value of 0.044 using the method in (Liu et al., 2019) with the recommended bandwidth choice. However, varying the bandwidth over the choices below (with the normal reference rule (Silverman, 1986) chosen as $h_n = 1.06 \min\{s, \text{IQR}/1.34\}/n^{1/5}$, where IQR is the inter-quartile range, $s$ is the sample standard deviation, and $n$ is the sample size), we get the drastically different $p$-values shown in Table 1.

kernel density estimator's bandwidth | $\Delta\%$ $p$-value
0.002 | 1.08e-34
0.01 | 0.0125
0.02 | 0.049
normal reference rule | 0.352
Table 1. How the bandwidth choice affects kernel density estimation, resulting in different percentage-change $p$-values at P50.

Using 0.05 as a threshold, the $p$-values vary from highly significant to not significant at all as the bandwidth varies, and we lack a robust, data-driven way of choosing a bandwidth that guarantees good estimation quality for inference across all quantiles.
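To make this sensitivity concrete, below is a minimal sketch with synthetic log-normal "latency" data (not Snap data, and not the exact Table 1 setup; for brevity it also ignores clustering): it plugs a Gaussian KDE with different bandwidth factors into the asymptotic quantile variance $p(1-p)/(n f(\xi_p)^2)$ and reports how the P50 $p$-value swings.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

# Sketch: the KDE bandwidth drives the asymptotic quantile variance
# p(1-p) / (n * f(xi_p)^2), and hence the two-sample p-value at P50.
rng = np.random.default_rng(0)
control = rng.lognormal(mean=6.8, sigma=0.5, size=100_000)
treatment = control * 0.996  # a small artificial median shift

def p50_se(x, bw):
    """SE of the sample median via asymptotic normality and a KDE fit."""
    p, n = 0.5, len(x)
    f_hat = gaussian_kde(x, bw_method=bw)(np.quantile(x, p))[0]
    return np.sqrt(p * (1 - p) / n) / f_hat

for bw in [0.002, 0.01, 0.02, "silverman"]:  # bw: factor on the data std
    diff = np.quantile(treatment, 0.5) - np.quantile(control, 0.5)
    z = diff / np.hypot(p50_se(control, bw), p50_se(treatment, bw))
    print(f"bandwidth={bw}: p-value = {2 * norm.sf(abs(z)):.3g}")
```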

As noted in Section 2.2, the famed Bahadur representation establishes an asymptotic relationship between sample quantiles and the empirical CDF, including in various dependent cases (Wu et al., 2005).

The Bahadur representation has a nice geometric interpretation, illustrated in Figure 2. In the neighborhood of the point $(\xi_p, p)$, to convert a confidence interval for the empirical CDF into one for the quantile, we simply divide by the slope of the tangent to the curve at $\xi_p$, which is $F'(\xi_p) = f(\xi_p)$. Therefore, for i.i.d. samples, we have

$$p - F_n(\xi_p) \rightarrow \mathcal{N}\left(0, \frac{p(1-p)}{n}\right),$$

which no longer involves density estimation. For the clustered-sample case, it suffices to replace the variance term $p(1-p)/n$ with $\nu$, where $\nu$ can be empirically estimated using the Bag of Little Bootstrap (BLB) technique of the next section. Taking one step further, we have the result:

$$\text{Let } \hat{\xi}_p^L = \inf\left\{t : F_n(t) \geq p - z_{\alpha/2}\,\nu^{1/2}[F_n(\xi_p)]\right\},$$
$$\hat{\xi}_p^U = \inf\left\{t : F_n(t) \geq p + z_{\alpha/2}\,\nu^{1/2}[F_n(\xi_p)]\right\}, \quad \text{then}$$
$$\mathbb{P}\left(\hat{\xi}_p^L \leq \xi_p \leq \hat{\xi}_p^U\right) \approx 1 - \alpha.$$

Woodruff (Woodruff, 1952) first proposed this interval empirically; it ingeniously circumvents density estimation altogether while still providing valid statistical inference.

Below we discuss how we extend this CI to a continuous setting and to the clustered-sample case, suiting the large-scale A/B testing needs at Snap Inc.
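For intuition, here is a minimal sketch of the Woodruff-type interval in the i.i.d. case, where $\nu = p(1-p)/n$; the clustered case replaces this plug-in variance with the bootstrap estimate described in the next section. Function and variable names are ours for illustration only.

```python
import numpy as np
from scipy.stats import norm

def woodruff_ci(x, p, alpha=0.05):
    """Woodruff CI for the p-th quantile of i.i.d. samples: shift p by
    z_{alpha/2} * sqrt(p(1-p)/n) on the CDF scale, then invert the
    empirical CDF -- no density estimate needed."""
    x = np.sort(np.asarray(x))
    n = len(x)
    half = norm.ppf(1 - alpha / 2) * np.sqrt(p * (1 - p) / n)
    lo_p = max(p - half, 1.0 / n)   # clamp into the attainable range
    hi_p = min(p + half, 1.0)
    # inf{t : F_n(t) >= q} is the ceil(n*q)-th order statistic
    lower = x[int(np.ceil(n * lo_p)) - 1]
    upper = x[int(np.ceil(n * hi_p)) - 1]
    return lower, upper

rng = np.random.default_rng(1)
print(woodruff_ci(rng.exponential(size=50_000), p=0.9))
```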

3.2. Balanced Bag-of-Little-Bootstrap for CDF

In the previous section, we discussed the relationship between the empirical CDF and quantiles through the Bahadur representation. In this section, we discuss how the variance term $\nu$ can be efficiently estimated at a continuous range of quantile locations using a single set of bootstrap results, and how Woodruff-type confidence intervals can then be applied.

As mentioned in (Deng et al., 2018), the 'ntiles' operation (querying quantiles from a set of observations) is expensive and should be avoided where possible. Furthermore, bearing in mind the one-to-one relationship between CIs for quantiles and CIs for the CDF, to simultaneously obtain quantile-level inference at all possible locations we can first tackle the variance of the CDF and then appeal to the Woodruff CI to transform back to quantiles. In this paper, we use the Bag of Little Bootstrap of (Kleiner et al., 2014) with the balanced method (Gleason, 1988) for resampling, which further reduces Monte Carlo variance and achieves higher bootstrap efficiency.

Suppose there are $N$ users and, for user $i = 1, \cdots, N$, a total of $M_i$ events denoted $X_{i,1}, \cdots, X_{i,M_i}$, with $\sum_{i=1}^N M_i = n$. We first take the logarithm of the metric values, then round to a pre-defined number of digits of precision (we chose 2): $\text{round}(\log X_{i,j}, 2)$. This step reduces the total number of unique log-scaled values for efficiency. Furthermore, due to quantiles' invariance to monotone transformations (Berger, 2013), we have

$$\log \xi_{f(x)}(p) = \xi_{f(\log x)}(p), \quad \forall\, 0 < p < 1.$$

This ensures we can recover the QTE on the original metric $X$ by simply exponentiating. Furthermore, by the delta method, we have

$$\begin{aligned}
&\text{Var}(\%\text{ difference between control and treatment at the } p\text{-th quantile})\\
&= \text{Var}\left(\frac{\xi^T_{f(x)}(p)-\xi^C_{f(x)}(p)}{\xi^C_{f(x)}(p)}\right) = \text{Var}\left(\frac{\xi^T_{f(x)}(p)}{\xi^C_{f(x)}(p)}\right)\\
&= \text{Var}\left(\exp\left(\log\xi^T_{f(x)}(p)-\log\xi^C_{f(x)}(p)\right)\right)\\
&= \text{Var}\left(\exp\left(\xi^T_{f(\log x)}(p)-\xi^C_{f(\log x)}(p)\right)\right)\\
&\approx \text{Var}\left(\xi^T_{f(\log x)}(p)-\xi^C_{f(\log x)}(p)\right)\left(\frac{\xi^T_{f(x)}(p)}{\xi^C_{f(x)}(p)}\right)^2\\
&= \left(\text{Var}\left(\xi^T_{f(\log x)}(p)\right)+\text{Var}\left(\xi^C_{f(\log x)}(p)\right)\right)\left(\frac{\xi^T_{f(x)}(p)}{\xi^C_{f(x)}(p)}\right)^2,
\end{aligned}$$

where the first term in the last expression can be bootstrapped on the log-transformed data separately for control and treatment. Applying the delta method this way is empirically more accurate than applying it on the original-scale metric, especially when the percentage change is large or when $\xi^C_{f(x)}(p)$ is small.
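As a small numeric sketch of this last step (variable names hypothetical): given a log-scale quantile estimate and SE per group, the percentage change and its delta-method SE follow directly. Note that, by the invariance above, the log-scale quantile can be computed either as the quantile of the logs or as the log of the quantile.

```python
import numpy as np

def pct_change_and_se(q_log_c, se_log_c, q_log_t, se_log_t):
    """Delta method on the log scale: with d = log-quantile difference,
    Var(exp(d)) ~ Var(d) * (xi_T / xi_C)^2, i.e. the last line above."""
    ratio = np.exp(q_log_t - q_log_c)   # xi_T(p) / xi_C(p)
    delta_pct = (ratio - 1.0) * 100.0
    se_pct = np.sqrt(se_log_c**2 + se_log_t**2) * ratio * 100.0
    return delta_pct, se_pct

# e.g. a ~1% regression at some quantile, log-scale SEs of 0.002 per group
print(pct_change_and_se(6.81, 0.002, 6.82, 0.002))
```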

We then split the $N$ users, together with their corresponding events, into $s$ subsets ($s = 100$ at Snap) for control and treatment separately; all events of any given user are preserved in the same subset. For each subset, we generate a data sketch of the unique rounded log-scaled metric values (denoted $\tilde{X}_j,\; j = 1, \cdots, K$, where $K$ is the total number of unique values in the original dataset) with their corresponding counts $c_{i,j},\; i = 1, \cdots, s,\; j = 1, \cdots, K$, as shown in Table 2 and the sketch below:

Table 2. Disjoint subsets splitting the original data; events belonging to the same user are in the same bucket.
 | bucket 1 | bucket 2 | $\cdots$ | bucket $s$
$\tilde{X}_1$ | $c_{1,1}$ | $c_{2,1}$ | $\cdots$ | $c_{s,1}$
$\tilde{X}_2$ | $c_{1,2}$ | $c_{2,2}$ | $\cdots$ | $c_{s,2}$
$\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$
$\tilde{X}_K$ | $c_{1,K}$ | $c_{2,K}$ | $\cdots$ | $c_{s,K}$
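A sketch of this sketch-building step, assuming a flat event log with columns `user_id` and `value` (a hypothetical schema; the production version runs in Spark). For brevity it assigns users to buckets by random draw rather than the equal-size split of Algorithm 1.

```python
import numpy as np
import pandas as pd

def bucket_count_sketch(events, s=100, d=2, seed=0):
    """Build the Table 2 sketch: a K x s matrix of counts of each unique
    rounded log-scaled value per user bucket, keeping all of a user's
    events inside a single bucket."""
    df = events.copy()
    df["x"] = np.log(df["value"]).round(d)   # log-scale + rounding
    users = df["user_id"].unique()
    rng = np.random.default_rng(seed)
    buckets = pd.Series(rng.integers(0, s, size=len(users)), index=users)
    df["bucket"] = df["user_id"].map(buckets)  # user's events stay together
    counts = (df.groupby(["x", "bucket"]).size()
                .unstack(fill_value=0)
                .reindex(columns=range(s), fill_value=0))
    return counts.index.to_numpy(), counts.to_numpy()  # sorted X~_j, c_{i,j}
```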

Let $B$ be the number of bootstrap iterations. To achieve balanced bootstrapping, i.e. every bucket appearing exactly $B$ times overall, randomly permute the long vector $\boldsymbol{V} = \left\{\underbrace{1,\cdots,1}_{B\text{ repetitions}}, \underbrace{2,\cdots,2}_{B\text{ repetitions}}, \cdots, \underbrace{s,\cdots,s}_{B\text{ repetitions}}\right\}$, split $\boldsymbol{V}$ into $B$ vectors of length $s$, and treat each vector $\boldsymbol{v}_i,\; i = 1, \cdots, B$ as one bootstrap sample of buckets. For each bootstrap sample $\boldsymbol{v}_i$, we count how many times each of $1, \cdots, s$ appears and weight the corresponding counts $c_{a,j}$ by how many times bucket $a$ appears in $\boldsymbol{v}_i$. From each bootstrap sample we obtain the empirical CDF at all unique log-scaled values $\tilde{X}_j,\; j = 1, \cdots, K$ (denoted $\mathbb{F}_{n,i}(\tilde{X}_j),\; i = 1, \cdots, B$). Across bootstrap samples, we then estimate the variance of the empirical CDF at each $\tilde{X}_j$, which approximates $\nu$ at $\mathbb{F}_n(\tilde{X}_j)$:

$$\text{Var}\left(\mathbb{F}_n(\tilde{X}_j)\right) \approx B^{-1}\sum_{i=1}^B\left(\mathbb{F}_{n,i}(\tilde{X}_j) - B^{-1}\sum_{k=1}^B \mathbb{F}_{n,k}(\tilde{X}_j)\right)^2 := \hat{\nu}^B_n[\mathbb{F}_n(\tilde{X}_j)].$$
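Below is a compact sketch of this balanced resampling on the count-matrix sketch above (a minimal illustration, not the Spark implementation): it permutes the index vector $\boldsymbol{V}$, re-weights bucket counts per bootstrap replicate via a sort-free cumulative sum, and returns the bootstrap standard deviation $\hat{\nu}^{B\,1/2}_n$ of the empirical CDF at every unique value.

```python
import numpy as np

def balanced_blb_cdf_sd(counts, B=200, seed=0):
    """Balanced bag-of-little-bootstrap SD of the empirical CDF.
    counts: (K, s) matrix; counts[j, i] = # events with the j-th sorted
    unique log-scaled value in user bucket i."""
    K, s = counts.shape
    rng = np.random.default_rng(seed)
    V = np.repeat(np.arange(s), B)   # each bucket index exactly B times
    rng.shuffle(V)
    cdfs = np.empty((B, K))
    for b, idx in enumerate(V.reshape(B, s)):
        weights = np.bincount(idx, minlength=s)  # resampling multiplicities
        w_counts = counts @ weights              # re-weighted value counts
        cdfs[b] = np.cumsum(w_counts) / w_counts.sum()
    return cdfs.std(axis=0)   # SD across replicates, per X~_j

# toy usage with the previous snippet's sketch:
# xs, counts = bucket_count_sketch(events)
# sd = balanced_blb_cdf_sd(counts)
```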

Bootstrapping the empirical CDF avoids 'ntiles' operations and lets us use one set of bootstraps to obtain all QTE of potential interest by interpolating the estimated variance under a general continuity assumption. Figure 3 shows a typical example of the interpolated bootstrapped standard error across all quantiles.

Figure 3. A typical example of bootstrapped SE for the empirical CDF across all quantile locations for control and treatment.

Since the original Woodruff method does not always produce a symmetric CI, we propose the following conservative empirical estimate, computed for each unique log-scaled metric value $\tilde{X}_j$ separately for control and treatment:

(1) $\text{Let } \hat{\xi}^L_{\mathbb{F}_n(\tilde{X}_j)} = \inf\left\{t : \mathbb{F}_n(t) \geq \mathbb{F}_n(\tilde{X}_j) - \hat{\nu}^B_n[\mathbb{F}_n(\tilde{X}_j)]^{1/2}\right\},$
(2) $\hat{\xi}^U_{\mathbb{F}_n(\tilde{X}_j)} = \inf\left\{t : \mathbb{F}_n(t) \geq \mathbb{F}_n(\tilde{X}_j) + \hat{\nu}^B_n[\mathbb{F}_n(\tilde{X}_j)]^{1/2}\right\},$
(3) $\text{then let } \widehat{\text{SE}}(\tilde{X}_j) = \max\left\{\tilde{X}_j - \hat{\xi}^L_{\mathbb{F}_n(\tilde{X}_j)},\; \hat{\xi}^U_{\mathbb{F}_n(\tilde{X}_j)} - \tilde{X}_j\right\}.$
Input and data-processing:
  • Users $i$ and events $j$: $X^C_{i,j},\; i=1,\cdots,N^C,\; j=1,\cdots,M^C_i$, with $\sum_{i=1}^{N^C} M^C_i = n^C$; $X^T_{i,j},\; i=1,\cdots,N^T,\; j=1,\cdots,M^T_i$, with $\sum_{i=1}^{N^T} M^T_i = n^T$.

  • Digits of precision for log-scaling: $d$ (2 at Snap).

  • Unique log-scaled observations: $\tilde{X}^C_j,\; j=1,\cdots,K^C$; $\tilde{X}^T_j,\; j=1,\cdots,K^T$; with $c^C_i,\; i=1,\cdots,K^C$ and $c^T_i,\; i=1,\cdots,K^T$ their corresponding counts in the dataset.

  • Number of user buckets: $s^C = s^T = s$ (100 at Snap).

  • Number of bootstrap iterations: $B^C = B^T = B$ (200 at Snap).

  • Quantile grid locations for QTE evaluation: $\text{grid}_i \in (0,1),\; i=1,\cdots,N_{\text{grid}}$; e.g., 20% to 99% with step size 1% chosen for Snap.

  • Confidence level: $1-\alpha$ ($\alpha = 0.05$ at Snap).

Output: Quantile Treatment Effects (QTE)
  • $\Delta\%$ and $p$-value evaluated on the grid: $\Delta_i\%,\; p_i,\; i=1,\cdots,N_{\text{grid}}$.

  • $\Delta\%$ confidence interval at level $1-\alpha$ evaluated on the grid: $\text{CI}_i := [\text{CI}^L_i, \text{CI}^U_i],\; i=1,\cdots,N_{\text{grid}}$.

for control and treatment identifier $id \in \{C, T\}$, do
  • evaluate the overall empirical CDF $\mathbb{F}^{id}_n(\tilde{X}^{id}_j),\; j=1,\cdots,K^{id}$.

  • split the $N^{id}$ users into $s$ buckets of equal size $\lfloor N^{id}/s \rfloor$; for all users in bucket $i$, put all their events in bucket $i$, and record the count of log-scaled value $\tilde{X}^{id}_j,\; j=1,\cdots,K^{id}$ in bucket $i$ as $c^{id}_{i,j}$.

  • randomly permute the long vector $\boldsymbol{V} = \left\{\underbrace{1,\cdots,1}_{B\text{ repetitions}}, \underbrace{2,\cdots,2}_{B\text{ repetitions}}, \cdots, \underbrace{s,\cdots,s}_{B\text{ repetitions}}\right\}$ and split it into $B$ small vectors $\boldsymbol{v}_i$ of equal length $s$.

    • for each bootstrap sample $\boldsymbol{v}_i = (v_{i_1},\cdots,v_{i_s})$, define $\mathbb{F}^{*,id}_{n^{id},i}(x) := \left(\sum_{a=1}^s \sum_{j=1}^{K^{id}} c^{id}_{v_{i_a},j}\,\mathbb{I}(\tilde{X}_j \leq x)\right) / \sum_{a=1}^s \sum_{j=1}^{K^{id}} c^{id}_{v_{i_a},j}$;

    • evaluate $\mathbb{F}^{*,id}_{n^{id},i}(x)$ at $x = \tilde{X}^{id}_1, \tilde{X}^{id}_2, \cdots, \tilde{X}^{id}_{K^{id}}$;

    • for each $\tilde{X}^{id}_j,\; j=1,\cdots,K^{id}$, estimate $\text{Var}\left(\mathbb{F}^{id}_n(\tilde{X}^{id}_j)\right) \approx B^{-1}\sum_{i=1}^B\left(\mathbb{F}^{*,id}_{n^{id},i}(\tilde{X}^{id}_j) - B^{-1}\sum_{k=1}^B \mathbb{F}^{*,id}_{n^{id},k}(\tilde{X}^{id}_j)\right)^2 := \hat{\nu}^B_n[\mathbb{F}^{id}_n(\tilde{X}^{id}_j)]$.

  • For each $j = 1,\cdots,K^{id}$, let $\hat{\xi}^L_{\mathbb{F}^{id}_n(\tilde{X}^{id}_j)} = \inf\left\{t : \mathbb{F}^{id}_n(t) \geq \mathbb{F}^{id}_n(\tilde{X}^{id}_j) - \hat{\nu}^B_n[\mathbb{F}^{id}_n(\tilde{X}^{id}_j)]^{1/2}\right\}$ and $\hat{\xi}^U_{\mathbb{F}^{id}_n(\tilde{X}^{id}_j)} = \inf\left\{t : \mathbb{F}^{id}_n(t) \geq \mathbb{F}^{id}_n(\tilde{X}^{id}_j) + \hat{\nu}^B_n[\mathbb{F}^{id}_n(\tilde{X}^{id}_j)]^{1/2}\right\}$; then let $\widehat{\text{SE}}(\tilde{X}^{id}_j) = \max\left\{\tilde{X}^{id}_j - \hat{\xi}^L_{\mathbb{F}^{id}_n(\tilde{X}^{id}_j)},\; \hat{\xi}^U_{\mathbb{F}^{id}_n(\tilde{X}^{id}_j)} - \tilde{X}^{id}_j\right\}$.

  • Use linear interpolation on $\left(\mathbb{F}^{id}_n(\tilde{X}^{id}_j), \widehat{\text{SE}}(\tilde{X}^{id}_j)\right),\; j=1,\cdots,K^{id}$, and evaluate at $\text{grid}_i,\; i=1,\cdots,N_{\text{grid}}$, yielding $\left(\text{grid}_i, \widehat{\text{SE}}\left(\mathbb{F}^{id,-1}_n(\text{grid}_i)\right)\right)$.

QTE:
  • $\Delta\%$ on the grid: $\Delta_i\% = \left(\exp\left(\mathbb{F}^{T,-1}_n(\text{grid}_i)\right) - \exp\left(\mathbb{F}^{C,-1}_n(\text{grid}_i)\right)\right) / \exp\left(\mathbb{F}^{C,-1}_n(\text{grid}_i)\right) \times 100\%$;

  • SE for $\Delta\%$ on the grid: $\widehat{\text{SE}}(\Delta_i\%) = \sqrt{\widehat{\text{SE}}\left(\mathbb{F}^{C,-1}_n(\text{grid}_i)\right)^2 + \widehat{\text{SE}}\left(\mathbb{F}^{T,-1}_n(\text{grid}_i)\right)^2} \times \frac{\exp\left(\mathbb{F}^{T,-1}_n(\text{grid}_i)\right)}{\exp\left(\mathbb{F}^{C,-1}_n(\text{grid}_i)\right)}$;

  • $\Delta\%$ $p$-value on the grid: $p_i = 2 \times \Phi\left(-\left|\frac{\Delta_i\%}{\widehat{\text{SE}}(\Delta_i\%)}\right|\right)$;

  • $\Delta\%$ confidence interval at level $1-\alpha$ on the grid: $\text{CI}_i = \left[\Delta_i\% - z_{\alpha/2}\widehat{\text{SE}}(\Delta_i\%),\; \Delta_i\% + z_{\alpha/2}\widehat{\text{SE}}(\Delta_i\%)\right]$.

Algorithm 1 CONQ: Continuous QTE on performance type metrics for large-scale online controlled experiments.

Furthermore, joining these results with the empirical CDF of the original observations, we can linearly interpolate onto a fixed grid of quantiles (ensuring common quantile locations between control and treatment for QTE evaluation on that grid). At Snap, we chose the range P20 to P99. The same example referenced in Figure 3 has its interpolated standard error shown in Figure 4.

Figure 4. Estimated SE on a grid of quantile locations using the Woodruff-type estimation above, for control and treatment.

Once we have common quantile locations and estimated SEs of the log-scaled metric for control and treatment, we have all the ingredients for traditional point-wise $t$-test based statistical inference, including percentage-change $p$-values and confidence intervals.
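A sketch of this last mile (names hypothetical), combining the interpolation step with the point-wise test exactly as in the QTE block of Algorithm 1:

```python
import numpy as np
from scipy.stats import norm

def qte_on_grid(grid, xs_c, cdf_c, se_c, xs_t, cdf_t, se_t, alpha=0.05):
    """Point-wise QTE on a shared quantile grid. Per group: xs_* are sorted
    unique log-scaled values, cdf_* their empirical CDF, se_* the Woodruff
    SEs at those values. Returns (delta %, p-values, CIs)."""
    q_c = np.interp(grid, cdf_c, xs_c)  # log-scale quantiles via inverse CDF
    q_t = np.interp(grid, cdf_t, xs_t)
    s_c = np.interp(grid, cdf_c, se_c)  # interpolated SEs at the grid
    s_t = np.interp(grid, cdf_t, se_t)
    ratio = np.exp(q_t - q_c)
    delta = (ratio - 1.0) * 100.0
    se = np.sqrt(s_c**2 + s_t**2) * ratio * 100.0
    p_values = 2.0 * norm.sf(np.abs(delta / se))
    z = norm.ppf(1.0 - alpha / 2.0)
    ci = np.stack([delta - z * se, delta + z * se], axis=1)
    return delta, p_values, ci

grid = np.arange(0.20, 1.00, 0.01)  # P20 to P99, step 1%
```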

The following section summarizes the aforementioned steps as the algorithm CONQ (Continuous QTE for large-scale experiments).

3.3. CONQ Algorithm and Implementations

Here we summarize the end-to-end CONQ algorithm in Algorithm 1. Adopting the earlier notation, superscripts $C$ and $T$ denote data associated with the control and treatment groups respectively, $n$ is the total number of events, and $N$ the total number of users. As can be seen, it requires only a few hyper-parameters: $d$ for log-scaling precision, $s$ for the number of buckets, $B$ for balanced bootstrap iterations, a pre-determined grid of quantile locations for evaluation, and $1-\alpha$ for the confidence level. The bag-of-little-bootstrap part can simply be construed as re-weighting observations in buckets based on samples of bucket indices. This greatly reduces computational cost in comparison to the method in (Deng et al., 2018), as the number of unique observations reduces from $n$ to $K$, and $K$ can be further adjusted via the log-scaling precision $d$.

Figure 5. Illustration of CONQ engineering pipeline at Snap, orchestrated through Airflow and Spark.

One key step that enables a continuous range of QTE evaluation is evaluating $\mathbb{F}^{*,id}_{n^{id},i}$ at all $\tilde{X}^{id}_j$ for each bootstrap sample $i$ and $id \in \{C, T\}$. This is straightforward after a single sort of the $\tilde{X}^{id}_j$ and a weighted cumulative sum of the re-weighted counts $c^{id}_{i,j}$; it avoids the 'ntiles' operation and improves efficiency.

We implemented the CONQ algorithm in Snap's A/B experimentation pipeline; the flowchart in Figure 5 illustrates the overall workflow orchestrated by Spark (Zaharia et al., 2016) and Airflow.

Recall our earlier example in Figure 1, where P50 and P90 QTE are significant but directionally different. Using CONQ (Figure 6), we can see that the regression happens mainly from P20 to P80, i.e. at lower quantiles, while improvements happen mainly after P90, indicating the presence of heterogeneous treatment effects, especially across devices of varying performance. Breaking the results down by device performance cluster (2 to 7, with lower numbers indicating generally poorer-performing devices and higher numbers better-performing ones) in Figure 7 corroborates this speculation: regressions mainly occur on high-end devices, whereas improvements occur on low-end devices.

Figure 6. Example in Figure 1 evaluated by CONQ at P20 to P99. Regression (red) happens mainly from P20 to P80, whereas improvements (green) happen mainly after P90, indicating heterogeneous treatment effects. The purple dotted line marks the 0.05 $p$-value threshold; the y-axis is log-scaled.
Figure 7. Example in Figure 1 evaluated by CONQ at P20 to P99, broken down by device cluster. Regressions mainly occur on high-end devices, while improvements mainly occur on low-end devices.

4. Evaluation

In this section we evaluate CONQ using real A/B experiments measuring various performance type metrics at Snap. Specifically we aim to address the following questions:

  (1) Compared with density-estimation based methods (Liu et al., 2019; Deng et al., 2018), can CONQ achieve similar results empirically at P50 and P90?

  (2) Under a known A/A test setting (the null case), is CONQ robust across all quantile locations with respect to false positives after multiple-testing adjustment, indicating stability?

4.1. Evaluation on Snap’s A/B Experiments

For the comparison on real experiments, we use the method in (Deng et al., 2018) (denoted DELTA) with the kernel density bandwidth $h$ chosen by the ad-hoc rule proposed in (Liu et al., 2019), and compare $\Delta\%$ $p$-value results at P50 and P90 with CONQ. A total of 15 performance metrics on 696 treatment-control pairs are evaluated (10,440 $p$-values in total across P50 and P90). The performance metrics measure a wide array of latencies and crashes in the Snapchat app, which should be representative of industry A/B experiments.

We first compare the resulting $p$-values from DELTA and CONQ in a scatter plot. As seen in Figure 8, the two methods align well and mostly adhere to a linear pattern as expected, indicating stable performance of CONQ across various settings and $p$-value regions. We then compare the proportions of significant discoveries under static $p$-value thresholds ranging from 0.0001 to 0.2; DELTA and CONQ also perform similarly at both P50 and P90, with CONQ producing slightly more discoveries across all thresholds.

Figure 8. Comparison of DELTA and CONQ on p-values and significance discoveries at various threshold levels.

4.2. Evaluation on Snap’s A/A Tests

In this section, we evaluate CONQ on a subset of known A/A tests at Snap, where no QTE is expected at any quantile location. For any given quantile location, the set of $p$-values for QTE across different treatment-control pairs and performance metrics should yield no significant results after multiple-testing adjustment with the Benjamini-Hochberg (BH) procedure (Benjamini and Hochberg, 1995). We use BH here, instead of testing for uniformly distributed $p$-values, because of the positive dependence among many performance metrics and the robustness of BH in that setting. We choose nominal FDR levels $\alpha = 0.05, 0.1, 0.2$, and evaluate a total of 15 performance metrics across 54 treatment-control pair combinations. The percentile grid is P20 to P95 in 5% increments, and BH is applied conditional on a fixed percentile. Since all scenarios considered are A/A tests, any discovery by BH is a false discovery. Table 3 summarizes, per percentile, the count of false discoveries and its percentage (out of 810 $p$-values per quantile). CONQ performs stably across quantile locations at realistic nominal FDR thresholds of 0.05 or 0.1; at the larger threshold of 0.2, CONQ remains robust, with a slightly elevated number of false positives in the tail area.

Percentile | $\alpha$ = 0.05 | $\alpha$ = 0.1 | $\alpha$ = 0.2
P20 | 0 (0%) | 0 (0%) | 0 (0%)
P25 | 0 (0%) | 0 (0%) | 0 (0%)
P30 | 0 (0%) | 0 (0%) | 0 (0%)
P35 | 0 (0%) | 0 (0%) | 0 (0%)
P40 | 0 (0%) | 0 (0%) | 0 (0%)
P45 | 0 (0%) | 0 (0%) | 0 (0%)
P50 | 0 (0%) | 0 (0%) | 0 (0%)
P55 | 0 (0%) | 0 (0%) | 0 (0%)
P60 | 0 (0%) | 0 (0%) | 0 (0%)
P65 | 0 (0%) | 0 (0%) | 0 (0%)
P70 | 0 (0%) | 0 (0%) | 0 (0%)
P75 | 0 (0%) | 0 (0%) | 0 (0%)
P80 | 0 (0%) | 0 (0%) | 1 (0.12%)
P85 | 1 (0.12%) | 2 (0.25%) | 3 (0.37%)
P90 | 0 (0%) | 0 (0%) | 5 (0.62%)
P95 | 2 (0.25%) | 2 (0.25%) | 8 (0.99%)
Table 3. Number of false positives discovered (and percentage) out of 810 $p$-values, using various nominal FDR thresholds for BH adjustment, evaluated at different percentiles.
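The per-percentile adjustment is the standard BH step-up rule; below is a minimal sketch of how one row of Table 3 is produced, with simulated uniform $p$-values standing in for the real A/A corpus.

```python
import numpy as np

def bh_discoveries(p_values, fdr=0.05):
    """Benjamini-Hochberg step-up: count rejections among m p-values,
    i.e. the largest k with p_(k) <= k/m * fdr."""
    p = np.sort(np.asarray(p_values))
    m = len(p)
    passed = np.nonzero(p <= fdr * np.arange(1, m + 1) / m)[0]
    return 0 if passed.size == 0 else int(passed[-1] + 1)

# one percentile row: 810 A/A p-values, expect ~0 discoveries
rng = np.random.default_rng(3)
print(bh_discoveries(rng.uniform(size=810), fdr=0.05))
```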

5. Concluding Remarks and Possible Extension

In this paper, we propose a non-parametric method for point-wise statistical inference on a continuous range of QTE in A/B experiments; it is density-estimation free and scales to large industry settings. It achieves stable performance across all quantile ranges and performs similarly to the existing delta-method based approaches in (Liu et al., 2019; Deng et al., 2018).

Note that the confidence interval provided is made conservative to ensure symmetry and $p$-value calculation; for tail areas, a direct application of Woodruff-type confidence intervals may have better coverage, as investigated in (Sitter and Wu, 2001). Furthermore, we can indirectly use the estimated SEs of the quantiles and the empirical CDF to obtain an estimate of the density, which can be used for other tasks while avoiding kernel bandwidth choice.

We mainly focus on point-wise statistical inference for QTE: for any given $p \in (0,1)$, we produce a $1-\alpha$ confidence interval CI such that $\mathbb{P}(\xi_p \in \text{CI}) \geq 1-\alpha$. One could also be interested in a confidence band covering all quantiles simultaneously, namely $\mathbb{P}(\xi_p \in \text{CI}_p,\; \forall p \in (0,1)) \geq 1-\alpha$, in line with the DKWM inequality mentioned above. The CONQ method described in this paper can be extended to produce $\mathcal{L}_{\infty}$-norm type bounds as well: we can use the bootstrapped variance $\nu$ to estimate the so-called "effective degrees of freedom" and substitute it for $n$ in the DKWM inequality or the inequality of (Howard and Ramdas, 2019). This approach is justified by central limit theory for clustered samples, specifically (Hansen and Lee, 2019). We leave this for future research.

References

  • Bahadur (1966) R Raj Bahadur. 1966. A note on quantiles in large samples. The Annals of Mathematical Statistics 37, 3 (1966), 577–580.
  • Bakshy et al. (2014) Eytan Bakshy, Dean Eckles, and Michael S Bernstein. 2014. Designing and deploying online field experiments. In Proceedings of the 23rd international conference on World wide web. 283–292.
  • Benjamini and Hochberg (1995) Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal statistical society: series B (Methodological) 57, 1 (1995), 289–300.
  • Berger (2013) James O Berger. 2013. Statistical decision theory and Bayesian analysis. Springer Science & Business Media.
  • Cramér (1999) Harald Cramér. 1999. Mathematical methods of statistics. Vol. 43. Princeton university press.
  • Deng et al. (2018) Alex Deng, Ulf Knoblich, and Jiannan Lu. 2018. Applying the Delta method in metric analytics: A practical guide with novel ideas. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 233–242.
  • Dvoretzky et al. (1956) Aryeh Dvoretzky, Jack Kiefer, and Jacob Wolfowitz. 1956. Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. The Annals of Mathematical Statistics (1956), 642–669.
  • George et al. (2005) George EP Box, J Stuart Hunter, and William Gordon Hunter. 2005. Statistics for experimenters: design, innovation, and discovery. Wiley, New York.
  • Gerber and Green (2012) Alan S Gerber and Donald P Green. 2012. Field experiments: Design, analysis, and interpretation. WW Norton.
  • Gleason (1988) John R Gleason. 1988. Algorithms for balanced bootstrap simulations. The American Statistician 42, 4 (1988), 263–266.
  • Hansen and Lee (2019) Bruce E Hansen and Seojeong Lee. 2019. Asymptotic theory for clustered samples. Journal of econometrics 210, 2 (2019), 268–290.
  • Howard and Ramdas (2019) Steven R Howard and Aaditya Ramdas. 2019. Sequential estimation of quantiles with applications to A/B-testing and best-arm identification. arXiv preprint arXiv:1906.09712 (2019).
  • Hunter and Lange (2000) David R Hunter and Kenneth Lange. 2000. Quantile regression via an MM algorithm. Journal of Computational and Graphical Statistics 9, 1 (2000), 60–77.
  • Imbens and Rubin (2015) Guido W Imbens and Donald B Rubin. 2015. Causal inference in statistics, social, and biomedical sciences. Cambridge University Press.
  • Johari et al. (2017) Ramesh Johari, Pete Koomen, Leonid Pekelis, and David Walsh. 2017. Peeking at a/b tests: Why it matters, and what to do about it. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1517–1525.
  • Johari et al. (2015) Ramesh Johari, Leo Pekelis, and David J Walsh. 2015. Always valid inference: Bringing sequential analysis to A/B testing. arXiv preprint arXiv:1512.04922 (2015).
  • Kiefer (1967) J Kiefer. 1967. On Bahadur’s representation of sample quantiles. The Annals of Mathematical Statistics 38, 5 (1967), 1323–1342.
  • Kiefer (1970a) J Kiefer. 1970a. Deviations between the sample quantile process and the sample df. Nonparametric Techniques in Statistical Inference (1970), 299–319.
  • Kiefer (1970b) J Kiefer. 1970b. Old and new methods for studying order statistics and sample quantiles. Nonparametric Techniques in Statistical Inference (1970), 349–357.
  • Kleiner et al. (2014) Ariel Kleiner, Ameet Talwalkar, Purnamrita Sarkar, and Michael I Jordan. 2014. A scalable bootstrap for massive data. Journal of the Royal Statistical Society: Series B: Statistical Methodology (2014), 795–816.
  • Koenker and Hallock (2001) Roger Koenker and Kevin F Hallock. 2001. Quantile regression. Journal of economic perspectives 15, 4 (2001), 143–156.
  • Kohavi et al. (2009) Ronny Kohavi, Thomas Crook, and Roger Longbotham. 2009. Online experimentation at Microsoft. Data Mining Case Studies 11, 2009 (2009), 39.
  • Liu et al. (2019) Min Liu, Xiaohui Sun, Maneesh Varshney, and Ya Xu. 2019. Large-Scale Online Experimentation with Quantile Metrics. arXiv preprint arXiv:1903.08762 (2019).
  • Lux ([n.d.]) Matthias Lux. [n.d.]. Analyzing Experiment Outcomes: Beyond Average Treatment Effects. ([n. d.]).
  • Massart (1990) Pascal Massart. 1990. The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. The annals of Probability (1990), 1269–1283.
  • Silverman (1986) Bernard W Silverman. 1986. Density estimation for statistics and data analysis. Vol. 26. CRC press.
  • Sitter and Wu (2001) Randy R Sitter and Changbao Wu. 2001. A note on Woodruff confidence intervals for quantiles. Statistics & probability letters 52, 4 (2001), 353–358.
  • Tang et al. (2010) Diane Tang, Ashish Agarwal, Deirdre O’Brien, and Mike Meyer. 2010. Overlapping experiment infrastructure: More, better, faster experimentation. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining. 17–26.
  • Woodruff (1952) Ralph S Woodruff. 1952. Confidence intervals for medians and other position measures. J. Amer. Statist. Assoc. 47, 260 (1952), 635–646.
  • Wu et al. (2005) Wei Biao Wu et al. 2005. On the Bahadur representation of sample quantiles for dependent sequences. The Annals of Statistics 33, 4 (2005), 1934–1963.
  • Xie and Aurisset (2016) Huizhi Xie and Juliette Aurisset. 2016. Improving the sensitivity of online controlled experiments: Case studies at netflix. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 645–654.
  • Xu et al. (2015) Ya Xu, Nanyu Chen, Addrian Fernandez, Omar Sinno, and Anmol Bhasin. 2015. From infrastructure to culture: A/B testing challenges in large scale social networks. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2227–2236.
  • Zaharia et al. ([n.d.]) Matei Zaharia, Mosharaf Chowdhury, Michael J Franklin, Scott Shenker, Ion Stoica, et al. [n.d.]. Spark: Cluster computing with working sets. ([n. d.]).
  • Zaharia et al. (2016) Matei Zaharia, Reynold S Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman, Michael J Franklin, et al. 2016. Apache spark: a unified engine for big data processing. Commun. ACM 59, 11 (2016), 56–65.