Selective Inference for Hierarchical Clustering
Abstract
Classical tests for a difference in means control the type I error rate when the groups are defined a priori. However, when the groups are instead defined via clustering, then applying a classical test yields an extremely inflated type I error rate. Notably, this problem persists even if two separate and independent data sets are used to define the groups and to test for a difference in their means. To address this problem, in this paper, we propose a selective inference approach to test for a difference in means between two clusters. Our procedure controls the selective type I error rate by accounting for the fact that the choice of null hypothesis was made based on the data. We describe how to efficiently compute exact p-values for clusters obtained using agglomerative hierarchical clustering with many commonly-used linkages. We apply our method to simulated data and to single-cell RNA-sequencing data.
Keywords: post-selection inference, hypothesis testing, difference in means, type I error
1 Introduction
Testing for a difference in means between groups is fundamental to answering research questions across virtually every scientific area. Classical tests control the type I error rate when the groups are defined a priori. However, it is increasingly common for researchers to instead define the groups via a clustering algorithm. In the context of single-cell RNA-sequencing data, researchers often cluster the cells to identify putative cell types, then test for a difference in means between the putative cell types in that same data set; see e.g. Hwang et al. (2018). Unfortunately, available tests do not properly account for the double-use of data, which invalidates the resulting inference. One example of this problematic issue can be seen in the FindMarkers
function in the popular and highly-cited R
package Seurat
(Satija et al. 2015, Butler et al. 2018, Stuart et al. 2019, Hao et al. 2021). Many recent papers have described the issues associated with the use of data for both clustering and downstream testing, without proposing suitable solutions (Luecken & Theis 2019, Lähnemann et al. 2020, Deconinck et al. 2021). In fact, testing for a difference in means between a pair of estimated clusters while controlling the type I error rate has even been described as one of eleven “grand challenges” in the entire field of single-cell data science (Lähnemann et al. 2020). Similar issues arise in the field of neuroscience (Kriegeskorte et al. 2009).
In this paper, we develop a valid test for a difference in means between two clusters estimated from the data. We consider the following model for $n$ observations of $q$ features:
\[ \mathbf{X} \sim \mathcal{MN}_{n \times q}\left(\boldsymbol{\mu},\, \mathbf{I}_n,\, \sigma^2 \mathbf{I}_q\right), \qquad (1) \]
where $\boldsymbol{\mu} \in \mathbb{R}^{n \times q}$, with rows $\mu_1, \ldots, \mu_n$, is unknown, and $\sigma > 0$ is known. (We discuss the case where $\sigma$ is unknown in Section 4.3.) For any group of observations $\mathcal{G} \subseteq \{1, \ldots, n\}$, let
\[ \bar{\mu}_{\mathcal{G}} = \frac{1}{|\mathcal{G}|}\sum_{i \in \mathcal{G}} \mu_i, \qquad \bar{x}_{\mathcal{G}} = \frac{1}{|\mathcal{G}|}\sum_{i \in \mathcal{G}} x_i, \qquad (2) \]
which we refer to as the mean of $\mathcal{G}$ and the empirical mean of $\mathcal{G}$ in $\mathbf{x}$, respectively. Given a realization $\mathbf{x}$ of $\mathbf{X}$, we first apply a clustering algorithm $\mathcal{C}$ to obtain $\mathcal{C}(\mathbf{x})$, a partition of $\{1, \ldots, n\}$. We then use $\mathbf{x}$ to test, for a pair of clusters $\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}(\mathbf{x})$,
\[ H_0: \bar{\mu}_{\hat{\mathcal{C}}_1} = \bar{\mu}_{\hat{\mathcal{C}}_2} \quad \text{versus} \quad H_1: \bar{\mu}_{\hat{\mathcal{C}}_1} \neq \bar{\mu}_{\hat{\mathcal{C}}_2}. \qquad (3) \]
It is tempting to simply apply a Wald test of (3), with p-value given by
\[ \mathbb{P}\!\left( \left\|\bar{X}_{\hat{\mathcal{C}}_1} - \bar{X}_{\hat{\mathcal{C}}_2}\right\|_2 \geq \left\|\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\right\|_2 \right), \qquad (4) \]
where, under $H_0$, $\left\|\bar{X}_{\hat{\mathcal{C}}_1} - \bar{X}_{\hat{\mathcal{C}}_2}\right\|_2$ follows a $\sigma\sqrt{1/|\hat{\mathcal{C}}_1| + 1/|\hat{\mathcal{C}}_2|} \cdot \chi_q$ distribution. However, since we clustered to get $\hat{\mathcal{C}}_1$ and $\hat{\mathcal{C}}_2$, we will observe substantial differences between the empirical cluster means even when there is no signal in the data, as is shown in Figure 1(a). That is, in (4), the random variable on the left-hand side of the inequality follows a scaled $\chi_q$ distribution, but the right-hand side of the inequality is not drawn from a scaled $\chi_q$ distribution, because $\hat{\mathcal{C}}_1$ and $\hat{\mathcal{C}}_2$ are functions of $\mathbf{x}$. In short, the problem is that we used the data to select a null hypothesis to test. Since the Wald test does not account for this hypothesis selection procedure, it is extremely anti-conservative, as is shown in Figure 1(b).
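The inflation is easy to reproduce. Below is a minimal R simulation of our own (not the paper's code) with a single feature and no true clusters: the data are clustered into two groups, and a naive Wald/z-test of the difference in group means is applied to the same data. The nominal level is 0.05.

```r
# Minimal illustration (our own): type I error inflation when the same data are
# used both to define the clusters and to test for a difference in their means.
set.seed(1)
n <- 100; sigma <- 1; nsim <- 2000
pvals <- replicate(nsim, {
  x  <- rnorm(n, mean = 0, sd = sigma)           # no signal: all means equal
  hc <- hclust(dist(x)^2, method = "average")    # average linkage, squared Euclidean
  cl <- cutree(hc, k = 2)                        # estimate two clusters
  n1 <- sum(cl == 1); n2 <- sum(cl == 2)
  z  <- (mean(x[cl == 1]) - mean(x[cl == 2])) / (sigma * sqrt(1 / n1 + 1 / n2))
  2 * pnorm(-abs(z))                             # naive Wald/z p-value
})
mean(pvals < 0.05)   # far above the nominal 0.05, illustrating anti-conservativeness
```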

At first glance, it seems that we might be able to overcome this problem via sample splitting. That is, we divide the observations into a training and a test set, cluster the observations in the training set, and then assign the test set observations to those clusters, as in Figures 2(a)–(c). Then, we apply the Wald test in (4) to the test set. Unfortunately, by assigning test observations to clusters, we have once again used the data to select a null hypothesis to test, in the sense that the null hypothesis in (3) is a function of the test observations. Thus, the Wald test is extremely anti-conservative (Figure 2(d)). In other words, sample splitting does not provide a valid way to test the hypothesis in (3).

In this paper, we develop a selective inference framework to test for a difference in means after clustering. This framework exploits ideas from the recent literature on selective inference for regression and changepoint detection (Fithian et al. 2014, Loftus & Taylor 2015, Lee et al. 2016, Yang et al. 2016, Hyun et al. 2018, Jewell et al. 2019, Mehrizi & Chenouri 2021). The key idea is as follows: since we chose to test $H_0: \bar{\mu}_{\hat{\mathcal{C}}_1} = \bar{\mu}_{\hat{\mathcal{C}}_2}$ because $\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}(\mathbf{x})$, we can account for this hypothesis selection procedure by defining a p-value that conditions on the event $\left\{\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}(\mathbf{X})\right\}$. This yields a correctly-sized test, as seen in Figure 1(c).
A large body of work evaluates the statistical significance of a clustering by testing the goodness-of-fit of models under misspecification of the number of clusters (Chen et al. 2001, Liu et al. 2008, Chen et al. 2012, Maitra et al. 2012, Kimes et al. 2017) or by assessing the stability of estimated clusters (Suzuki & Shimodaira 2006). Most of these papers rely on bootstrap sampling or on asymptotic approximations to the null distribution. Our proposed framework avoids the need for resampling and provides exact finite-sample inference for the difference in means between a pair of estimated clusters, under the assumption that $\sigma$ in (1) is known. Chapter 3 of Campbell (2018) considers testing for a difference in means after convex clustering (Hocking et al. 2011), a relatively esoteric form of clustering. Our framework is particularly efficient when applied to hierarchical clustering, which is one of the most popular types of clustering across a number of fields. Zhang et al. (2019) propose splitting the data, clustering the training set, and applying these clusters to the test set, as illustrated in Figures 2(a)–(c). They develop a selective inference framework that yields valid p-values for a difference in the mean of a single feature between two clusters in the test set. Our framework avoids the need for sample splitting, and thereby allows inference on the set of clusters obtained from all (rather than a subset of) the data.
The rest of the paper is organized as follows. In Section 2, we develop a framework to test for a difference in means after clustering. We apply this framework to compute p-values for hierarchical clustering in Section 3. We describe extensions, simulation results, and applications to real data in Sections 4, 5, and 6. The discussion is in Section 7.
2 Selective inference for clustering
2.1 A test of no difference in means between two clusters
Let $\mathbf{x}$ be an arbitrary realization from (1), and let $\hat{\mathcal{C}}_1$ and $\hat{\mathcal{C}}_2$ be an arbitrary pair of clusters in $\mathcal{C}(\mathbf{x})$. Since we chose to test $H_0: \bar{\mu}_{\hat{\mathcal{C}}_1} = \bar{\mu}_{\hat{\mathcal{C}}_2}$ because $\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}(\mathbf{x})$, it is natural to define the p-value as a conditional version of (4),
\[ \mathbb{P}_{H_0}\!\left( \left\|\bar{X}_{\hat{\mathcal{C}}_1} - \bar{X}_{\hat{\mathcal{C}}_2}\right\|_2 \geq \left\|\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\right\|_2 \;\middle|\; \hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}(\mathbf{X}) \right). \qquad (5) \]
This amounts to asking, “Among all realizations of $\mathbf{X}$ that result in clusters $\hat{\mathcal{C}}_1$ and $\hat{\mathcal{C}}_2$, what proportion have a difference in cluster means at least as large as the difference in cluster means in our observed data set, when in truth $\bar{\mu}_{\hat{\mathcal{C}}_1} = \bar{\mu}_{\hat{\mathcal{C}}_2}$?” One can show that rejecting $H_0$ when (5) is below $\alpha$ controls the selective type I error rate (Fithian et al. 2014) at level $\alpha$.
Definition 1 (Selective type I error rate for clustering).
For any non-overlapping groups of observations $\mathcal{G}_1, \mathcal{G}_2 \subseteq \{1, \ldots, n\}$, let $H_0^{\{\mathcal{G}_1, \mathcal{G}_2\}}$ denote the null hypothesis that $\bar{\mu}_{\mathcal{G}_1} = \bar{\mu}_{\mathcal{G}_2}$. We say that a test of $H_0^{\{\mathcal{G}_1, \mathcal{G}_2\}}$ based on $\mathbf{X}$ controls the selective type I error rate for clustering at level $\alpha$ if
\[ \mathbb{P}_{H_0^{\{\mathcal{G}_1, \mathcal{G}_2\}}}\!\left( \text{reject } H_0^{\{\mathcal{G}_1, \mathcal{G}_2\}} \text{ at level } \alpha \;\middle|\; \mathcal{G}_1, \mathcal{G}_2 \in \mathcal{C}(\mathbf{X}) \right) \leq \alpha. \qquad (6) \]
That is, if $H_0^{\{\mathcal{G}_1, \mathcal{G}_2\}}$ is true, then the conditional probability of rejecting it based on $\mathbf{X}$ at level $\alpha$, given that $\mathcal{G}_1$ and $\mathcal{G}_2$ are clusters in $\mathcal{C}(\mathbf{X})$, is bounded by $\alpha$.
However, (5) cannot be calculated, since the conditional distribution of $\left\|\bar{X}_{\hat{\mathcal{C}}_1} - \bar{X}_{\hat{\mathcal{C}}_2}\right\|_2$ given $\left\{\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}(\mathbf{X})\right\}$ involves the unknown nuisance parameters $\boldsymbol{\pi}_{\nu}^{\perp}\boldsymbol{\mu}$, where $\boldsymbol{\pi}_{\nu}^{\perp} = \mathbf{I}_n - \nu\nu^{\top}/\|\nu\|_2^2$ projects onto the orthogonal complement of the vector $\nu \in \mathbb{R}^n$, and where
\[ \nu_i = \frac{\mathbb{1}\!\left(i \in \hat{\mathcal{C}}_1\right)}{|\hat{\mathcal{C}}_1|} - \frac{\mathbb{1}\!\left(i \in \hat{\mathcal{C}}_2\right)}{|\hat{\mathcal{C}}_2|}, \qquad i = 1, \ldots, n. \qquad (7) \]
In other words, it requires knowing aspects of $\boldsymbol{\mu}$ that are not known under the null. Instead, we will define the p-value for $H_0$ in (3) to be
\[ p = \mathbb{P}_{H_0}\!\left( \left\|\bar{X}_{\hat{\mathcal{C}}_1} - \bar{X}_{\hat{\mathcal{C}}_2}\right\|_2 \geq \left\|\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\right\|_2 \;\middle|\; \hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}(\mathbf{X}),\ \boldsymbol{\pi}_{\nu}^{\perp}\mathbf{X} = \boldsymbol{\pi}_{\nu}^{\perp}\mathbf{x},\ \mathrm{dir}\!\left(\bar{X}_{\hat{\mathcal{C}}_1} - \bar{X}_{\hat{\mathcal{C}}_2}\right) = \mathrm{dir}\!\left(\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\right) \right), \qquad (8) \]
where $\mathrm{dir}(\mathbf{u}) = \mathbf{u}/\|\mathbf{u}\|_2$ for $\mathbf{u} \neq \mathbf{0}$. The following result shows that conditioning on these additional events makes (8) computationally tractable by constraining the randomness in $\mathbf{X}$ to a scalar random variable, while maintaining control of the selective type I error rate.
Theorem 1.
For any realization $\mathbf{x}$ from (1) and for any non-overlapping groups of observations $\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \subseteq \{1, \ldots, n\}$,
\[ p = 1 - \mathbb{F}\!\left( \left\|\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\right\|_2;\ \sigma\sqrt{\tfrac{1}{|\hat{\mathcal{C}}_1|} + \tfrac{1}{|\hat{\mathcal{C}}_2|}},\ \mathcal{S} \right), \qquad (9) \]
where $p$ is defined in (8), $\mathbb{F}(t; c, \mathcal{S})$ denotes the cumulative distribution function of a $c \cdot \chi_q$ random variable truncated to the set $\mathcal{S}$, and
\[ \mathcal{S} = \left\{ \phi \geq 0 :\ \hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}\!\left( \boldsymbol{\pi}_{\nu}^{\perp}\mathbf{x} + \phi \, \frac{\nu}{\|\nu\|_2^2} \, \mathrm{dir}\!\left(\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\right)^{\!\top} \right) \right\}. \qquad (10) \]
Furthermore, if $H_0: \bar{\mu}_{\hat{\mathcal{C}}_1} = \bar{\mu}_{\hat{\mathcal{C}}_2}$ is true, then for all $\alpha \in [0, 1]$,
\[ \mathbb{P}_{H_0}\!\left( p \leq \alpha \;\middle|\; \hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}(\mathbf{X}) \right) = \alpha. \qquad (11) \]
That is, rejecting $H_0$ whenever $p$ is below $\alpha$ controls the selective type I error rate (Definition 1) at level $\alpha$.
We prove Theorem 1 in Appendix A.1. It follows from (9) that to compute the p-value in (8), it suffices to characterize the one-dimensional set
\[ \mathcal{S} = \left\{ \phi \geq 0 :\ \hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}\!\left(\mathbf{x}'(\phi)\right) \right\}, \qquad (12) \]
where $\mathcal{S}$ is defined in (10), and where
\[ \mathbf{x}'(\phi) = \boldsymbol{\pi}_{\nu}^{\perp}\mathbf{x} + \phi \, \frac{\nu}{\|\nu\|_2^2} \, \mathrm{dir}\!\left(\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\right)^{\!\top}, \qquad (13) \]
for $\nu$ defined in (7).
While the test based on (8) controls the selective type I error rate, the extra conditioning may lead to lower power than a test based on (5) (Lee et al. 2016, Jewell et al. 2019, Mehrizi & Chenouri 2021). However, (8) has a major advantage over (5): Theorem 1 reveals that computing (8) simply requires characterizing the set $\mathcal{S}$ in (12). This is the focus of Section 3.
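Once $\mathcal{S}$ is available (for instance as a union of intervals), the p-value in (9) is just a ratio of truncated scaled-$\chi_q$ tail masses. The following R helper is our own sketch of that calculation, not the paper's implementation; it assumes $\mathcal{S}$ is passed as a two-column matrix of interval endpoints.

```r
# Our own sketch: evaluate 1 - F(stat; c, S) in (9), i.e. the survival function of a
# (c * chi_q) random variable truncated to S, with S a union of disjoint intervals
# given as rows [lo, hi] of a 2-column matrix.
trunc_chi_pval <- function(stat, c, q, S) {
  # P(c * chi_q lies in [lo, hi]) via the chi-square CDF: P(chi_q <= t) = pchisq(t^2, q)
  mass <- function(lo, hi) pchisq((hi / c)^2, df = q) - pchisq((lo / c)^2, df = q)
  denom <- sum(mass(S[, 1], S[, 2]))
  lo <- pmax(S[, 1], stat); hi <- S[, 2]          # restrict S to [stat, Inf)
  num <- sum(ifelse(lo < hi, mass(lo, hi), 0))
  num / denom
}
# Example: q = 2 features, scale c = 0.5, S = [1, 3] U [5, Inf)
trunc_chi_pval(stat = 2.4, c = 0.5, q = 2, S = rbind(c(1, 3), c(5, Inf)))
```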
2.2 Interpreting $\mathbf{x}'(\phi)$ and $\mathcal{S}$
Since $\nu^{\top}\mathbf{x} = \left(\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\right)^{\!\top}$, where $\bar{x}_{\mathcal{G}}$ is defined in (2) and $\mathbf{x}'(\phi)$ is defined in (13), it follows that the $i$th row of $\mathbf{x}'(\phi)$ in (13) is
\[ \left[\mathbf{x}'(\phi)\right]_i = x_i + \left( \frac{\phi - \left\|\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\right\|_2}{\tfrac{1}{|\hat{\mathcal{C}}_1|} + \tfrac{1}{|\hat{\mathcal{C}}_2|}} \right) \nu_i \, \mathrm{dir}\!\left(\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\right). \qquad (14) \]
We can interpret $\mathbf{x}'(\phi)$ as a perturbed version of $\mathbf{x}$, where observations in clusters $\hat{\mathcal{C}}_1$ and $\hat{\mathcal{C}}_2$ have been “pulled apart” (if $\phi > \|\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\|_2$) or “pushed together” (if $\phi < \|\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\|_2$) in the direction of $\mathrm{dir}(\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2})$. Furthermore, $\mathcal{S}$ in (12) describes the set of non-negative $\phi$ for which applying the clustering algorithm to the perturbed data set $\mathbf{x}'(\phi)$ yields $\hat{\mathcal{C}}_1$ and $\hat{\mathcal{C}}_2$. To illustrate this interpretation, we apply average linkage hierarchical clustering to a realization from (1) to obtain three clusters. Figure 3(a)–(c) displays $\mathbf{x}'(\phi)$ for $\phi = \|\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\|_2$ (i.e. the observed data), along with $\mathbf{x}'(\phi)$ for a smaller and a larger value of $\phi$, respectively. The clusters are shown in blue and orange. In Figure 3(b), since $\phi < \|\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\|_2$, the blue and orange clusters have been “pushed together” so that there is no difference between their empirical means, and average linkage hierarchical clustering no longer estimates these clusters. By contrast, in Figure 3(c), the blue and orange clusters have been “pulled apart”, and average linkage hierarchical clustering still estimates these clusters. In this example, the value of $\phi$ used in Figure 3(b) lies outside $\mathcal{S}$, while the value used in Figure 3(c) lies in $\mathcal{S}$.
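The perturbation in (13)–(14) is simple to compute. The following R helper is our own illustration of it (names and interface are ours, not the paper's):

```r
# Our own sketch of x'(phi) in (13)-(14).
# x: n x q data matrix; C1, C2: index vectors of the two estimated clusters.
perturb_x <- function(x, C1, C2, phi) {
  diff <- colMeans(x[C1, , drop = FALSE]) - colMeans(x[C2, , drop = FALSE])
  stat <- sqrt(sum(diff^2))                     # ||xbar_C1 - xbar_C2||_2
  dir  <- diff / stat                           # dir(xbar_C1 - xbar_C2)
  nu   <- numeric(nrow(x))                      # the contrast vector in (7)
  nu[C1] <-  1 / length(C1)
  nu[C2] <- -1 / length(C2)
  # x'(phi) = x + ((phi - stat) / ||nu||_2^2) * nu %*% t(dir)
  x + ((phi - stat) / sum(nu^2)) * outer(nu, dir)
}
# Sanity check: at phi equal to the observed statistic, x'(phi) recovers x itself.
```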

3 Computing $\mathcal{S}$ for hierarchical clustering
We now consider computing the set $\mathcal{S}$ defined in (12) for clusters defined via hierarchical clustering. After reviewing hierarchical clustering (Section 3.1), we explicitly characterize $\mathcal{S}$ (Section 3.2), and then show how specific properties of the clustering procedure, such as the dissimilarity and linkage used, lead to substantial computational savings in computing $\mathcal{S}$ (Sections 3.3–3.4).
3.1 A brief review of agglomerative hierarchical clustering
Agglomerative hierarchical clustering produces a sequence of clusterings. The first clustering, $\mathcal{C}^{(1)}(\mathbf{x})$, contains $n$ clusters, each with a single observation. The $(t+1)$th clustering, $\mathcal{C}^{(t+1)}(\mathbf{x})$, is created by merging the two most similar (or least dissimilar) clusters in the $t$th clustering, for $t = 1, \ldots, n-1$. Details are provided in Algorithm 1.
Algorithm 1 (agglomerative hierarchical clustering). Let $\mathcal{C}^{(1)}(\mathbf{x}) = \{\{1\}, \{2\}, \ldots, \{n\}\}$. For $t = 1, \ldots, n-1$:
1. Define $\mathcal{W}^{(t)}(\mathbf{x}) = \left\{\mathcal{W}_1^{(t)}(\mathbf{x}), \mathcal{W}_2^{(t)}(\mathbf{x})\right\} = \operatorname*{arg\,min}_{\mathcal{G}, \mathcal{G}' \in \mathcal{C}^{(t)}(\mathbf{x}),\ \mathcal{G} \neq \mathcal{G}'} d(\mathcal{G}, \mathcal{G}'; \mathbf{x})$. (We assume throughout this paper that the minimizer is unique.)
2. Merge $\mathcal{W}_1^{(t)}(\mathbf{x})$ and $\mathcal{W}_2^{(t)}(\mathbf{x})$ at the height of $d\!\left(\mathcal{W}_1^{(t)}(\mathbf{x}), \mathcal{W}_2^{(t)}(\mathbf{x}); \mathbf{x}\right)$ in the dendrogram, and let $\mathcal{C}^{(t+1)}(\mathbf{x}) = \left(\mathcal{C}^{(t)}(\mathbf{x}) \setminus \left\{\mathcal{W}_1^{(t)}(\mathbf{x}), \mathcal{W}_2^{(t)}(\mathbf{x})\right\}\right) \cup \left\{\mathcal{W}_1^{(t)}(\mathbf{x}) \cup \mathcal{W}_2^{(t)}(\mathbf{x})\right\}$.
Algorithm 1 involves a function $d(\cdot, \cdot; \mathbf{x})$, which quantifies the dissimilarity between two groups of observations. We assume throughout this paper that the dissimilarity between the $i$th and $i'$th observations, $d(\{i\}, \{i'\}; \mathbf{x})$, depends on the data through $x_i - x_{i'}$ only. For example, we could define $d(\{i\}, \{i'\}; \mathbf{x}) = \|x_i - x_{i'}\|_2^2$, the squared Euclidean distance. When groups contain more than one observation, we extend the pairwise dissimilarity to the dissimilarity between groups of observations using the notion of linkage, to be discussed further in Section 3.3.
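For concreteness, here is a deliberately naive R implementation of Algorithm 1 with squared Euclidean dissimilarity and average linkage (our own illustration; in practice one would use stats::hclust or the fastcluster package, which are far faster):

```r
# Naive O(n^3) sketch of Algorithm 1 (our own code), squared Euclidean + average linkage.
naive_agglomerative <- function(x) {
  D2 <- as.matrix(dist(x))^2                   # pairwise squared Euclidean distances
  d  <- function(G, Gp) mean(D2[G, Gp])        # average linkage between two groups
  clusters <- as.list(seq_len(nrow(x)))        # C^(1): n singleton clusters
  merges <- vector("list", nrow(x) - 1)
  for (t in seq_len(nrow(x) - 1)) {
    best <- c(NA, NA); best_d <- Inf
    for (i in seq_along(clusters)) for (j in seq_len(i - 1)) {
      dij <- d(clusters[[i]], clusters[[j]])
      if (dij < best_d) { best_d <- dij; best <- c(i, j) }   # the "winning pair"
    }
    merges[[t]] <- list(winning_pair = clusters[best], height = best_d)
    clusters[[best[1]]] <- c(clusters[[best[1]]], clusters[[best[2]]])
    clusters <- clusters[-best[2]]             # merge the winning pair
  }
  merges                                       # one merge record per step t
}
x <- matrix(rnorm(20), 10, 2)
length(naive_agglomerative(x))                 # n - 1 = 9 merges
```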
3.2 An explicit characterization of $\mathcal{S}$ for hierarchical clustering
We saw in Sections 2.1–2.2 that to compute the p-value defined in (8), we must characterize the set $\mathcal{S}$ in (12), where $\mathbf{x}'(\phi)$ in (13) is a perturbed version of $\mathbf{x}$ in which observations in the clusters $\hat{\mathcal{C}}_1$ and $\hat{\mathcal{C}}_2$ have been “pulled apart” or “pushed together”. We do so now for hierarchical clustering.
Lemma 1.
Suppose that $\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}^{(n-K+1)}(\mathbf{x})$, i.e. we perform hierarchical clustering to obtain $K$ clusters. Then,
\[ d\!\left(\mathcal{W}_1^{(t)}(\mathbf{x}), \mathcal{W}_2^{(t)}(\mathbf{x});\, \mathbf{x}'(\phi)\right) = d\!\left(\mathcal{W}_1^{(t)}(\mathbf{x}), \mathcal{W}_2^{(t)}(\mathbf{x});\, \mathbf{x}\right) \quad \text{for all } \phi \geq 0 \text{ and } t = 1, \ldots, n-K, \qquad (15) \]
where $\mathcal{W}^{(t)}(\mathbf{x}) = \left\{\mathcal{W}_1^{(t)}(\mathbf{x}), \mathcal{W}_2^{(t)}(\mathbf{x})\right\}$ is the “winning pair” of clusters that merged at the $t$th step of the hierarchical clustering algorithm applied to $\mathbf{x}$. Furthermore, for any $\phi \geq 0$,
\[ \hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}^{(n-K+1)}\!\left(\mathbf{x}'(\phi)\right) \iff \mathcal{W}^{(t)}\!\left(\mathbf{x}'(\phi)\right) = \mathcal{W}^{(t)}(\mathbf{x}) \ \text{ for all } t = 1, \ldots, n-K. \qquad (16) \]
We prove Lemma 1 in Appendix A.2. The right-hand side of (16) says that the same merges occur in the first $n - K$ steps of the hierarchical clustering algorithm applied to $\mathbf{x}$ and to $\mathbf{x}'(\phi)$. To characterize the set of $\phi$ for which these merges occur in $\mathcal{C}(\mathbf{x}'(\phi))$, consider the set of all “losing pairs”, i.e. all cluster pairs that co-exist at some step but are not the “winning pair” in the first $n - K$ steps:
\[ \mathcal{L}(\mathbf{x}) = \left\{ \{\mathcal{G}, \mathcal{G}'\} :\ \mathcal{G}, \mathcal{G}' \in \mathcal{C}^{(t)}(\mathbf{x}),\ \{\mathcal{G}, \mathcal{G}'\} \neq \mathcal{W}^{(t)}(\mathbf{x}) \ \text{for some } 1 \leq t \leq n-K \right\}. \qquad (17) \]
Each pair $\{\mathcal{G}, \mathcal{G}'\} \in \mathcal{L}(\mathbf{x})$ has a “lifetime” that starts at the step at which both clusters have been created and ends at the last of the first $n - K$ steps at which both still belong to the clustering. By construction, each such pair is never the winning pair at any point in its lifetime, i.e. $d(\mathcal{G}, \mathcal{G}'; \mathbf{x}) > d\!\left(\mathcal{W}_1^{(t)}(\mathbf{x}), \mathcal{W}_2^{(t)}(\mathbf{x}); \mathbf{x}\right)$ for all steps $t$ in its lifetime. Therefore, $\mathbf{x}'(\phi)$ preserves the merges that occur in the first $n - K$ steps in $\mathbf{x}$ if and only if $d(\mathcal{G}, \mathcal{G}'; \mathbf{x}'(\phi)) > d\!\left(\mathcal{W}_1^{(t)}(\mathbf{x}), \mathcal{W}_2^{(t)}(\mathbf{x}); \mathbf{x}'(\phi)\right)$ for all $\{\mathcal{G}, \mathcal{G}'\} \in \mathcal{L}(\mathbf{x})$ and for all steps $t$ in the lifetime of $\{\mathcal{G}, \mathcal{G}'\}$. Furthermore, (15) says that $d\!\left(\mathcal{W}_1^{(t)}(\mathbf{x}), \mathcal{W}_2^{(t)}(\mathbf{x}); \mathbf{x}'(\phi)\right) = d\!\left(\mathcal{W}_1^{(t)}(\mathbf{x}), \mathcal{W}_2^{(t)}(\mathbf{x}); \mathbf{x}\right)$ for all $\phi \geq 0$ and all such $t$. This leads to the following result.
Theorem 2.
Suppose that $\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}^{(n-K+1)}(\mathbf{x})$, i.e. we perform hierarchical clustering to obtain $K$ clusters. Then, for $\mathcal{S}$ defined in (12),
(18) |
where $\mathcal{W}^{(t)}(\mathbf{x})$ is the pair of clusters that merged at the $t$th step of the hierarchical clustering algorithm applied to $\mathbf{x}$, $\mathcal{L}(\mathbf{x})$ is defined in (17) to be the set of “losing pairs” of clusters, and the lifetime of such a pair of clusters in $\mathbf{x}$ is as defined above. Furthermore, (18) is the intersection of finitely many sets, one for each losing pair.
We prove Theorem 2 in Appendix A.3. Theorem 2 expresses $\mathcal{S}$ in (12) as the intersection of sets of the form $\left\{\phi \geq 0 : d(\mathcal{G}, \mathcal{G}'; \mathbf{x}'(\phi)) \geq \hat{h}\right\}$, where
(19) |
$\hat{h}$ is the maximum merge height in the dendrogram of $\mathbf{x}$ over the lifetime of the pair $\{\mathcal{G}, \mathcal{G}'\}$. The next subsection is devoted to understanding when and how these sets can be efficiently computed. In particular, by specializing to squared Euclidean distance and a certain class of linkages, we will show that each of these sets is defined by a single quadratic inequality, and that the coefficients of all of these quadratic inequalities can be efficiently computed.
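To make the bookkeeping in Theorem 2 concrete, the following self-contained R sketch (our own code, not the paper's implementation) walks the first $n-K$ merges of an hclust fit and records, at each step, the winning pair, its merge height, and the clustering it was chosen from — the ingredients from which losing pairs and their lifetimes can be enumerated.

```r
# Our own bookkeeping sketch for Theorem 2: record the first n - K merges of an hclust fit.
cluster_path <- function(hc, K) {
  n <- length(hc$order)
  clusters <- as.list(seq_len(n))                      # C^(1): n singleton clusters
  path <- vector("list", n - K)
  for (t in seq_len(n - K)) {
    # members of the two clusters merged at step t (hclust convention: negative
    # entries are singletons, positive entries point to earlier merge steps)
    winning <- lapply(hc$merge[t, ], function(k) if (k < 0) -k else path[[k]]$merged)
    path[[t]] <- list(winning_pair = winning, height = hc$height[t],
                      merged = sort(unlist(winning)), clustering = clusters)
    # update the clustering: remove the winning pair, add their union
    keep <- !vapply(clusters,
                    function(cl) setequal(cl, winning[[1]]) || setequal(cl, winning[[2]]),
                    logical(1))
    clusters <- c(clusters[keep], list(sort(unlist(winning))))
  }
  path                                                 # one record per step t = 1, ..., n - K
}
hc <- hclust(dist(matrix(rnorm(20), 10, 2))^2, method = "average")
length(cluster_path(hc, K = 3))                        # n - K = 7 recorded merge steps
```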
3.3 Squared Euclidean distance and “linear update” linkages
Consider hierarchical clustering with squared Euclidean distance and a linkage that satisfies a linear Lance-Williams update (Lance & Williams 1967) of the form
\[ d\!\left(\mathcal{G}_1 \cup \mathcal{G}_2, \mathcal{G}_3; \mathbf{x}\right) = \alpha_1\, d(\mathcal{G}_1, \mathcal{G}_3; \mathbf{x}) + \alpha_2\, d(\mathcal{G}_2, \mathcal{G}_3; \mathbf{x}) + \beta\, d(\mathcal{G}_1, \mathcal{G}_2; \mathbf{x}), \qquad (20) \]
where the coefficients $\alpha_1, \alpha_2, \beta$ depend only on the sizes of $\mathcal{G}_1$, $\mathcal{G}_2$, and $\mathcal{G}_3$.
This includes average, weighted, Ward, centroid, and median linkage (Table 1).
| | Average | Weighted | Ward | Centroid | Median | Single | Complete |
|---|---|---|---|---|---|---|---|
| Satisfies (20) | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ |
| Dendrogram guaranteed to have no inversions | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ |
We have seen in Section 3.2 that to evaluate (18), we must evaluate sets of the form $\left\{\phi \geq 0 : d(\mathcal{G}, \mathcal{G}'; \mathbf{x}'(\phi)) \geq \hat{h}\right\}$ with $\{\mathcal{G}, \mathcal{G}'\} \in \mathcal{L}(\mathbf{x})$, where $\mathcal{L}(\mathbf{x})$ in (17) is the set of losing cluster pairs. We now present results needed to characterize these sets.
Lemma 2.
Suppose that we define . Then, for all , ,
where for defined in (13), ,
, and .
Lemma 2 follows directly from the definition of $\mathbf{x}'(\phi)$ in (13), and does not require (20) to hold. Next, we specialize to squared Euclidean distance and linkages satisfying (20), and characterize the dissimilarity between pairs of clusters in $\mathbf{x}'(\phi)$. The following result follows immediately from Lemma 2 and the fact that linear combinations of quadratic functions of $\phi$ are also quadratic functions of $\phi$.
Proposition 1.
Lastly, we characterize the cost of computing in (19). Naively, computing could require operations. However, if the dendrogram of has no inversions below the th merge, i.e. if for all , then . More generally, where is the set of steps where inversions occur in the dendrogram of during the lifetime of the cluster pair . This leads to the following result.
Proposition 2.
For any , given its lifetime and , and given , i.e. the set of steps where inversions occur in the dendrogram of below the th merge, we can compute in time.
We prove Proposition 2 in Appendix A.4. Proposition 2 does not require defining and does not require (20) to hold. We now characterize the cost of computing defined in (12), in the case of squared Euclidean distance and linkages that satisfy (20).
Proposition 3.
3.4 Squared Euclidean distance and single linkage
Single linkage does not satisfy (20) (Table 1), and the inequality that defines the set $\left\{\phi \geq 0 : d(\mathcal{G}, \mathcal{G}'; \mathbf{x}'(\phi)) \geq \hat{h}\right\}$ is not quadratic in $\phi$ when $\mathcal{G}$ or $\mathcal{G}'$ contains more than one observation. Consequently, in the case of single linkage with squared Euclidean distance, we cannot efficiently evaluate the expression for $\mathcal{S}$ in (18) using the approach outlined in Section 3.3.
Fortunately, the definition of single linkage leads to an even simpler expression for $\mathcal{S}$ than (18). Single linkage defines $d(\mathcal{G}, \mathcal{G}'; \mathbf{x}) = \min_{i \in \mathcal{G},\, i' \in \mathcal{G}'} d(\{i\}, \{i'\}; \mathbf{x})$. Applying this definition to (18) yields
where in (17) is the set of losing cluster pairs in . Therefore, in the case of single linkage,
where and
The following result characterizes the sets of the form .
Proposition 4.
Suppose that , i.e. we perform hierarchical clustering to obtain clusters. Let . If or or , then . Otherwise,
We prove Proposition 4 in Appendix A.5. Therefore,
(21) |
where the thresholds are determined by the maximum merge heights described above. Recall from Lemma 2 that, in the case of squared Euclidean distance, each pairwise dissimilarity in $\mathbf{x}'(\phi)$ is a quadratic function of $\phi$ whose coefficients can be computed efficiently. Furthermore, finitely many sets are intersected in (21). Therefore, we can evaluate (21) by solving quadratic inequalities in $\phi$ and intersecting their solution sets.
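The two generic operations used here — solving a quadratic inequality over $\phi \geq 0$ and intersecting unions of intervals — are straightforward to implement. The following R helpers are our own sketch (they ignore measure-zero touch points via a small tolerance) and are not the paper's implementation:

```r
# Our own helpers: the solution set of {phi >= 0 : a*phi^2 + b*phi + c >= 0} as a union
# of intervals (rows [lo, hi] of a 2-column matrix), and the intersection of two such unions.
solve_quad_geq <- function(a, b, c, tol = 1e-12) {
  if (abs(a) < tol && abs(b) < tol) {                  # constant function
    ints <- if (c >= 0) rbind(c(-Inf, Inf)) else matrix(numeric(0), 0, 2)
  } else if (abs(a) < tol) {                           # linear function
    ints <- if (b > 0) rbind(c(-c / b, Inf)) else rbind(c(-Inf, -c / b))
  } else {
    disc <- b^2 - 4 * a * c
    if (disc <= 0) {
      ints <- if (a > 0) rbind(c(-Inf, Inf)) else matrix(numeric(0), 0, 2)
    } else {
      r <- sort((-b + c(-1, 1) * sqrt(disc)) / (2 * a))
      ints <- if (a > 0) rbind(c(-Inf, r[1]), c(r[2], Inf)) else rbind(r)
    }
  }
  ints[, 1] <- pmax(ints[, 1], 0)                      # clip to phi >= 0
  ints[ints[, 1] < ints[, 2], , drop = FALSE]          # drop empty pieces
}
intersect_unions <- function(A, B) {
  out <- matrix(numeric(0), 0, 2)
  for (i in seq_len(nrow(A))) for (j in seq_len(nrow(B))) {
    lo <- max(A[i, 1], B[j, 1]); hi <- min(A[i, 2], B[j, 2])
    if (lo < hi) out <- rbind(out, c(lo, hi))
  }
  out
}
S1 <- solve_quad_geq(1, -8, 12)    # {phi >= 0 : (phi - 2)(phi - 6) >= 0} = [0, 2] U [6, Inf)
S2 <- solve_quad_geq(-1, 10, -9)   # {phi >= 0 : -(phi - 1)(phi - 9) >= 0} = [1, 9]
intersect_unions(S1, S2)           # [1, 2] U [6, 9]
```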
4 Extensions
4.1 Monte Carlo approximation to the p-value
We may be interested in computing the p-value defined in (8) for clustering methods where $\mathcal{S}$ in (12) cannot be efficiently computed (e.g. complete linkage hierarchical clustering, or non-hierarchical clustering methods). Thus, we develop a Monte Carlo approximation to the p-value that does not require us to compute $\mathcal{S}$. Recalling from (12) that $\mathcal{S}$ is the set of non-negative $\phi$ for which $\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}(\mathbf{x}'(\phi))$, it follows from Theorem 1 that
(22) |
for $\phi \sim \sigma\sqrt{1/|\hat{\mathcal{C}}_1| + 1/|\hat{\mathcal{C}}_2|} \cdot \chi_q$, and for $\mathbf{x}'(\phi)$ defined in (13). Thus, we could naively sample $\phi_1, \ldots, \phi_M$ from this distribution, and approximate the expectations in (22) with averages over the samples. However, when the observed test statistic is in the tail of this distribution, the naive approximation of the expectations in (22) is poor for finite values of $M$. Instead, we use an importance sampling approach, as in Yang et al. (2016). We sample $\omega_1, \ldots, \omega_M$ from a proposal distribution centred at the observed test statistic, and approximate (22) with a weighted average,
with weights given by the ratio of the density of the $\sigma\sqrt{1/|\hat{\mathcal{C}}_1| + 1/|\hat{\mathcal{C}}_2|} \cdot \chi_q$ random variable to the density of the proposal distribution, evaluated at each $\omega_j$.
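The sketch below is our own rendering of this importance-sampling approximation. The choice of a normal proposal centred at the observed statistic, and the re-clustering check of the event $\{\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}(\mathbf{x}'(\omega))\}$, are assumptions made here for illustration; the paper's exact choices may differ.

```r
# Our own sketch of the Monte Carlo approximation in Section 4.1 (assumed details noted above).
mc_pvalue <- function(x, C1, C2, sigma, K, M = 2000, linkage = "complete") {
  diff <- colMeans(x[C1, , drop = FALSE]) - colMeans(x[C2, , drop = FALSE])
  stat <- sqrt(sum(diff^2)); dir <- diff / stat
  nu <- numeric(nrow(x)); nu[C1] <- 1 / length(C1); nu[C2] <- -1 / length(C2)
  cc <- sigma * sqrt(sum(nu^2)); q <- ncol(x)
  perturb <- function(phi) x + ((phi - stat) / sum(nu^2)) * outer(nu, dir)   # x'(phi), as in (13)
  same_clusters <- function(xp) {           # do C1 and C2 survive re-clustering of x'(phi)?
    cl <- cutree(hclust(dist(xp)^2, method = linkage), k = K)
    any(sapply(unique(cl), function(g) setequal(which(cl == g), C1))) &&
      any(sapply(unique(cl), function(g) setequal(which(cl == g), C2)))
  }
  omega <- rnorm(M, mean = stat, sd = cc)   # proposal: N(stat, cc^2) (an assumption)
  dchi  <- function(t) ifelse(t > 0, dchisq((t / cc)^2, df = q) * 2 * t / cc^2, 0)
  w    <- dchi(omega) / dnorm(omega, mean = stat, sd = cc)   # importance weights
  keep <- vapply(omega, function(om) same_clusters(perturb(om)), logical(1))
  sum(w * keep * (omega >= stat)) / sum(w * keep)
}
```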
4.2 Non-spherical covariance matrix
The selective inference framework in Section 2 assumes that $\mathbf{X}$ is a realization from (1), so that the features are independent with common variance $\sigma^2$. In this subsection, we instead assume that $\mathbf{X}$ is a realization from
\[ \mathbf{X} \sim \mathcal{MN}_{n \times q}\left(\boldsymbol{\mu},\, \mathbf{I}_n,\, \boldsymbol{\Sigma}\right), \qquad (23) \]
where $\boldsymbol{\Sigma} \in \mathbb{R}^{q \times q}$ is a known positive definite matrix. We define the p-value of interest analogously to (8), with the test statistic and the conditioning modified to account for $\boldsymbol{\Sigma}$.
Theorem 3.
4.3 Unknown variance
The selective inference framework in Section 2 assumes that $\sigma$ in model (1) is known. If $\sigma$ is unknown, then we can plug an estimate $\hat{\sigma}$ of $\sigma$ into (9), as follows:
\[ \hat{p} = 1 - \mathbb{F}\!\left( \left\|\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\right\|_2;\ \hat{\sigma}\sqrt{\tfrac{1}{|\hat{\mathcal{C}}_1|} + \tfrac{1}{|\hat{\mathcal{C}}_2|}},\ \mathcal{S} \right). \qquad (25) \]
Intuitively, if we plug in a consistent estimate of $\sigma$, we might expect to asymptotically control the selective type I error rate. For example, this is true in the context of selective inference for low-dimensional linear regression (Tian & Taylor 2017), where the error variance can be consistently estimated under mild assumptions.
Unfortunately, consistently estimating $\sigma$ in (1) presents a major challenge. Similar issues arise when estimating the error variance in high-dimensional linear regression; see e.g. Reid et al. (2016). Thus, we adopt a similar approach to Tibshirani et al. (2018), which studied the theoretical implications of plugging in a simple over-estimate of the error variance in the context of selective inference for high-dimensional linear regression. In the following result, we show that plugging in an asymptotic over-estimate of $\sigma$ in (25) leads to asymptotic control of the selective type I error rate (Definition 1).
Theorem 4.
For , suppose that . Let be a realization of , and and be a pair of clusters estimated from . Suppose that . Then, for any , we have that .
4.4 Consequences of selective type I error rate control
This paper focuses on developing tests of $H_0^{\{\mathcal{G}_1, \mathcal{G}_2\}}$ that control the selective type I error rate (Definition 1). However, it is cumbersome to demonstrate selective type I error rate control via simulation, as the probability of the conditioning event $\{\mathcal{G}_1, \mathcal{G}_2 \in \mathcal{C}(\mathbf{X})\}$ can be small when $H_0^{\{\mathcal{G}_1, \mathcal{G}_2\}}$ holds.
Nevertheless, two related properties can be demonstrated via simulation. Let $\mathcal{H}_0(\mathbf{X})$ denote the set of null hypotheses of the form $H_0^{\{\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2\}}$, for $\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}(\mathbf{X})$, that are true. The following property holds.
Proposition 5.
When $K = 2$, i.e. the clustering algorithm estimates two clusters,
(26) |
where $p$ is defined in (8). That is, if the two estimated clusters have the same mean, then the probability of falsely declaring otherwise is equal to $\alpha$.
We prove Proposition 5 in Appendix A.7. What if $K > 2$? Then, given a data set $\mathbf{x}$, we can randomly select (independently of $\mathbf{x}$) a single pair of estimated clusters $\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2 \in \mathcal{C}(\mathbf{x})$. This leads to the following property.
Proposition 6.
Suppose that $K > 2$, i.e. the clustering algorithm estimates three or more clusters, and let $\{\hat{\mathcal{C}}_1, \hat{\mathcal{C}}_2\}$ denote a randomly selected pair. If the pair selection and $\mathbf{X}$ are conditionally independent given $\mathcal{C}(\mathbf{X})$, then for $p$ defined in (8),
(27) |
5 Simulation results
Throughout this section, we use the efficient implementation of hierarchical clustering in the fastcluster package (Müllner et al. 2013) in R.
5.1 Uniform p-values under a global null
We generate data from (1) with $\boldsymbol{\mu} = \mathbf{0}_{n \times q}$, so that $H_0$ holds for all pairs of estimated clusters. We simulate 2000 data sets for several values of $n$, $q$, and $\sigma$. For each data set, we use average, centroid, single, and complete linkage hierarchical clustering to estimate three clusters, and then test $H_0$ for a randomly-chosen pair of clusters. We compute the p-value defined in (8) as described in Section 3 for average, centroid, and single linkage. For complete linkage, we approximate the p-value as described in Section 4.1. Figure 4 displays QQ plots of the empirical distribution of the p-values against the Uniform(0, 1) distribution. The p-values have a Uniform(0, 1) distribution because our proposed test satisfies (27) and because, under the global null, every pair of estimated clusters has equal means; see the end of Section 4.4 for a detailed discussion. In Appendix D.1, we show that plugging in an over-estimate of $\sigma$ as in (25) yields p-values that are stochastically larger than the Uniform(0, 1) distribution.
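The uniformity check itself is simple. In the R sketch below (our own), the vector `pvals` stands in for the selective p-values computed across the null data sets; here it is replaced by a Uniform draw purely so that the snippet runs.

```r
# Our own sketch of the QQ check against Uniform(0,1). In practice, `pvals` would hold
# the selective p-values from the 2000 simulated null data sets; runif() is a stand-in.
pvals <- runif(2000)
qqplot(qunif(ppoints(length(pvals))), sort(pvals),
       xlab = "Uniform(0,1) quantiles", ylab = "Empirical p-value quantiles")
abline(0, 1, lty = 2)   # points along the 45-degree line indicate uniformity
```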

5.2 Conditional power and recovery probability
We generate data from (1) with three equidistant true clusters, with distance $\delta$ between each pair of cluster means:
(28) |
for $i = 1, \ldots, n$. For each simulated data set, we use average, centroid, single, and complete linkage hierarchical clustering to estimate three clusters, and then test $H_0$ for a randomly-chosen pair of clusters, with significance level $\alpha$. We simulate data sets for several values of $n$ and $q$, and seven evenly-spaced values of $\delta$. We define the conditional power to be the conditional probability of rejecting $H_0$ for a randomly-chosen pair of estimated clusters, given that the randomly-chosen pair of estimated clusters corresponds to two true clusters. Since this conditions on the event that the randomly-chosen pair of estimated clusters corresponds to two true clusters, we are also interested in how often this event occurs. We therefore consider the recovery probability, the probability that the randomly-chosen pair of estimated clusters corresponds to two true clusters. Figure 5 displays the conditional power and recovery probability as a function of the distance ($\delta$) between the true clusters.
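One way to generate such data in R is sketched below; this is our own construction of three equidistant cluster means, and the exact placement used in (28) may differ.

```r
# Our own sketch: three true clusters whose means are pairwise delta apart,
# embedded in the first two of q coordinates (the placement in (28) may differ).
make_three_clusters <- function(n, q, delta, sigma = 1) {
  centers <- rbind(c(0, 0), c(delta, 0), c(delta / 2, delta * sqrt(3) / 2))
  centers <- cbind(centers, matrix(0, 3, q - 2))       # assumes q >= 2
  labels  <- rep(1:3, length.out = n)
  x <- centers[labels, , drop = FALSE] + matrix(rnorm(n * q, sd = sigma), n, q)
  list(x = x, labels = labels)
}
dat <- make_three_clusters(n = 30, q = 10, delta = 6)
# cross-tabulate estimated vs true clusters to see whether the true clusters are recovered
table(cutree(hclust(dist(dat$x)^2, method = "average"), k = 3), dat$labels)
```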

For all four linkages, the conditional power and recovery probability increase as the distance between the true clusters increases. Average and complete linkage have the highest conditional power, and single linkage has the lowest conditional power. Average, centroid, and complete linkage have substantially higher recovery probabilities than single linkage.
We consider an alternative definition of power that does not condition on having correctly estimated the true clusters in Appendix D.2.
6 Data applications
6.1 Palmer penguins (Horst et al. 2020)
In this section, we analyze the penguins
data set from the palmerpenguins
package in R
(Horst et al. 2020). We estimate with for , calculated on
the bill length and flipper length of 58 female penguins observed in the year 2009. We then consider the 107 female penguins observed in the years 2007–2008 with complete data on species, bill length, and flipper length.
Figure 6(a) displays the dendrogram obtained from applying average linkage hierarchical clustering with squared Euclidean distance to the penguins’ bill length and flipper length, cut to yield five clusters, and Figure 6(b) displays the data.

We test $H_0$ for all pairs of clusters that contain more than one observation, using the test proposed in Section 2.1 and the Wald test described in (4). (The latter does not account for the fact that the clusters were estimated from the data, and does not control the selective type I error rate.) Results are in Table 2. The Wald p-values are small, even when testing for a difference in means between two clusters drawn from a single species (Clusters 1 and 2). Our proposed test yields a large p-value when testing for a difference in means between a single species (Clusters 1 and 2), and small p-values when the clusters correspond to different species (Clusters 1 and 3, and Clusters 3 and 4). The p-values from our proposed test are large for the remaining three pairs of clusters containing different species, even though visual inspection suggests a large difference between the clusters in each pair. This is because the test statistic is close to the left boundary of the set $\mathcal{S}$ defined in (12), which leads to low power; see the discussion of Figure D.3 in Appendix D.2.
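A sketch of this pipeline in R is given below (our own code). The final call to the clusterpval package is left as a comment because we treat its exact function name and arguments as an assumption rather than a documented fact; `sigma_hat` denotes the estimate of $\sigma$ obtained from the 2009 penguins as described above.

```r
# Our own sketch of the Section 6.1 analysis.
library(palmerpenguins)
dat <- subset(penguins, sex == "female" & year %in% c(2007, 2008) &
                !is.na(bill_length_mm) & !is.na(flipper_length_mm))
x  <- as.matrix(dat[, c("bill_length_mm", "flipper_length_mm")])
hc <- hclust(dist(x)^2, method = "average")     # average linkage, squared Euclidean
cl <- cutree(hc, k = 5)                         # five clusters, as in Figure 6
table(cl, dat$species)                          # how the clusters line up with species
# Hypothetical call to the selective test for clusters 1 and 2 (interface assumed, not verified):
# clusterpval::test_hier_clusters_exact(X = x, link = "average", hcl = hc, K = 5,
#                                       k1 = 1, k2 = 2, sig = sigma_hat)
```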
6.2 Single-cell RNA sequencing data (Zheng et al. 2017)
Single-cell RNA sequencing data quantifies the gene expression levels of individual cells. Biologists often cluster the cells to identify putative cell types, and then test for differential gene expression between the clusters, without properly accounting for the fact that the clusters were estimated from the data (Luecken & Theis 2019, Lähnemann et al. 2020, Deconinck et al. 2021). Zheng et al. (2017) classified peripheral blood mononuclear cells prior to sequencing. We will use this data set to demonstrate that testing for differential gene expression after clustering with our proposed selective inference framework yields reasonable results.
6.2.1 Data and pre-processing
We subset the data to the memory T cells, B cells, and monocytes. In line with standard pre-processing protocols (Duò et al. 2018), we remove cells with a high mitochondrial gene percentage, cells with a low or high number of expressed genes, and cells with a low number of total counts. Then, we scale the data so that the total number of counts for each cell equals the average total count across all cells. Finally, we apply a log transformation with a pseudo-count of 1, and subset to the 500 genes with the largest pre-normalization average expression levels. We separately apply this pre-processing routine to the memory T cells only, and to all of the cells. After pre-processing, we construct a “no clusters” data set by randomly sampling 600 memory T cells, and a “clusters” data set by randomly sampling 200 each of memory T cells, B cells, and monocytes.
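The normalization, log transformation, and gene-selection steps can be written in a few lines of R, as in the sketch below (our own illustration on a toy count matrix; the cell-level quality-control filters and the thresholds described above are omitted or placeholders here).

```r
# Our own sketch of the normalization / log-transform / gene-selection steps in Section 6.2.1,
# applied to a generic cell-by-gene count matrix; QC filters are omitted for brevity.
preprocess <- function(counts, n_genes = 500) {
  top      <- order(colMeans(counts), decreasing = TRUE)[seq_len(n_genes)]  # pre-normalization means
  lib_size <- rowSums(counts)
  scaled   <- counts * mean(lib_size) / lib_size   # scale each cell to the average total count
  x        <- log(scaled + 1)                      # log transform with pseudo-count of 1
  x[, top, drop = FALSE]                           # keep the n_genes most-expressed genes
}
counts <- matrix(rpois(200 * 1000, lambda = 2), nrow = 200)   # toy counts: 200 cells x 1000 genes
dim(preprocess(counts))
```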
6.2.2 Data analysis
We apply Ward-linkage hierarchical clustering with squared Euclidean distance to the “no clusters” data to obtain three clusters. For each pair of clusters, we test $H_0$ using (i) the test proposed in Section 4.2 under model (23), and (ii) the Wald test under model (23), which has p-value
(29) |
For both tests, we estimate $\boldsymbol{\Sigma}$ by applying the principal orthogonal complement thresholding (“POET”, Fan et al. 2013) method to the 9,303 memory T cells left out of the “no clusters” data set. Results are in Table 3. The p-values from our test are large, and the Wald p-values are small. Recall that all of the cells are memory T cells, and so (as far as we know) there are no true clusters in the data.
| | “No clusters” | | | “Clusters” | | |
|---|---|---|---|---|---|---|
| Cluster pairs | (1, 2) | (1, 3) | (2, 3) | (1, 2) | (1, 3) | (2, 3) |
| Test statistic | 4.05 | 4.76 | 2.96 | 3.04 | 4.27 | 4.38 |
| Our p-value | 0.20 | 0.27 | 0.70 | | | |
| Wald p-value | | | | | | |
We now apply the same analysis to the “clusters” data. Ward-linkage hierarchical clustering with squared Euclidean distance results in three clusters that almost exactly correspond to memory T cells, B cells, and monocytes. For both tests, we estimate $\boldsymbol{\Sigma}$ by applying the POET method to the 21,757 memory T cells, B cells, and monocytes left out of the “clusters” data set. Results are in Table 3. The p-values from both tests are extremely small. This suggests that our proposed approach is able to correctly reject the null hypothesis when it does not hold.
7 Discussion
In this paper, we proposed a selective inference framework for testing the null hypothesis that there is no difference in means between two estimated clusters, under (1). This directly solves a problem that routinely occurs when biologists analyze data, e.g. when testing for differential expression between clusters estimated from single-cell RNA-sequencing data (Luecken & Theis 2019, Lähnemann et al. 2020, Deconinck et al. 2021).
The framework proposed in Section 2 assumed that $\sigma$ in (1) was known. Similarly, the extended framework in Section 4.2 assumed that $\boldsymbol{\Sigma}$ in (23) was known. These assumptions are unlikely to hold in practice. Thus, in the data applications of Sections 6.1–6.2, we replaced $\sigma$ in (1) and $\boldsymbol{\Sigma}$ in (23) with estimates. In Section 4.3, we explored the theoretical implications of this replacement, under model (1). Future work could include investigating how best to estimate $\sigma$ in (1) and $\boldsymbol{\Sigma}$ in (23). Another potential avenue for future work would be to develop an analogue of Theorem 4 in Section 4.3 under model (23).
The tests developed in this paper are implemented in the R
package clusterpval
. Instructions on how to download and use this package can be found at
http://lucylgao.com/clusterpval
. Links to download the data sets in Section 6 can be found at https://github.com/lucylgao/clusterpval-experiments
, along with code to reproduce the simulation and real data analysis results from this paper.
Acknowledgments
Lucy L. Gao was supported by the NSERC Discovery Grants program. Daniela Witten and Jacob Bien were supported by NIH Grant R01-GM123993. Jacob Bien was supported by NSF CAREER Award DMS-1653017. Daniela Witten was supported by NSF CAREER Award DMS-1252624, NIH Grants R01-EB026908 and R01-DA047869, and a Simons Investigator Award in Mathematical Modeling of Living Systems (No. 560585).
Conflict of interest statement
The authors have no relevant financial or non-financial competing interests to report.
References
- Bourgon (2009) Bourgon, R. (2009), Overview of the intervals package. R Vignette, URL https://cran.r-project.org/web/packages/intervals/vignettes/intervals_overview.pdf.
- Butler et al. (2018) Butler, A., Hoffman, P., Smibert, P., Papalexi, E. & Satija, R. (2018), ‘Integrating single-cell transcriptomic data across different conditions, technologies, and species’, Nature Biotechnology 36(5), 411–420.
- Campbell (2018) Campbell, F. (2018), Statistical Machine Learning Methodology and Inference for Structured Variable Selection, PhD thesis, Rice University.
- Chen et al. (2001) Chen, H., Chen, J. & Kalbfleisch, J. D. (2001), ‘A modified likelihood ratio test for homogeneity in finite mixture models’, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 63(1), 19–29.
- Chen et al. (2012) Chen, J., Li, P. & Fu, Y. (2012), ‘Inference on the order of a normal mixture’, Journal of the American Statistical Association 107(499), 1096–1105.
- Chen & Bien (2020) Chen, S. & Bien, J. (2020), ‘Valid inference corrected for outlier removal’, Journal of Computational and Graphical Statistics 29(2), 323–334.
- Deconinck et al. (2021) Deconinck, L., Cannoodt, R., Saelens, W., Deplancke, B. & Saeys, Y. (2021), ‘Recent advances in trajectory inference from single-cell ’omics data’, Current Opinion in Systems Biology 27, 100344.
- Duò et al. (2018) Duò, A., Robinson, M. D. & Soneson, C. (2018), ‘A systematic performance evaluation of clustering methods for single-cell RNA-seq data’, F1000Research 7.
- Fan et al. (2013) Fan, J., Liao, Y. & Mincheva, M. (2013), ‘Large covariance estimation by thresholding principal orthogonal complements’, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 75(4), 603–680.
- Fithian et al. (2014) Fithian, W., Sun, D. & Taylor, J. (2014), ‘Optimal inference after model selection’, arXiv preprint arXiv:1410.2597 .
- Hao et al. (2021) Hao, Y., Hao, S., Andersen-Nissen, E., Mauck III, W. M., Zheng, S., Butler, A., Lee, M. J., Wilk, A. J., Darby, C., Zager, M. et al. (2021), ‘Integrated analysis of multimodal single-cell data’, Cell 184(13), 3573–3587.
- Hocking et al. (2011) Hocking, T. D., Joulin, A., Bach, F. & Vert, J.-P. (2011), Clusterpath: An algorithm for clustering using convex fusion penalties, in ‘Proceedings of the 28th International Conference on Machine Learning’, pp. 1–8.
- Horst et al. (2020) Horst, A. M., Hill, A. P. & Gorman, K. B. (2020), palmerpenguins: Palmer Archipelago (Antarctica) penguin data. R package version 0.1.0, available on CRAN.
- Hwang et al. (2018) Hwang, B., Lee, J. H. & Bang, D. (2018), ‘Single-cell RNA sequencing technologies and bioinformatics pipelines’, Experimental & Molecular Medicine 50(8), 1–14.
- Hyun et al. (2018) Hyun, S., G’Sell, M., Tibshirani, R. J. et al. (2018), ‘Exact post-selection inference for the generalized lasso path’, Electronic Journal of Statistics 12(1), 1053–1097.
- Jewell et al. (2019) Jewell, S., Fearnhead, P. & Witten, D. (2019), ‘Testing for a change in mean after changepoint detection’, arXiv preprint arXiv:1910.04291 .
- Kimes et al. (2017) Kimes, P. K., Liu, Y., Neil Hayes, D. & Marron, J. S. (2017), ‘Statistical significance for hierarchical clustering’, Biometrics 73(3), 811–821.
- Kivaranovic & Leeb (2020) Kivaranovic, D. & Leeb, H. (2020), On the length of post-model-selection confidence intervals conditional on polyhedral constraints. To appear in Journal of the American Statistical Association.
- Kriegeskorte et al. (2009) Kriegeskorte, N., Simmons, W. K., Bellgowan, P. S. & Baker, C. I. (2009), ‘Circular analysis in systems neuroscience: the dangers of double dipping’, Nature Neuroscience 12(5), 535.
- Lähnemann et al. (2020) Lähnemann, D., Köster, J., Szczurek, E., McCarthy, D. J., Hicks, S. C., Robinson, M. D., Vallejos, C. A., Campbell, K. R., Beerenwinkel, N., Mahfouz, A. et al. (2020), ‘Eleven grand challenges in single-cell data science’, Genome Biology 21(1), 1–35.
- Lance & Williams (1967) Lance, G. N. & Williams, W. T. (1967), ‘A general theory of classificatory sorting strategies: 1. hierarchical systems’, The Computer Journal 9(4), 373–380.
- Lee et al. (2016) Lee, J. D., Sun, D. L., Sun, Y., Taylor, J. E. et al. (2016), ‘Exact post-selection inference, with application to the lasso’, The Annals of Statistics 44(3), 907–927.
- Liu et al. (2008) Liu, Y., Hayes, D. N., Nobel, A. & Marron, J. S. (2008), ‘Statistical significance of clustering for high-dimension, low–sample size data’, Journal of the American Statistical Association 103(483), 1281–1293.
- Loftus & Taylor (2015) Loftus, J. R. & Taylor, J. E. (2015), ‘Selective inference in regression models with groups of variables’, arXiv preprint arXiv:1511.01478 .
- Luecken & Theis (2019) Luecken, M. D. & Theis, F. J. (2019), ‘Current best practices in single-cell RNA-seq analysis: a tutorial’, Molecular Systems Biology 15(6), e8746.
- Maitra et al. (2012) Maitra, R., Melnykov, V. & Lahiri, S. N. (2012), ‘Bootstrapping for significance of compact clusters in multi-dimensional datasets’, Journal of the American Statistical Association 107(497), 378–392.
- Mehrizi & Chenouri (2021) Mehrizi, R. V. & Chenouri, S. (2021), ‘Valid post-detection inference for change points identified using trend filtering’, arXiv preprint arXiv:2104.12022 .
- Müllner et al. (2013) Müllner, D. et al. (2013), ‘fastcluster: Fast hierarchical, agglomerative clustering routines for R and Python’, Journal of Statistical Software 53(9), 1–18.
- Murtagh & Contreras (2012) Murtagh, F. & Contreras, P. (2012), ‘Algorithms for hierarchical clustering: an overview’, Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 2(1), 86–97.
- Reid et al. (2016) Reid, S., Tibshirani, R. & Friedman, J. (2016), ‘A study of error variance estimation in lasso regression’, Statistica Sinica pp. 35–67.
- Satija et al. (2015) Satija, R., Farrell, J. A., Gennert, D., Schier, A. F. & Regev, A. (2015), ‘Spatial reconstruction of single-cell gene expression data’, Nature Biotechnology 33(5), 495–502.
- Stuart et al. (2019) Stuart, T., Butler, A., Hoffman, P., Hafemeister, C., Papalexi, E., Mauck III, W. M., Hao, Y., Stoeckius, M., Smibert, P. & Satija, R. (2019), ‘Comprehensive integration of single-cell data’, Cell 177(7), 1888–1902.
- Suzuki & Shimodaira (2006) Suzuki, R. & Shimodaira, H. (2006), ‘Pvclust: an R package for assessing the uncertainty in hierarchical clustering’, Bioinformatics 22(12), 1540–1542.
- Tian & Taylor (2017) Tian, X. & Taylor, J. (2017), ‘Asymptotics of selective inference’, Scandinavian Journal of Statistics 44(2), 480–499.
- Tibshirani et al. (2018) Tibshirani, R. J., Rinaldo, A., Tibshirani, R., Wasserman, L. et al. (2018), ‘Uniform asymptotic inference and the bootstrap after model selection’, Annals of Statistics 46(3), 1255–1287.
- Wood (2015) Wood, S. (2015), mgcv: Mixed GAM Computation Vehicle with Automatic Smoothness Estimation. R package version 1.8-31, available on CRAN.
- Yang et al. (2016) Yang, F., Barber, R. F., Jain, P. & Lafferty, J. (2016), Selective inference for group-sparse linear models, in ‘Advances in Neural Information Processing Systems’, pp. 2469–2477.
- Zhang et al. (2019) Zhang, J. M., Kamath, G. M. & David, N. T. (2019), ‘Valid post-clustering differential analysis for single-cell RNA-seq’, Cell Systems 9(4), 383–392.
- Zheng et al. (2017) Zheng, G. X., Terry, J. M., Belgrader, P., Ryvkin, P., Bent, Z. W., Wilson, R., Ziraldo, S. B., Wheeler, T. D., McDermott, G. P., Zhu, J. et al. (2017), ‘Massively parallel digital transcriptional profiling of single cells’, Nature Communications 8(1), 1–12.
Appendix A Proofs
A.1 Proof of Theorem 1
The proof of Theorem 1 is similar to the proof of Theorem 3.1 in Loftus & Taylor (2015), the proof of Lemma 1 in Yang et al. (2016), and the proof of Theorem 3.1 in Chen & Bien (2020).
For any , we have
(A.1) |
Therefore,
(A.2) |
where the first equality follows from (A.1), and the second equality follows from the definition of in (7). Substituting (A.2) into the definition of given by (8) yields
(A.3) | |||
To simplify (A.3), we now show that
(A.4) |
and that under ,
(A.5) |
First, we will show (A.4). Recall that is the orthogonal projection matrix onto the subspace orthogonal to . Thus, . It follows from properties of the matrix normal and multivariate normal distributions that . This implies (A.4), since . Second, we will show (A.5). It follows from (1) that . Thus, under ,
(A.6) |
and (A.5) follows from the independence of the length and direction of a standard multivariate normal random vector.
We now apply (A.4) and (A.5) to (A.3). This yields
(A.7) |
where (A.7) follows from the definition of in (10). Under ,
by (A.6). That is, under , we have . Therefore, for , it follows from (A.7) that
(A.8) |
since denotes the cumulative distribution function of a random variable truncated to the set . This is (9).
A.2 Proof of Lemma 1
First, we state and prove a preliminary result involving the dissimilarities between groups of observations in any perturbed data set , which holds for any clustering method .
Lemma A.1.
For any $\phi \geq 0$, and for any two sets $\mathcal{G}$ and $\mathcal{G}'$ that are both contained in $\hat{\mathcal{C}}_1$, both contained in $\hat{\mathcal{C}}_2$, or both contained in $(\hat{\mathcal{C}}_1 \cup \hat{\mathcal{C}}_2)^c$, we have that
$d(\mathcal{G}, \mathcal{G}'; \mathbf{x}'(\phi)) = d(\mathcal{G}, \mathcal{G}'; \mathbf{x})$, where $\mathbf{x}'(\phi)$ is defined in (13).
Proof.
By (13), $[\mathbf{x}'(\phi)]_i - [\mathbf{x}'(\phi)]_{i'} = x_i - x_{i'}$ for all pairs of observations $i, i'$ that are both in $\hat{\mathcal{C}}_1$, both in $\hat{\mathcal{C}}_2$, or both in $(\hat{\mathcal{C}}_1 \cup \hat{\mathcal{C}}_2)^c$, and therefore all pairwise dissimilarities between such observations are preserved for all $\phi \geq 0$. The result follows. ∎
Suppose that . Then, (15) follows immediately from observing that and are clusters merged at an earlier stage of the hierarchical clustering algorithm and thus must satisfy the condition in Lemma A.1.
We will now show (16). Applying the right-hand side of (16) with yields the left-hand side, so the direction holds. We will now prove the direction by contradiction. Suppose that there exists some such that
(A.10) |
and let specifically denote the smallest such value. Then, , which implies that
(A.11) |
Since clusters cannot unmerge once they have merged, and by definition, it must be the case that and are both in , are both in , or are both in . Similarly, since we assumed that , it must be the case that and are both in , are both in , or are both in . Thus, we can apply Lemma A.1 to both sides of (A.11) to yield
(A.12) |
which implies that is not the pair of clusters that merged in the step of the hierarchical clustering procedure applied to . This is a contradiction. Therefore, for all such that , it must be the case that . Observing that for all completes the proof of the direction.
A.3 Proof of Theorem 2
Observe from Algorithm 1 that
(A.13) |
if and only if for all ,
(A.14) | |||
Equation (15) in Lemma 1 says that
Therefore, (A.13) is true if and only if for all ,
(A.15) | |||
Recall that . Observe that (A.15) is true for all if and only if for all ,
(A.16) |
where and are the start and end of the lifetime of the cluster pair , respectively. Therefore, it follows from (16) in Lemma 1 that if and only if (A.16) is true for all . Recalling from (12) that , it follows that
This is (18). Furthermore, this intersection involves sets, because
A.4 Proof of Proposition 2
Let , where in (17) is the set of losing cluster pairs in . Suppose that we are given the start and end of the lifetime of this cluster pair in , and , and we are given the set of steps where inversions occur in the dendrogram of below the th merge,
Observe that for in (19),
where Therefore, we can compute for each cluster pair as follows:
-
1.
Compute by checking every step in to see if it is in .
-
2.
Compute by taking the max over merge heights.
Step 1 requires operations, and Step 2 requires operations. Since , it follows that we can compute in time.
A.5 Proof of Proposition 4
Recall that
(A.17) |
Case 1: (A.18) holds
We first state and prove a preliminary result.
Lemma A.2.
Suppose that , i.e. we perform hierarchical clustering to obtain clusters. Let , , . Suppose that (A.18) holds. Then, for all ,
(A.19) |
Proof.
Since , it follows that . Furthermore, clusters that merge cannot unmerge, and . This means that all observations in the set must belong to the same cluster in , and all observations in the set must belong to the same cluster in . Therefore, (A.18) implies that either
Case 2: (A.18) does not hold
Suppose that do not satisfy (A.18). Then, and are assigned to different clusters in . Since is created by merging a pair of clusters in , it follows that and are assigned to different clusters and in such that is not the winning pair. That is, there exists such that , and , , and . This means that , with . Furthermore, single linkage cannot produce inversions (Murtagh & Contreras 2012), i.e. for all . It follows that
(A.23) |
Applying (A.23) to the definition of in (A.17) yields
A.6 Proof of Theorem 4
The proof of Theorem 4 is adapted from Appendices A.10 and A.11 in Tibshirani et al. (2018). We first state and prove an intermediate result.
Lemma A.3.
Recall that denotes the CDF of a random variable truncated to the set . If , then
Proof.
Let denote the probability density function (pdf) of a random variable truncated to the set . Observe that
Thus, is an exponential family with natural parameter . Therefore, it has a monotone non-increasing likelihood ratio in its sufficient statistic, . This means that for any fixed ,
(A.24) |
Rearranging terms in (A.24) yields:
(A.25) |
Let with . Then,
where the inequality follows from (A.25). That is, we have shown that
(A.26) |
Let . Then,
(A.27) | ||||
where the inequality follows from (A.26) and the fact that for all . Rearranging terms in (A.27) yields
∎
We will now prove Theorem 4. In the following, we will use to denote the event , to denote , and to denote . Observe that
(A.28) |
Recall from (9) that
and recall from (25) that
It follows from Lemma A.3 that for any such that , we have that
Thus, for all and for all ,
(A.29) |
where the equality follows from (11). Applying (A.29) to (A.28) yields
(A.30) |
By our assumption in Theorem 4, . Applying this fact to (A.30) yields
This completes the proof of Theorem 4.
A.7 Proof of Propositions 5 and 6
Recall that denotes the null hypothesis that . Further recall that denotes the set of hypotheses of the form that are true. We start by rewriting (11) as follows.
(A.31) |
We will first prove Proposition 5. Let . Then,
(A.32) | |||
where (A.32) follows from (A.31) and the fact that . Therefore,
This is (26).
We will now prove Proposition 6. Let , and let and be conditionally independent given . Then,
(A.33) | |||
where the last equality follows from the fact that implies that . Since we assumed that and are conditionally independent given , we have for any that
where the last equality follows from (A.31). It follows that for any , we have that
(A.34) |
Applying (A.34) to (A.33) yields
where the second-to-last equality follows from the fact that implies that . Therefore,
This is (27).
Appendix B Algorithm for computing in the case of squared Euclidean distance and linkages that satisfy (20)
We begin by defining quantities used in the algorithm. For all , let
(B.1) |
so that for any ,
(B.2) | ||||
Recall from Section 3.3 that
(B.3) | ||||
(B.4) |
Let for all . Define
(B.5) |
and for , define
(B.6) |
so that for defined in (17), Thus, it follows from Theorem 2 that
(B.7) |
where for all ,
(B.8) |
The following algorithm computes the set (defined in (12)) in the case of squared Euclidean distance and linkages that satisfy (20) in , where is the number of inversions in the dendrogram of below the th merge:
-
1.
Compute .
-
2.
Compute and defined in (B.1) for all as follows:
-
(a)
Let for all . Let for all .
-
(b)
For all , initialize .
-
(c)
For , update and .
-
(a)
-
3.
For all , compute defined in (B.8) as follows:
-
(a)
Use and to compute and likewise for , according to (B.2).
-
(b)
Compute in (B.4) by checking which elements of are in .
-
(c)
Use and to compute according to (B.3).
-
(d)
Compute the coefficients corresponding to the quadratic function in constant time, using Lemma 2.
-
(e)
Use these coefficients and to evaluate . This amounts to solving a quadratic inequality in .
-
(a)
-
4.
For all and for all , compute defined in (B.8) as follows:
-
5.
Compute .
To see that this algorithm computes in time, observe that:
-
•
Steps 1 and 2 can each be performed in time. Note that .
-
•
Each of the iterations in Step 3 can be performed in time.
-
•
Each of the iterations in Step 4 can be performed in time.
-
•
Step 5 can be performed efficiently. This is because Step 4 yields a quadratic inequality for each pair, and we can take the intersection over the solution sets of these quadratic inequalities using interval arithmetic (Bourgon 2009); a short illustration follows this list.
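As a small illustration of the final intersection step (our own example), the intervals package cited above can intersect unions of intervals directly:

```r
# Our own illustration of Step 5 using the intervals package (Bourgon 2009).
library(intervals)
A <- Intervals(rbind(c(0, 3), c(5, 9)))   # solution set of one quadratic inequality
B <- Intervals(rbind(c(2, 6)))            # solution set of another
interval_intersection(A, B)               # [2, 3] U [5, 6]
```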
Appendix C Supplementary material for Section 4.3
The goal of this section is to establish an estimator of $\sigma$ that satisfies the condition from Theorem 4. For any data set $\mathbf{x} \in \mathbb{R}^{n \times q}$, define
\[ \hat{\sigma}(\mathbf{x}) = \sqrt{ \frac{1}{nq} \sum_{j=1}^{q} \sum_{i=1}^{n} \left( x_{ij} - \bar{x}_j \right)^2 }, \qquad \text{(C.1)} \]
where $\bar{x}_j = \frac{1}{n}\sum_{i=1}^{n} x_{ij}$ for all $j = 1, \ldots, q$. We will first show that under some assumptions, $\hat{\sigma}(\mathbf{X})$ asymptotically over-estimates $\sigma$. Then, we will show that if we estimate $\sigma$ by applying $\hat{\sigma}(\cdot)$ to an independent and identically distributed copy of $\mathbf{X}$, then the condition from Theorem 4 is satisfied.
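This estimator is one line of R. The sketch below is our own, written under the assumption that (C.1) is the pooled sample variance around the overall feature means, as described in the surrounding text.

```r
# Our own sketch of the over-estimate in (C.1): pooled variance around overall feature means.
sigma_hat <- function(x) {
  xbar <- colMeans(x)                                   # overall mean of each feature
  sqrt(sum(sweep(x, 2, xbar)^2) / (nrow(x) * ncol(x)))  # ignores any cluster structure
}
```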
Let $\sigma > 0$ be unknown. We assume that the sequence of mean matrices $\boldsymbol{\mu}^{(n)}$ in Theorem 4 satisfies the following assumptions.
Assumption 1.
For all , , i.e. there are exactly distinct mean vectors among the first observations.
Assumption 2.
For all , where and . That is, the proportion of the first observations that have mean vector converges to for all .
This leads to the following result, which says that under Assumptions 1 and 2, asymptotically over-estimates .
Lemma C.1.
Proof.
Observe that
(C.3) |
We will first show that
(C.4) |
Since for any positive integer and for any , we have
(C.5) |
it follows that
(C.6) |
Furthermore,
(C.7) | ||||
where (C.7) follows from the fact that . It follows from (C.5) that
(C.8) |
Equation (C.4) follows from (C.6) and (C.8). Next, since (C.5) holds for and for all , we have that
Thus, for all , we have that . Combining this fact with (C.4) and (C.3) yields:
(C.9) |
where for all . It follows from (C.9) that
(C.10) |
for . Applying the continuous mapping theorem to (C.10) yields where . Therefore, ∎
The following result establishes an estimator of that satisfies the condition in Theorem 4, by making use of an independent and identically distributed copy of , if such a copy is available. If not, then sample-splitting can be used.
Corollary 1.
For , suppose that , and let be a realization from , and that Assumptions 1 and 2 hold for some . For all let and be a pair of estimated clusters in . For all let be an independent and identically distributed copy of . Then,
(C.11) |
Appendix D Additional simulation studies
D.1 Null p-values when is unknown
In Section 4.3, we propose plugging an estimator of $\sigma$ into the p-value defined in (8) to get (25), and we establish conditions under which rejecting $H_0$ when (25) is below $\alpha$ asymptotically controls the selective type I error rate (Theorem 4). Furthermore, in Appendix C, we establish an estimator of $\sigma$ that satisfies the conditions in Theorem 4 (Corollary 1). In the following, we investigate the finite-sample implications of applying the approach outlined in Appendix C.
We generate data from (1) with two true clusters, separated by a distance $\delta$,
(D.1) |
We randomly split each data set into equally sized “training” and “test” sets. For each training set, we use average, centroid, single, and complete linkage hierarchical clustering to estimate three clusters, and then test $H_0$ for a randomly chosen pair of clusters. We estimate $\sigma$ by applying $\hat{\sigma}(\cdot)$ in (C.1) to the corresponding test set, then use it to compute p-values for average, centroid, and single linkage. In the case of complete linkage, we approximate (25) by replacing $\sigma$ in the importance sampling procedure described in Section 4.1 with $\hat{\sigma}(\cdot)$ in (C.1) applied to the corresponding test set. Figure D.1 displays QQ plots of the empirical distribution of the p-values against the Uniform(0, 1) distribution, over 500 simulated data sets where $H_0$ holds, for several values of $\delta$ in (D.1).

The p-values appear to be stochastically larger than the Uniform(0, 1) distribution, across all linkages and all values of $\delta$ in (D.1). As $\delta$ decreases and the clusters become less separated, $\hat{\sigma}(\cdot)$ in (C.1) over-estimates $\sigma$ less severely. Thus, as $\delta$ decreases, the distribution of the p-values becomes closer to the Uniform(0, 1) distribution.
D.2 Power as a function of effect size
Recall that in Section 5.2, we considered the conditional power of the test proposed in Section 2.1 to detect a difference in means between two true clusters. We now evaluate the power of the test proposed in Section 2.1 to detect a difference in means between estimated clusters, without conditioning on having correctly estimated the true clusters.
We generate data from (1) with $\boldsymbol{\mu}$ given by (28). We simulate data sets for nine evenly-spaced values of $\delta$. For each simulated data set, we test $H_0$ for a randomly chosen pair of clusters obtained from single, average, centroid, and complete linkage hierarchical clustering, with significance level $\alpha$. We define the effect size in terms of the difference between the means of the two randomly chosen clusters,
and consider the probability of rejecting $H_0$ as a function of the effect size. We smooth our power estimates by fitting a regression spline using the gam
function in the R
package mgcv
(Wood 2015). Figure D.2(a)–(d) displays the smoothed estimates when , and Figure D.2(e)–(h) displays the smoothed estimates when . (We stratify our results because the power is much lower when is small.)


It may come as a surprise that the $x$-axis on Figure D.2(c) starts at 4.5, while the $x$-axis on Figure D.2(d) starts at 0. This is because single linkage hierarchical clustering produces unbalanced clusters unless it successfully detects the true clusters. Thus, the relevant condition is only satisfied for single linkage hierarchical clustering when the true clusters are detected, which happens only when the effect size is sufficiently large. The $x$-axis on Figure D.2(b) starts at 3 rather than 0 for a similar reason.
In Figure D.2, for all four linkages, the power to reject $H_0$ increases as the effect size increases. All four linkages have similar power where the $x$-axes overlap.

Our test has low power when the test statistic $\|\bar{x}_{\hat{\mathcal{C}}_1} - \bar{x}_{\hat{\mathcal{C}}_2}\|_2$ is very close to the left boundary of the set $\mathcal{S}$ defined in (12). This is a direct result of the fact that the p-value in (9) is computed from a scaled $\chi_q$ distribution truncated to $\mathcal{S}$ (Theorem 1). Figure D.3 displays the smoothed p-values from average linkage hierarchical clustering, at a fixed effect size, as a function of the distance between the test statistic and the left boundary of $\mathcal{S}$. In Figure D.3, the p-values increase as the distance increases. Similar results have been observed in other selective inference frameworks; see Kivaranovic & Leeb (2020) for a detailed discussion.