Delaunay Weighted Two-sample Test for High-dimensional Data by Incorporating Geometric Information
Abstract
Two-sample hypothesis testing is a fundamental problem with various applications, which faces new challenges in the high-dimensional context. To mitigate the issue of the curse of dimensionality, high-dimensional data are typically assumed to lie on a low-dimensional manifold. To incorporate geometric information in the data, we propose to apply the Delaunay triangulation and develop the Delaunay weight to measure the geometric proximity among data points. In contrast to existing similarity measures that only utilize pairwise distances, the Delaunay weight takes both the distance and direction information into account. A detailed computation procedure to approximate the Delaunay weight for the unknown manifold is developed. We further propose a novel nonparametric test statistic using the Delaunay weight matrix to test whether the underlying distributions of two samples are the same or not. Applied to simulated data, the new test exhibits substantial power gain in detecting differences in principal directions between distributions. The proposed test also shows great power on a real dataset of human face images.
Keywords: Delaunay triangulation; geometric proximity; high dimension; manifold learning; permutation test.
1 Introduction
The two-sample test on whether two underlying distributions are the same or not is a fundamental problem in statistics and machine learning, with broad applications in change point detection (Cao et al., 2018; Chen, 2019), goodness-of-fit evaluation (Chwialkowski et al., 2016; Arias-Castro et al., 2018) and experimental design (Morgan and Rubin, 2012). In the era of big data, an emerging challenge is that the power of classical two-sample tests (e.g., Hotelling's $T^2$ test, the Wald test or the likelihood ratio test) decreases or even vanishes as the data dimension increases. Under the high-dimensional setting, several methods have been developed to test the simpler hypothesis of whether two population means are the same. For example, under the sparse alternative that the mean vectors of two samples only differ in a small number of dimensions, Cai et al. (2013b) developed a maximum-type test statistic. To enhance the power and robustness to extreme values in detecting sparse mean differences, a parametric bootstrap approach with feature screening was proposed by Chang et al. (2017), while Wang et al. (2019) incorporated the trimmed mean approach to develop a robust test. Liu et al. (2022) developed a projection test by assuming the sparsity of the optimal projection direction. However, these methods are not consistent against general alternatives, e.g., when the two populations possess the same mean but differ in their covariance structures.
Nonparametric two-sample testing for general differences between distributions is a historical research topic with a rich literature. For univariate data, the Kolmogorov–Smirnov test (Kolmogorov, 1933; Smirnov, 1948), the Wilcoxon–Mann–Whitney test (Wilcoxon, 1945; Mann and Whitney, 1947) and the Wald–Wolfowitz runs test (Wald and Wolfowitz, 1940) are early developments on this topic. Although such methods may be directly applicable to multivariate settings (Bickel, 1969), they perform poorly unless the total sample size grows at an exponential rate with the dimension. To circumvent this difficulty, Friedman and Rafsky (1979) developed a two-sample test based on the minimum spanning tree (MST) that connects all data points with the minimal total length of edges. Given an MST of the pooled sample, the test statistic is constructed as the number of edges connecting data points from different samples, which is shown to be consistent against general alternatives. More recently, many nonparametric two-sample tests have been proposed for multivariate/high-dimensional data, including: (i) distance/energy-based tests based on pairwise distances of the pooled sample (Székely and Rizzo, 2004; Jurečková and Kalina, 2012; Marozzi, 2015); (ii) graph-based tests, e.g., the $k$-nearest neighbor ($k$-NN) graph (Schilling, 1986; Henze, 1988; Hall and Tajvidi, 2002) and the $k$-MST graph (Chen and Friedman, 2017; Chen et al., 2018); (iii) kernel tests via a decaying kernel of pairwise Euclidean distances of the pooled sample (Gretton et al., 2009, 2012; Cheng and Xie, 2021; Yan and Zhang, 2023); and (iv) regression tests connecting binary classification and two-sample testing (Kim et al., 2019; Hediger et al., 2022).
Some of the aforementioned works can handle high-dimensional data. To mitigate the curse of dimensionality, high-dimensional data are often assumed to lie on an unknown low-dimensional manifold, which is reasonable in many applications, such as bioinformatics (Moon et al., 2018), image data analysis (Pless and Souvenir, 2009) and natural language processing (Zhao et al., 2021). The performance of nonparametric tests under the low-dimensional manifold setting has been studied in the literature. For example, it has been shown that the kernel test (Cheng and Xie, 2021) and the local regression test (Kim et al., 2019) are adaptive to the intrinsic dimension of the manifold when the tuning parameter is properly chosen. Arias-Castro et al. (2018) also showed that the multivariate chi-squared test is adaptive to the intrinsic dimension of the manifold, although their work focuses mainly on theoretical development rather than practical implementation. However, none of these tests directly takes the manifold structure into account, and our work aims to fill this gap.
In particular, we develop a new nonparametric two-sample test, named the Delaunay weighted test, for high-dimensional data under the low-dimensional manifold assumption. Following the $k$-NN test statistic proposed by Schilling (1986), we define the test statistic as the sum of all within-group proximities of the pooled sample, with proximity measured by our newly developed Delaunay weight instead of the $k$-NN. By incorporating the Delaunay triangulation on a Riemannian manifold (Leibon and Letscher, 2000), the Delaunay weight measures the geometric proximity among data points by taking both the geodesic distance and the relative direction into account. In practice, the Delaunay weight cannot be computed exactly because the manifold is usually unknown. As a remedy, we compute the Delaunay weight matrix of low-dimensional manifold representations for approximate inference. We modify a robust manifold learning algorithm (Dimeglio et al., 2014) and apply the classical MDS (Kruskal, 1964) to obtain the manifold representations. The stereographic projected DELAUNAYSPARSE algorithm is used to find the Delaunay simplices for computing the Delaunay weight. The $p$-value of our test is obtained by a permutation procedure, and our Delaunay weighted test is applicable as long as the sample size is larger than the (estimated) intrinsic dimension of the underlying manifold. Compared with the kernel test and the local regression test, which are based on local distances and adaptive to the intrinsic dimension of the manifold, ours further takes the relative direction among data points into account, and thus is more efficient in detecting direction differences in addition to location differences between two distributions. Numerical experiments on both synthetic and real data reveal that our Delaunay weighted test outperforms existing approaches, especially when the two distributions only differ in the principal directions of their covariance matrices.
The rest of this article is organized as follows. Section 2 formulates the two-sample testing problem for high-dimensional data on an unknown manifold and introduces the necessary geometric concepts on the manifold. The Delaunay weighted test is proposed in Section 3, including the development of the Delaunay weight and the test statistic, computational strategies, and the permutation procedure for inference. Section 4 investigates the theoretical properties of the proposed test, while numerical experiments in Section 5 demonstrate the practical advantages of our method. The supplementary material contains details of our computational strategies, proofs of theorems and additional experiments.
2 Delaunay Triangulation on Manifold
Consider two samples $\mathcal{X} = \{x_1, \ldots, x_m\}$ and $\mathcal{Y} = \{y_1, \ldots, y_n\}$ of i.i.d. data from two distributions $F$ and $G$, respectively. Our goal is to test $H_0: F = G$ versus $H_1: F \neq G$. Let $\mathcal{Z} = \mathcal{X} \cup \mathcal{Y} = \{z_1, \ldots, z_N\}$ be the union of the two samples, $N = m + n$, and let $\delta = (\delta_1, \ldots, \delta_N)$ denote the group indicator of the pooled sample, where $\delta_i = 1$ if $z_i \in \mathcal{X}$ and $\delta_i = 0$ if $z_i \in \mathcal{Y}$, for $i = 1, \ldots, N$. We assume that both $F$ and $G$ are supported on a $d$-dimensional geodesically convex Riemannian manifold $\mathcal{M}$ embedded in the ambient space $\mathbb{R}^D$ with $d \ll D$ (Arias-Castro et al., 2018; Kim et al., 2019; Cheng and Xie, 2021).
We first review a few notions in Riemannian geometry. Let $\gamma_{ab}$ denote the shortest geodesic path between two points $a, b \in \mathcal{M}$, whose length is the geodesic distance $d_{\mathcal{M}}(a, b)$ induced by the ambient space $\mathbb{R}^D$. The manifold $\mathcal{M}$ is assumed to be geodesically convex, meaning that the path $\gamma_{ab}$ exists uniquely for any $a, b \in \mathcal{M}$. Let $T_a\mathcal{M}$ denote the tangent space of $\mathcal{M}$ at point $a$. As illustrated in Figure 1, the logarithmic map $\log_a(b) \in T_a\mathcal{M}$ is defined as the vector starting from $a$ whose length equals $d_{\mathcal{M}}(a, b)$ and whose direction is the derivative of $\gamma_{ab}$ at $a$. Since $\gamma_{ab}$ always exists uniquely for any two points $a, b \in \mathcal{M}$, the domain of the logarithmic map $\log_a(\cdot)$ is the whole manifold $\mathcal{M}$, for all $a \in \mathcal{M}$.
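On a general manifold the logarithmic map rarely has a closed form, but on the unit sphere it does. The following minimal Python sketch (our own illustration, not part of the paper's implementation) computes $\log_a(b)$ for the unit sphere embedded in $\mathbb{R}^3$: the returned tangent vector at $a$ has length equal to the great-circle distance $\arccos(a^\top b)$ and points along the geodesic toward $b$.

```python
import numpy as np

def sphere_log_map(a, b):
    """Logarithmic map log_a(b) on the unit sphere, embedded in R^3.

    Returns the tangent vector at a whose length is the geodesic
    (great-circle) distance to b and whose direction is the derivative
    of the geodesic from a to b at the point a.
    """
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    cos_t = np.clip(a @ b, -1.0, 1.0)
    theta = np.arccos(cos_t)              # geodesic distance d_M(a, b)
    u = b - cos_t * a                     # component of b orthogonal to a
    if np.linalg.norm(u) < 1e-12:         # b = a (or antipodal: map not unique)
        return np.zeros_like(a)
    return theta * u / np.linalg.norm(u)  # vector in the tangent space T_a M

a, b = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
v = sphere_log_map(a, b)                  # length pi/2, pointing toward b
print(v, np.linalg.norm(v))
```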
We can generalize the following geometric concepts from the Euclidean space to the geodesically convex Riemannian manifold $\mathcal{M}$.

• Convex set: A subset $A \subseteq \mathcal{M}$ is convex on $\mathcal{M}$ if for any two points $a, b \in A$, $\gamma_{ab} \subseteq A$.

• Convex hull: For a set of points $A \subseteq \mathcal{M}$, the convex hull of $A$, denoted by $\mathrm{conv}(A)$, is the intersection of all convex sets on $\mathcal{M}$ that contain $A$.

• Ball and sphere: A ball on $\mathcal{M}$ with center $c \in \mathcal{M}$ and radius $r \ge 0$ is defined as $B(c, r) = \{x \in \mathcal{M} : d_{\mathcal{M}}(c, x) \le r\}$, and the corresponding sphere is defined as $S(c, r) = \{x \in \mathcal{M} : d_{\mathcal{M}}(c, x) = r\}$.

• Simplex and circumscribing ball: For $0 \le k \le d$, the $k$-simplex of $k + 1$ points on $\mathcal{M}$ is the convex hull of these points, and the circumscribing ball of a simplex is the smallest-radius ball among all balls on $\mathcal{M}$ containing the simplex. In particular, the $1$-simplex of two points $a, b$ is the shortest geodesic path $\gamma_{ab}$.

• Projection: For a set of points $A$ and a point $p \in \mathcal{M}$, the projection of $p$ on $\mathrm{conv}(A)$ is defined as $\mathrm{proj}(p; A) = \arg\min_{q \in \mathrm{conv}(A)} d_{\mathcal{M}}(p, q)$, in the case that the right-hand side exists uniquely; otherwise, the projection is defined through the Euclidean projection in the ambient space, which exists uniquely almost everywhere with respect to the Lebesgue measure (Bhattacharya and Patrangenaru, 2003).
Following Leibon and Letscher (2000), the Delaunay triangulation can be defined on $\mathcal{M}$ as follows.
Definition 1.
For a set of $N$ points $\mathcal{Z} \subset \mathcal{M}$, the Delaunay triangulation of $\mathcal{Z}$, denoted by $DT(\mathcal{Z})$, is a mesh of $d$-simplices $\Delta_1, \ldots, \Delta_K$ satisfying:

1. For $k = 1, \ldots, K$, the set of vertices of simplex $\Delta_k$, denoted by $V(\Delta_k)$, is a subset of $\mathcal{Z}$ and is not contained in any $l$-simplex ($l < d$) on $\mathcal{M}$.

2. Different simplices are disjoint except on their shared boundaries.

3. The union $\bigcup_{k=1}^{K} \Delta_k$ is the convex hull of $\mathcal{Z}$ on $\mathcal{M}$, $\mathrm{conv}(\mathcal{Z})$.

4. The empty ball property: for $k = 1, \ldots, K$, the circumscribing ball of $\Delta_k$ on $\mathcal{M}$ contains no points of $\mathcal{Z}$ except on its sphere.
We refer to the simplices $\Delta_1, \ldots, \Delta_K$ as the Delaunay simplices. According to Leibon and Letscher (2000), the Delaunay triangulation is unique if and only if the pooled sample $\mathcal{Z}$ is generic, i.e., no $d + 2$ points in $\mathcal{Z}$ lie on a common sphere on $\mathcal{M}$ and no $l + 2$ points lie in an $l$-simplex on $\mathcal{M}$ ($l < d$). As long as both $F$ and $G$ have Lipschitz continuous densities with respect to the measure on $\mathcal{M}$ induced by the Lebesgue measure of the ambient Euclidean space, the pooled sample is generic with probability one. Being the geometric dual of the Voronoi diagram, the Delaunay triangulation generates a mesh of simplices on $\mathcal{M}$ that are most regularized in shape. Figure 2 illustrates the empty ball property of the Delaunay triangulation for $d = 2$, where $DT(\mathcal{Z})$ maximizes the minimum angle over all triangles ($2$-simplices) among all possible triangulations of $\mathcal{Z}$ (Sibson, 1978).
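In the Euclidean case with $d = 2$, the empty ball property can be verified directly. The sketch below (our own illustration, assuming the scipy library) triangulates a random planar sample and checks that no sample point falls strictly inside the circumcircle of any Delaunay triangle.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumcircle(p0, p1, p2):
    # The circumcenter c solves |c - p0|^2 = |c - p1|^2 = |c - p2|^2.
    A = 2.0 * np.array([p1 - p0, p2 - p0])
    rhs = np.array([p1 @ p1 - p0 @ p0, p2 @ p2 - p0 @ p0])
    c = np.linalg.solve(A, rhs)
    return c, np.linalg.norm(c - p0)

rng = np.random.default_rng(0)
pts = rng.standard_normal((30, 2))
tri = Delaunay(pts)
for simplex in tri.simplices:
    c, r = circumcircle(*pts[simplex])
    inside = np.linalg.norm(pts - c, axis=1) < r - 1e-9
    inside[simplex] = False               # the vertices lie on the sphere itself
    assert not inside.any()               # empty ball property
print("empty ball property holds for all", len(tri.simplices), "triangles")
```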
3 Delaunay Weighted Test
3.1 Delaunay Weight
Consider first the simplest case $\mathcal{M} = \mathbb{R}^d$. For each internal point $p \in \mathrm{conv}(\mathcal{Z})$, there exists a Delaunay simplex $\Delta(p) \in DT(\mathcal{Z})$ such that $p \in \Delta(p)$. We define the Delaunay weight vector for $p$ with respect to $\mathcal{Z}$ as a vector $w(p) = (w_1(p), \ldots, w_N(p))^\top$ such that
(a) $w_i(p) \ge 0$ if $z_i \in V(\Delta(p))$ and $w_i(p) = 0$ if $z_i \notin V(\Delta(p))$, where $V(\Delta(p))$ denotes the vertex set of the simplex $\Delta(p)$;

(b) $\sum_{i=1}^{N} w_i(p) = 1$;

(c) $\sum_{i=1}^{N} w_i(p) z_i = p$.
According to Abadie and L'Hour (2021), the Delaunay weight vector for $p$ corresponds to the convex combination of $\mathcal{Z}$ with the smallest compound discrepancy among all convex combinations satisfying $\sum_{i=1}^{N} w_i z_i = p$. That is, $w(p)$ selects at most $d + 1$ points in $\mathcal{Z}$ (those with $w_i(p) > 0$) that are convexly combined to form $p$ and are closest to $p$ in the sense of the minimal compound discrepancy.
Conditions (a)–(c) only define the Delaunay weight vector for an internal point $p \in \mathrm{conv}(\mathcal{Z})$. For generalization to an external point $p \notin \mathrm{conv}(\mathcal{Z})$, we utilize the projection idea from Section 2.3 of Abadie and L'Hour (2021). Let $w^{\lambda}(p)$ be the solution to the optimization problem,
$$\min_{w \in \mathbb{R}^N} \; \Big\| p - \sum_{i=1}^{N} w_i z_i \Big\|^2 + \lambda \sum_{i=1}^{N} w_i \| p - z_i \|^2,$$
subject to $w_i \ge 0$ and $\sum_{i=1}^{N} w_i = 1$, where $\|\cdot\|$ denotes the Euclidean norm. The Delaunay weight vector for $p \in \mathrm{conv}(\mathcal{Z})$ can be rewritten as $w(p) = \lim_{\lambda \to 0^+} w^{\lambda}(p)$. For $p \notin \mathrm{conv}(\mathcal{Z})$, Abadie and L'Hour (2021) showed that $\lim_{\lambda \to 0^+} w^{\lambda}(p) = w(p^*)$, where $p^* = \mathrm{proj}(p; \mathcal{Z})$ is the projection of $p$ on the convex hull $\mathrm{conv}(\mathcal{Z})$. Therefore, we have
$$w(p) = \lim_{\lambda \to 0^+} w^{\lambda}(p) = w(\mathrm{proj}(p; \mathcal{Z})). \tag{1}$$
Based on (1), we can generalize the definition of the Delaunay weight to all $p \in \mathbb{R}^d$ by substituting condition (c) with
(c†) $\sum_{i=1}^{N} w_i(p) z_i = \mathrm{proj}(p; \mathcal{Z})$.
However, when $\mathcal{M}$ is not Euclidean, condition (c†) is no longer applicable because it relies on the linear operations of Cartesian coordinates, which are not well-defined on $\mathcal{M}$. To generalize the Delaunay weight vector to nonlinear manifolds, we make use of the logarithmic map.
Definition 2.
For a set of $N$ points $\mathcal{Z} \subset \mathcal{M}$ and a point $p \in \mathcal{M}$, the Delaunay weight vector of $p$ is $w(p) = (w_1(p), \ldots, w_N(p))^\top$ satisfying (a), (b) and

(c‡) $\sum_{i=1}^{N} w_i(p) \log_{p^*}(z_i) = \mathbf{0}$,

where $p^* = \mathrm{proj}(p; \mathcal{Z})$ is the projection of $p$ on the convex hull $\mathrm{conv}(\mathcal{Z})$ and $\Delta(p)$ in (a) is the Delaunay simplex containing $p^*$.
For the special case that $\mathcal{M} = \mathbb{R}^d$, conditions (c†) and (c‡) are equivalent because
$$\sum_{i=1}^{N} w_i(p) \log_{p^*}(z_i) = \sum_{i=1}^{N} w_i(p) (z_i - p^*) = \sum_{i=1}^{N} w_i(p) z_i - p^*, \tag{2}$$
where $\log_{p^*}(z_i) = z_i - p^*$ denotes the vector from $p^*$ to $z_i$. Therefore, condition (c‡) is indeed a generalization of (c†) to a nonlinear manifold. Since the Delaunay triangulation is unique for generic $\mathcal{Z}$ (Leibon and Letscher, 2000), the Delaunay weight vector is unique for generic $\mathcal{Z}$ and any point $p$, as shown in Section S1 of the supplementary material.
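In the Euclidean case, conditions (a), (b) and (c) say that $w(p)$ collects the barycentric coordinates of $p$ in the Delaunay simplex containing it. The following sketch (our own illustration using scipy's qhull wrapper, not the paper's implementation) computes the Delaunay weight vector for an internal point; an external point would first be projected onto the convex hull, as in (1).

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_weight_vector(p, points):
    """Delaunay weight vector of p w.r.t. `points` (Euclidean case, p inside the hull)."""
    n, d = points.shape
    tri = Delaunay(points)
    s = tri.find_simplex(p)
    if s == -1:
        raise ValueError("p lies outside conv(Z): project it onto the hull first, cf. (1)")
    # Barycentric coordinates of p in the Delaunay simplex containing it.
    T, r = tri.transform[s, :d], tri.transform[s, d]
    bary = T @ (p - r)
    bary = np.append(bary, 1.0 - bary.sum())
    w = np.zeros(n)
    w[tri.simplices[s]] = bary            # nonzero only on the d+1 simplex vertices
    return w

rng = np.random.default_rng(1)
Z = rng.standard_normal((20, 2))
w = delaunay_weight_vector(Z.mean(axis=0), Z)
print(w.sum(), (w > 0).sum())             # sums to 1; at most d+1 = 3 nonzeros
print(np.allclose(w @ Z, Z.mean(axis=0))) # condition (c): the combination recovers p
```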
Based on Definition 2, the Delaunay weight matrix of $\mathcal{Z}$ is defined as $W = (W_{ij})_{N \times N}$, with
$$W_{ij} = \begin{cases} w_j(z_i; \mathcal{Z}_{-i}), & j \neq i, \\ 0, & j = i, \end{cases} \tag{3}$$
where $w(z_i; \mathcal{Z}_{-i})$ measures the geometric proximity of $z_i$ to $\mathcal{Z}_{-i}$ and $\mathcal{Z}_{-i} = \mathcal{Z} \setminus \{z_i\}$ represents the remaining set of $\mathcal{Z}$ excluding $z_i$. Because $W_{ij}$ and $W_{ji}$ are usually not the same, the Delaunay weight matrix $W$ is a directed weighted adjacency matrix of $\mathcal{Z}$. Unlike the kernel, local regression and $k$-NN based approaches, which only consider local distances to $z_i$, the Delaunay weight evaluates the geometric proximity of the vertices of the Delaunay simplex to $z_i$ using both the distance and direction information. In particular, $W_{ij}$ and $W_{ik}$ can be different even if $d_{\mathcal{M}}(z_i, z_j) = d_{\mathcal{M}}(z_i, z_k)$. Indeed, we see from Definition 2 that $w(z_i; \mathcal{Z}_{-i})$ depends on the vectors $\log_{p^*}(z_j)$ rather than merely the distances $d_{\mathcal{M}}(z_i, z_j)$, for $z_j \in \mathcal{Z}_{-i}$.
To gain more insight into the Delaunay weight, note that, when $z_i$ is an inner point of the Delaunay simplex $\Delta(z_i)$, the non-zero elements of $w(z_i; \mathcal{Z}_{-i})$ correspond to points of $\mathcal{Z}_{-i}$ from all directions surrounding $z_i$. In contrast, under sparse or nonuniform sampling, the nearest neighbors of $z_i$ cannot cover all directions from $z_i$ because they tend to cluster in certain directions. We illustrate this point using an example with a sample in $\mathbb{R}^2$. Figure 3 shows the Delaunay simplex that contains the projection of a point on the convex hull of the remaining points. The weight vectors are concatenated to form the Delaunay weight matrix $W$, as shown in Figure 4. For one of the points, its three nearest neighbors all lie to its top-right, so the information from the bottom-left direction is not captured. In contrast, the vertices of its Delaunay simplex cover all directions from the point, and their relative directions are used in constructing $W$. Therefore, the Delaunay weight utilizes the local geometric information in a more comprehensive way than the $k$-NN.
3.2 Test Statistic
For the pooled sample $\mathcal{Z}$ with the group indicator $\delta$, the $k$-NN test statistic in Schilling (1986) can be written (up to normalization) as
$$T_{k\text{-NN}} = \sum_{i=1}^{N} \sum_{j \neq i} a_{ij} \, \mathbb{1}(\delta_i = \delta_j), \qquad a_{ij} = \mathbb{1}\{z_j \text{ is among the } k \text{ nearest neighbors of } z_i\},$$
which has inspired a family of nonparametric two-sample tests for detecting distribution differences via a proximity graph of the pooled sample (Rosenbaum, 2005; Chen and Friedman, 2017; Chen et al., 2018). Based on the Delaunay weight matrix $W$, we construct the test statistic
$$T_{\mathrm{DW}} = \sum_{i=1}^{N} \sum_{j \neq i} W_{ij} \, \mathbb{1}(\delta_i = \delta_j), \tag{4}$$
where the new geometric proximity measure accounts for the manifold structure and direction information.
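To make the construction concrete, the sketch below (our own Euclidean illustration with hypothetical function names) assembles the weight matrix $W$ of (3) by a leave-one-out loop and evaluates $T_{\mathrm{DW}}$ of (4); points falling outside the leave-one-out convex hull are simply skipped here, whereas the paper handles them via the stereographic extension of Section 3.3.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_weight_matrix(Z):
    """W[i, j] = weight of z_j in the Delaunay weight vector of z_i w.r.t. Z \\ {z_i}."""
    n, d = Z.shape
    W = np.zeros((n, n))
    idx = np.arange(n)
    for i in range(n):
        rest = idx[idx != i]
        tri = Delaunay(Z[rest])
        s = tri.find_simplex(Z[i])
        if s == -1:                        # outside the hull: needs projection (Sec. 3.3)
            continue
        T, r = tri.transform[s, :d], tri.transform[s, d]
        bary = T @ (Z[i] - r)
        W[i, rest[tri.simplices[s]]] = np.append(bary, 1.0 - bary.sum())
    return W

def delaunay_statistic(W, delta):
    """T_DW = sum over i != j of W[i, j] * 1{delta_i = delta_j}, cf. (4)."""
    same = np.equal.outer(delta, delta)
    np.fill_diagonal(same, False)
    return W[same].sum()

rng = np.random.default_rng(2)
X, Y = rng.standard_normal((30, 2)), rng.standard_normal((30, 2)) + 0.5
Z, delta = np.vstack([X, Y]), np.r_[np.ones(30), np.zeros(30)]
print(delaunay_statistic(delaunay_weight_matrix(Z), delta))
```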
3.3 Approximation of Delaunay Weight
In practice, the underlying manifold $\mathcal{M}$ is typically unknown, so exact computation of the Delaunay weight matrix $W$ via Definition 2 (and thus of $T_{\mathrm{DW}}$) is infeasible. Because $W$ is constructed from Delaunay weight vectors that only measure the geometric proximity between $z_i$ and the points locally surrounding it (i.e., the vertices of $\Delta(z_i)$), we can utilize manifold learning techniques to approximate $W$ by $\widetilde{W}$, the Delaunay weight matrix of a low-dimensional Euclidean representation $\widetilde{\mathcal{Z}} = \{\tilde{z}_1, \ldots, \tilde{z}_N\} \subset \mathbb{R}^{\hat{d}}$, which can be computed. Generally speaking, any nonlinear dimension reduction technique that aims to preserve the local geometric structure of $\mathcal{Z}$ can be applied to obtain $\widetilde{\mathcal{Z}}$. However, a $k$-NN based nonlinear dimension reduction technique may yield a proximity graph of $\mathcal{Z}$ with more than one connected component. In such cases, the elements of $\widetilde{W}$ corresponding to all pairs of points from different components must be zero, which incurs information loss and power deterioration. To circumvent this issue, we suggest adopting a procedure that returns a connected low-dimensional representation $\widetilde{\mathcal{Z}}$. Specifically, we modify the robust manifold learning algorithm in Dimeglio et al. (2014), as detailed in Section S2 of the supplementary material, to estimate the geodesic distances, and apply the classical MDS (Kruskal, 1964) to obtain $\widetilde{\mathcal{Z}}$. For the unknown intrinsic dimension $d$, any existing method can be applied to obtain an estimate $\hat{d}$, e.g., that of Facco et al. (2017) as summarized in Section S2 of the supplementary material. Our numerical experiments show that our method is robust to the estimation of $d$.
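As a stand-in for the modified algorithm of Dimeglio et al. (2014), which is detailed only in the supplementary material, the sketch below illustrates the same pipeline with common building blocks: a $k$-NN graph for geodesic distance estimation (Isomap-style), classical MDS via double centering, and the TWO-NN intrinsic dimension estimator of Facco et al. (2017). The choice $k = 10$ is our own illustrative default.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph, NearestNeighbors

def manifold_representation(Z, d_hat, k=10):
    """Estimate geodesic distances on a k-NN graph, then embed by classical MDS.

    Note: a plain k-NN graph may be disconnected (infinite graph distances);
    the paper's modified algorithm guarantees a connected graph instead.
    """
    G = kneighbors_graph(Z, n_neighbors=k, mode="distance")
    D = shortest_path(G, directed=False)       # graph-geodesic distance matrix
    n = len(Z)
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered squared distances
    eigval, eigvec = np.linalg.eigh(B)
    top = np.argsort(eigval)[::-1][:d_hat]     # keep the d_hat leading components
    return eigvec[:, top] * np.sqrt(np.maximum(eigval[top], 0.0))

def twonn_dimension(Z):
    """TWO-NN estimate of the intrinsic dimension (Facco et al., 2017):
    d_hat = n / sum_i log(r2_i / r1_i), with r1, r2 the 1st and 2nd NN distances."""
    dist, _ = NearestNeighbors(n_neighbors=3).fit(Z).kneighbors(Z)
    mu = dist[:, 2] / dist[:, 1]
    return len(Z) / np.log(mu).sum()
```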
To compute $\widetilde{W}$, we note the equivalence of conditions (c†) and (c‡) in (2) for the low-dimensional Euclidean representation $\widetilde{\mathcal{Z}}$. Once we know the Delaunay simplex $\Delta(\tilde{z}_i)$, with vertices $\tilde{z}_{v_1}, \ldots, \tilde{z}_{v_{\hat{d}+1}}$, that contains the projection $p^*$ of $\tilde{z}_i$ on the convex hull $\mathrm{conv}(\widetilde{\mathcal{Z}}_{-i})$, we can easily compute the weights by solving the linear equations,
$$\begin{pmatrix} \tilde{z}_{v_1} & \cdots & \tilde{z}_{v_{\hat{d}+1}} \\ 1 & \cdots & 1 \end{pmatrix} \begin{pmatrix} w_{v_1} \\ \vdots \\ w_{v_{\hat{d}+1}} \end{pmatrix} = \begin{pmatrix} p^* \\ 1 \end{pmatrix}. \tag{5}$$
The remaining difficulty in computing $\widetilde{W}$ lies in finding the Delaunay simplex $\Delta(\tilde{z}_i)$, for $i = 1, \ldots, N$.
Although the DELAUNAYSPARSE algorithm of Chang et al. (2020) can efficiently find $\Delta(p)$ for $p \in \mathrm{conv}(\widetilde{\mathcal{Z}})$, it does not work for $p \notin \mathrm{conv}(\widetilde{\mathcal{Z}})$. To tackle this difficulty, we incorporate the inverse stereographic projection into the DELAUNAYSPARSE algorithm and develop the stereographic projected DELAUNAYSPARSE algorithm, which finds $\Delta(p)$ regardless of whether $p \in \mathrm{conv}(\widetilde{\mathcal{Z}})$ or not. Without loss of generality, we assume the Euclidean representation $\widetilde{\mathcal{Z}}$ is centered at the origin. The stereographic projected DELAUNAYSPARSE algorithm is given as follows.
1. We first compute $R = \max_{1 \le i \le N} \|\tilde{z}_i\|$, set a scaling parameter $\rho > R$ and apply the inverse stereographic projection,
$$\pi^{-1}(v) = \frac{\rho}{\|v\|^2 + \rho^2} \left( 2\rho\, v, \; \|v\|^2 - \rho^2 \right), \quad v \in \mathbb{R}^{\hat{d}}, \tag{6}$$
to project $\widetilde{\mathcal{Z}}$ onto the $\hat{d}$-dimensional sphere $\mathbb{S}_{\rho}^{\hat{d}} = \{x \in \mathbb{R}^{\hat{d}+1} : \|x\| = \rho\}$ (a code sketch of (6) follows this list). Let $\check{z}_i = \pi^{-1}(\tilde{z}_i)$, for $i = 1, \ldots, N$, and let the reference point be the projection pole $z_0 = (0, \ldots, 0, \rho)$. We define the augmented point set as $\check{\mathcal{Z}} = \{\check{z}_1, \ldots, \check{z}_N, z_0\}$. A graphical illustration of obtaining $\check{\mathcal{Z}}$ from $\widetilde{\mathcal{Z}}$ is exhibited in Figure 5 (a).
2. As shown in Figure 5 (b), for any $\rho$, the union of simplices in the Delaunay triangulation $DT(\check{\mathcal{Z}})$ covers the whole sphere $\mathbb{S}_{\rho}^{\hat{d}}$. By projecting $DT(\check{\mathcal{Z}})$ back to $\mathbb{R}^{\hat{d}}$, under the convention that the geodesic path $\gamma_{z_0 \check{z}}$ (for any $\check{z} \in \mathbb{S}_{\rho}^{\hat{d}}$) is mapped to the ray in $\mathbb{R}^{\hat{d}}$ emanating from the image of $\check{z}$ away from the origin, an approximation of $DT(\widetilde{\mathcal{Z}})$ is obtained, as shown in Figure 5 (c). Figure 5 (d) shows that such an approximation converges to $DT(\widetilde{\mathcal{Z}})$ as the scaling parameter $\rho$ tends to infinity.
3. Because $DT(\check{\mathcal{Z}})$ covers $\mathbb{S}_{\rho}^{\hat{d}}$, for any $\tilde{z}_i$ there must exist a Delaunay simplex in $DT(\check{\mathcal{Z}})$ that contains $\check{z}_i$ on $\mathbb{S}_{\rho}^{\hat{d}}$. Thus, we use a breadth-first search procedure similar to the DELAUNAYSPARSE algorithm (Chang et al., 2020) to compute this Delaunay simplex $\check{\Delta}_i$. Finally, if $z_0 \notin V(\check{\Delta}_i)$, the simplex $\Delta(\tilde{z}_i)$ on $\mathbb{R}^{\hat{d}}$ is obtained as the simplex formed by the back-projections of $V(\check{\Delta}_i)$; if $z_0 \in V(\check{\Delta}_i)$, $\Delta(\tilde{z}_i)$ is obtained from the neighbor simplex of $\check{\Delta}_i$ opposite to $z_0$.
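The exact convention of the projection in (6) is not essential as long as it is inverted consistently; below is a sketch of one standard form (our own choice of convention) that maps $\mathbb{R}^{\hat{d}}$ onto the radius-$\rho$ sphere, with the pole $z_0 = (0, \ldots, 0, \rho)$ as the projection point.

```python
import numpy as np

def inverse_stereographic(V, rho):
    """Map points V (n x d, centered at the origin) onto the radius-rho sphere in R^{d+1}.

    Each v maps to the intersection of the sphere with the line through the
    pole (0, ..., 0, rho) and the point (v, 0); a large rho places the images
    near the opposite pole, where the projection is nearly isometric.
    """
    s = (V ** 2).sum(axis=1, keepdims=True)            # ||v||^2 per row
    X = np.hstack([2.0 * rho * V, s - rho ** 2]) * rho / (s + rho ** 2)
    return X                                           # every row has norm rho

rng = np.random.default_rng(3)
V = rng.standard_normal((100, 2))
V = V - V.mean(axis=0)                                 # center at the origin
rho = 10.0 * np.linalg.norm(V, axis=1).max()           # illustrative choice of rho > R
X = inverse_stereographic(V, rho)
print(np.allclose(np.linalg.norm(X, axis=1), rho))     # all images lie on the sphere
```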
The stereographic projected DELAUNAYSPARSE algorithm is detailed in Section S3 of the supplementary material. Note that we only need to generate the local (but not full) Delaunay triangulation to find the Delaunay simplex $\Delta(\tilde{z}_i)$, and thus the computation is not intensive; the computational complexity of the stereographic projected DELAUNAYSPARSE algorithm is derived in Section S3 of the supplementary material. As the computational complexity of solving (5) via Gaussian elimination is $O(\hat{d}^3)$ and the sample size $N$ is usually much larger than the intrinsic dimension $\hat{d}$, the overall cost of computing the approximation $\widetilde{W}$ is dominated by the simplex search. As shown in Figure 5 (d), the difference between $DT(\widetilde{\mathcal{Z}})$ and its approximation with a moderately large $\rho$ is negligible. Because computation of the Delaunay weight is unrelated to $\rho$ provided that the simplex is correctly identified, we suggest taking $\rho$ to be a large multiple of the data radius $R$ in practice.
3.4 Permutation Test
The Delaunay weighted test statistic $T_{\mathrm{DW}}$ in (4) aims to detect differences between the underlying distributions $F$ and $G$ via the deviation of the sum of within-group weights from its expectation under the null. Figure 4 gives an illustrative example of the Delaunay weight matrix with group indicators $\delta_i = 1$ for $i \le 4$ and $\delta_i = 0$ otherwise. If we partition $W$ at the 4-th row and column, $T_{\mathrm{DW}}$ equals the sum of all Delaunay weights in the two diagonal blocks of $W$. Generally, the off-diagonal elements of $W$ are identically distributed under the null. In contrast, under $H_1$, more points of $\mathcal{X}$ are located in the region where the density of $F$ is higher than that of $G$; for these points, the expected proportion of $\mathcal{X}$-points among the vertices of their Delaunay simplices is larger than that of $\mathcal{Y}$-points. A similar argument holds for $\mathcal{Y}$. Thus, $T_{\mathrm{DW}}$ is stochastically larger under the alternative than under the null.
The exact distribution of $T_{\mathrm{DW}}$ under the null is complex and difficult to derive, and thus we implement a permutation procedure to compute the $p$-value.
1. Compute the observed value of the test statistic $T_{\mathrm{DW}}$ via (4).

2. For $b = 1, \ldots, B$, randomly generate a permutation $\delta^{(b)}$ of the group indicators $\delta$, which preserves the group sizes $m$ and $n$, and compute the statistic $T_{\mathrm{DW}}^{(b)}$ under $\delta^{(b)}$.

3. Compute
$$p\text{-value} = \frac{1 + \sum_{b=1}^{B} \mathbb{1}\{T_{\mathrm{DW}}^{(b)} \ge T_{\mathrm{DW}}\}}{1 + B},$$
where $1$ is added in both the numerator and denominator to calibrate the test so that $\Pr(p\text{-value} \le \alpha) \le \alpha$ under $H_0$, for all $\alpha \in (0, 1)$ (a code sketch follows this list). Indeed, under $H_0$, $T_{\mathrm{DW}}, T_{\mathrm{DW}}^{(1)}, \ldots, T_{\mathrm{DW}}^{(B)}$ are i.i.d. given the pooled sample $\mathcal{Z}$ and the sample sizes $(m, n)$. Therefore, the $p$-value is uniformly distributed on $\{1/(B+1), 2/(B+1), \ldots, 1\}$, which converges to the uniform distribution on $(0, 1)$ as $B \to \infty$.
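The whole procedure above fits in a few lines. The sketch below is our own illustration (with `delaunay_statistic` the hypothetical helper from Section 3.2): it permutes the group labels, recomputes the statistic and returns the calibrated $p$-value.

```python
import numpy as np

def delaunay_statistic(W, delta):
    same = np.equal.outer(delta, delta)
    np.fill_diagonal(same, False)
    return W[same].sum()

def permutation_pvalue(W, delta, B=1000, seed=None):
    """Permutation p-value for the Delaunay weighted test (steps 1-3 above)."""
    rng = np.random.default_rng(seed)
    t_obs = delaunay_statistic(W, delta)                       # step 1
    t_perm = np.array([delaunay_statistic(W, rng.permutation(delta))
                       for _ in range(B)])                     # step 2
    return (1 + (t_perm >= t_obs).sum()) / (1 + B)             # step 3
```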
Under the unknown manifold setting where $W$ cannot be computed exactly, we replace it by $\widetilde{W}$ for approximate inference. Unlike most existing tests based on pairwise distances only, the Delaunay weight takes the relative directions among data points into account, and thus our test gains more power in detecting direction differences between $F$ and $G$.
4 Theoretical Analysis
For convenience, we use the subscript "0" in $\mathrm{E}_0$ and $\mathrm{Var}_0$ to denote expectation and variance under the condition that $H_0$ is true. To deduce asymptotic properties of the test statistic $T_{\mathrm{DW}}$, it is common to work with the distribution of the unconditional standardized statistic. However, since $T_{\mathrm{DW}}$ is a function of both $\delta$ and the Delaunay weight matrix $W$, it is difficult to obtain an explicit expression of its variance under the null due to the discrete nature of the Delaunay triangulation. Therefore, we first provide the expectation and variance of the test statistic conditional on $\mathcal{Z}$, under which the proximity graph based on $\mathcal{Z}$ becomes nonrandom, as in Chen and Friedman (2017) and Chen et al. (2018).
Theorem 1.
Let $\mathcal{Z}$ be a set of $N$ generic points on a $d$-dimensional geodesically convex Riemannian manifold $\mathcal{M}$. When $H_0$ is true, the conditional mean $\mathrm{E}_0(T_{\mathrm{DW}} \mid \mathcal{Z})$ and the conditional variance $\mathrm{Var}_0(T_{\mathrm{DW}} \mid \mathcal{Z})$ admit closed-form expressions that depend only on the sample sizes $(m, n)$ and on average values of the entries of $W$, of their row and column sums, and of their pairwise products; the explicit formulas are given in Section S4 of the supplementary material.
The proof of Theorem 1 is based on the fact that, conditional on $\mathcal{Z}$ under $H_0$, the group indicator $\delta$ is uniformly distributed over all realizations containing $m$ ones and $n$ zeros, as detailed in Section S4 of the supplementary material. Given the finite-sample conditional mean $\mathrm{E}_0(T_{\mathrm{DW}} \mid \mathcal{Z})$ and conditional variance $\mathrm{Var}_0(T_{\mathrm{DW}} \mid \mathcal{Z})$, we deduce the asymptotic null distribution of the standardized statistic.
Theorem 2.
Assume that (i) $m/N \to \lambda \in (0, 1)$ as $N \to \infty$; (ii) $F = G$ and they have Lipschitz continuous densities with respect to the measure on $\mathcal{M}$ induced by the Lebesgue measure on the ambient Euclidean space; and (iii) the maximum vertex degree of $DT(\mathcal{Z})$ is of the same order as its average degree (the precise condition is stated in the supplementary material). As $N \to \infty$, we have
$$\frac{T_{\mathrm{DW}} - \mathrm{E}_0(T_{\mathrm{DW}} \mid \mathcal{Z})}{\sqrt{\mathrm{Var}_0(T_{\mathrm{DW}} \mid \mathcal{Z})}} \xrightarrow{d} N(0, 1), \tag{7}$$
where $\xrightarrow{d}$ denotes convergence in distribution.
The proof of Theorem 2 is given in Section S5 of the supplementary material. In Theorem 2, conditions (i) and (ii) are standard and mild. To gain insight into condition (iii), we define a hub as a point whose degree is much larger than the average degree in $DT(\mathcal{Z})$, which only occurs in the rare case that many data points lie close to a common sphere on $\mathcal{M}$. Condition (iii) regularizes the Delaunay triangulation $DT(\mathcal{Z})$ and is satisfied if no hub exists in $DT(\mathcal{Z})$. Empirical evidence for Theorem 2 is provided in Section S7 of the supplementary material.
The consistency of the Delaunay weighted test with the permutation procedure can be shown as follows.
Theorem 3.
Under the same conditions as in Theorem 2, the Delaunay weighted test using $T_{\mathrm{DW}}$ is consistent against all alternatives. That is, for any significance level $\alpha \in (0, 1)$ and any $F \neq G$, the Delaunay weighted test with the permutation procedure rejects the null with asymptotic probability one.
The proof of Theorem 3 follows Henze and Penrose (1999), as detailed in Section S6 of the supplementary material, where we show that a suitably scaled $T_{\mathrm{DW}}$ converges a.s. to a strictly larger limit under the alternative than under the null, while the scaled null variance converges to zero as $N \to \infty$. Although the test statistic is constructed from $\widetilde{W}$ instead of $W$ in practice, the consistency of the test is preserved as long as the mapping from $\mathcal{Z}$ to $\widetilde{\mathcal{Z}}$ is one-to-one, a standard assumption in manifold learning.
5 Experiments
To illustrate the empirical performance of the Delaunay weighted test, we conduct extensive experiments on both simulated and real data. For comparison, we also implement several existing nonparametric tests, including the $k$-NN test (Hall and Tajvidi, 2002), the energy distance test (Székely and Rizzo, 2004), the covariance test (Cai et al., 2013a), the $k$-MST test (Chen and Friedman, 2017), the regression test (Kim et al., 2019) and the kernel test (Gretton et al., 2012; Cheng and Xie, 2021). Among these methods, the (local) regression test with $k$-NN regression and the kernel test are adaptive to the intrinsic dimension of the underlying manifold; we adopt the choice of $k$ suggested by Kim et al. (2019) for the regression test, and for the kernel test we follow the suggestion of Cheng and Xie (2021) that the bandwidth should scale with the intrinsic dimension and the sample size. As the Delaunay weight vector $w(z_i; \mathcal{Z}_{-i})$ contains at most $\hat{d} + 1$ nonzero components, we use $k = \hat{d} + 1$ in the $k$-NN test and the $k$-MST test for fair comparison. The covariance test is consistent only if the covariances of $F$ and $G$ differ, but it is of interest to examine its performance under general cases. The number of permutations $B$ in the standard permutation procedure is held fixed across all permutation-based tests.
5.1 Scenario 1: Euclidean Data
To validate the size of the Delaunay weighted test on Euclidean data, i.e., $\mathcal{M} = \mathbb{R}^D$, we generate 1000 Gaussian datasets under the null $F = G$. In each dataset, $\mathcal{X}$ and $\mathcal{Y}$ respectively contain $m$ and $n$ i.i.d. samples from the multivariate normal distribution $N(\mathbf{0}, I_D)$, where $I_D$ is the $D \times D$ identity matrix. We consider different dimensions $D$ and sample sizes $(m, n)$. Figure 6 shows the empirical cumulative distribution functions (ECDFs) of the $p$-value of the Delaunay weighted test under several cases, while Figure S3 shows the ECDFs for the other cases. The ECDF of the $p$-value is close to that of the uniform distribution under all dimensions and sample sizes, suggesting that the Delaunay weighted test can control the type I error rate at any target level $\alpha$.
We also investigate the power of the Delaunay weighted test in distinguishing differences between distributions $F$ and $G$ on 1000 Gaussian datasets. We consider both the location alternative and the direction alternative: the former imposes a difference in the location parameters (mean or median) of $F$ and $G$, while the latter sets the same location parameter for both distributions but lets their covariance matrices have different principal directions (directions with large variance).
Under the location alternative, $\mathcal{X}$ contains independent samples from $N(\mathbf{0}, I_D)$, while the samples in $\mathcal{Y}$ are shifted in location by an independent random amount drawn uniformly from a dimension-dependent interval. Under the direction alternative, both samples $\mathcal{X}$ and $\mathcal{Y}$ are generated from zero-mean Gaussian distributions whose covariance matrices share the same eigenvalues but differ in their principal directions (a data-generating sketch is given below).
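For concreteness, a direction alternative of this type can be simulated as follows. This is a sketch with our own illustrative eigenvalues and rotation angle, not the paper's exact configuration: both covariance matrices share the same eigenvalues, but the leading principal direction of the second is rotated within the first two coordinates.

```python
import numpy as np

def rotated_covariance(D, angle, var_major=5.0, var_minor=1.0):
    """Covariance with one dominant eigenvalue whose eigenvector is rotated by `angle`."""
    Sigma = np.diag([var_major] + [var_minor] * (D - 1))
    R = np.eye(D)
    c, s = np.cos(angle), np.sin(angle)
    R[:2, :2] = [[c, -s], [s, c]]          # rotation in the first two coordinates
    return R @ Sigma @ R.T

rng = np.random.default_rng(4)
D, m, n = 10, 100, 100
X = rng.multivariate_normal(np.zeros(D), rotated_covariance(D, 0.0), size=m)
Y = rng.multivariate_normal(np.zeros(D), rotated_covariance(D, np.pi / 4), size=n)
```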
Figure 7 presents ECDFs of the $p$-value of the Delaunay weighted test in comparison with other methods under the location alternative. For both dimension settings, the power of the Delaunay weighted test tends to one as the total sample size grows, demonstrating its ability to distinguish the location difference of distributions. Compared with the existing methods, our test outperforms the $k$-NN test and the regression test, and yields power comparable to the $k$-MST test, while the kernel test and the energy distance test possess the highest power. As expected, the covariance test is powerless, because it is not consistent in distinguishing the location difference.
Figure 8 presents empirical distributions of the $p$-value under the direction alternative, where our method outperforms all the tests in comparison, except that the covariance test has comparable power. Under the direction alternative, our test, which takes the direction information into account, is more powerful than the tests using only distance information. The advantage of our Delaunay weighted test over the others amplifies further as the dimension $D$ grows, because the information contained in the Euclidean distance diminishes with growing $D$.
5.2 Scenario 2: Manifold Data
To validate the effectiveness of the Delaunay weighted test on high-dimensional data that lie on a low-dimensional manifold, we use a digit image from the MNIST data, shown in Figure 9 (a), to generate synthetic datasets. The size of the original image is $28 \times 28$, leading to $784$ pixels in total with gray-scale values in a fixed range. To generate synthetic data, we enlarge the original image by padding rows (or columns) of blank pixels to both the top and bottom (or left and right), as shown in Figure 9 (b). Each pixel is assigned a 2D coordinate at its center via the Cartesian coordinate system, as shown in Figure 9 (c). We next apply a rotation and shift distortion to the padded image, where the pixel located at $(u, v)$ is projected to $(u \cos\theta - v \sin\theta + s_1, \; u \sin\theta + v \cos\theta + s_2)$, with $\theta$ the rotation angle and $s_1$ and $s_2$ the horizontal and vertical shift distances, respectively. We then align the gray scale of all pixels of each synthetic image by rows into a vector of $D$ components; these vectors lie on a manifold with intrinsic dimension $d = 3$ (one rotation plus two shift parameters).
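A sketch of this data-generating mechanism is given below (our own approximation using scipy's image routines with bilinear interpolation, rather than the paper's exact pixel-coordinate projection): each draw of $(\theta, s_1, s_2)$ produces one distorted image, flattened into a single high-dimensional observation.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def distorted_observation(padded, theta_deg, s1, s2):
    """Rotate the padded digit image by theta_deg about its center, shift it
    by (s1, s2) pixels, and flatten the result into one D-dimensional vector."""
    out = rotate(padded, theta_deg, reshape=False, order=1)
    out = shift(out, (s2, s1), order=1)        # (row, column) = (vertical, horizontal)
    return out.ravel()

rng = np.random.default_rng(5)
padded = np.zeros((48, 48))                    # illustrative padded size
padded[10:38, 10:38] = rng.random((28, 28))    # stand-in for the 28 x 28 MNIST digit
x = distorted_observation(padded, rng.uniform(-30, 30),
                          rng.uniform(-5, 5), rng.uniform(-5, 5))
print(x.shape)                                 # (2304,) -- one observation in R^D
```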
To validate the size and power of the Delaunay weighted test on the manifold data, we randomly generate 1000 synthetic datasets by applying the rotation and shift distortion to the padded image. The transformation parameters $(\theta, s_1, s_2)$ follow a uniform distribution over domains that differ across the following scenarios.
• Null: the same parameter domain for both $\mathcal{X}$ and $\mathcal{Y}$.

• Location alternative: the parameter domain for $\mathcal{Y}$ is a translated copy of that for $\mathcal{X}$.

• Direction alternative: the parameter domains for $\mathcal{X}$ and $\mathcal{Y}$ share the same center but differ in orientation.
To investigate how the intrinsic dimension affects the test performance, we implement the Delaunay weighted test and the kernel test with both the true $d$ and the estimate $\hat{d}$ obtained by Algorithm S1 in the supplementary material. Table 1 presents rejection proportions of all tests under the different scenarios. Although $\hat{d}$ usually overestimates $d$, the performance of our test is robust to this; sometimes the test using $\hat{d}$ even outperforms the one using the true $d$, possibly because rounding of pixel values after distortion increases the intrinsic dimension. Under the direction alternative, the Delaunay weighted test outperforms all the others regardless of whether the true $d$ or the estimate $\hat{d}$ is used, which suggests its effectiveness for manifold data with direction differences. Under the location alternative, although the kernel test often performs best, the Delaunay weighted test still yields comparable power and outperforms all other tests. The covariance test performs poorly, mainly because the sparsity assumption does not hold for the manifold data.
Table 1: Rejection proportions (%) under the null, the location alternative and the direction alternative at nominal levels $\alpha = 1\%$, $5\%$ and $10\%$ (the three columns within each scenario), for three increasing sample-size settings.

| Tests | $d$ | Null 1% | Null 5% | Null 10% | Loc. 1% | Loc. 5% | Loc. 10% | Dir. 1% | Dir. 5% | Dir. 10% |
|---|---|---|---|---|---|---|---|---|---|---|
| **Sample-size setting 1** | | | | | | | | | | |
| Delaunay weighted | True | 1.0 | 4.4 | 9.6 | 8.7 | 26.5 | 39.3 | 15.1 | 39.3 | 54.4 |
| Delaunay weighted | Estimate | 0.6 | 4.9 | 10.1 | 9.0 | 26.7 | 39.5 | 15.6 | 33.8 | 46.7 |
| Kernel | True | 0.9 | 3.8 | 8.3 | 6.1 | 22.1 | 34.0 | 6.5 | 23.6 | 38.9 |
| Kernel | Estimate | 1.2 | 6.2 | 10.6 | 9.5 | 27.1 | 39.8 | 4.9 | 21.0 | 32.9 |
| Regression | – | 1.3 | 6.1 | 11.5 | 4.2 | 15.8 | 25.6 | 3.6 | 13.8 | 23.5 |
| $k$-NN | – | 1.3 | 5.2 | 10.1 | 3.1 | 10.9 | 20.1 | 3.8 | 13.4 | 21.8 |
| $k$-MST | – | 0.7 | 5.4 | 10.4 | 1.7 | 8.1 | 14.0 | 4.8 | 15.8 | 24.5 |
| Energy distance | – | 0.7 | 4.3 | 9.2 | 9.1 | 20.3 | 26.6 | 1.8 | 12.4 | 21.8 |
| Covariance | – | 0.0 | 0.1 | 0.1 | 1.0 | 3.8 | 5.5 | 0.1 | 0.5 | 0.9 |
| **Sample-size setting 2** | | | | | | | | | | |
| Delaunay weighted | True | 1.1 | 5.3 | 10.6 | 10.6 | 30.5 | 43.2 | 27.8 | 53.7 | 67.2 |
| Delaunay weighted | Estimate | 0.7 | 4.1 | 10.4 | 9.1 | 28.4 | 41.4 | 33.4 | 60.6 | 73.1 |
| Kernel | True | 0.7 | 4.5 | 9.8 | 9.4 | 27.1 | 38.6 | 11.1 | 32.7 | 47.7 |
| Kernel | Estimate | 1.5 | 5.6 | 10.4 | 14.1 | 34.7 | 47.3 | 10.6 | 30.7 | 46.8 |
| Regression | – | 1.1 | 4.5 | 8.3 | 8.8 | 20.2 | 32.0 | 7.6 | 21.2 | 32.1 |
| $k$-NN | – | 0.7 | 3.9 | 8.9 | 2.2 | 9.9 | 17.7 | 6.5 | 19.0 | 29.5 |
| $k$-MST | – | 1.0 | 6.4 | 12.2 | 2.5 | 7.7 | 13.4 | 6.4 | 17.4 | 26.6 |
| Energy distance | – | 0.8 | 5.4 | 9.4 | 13.9 | 25.3 | 31.8 | 3.0 | 14.6 | 27.2 |
| Covariance | – | 0.6 | 1.4 | 2.5 | 12.5 | 25.8 | 34.1 | 3.1 | 7.9 | 13.3 |
| **Sample-size setting 3** | | | | | | | | | | |
| Delaunay weighted | True | 0.7 | 4.3 | 11.4 | 20.2 | 48.5 | 60.2 | 40.7 | 69.3 | 79.3 |
| Delaunay weighted | Estimate | 1.8 | 5.5 | 14.3 | 31.0 | 56.6 | 66.9 | 43.6 | 72.7 | 84.4 |
| Kernel | True | 0.4 | 5.8 | 12.1 | 19.1 | 44.4 | 58.2 | 23.3 | 52.9 | 68.3 |
| Kernel | Estimate | 0.9 | 3.7 | 7.9 | 27.0 | 49.5 | 64.1 | 23.8 | 55.0 | 70.6 |
| Regression | – | 1.6 | 4.6 | 10.0 | 9.6 | 27.6 | 42.6 | 14.4 | 37.8 | 48.9 |
| $k$-NN | – | 1.6 | 8.3 | 12.8 | 4.1 | 15.1 | 29.3 | 9.5 | 26.2 | 38.1 |
| $k$-MST | – | 0.7 | 5.4 | 11.4 | 4.1 | 16.8 | 24.7 | 14.5 | 30.2 | 40.7 |
| Energy distance | – | 0.9 | 4.3 | 11.0 | 23.4 | 36.9 | 42.0 | 6.2 | 23.6 | 39.5 |
| Covariance | – | 0.0 | 0.0 | 0.2 | 14.3 | 24.7 | 34.4 | 4.8 | 12.8 | 18.4 |
5.3 Real Data Analysis: Human Face Images
For illustration, we apply the Delaunay weighted test to the Human Face Image (HFI) dataset (available at https://www.kaggle.com/datasets/nipunarora8/age-gender-and-ethnicity-face-data-csv), a subset of the large-scale UTKFace dataset (Zhang et al., 2017). The HFI dataset contains 27305 gray-scale images of individuals of different races (White, Black, Asian, Indian, and Others) and a wide range of ages. We extract the images of individuals between 20 and 29 years old and align the gray scale of all pixels of each image by rows into one high-dimensional vector per image.
We consider two scenarios for the extracted dataset. The first is the null scenario, where for each replication we randomly draw $m + n$ records without replacement and distribute $m$ and $n$ records to $\mathcal{X}$ and $\mathcal{Y}$, respectively. We consider several sample sizes, under which the estimated intrinsic dimension $\hat{d}$ stays within a narrow range. The $p$-values of the Delaunay weighted test over 1000 replications are summarized in the top panel of Figure 10. Similar to the results in Section 5.1, the $p$-value of the Delaunay weighted test is uniformly distributed, as expected.
We also consider the alternative scenario, where the records of the two samples differ in age. Under this scenario, $m$ (or $n$) records of individuals not older than 25 years (or not younger than 26 years) are randomly drawn without replacement to form $\mathcal{X}$ (or $\mathcal{Y}$). Under different sample sizes, the empirical distributions of the $p$-value of the Delaunay weighted test and the compared tests over 1000 replications are shown in the second row of Figure 10. The Delaunay weighted test outperforms all the existing tests in distinguishing human faces of different ages.
References
- Abadie and L’Hour, (2021) Abadie, A. and L’Hour, J. (2021). A Penalized Synthetic Control Estimator for Disaggregated Data. Journal of the American Statistical Association, 116(536):1817–1834.
- Arias-Castro et al., (2018) Arias-Castro, E., Pelletier, B., and Saligrama, V. (2018). Remember the Curse of Dimensionality: The Case of Goodness-of-fit Testing in Arbitrary Dimension. Journal of Nonparametric Statistics, 30(2):448–471.
- Bhattacharya and Patrangenaru, (2003) Bhattacharya, R. and Patrangenaru, V. (2003). Large sample theory of intrinsic and extrinsic sample means on manifolds. The Annals of Statistics, 31(1):1–29.
- Bickel, (1969) Bickel, P. J. (1969). A Distribution Free Version of the Smirnov Two Sample Test in the $p$-Variate Case. The Annals of Mathematical Statistics, 40(1):1–23.
- (5) Cai, T. T., Liu, W., and Xia, Y. (2013a). Two-Sample Covariance Matrix Testing and Support Recovery in High-Dimensional and Sparse Settings. Journal of the American Statistical Association, 108(501):265–277.
- (6) Cai, T. T., Liu, W., and Xia, Y. (2013b). Two-sample Test of High Dimensional Means under Dependence. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(2):349–372.
- Cao et al., (2018) Cao, Y., Nemirovski, A., Xie, Y., Guigues, V., and Juditsky, A. (2018). Change Detection via Affine and Quadratic Detectors. Electronic Journal of Statistics, 12(1):1–57.
- Chang et al., (2017) Chang, J., Zheng, C., Zhou, W.-X., and Zhou, W. (2017). Simulation-based Hypothesis Testing of High Dimensional Means under Covariance Heterogeneity. Biometrics, 73(4):1300–1310.
- Chang et al., (2020) Chang, T. H., Watson, L. T., Lux, T. C. H., Butt, A. R., Cameron, K. W., and Hong, Y. (2020). Algorithm 1012: DELAUNAYSPARSE: Interpolation via a Sparse Subset of the Delaunay Triangulation in Medium to High Dimensions. ACM Transactions on Mathematical Software, 46(4):1–20.
- Chen, (2019) Chen, H. (2019). Sequential Change-point Detection based on Nearest Neighbors. The Annals of Statistics, 47(3):1381–1407.
- Chen et al., (2018) Chen, H., Chen, X., and Su, Y. (2018). A Weighted Edge-Count Two-Sample Test for Multivariate and Object Data. Journal of the American Statistical Association, 113(523):1146–1155.
- Chen and Friedman, (2017) Chen, H. and Friedman, J. H. (2017). A New Graph-Based Two-Sample Test for Multivariate and Object Data. Journal of the American Statistical Association, 112(517):397–409.
- Cheng and Xie, (2021) Cheng, X. and Xie, Y. (2021). Kernel Two-Sample Tests for Manifold Data. arXiv:2105.03425.
- Chwialkowski et al., (2016) Chwialkowski, K., Strathmann, H., and Gretton, A. (2016). A Kernel Test of Goodness of Fit. In Proceedings of The 33rd International Conference on Machine Learning, pages 2606–2615, New York, New York, USA. PMLR.
- Dimeglio et al., (2014) Dimeglio, C., Gallón, S., Loubes, J.-M., and Maza, E. (2014). A Robust Algorithm for Template Curve Estimation based on Manifold Embedding. Computational Statistics & Data Analysis, 70:373–386.
- Facco et al., (2017) Facco, E., d’Errico, M., Rodriguez, A., and Laio, A. (2017). Estimating the Intrinsic Dimension of Datasets by a Minimal Neighborhood Information. Scientific Reports, 7:12140.
- Friedman and Rafsky, (1979) Friedman, J. H. and Rafsky, L. C. (1979). Multivariate Generalizations of the Wald-Wolfowitz and Smirnov Two-Sample Tests. The Annals of Statistics, 7(4):697–717.
- Gretton et al., (2012) Gretton, A., Borgwardt, K. M., Rasch, M. J., Schölkopf, B., and Smola, A. (2012). A Kernel Two-Sample Test. Journal of Machine Learning Research, 13(25):723–773.
- Gretton et al., (2009) Gretton, A., Fukumizu, K., Harchaoui, Z., and Sriperumbudur, B. K. (2009). A Fast, Consistent Kernel Two-Sample Test. In Proceedings of the 22nd International Conference on Neural Information Processing Systems, volume 22, page 673–681, Red Hook, New York, USA. Curran Associates, Inc.
- Hall and Tajvidi, (2002) Hall, P. and Tajvidi, N. (2002). Permutation Tests for Equality of Distributions in High-dimensional Settings. Biometrika, 89(2):359–374.
- Hediger et al., (2022) Hediger, S., Michel, L., and Näf, J. (2022). On the Use of Random Forest for Two-sample Testing. Computational Statistics & Data Analysis, 170:107435.
- Henze, (1988) Henze, N. (1988). A Multivariate Two-Sample Test Based on the Number of Nearest Neighbor Type Coincidences. The Annals of Statistics, 16(2):772–783.
- Henze and Penrose, (1999) Henze, N. and Penrose, M. D. (1999). On the Multivariate Runs Test. The Annals of Statistics, 27(1):290–298.
- Jurečková and Kalina, (2012) Jurečková, J. and Kalina, J. (2012). Nonparametric Multivariate Rank Tests and Their Unbiasedness. Bernoulli, 18(1):229–251.
- Kim et al., (2019) Kim, I., Lee, A. B., and Lei, J. (2019). Global and Local Two-sample Tests via Regression. Electronic Journal of Statistics, 13(2):5253–5305.
- Kolmogorov, (1933) Kolmogorov, A. N. (1933). Sulla Determinazione Empirica di Una Legge di Distribuzione. Giornale dell’Istituto Italiano degli Attuari, 4:83–91.
- Kruskal, (1964) Kruskal, J. (1964). Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29:1–27.
- Leibon and Letscher, (2000) Leibon, G. and Letscher, D. (2000). Delaunay Triangulations and Voronoi Diagrams for Riemannian Manifolds. In Proceedings of the sixteenth annual symposium on Computational geometry, pages 341–349, New York, New York, USA. Association for Computing Machinery.
- Liu et al., (2022) Liu, W., Yu, X., Zhong, W., and Li, R. (2022). Projection Test for Mean Vector in High Dimensions. Journal of the American Statistical Association.
- Mann and Whitney, (1947) Mann, H. B. and Whitney, D. R. (1947). On a Test of Whether one of Two Random Variables is Stochastically Larger than the Other. The Annals of Mathematical Statistics, 18(1):50–60.
- Marozzi, (2015) Marozzi, M. (2015). Multivariate Multidistance Tests for High-dimensional Low Sample Size Case-control Studies. Statistics in Medicine, 34(9):1511–1526.
- Moon et al., (2018) Moon, K. R., Stanley III, J. S., Burkhardt, D., van Dijk, D., Wolf, G., and Krishnaswamy, S. (2018). Manifold Learning-based Methods for Analyzing Single-cell RNA-sequencing Data. Current Opinion in Systems Biology, 7:36–46.
- Morgan and Rubin, (2012) Morgan, K. L. and Rubin, D. B. (2012). Rerandomization to Improve Covariate Balance in Experiments. The Annals of Statistics, 40(2):1263–1282.
- Pless and Souvenir, (2009) Pless, R. and Souvenir, R. (2009). A Survey of Manifold Learning for Images. IPSJ Transactions on Computer Vision and Applications, 1:83–94.
- Rosenbaum, (2005) Rosenbaum, P. R. (2005). An Exact Distribution-free Test Comparing Two Multivariate Distributions based on Adjacency. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(4):515–530.
- Schilling, (1986) Schilling, M. F. (1986). Multivariate Two-sample Tests based on Nearest Neighbors. Journal of the American Statistical Association, 81(395):799–806.
- Sibson, (1978) Sibson, R. (1978). Locally Equiangular Triangulations. The Computer Journal, 21(3):243–245.
- Smirnov, (1948) Smirnov, N. (1948). Table for Estimating the Goodness of Fit of Empirical Distributions. The Annals of Mathematical Statistics, 19(2):279–281.
- Székely and Rizzo, (2004) Székely, G. J. and Rizzo, M. L. (2004). Testing for Equal Distributions in High Dimension. InterStat, November(5).
- Wald and Wolfowitz, (1940) Wald, A. and Wolfowitz, J. (1940). On a Test Whether Two Samples are from the Same Population. The Annals of Mathematical Statistics, 11(2):147–162.
- Wang et al., (2019) Wang, W., Lin, N., and Tang, X. (2019). Robust Two-sample Test of High-dimensional Mean Vectors under Dependence. Journal of Multivariate Analysis, 169:312–329.
- Wilcoxon, (1945) Wilcoxon, F. (1945). Individual Comparisons by Ranking Methods. Biometrics Bulletin, 1(6):80–83.
- Yan and Zhang, (2023) Yan, J. and Zhang, X. (2023). Kernel two-sample tests in high dimensions: interplay between moment discrepancy and dimension-and-sample orders. Biometrika, 110(2):411–430.
- Zhang et al., (2017) Zhang, Z., Song, Y., and Qi, H. (2017). Age Progression/Regression by Conditional Adversarial Autoencoder. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4352–4360. IEEE.
- Zhao et al., (2021) Zhao, D., Wang, J., Lin, H., Chu, Y., Wang, Y., Zhang, Y., and Yang, Z. (2021). Sentence Representation with Manifold Learning for Biomedical Texts. Knowledge-Based Systems, 218:106869.