Flexible Bayesian Nonparametric Product Mixtures for Multi-scale Functional Clustering
Abstract
There is a rich literature on clustering functional data with applications to time-series modeling, trajectory data, and even spatio-temporal applications. However, existing methods routinely perform global clustering that enforces identical atom values within the same cluster. Such grouping may be inadequate for high-dimensional functions, where the clustering patterns may change between the more dominant high-level features and the finer resolution local features. While there is some limited literature on local clustering approaches to deal with the above problems, these methods are typically not scalable to high-dimensional functions, and their theoretical properties are not well-investigated. Focusing on basis expansions for high-dimensional functions, we propose a flexible non-parametric Bayesian approach for multi-resolution clustering. The proposed method imposes independent Dirichlet process (DP) priors on different subsets of basis coefficients that ultimately results in a product of DP mixture priors inducing local clustering. We generalize the approach to incorporate spatially correlated error terms when modeling random spatial functions to provide improved model fitting. An efficient Markov chain Monte Carlo (MCMC) algorithm is developed for implementation. We show posterior consistency properties under the local clustering approach that asymptotically recovers the true density of random functions. Extensive simulations illustrate the improved clustering and function estimation under the proposed method compared to classical approaches. We apply the proposed approach to a spatial transcriptomics application where the goal is to infer clusters of genes with distinct spatial patterns of expressions. Our method makes an important contribution by expanding the limited literature on local clustering methods for high-dimensional functions with theoretical guarantees.
Keywords: Global and local clustering · Product of Dirichlet process mixtures · Posterior consistency · Spatial function · Spatial transcriptomics
1 Introduction
Functional data analysis is perhaps one of the most prominent areas of research in statistics and machine learning (Ramsay & Silverman, 2005). This is not surprising given the vast amounts of functional data collected in practice, ranging from time-series data (Aghabozorgi et al., 2015), spatial data (Mateu & Romano, 2017), spatio-temporal data (Atluri et al., 2018), among others. In contrast to a rich literature for analyzing functional datasets (Wang et al., 2015), Bayesian literature on functional data clustering is (somewhat surprisingly) not very well-developed (Zhang & Parnell, 2023). Most Bayesian functional data clustering methods rely on parametric finite mixture models (Bouveyron et al., 2019; Chamroukhi & Nguyen, 2019) or more flexible non-parametric infinite mixtures (Frühwirth-Schnatter & Malsiner-Walli, 2019).
Bayesian functional clustering approaches typically assign a global membership, creating subgroups in which functions share the same cluster membership within each subgroup. These global clustering methods produce distinct groups of functions defined by overarching patterns; however, they may not capture the heterogeneity in localized features within the functions lying in the same cluster. Some examples of global clustering methods include Ray & Mallick (2006), Crandell & Dunson (2011), Bigelow & Dunson (2009), and Rodriguez & Dunson (2014). We refer the readers to Wade (2023) for a more comprehensive review of such approaches. The majority of Bayesian global clustering methods have focused their implementation on one-dimensional functions such as time-series curves or longitudinal trajectories. While extending these approaches to higher-order functions (such as spatial functions or images) should be feasible in principle, it is not straightforward in practice. This is due to the fact that higher-order functions involve high-dimensional parameters that are challenging to cluster using classical global clustering methods. In this context, Chandra et al. (2023) showed that classical Bayesian mixture modeling approaches produce global clusterings that are susceptible to poor performance and degenerate clustering in high-dimensional settings. In particular, these approaches may produce an unrealistically large number of clusters, or collapse to just one cluster, when the model dimension far exceeds the sample size. While these results in Chandra et al. (2023) were developed with respect to clustering of multivariate vectors, they naturally generalize to our functional data analysis settings of interest.
We conjecture that a potential reason for the inadequate performance of global clustering approaches in high-dimensional settings may stem from their inability to account for: (i) heterogeneity of localized features (Di Iorio & Vantini, 2023); and (ii) scenarios where a small subset of important features separates the clusters (Floriello & Vitelli, 2017). To circumvent difficulties associated with global clustering, a limited number of local clustering methods have been proposed that enable flexible clustering at different resolutions or domains (e.g. Dunson, 2009; Petrone et al., 2009; Rodríguez et al., 2010). These approaches produce distinct clustering allocations across various subsets of model parameters characterizing the function, and thereby relax the restrictions inherent in global clustering. This enables one to account for greater heterogeneity and flexibility when clustering high-dimensional functions, which is desirable.
Local clustering approaches can be broadly categorized into two classes. The first class of methods often directly clusters the element-wise mean parameters corresponding to the different observed instances within the function domain, while incorporating spatial dependence via these mean parameters. Petrone et al. (2009) proposed a hybrid Dirichlet process prior that formulates the individual functions as hybrids of global functions drawn from a Gaussian process. A latent Gaussian copula is used to allow local surface selection. Related alternative approaches were proposed by Rodríguez et al. (2010) and Nguyen & Gelfand (2011). The above approaches are quite useful, but can be computationally intensive due to the need to update as many labeling indicator vectors as the number of locations within MCMC sampling. The second class of methods relies on basis expansions to smooth the mean function, while imposing suitable mixture priors on the basis coefficients. Dunson (2009) proposed local partition processes (LPP) that enabled independent clustering across basis coefficients, while also specifying an additional shared mixture component for global clustering. Suarez & Ghosal (2016) placed a spike-and-slab prior independently on wavelet basis coefficients, and developed a post-MCMC decision-theoretic criterion to consolidate the local clusters to inform global clustering. Recently, Fan & Sarkar (2023) proposed a hidden Markov model for local partitioning and clustering of multiple time-series data.
While local clustering approaches are potentially more adept at modeling high-dimensional functions, there are still several unmet challenges. This is exemplified by an increase in the number of possible clusters to $\prod_{m=1}^{p} k_m$ when clustering a function with $p$ observed points, where $k_m$ denotes the number of mixture components for the $m$th coefficient ($m = 1, \ldots, p$). Therefore, the increase in flexibility due to a greater number of mixture components under local clustering comes at the cost of greater computational complexity, which must be controlled via careful modeling considerations. Moreover, the theoretical implications of Bayesian local clustering methods are not particularly well-understood, to our knowledge. Using wavelet expansions, Suarez & Ghosal (2016) established asymptotic consistency in recovering the true (finite-dimensional) model parameters as well as the true partition structure in the ideal case when the observed measurement error associated with the function approaches zero. Unfortunately, the assumption of vanishing noise may not be supported in most practical settings. Nguyen & Gelfand (2011) used a finite mixture model and explored posterior consistency for density estimation in the limiting case when the number of components grows to infinity. While useful, they assumed the same form of the true density as the working model for their theoretical derivations, which may be restrictive. Further, the working model based on finite-dimensional representations (via truncation) is less flexible than the infinite-dimensional mixture representation that is ultimately used for their theoretical analyses. Beyond these limited findings, posterior consistency guarantees pertaining to density estimation of random functions under Bayesian non-parametric local clustering approaches are limited or essentially absent, to our knowledge.
Our focus is on developing practical non-parametric Bayesian product mixture models for local clustering of spatial functions that are scalable to high dimensions, and on investigating theoretical properties such as posterior consistency. We leverage classical basis representations (such as wavelets) of functions to induce spatial smoothing for the observed noisy functional data, while simultaneously allowing for spatially correlated measurement errors for a more flexible characterization. To encourage local clustering among the high-dimensional basis coefficients, we apply independent Dirichlet process (DP) priors to coefficients corresponding to distinct wavelet resolutions. This structure introduces resolution-specific clustering and increases flexibility by delineating the clustering of high-level (or global) features and finer-resolution (or local) features. By allowing the global features to drive the overall clustering process, the method diminishes the impact of local features on the clustering mechanism. This strategy is particularly useful when local features introduce additional noise and heterogeneity, or when the clusters are separated by a small subset of global features only. The proposed approach facilitates dimension reduction by dividing the basis coefficients into smaller subsets, which enhances model parsimony, reduces computational complexity, and improves scalability. Although our method is reminiscent of the local partitioning approach of Dunson (2009), there are important distinctions in terms of the local partitioning methodology and theoretical contributions, as discussed in Section 2.3.
The practical advantages of the proposed approach are complemented by desirable posterior consistency properties for density estimation under mild assumptions. These results are established via careful sieve constructions and corresponding entropy bounds, and build upon recent developments in the non-parametric Bayesian density estimation literature (Canale & De Blasi, 2017; Kundu & Lukemire, 2024) focused on multivariate data. We develop an efficient MCMC algorithm for implementation of the proposed approach based on wavelet representations. Simulation studies illustrate strong advantages of the proposed approach, assuming correlated or independent errors, compared to global clustering approaches in terms of clustering. The proposed method is applied to a spatial transcriptomics dataset focused on clustering the spatially varying gene expression profiles in a breast cancer study, where it infers interpretable and reproducible gene clusters. While we focus on spatial functions in this article, the proposed approach is broadly applicable to a wide class of functions that admit suitable basis representations.
In summary, our approach based on non-parametric Bayesian product mixture models for high-dimensional functional data introduces four main contributions:
- a. Local clustering to address the curse of dimensionality, resulting in improved computational complexity: Using independent DPs on basis coefficients corresponding to distinct wavelet resolutions, the proposed approach partitions the parameter space and results in local clustering across resolutions and global clustering within a given resolution. The resolution-specific priors reduce the dimensionality and improve the computational complexity, as illustrated in Lemma 1.
- b. Theoretical guarantees: We establish posterior consistency properties for density estimation under the proposed local clustering approach (Section 3), expanding the limited theory for Bayesian local clustering of high-dimensional functions.
- c. Inclusion of correlated noise: Our framework incorporates correlated residual noise terms (with independent errors as a special case), which allows an improved characterization of local perturbations that are unrelated to clustering and may accentuate heterogeneity. We introduce low-rank representations of the error covariance that improve scalability to high dimensions while retaining desirable theoretical properties.
- d. Efficient Gibbs sampler: We implement an efficient Markov chain Monte Carlo (MCMC) algorithm (Section 4) that facilitates model implementation for high-dimensional spatial functions.
Among the above notable contributions, the inclusion of correlated noise (c) is motivated by recent literature on measurement error models (Ma & Kundu, 2022). Existing Bayesian functional clustering literature typically ignores correlated error terms distributed over spatial locations, which may amplify local perturbations and ultimately hinder the ability of clustering methods to delineate clear patterns in the data. The proposed method addresses this gap by not only incorporating correlated measurement errors, but also allowing these correlations to vary across subjects to accommodate additional heterogeneity.
The article is structured as follows. We develop the proposed methodology for noisy functional data in Section 2. Section 3 shows the theoretical guarantees including posterior consistency, while Section 4 outlines posterior inference strategy. In Section 5, we present results from extensive simulation studies. Section 6 applies the approach to a spatial transcriptomics data for clustering gene expressions. The Discussion section comments on potential limitations of the proposed methods as well as extensions of this approach to other contexts. Supplementary materials contain proofs of theoretical results, posterior computation steps, and additional details.
2 Methodology
2.1 Functional Data Analysis via Basis Expansion
Consider observed functions $y_i(\cdot)$ for units $i = 1, \ldots, n$, where the support of these functions corresponds to points $s_1, \ldots, s_p \in \mathcal{S}$, and $\mathcal{S}$ refers to the space of all support points where the function can be observed. Assume that each observed function depends on a corresponding underlying random function subject to an additive measurement error. In other words, we consider the model:

$$y_i(s) = f_i(s) + \epsilon_i(s), \quad s \in \mathcal{S}, \quad i = 1, \ldots, n, \tag{1}$$

where $\epsilon_i(s)$ is an additive measurement error for unit $i$ at point $s$, and $f_i(\cdot)$ is the underlying random function that follows some probability distribution assigned to the space of square-integrable measurable functions. Classical functional data analysis frameworks use basis expansions for representing $f_i(\cdot)$ (Morris, 2015). While our approach is generalizable to various types of basis expansions, we focus on wavelet basis functions (Ray & Mallick, 2006; Mallat, 2008) that have been used extensively for modeling spatial functions such as images (Reiss et al., 2015). Wavelet basis expansion provides us with several advantages, such as a natural partition of the coefficient space based on varying resolution levels that is conducive to the proposed multi-scale local clustering approach. Moreover, it is possible to leverage existing efficient computational schemes such as the pyramid algorithm (Mallat, 2008) that is more scalable to high-dimensional functional data.
In particular, we represent

$$f_i(s) = \alpha_i \phi(s) + \sum_{j=0}^{J-1} \boldsymbol{\psi}_j(s)^T \boldsymbol{\beta}_{ij}, \tag{2}$$

where $\phi(\cdot)$ is the scaling function with scalar scaling coefficient $\alpha_i$, and $\boldsymbol{\psi}_j(\cdot)$ is a vector of wavelet functions that contains $p_j$ orthonormal wavelet bases at resolution level $j$ with the corresponding wavelet coefficients collected in the vector $\boldsymbol{\beta}_{ij} \in \mathbb{R}^{p_j}$. Further, the set of finite observed support points is assumed to be fixed and identical for all units in the data. Notably, the sample size $n$ is much smaller than the number of observed instances ($p$) for high-dimensional functional data. Instead of directly defining densities on the space of functions of infinite dimensions, these assumptions enable us to translate the problem to a multivariate setting that is more suitable for both methodological and theoretical analyses. Specifically, we assume the number of observed instances to be dyadic, $p = 2^J$, for the ease of wavelet basis expansion (Mallat, 2008). If this is not the case, the dimension can be inflated to satisfy this assumption by simply padding zeros around the function (Ma & Kundu, 2022). Denote $\mathbf{y}_i$, $\mathbf{f}_i$, and $\boldsymbol{\phi}$ as vectors of the observed data, the realizations of the underlying function, and the scaling function basis at the observed locations, respectively. Let $\mathbf{W}$ be the matrix of wavelet basis functions evaluated at the observed locations, with rows grouped by resolution level $j = 0, \ldots, J-1$. The expansion representation can be re-expressed as:

$$\mathbf{y}_i = \alpha_i \boldsymbol{\phi} + \mathbf{W}^T \boldsymbol{\beta}_i + \boldsymbol{\epsilon}_i, \quad \boldsymbol{\epsilon}_i \sim N(\mathbf{0}, \boldsymbol{\Sigma}_i), \tag{3}$$

where $\boldsymbol{\beta}_i = (\boldsymbol{\beta}_{i0}^T, \ldots, \boldsymbol{\beta}_{i,J-1}^T)^T$ and $\boldsymbol{\Sigma}_i$ is a $p \times p$ covariance matrix. Without loss of generality, we assume zero scaling coefficients ($\alpha_i = 0$) for the centered data.
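To make the basis representation concrete, the following minimal sketch (in Python, assuming the PyWavelets package and a Haar wavelet; the paper does not commit to a specific wavelet family) computes a multi-level 2D wavelet decomposition via the pyramid algorithm and groups the coefficients by resolution level, mirroring the partition of $\boldsymbol{\beta}_i$ used by the product prior below.

```python
# A minimal sketch, assuming the 'pywt' package and a Haar wavelet
# (illustrative choices, not the paper's implementation).
import numpy as np
import pywt

p_side = 32                           # dyadic image side, so p = 32 * 32
y = np.random.randn(p_side, p_side)   # a noisy observed image y_i

# Pyramid (Mallat) algorithm: multi-level 2D wavelet decomposition.
coeffs = pywt.wavedec2(y, wavelet='haar')

# Group coefficients by resolution level j, mirroring the partition of the
# coefficient vector beta_i into resolution-specific blocks.
scaling = coeffs[0].ravel()           # coarsest-level scaling coefficient(s)
betas = [np.concatenate([c.ravel() for c in level]) for level in coeffs[1:]]
for j, b in enumerate(betas):
    print(f"resolution {j}: {b.size} coefficients")

# Orthonormal wavelets allow exact reconstruction (Equation (3) with no noise).
y_rec = pywt.waverec2(coeffs, wavelet='haar')
assert np.allclose(y, y_rec)
```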
2.2 Product of Dirichlet Process Prior
Equation (3) includes two sets of main parameters of interest: wavelet coefficients for and covariance . We complete the Bayesian specification by elaborating the priors for these components below.
Priors on basis coefficients. We introduce resolution-specific priors on the basis coefficients such that all elements within a resolution are jointly modeled under a common DP prior, while coefficients across distinct resolutions are modeled under independent DP priors. This structure induces local clustering by independently partitioning coefficients across distinct resolutions, while imposing global clustering for all coefficients within a given resolution level. The proposed prior is naturally justified for wavelet expansions due to the straightforward partition of the coefficient space based on the resolution level, where coefficients at different resolutions operate at distinct scales. Notably, such a specification renders strong advantages in terms of being able to model the high-dimensional coefficient vector using independent resolution-specific priors, which leads to massive gains in computational complexity and enhanced model parsimony (see Section 2.3), while simultaneously relaxing the restrictive assumptions under global clustering. In other words, we specify:

$$\boldsymbol{\beta}_{ij} \mid P_j \stackrel{iid}{\sim} P_j, \quad P_j \sim DP(\alpha_\beta, P_{0j}), \quad \text{independently for } j = 0, \ldots, J-1, \tag{4}$$

where $\alpha_\beta$ is the concentration parameter and $P_{0j}$ is the base measure of the Dirichlet process at resolution $j$.
Priors on covariances. The proposed method provides a flexible characterization by considering correlated errors across the support points, where the error covariances are subject-specific and clustered across different subjects. For unstructured covariance matrices, we assign:

$$\boldsymbol{\Sigma}_i \mid Q \stackrel{iid}{\sim} Q, \quad Q \sim DP(\alpha_\Sigma, Q_0), \tag{5}$$

where $\alpha_\Sigma$ is the concentration parameter and $Q_0$ is the base measure. While we consider an unstructured covariance to develop the general approach and theoretical properties, we discuss the case of a structured low-rank covariance matrix for high-dimensional applications in Section 3, which also retains desirable theoretical properties while improving scalability.
The priors defined in (4) and (5) induce a product of Dirichlet process priors on the parameter space, denoted as $\Pi = \big\{\prod_{j=0}^{J-1} DP(\alpha_\beta, P_{0j})\big\} \times DP(\alpha_\Sigma, Q_0)$. This prior ultimately induces a distribution on the class of densities for the observed noisy function, denoted as $f(\mathbf{y})$, via the representation (3). We denote the resulting class of priors on $f$ as the product of Dirichlet process mixtures on functional data (fPDPM). This method generalizes previous work on product mixture priors that was proposed for multivariate time-series data (Kundu & Lukemire, 2024) to the case of spatial functions. By using the stick-breaking representation (Sethuraman, 1994), the induced density takes the form:
$$f(\mathbf{y}) = \sum_{l=1}^{\infty} \sum_{h_0=1}^{\infty} \cdots \sum_{h_{J-1}=1}^{\infty} \varpi_l \left(\prod_{j=0}^{J-1} \pi_{j h_j}\right) \phi_p\!\left(\mathbf{y};\, \mathbf{W}^T \boldsymbol{\beta}_h^*,\, \boldsymbol{\Sigma}_l^*\right), \tag{6}$$

where $\phi_p(\cdot; \boldsymbol{\mu}, \boldsymbol{\Sigma})$ is the normal density function with mean $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Sigma}$, $\boldsymbol{\beta}_h^* = (\boldsymbol{\beta}_{0 h_0}^{*T}, \ldots, \boldsymbol{\beta}_{J-1, h_{J-1}}^{*T})^T$ collects the cluster-specific atoms, $\pi_{jh}$ is the weight of the $h$th cluster at resolution $j$ with $\pi_{jh} = \nu_{jh} \prod_{t < h} (1 - \nu_{jt})$ and $\nu_{jh} \sim \text{Beta}(1, \alpha_\beta)$, and $\varpi_l$ is the weight of the $l$th covariance cluster with $\varpi_l = u_l \prod_{t < l} (1 - u_t)$ and $u_l \sim \text{Beta}(1, \alpha_\Sigma)$. Equation (6) generalizes the generic kernel mixture representation in the nonparametric multivariate density estimation literature (Wu & Ghosal, 2008; Kundu & Lukemire, 2024) to the case of functional data analysis relying on suitable basis expansions.
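The following is a minimal sketch of simulating from the product-of-DP prior in (4)-(5) via truncated stick-breaking (Sethuraman, 1994); the truncation level, base measure, and per-level dimensions are illustrative assumptions rather than the paper's implementation.

```python
# A minimal sketch of the product-of-DP prior via truncated stick-breaking.
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(alpha, H, rng):
    """Truncated stick-breaking weights pi_1, ..., pi_H."""
    nu = rng.beta(1.0, alpha, size=H)
    pi = nu * np.concatenate(([1.0], np.cumprod(1.0 - nu)[:-1]))
    return pi / pi.sum()              # renormalize the truncated weights

J, H, alpha = 4, 50, 1.0              # resolutions, truncation, concentration
n = 10                                # units

# Independent DPs across resolutions: separate weights and atoms per level j,
# with illustrative per-level dimension p_j = 2^j.
labels = np.empty((n, J), dtype=int)
betas = []
for j in range(J):
    pi_j = stick_breaking(alpha, H, rng)
    atoms_j = rng.normal(0.0, 1.0, size=(H, 2 ** j))   # draws from base measure
    z_j = rng.choice(H, size=n, p=pi_j)                # local cluster labels
    labels[:, j] = z_j
    betas.append(atoms_j[z_j])                         # beta_{ij} per unit

# Each column of `labels` is a separate (local) partition of the n units.
print(labels)
```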
2.3 Comparison to Related Local Clustering Methods
The proposed method is reminiscent of the LPP approach in Dunson (2009), which specified independent clustering for each basis coefficient in model (3), with each coefficient allocated to either a local cluster or a shared global cluster. While useful, such a construction may not be directly applicable to a wavelet basis representation, as coefficients at different resolutions operate on distinct scales and are not expected to share common global component values. Instead of dichotomizing features into global versus local clusters, the proposed method pursues a more curated approach by specifying an independent clustering (under a DP prior) for each distinct resolution level of the wavelet basis expansion. Such an approach is motivated by the consideration that features at different scales/resolutions are likely to cluster differentially, while all basis elements within a given resolution may adhere to a global clustering mechanism. Therefore, our specification uses $J$ independent resolution-specific priors (instead of one prior per coefficient as in LPP), which results in massive improvements in computational complexity. This result is formalized in the Lemma below, which compares the computational complexity of the classical LPP method with that of the proposed local clustering approach using a low-rank decomposition to model the residual covariance.
Lemma 1.
Assume that the maximum number of clusters is $K$ across all basis coefficients. Under the low-rank decomposition of the covariance matrix in Equation (8) with rank $r \ll p$, the per-iteration computational complexity of the proposed model grows with the number of resolution levels $J = \log_2(p)$, whereas the complexity of LPP with independent priors on the $p$ basis coefficients and independent error terms grows with $p$ itself, an exponentially larger quantity.
A detailed proof, along with an empirical simulation, is offered in Supplementary Material Section S2. The result is a straightforward consequence of the Gibbs sampler updates, and clearly shows the considerable gains in computational complexity under the proposed approach for large $p$, especially in high-dimensional settings. Notably, the gain in computational complexity in Lemma 1 holds even under the assumption of correlated errors for the proposed method, compared to the more straightforward independent error assumption for the LPP approach.
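As a back-of-the-envelope illustration of the source of these gains (our own count, not the formal argument in the Supplementary Material): per Gibbs iteration, the proposed model updates one membership indicator per resolution level, while coefficient-wise priors as in LPP update one indicator per basis coefficient.

```python
# Illustrative count of membership updates per Gibbs iteration:
# resolution-wise (proposed, J = log2(p)) vs coefficient-wise (LPP-style, p).
import math

for p in [2 ** 6, 2 ** 10, 2 ** 14]:
    J = int(math.log2(p))
    print(f"p = {p:6d}: resolution-wise updates = {J:3d}, "
          f"coefficient-wise updates = {p}")
```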
3 Theoretical Properties
The proposed method defines a prior on the density of the observed functions in Equation (6) that satisfies several desirable theoretical properties. In our theoretical analysis, particularly for deriving posterior consistency results, we omit the subscript $i$ when appropriate, following standard convention. All theoretical results are constructed in the regime of $n \to \infty$ with fixed $p$. For our theoretical treatment, we assume that the number of observed support points for the spatial function is $p = 2^J$, which produces a square orthonormal basis matrix $\mathbf{W}$. In general, the theoretical results hold for all orthogonal basis representations with the product of DP priors on basis coefficients, as long as the true density satisfies certain reasonable assumptions. Notably, our posterior consistency results generalize the rich theoretical results for Bayesian non-parametric multivariate density estimation to the case of Bayesian functional data analysis, for which there are limited results. In particular, our focus is on estimating densities of random functions admitting suitable basis expansions.
Denote the Euclidean norm of a vector $x$ as $\|x\|_2$, the spectral norm of a matrix $A$ as $\|A\|_2$, and the eigenvalues of a positive definite matrix $\boldsymbol{\Sigma}$ in decreasing order as $\lambda_1(\boldsymbol{\Sigma}) \ge \cdots \ge \lambda_p(\boldsymbol{\Sigma})$. Let $KL(f_0, f) = \int f_0 \log(f_0/f)$ be the Kullback-Leibler (KL) divergence between the true density $f_0$ and a density $f$. We first show that the fPDPM prior on the space of densities assigns positive probability to arbitrarily small KL neighborhoods of $f_0$ (the KL property), with the formal statement below.
Lemma 2.
The detailed proof of Lemma 2 is available in the Appendix. Notably, the assumptions on the true density in Lemma 2 are routinely assumed in the multivariate density estimation literature (Wu & Ghosal, 2008). Under an orthonormal basis (resulting in a finite determinant and Frobenius norm of $\mathbf{W}$), the proof mainly follows from the results in Wu & Ghosal (2008). Following the argument of Schwartz (1965), the positive prior support for arbitrarily small KL neighborhoods of $f_0$ from Lemma 2 provides the weak consistency of fPDPM.
While weak consistency is useful, it is of interest to investigate whether fPDPM also enjoys the more salient strong consistency property. Strong consistency is desirable since it ensures that the posterior concentrates in arbitrarily small neighborhoods of $f_0$, thereby guaranteeing more reasonable numerical estimation. However, given that the space of densities $\mathcal{F}$ can be non-compact, additional conditions are required to establish strong consistency (Canale & De Blasi, 2017). The basic strategy follows the framework in Ghosal & van der Vaart (2017) by constructing sieves $\mathcal{F}_n$ (compact subsets of $\mathcal{F}$) that possess certain required properties and that eventually grow to cover the whole space as $n \to \infty$. Next, we consider how to construct such sieves.
Denote $F_0^n$ as the $n$-fold product of the measure $F_0$ associated with the true density $f_0$, and $N(\varepsilon, \mathcal{F}, d)$ as the $\varepsilon$-covering number of the space of densities $\mathcal{F}$ under a metric $d$, defined as the minimum integer $N$ for which there exist densities $f_1, \ldots, f_N$ such that $\min_{1 \le k \le N} d(f, f_k) < \varepsilon$ for every $f \in \mathcal{F}$; the corresponding entropy is $\log N(\varepsilon, \mathcal{F}, d)$. For the metric $d$, we consider the Hellinger distance, $d_H(f_0, f) = \{\int (\sqrt{f_0} - \sqrt{f})^2\}^{1/2}$, and the $L_1$ distance, $d_1(f_0, f) = \int |f_0 - f|$. We state below the sufficient conditions on the sieves that are required for strong consistency, which were originally proposed in Canale & De Blasi (2017). This result will be used in our theoretical calculations when deriving strong consistency.
Lemma 3.
Consider sieves $\mathcal{F}_n = \bigcup_j \mathcal{F}_{n,j} \subset \mathcal{F}$ with $\mathcal{F}_n \uparrow \mathcal{F}$ as $n \to \infty$. Suppose the sieves satisfy the following conditions for a fixed $\varepsilon > 0$: (1A) $\Pi(\mathcal{F} \setminus \mathcal{F}_n) \le e^{-bn}$ for some $b > 0$; and (1B) $\sum_j \sqrt{N(\varepsilon, \mathcal{F}_{n,j}, d)}\, \sqrt{\Pi(\mathcal{F}_{n,j})} \to 0$ for some metric $d \in \{d_H, d_1\}$. Then $\Pi\{f : d(f, f_0) > \varepsilon \mid y_1, \ldots, y_n\} \to 0$ in $F_0^n$-probability for any $f_0$ in the weak support of $\Pi$.
Conditions (1A) and (1B) in Lemma 3 regulate the growth of the sieve $\mathcal{F}_n$ as the sample size $n$ increases. Specifically, (1A) requires the prior probability assigned to the complement $\mathcal{F} \setminus \mathcal{F}_n$ to shrink exponentially in $n$, implying that tails carrying only exponentially small probability are excluded. Condition (1B) can be considered a summability condition, stipulating that the weighted sum of the entropy numbers (weighted by the square root of the prior probability) goes toward zero as $n$ increases. To achieve both conditions, we carefully construct the sieves (compact sub-spaces) as outlined below.
Under the above construction, the entropy of the sieves is given by the following Theorem.
Theorem 1.
The entropy number of the sieve $\mathcal{F}_n$ satisfies an explicit bound, stated together with its detailed proof in the Appendix. Having established the entropy bounds in Theorem 1, we are now in a position to establish strong consistency results via the Theorem below.
Theorem 2.
Suppose that $f_0$ satisfies the conditions of Lemma 2, and the fPDPM prior satisfies the following conditions for all sufficiently large $t > 0$ and some positive constants $c_1, \ldots, c_6$ and exponents $\delta, \kappa, \tau, \gamma > 0$:

- (2A) $P_{0j}\left(\|\boldsymbol{\beta}_j\|_2 > t\right) \le c_1 \exp(-c_2 t^{\delta})$ for $j = 0, \ldots, J-1$; (2B) $Q_0\left(\lambda_1(\boldsymbol{\Sigma}^{-1}) > t\right) \le c_3 \exp(-c_4 t^{\kappa})$;
- (2C) $Q_0\left(\lambda_p(\boldsymbol{\Sigma}^{-1}) < 1/t\right) \le c_5 t^{-\tau}$; (2D) $Q_0\left(\lambda_1(\boldsymbol{\Sigma}^{-1}) / \lambda_p(\boldsymbol{\Sigma}^{-1}) > t\right) \le c_6 t^{-\gamma}$.
Then, the fPDPM prior satisfies the sufficient conditions in Lemma 3 and therefore the posterior distribution is strongly consistent at in -probability, as .
Detailed proof of Theorem 2 is provided in the Appendix. Theorem 2 provides guidelines for priors that yield the salient strong posterior consistency, and a solid theoretical foundation corresponding to a practical implementation of the fPDPM approach. Fortunately, several common priors satisfy conditions (2A) to (2D). For example, a multivariate normal prior on the basis coefficients satisfies condition (2A), and an inverse-Wishart prior on the error covariances satisfies conditions (2B) to (2D). A formal statement is provided in the Corollary below.
Corollary 1.
The proof is a direct result of Corollary 1 of Canale & De Blasi (2017). In this paper, we generalize these priors to Laplace priors (Park & Casella, 2008) on the base measures, which achieve better shrinkage and therefore result in improved model fitting. In particular, we express the Laplace priors via scale mixtures of normals as
$$\beta_{jk} \mid \tau_{jk}^2 \sim N(0, \tau_{jk}^2), \quad \tau_{jk}^2 \stackrel{iid}{\sim} \text{Exp}(\lambda^2/2), \quad k = 1, \ldots, p_j, \tag{7}$$

so that each coefficient marginally follows a Laplace distribution with rate parameter $\lambda$.
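As a quick numerical check of the scale-mixture representation in (7), the following minimal sketch (with an assumed rate $\lambda = 2$; an illustration, not part of the model fitting) verifies that mixing a normal over an exponential variance recovers the Laplace variance $2/\lambda^2$.

```python
# Minimal sketch: normal scale mixture over Exp(rate = lam^2/2) variances
# has a Laplace marginal with variance 2 / lam^2.
import numpy as np

rng = np.random.default_rng(1)
lam, N = 2.0, 200_000

tau2 = rng.exponential(scale=2.0 / lam ** 2, size=N)   # Exp with rate lam^2/2
beta = rng.normal(0.0, np.sqrt(tau2))                  # beta | tau2 ~ N(0, tau2)

print("sample variance :", beta.var())
print("Laplace variance:", 2.0 / lam ** 2)
```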
In addition, for the subject-specific covariance matrices, we impose a low-rank factor model to further improve scalability. In particular, we specify

$$\boldsymbol{\Sigma}_i = \boldsymbol{\Lambda}_i \boldsymbol{\Lambda}_i^T + \sigma_i^2 \mathbf{I}_p, \tag{8}$$

where $\boldsymbol{\Lambda}_i$ is the factor loading matrix of dimension $p \times r_i$ with $r_i$ as the number of factors for the $i$th unit, and $\sigma_i^2 > 0$ is the residual variance. Denote $c_i$ as the membership of the covariance for the $i$th unit. We specify the base measure as

$$\Lambda_{i,sk} \mid \phi_{sk}^{(c_i)}, \tau_k^{(c_i)} \sim N\!\left(0, \{\phi_{sk}^{(c_i)} \tau_k^{(c_i)}\}^{-1}\right), \quad \phi_{sk}^{(c_i)} \sim \text{Ga}(\nu/2, \nu/2), \quad \tau_k^{(c_i)} = \prod_{l=1}^{k} \delta_l^{(c_i)}, \tag{9}$$

with $\delta_1^{(c_i)} \sim \text{Ga}(a_1, 1)$ and $\delta_l^{(c_i)} \sim \text{Ga}(a_2, 1)$ for $l \ge 2$.
The multiplicative variance terms involve local shrinkage parameters $\phi_{sk}$ that control the shrinkage of individual factor loadings, and global terms $\tau_k$ that control the number of factors and the cluster-specific shrinkage, respectively. A key advantage of the model specified in (8) is that the number of factors is decided by the data instead of pre-specification (Murphy et al., 2020) and can vary among different clusters. The above prior construction borrows from the infinite mixture of factor analyzers considered in Murphy et al. (2020), and results in base measures on the covariance that satisfy conditions (2B)-(2D) in Theorem 2, as shown in the following Corollary. Ultimately, this implies that the resulting base measures lead to strong consistency.
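A minimal sketch of drawing one covariance matrix from (8) under multiplicative gamma shrinkage on the loadings, in the spirit of Murphy et al. (2020), is given below; the hyperparameter values ($a_1$, $a_2$, $\nu$, $\sigma^2$) and truncation at $r_{\max}$ columns are illustrative assumptions.

```python
# Minimal sketch: low-rank covariance (8) with multiplicative gamma shrinkage.
import numpy as np

rng = np.random.default_rng(2)
p, r_max, a1, a2, nu, sigma2 = 64, 15, 2.0, 3.0, 3.0, 0.5   # assumed values

delta = np.concatenate((rng.gamma(a1, 1.0, size=1),
                        rng.gamma(a2, 1.0, size=r_max - 1)))
tau = np.cumprod(delta)                                 # global column shrinkage
phi = rng.gamma(nu / 2.0, 2.0 / nu, size=(p, r_max))    # local shrinkage
Lambda = rng.normal(0.0, 1.0, size=(p, r_max)) / np.sqrt(phi * tau)

Sigma = Lambda @ Lambda.T + sigma2 * np.eye(p)          # Equation (8)

# Later columns are shrunk toward zero, so the effective number of factors
# is learned rather than fixed in advance.
print(np.linalg.norm(Lambda, axis=0)[:8])
```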
Corollary 2.
Denote $f_0$ as the true density satisfying the conditions in Lemma 2. Consider a prior on the density induced by the fPDPM based on (4) for the coefficients and (5) for the covariances. Let the base measures $P_{0j}$ follow the Laplace distribution of (7) and $Q_0$ the multiplicative gamma construction of (8)-(9). Then conditions (2A) to (2D) are satisfied and the posterior distribution is strongly consistent at $f_0$.
Detailed proof is available in the Appendix. Note that the case of independent errors, with $\boldsymbol{\Sigma}_i = \sigma_i^2 \mathbf{I}_p$, can be easily obtained as a special case of Corollary 2 with $\boldsymbol{\Lambda}_i = \mathbf{0}$.
4 Posterior Computation and Inference
We implement the fPDPM approach via an efficient Gibbs sampler that proceeds via a slice sampling technique. The full details are provided in Supplementary Material Section S4. The Gibbs sampler results in posterior samples of coefficients and memberships across all resolution levels and units. The posterior samples are summarized for the following analysis.
Clustering. The fPDPM may result in varying cluster memberships for different resolution levels. One can consolidate these local clusters to inform global clustering patterns as follows. The global cluster memberships are summarized based on pairwise agreement among the subject-specific coefficients, by adapting the method in Suarez & Ghosal (2016). Denote $\mathbf{z}_i^{(t)} = (z_{i0}^{(t)}, \ldots, z_{i,J-1}^{(t)})$ as a vector of memberships, with $z_{ij}^{(t)}$ as the membership at resolution level $j$ of unit $i$ for the $t$th posterior sample, $t = 1, \ldots, T$. Define the pairwise distance between units $i$ and $i'$ as a weighted sum

$$d(i, i') = \frac{1}{T} \sum_{t=1}^{T} \sum_{j=0}^{J-1} w_j \, \mathbb{1}\left(z_{ij}^{(t)} \ne z_{i'j}^{(t)}\right), \tag{10}$$
where $w_j$ is the weight for the agreement of coefficients at resolution level $j$. To focus on the global pattern and reduce the impact of the local features on the overall clustering, we assign a higher weight to the global coefficients and downweight the coefficients corresponding to the remaining resolution levels; that is, we set a larger $w_j$ at the coarsest resolution levels and a smaller common weight at the finer levels. Based on the pairwise distance in (10), we build an agglomerative clustering with complete linkage and decide the memberships by cutting the tree at different levels.
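A minimal sketch of this consolidation step is given below (the weights, array shapes, and the cut level are illustrative assumptions): it computes the weighted pairwise disagreement distance in (10) from posterior membership samples and applies complete-linkage agglomerative clustering.

```python
# Minimal sketch of the consolidation in (10) with illustrative inputs.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(3)
T, n, J = 100, 12, 4                       # posterior samples, units, resolutions
z = rng.integers(0, 3, size=(T, n, J))     # z[t, i, j]: membership samples

w = np.ones(J)
w[0] = 5.0                                 # upweight the global (coarsest) level
w /= w.sum()

D = np.zeros((n, n))
for i in range(n):
    for k in range(i + 1, n):
        disagree = (z[:, i, :] != z[:, k, :]).mean(axis=0)   # per resolution
        D[i, k] = D[k, i] = np.dot(w, disagree)              # Equation (10)

Z = linkage(squareform(D), method='complete')      # complete-linkage tree
members = fcluster(Z, t=3, criterion='maxclust')   # cut the tree at 3 clusters
print(members)
```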
Function estimation. We report the accuracy of the estimated functions via the mean squared error (MSE) of the posterior mean. Denote the posterior samples of coefficients across all resolution levels for unit $i$ as $\boldsymbol{\beta}_i^{(t)}$, $t = 1, \ldots, T$. Let $\hat{\mathbf{f}}_i^{(t)} = \mathbf{W}^T \boldsymbol{\beta}_i^{(t)}$ be the estimated function from the $t$th posterior sample of coefficients. We calculate the posterior mean function as $\bar{\mathbf{f}}_i = T^{-1} \sum_{t=1}^{T} \hat{\mathbf{f}}_i^{(t)}$ and obtain the MSE as $(np)^{-1} \sum_{i=1}^{n} \|\bar{\mathbf{f}}_i - \mathbf{f}_i\|_2^2$.
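A corresponding sketch of the MSE summary follows, with illustrative arrays standing in for the true functions and posterior draws.

```python
# Minimal sketch of the function-estimation summary (shapes are illustrative).
import numpy as np

T, n, p = 100, 5, 64
rng = np.random.default_rng(4)
f_true = rng.normal(size=(n, p))                   # true functions (stand-ins)
f_draws = f_true[None] + rng.normal(0, 0.1, size=(T, n, p))  # per-draw estimates

f_bar = f_draws.mean(axis=0)                       # posterior mean functions
mse = np.mean((f_bar - f_true) ** 2)               # average over units, points
print(mse)
```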
5 Simulation Studies
5.1 Data-generating Mechanisms
A series of simulation studies are conducted to empirically investigate the performance of the proposed fPDPM, while comparing it to competing approaches. We generate functional data that are commensurate with the spatial transcriptomics data application, and consider two-dimensional spatial functions observed on a dyadic grid of $p$ locations with sample size $n$. We generate functional data under three different scenarios with different clustering structures: (1) global clustering, (2) local clustering, and (3) spatial heterogeneity. The first two scenarios operate on the wavelet coefficients. For global clustering, all basis coefficients are zero except for those at the highest resolution level, which take eight different non-zero values. Local clustering allows non-zero coefficients at finer resolutions and clusters different resolutions independently. For each resolution level with non-zero coefficients, we consider several distinct values, which results in a large number of possible groups for the local clustering scenario. Although only a subset of these patterns is included in the generated data, the large number of candidate clustering patterns creates challenges for model fitting. In contrast to Scenarios 1 and 2 involving basis coefficients, Scenario 3 generates images directly and introduces spatial heterogeneity with the clustering patterns varying over the spatial domain. Using a square image, we assign four non-overlapping circular areas $C_1, \ldots, C_4$ centered across the four quadrants, where $o_q$ denotes the center of circle $C_q$. We then generate the underlying function at each location within $C_q$ as one of two possible cluster-specific values, and set it to zero outside the circles. Since each circular area has two possible values, $2^4 = 16$ different clusters are generated. Both Scenarios 1 and 3 represent varying degrees of model mis-specification, with Scenario 1 foregoing local clustering in favor of global clustering, and Scenario 3 introducing spatial heterogeneity in clustering that is challenging for most methods. For these scenarios, independent and spatially correlated additive noise are added to the images. The correlated errors were generated under the low-rank decomposition of (8) with ranks 1 and 10. The higher choice of rank resulted in non-ignorable spatially correlated noise with a lower signal-to-noise ratio compared to the ratio under independent errors. A negative signal-to-noise ratio implies that the signal is smaller than the noise in norm, with a lower ratio indicating a less clear signal (de Loynes et al., 2021). Figure 1 illustrates unobservable noise-free images (left) as well as noisy observed images for each scenario. More details of the data-generating mechanism are available in Supplementary Material Section S5.
Performance Metrics and Benchmarks. We implemented two variants of the proposed fPDPM model, with and without spatially correlated noise terms (denoted as fPDPM and fPDPMi, respectively), both of which perform local clustering. We initialized fPDPM with three different numbers of factors for the low-rank covariance and report results for each of these choices. We compared the performance with global clustering approaches such as the joint Dirichlet process mixture model (DPM) and K-means clustering performed on functional principal components (PCA-KM), as proposed in Happ & Greven (2018). Unfortunately, it was not possible to compare with other local partitioning methods such as LPP (Dunson, 2009), due to an unreasonable computational burden as described in Section 2.3 and Supplementary Material Section S1. We evaluate the clustering performance via the adjusted Rand index (ARI) (Rand, 1971), and the function estimation error via the mean squared error (MSE), noting that the clustering performance is of primary interest. An ARI of one represents perfect agreement between the estimated and the true clustering, with smaller values indicating weaker agreement. We calculate the ARI based on the memberships for the basis coefficients corresponding to the mean clustering, and ignore the variations due to the covariance clustering. For Scenarios 1 and 3, we report the ARI based on global clustering obtained via hierarchical clustering on the pairwise distance in (10), averaged over resolutions. We cut the tree at the level of the true number of groups to report the ARI metrics. Due to the large number of true clusters in Scenario 2, we instead compute the ARI at each resolution level with non-zero coefficients. For the global clustering approaches, the ARI at the different resolutions is obtained by comparing the global membership to the true local membership at each resolution. For PCA-KM, we select the smallest number of principal components that explain a pre-specified proportion of variance, and decide the number of clusters by maximizing the average silhouette width (Rousseeuw, 1987). We ran the MCMC for several thousand iterations and used the post burn-in MCMC samples to report results. All results are obtained by averaging over independent replicates for each simulation scenario.
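For clarity, the ARI computations can be carried out with a standard implementation, for instance in scikit-learn (an assumed tooling choice; the paper does not name its software).

```python
# ARI via scikit-learn: 1.0 indicates the two partitions agree up to relabeling.
from sklearn.metrics import adjusted_rand_score

truth = [0, 0, 1, 1, 2, 2]
est   = [1, 1, 0, 0, 2, 2]   # same partition with permuted labels
print(adjusted_rand_score(truth, est))   # prints 1.0
```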
5.2 Simulation Results
Simulation results for Scenarios 1 (global clustering) and 3 (spatial heterogeneity) are shown in Tables 1 and 2, while results for Scenario 2 (local clustering) are available in Table 3. When data are generated under global clustering (Scenario 1), fPDPM outperforms all competing methods in terms of clustering performance (higher ARI) for data generated under both independent and correlated errors. Moreover, fPDPM has MSE comparable to fPDPMi for function estimation with correlated errors under Scenario 1, but a slightly higher MSE under independent errors.
For Scenario 2 involving data generated under local clustering, fPDPM shows better or equivalent clustering accuracy than fPDPMi across the resolution levels. However, the estimated image from fPDPM is slightly worse than under fPDPMi, with a higher MSE that is potentially due to the higher rank of the covariance under fPDPM. This is particularly evident from the higher MSE for fPDPM models employing a larger number of factors for the low-rank covariance. For Scenario 3 involving spatial heterogeneity, fPDPM has comparable clustering accuracy as well as MSE compared with fPDPMi, although the MSE under fPDPMi is slightly lower. In such cases involving spatial heterogeneity, local clustering approaches with a simple error covariance or independent errors are seen to result in the best performance. The higher MSE under Scenario 3 for all approaches is indicative of the difficulty of estimating the function in the presence of spatial heterogeneity.
In contrast, the global clustering approaches (DPM and PCA-KM) consistently produce considerably poorer clustering performance across all settings. The clustering performance for DPM is the highest for Scenario 1 involving global clustering and considerably worse for Scenario 2 involving local clustering, which is expected. For Scenario 1 involving global clustering, the relatively poor performance of DPM and PCA-KM compared to the fPDPM approaches highlights the challenges in high-dimensional clustering noted in Chandra et al. (2023), and illustrates the clear advantages of local clustering. The same observation holds for Scenario 3, but the relative performance of the global clustering approaches drops even further, illustrating the advantages of local clustering in the presence of spatial heterogeneity. DPM also has a higher MSE corresponding to Scenarios 1 and 2 compared to fPDPM, with the MSE under Scenario 2 (corresponding to local clustering) being orders of magnitude higher than under the fPDPM approaches. This highlights the challenges for global clustering approaches in the presence of local clusters in the data. On the other hand, for Scenario 3 involving spatial heterogeneity, the MSE for DPM may be lower compared to the local clustering approaches in some cases, particularly under independent or low-rank correlated errors. The improved MSE under DPM in these cases suggests that a smaller number of model parameters may be better suited for estimating the mean function when the correlation structure for the measurement error is diagonal or low-rank.
Table 1: Scenario 1 (global clustering). ARI (in %) and MSE with standard deviations in parentheses; the first three rows correspond to fPDPM initialized with three different numbers of factors.

| Method | ARI (Indep.) | MSE (Indep.) | ARI (Low-Rank Corr.) | MSE (Low-Rank Corr.) | ARI (High-Rank Corr.) | MSE (High-Rank Corr.) |
|---|---|---|---|---|---|---|
| fPDPM | 100 (0) | 0.007 (0.002) | 89.6 (12.4) | 0.049 (0.012) | 99.3 (2.61) | 0.12 (0.02) |
| fPDPM | 100 (0) | 0.008 (0.006) | 91.0 (11.3) | 0.053 (0.010) | 98.9 (3.97) | 0.12 (0.01) |
| fPDPM | 100 (0) | 0.027 (0.037) | 94.1 (10.1) | 0.062 (0.028) | 99.6 (1.67) | 0.15 (0.11) |
| fPDPMi | 90.4 (9.0) | 0.001 (0.001) | 35.7 (8.17) | 0.046 (0.008) | 73.8 (13.2) | 0.12 (0.006) |
| DPM | 74.7 (13.7) | 0.218 (0.135) | 76.9 (13.3) | 0.192 (0.129) | 78.7 (10.9) | 0.23 (0.098) |
| PCA-KM | 77.4 (9.59) | NA | 80.5 (9.80) | NA | 82.7 (10.2) | NA |
Table 2: Scenario 3 (spatial heterogeneity). ARI (in %) and MSE with standard deviations in parentheses; the first three rows correspond to fPDPM initialized with three different numbers of factors.

| Method | ARI (Indep.) | MSE (Indep.) | ARI (Low-Rank Corr.) | MSE (Low-Rank Corr.) | ARI (High-Rank Corr.) | MSE (High-Rank Corr.) |
|---|---|---|---|---|---|---|
| fPDPM | 100 (0.14) | 2.54 (0.25) | 99.5 (1.76) | 2.95 (0.29) | 90.6 (7.80) | 3.15 (0.24) |
| fPDPM | 98.9 (2.87) | 4.14 (0.37) | 89.3 (9.40) | 4.93 (0.54) | 89.0 (6.61) | 4.46 (0.47) |
| fPDPM | 87.7 (11.2) | 5.98 (0.41) | 86.6 (8.12) | 5.95 (0.18) | 94.7 (5.19) | 5.25 (0.31) |
| fPDPMi | 100 (0) | 2.25 (0.21) | 100 (0) | 2.43 (0.20) | 98.1 (4.30) | 2.86 (0.19) |
| DPM | 61.3 (7.76) | 1.76 (0.31) | 49.9 (7.78) | 2.22 (0.34) | 39.1 (12.3) | 3.57 (0.65) |
| PCA-KM | 84.9 (7.82) | NA | 85.8 (6.26) | NA | 84.1 (6.40) | NA |
Table 3: Scenario 2 (local clustering) under independent errors. ARI (in %) at each of the three resolution levels with non-zero coefficients, together with MSE; standard deviations in parentheses. The first three rows correspond to fPDPM initialized with three different numbers of factors.

| Method | ARI (level 1) | ARI (level 2) | ARI (level 3) | MSE |
|---|---|---|---|---|
| fPDPM | 95.1 (4.5) | 100 (0.006) | 93.9 (8.75) | 0.28 (0.10) |
| fPDPM | 95.2 (4.1) | 100 (0.003) | 94.3 (8.84) | 0.27 (0.11) |
| fPDPM | 94.9 (4.8) | 100 (0.002) | 97.2 (6.74) | 0.31 (0.26) |
| fPDPMi | 87.8 (5.6) | 99.0 (1.6) | 95.1 (9.84) | 0.07 (0.13) |
| DPM | 20.8 (1.4) | 0.92 (0.48) | 1.11 (0.65) | 2.55 (0.06) |
| PCA-KM | 18.5 (3.6) | 0.17 (0.39) | 0.21 (0.44) | NA |
6 Spatial Transcriptomics of Breast Cancer
6.1 Data Overview and Pre-processing
We applied fPDPM to publicly available spatial transcriptomics data of HER2-positive breast tumors from Andersson et al. (2021). Our primary goal is to discover clusters of genes with similar profiles of spatially varying expression levels. This is achieved by fitting the proposed fPDPM approach to the data to discover local clusters of genes, which are then consolidated to discover global clusters. We used expression data from one patient consisting of three consecutive cryosection samples (H1, H2, and H3, henceforth), and compared the clustering reliability of the genes across the tissues, along with the biological interpretation of the clusters. We present the results for cluster memberships for the H1 tissue sample in the main manuscript, with the results for the remaining two samples in Supplementary Material Section S6.1. We retained genes expressed at a sufficient number of spots and spots with a sufficient number of expressed genes (Shang & Zhou, 2022). After removing genes identified as technical artifacts (Andersson et al., 2021), we obtained a common set of genes measured on the spatial spots across the three samples. For each sample, we normalized the expression data to mean zero and unit variance and selected spatially variable genes (Shang & Zhou, 2022). We padded zeros in all samples to attain dyadic dimensions. We ran fPDPM for a long MCMC chain and discarded an initial set of iterations as burn-in. The convergence diagnostics are provided in Supplementary Material Section S6.2.
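A minimal sketch of this pre-processing follows (thresholds, array names, and shapes are illustrative): per-gene standardization, then zero-padding of each gene's spatial expression image to the next dyadic dimension as required by the wavelet expansion.

```python
# Minimal sketch of the pre-processing (illustrative names and shapes).
import numpy as np

def normalize(expr):
    """Standardize each gene (row) to mean zero and unit variance."""
    mu = expr.mean(axis=1, keepdims=True)
    sd = expr.std(axis=1, keepdims=True)
    return (expr - mu) / np.where(sd > 0, sd, 1.0)

def pad_to_dyadic(img):
    """Zero-pad a 2D image so both sides are the next power of two."""
    target = [1 << int(np.ceil(np.log2(s))) for s in img.shape]
    out = np.zeros(target)
    off = [(t - s) // 2 for t, s in zip(target, img.shape)]
    out[off[0]:off[0] + img.shape[0], off[1]:off[1] + img.shape[1]] = img
    return out

img = np.random.rand(50, 45)        # one gene's spatial expression image
print(pad_to_dyadic(img).shape)     # (64, 64)
```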
6.2 Analysis Results
Based on the Silhouette method, we cut the estimated hierarchical clustering at three clusters. (See Supplementary Material Table S1 for the full list of member genes.) Focusing on the common genes among the three samples, we obtain a reasonably high clustering reproducibility across samples, as measured by the ARI between H1 and H2 and between H1 and H3. We identify the first cluster with the innate immune system, with a substantial fraction of its genes belonging to that pathway. For example, CXCL12 activates leukocytes, members of the innate immune system, and facilitates tumor cell invasion in breast cancer (Ye et al., 2021). The second cluster belongs to the signal transduction pathway, with a substantial fraction of its genes being part of the pathway. For example, Akt1 is a member of the AKT pathway, a commonly mutated pathway related to drug resistance in HER2-positive breast cancer (Pan et al., 2024). The third cluster is characterized by the immune system, with a substantial fraction of its genes belonging to the immune system pathway. For example, CD4 is a receptor on the surface of many immune cells such as T cells, and stimulated CD4 improves the treatment efficacy of HER2-positive breast cancer (Song et al., 2020).
In contrast, global clustering under the classical DPM produced an overly large number of clusters, with one giant cluster containing the majority of genes and the other smaller clusters comprising only a handful of genes each. Specifically, DPM generated one major cluster containing most of the genes, five medium-sized clusters, and many remaining clusters with at most two genes. The presence of a non-trivial number of such small clusters seems biologically unrealistic and results in lower clustering reproducibility for the set of common genes across tissues, as reflected by lower ARIs between H1 and H2 and between H1 and H3. In summary, our analysis points to the challenges of global clustering for high-dimensional spatial functions, while simultaneously highlighting the ability of the proposed approach to infer interpretable and reproducible clusters.
7 Discussion
In this paper, we present a flexible Bayesian nonparametric framework for local clustering, termed fPDPM, designed to cluster high-dimensional functional data across multiple resolution levels. By employing a wavelet basis expansion combined with a product of Dirichlet process priors on the basis coefficients, fPDPM enables independent clustering of functional data at different resolutions. This approach achieves a form of local clustering that reduces computational complexity compared to traditional local clustering methods, while effectively mitigating the curse of dimensionality often encountered in global clustering techniques. Additionally, we propose a low-rank decomposition for the residual covariance matrix that provides an improved characterization and simultaneously improves computational complexity. Under mild conditions, we show the salient strong consistency of fPDPM and exemplify some common priors that satisfy the assumed conditions. We also propose an efficient Gibbs sampler for posterior inference. Our simulations demonstrate that fPDPM outperforms existing methods that focus on global clustering under a variety of settings. We apply fPDPM to spatial transcriptomics data of breast cancer and identify replicable clusters comprising genes that corroborate existing biological literature.
Currently, fPDPM considers cross-sectional functional data under a normality assumption on the errors. One generalization is to consider longitudinal functional data such as spatio-temporal data (Atluri et al., 2018) and model the temporal correlation by allowing the coefficients to vary over time. Another possible extension is to adapt the fPDPM approach for clustering based on images from multiple modalities. All these directions are left for future investigations.
References
- Aghabozorgi et al. (2015) Aghabozorgi, S., Seyed Shirkhorshidi, A. & Ying Wah, T. (2015). Time-series clustering - A decade review. Information Systems 53, 16–38.
- Andersson et al. (2021) Andersson, A., Larsson, L., Stenbeck, L., Salmén, F., Ehinger, A., Wu, S. Z., Al-Eryani, G., Roden, D., Swarbrick, A., Borg, Å., Frisén, J., Engblom, C. & Lundeberg, J. (2021). Spatial deconvolution of HER2-positive breast cancer delineates tumor-associated cell type interactions. Nature Communications 12, 1–14.
- Atluri et al. (2018) Atluri, G., Karpatne, A. & Kumar, V. (2018). Spatio-temporal data mining: A survey of problems and methods. ACM Computing Surveys 51.
- Bigelow & Dunson (2009) Bigelow, J. L. & Dunson, D. B. (2009). Bayesian semiparametric joint models for functional predictors. Journal of the American Statistical Association 104, 26–36.
- Bouveyron et al. (2019) Bouveyron, C., Celeux, G., Murphy, T. B. & Raftery, A. E. (2019). Model-Based Clustering and Classification for Data Science: With Applications in R. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press.
- Canale & De Blasi (2017) Canale, A. & De Blasi, P. (2017). Posterior asymptotics of nonparametric location-scale mixtures for multivariate density estimation. Bernoulli 23, 379–404.
- Chamroukhi & Nguyen (2019) Chamroukhi, F. & Nguyen, H. D. (2019). Model-based clustering and classification of functional data. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 9.
- Chandra et al. (2023) Chandra, N. K., Canale, A. & Dunson, D. B. (2023). Escaping the curse of dimensionality in Bayesian model-based clustering. Journal of Machine Learning Research 24, 1–42.
- Crandell & Dunson (2011) Crandell, J. L. & Dunson, D. B. (2011). Posterior simulation across nonparametric models for functional clustering. Sankhya B 73, 42–61.
- de Loynes et al. (2021) de Loynes, B., Navarro, F. & Olivier, B. (2021). Data-driven thresholding in denoising with spectral graph wavelet transform. Journal of Computational and Applied Mathematics 389, 113319.
- Di Iorio & Vantini (2023) Di Iorio, J. & Vantini, S. (2023). funLOCI: A local clustering algorithm for functional data. Journal of Classification.
- Dunson (2009) Dunson, D. B. (2009). Nonparametric Bayes local partition models for random effects. Biometrika 96, 249–262.
- Fan & Sarkar (2023) Fan, J. & Sarkar, A. (2023). Bayesian Semiparametric Local Clustering of Multiple Time Series Data. Technometrics 0, 1–19.
- Floriello & Vitelli (2017) Floriello, D. & Vitelli, V. (2017). Sparse clustering of functional data. Journal of Multivariate Analysis 154, 1–18.
- Frühwirth-Schnatter & Malsiner-Walli (2019) Frühwirth-Schnatter, S. & Malsiner-Walli, G. (2019). From here to infinity: sparse finite versus Dirichlet process mixtures in model-based clustering. Advances in Data Analysis and Classification 13, 33–64.
- Ghosal & van der Vaart (2017) Ghosal, S. & van der Vaart, A. (2017). Fundamentals of Nonparametric Bayesian Inference. Cambridge University Press.
- Happ & Greven (2018) Happ, C. & Greven, S. (2018). Multivariate Functional Principal Component Analysis for Data Observed on Different (Dimensional) Domains. Journal of the American Statistical Association 113, 649–659.
- Kundu & Lukemire (2024) Kundu, S. & Lukemire, J. (2024). Flexible Bayesian Product Mixture Models for Vector Autoregressions. Journal of Machine Learning Research , 1–44.
- Ma & Kundu (2022) Ma, X. & Kundu, S. (2022). Multi-Task Learning with High-Dimensional Noisy Images. Journal of the American Statistical Association 0, 1–35.
- Mallat (2008) Mallat, S. (2008). A Wavelet Tour of Signal Processing: The Sparse Way. Academic Press, third edition ed.
- Mateu & Romano (2017) Mateu, J. & Romano, E. (2017). Advances in spatial functional statistics. Stochastic Environmental Research and Risk Assessment 31, 1–6.
- Morris (2015) Morris, J. S. (2015). Functional Regression. Annual Review of Statistics and Its Application 2, 321–359.
- Murphy et al. (2020) Murphy, K., Viroli, C. & Gormley, I. C. (2020). Infinite mixtures of infinite factor analysers. Bayesian Analysis 15, 937–963.
- Nguyen & Gelfand (2011) Nguyen, X. L. & Gelfand, A. E. (2011). The Dirichlet labeling process for clustering functional data. Statistica Sinica 21, 1249–1289.
- Pan et al. (2024) Pan, L., Li, J., Xu, Q., Gao, Z., Yang, M., Wu, X. & Li, X. (2024). HER2/PI3K/AKT pathway in HER2-positive breast cancer: A review. Medicine (Baltimore) 103, e38508.
- Park & Casella (2008) Park, T. & Casella, G. (2008). The Bayesian Lasso. Journal of the American Statistical Association 103, 681–686.
- Petrone et al. (2009) Petrone, S., Guindani, M. & Gelfand, A. E. (2009). Hybrid Dirichlet mixture models for functional data. Journal of the Royal Statistical Society Series B: Statistical Methodology 71, 755–782.
- Ramsay & Silverman (2005) Ramsay, J. & Silverman, B. (2005). Functional Data Analysis. Springer, New York.
- Rand (1971) Rand, W. M. (1971). Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association 66, 846–850.
- Ray & Mallick (2006) Ray, S. & Mallick, B. (2006). Functional clustering by Bayesian wavelet methods. Journal of the Royal Statistical Society Series B: Statistical Methodology 68, 305–332.
- Reiss et al. (2015) Reiss, P. T., Huo, L., Zhao, Y., Kelly, C. & Ogden, R. T. (2015). Wavelet-domain regression and predictive inference in psychiatric neuroimaging. The annals of applied statistics 9, 1076.
- Rodriguez & Dunson (2014) Rodriguez, A. & Dunson, D. B. (2014). Functional clustering in nested designs: Modeling variability in reproductive epidemiology studies. Annals of Applied Statistics 8, 1416–1442.
- Rodríguez et al. (2010) Rodríguez, A., Dunson, D. B. & Gelfand, A. E. (2010). Latent stick-breaking processes. Journal of the American Statistical Association 105, 647–659.
- Rousseeuw (1987) Rousseeuw, P. J. (1987). Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics 20, 53–65.
- Schwartz (1965) Schwartz, L. (1965). On Bayes procedures. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 4, 10–26.
- Sethuraman (1994) Sethuraman, J. (1994). A constructive definition of Dirichlet priors. Statistica Sinica 4, 639–650.
- Shang & Zhou (2022) Shang, L. & Zhou, X. (2022). Spatially aware dimension reduction for spatial transcriptomics. Nature Communications 13.
- Song et al. (2020) Song, P. N., Mansur, A., Dugger, K. J., Davis, T. R., Howard, G., Yankeelov, T. E. & Sorace, A. G. (2020). CD4 t-cell immune stimulation of HER2 + breast cancer cells alters response to trastuzumab in vitro. Cancer Cell Int. 20, 544.
- Suarez & Ghosal (2016) Suarez, A. J. & Ghosal, S. (2016). Bayesian clustering of functional data using local features. Bayesian Analysis 11, 71–98.
- Wade (2023) Wade, S. (2023). Bayesian cluster analysis. Philosophical Transactions of the Royal Society A 381.
- Wang et al. (2015) Wang, J.-L., Chiou, J.-M. & Müller, H.-G. (2015). Review of functional data analysis. Annual Review of Statistics and Its Application, 1–14.
- Wu & Ghosal (2008) Wu, Y. & Ghosal, S. (2008). Kullback–Leibler property of kernel mixture priors in Bayesian density estimation. Electronic Journal of Statistics 2, 298–331.
- Ye et al. (2021) Ye, Y., Xu, C., Chen, F., Liu, Q. & Cheng, N. (2021). Targeting Innate Immunity in Breast Cancer Therapy: A Narrative Review. Front Immunol 12, 771201.
- Zhang & Parnell (2023) Zhang, M. & Parnell, A. (2023). Review of Clustering Methods for Functional Data. ACM Transactions on Knowledge Discovery from Data 17.