Topology-Preserving Dimensionality Reduction via Interleaving Optimization
Abstract
Dimensionality reduction techniques are powerful tools for data preprocessing and visualization, but they typically come with few guarantees concerning the topological correctness of an embedding. The interleaving distance between the persistent homology of Vietoris-Rips filtrations can be used to identify a scale at which topological features, such as clusters or holes, in an embedding and the original data set are in correspondence. We show how optimization that seeks to minimize the interleaving distance can be incorporated into dimensionality reduction algorithms, and we explicitly demonstrate its use in finding an optimal linear projection. We demonstrate the utility of this framework for data visualization.
Keywords: Dimensionality Reduction · Interleaving Distance · Persistent Homology
1 Introduction
Dimension reduction is an important component of many data analysis tasks, but can be potentially problematic as it may “reveal” structure in data which is not truly present. In inference this can be addressed by principled use of a withheld test set or an analysis which addresses model selection more directly. However, in exploratory data analysis it can be difficult to address selection problems incurred by exploration of different dimension reduction techniques, such as whether visualized structures are really present or an artifact of the chosen embedding. In this paper, we develop the use of the interleaving distance for the purpose of quantifying the extent to which topological features of an embedding relate to features in the original data set. Explicitly, we can compute a threshold after which features of a certain size in the persistent homology of the Vietoris-Rips filtration are in one-to-one correspondence between the data set before and after dimension reduction. Furthermore, we show how to find local minima of this threshold through optimization and demonstrate this on the task of finding optimal projections of a data set.
1.1 Related Work
Optimization of persistent homology-based objective functions has attracted much recent attention [29, 17, 19, 22, 6, 5, 26] with a particular focus on applications in computational geometry and deep learning. Of particular relevance is the work of [27] which uses a persistence-based objective to preserve critical edges in the persistent homology of the Vietoris-Rips filtration in a learned latent space of an autoencoder.
The idea of using persistent homology to compare different dimension reduction schemes was initiated by Rieck and Leitte [30, 31], who use the Wasserstein distance to compare the persistence diagrams of a data set before and after dimension reduction. Several non-differentiable methods incorporating persistent homology have been developed [12, 36, 13]. With the development of optimization techniques for persistent homology, several differentiable methods have been proposed based on optimization of the Wasserstein metric on persistence diagrams [20, 34], along with an approach based on simulated annealing [37]. While not employed on Vietoris-Rips filtrations, the work of [29] uses the bottleneck distance for optimization of functional maps on shapes.
The interleaving/bottleneck distance has long been a key tool developed for the study of persistent homology under perturbation of the input [9]. The interleaving distance was first introduced by [7], and applied to the study of Vietoris-Rips filtrations in [8] to bound the distance of persistent homology by the Gromov-Hausdorff distance between the input point clouds. The idea of developing confidence regions for persistence pairs was developed by [15] in the context of sampling.
1.2 Contributions
This work presents a novel approach to dimension reduction based on optimization of the interleaving/bottleneck distance between the persistent homology of the Vietoris-Rips filtrations of an original data set $X$ and its low-dimensional embedding $Y$.
-
1.
We show how the interleaving distance can be used to quantify a scale at which topological features in the embedding $Y$ and features in the original data set $X$ are in correspondence, and how it can be used to select homological features of $Y$ in correspondence with features of $X$.
-
2.
We show how to incorporate the interleaving distance explicitly into the optimization of the embedding $Y$ and prove the existence of descent directions under mild conditions.
-
3.
We demonstrate this technique in finding optimal linear projections of the data set to preserve the bottleneck distance on several examples with interesting topology. (Our implementations are made publicly available at https://github.com/CompTop/Interleaving-DR.)
2 Background
2.1 Persistent Homology of Vietoris-Rips Filtrations
We are interested in discovering and preserving topological features of a point cloud $X$ together with a notion of dissimilarity $d$, and refer to the combination of these two data as a dissimilarity space $(X, d)$. A dissimilarity is a function $d : X \times X \to \mathbb{R}_{\ge 0}$ with $d(x, x) = 0$ for any $x \in X$. We will typically consider dissimilarities that are metrics (in particular, which satisfy the triangle inequality), but many of the bounds here hold more generally. Examples of topological features of the space include clusters formed through single-linkage clustering or "holes" forming loops in the $k$-nearest neighbors graph of $X$.
Vietoris-Rips filtrations are commonly used in conjunction with persistent homology to create features for finite metric spaces (point clouds) [2]. Given a dissimilarity space $(X, d)$, a Vietoris-Rips complex at parameter $r$ consists of simplices whose maximum pairwise dissimilarity between vertices is at most the threshold $r$:

$$\mathcal{R}_r(X) = \{\sigma \subseteq X : d(x_i, x_j) \le r \text{ for all } x_i, x_j \in \sigma\}.$$
A Vietoris-Rips filtration is the nested sequence of Vietoris-Rips complexes obtained by varying the parameter: $\mathcal{R}_r(X) \subseteq \mathcal{R}_{r'}(X)$ if $r \le r'$.
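Concretely, the filtration can be enumerated directly from the pairwise distance matrix. The following is a minimal NumPy sketch (not the BATS implementation used later in the paper; the function name `rips_filtration` is our own) that assigns each simplex the maximum pairwise distance among its vertices:

```python
from itertools import combinations

import numpy as np


def rips_filtration(points, max_dim=2):
    """Enumerate simplices of the Vietoris-Rips filtration.

    Each simplex's filtration value is the largest pairwise distance
    among its vertices (0 for vertices themselves).
    Returns a list of (simplex, value) pairs sorted by value.
    """
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    simplices = [((i,), 0.0) for i in range(n)]
    for k in range(2, max_dim + 2):  # k vertices: edges, triangles, ...
        for sigma in combinations(range(n), k):
            value = max(dist[i, j] for i, j in combinations(sigma, 2))
            simplices.append((sigma, value))
    return sorted(simplices, key=lambda s: (s[1], len(s[0])))
```

For example, for the three points (0,0), (1,0), (0,1) the triangle enters the filtration at its longest edge length, $\sqrt{2}$. This brute-force enumeration is exponential in `max_dim` and is intended only to make the definition concrete.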
Homology is a functor from the category of topological spaces and continuous maps to the category of vector spaces and linear maps (for a general introduction see [18]). The dimension of the $k$-dimensional homology vector space $H_k(\mathcal{X})$ of a topological space $\mathcal{X}$ counts $k$-dimensional topological features of $\mathcal{X}$: $\dim H_0(\mathcal{X})$ is the number of connected components, $\dim H_1(\mathcal{X})$ counts loops, and generally $\dim H_k(\mathcal{X})$ counts $k$-dimensional voids. A continuous map $f : \mathcal{X} \to \mathcal{Y}$ has an induced map $f_* : H_k(\mathcal{X}) \to H_k(\mathcal{Y})$ which maps vectors associated with topological features in $\mathcal{X}$ to vectors associated with topological features in $\mathcal{Y}$. The computation of $H_k(\mathcal{X})$ begins with the construction of a chain complex $(C_\ast, \partial_\ast)$, where $C_k$ is a vector space with a basis element for each $k$-simplex in $\mathcal{X}$, and the boundary map $\partial_k : C_k \to C_{k-1}$ sends each basis element to a linear combination of basis elements of the faces in the boundary of the associated simplex. The boundary maps satisfy $\partial_k \circ \partial_{k+1} = 0$, and $H_k(\mathcal{X})$ is the quotient vector space $\ker \partial_k / \operatorname{img} \partial_{k+1}$.
Persistent homology [14, 38] is an algebraic invariant of filtrations which captures how the topology of a filtration changes using homology. The output of persistent homology is a persistence vector space $V = \{V_r\}$, consisting of vector spaces $V_r$ and linear maps $v_{r,r'} : V_r \to V_{r'}$ induced by inclusion which satisfy the consistency condition $v_{r',r''} \circ v_{r,r'} = v_{r,r''}$ for all $r \le r' \le r''$.
Persistence vector spaces are classified up to isomorphism by birth-death pairs $(b, d)$, or equivalently by their persistence barcode [38] or interval indecomposables [3]. Each pair is associated to the appearance of a new homology vector at filtration parameter $b$ (meaning it is not in the image of an induced map), which maps through the persistence vector space until it enters the kernel of an induced map at filtration parameter $d$. The length of the pair is the difference $d - b$.
Every birth and death in persistent homology is associated with the addition of a particular simplex in the filtration. This follows from the definition of homology as the quotient vector space $\ker \partial_k / \operatorname{img} \partial_{k+1}$. The addition of a new $k$-simplex increases the dimension of $C_k$ by one, and either increases the dimension of $\ker \partial_k$ by one, causing a birth in $H_k$, or increases the dimension of $\operatorname{img} \partial_k$ by one, causing a death in $H_{k-1}$. Because the Vietoris-Rips filtration is determined by its edges, the filtration value of every simplex can be mapped to its largest pairwise distance. This provides a way to map the gradient of a function with respect to births and deaths to a gradient with respect to each pairwise distance – see [17] for additional details.
2.2 The Interleaving and Bottleneck Distances
Interleavings allow for the comparison of two persistence vector spaces [7], as well as other objects filtered by some partially ordered set [1]. Let $U = \{U_t\}$ and $V = \{V_t\}$ be 1-parameter persistence vector spaces with maps $u_{t,t'}$ and $v_{t,t'}$. An $\epsilon$-shift map $f : U \to V$ is a collection of maps $f_t : U_t \to V_{t+\epsilon}$ so that the following diagram commutes for all parameters $t \le t'$
$$v_{t+\epsilon, t'+\epsilon} \circ f_t = f_{t'} \circ u_{t,t'} \qquad (1)$$
An $\epsilon$-interleaving between $U$ and $V$ is a pair of $\epsilon$-shift maps $f : U \to V$ and $g : V \to U$ so that $g_{t+\epsilon} \circ f_t = u_{t,t+2\epsilon}$ and $f_{t+\epsilon} \circ g_t = v_{t,t+2\epsilon}$ for all parameters $t$.
The interleaving distance [7] on persistence vector spaces $U$ and $V$ is
$$d_I(U, V) = \inf\{\epsilon \ge 0 : U \text{ and } V \text{ are } \epsilon\text{-interleaved}\} \qquad (2)$$
This notion of distance satisfies the triangle inequality through the composition of interleavings. Note that the addition or removal of an arbitrary number of zero-length pairs to a persistence vector space $U$ to obtain $U'$ results in $d_I(U, U') = 0$.
The construction of general interleavings, let alone those that would realize the interleaving distance, can be a daunting task. Fortunately, for 1-parameter persistent homology the interleaving distance is equivalent to the geometric (and easily computable) bottleneck distance on persistence diagrams [25].
The bottleneck distance considers the birth-death pairs $(b, d)$ as points in the 2-dimensional plane. The persistence diagram $D$ is the union of this discrete multi-set of points with the diagonal $\Delta = \{(x, x) : x \in \mathbb{R}\}$, where points in $\Delta$ are counted with infinite multiplicity.
Definition 2.1.
A matching between two persistence diagrams $D_1$ and $D_2$ is a subset $M \subseteq D_1 \times D_2$ such that every point in $D_1 \setminus \Delta$ and $D_2 \setminus \Delta$ appears in exactly one pair of $M$.
Definition 2.2.
The bottleneck distance between $D_1$ and $D_2$ is then defined by
$$d_B(D_1, D_2) = \inf_{M} \sup_{(p, q) \in M} \|p - q\|_\infty \qquad (3)$$
The bottleneck distance on persistence diagrams is isometric to the interleaving distance on persistence vector spaces, which is a result known as the isometry theorem:
Theorem 2.3.
[25] Let $D_U$ and $D_V$ be the persistence diagrams of $U$ and $V$ respectively. Then $d_I(U, V) = d_B(D_U, D_V)$.
The matching which realizes the bottleneck distance actually gives an interleaving which maps a vector associated to a persistence pair in $D_U$ to the vector associated with the matched pair in $D_V$.
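For small diagrams, the bottleneck distance can be computed by brute force over all matchings, pairing unmatched points with their diagonal projections. A minimal sketch (exponential-time, for illustration only; the paper's experiments use Hera for this computation):

```python
from itertools import permutations


def bottleneck(dgm1, dgm2):
    """Brute-force bottleneck distance between two small persistence diagrams.

    Points are (birth, death) pairs; a point may instead be matched to
    its nearest diagonal point at L-infinity cost (death - birth) / 2.
    """
    n, m = len(dgm1), len(dgm2)

    def cost(i, j):
        if i < n and j < m:  # off-diagonal point to off-diagonal point
            (b1, d1), (b2, d2) = dgm1[i], dgm2[j]
            return max(abs(b1 - b2), abs(d1 - d2))
        if i < n:            # dgm1[i] matched to the diagonal
            b, d = dgm1[i]
            return (d - b) / 2
        if j < m:            # dgm2[j] matched to the diagonal
            b, d = dgm2[j]
            return (d - b) / 2
        return 0.0           # diagonal matched to diagonal

    size = n + m  # augment each diagram with the other's diagonal projections
    return min(max(cost(i, p[i]) for i in range(size))
               for p in permutations(range(size)))
```

For example, `bottleneck([(0, 2)], [(0, 2.5)])` matches the two points at cost 0.5, which beats sending both to the diagonal (cost 1.25).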
2.3 Bounds on the Interleaving Distance
In practice, persistent homology of the Vietoris-Rips filtration can be quickly approximated using sub-sampling. Bounds on this approximation come from the Hausdorff or Gromov-Hausdorff distance between point clouds [8].
Definition 2.4.
The Hausdorff distance between two subsets $A$ and $B$ of the same metric space $(X, d)$ is
$$d_H(A, B) = \max\left\{\sup_{a \in A} \inf_{b \in B} d(a, b),\ \sup_{b \in B} \inf_{a \in A} d(a, b)\right\}.$$
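For finite Euclidean point clouds this is a short computation; a NumPy sketch (the function name is ours):

```python
import numpy as np


def hausdorff(X, Y):
    """Hausdorff distance between finite point clouds in the same
    Euclidean space, stored as (n, d) and (m, d) arrays."""
    # D[i, j] = distance from X[i] to Y[j]
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    # max over X of nearest point in Y, and vice versa
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

For example, for `X = {0, 1}` and `Y = {0}` on the line, the point 1 is at distance 1 from `Y`, so the Hausdorff distance is 1.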
Definition 2.5.
A correspondence between two sets $A$ and $B$ is a subset $C \subseteq A \times B$ such that: for all $a \in A$ there exists $b \in B$ s.t. $(a, b) \in C$, and for all $b \in B$ there exists $a \in A$ s.t. $(a, b) \in C$. The set of all correspondences between $A$ and $B$ is denoted by $\mathcal{C}(A, B)$.
Definition 2.6.
The Gromov-Hausdorff distance between compact metric spaces $(A, d_A)$ and $(B, d_B)$ is:
$$d_{GH}(A, B) = \frac{1}{2} \inf_{C \in \mathcal{C}(A, B)} \sup_{(a, b), (a', b') \in C} |d_A(a, a') - d_B(b, b')|.$$
Theorem 2.7.
[8] For any finite metric spaces $A$ and $B$ and any $k \ge 0$, the bottleneck distance between the $k$-th persistence diagrams of their Rips filtrations is bounded by the Gromov-Hausdorff distance between the two spaces:
$$d_B(D_k(A), D_k(B)) \le 2\, d_{GH}(A, B).$$
The theorem above also provides an interleaving/bottleneck distance bound for sub-sampled data sets, provided we can bound their Gromov-Hausdorff distance.
Corollary 2.8.
Let $X$ be a dataset and $Y$ be a low-dimensional embedding of $X$, and let $X'$ and $Y'$ be their sub-sampled data sets. Then
$$d_B(D_k(X), D_k(Y)) \le d_B(D_k(X'), D_k(Y')) + 2\, d_{GH}(X, X') + 2\, d_{GH}(Y, Y').$$
The proof only needs the triangle inequality of the bottleneck distance together with Theorem 2.7.
3 Measuring Embedding Distortion through Interleavings
The interleaving distance provides a natural way to measure how much a transformation of a data set $X$ with dissimilarity $d_X$ into a low-dimensional embedding $Y$ with dissimilarity $d_Y$ distorts topology. In particular, it provides a way to eliminate topological type-I errors and reduce topological type-II errors when inferring information about the space $X$ via the embedding $Y$, as might be done in data visualization.
In order to have a notion of topological error, we must select topological features of $Y$ which are believed to be significant, meaning that they are believed to correspond to topological structures in $X$.
Definition 3.1.
A topological type-I error in $Y$ is the selection of a feature in $Y$ which has no corresponding feature in $X$.
For example, if the embedding splits a single cluster in $X$ into two clusters in $Y$, then making a distinction between the two clusters by selecting both pairs in $H_0(Y)$ would be a topological type-I error.
Definition 3.2.
A topological type-II error in $Y$ is made when a feature of $Y$ which corresponds to a structure in $X$ is not selected as significant.
For example, if two clusters in $X$ merge into a single cluster in $Y$, then we are forced to make a topological type-II error, since we can select at most one persistent $H_0$ feature in $Y$.
3.1 Selection of Homological Features
Let $H_k(Y_r)$ denote the $k$-dimensional homology of the Vietoris-Rips complex of $Y$ at parameter $r$, and let $PH_k(Y)$ denote the $k$-dimensional persistent homology of the Vietoris-Rips filtration. Similarly, we have $H_k(X_r)$ and $PH_k(X)$. We refer to each persistence pair in $PH_k(X)$ or $PH_k(Y)$ as a homological feature of $X$ or $Y$ respectively. We would like to select features of $Y$ which are in correspondence with features of $X$. A simple selection procedure is to compute the interleaving distance $\epsilon = d_I(PH_k(X), PH_k(Y))$, and to select any homology class in $PH_k(Y)$ with length greater than $2\epsilon$.
Proposition 3.3.
No type-I errors are made using this selection procedure.
Proof.
Suppose that a selected homology vector has birth $b$ and death $d$ in $PH_k(Y)$ with $d - b > 2\epsilon$. We consider the $\epsilon$-shift maps that realize the interleaving:

$$H_k(Y_b) \to H_k(X_{b+\epsilon}) \to H_k(Y_{b+2\epsilon}) \qquad (4)$$

Because $b + 2\epsilon < d$ and the above diagram commutes, the selected vector must have a non-zero image in $H_k(X_{b+\epsilon})$.
Furthermore, if two selected vectors have the same image in $H_k(X_{b+\epsilon})$, then their difference must map to zero back in $H_k(Y_{b+2\epsilon})$, so they represent the same selected feature. ∎
Proposition 3.4.
Every persistence pair in $PH_k(X)$ with length greater than $4\epsilon$ has a corresponding persistence pair in $PH_k(Y)$ which is selected.
Proof.
Suppose that a homology vector in $PH_k(X)$ has birth $b$ and death $d$, with $d - b > 4\epsilon$. Then in the following diagram the associated vector must have a non-zero image in each vector space of the $\epsilon$-interleaving:

$$H_k(X_b) \to H_k(Y_{b+\epsilon}) \to H_k(X_{b+2\epsilon}) \to H_k(Y_{b+3\epsilon}) \to H_k(X_{b+4\epsilon}) \qquad (5)$$

This implies that the persistence pair $(b, d)$ in $PH_k(X)$ is in correspondence with a persistence pair $(b', d')$ in $PH_k(Y)$, where $b' \le b + \epsilon$ and $d' \ge d - \epsilon$. This pair has length $d' - b' \ge d - b - 2\epsilon > 2\epsilon$, so it is selected by our procedure. ∎
As a result, not only does this selection procedure guarantee that we will make no topological type-I errors, but we also will not make any type-II errors involving a homological feature of $X$ of sufficient size. Note that if we lower our threshold for selecting pairs of $PH_k(Y)$ then we could introduce the possibility of topological type-I errors, and that there is always the possibility of topological type-II errors when neither pair in the correspondence has sufficient length.
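The selection procedure is simple to state in code. A sketch, where `dgm_Y` and `dgm_X` are lists of (birth, death) pairs and `epsilon` is the computed interleaving/bottleneck distance (function names are our own):

```python
def select_features(dgm_Y, epsilon):
    """Select pairs of the embedding's diagram with length > 2 * epsilon.

    By Proposition 3.3, no selected pair is a topological type-I error."""
    return [(b, d) for (b, d) in dgm_Y if d - b > 2 * epsilon]


def guaranteed_features(dgm_X, epsilon):
    """Pairs of the original diagram with length > 4 * epsilon; by
    Proposition 3.4 each has a selected counterpart in the embedding."""
    return [(b, d) for (b, d) in dgm_X if d - b > 4 * epsilon]
```

For example, with `epsilon = 1.0`, a pair (0, 5) in the embedding is selected while (1, 2) is not, since only the former has length greater than 2.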
3.2 Application to General Dimension Reduction
This selection procedure can be used to assess how well any transformation of point cloud data preserves topology. In particular, we can compute the bottleneck distance $d_B$, the number of features of $Y$ with length greater than $2 d_B$, and the number of features of $X$ with length greater than $4 d_B$. In the context of dimension reduction, it would be desirable to minimize the interleaving distance in order to maximize the number of features we can identify which are in correspondence with features in the original data set. This is the approach we pursue in Section 4.
In Table 1 we compare several algorithms for dimension reduction on a set of images from the Columbia Object Image Library (COIL-100) [28] taken of a tomato at various angles in a circle, which has a single large $H_1$ feature in the original data. Methods compared include PCA, MDS [23], and ISOMAP [32], along with a method based on minimizing the bottleneck distance developed in Section 4, PH, and a hybrid PH + PCA. Because every Vietoris-Rips filtration has a single $H_0$ pair with death at $\infty$, at least one $H_0$ feature will always be selected. Only PH + PCA allows for the selection of an $H_1$ feature. Visualization of each embedding can be found in the appendix, and a more detailed visualization of the PH + PCA embedding can be found in Figure 1.
Method | max | $d_B(H_0)$ | $d_B(H_1)$ | # selected $H_0$ | # selected $H_1$
---|---|---|---|---|---
PCA | 4.854 | 5.148 | 10.852 | 1 | 0
MDS | 5.626 | 4.782 | 10.217 | 1 | 0
ISOMAP | 113.968 | 1.935 | 108.623 | 1 | 0
PH | 7.543 | 5.295 | 5.295 | 1 | 0
PH + PCA | 9.234 | 4.689 | 4.532 | 1 | 1
3.3 Homological Caveats
Some care must be taken in interpreting the interleaving as presented here. Importantly, the correspondence between persistence pairs of $X$ and $Y$ is entirely algebraic; there are not necessarily topological maps between $X$ and $Y$ which induce the interleaving on homology. Even if the interleaving distance between $PH_k(X)$ and $PH_k(Y)$ is zero, there is no guarantee that there is a natural topological map between the two spaces. There are many possible spaces which have identical persistent homology [10], and to maintain some level of geometric interpretability of persistence pairs of $Y$ in terms of the persistence pairs of $X$, it is desirable to incorporate additional constraints onto the embedding $Y$, as is often the case in dimension reduction algorithms.
4 Optimizing Interleaving Distance
We now turn to explicitly optimizing an embedding $Y$ to minimize the interleaving distance to the original data set $X$.
4.1 Optimizing Persistent Homology
Gradient-based optimization techniques can be applied to persistent homology by backpropagating the gradient of a function of the persistence pairs back to the input values of a filtration. This is often done by considering a featurization of the persistence pairs such as algebraic functions of the pairs [17] or persistence landscapes [22, 6], but in our situation we will use the bottleneck distance $d_B$ directly.
Optimization with Vietoris-Rips filtrations is described in detail in [17], which we summarize here. The key is that every simplex addition in the filtration either creates or destroys homology, and the corresponding birth or death takes that filtration value. If $\mathcal{L}$ is a function of persistence pairs, this allows for the mapping of the gradient $\partial \mathcal{L}/\partial b_i$ or $\partial \mathcal{L}/\partial d_i$ to the filtration value of the unique simplex responsible for that birth or death. In the case of Vietoris-Rips filtrations, the filtration value of a simplex is the maximum pairwise distance $d(x_i, x_j)$ where $x_i, x_j$ are vertices in the simplex, so this can then be backpropagated to a gradient with respect to pairwise distances. There is a potential issue here, which is that a single edge may map to multiple higher-order simplices, but Theorem 4.2 indicates that this is not generally a problem. Finally, if we choose a differentiable metric on $Y$ such as the Euclidean metric, we can backpropagate the gradient to point locations in the embedding.
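The last step of this chain rule has a closed form for the Euclidean metric: the gradient of a critical edge length with respect to its two endpoints is the unit vector between them. A NumPy sketch of this step (our implementation uses PyTorch autodiff instead; the function name is ours):

```python
import numpy as np


def edge_length_grad(X, i, j):
    """Gradient of the pairwise distance d(x_i, x_j) with respect to the
    point locations -- the quantity backpropagated from a birth or death
    value through its critical edge. Returns (distance, gradient)."""
    diff = X[i] - X[j]
    d = np.linalg.norm(diff)
    grad = np.zeros_like(X)
    grad[i] = diff / d    # unit vector from x_j toward x_i
    grad[j] = -diff / d
    return d, grad
```

Taking a step against `grad` shrinks the edge, and hence moves the birth or death value carried by that edge; only the two endpoint rows of the gradient are non-zero.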
4.2 Optimizing the Bottleneck Distance
We are interested in optimizing the embedding $Y$ to minimize the interleaving distance via the bottleneck distance $d_B(PH_k(X), PH_k(Y))$. Because the original data set is fixed, we have a function of $Y$ alone, which we can fit into the optimization framework for persistent homology.
First, we recall that the bottleneck distance uses a matching between persistence pairs of $D(X)$ and $D(Y)$ and the diagonal representing potential zero-length pairs, and that the distance is realized by the maximum-weight pair in the optimal matching. This leads to three possibilities.
-
1.
The maximum-weight match occurs between two non-diagonal pairs.
-
2.
The maximum-weight match occurs between a pair in $D(Y)$ and the diagonal in $D(X)$.
-
3.
The maximum-weight match occurs between the diagonal in $D(Y)$ and a pair in $D(X)$.
The bottleneck distance does not consider matchings between two diagonal points. In the first two cases, it is possible to find a descent direction. In the third case, $Y$ is at a local saddle point of the bottleneck distance.
We will show that under the condition that a single pair attains the maximum-weight match, we may obtain a descent direction for $d_B$ with respect to the point locations in $Y$. This is a mild condition which we observe to hold in our empirical experiments.
Proposition 4.1.
If a single pair in the maximum-weight matching of $D(X)$ and $D(Y)$ realizes the bottleneck distance, then the backpropagated gradient is non-zero for at most two pairwise distances in $Y$.
Proof.
Because a single pair realizes the distance, it must have non-zero length. In this case, the associated birth and death simplices must map to distinct pairwise distances in $Y$. Depending on which coordinates of the pair differ from those of the matched point, one or both of these distances may admit a non-zero gradient. ∎
Theorem 4.2.
If a single pair in $D(Y)$ realizes the bottleneck distance, then $d_B$ admits a descent direction on the point embedding $Y$.
Proof.
From Proposition 4.1, there are at most two pairwise distances which have a non-zero backpropagated gradient. Let these distances be $d(x_1, x_2)$ and $d(x_3, x_4)$. If the points are distinct, then we can simply backpropagate the gradient to each point location.
One point might possibly participate in both distances – if two points coincided, the distances would not be distinct. In this case, let the distances be $d(x_0, x_1)$ and $d(x_0, x_2)$, and take a step in the directions $-\partial d_B/\partial x_1$ and $-\partial d_B/\partial x_2$, leaving $x_0$ unchanged. Because both distances are non-zero, and at least one of the two partial gradients is non-zero, this step direction will decrease the bottleneck distance. ∎
Proposition 4.3.
The bottleneck distance $d_B$ admits a generalized subdifferential.
Proof.
Any persistence pair in $D(Y)$ that does not participate in the bottleneck matching can be perturbed without affecting the bottleneck distance. ∎
Proposition 4.3 implies that we have great freedom to perturb the embedding while minimizing the bottleneck distance. In the third case above, when the diagonal of $D(Y)$ is involved in the maximum-weight match, we have the freedom to perturb $Y$ in any direction. This allows for easy optimization of the bottleneck distance in conjunction with a secondary objective.
5 Experiments
In this section, we present experiments and results using our dimension reduction method. For convenience, we call our method PH optimization, as it is based on persistent homology.
We implement a form of projection pursuit [16] which seeks a linear projection minimizing the bottleneck distance between the persistence diagrams of $X$ and the projected data. We add an orthogonality constraint to the optimization candidate space (the Stiefel manifold), and the gradient descent algorithm uses the Cayley transform introduced in [35].
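A minimal sketch of the Cayley-transform update of Wen and Yin [35], which moves a matrix with orthonormal columns along a descent direction while preserving orthonormality exactly (variable names are ours):

```python
import numpy as np


def cayley_step(X, G, tau):
    """One Cayley-transform descent step on the Stiefel manifold.

    X: (n, p) matrix with orthonormal columns; G: Euclidean gradient at X;
    tau: step size. The update multiplies X by an orthogonal Cayley
    transform of a skew-symmetric matrix, so the new iterate keeps
    orthonormal columns up to round-off.
    """
    n = X.shape[0]
    A = G @ X.T - X @ G.T                    # skew-symmetric direction
    I = np.eye(n)
    # Y(tau) = (I + tau/2 A)^{-1} (I - tau/2 A) X
    return np.linalg.solve(I + (tau / 2) * A, (I - (tau / 2) * A) @ X)
```

Because the Cayley transform of a skew-symmetric matrix is orthogonal, `cayley_step(X, G, tau).T @ cayley_step(X, G, tau)` remains the identity for any gradient `G`; note the step requires solving an $n \times n$ system, which is the memory cost discussed in Section 5.3.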
In our dimension reduction method, since optimization requires repeated computation of persistent homology, controlling the number of points is crucial. In an effort to obtain a sample with close persistent homology, we use a greedy strategy based on minimizing the Hausdorff distance from the sample to the full point cloud. However, recall that Corollary 2.8 requires the Gromov-Hausdorff distance, which requires solving a quadratic assignment problem. We instead bound the Gromov-Hausdorff distance by the Hausdorff distance returned by the greedy sampling algorithm.
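This greedy strategy is standard farthest-point (maxmin) sampling; the Hausdorff distance from the sample to the full cloud falls out of the same computation. A sketch (the function name is ours):

```python
import numpy as np


def greedy_sample(X, k, seed_index=0):
    """Greedy farthest-point sampling of k points from point cloud X.

    Repeatedly adds the point farthest from the current sample. Returns
    the selected indices and the Hausdorff distance from the sample to X
    (the largest distance from any point to its nearest sampled point).
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    idx = [seed_index]
    mind = D[seed_index].copy()      # distance of each point to the sample
    for _ in range(k - 1):
        nxt = int(np.argmax(mind))   # farthest point from current sample
        idx.append(nxt)
        mind = np.minimum(mind, D[nxt])
    return idx, float(mind.max())
```

For example, sampling 2 points from {0, 1, 10} on the line starting at 0 picks 10 next, leaving a Hausdorff distance of 1 from the sample to the full set.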
Our procedure is implemented in PyTorch, which supports automatic differentiation without explicitly passing gradients once we have defined two layers: a Vietoris-Rips layer and a bottleneck distance layer. The Rips layer, based on BATS (https://bats-tda.readthedocs.io/), computes persistent homology of a Rips filtration and finds the inverse map described in Section 4.1. The bottleneck distance layer is supported by Hera [21], which can efficiently find the matching for the bottleneck distance.
5.1 5-Tendril
We generate a data set, 5-Tendril, consisting of 500 points in 5 dimensions sampled along 5 tendrils, each of which consists of 50 randomly generated points on a canonical basis vector of Euclidean space. Here, our PH optimization method first samples 100 points and then optimizes the bottleneck distance on the $H_0$ persistence diagram. We compare 3 different dimension reduction algorithms: PCA, MDS, and PH optimization. In Figure 3, the results show that PCA and MDS reveal only 3 branches, while PH optimization reveals all 5.
5.2 Orthogonal Cycle
We generate a data set of 500 samples in 5 dimensions consisting of cycles, each of which has 50 points and lies in a 2-dimensional plane spanned by two canonical basis vectors, with radius one. For speedup, our PH optimization method samples 200 points and then optimizes the bottleneck distance on $H_1$.
Since the dataset consists of cycles in orthogonal planes and our PH optimization method also pursues an orthonormal projection, there is no way to find a projection that shows and separates all cycles within the orthogonality constraint. In Figure 4, we show the dimension reduction results with 4 different methods. PCA leads to a tangle where cycles cross each other; without labels, one cannot tell how many cycles exist in this plot. ISOMAP provides a five-star shape with half of the orthogonal cycles missing, but without labels it is hard to determine whether any cycles exist. MDS provides a 5-ring, but some rings are not comprised of a single orthogonal cycle; for example, the yellow orthogonal cycle lies on two rings. Our PH method shows 3 cycles, and the three straight lines between them indicate the collapse of orthogonal structure in the projection.
5.3 COIL-100
The Columbia Object Image Library (COIL-100) [28] dataset contains 7200 color images of 100 objects, where each object has 72 images with 3 color channels taken at pose intervals of 5 degrees. We pick the tomato data (see Figure 2) from COIL-100 and perform our PH optimization algorithm on both the $H_0$ and $H_1$ persistence diagrams. Although the number of points is only 72, the high feature dimension makes directly running our procedure in PyTorch prohibitive due to the memory cost of the Cayley transform. To address this, we devise two procedures: a) optimizing the projection directly using the Cayley transform to handle orthogonality, denoted PH; b) first performing PCA on the tomato data to reduce the dimension to 10, and then reducing the dimension to 2 with our PyTorch PH optimization algorithm, denoted PH+PCA.
We include visualizations of the results using PH, PH+PCA, PCA, ISOMAP, and MDS in the Appendix, and show the result of PH+PCA in Figure 1. In Figure 1a), we can see that the 72 projected points constitute a circle, reflecting that the dataset consists of pictures taken through 360 degrees at pose interval 5 around the tomato. In Figure 1b), we draw the persistence diagram of the projected data set and an uncertainty band with width 9.065, which is twice the bottleneck distance between the original and projected tomato datasets. By the discussion in Section 3, there is only one $H_1$ class in the projected data that we can ensure also exists in the original one. In Figure 1c), we draw this certain $H_1$ representative in red and the longest uncertain $H_1$ representative in blue.
5.4 Natural Image Patches
Natural image patches are a well-studied data set with interesting topological structures at various densities [4]. We follow the data processing procedure of [24] to sample patches from the van Hateren natural images database [33]. We further refine a sub-sample of 50,000 patches using the co-density estimator of [11] to obtain a data set of 20,000 patches which resembles the "three-circle" model of [11].
In Figure 5 we apply our procedure to two initial projections. In both cases, we use a greedy subsampling of 100 points of the data, which contains 5 robust $H_1$ pairs, agreeing with the three-circle model. We first initialize the projection with the first two principal components of the data and then refine by optimizing the bottleneck distance on $H_1$ pairs. The initial projection onto principal components displays a clear "primary circle," with the two secondary circles projecting onto two chords. Our procedure decreases the bottleneck distance on the 100 sampled points, but there is minimal visual difference between the two projections, indicating that the initialization was near a local optimum.
The second row of Figure 5 starts with a random projection into two dimensions. We again optimize to minimize the bottleneck distance on the $H_1$ pairs, decreasing the bottleneck distance on the sampled points. In this case, there is a noticeable visual difference between the two projections. In the first projection, a noisy projection of the primary circle is visible; in the second, we see a clear visualization of one of the two secondary circles, with the primary circle and the other secondary circle collapsed to a chord.
In both of these experiments, our bottleneck distance bounds do not allow us to confidently select any features in the visualization. As with the orthogonal cycle data, the reason is fundamental: the three-circle model embeds into a Klein bottle [4], and any projection to fewer than 4 dimensions (the minimum needed to embed the Klein bottle) will result in spurious intersections of at least two of the three circles. Despite these limitations, our method is still able to present a subset of the important features of this data.
6 Discussion
In this paper, we propose the use of the interleaving distance in dimensionality reduction. We show that this distance can be used to identify topological features in correspondence between a full data set and a low-dimensional embedding produced by any dimension reduction procedure. We also demonstrate how optimization of the equivalent bottleneck distance can increase the significance of important topological features of the data set $X$ in the embedding $Y$.
We incorporate bottleneck distance optimization into projection pursuit and find that our method can preserve topological information when projecting from high-dimensional spaces to two dimensions for visualization. We find that in several cases our method will focus on visualizing a subset of the important topological structures, as orthogonality of subspaces in the full data set prohibits the visualization of all structures using a single projection.
Our method could be combined with other optimization objectives such as the maximization of variance in the projection as in PCA. One limitation of our method is that the bottleneck distance is non-smooth and has many local minima. Hybrid schemes which combine bottleneck distance optimization with other objectives may generally help with optimization.
One direction of future work that could help improve the ability to detect whether topological features in embeddings have correspondence with the original data set would be to develop interleaving techniques based on non-linear shift maps. Because the interleaving distance focuses on the worst possible distortion between persistent homologies, a more fine-grained analysis may reveal that more information is preserved in practice.
Acknowledgements:
BN was supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00112190040.
References
- [1] Bubenik, P., and Scott, J. A. Categorification of Persistent Homology. Discrete & Computational Geometry 51, 3 (Apr. 2014), 600–627.
- [2] Carlsson, G. Topological pattern recognition for point cloud data. Acta Numerica 23 (May 2014), 289–368.
- [3] Carlsson, G., and de Silva, V. Zigzag Persistence. Foundations of Computational Mathematics 10, 4 (Aug. 2010), 367–405.
- [4] Carlsson, G., Ishkhanov, T., de Silva, V., and Zomorodian, A. On the Local Behavior of Spaces of Natural Images. International Journal of Computer Vision 76, 1 (Jan. 2008), 1–12.
- [5] Carriere, M., Chazal, F., Glisse, M., Ike, Y., Kannan, H., and Umeda, Y. Optimizing persistent homology based functions. In Proceedings of the 38th International Conference on Machine Learning (July 2021), PMLR, pp. 1294–1303.
- [6] Carriere, M., Chazal, F., Ike, Y., Lacombe, T., Royer, M., and Umeda, Y. PersLay: A Neural Network Layer for Persistence Diagrams and New Graph Topological Signatures. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics (June 2020), PMLR, pp. 2786–2796.
- [7] Chazal, F., Cohen-Steiner, D., Glisse, M., Guibas, L. J., and Oudot, S. Y. Proximity of persistence modules and their diagrams. In Proceedings of the 25th Annual Symposium on Computational Geometry - SCG ’09 (Aarhus, Denmark, 2009), ACM Press, p. 237.
- [8] Chazal, F., Cohen-Steiner, D., Guibas, L. J., Mémoli, F., and Oudot, S. Y. Gromov-Hausdorff Stable Signatures for Shapes using Persistence. In Computer Graphics Forum (2009), vol. 28, Wiley Online Library, pp. 1393–1403.
- [9] Cohen-Steiner, D., Edelsbrunner, H., and Harer, J. Stability of Persistence Diagrams. Discrete & Computational Geometry 37, 1 (Jan. 2007), 103–120.
- [10] Curry, J. The Fiber of the Persistence Map. Journal of Applied and Computational Topology 2, 3 (2018), 301–321.
- [11] de Silva, V., and Carlsson, G. Topological estimation using witness complexes. In Proc. Sympos. Point-Based Graphics (June 2004).
- [12] de Silva, V., Morozov, D., and Vejdemo-Johansson, M. Persistent Cohomology and Circular Coordinates. Discrete & Computational Geometry 45, 4 (June 2011), 737–759.
- [13] Doraiswamy, H., Tierny, J., Silva, P. J. S., Nonato, L. G., and Silva, C. TopoMap: A 0-dimensional Homology Preserving Projection of High-Dimensional Data. IEEE Transactions on Visualization and Computer Graphics 27, 2 (Feb. 2021), 561–571.
- [14] Edelsbrunner, H., Letscher, D., and Zomorodian, A. Topological Persistence and Simplification. Discrete & Computational Geometry 28, 4 (Nov. 2002), 511–533.
- [15] Fasy, B. T., Lecci, F., Rinaldo, A., Wasserman, L., Balakrishnan, S., and Singh, A. Confidence Sets for Persistence Diagrams. The Annals of Statistics 42, 6 (2014), 2301–2339.
- [16] Friedman, J. H., and Tukey, J. W. A Projection Pursuit Algorithm for Exploratory Data Analysis. IEEE Transactions on Computers C-23, 9 (Sept. 1974), 881–890.
- [17] Gabrielsson, R. B., Nelson, B. J., Dwaraknath, A., and Skraba, P. A Topology Layer for Machine Learning. In International Conference on Artificial Intelligence and Statistics (June 2020), PMLR, pp. 1553–1563.
- [18] Hatcher, A. Algebraic Topology. Cambridge University Press, Cambridge ; New York, 2002.
- [19] Hofer, C., Graf, F., Rieck, B., Niethammer, M., and Kwitt, R. Graph Filtration Learning. In Proceedings of the 37th International Conference on Machine Learning (Nov. 2020), PMLR, pp. 4314–4323.
- [20] Kachan, O. Persistent Homology-based Projection Pursuit. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (Seattle, WA, USA, June 2020), IEEE, pp. 3744–3751.
- [21] Kerber, M., Morozov, D., and Nigmetov, A. Geometry helps to compare persistence diagrams. Journal of Experimental Algorithmics (JEA) 22 (2017), 1–20.
- [22] Kim, K., Zaheer, M., Chazal, F., Kim, J., Kim, J. S., and Wasserman, L. PLLay: Efficient Topological Layer based on Persistence Landscapes. In Advances in Neural Information Processing Systems (2020), vol. 33, p. 13.
- [23] Kruskal, J. B. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika 29, 1 (Mar. 1964), 1–27.
- [24] Lee, A. B., Pedersen, K. S., and Mumford, D. B. The Nonlinear Statistics of High-Contrast Patches in Natural Images. International Journal of Computer Vision 54, 1-3 (2003), 83–103.
- [25] Lesnick, M. The Theory of the Interleaving Distance on Multidimensional Persistence Modules. Foundations of Computational Mathematics 15, 3 (June 2015), 613–650.
- [26] Leygonie, J., Oudot, S., and Tillmann, U. A Framework for Differential Calculus on Persistence Barcodes. Foundations of Computational Mathematics (July 2021).
- [27] Moor, M., Horn, M., Rieck, B., and Borgwardt, K. Topological Autoencoders. In Proceedings of the 37th International Conference on Machine Learning (Nov. 2020), PMLR, pp. 7045–7054.
- [28] Nene, S. A., Nayar, S. K., and Murase, H. Columbia Object Image Library (COIL-100). Tech. rep., 1996.
- [29] Poulenard, A., Skraba, P., and Ovsjanikov, M. Topological Function Optimization for Continuous Shape Matching. Computer Graphics Forum 37, 5 (Aug. 2018), 13–25.
- [30] Rieck, B., and Leitte, H. Persistent Homology for the Evaluation of Dimensionality Reduction Schemes. Computer Graphics Forum 34, 3 (2015), 431–440.
- [31] Rieck, B., and Leitte, H. Agreement Analysis of Quality Measures for Dimensionality Reduction. In Topological Methods in Data Analysis and Visualization IV, H. Carr, C. Garth, and T. Weinkauf, Eds. Springer International Publishing, Cham, 2017, pp. 103–117.
- [32] Tenenbaum, J. B., de Silva, V., and Langford, J. C. A Global Geometric Framework for Nonlinear Dimensionality Reduction. Science 290, 5500 (Dec. 2000), 2319–2323.
- [33] van Hateren, J. H., and van der Schaaf, A. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings. Biological Sciences 265, 1394 (Mar. 1998), 359–366.
- [34] Wagner, A., Solomon, E., and Bendich, P. Improving Metric Dimensionality Reduction with Distributed Topology. arXiv:2106.07613 [cs, math] (Sept. 2021).
- [35] Wen, Z., and Yin, W. A feasible method for optimization with orthogonality constraints. Mathematical Programming 142 (2013), 397–434.
- [36] Yan, L., Zhao, Y., Rosen, P., Scheidegger, C., and Wang, B. Homology-Preserving Dimensionality Reduction via Manifold Landmarking and Tearing. arXiv:1806.08460 [cs] (June 2018).
- [37] Yu, B., and You, K. Shape-Preserving Dimensionality Reduction : An Algorithm and Measures of Topological Equivalence. arXiv:2106.02096 [cs, stat] (June 2021).
- [38] Zomorodian, A., and Carlsson, G. Computing Persistent Homology. Discrete & Computational Geometry 33, 2 (Feb. 2005), 249–274.
Appendix A
A.1 Additional Dimension Reduction Results
Method | Bottleneck Distances (B0, B1) | Dimension reduced data
---|---|---
PH + PCA | 4.689, 4.532 | (image omitted)
PH | 5.295, 12.661 | (image omitted)
PCA | 5.148, 10.852 | (image omitted)
ISOMAP | 1.935, 108.623 | (image omitted)
MDS | 4.782, 10.217 | (image omitted)
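The bottleneck distances (B0, B1) in the table compare the degree-0 and degree-1 persistence diagrams of the original data with those of each embedding. As an illustration of the quantity being reported (not the implementation used for the experiments, which would rely on an optimized matcher such as that of [21]), the following is a brute-force sketch for very small diagrams:

```python
from itertools import permutations

def bottleneck(D1, D2):
    """Brute-force bottleneck distance between two small persistence
    diagrams, given as lists of (birth, death) pairs.  Each point may be
    matched to a point of the other diagram or to the diagonal."""
    n, m = len(D1), len(D2)

    def linf(p, q):
        # l-infinity distance between two diagram points
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    def to_diag(p):
        # l-infinity distance from p to its nearest diagonal point
        return abs(p[1] - p[0]) / 2.0

    def cost(i, j):
        # Indices >= n (resp. >= m) stand for copies of the diagonal,
        # so diagonal-to-diagonal matches are free.
        if i < n and j < m:
            return linf(D1[i], D2[j])
        if i < n:
            return to_diag(D1[i])
        if j < m:
            return to_diag(D2[j])
        return 0.0

    # Pad both diagrams to n + m points and minimize, over all perfect
    # matchings, the largest matched distance.
    return min(
        max((cost(i, p[i]) for i in range(n + m)), default=0.0)
        for p in permutations(range(n + m))
    )
```

The enumeration over matchings is factorial in the number of points, so this is only feasible for a handful of diagram points; in practice one would use a geometric matching algorithm as in [21].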