Comparing Foundation Models using Data Kernels
Abstract
Recent advances in self-supervised learning and neural network scaling have enabled the creation of large models, known as foundation models, which can be easily adapted to a wide range of downstream tasks. The current paradigm for comparing foundation models involves evaluating them with aggregate metrics on various benchmark datasets. This method of model comparison is heavily dependent on the chosen evaluation metric, which makes it unsuitable for situations where the ideal metric is either not obvious or unavailable. In this work, we present a methodology for directly comparing the embedding space geometry of foundation models, which facilitates model comparison without the need for an explicit evaluation metric. Our methodology is grounded in random graph theory and enables valid hypothesis testing of embedding similarity on a per-datum basis. Further, we demonstrate how our methodology can be extended to facilitate population level model comparison. In particular, we show how our framework can induce a manifold of models equipped with a distance function that correlates strongly with several downstream metrics. We remark on the utility of this population level model comparison as a first step towards a taxonomic science of foundation models.
1 Introduction
In 2019, Devlin et al. introduced BERT [8], a neural language model trained on massive amounts of unlabeled data which produces general purpose representations of language called embeddings. BERT’s embeddings can be used to dramatically reduce the amount of data and compute required to train a model on a downstream task. BERT was the first of a class of models that have come to be known as foundation models, or large models trained in a self-supervised fashion which can be readily adapted to downstream tasks. Recently, advances in language model scaling [6], prompt design [24], and modality fusion [19] have led to the rapid and near ubiquitous adoption of foundation models across industry and academia alike [4].
Principled evaluation methodologies for foundation models have not kept pace with this mass adoption. In particular, the most extensive attempts to characterize and evaluate large language models [14, 17] involve benchmarking their performance on a wide variety of datasets using a set of aggregate performance metrics. Unfortunately, this method of model comparison is unsuitable in situations where the ideal metric is either not obvious or unavailable. Further, even when an evaluation metric is known, it may provide an incomplete characterization of model performance. For example, comparing models based on aggregate perplexity alone is not sufficient for understanding if they are disproportionately underperformant on a particular subgroup of data [3].
An ideal model comparison methodology would surface exactly the set of data that differs between two models. This would enable practitioners to discover systematic differences between the models being compared, without having to define a metric which captures those differences a priori. Towards this end, we propose a framework for directly comparing the geometry of the embedding spaces learned by different models on a per-datum basis. The framework enables, for the first time, a comparison of the representations learned by foundation models that is agnostic to any particular performance metric. We test our framework using a controlled training data ablation experiment, and find that it is able to surface changes in the representations of documents corresponding to the ablated class.
Due to the already large and continually increasing size of the foundation model design space, an ideal model comparison would also facilitate population level model comparison. Accordingly, we demonstrate how to extend our framework to enable multi-model comparison by inducing a manifold of models. We find that the distance between two models on this manifold is correlated strongly with the similarity of their performance on downstream tasks. In future work, we aim to scale our population level comparison methodology to induce an empirical manifold of models. Such a manifold would allow practitioners to frame model evaluation as a taxonomic problem, and generalize findings about the performance of a particular model to the performance of a family of models.
Notation. We use lower case letters (e.g., $x$) for vectors, upper case letters (e.g., $X$) for matrices, $\| \cdot \|_2$ to denote the Euclidean norm on vectors, and $\| \cdot \|$ for the spectral norm on matrices. We use $\hat{\theta}$ to denote an estimate of the population parameter $\theta$.
2 Modeling Embedding Space with the Data Kernel
2.1 Defining the Data Kernel
Before we can compare the embedding spaces of multiple foundation models, we require a model of the embedding space of a single foundation model. Consider a foundation model $f: \mathcal{X} \to \mathbb{R}^{p}$, where $\mathcal{X}$ is the model’s input space (e.g., the space of text, images, etc.), and $\mathbb{R}^{p}$ is the model’s embedding space (e.g., pooled word embeddings, image embeddings, etc.). For input examples $x_1, \dots, x_n \in \mathcal{X}$, we compute the embeddings of these examples $f(x_1), \dots, f(x_n)$. Let $\mathcal{D}$ be the set $\{x_1, \dots, x_n\}$ and $F \in \mathbb{R}^{n \times p}$ be the matrix whose $i$-th row is $f(x_i)$. We define the data kernel of $f$ on $\mathcal{D}$ as
$$A = \operatorname{top}_k\!\left(F F^{\top}\right) \qquad (1)$$

where $\operatorname{top}_k(\cdot)$ is a function applied to every cell in its input matrix that returns 1 if that cell is among the top $k$ values of its row (and not on the diagonal) and 0 otherwise. In other words, $\operatorname{top}_k(\cdot)$ returns the hollow adjacency matrix of the $k$-nearest neighbor graph of its argument.
We interpret the data kernel $A$ as the adjacency matrix of a graph where each node is an element in our input data, and an edge exists between nodes $i$ and $j$ if $x_j$ is one of $x_i$’s nearest neighbors. Critically, the notion of distance used to define proximity when constructing this graph is induced by the foundation model $f$.
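To make the construction concrete, below is a minimal NumPy sketch of Eq. (1). The function name `data_kernel` and the use of dot-product similarity between embedding rows to define nearest neighbors are our illustrative choices; any model-induced similarity could be substituted.

```python
import numpy as np

def data_kernel(F, k):
    """Hollow adjacency matrix of the k-nearest neighbor graph of the rows of F.

    F : (n, p) array whose i-th row is the embedding f(x_i).
    Returns an (n, n) 0/1 matrix A with A[i, j] = 1 iff x_j is among the
    k nearest neighbors of x_i under the model-induced similarity.
    """
    S = F @ F.T                                # pairwise dot-product similarities
    np.fill_diagonal(S, -np.inf)               # forbid self-edges (hollow matrix)
    neighbors = np.argsort(-S, axis=1)[:, :k]  # top-k entries of each row
    A = np.zeros(S.shape)
    rows = np.repeat(np.arange(F.shape[0]), k)
    A[rows, neighbors.ravel()] = 1.0
    return A
```

The matrix returned here is directed (row $i$ points to its neighbors); it is symmetrized, e.g. via $\max(A, A^{\top})$, before the joint embedding of Section 2.3.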
Modeling data geometry with a neighborhood graph is an established technique with known desirable properties in the dimensionality reduction literature. For example, it is known that the shortest paths on the weighted neighbor graph approximate the geodesic distances between datapoints under the metric induced by the model [22]. The popular UMAP algorithm [16] utilizes the weighted neighbor graph to compute visualizations of high dimensional manifolds. The unweighted neighbor graph, as described in Eq. (1), retains many desirable limiting statistical properties, such as consistent estimation of the true-but-unknown underlying manifold density and metric [15, 10]. These properties motivate our use of the data kernel as a measurement of the geometry induced on a set of data by a particular model.
2.2 Modeling Data Kernels as RDPGs
The Random Dot Product Graph (RDPG) [27] is a flexible model for undirected, hollow, symmetric random graphs that has been applied successfully in a variety of domains [12, 7, 25]. RDPGs posit that each node $i$ in a graph has an associated latent position $X_i \in \mathbb{R}^{d}$, and that the probability of an edge existing between nodes $i$ and $j$ in any realization of the RDPG is exactly the dot product of their corresponding latent positions, $\langle X_i, X_j \rangle$. We write $A \sim \mathrm{RDPG}(X)$ to refer to a graph sampled from an RDPG with latent position matrix $X$.
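As an illustration of the generative model, a short sketch of sampling from an RDPG (the helper name is ours):

```python
import numpy as np

def sample_rdpg(X, rng=None):
    """Sample a hollow, symmetric adjacency matrix A ~ RDPG(X).

    X : (n, d) latent position matrix; <X_i, X_j> is clipped into [0, 1]
    so it can be used as an edge probability.
    """
    rng = np.random.default_rng(rng)
    P = np.clip(X @ X.T, 0.0, 1.0)               # edge probability matrix
    upper = np.triu(rng.random(P.shape) < P, 1)  # independent edges above diagonal
    return (upper | upper.T).astype(float)       # symmetric, zero diagonal
```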
For analysis related to a single data kernel, methods developed under the RDPG assumption can rely on the consistent estimation of the latent positions via the adjacency spectral embedding [1]. The consistency result holds up to the non-identifiability of an orthogonal transformation of the true-but-unknown latent positions, since $(XW)(XW)^{\top} = XX^{\top}$ for any orthogonal matrix $W$.
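A minimal sketch of the adjacency spectral embedding, assuming the embedding dimension $d$ is chosen by the practitioner:

```python
import numpy as np

def ase(A, d):
    """Adjacency spectral embedding of a symmetric adjacency matrix A:
    the rows of U_d |S_d|^{1/2}, where U_d and S_d hold the d
    largest-magnitude eigenpairs of A."""
    vals, vecs = np.linalg.eigh(A)              # A is symmetric
    top = np.argsort(np.abs(vals))[::-1][:d]   # d largest-magnitude eigenvalues
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))
```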
In the context of modeling and comparing the embedding spaces of foundation models using RDPGs, the orthogonal non-identifiability of the parameters of the RDPG is quite natural. For example, foundation models which employ cosine similarity to measure embedding proximity such as CLIP [19] and SBERT [20] will give rise to identical data kernels under orthogonal transformation of their embedding spaces. Vertex-level inference methods designed for observed graphs assumed to be an RDPG will thus be appropriate for datum-level analysis in the embedding spaces of the foundation models.
2.3 Jointly Embedding Data Kernels
Multi-graph analysis under the RDPG avoids the estimation of the unknown orthogonal matrix $W$ by utilizing joint embedding techniques such as the omnibus embedding [18]. In particular, let $A^{(1)}$ and $A^{(2)}$ be the symmetrized data kernels of a fixed input dataset $\mathcal{D}$ embedded by foundation models $f_1$ and $f_2$, respectively, and let $M$ be the omnibus matrix defined as follows:

$$M = \begin{bmatrix} A^{(1)} & \tfrac{1}{2}\left(A^{(1)} + A^{(2)}\right) \\ \tfrac{1}{2}\left(A^{(1)} + A^{(2)}\right) & A^{(2)} \end{bmatrix}.$$
We define $\hat{X}^{(1)}$ as the first $n$ rows of the adjacency spectral embedding of $M$ and $\hat{X}^{(2)}$ as the remaining $n$ rows. When $A^{(1)}$ and $A^{(2)}$ are instances of the same RDPG model, both $\hat{X}^{(1)}$ and $\hat{X}^{(2)}$ are consistent estimates for the true-but-unknown underlying latent positions [13]. As a result, if two data kernels are drawn from the same underlying RDPG, the estimates of the latent positions after the omnibus embedding are such that:
$$\left\| \hat{X}^{(1)}_{i} - \hat{X}^{(2)}_{i} \right\|_{2} \to 0 \quad \text{for each } i \text{ as } n \to \infty. \qquad (2)$$
This convergence property is the key to our ability to compare model representation spaces, which we develop as a hypothesis test in the next section. Intuitively, we can think of the omnibus embedding as a theoretically justified way to align different foundation models’ embedding spaces for subsequent comparison.
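A sketch of the two-graph omnibus embedding, reusing the `ase` helper above; per-datum distances are then row-wise Euclidean norms of the difference:

```python
import numpy as np

def omnibus_embed(A1, A2, d):
    """Jointly embed two symmetrized data kernels via the omnibus matrix M,
    returning aligned latent position estimates (Xhat1, Xhat2)."""
    Abar = (A1 + A2) / 2.0
    M = np.block([[A1, Abar],
                  [Abar, A2]])       # the omnibus matrix
    Z = ase(M, d)                    # adjacency spectral embedding of M
    n = A1.shape[0]
    return Z[:n], Z[n:]              # first n rows and remaining n rows

# Per-datum distances used in Section 3:
# X1, X2 = omnibus_embed(A1, A2, d)
# dists = np.linalg.norm(X1 - X2, axis=1)
```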
3 Datum Level Hypothesis Testing
3.1 A Bootstrap Hypothesis Test

Under the null hypothesis that the two data kernels are realizations of the same underlying RDPG, the limit theory outlined in [13] and briefly introduced in Eq. (2) suggests a statistical hypothesis testing procedure for the embeddings induced by foundation models that is sensitive to changes in representations on a per-datum basis. In particular, if the two data kernels represent document $i$ similarly then the distance $\|\hat{X}^{(1)}_{i} - \hat{X}^{(2)}_{i}\|_{2}$ should be “small” for large enough $n$. Conversely, if the distance is “too large” we may conclude that the two foundation models represent document $i$ differently.
We formalize this statement via the following hypothesis test for document $i$:

$$H_0: X^{(1)}_{i} = X^{(2)}_{i} \quad \text{versus} \quad H_A: X^{(1)}_{i} \neq X^{(2)}_{i}. \qquad (3)$$
The hypothesis test requires defining a distance such that we can control the Type 1 error and still reject the null when appropriate. To this end we propose estimating the null distribution of the distance via an RDPG bootstrap. Algorithm 1 describes the bootstrap procedure for generating the null distribution of the distance $\|\hat{X}^{(1)}_{i} - \hat{X}^{(2)}_{i}\|_{2}$ for each document in $\mathcal{D}$. From the null distribution for document $i$ and the observed distance, we obtain a $p$-value by subtracting the percentile at which the observed distance falls in the null distribution from 1, and we reject for $p$-values smaller than a pre-specified threshold.
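Below is a sketch of Algorithm 1 built from the helpers above; the number of replicates `n_boot` and all function names are our choices, not prescribed by the text:

```python
import numpy as np

def bootstrap_pvalues(A1, A2, d, n_boot=200, rng=None):
    """Per-datum p-values for H0: the two data kernels share one RDPG.

    The null distribution of ||Xhat1_i - Xhat2_i|| is estimated by repeatedly
    sampling pairs of graphs from the RDPG fit to A1 and omnibus-embedding
    each pair.
    """
    rng = np.random.default_rng(rng)
    n = A1.shape[0]
    # Observed per-datum distances after joint alignment.
    X1, X2 = omnibus_embed(A1, A2, d)
    observed = np.linalg.norm(X1 - X2, axis=1)
    # Null distances from the RDPG bootstrap.
    Xhat = ase(A1, d)
    null = np.empty((n_boot, n))
    for b in range(n_boot):
        B1, B2 = sample_rdpg(Xhat, rng), sample_rdpg(Xhat, rng)
        Y1, Y2 = omnibus_embed(B1, B2, d)
        null[b] = np.linalg.norm(Y1 - Y2, axis=1)
    # One minus the percentile of the observed distance in the null.
    return (null >= observed).mean(axis=0)
```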
Figure 1 visualizes several steps of the bootstrap when applied to the mean-pooled word embeddings of a pretrained BERT model. In particular, we visualize the bootstrap of a data kernel that BERT induces on English Wikipedia articles. The left panel shows the UMAP [16] projections of $\hat{X}$, the latent positions estimated from BERT’s data kernel. The center-left panel shows the UMAP projections of the latent position estimates from one of the bootstrap data kernels. Note that the left and center-left visualizations exhibit landmark-level similarities (e.g., the long peninsula), but are not aligned. The center-right panel shows the UMAP projection of one of the omnibus latent position estimates. Note that the center-right panel has aligned the previously unaligned landmarks. This alignment allows us to build a null distribution which captures how far latent position estimates from bootstrapped data kernels typically are from $\hat{X}$. The percentiles of selected null distributions are visualized using concentric circles in the far right panel.

3.2 Training Data Ablation Study
Under the emerging data-centric AI paradigm [28], a model’s behavior is largely determined by the data it is trained on. If this is the case, we would expect interventions into a model’s training data to affect its embedding space, and thus, its data kernel.
To test the effect of a training data intervention on the data kernel, we trained two randomly initialized BERT models – a “baseline” model and a “plant-ablated” model – on two different versions of the DBPedia14 corpus [9]. The DBPedia14 training set consists of 40,000 examples of each of 14 different classes of documents such as Plants, Animals, Educational Institutions, etc. The baseline model is trained using masked language modeling (MLM) on all classes in the DBPedia14 train set. The plant-ablated model is trained using MLM on all documents in the DBPedia14 training set except those belonging to the Plant class. We downsample the total size of the baseline model’s train set uniformly across classes to ensure both models see the same number of tokens during training. Both the plant-ablated model and the baseline model are trained for 3 epochs using the Adam optimizer [11] with a fixed learning rate and achieve comparable terminal losses. The data kernels we refer to below are defined on a random 10,000-article sample from the DBPedia14 evaluation set with a fixed $k$.
Figure 2 shows the result of applying the datum level hypothesis test to the baseline and plant-ablated models. The top left panel of Figure 2 shows the UMAP visualizations of the individual data kernels induced by each of the two models. Note that, while there are some structural similarities between the visualizations of the kernels, they are not directly comparable. The bottom left panel of Figure 2 shows the UMAP visualization of the aligned embeddings from both models. The top center panel of Figure 2 highlights the regions of the aligned visualization that correspond to plant documents. The bottom center panel of Figure 2 colors the aligned baseline embeddings by one minus the datum-level $p$-value from Algorithm 1. Larger values (represented here by hotter colors) indicate a higher probability that the baseline and plant-ablated models’ representations differ on a given document. Note that the two peninsulas consisting predominantly of plant articles appear “hotter” than other regions.
The top right panel of Figure 2 compares the $p$-values under the null distribution calculated using Algorithm 1 to the distribution of a uniform random variable. A hypothesis test is valid (i.e., has Type I error less than a pre-specified threshold) for all pre-specified thresholds if and only if the null distribution of $p$-values is uniform. In our case, the hypothesis test is valid. The bottom right panel compares the cumulative distribution of $p$-values for different classes of DBPedia14 documents. The $p$-values corresponding to plant documents have a left-shifted distribution when compared to the $p$-values corresponding to the other classes. Additionally, the $p$-value distributions corresponding to animals and natural places, classes which are semantically related to plants, are also shifted left relative to the other classes. This indicates that the baseline BERT used information contained in the articles about plants to inform its representations of articles about animals and natural places.
This ablation study highlights the ability of our datum-level test to identify systematic differences between models without a predefined metric. Further, this procedure can be used to surface exactly the set of documents whose representations were affected by a data or model intervention. These properties are useful in situations where an intervention systematically affects a class of data (e.g., data corresponding to a particular gender, race, or idea) that is neither explicitly modeled in the existing data ontology nor surfaced via a predefined metric.
4 Multi-Model Comparison
4.1 Inducing a Model Manifold
The framework we have developed thus far enables datum level pairwise model comparison. However, given the sheer size and variety of the model design space, it is desirable to be able to compare multiple models at once. We can easily extend our pairwise comparison framework to allow for this multi-model comparison.
In Section 3, we defined a notion of distance between datum representations based on the Euclidean distance between their aligned latent position estimates. We can extend this notion of distance between datums to a notion of distance between models by considering the spectral norm of the difference between all of the aligned latent position estimates: $\|\hat{X}^{(1)} - \hat{X}^{(2)}\|$. Under the assumption that $A^{(1)}$ and $A^{(2)}$ are realizations from the same underlying RDPG model, we have

$$\left\| \hat{X}^{(1)} - \hat{X}^{(2)} \right\| \to 0 \quad \text{as } n \to \infty, \qquad (4)$$

where $\hat{X}^{(1)}$ and $\hat{X}^{(2)}$ are the corresponding aligned embeddings [13].
Given foundation models $f_1, \dots, f_m$ and a document corpus $\mathcal{D}$, we can calculate the spectral norm of the differences between aligned latent position estimates of the data kernels for each pair of models, with the understanding that if $\|\hat{X}^{(i)} - \hat{X}^{(j)}\| < \|\hat{X}^{(i)} - \hat{X}^{(l)}\|$ then $f_j$ is in some sense “closer” to $f_i$ than $f_l$ is (with respect to $\mathcal{D}$). Further, we can infer the relative positions of these models on a model-manifold from their pairwise distances using classical multi-dimensional scaling.
Classical multi-dimensional scaling [23] recovers the unknown relative positions of a collection of objects given a pairwise Euclidean distance matrix $D$. In particular, for objects $y_1, \dots, y_m$ where $D$ is a pairwise distance matrix on the $y_i$’s, classical multi-dimensional scaling will recover the relative positions of each $y_i$ up to an orthogonal transformation. Recovery of the relative positions of objects with multi-dimensional scaling in more general contexts is possible when the objects and the dissimilarity used to construct $D$ are Euclidean realizable [5].
In our case, we assume that there is a true-but-unknown vector representation $\psi_i$ of the foundation model $f_i$ in model space with respect to $\mathcal{D}$, and that $\hat{X}^{(i)}$ is a matrix-valued representation of $f_i$. If we consider the space of latent position matrices as a parameterization of the space of RDPGs, the dissimilarity matrix $D$ with entries $D_{ij} = \|\hat{X}^{(i)} - \hat{X}^{(j)}\|$ is Euclidean realizable given some regularity conditions [2]. The multi-dimensional scaling of $D$ yields vectors $\hat{\psi}_1, \dots, \hat{\psi}_m$ that are Euclidean approximations of the foundation models’ relative positions in model space with respect to $\mathcal{D}$. We refer to the space spanned by the $\hat{\psi}_i$ as the “model manifold” with respect to $\mathcal{D}$. This process – with inputs of a collection of foundation models and a document corpus – is described in Algorithm 2.
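A sketch of Algorithm 2 as described above, pairing pairwise omnibus distances with classical multi-dimensional scaling; `omnibus_embed` is the helper from Section 2.3 and the dimension arguments are our own parameter names:

```python
import numpy as np

def model_manifold(kernels, d, d_manifold=2):
    """Embed a collection of symmetrized data kernels (one per model) as
    points psi_hat on a low-dimensional model manifold."""
    m = len(kernels)
    D = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            Xi, Xj = omnibus_embed(kernels[i], kernels[j], d)
            D[i, j] = D[j, i] = np.linalg.norm(Xi - Xj, ord=2)  # spectral norm
    # Classical multi-dimensional scaling of the distance matrix D.
    J = np.eye(m) - np.ones((m, m)) / m      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:d_manifold]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```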
4.2 The Manifold of Partially Ablated Models

To investigate the properties of the model manifold, we build on the experiment described in Section 3.2. Instead of considering just a plant-ablated model and a baseline model, we now consider models that are trained on various ablated subsets of DBPedia14. We find that the relative positions of models on the manifold induced by our procedure are approximately parameterized by the relative concentrations of DBPedia14 classes in the model training sets. Further, we find that the distance between models on the manifold correlates strongly with the difference in the models’ performance on several downstream metrics. This indicates that proximity on the model manifold may be used as an effective proxy measure for similarity in model performance.
Again, we trained all models on ablated subsets of DBPedia14 using the masked language modeling objective for 3 epochs using the Adam optimizer and a fixed learning rate. This time, each model had the training data corresponding to some combination of the Plant, Artist, and Educational Institution classes ablated. Recall that each class in the DBPedia14 dataset has 40,000 total documents in it. To determine the particular ablation profile for a model, we first selected an element $\alpha = (\alpha_1, \alpha_2, \alpha_3)$ of the two-dimensional simplex (i.e., an element of $\mathbb{R}^{3}$ whose elements are all non-negative and sum to 1). The model then had access to $40{,}000\,\alpha_1$ documents from the Educational Institution class, $40{,}000\,\alpha_2$ documents from the Artist class, and $40{,}000\,\alpha_3$ documents from the Plant class. The documents from the remaining classes in the training data were then uniformly downsampled to ensure that all models had access to the same number of training tokens. Hence, each model’s training set is naturally parameterized by its corresponding element $\alpha$ of the 2-d simplex.
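For concreteness, a small sketch of this parameterization, assuming the $40{,}000\,\alpha$ per-class counts reconstructed above (the helper name and class labels are illustrative):

```python
import numpy as np

def ablation_profile(alpha, docs_per_class=40000):
    """Training-document counts for the three targeted classes, given a
    point alpha = (a_edu, a_artist, a_plant) on the two-dimensional simplex."""
    alpha = np.asarray(alpha, dtype=float)
    assert np.all(alpha >= 0) and np.isclose(alpha.sum(), 1.0)
    classes = ("EducationalInstitution", "Artist", "Plant")
    return {c: int(docs_per_class * a) for c, a in zip(classes, alpha)}

# e.g., the landmark model of Section 4.2:
# ablation_profile((1.0, 0.0, 0.0))
# -> {'EducationalInstitution': 40000, 'Artist': 0, 'Plant': 0}
```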
The left panel in Figure 3 shows the theoretical manifold (the two-dimensional simplex which defines the class concentrations in each model’s training set) and the empirical manifold found when applying Algorithm 2 and aligning the empirical manifold with the theoretical manifold via the Procrustes algorithm. Here, the models parameterized by points on the simplex are used to estimate and visualize the model manifold. The empirical manifold is visually similar to the theoretical manifold, and we interpret this as evidence that the models, through their data kernels, exist on a low dimensional structure that (in this case) is parameterized by the concentrations of classes in each model’s training set. More succinctly, the empirical model manifold approximates the two-dimensional simplex in this experiment.
To understand how different downstream metrics correlate with distance on the recovered model manifold, we compute a metric relevant to document classification and a metric relevant to language modeling, and regress them with respect to the manifold distance to a landmark model. For both tasks we select the model parameterized by $\alpha = (1, 0, 0)$ (i.e., the model trained on all of the Educational Institution articles, but none of the Artist or Plant articles) as the landmark model.
For the classification task we consider a uniform subsample (500 documents per class) of the DBPedia14 evaluation set. We train a linear support vector machine (SVM) on the mean-pooled embeddings from each model using a random 80% of the selected evaluation data, and then infer the class membership of the remaining 20%. We record whether each model’s prediction for a document agrees with the prediction from the classifier trained on embeddings from the landmark model, and compute the correlation between the predictions of each model’s SVM and the landmark model’s SVM. In the right panel of Figure 3, we show that manifold distance between models is a strong indicator of their prediction correlation.
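A sketch of the classification comparison, assuming scikit-learn; we compute the raw agreement rate between predictions, a simple stand-in for the prediction correlation reported in Figure 3:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def prediction_agreement(emb_model, emb_landmark, labels, seed=0):
    """Train one linear SVM per embedding set ((n, p) arrays) on a shared
    80/20 split of the labels array and return the fraction of held-out
    documents on which the two classifiers predict the same class."""
    idx = np.arange(len(labels))
    train, test = train_test_split(idx, test_size=0.2, random_state=seed)
    preds = []
    for emb in (emb_model, emb_landmark):
        clf = LinearSVC().fit(emb[train], labels[train])
        preds.append(clf.predict(emb[test]))
    return float(np.mean(preds[0] == preds[1]))
```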
For the language modeling task, we randomly sampled documents in the Educational Institution class from the DBPedia14 evaluation set. We compute the average pseudo-perplexity [21] of each document in our evaluation set, and report the average pseudo-perplexity across all evaluation documents for each model in the center panel of Figure 3. We find that the concentration of Educational Institution documents in the training set correlates negatively with both average pseudo-perplexity and manifold distance to the landmark model.
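A sketch of per-document pseudo-perplexity [21], assuming the Hugging Face `transformers` API; each token is masked in turn, so a document of length $T$ requires $T$ forward passes:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

def pseudo_perplexity(text, model, tokenizer):
    """Mask each token in turn and exponentiate the average negative
    log-likelihood the masked language model assigns to the true token."""
    ids = tokenizer(text, return_tensors="pt", truncation=True)["input_ids"][0]
    nlls = []
    for pos in range(1, ids.shape[0] - 1):        # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, pos]
        nlls.append(-torch.log_softmax(logits, dim=-1)[ids[pos]].item())
    return float(torch.exp(torch.tensor(sum(nlls) / len(nlls))))

# e.g., model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
#       tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```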
The linear goodness-of-fit ($R^2$) for both the relationship between manifold distance and prediction correlation and the relationship between manifold distance and pseudo-perplexity is above 0.8, and the $p$-values corresponding to Kendall’s tau rank correlation test are less than 0.01. We interpret these results as evidence that manifold distance and metrics related to important downstream tasks – classification and language modeling – are strongly correlated.
5 Limitations and Broader Impact
5.1 Limitations
While we are excited about the potential of data kernel based foundation model comparison, there are several important limitations of our current work.
The most salient limitation is the scope of our experiments – we have opted to evaluate the behavior of our comparison framework in several highly controlled situations. The benefit of this choice is that we can ensure that the framework behaves appropriately under known conditions. The limitation of this choice is that it provides limited evidence that the framework will generalize to less controlled, empirical situations. We aim to address this question of generalization in subsequent work on the empirical model manifold.
An additional limitation of our work is the computational burden required for data kernel based comparison. Each iteration of our bootstrap procedure requires the computation of a large singular value decomposition as part of the adjacency spectral embedding. This presents a significant challenge to practitioners who wish to access our comparison methodology, but have a limited compute budget. We encourage future work on more efficient bootstrap methods that can help to alleviate this burden, and aim to address it as part of our future research agenda.
5.2 Broader Impact
While we view the development of higher fidelity model evaluation procedures as generally positive from a broader impact point of view, there are a few important caveats to note. As is the case for any evaluation, there is a risk that our framework fails to surface a systematic bias present in a model. This risk is particularly salient in our work, as failures of our framework may lead practitioners to have a false sense of security regarding the fairness of their models. We emphasize that model evaluation is a nuanced and complex question, and that the best approaches combine several model evaluation strategies to paint a maximally holistic picture of model behavior.
Moreover, the computationally intensive nature of the work has a direct environmental impact. In particular, we estimate that this work produced at most 3 metric tons of carbon dioxide emissions, which is roughly the equivalent of burning 350 gallons of gasoline. Nomic is committed to tracking and mitigating the environmental impact of its research, and has offset 3 metric tons of emissions for this project.
6 Future Work
In this work, we introduce a methodology to compare foundation models by directly comparing their embedding space geometry. We demonstrate how to apply this methodology to surface the population of data points whose representations differ between models. We then extend this framework to multi-model comparison, and show that distance on the resulting model manifold correlates with common downstream performance metrics. We envision several extensions of this work, some of which are outlined below.
6.1 An Empirical Model Manifold
Given the large and growing variety of foundation model variants, we are interested in using our procedure to build a manifold of empirical models. Technologies such as the Hugging Face Hub [26] enable us to build model manifolds on large collections of models and datasets. Having access to a large empirical model manifold will allow researchers to focus their analysis on models that exist in key locations on the manifold (e.g. centers of mass, branch points, etc.). Additionally, having a notion of distance between empirical models will allow researchers to generalize their findings regarding particular models to families of models that are proximal on the manifold.
6.2 Model Selection
The model manifold comes naturally equipped with a distance that captures information pertinent to relative performances of the models. We can leverage this fact for a variety of model selection tasks. In particular, the model manifold allows model selection to be framed as a search task over the model manifold itself. For example, users can select the best model for a particular task with far fewer evaluations than an exhaustive search by employing a locally sensitive optimizer on the model manifold. Additionally, the task of selecting the best model for a task subject to a constraint (e.g., a compute constraint) given a known good model that does not fulfill that constraint is akin to projecting the known good model onto a constrained subset of the model manifold.
6.3 Privacy Aware User Modeling
The model manifold can be employed to study populations of users communicating on a platform without requiring direct access to the content of their communications. One such procedure would involve fine tuning an edge language model to an individual user’s communications, and relaying only the embeddings of the fine tuned model on a public corpus of interest back to the central server. A model manifold where each model corresponds to a user could then be constructed and utilized for subsequent inference.
6.4 Efficient RDPG Bootstrap
One of the fundamental challenges of scaling up our work is the computationally intensive nature of our bootstrap procedure. In particular, the singular value decomposition required for each iteration of our bootstrap presents a significant computational bottleneck. There is a rich and active literature on efficient singular value decomposition algorithms, and we hope to evaluate the viability of ideas from this literature in future versions of our bootstrap.
References
- Athreya et al. [2017] Avanti Athreya, Donniell E. Fishkind, Keith Levin, Vince Lyzinski, Youngser Park, Yichen Qin, Daniel L. Sussman, Minh Tang, Joshua T. Vogelstein, and Carey E. Priebe. Statistical inference on random dot product graphs: a survey. 2017. doi: 10.48550/ARXIV.1709.05454. URL https://arxiv.org/abs/1709.05454.
- Athreya et al. [2022] Avanti Athreya, Zachary Lubberts, Youngser Park, and Carey E Priebe. Discovering underlying dynamics in time series of networks. arXiv preprint arXiv:2205.06877, 2022.
- Bender et al. [2021] Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp. 610–623, 2021.
- Bommasani et al. [2021] Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie S. Chen, Kathleen Creel, Jared Quincy Davis, Dorottya Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna, Rohith Kuditipudi, and et al. On the opportunities and risks of foundation models. CoRR, abs/2108.07258, 2021. URL https://arxiv.org/abs/2108.07258.
- Borg & Groenen [2005] Ingwer Borg and Patrick JF Groenen. Modern multidimensional scaling: Theory and applications. Springer Science & Business Media, 2005.
- Brown et al. [2020] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. CoRR, abs/2005.14165, 2020. URL https://arxiv.org/abs/2005.14165.
- Chen et al. [2022] Guodong Chen, Hayden S Helm, Kate Lytvynets, Weiwei Yang, and Carey E Priebe. Mental state classification using multi-graph features. Frontiers in Human Neuroscience, 16:930291, 2022.
- Devlin et al. [2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.
- Dojchinovski et al. [2018] Milan Dojchinovski, Julio Hernandez, Markus Ackermann, Amit Kirschenbaum, and Sebastian Hellmann. Dbpedia nif: Open, large-scale and multilingual knowledge extraction corpus. arXiv preprint arXiv:1812.10315, 2018.
- Hashimoto et al. [2015] Tatsunori Hashimoto, Yi Sun, and Tommi Jaakkola. Metric recovery from directed unweighted graphs. In Guy Lebanon and S. V. N. Vishwanathan (eds.), Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, volume 38 of Proceedings of Machine Learning Research, pp. 342–350, San Diego, California, USA, 09–12 May 2015. PMLR. URL https://proceedings.mlr.press/v38/hashimoto15.html.
- Kingma & Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Larson et al. [2021] J Larson, T Zuzul, EC Pahnke, NP Shah, P Bourke, N Caurvina, F Amini, Y Park, J Vogelstein, J Weston, et al. Dynamic silos: Modularity in intra-organizational communication networks during the covid-19 pandemic. 2021.
- Levin et al. [2017] Keith Levin, Avanti Athreya, Minh Tang, Vince Lyzinski, Youngser Park, and Carey E. Priebe. A central limit theorem for an omnibus embedding of multiple random graphs and implications for multiscale network inference, 2017. URL https://arxiv.org/abs/1705.09355.
- Liang et al. [2022] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. Holistic evaluation of language models, 2022. URL https://arxiv.org/abs/2211.09110.
- Luxburg & Alamgir [2013] Ulrike von Luxburg and Morteza Alamgir. Density estimation from unweighted k-nearest neighbor graphs: A roadmap. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 1, NIPS’13, pp. 225–233, Red Hook, NY, USA, 2013. Curran Associates Inc.
- McInnes et al. [2018] Leland McInnes, John Healy, and James Melville. Umap: Uniform manifold approximation and projection for dimension reduction, 2018. URL https://arxiv.org/abs/1802.03426.
- Muennighoff et al. [2022] Niklas Muennighoff, Nouamane Tazi, Loïc Magne, and Nils Reimers. Mteb: Massive text embedding benchmark, 2022. URL https://arxiv.org/abs/2210.07316.
- Priebe et al. [2013] Carey E Priebe, David J Marchette, Zhiliang Ma, and Sancar Adali. Manifold matching: Joint optimization of fidelity and commensurability. Brazilian Journal of Probability and Statistics, 27(3):377–400, 2013.
- Radford et al. [2021] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. CoRR, abs/2103.00020, 2021. URL https://arxiv.org/abs/2103.00020.
- Reimers & Gurevych [2019] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084, 2019.
- Salazar et al. [2020] Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. Masked language model scoring. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.acl-main.240. URL https://doi.org/10.18653%2Fv1%2F2020.acl-main.240.
- Tenenbaum et al. [2000] Joshua B Tenenbaum, Vin de Silva, and John C Langford. Global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
- Torgerson [1952] Warren S Torgerson. Multidimensional scaling: I. theory and method. Psychometrika, 17(4):401–419, 1952.
- Wei et al. [2022] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed H. Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. CoRR, abs/2201.11903, 2022. URL https://arxiv.org/abs/2201.11903.
- Winding et al. [2023] Michael Winding, Benjamin D Pedigo, Christopher L Barnes, Heather G Patsolic, Youngser Park, Tom Kazimiers, Akira Fushiki, Ingrid V Andrade, Avinash Khandelwal, Javier Valdes-Aleman, et al. The connectome of an insect brain. Science, 379(6636):eadd9330, 2023.
- Wolf et al. [2020] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Huggingface’s transformers: State-of-the-art natural language processing, 2020.
- Young & Scheinerman [2007] Stephen J. Young and Edward R. Scheinerman. Random dot product graph models for social networks. In Workshop on Algorithms and Models for the Web-Graph, 2007.
- Zha et al. [2023] Daochen Zha, Zaid Pervaiz Bhat, Kwei-Herng Lai, Fan Yang, Zhimeng Jiang, Shaochen Zhong, and Xia Hu. Data-centric artificial intelligence: A survey. arXiv preprint arXiv:2303.10158, 2023.