
A Dataset Similarity Evaluation Framework for Wireless Communications and Sensing

João Morais1, Sadjad Alikhani1, Akshay Malhotra2, Shahab Hamidi-Rad2, Ahmed Alkhateeb1
1{joao, alikhani, alkhateeb}@asu.edu , 2{akshay.malhotra, shahab.hamidi-rad}@interdigital.com
Abstract

This paper introduces a task-specific, model-agnostic framework for evaluating dataset similarity, providing a means to assess and compare dataset realism and quality. Such a framework is crucial for augmenting real-world data, improving benchmarking, and making informed retraining decisions when adapting to new deployment settings, such as different sites or frequency bands. The proposed framework is employed to design metrics based on UMAP topology-preserving dimensionality reduction, leveraging Wasserstein and Euclidean distances on latent-space KNN clusters. On an unsupervised channel state information (CSI) compression task with autoencoder architectures, the designed metrics show correlations above 0.85 between dataset distances and model performance, outperforming traditional methods.

I Introduction

Machine learning (ML) applications in wireless communications have seen significant growth, driven by the need for enhanced spectral efficiency and system optimization [1, 2, 3, 4, 5]. While considerable attention has been given to developing advanced learning models, there has been less focus on the data required for training and generalization. This imbalance limits the transition of ML models from research to real-world deployment in wireless systems. To address this, several key challenges must be resolved:

  • How to choose adequate datasets for training models?

  • How to predict model performance in real deployments?

  • How to measure and ensure model generalization across different datasets?

This work delves into the development and application of dataset similarity metrics in the context of wireless communications. These metrics quantify the degree of similarity between datasets, allowing for the estimation of a model’s generalization performance on unseen data without requiring explicit retraining. Beyond generalization prediction, dataset similarity metrics are also useful for detecting distributional shifts, improving transfer learning by selecting the most relevant datasets, and augmenting real-world data with synthetic datasets that are well-matched to the task. In the domain of wireless communications, these metrics offer valuable insights into how datasets relate to one another, guiding efficient model selection and enhancing performance, particularly in scenarios where labeled data is limited or unavailable.

In wireless communications, obtaining real-world data is challenging, making simulated datasets essential for ML development. Simulations are becoming increasingly realistic, and some studies can already capture real-world environments by calibrating material properties against channel measurements [6]. However, simulated datasets of wireless channels are often complex to interpret and manage. This highlights the need for robust data engineering practices, guiding the entire data lifecycle from acquisition to transformation and management. Despite advances in learning techniques, such as Large Wireless Models [7], the lack of dataset management tools still slows the development of large-scale ML models in wireless systems.

Real-world datasets, such as those from OTA measurements by NYU [8] and testbeds like PAWR [9], offer valuable insights but are limited in scale and diversity for machine learning model training. To address this, simulated data is widely used, falling into two main categories: stochastic and deterministic. Stochastic models, such as 3GPP’s 38.901 specification [10] (e.g., CDL, TDL, UMa, UMi), simulate probabilistic channel conditions and are accessible via tools like Matlab’s 5G Toolbox. Other models include NYUSIM [11] and tools by Fraunhofer [12]. Deterministic approaches, such as ray tracing tools like Wireless InSite [13] and SionnaRT [14], provide high-fidelity, site-specific simulations. Datasets like DeepMIMO [15] combine real and synthetic data to bridge this gap. However, the wireless community still lacks robust methods to assess and manage datasets for large-scale generative models, which are crucial for improving model generalization and real-world deployment.

To overcome these limitations, leveraging existing datasets that are distributionally similar to target environments can reduce the need for creating new data from scratch. This highlights the importance of measuring dataset similarity to ensure effective augmentation and improve model performance.


Figure 1: Applications enabled by dataset distance computation.


Figure 2: Framework for assessing the suitability of a distance function to a task, a model, and a set of datasets, i.e., how well the function outputs distances that correlate with performance on the specific task.

Contribution: This work introduces a framework for assessing dataset similarity, allowing researchers to evaluate datasets before training and determine the need for retraining, among several other applications, as shown in Fig. 1. The main contributions are:

  • We develop a task-driven, model-agnostic framework that evaluates similarity between datasets, without the need for training additional models.

  • We design two distance metrics using UMAP topology-preserving dimensionality reduction: one computes Euclidean distances on KNN clusters, and the other calculates Wasserstein distances across latent-space dimensions.

  • We demonstrate that our framework achieves a strong correlation between dataset distances and model performances, offering the potential to bypass redundant computations on new datasets and allowing more effective and efficient model training, among several other applications.

The proposed framework is open-source. Documentation, artifacts, and reproducibility resources can be found at: https://wi-lab.net/research/dataset_similarity

II Framework for Evaluating Dataset Similarities and Model Performance Correlation

We propose a framework, depicted in Fig. 2, that correlates dataset distances with model performance metrics, enabling the selection of distance functions that predict how well models trained on one dataset perform on others. This framework can detect dataset shifts, guide data augmentation, and rank the usefulness of datasets for specific tasks.

Framework Overview: The framework consists of two steps: distance computation and performance evaluation. For distance computation, a metric $d$ is calculated between pairs of datasets, resulting in a distance matrix $\mathbf{D}$. Performance evaluation involves training a model on one dataset and testing it on others, generating a performance matrix $\mathbf{P}$. Correlating $\mathbf{D}$ and $\mathbf{P}$ identifies how well distances predict performance drops between datasets.

Performance Computation: Models are trained on each dataset $D_i$, and their performance $P_{ii}$ is evaluated on the same dataset as a baseline. The trained model is then tested on other datasets $D_j$ to record performance $P_{ij}$. If $D_i$ and $D_j$ are similar with respect to the task, $P_{ij}$ should be close to $P_{ii}$; significant performance drops $\Delta P_{ij}$ indicate dataset dissimilarity.

Distance Computation: Various distance metrics, including geometric, statistical, subspace, and manifold distances, as well as those based on dimensionality reduction (e.g., UMAP), can be applied within our framework. High-dimensional datasets present challenges due to the curse of dimensionality, but techniques like PCA and UMAP can project data into lower-dimensional spaces, making distances more meaningful and computationally efficient. Our framework is flexible, allowing preprocessing steps like dimensionality reduction or clustering to be adapted based on the task and dataset characteristics.

Correlation Between Distance and Performance: We use the Pearson correlation coefficient to quantify the relationship between the dataset distance matrix $\mathbf{D}$ and the performance drop matrix $\Delta\mathbf{P}$. This helps us evaluate how well distances predict model performance degradation across datasets. The goal is to find distance metrics that align closely with model behavior, enabling more effective data selection and transfer learning strategies. By applying this framework to unsupervised tasks, we aim to identify effective distance metrics that generalize across domains.
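To make the two steps concrete, the following minimal sketch builds the matrices $\mathbf{D}$ and $\mathbf{P}$ and correlates distances with performance drops. It assumes NumPy arrays and hypothetical task-specific helpers (`train_model`, `evaluate`, `distance_fn`) that are not part of any library:

```python
import numpy as np
from scipy.stats import pearsonr

def framework_correlation(datasets, train_model, evaluate, distance_fn):
    """Correlate pairwise dataset distances with cross-dataset
    performance drops (task-specific helper functions assumed)."""
    n = len(datasets)
    D = np.zeros((n, n))  # distance matrix
    P = np.zeros((n, n))  # performance matrix (e.g., NMSE in dB)

    for i in range(n):
        model = train_model(datasets[i])            # train on D_i
        for j in range(n):
            P[i, j] = evaluate(model, datasets[j])  # test on D_j
            D[i, j] = distance_fn(datasets[i], datasets[j])

    # Performance drop of each model relative to its in-distribution
    # baseline P_ii, compared against the off-diagonal distances
    delta_P = P - np.diag(P)[:, None]
    mask = ~np.eye(n, dtype=bool)
    rho, _ = pearsonr(D[mask], delta_P[mask])
    return rho, D, P
```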

III Dataset Similarity: A Novel Approach

In high-dimensional datasets, computing distances that accurately reflect the relationships between datasets and correlate with model performance is a challenging task, largely due to the presence of noise, irrelevant features, and the complexity of the data. Traditional distance metrics, when applied in their native high-dimensional space, frequently fail to capture the crucial underlying structures. To address this, it is essential to map the datasets into a transformed space that emphasizes the most relevant features for the task, preserving local proximity (datasets close in terms of task-relevant features remain close) and global structure (broader relationships between datasets are maintained). This transformed space allows us to compute distances that are more meaningful, leading to higher correlations with model performance across datasets.

We can achieve this transformation using a graph-based approach, where the relationships between datasets are modeled based on their local neighborhoods and global connections. This method creates a manifold-like representation, reflecting both spatial proximity and structural properties, allowing distances to better represent model performance across datasets, ultimately improving transfer learning, domain adaptation, and model selection.

III-A Uniform Manifold Approximation and Projection (UMAP)

The key tool for this transformation is Uniform Manifold Approximation and Projection (UMAP) [16]. Let $D_i = \{\mathbf{x}_j^{(i)}\}_{j=1}^{M_i}$ be a dataset with $M_i$ datapoints, where $\mathbf{x}_j^{(i)} \in \mathbb{R}^N$. UMAP projects this dataset into a lower-dimensional space, represented as $\tilde{D}_i = \{\tilde{\mathbf{x}}_j^{(i)} = f_{\text{UMAP}}(\mathbf{x}_j^{(i)})\}_{j=1}^{M_i}$, where distances better capture local and global structures compared to the raw feature space. UMAP is a general non-linear dimensionality reduction technique. The algorithm relies on three key assumptions about the data: i) the data is uniformly distributed on a Riemannian manifold, ii) the Riemannian metric is locally constant (or can be approximated as such), and iii) the manifold is locally connected. Under these assumptions, the manifold can be modeled with a fuzzy topological structure, and UMAP finds a low-dimensional projection of the data that preserves this structure as closely as possible. The process consists of three steps.

  • Constructing a Fuzzy Simplicial Set: UMAP builds a weighted k-nearest neighbor graph in the high-dimensional space, capturing local neighborhood relationships based on connection probabilities between points.

  • Fuzzy Topological Representation: These probabilities are used to construct a topological space, reflecting both local and global structures of the dataset.

  • Low-Dimensional Embedding Optimization: UMAP finds a low-dimensional embedding that minimizes the cross-entropy between fuzzy representations in the high-dimensional and low-dimensional spaces.

Preserving both local and global relationships, UMAP provides a better alternative to PCA (which struggles with non-linear structures) and t-SNE [17] (which distorts global structures). This makes it ideal for computing distances that reflect how models trained on one dataset perform on another.
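As a concrete illustration, the following sketch fits a single UMAP model on the union of two datasets so that both are embedded in the same low-dimensional space. It assumes the open-source umap-learn package; the parameter values are illustrative, not the paper's exact settings:

```python
import numpy as np
import umap  # pip install umap-learn

def embed_datasets(X1, X2, n_components=2, n_neighbors=15, min_dist=0.1):
    """Embed two datasets into a shared UMAP space."""
    X = np.vstack([X1, X2])
    reducer = umap.UMAP(n_components=n_components,
                        n_neighbors=n_neighbors,
                        min_dist=min_dist,
                        metric="correlation")  # see UMAP Considerations below
    Z = reducer.fit_transform(X)               # joint low-dim embedding
    return Z[:len(X1)], Z[len(X1):]
```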

III-B Dataset Similarity Metrics in UMAP Spaces

Euclidean in UMAP Spaces: In the UMAP-encoded latent space, various forms of Euclidean distances are utilized to quantify dataset separation, providing computational simplicity and interpretability:

Pairwise Euclidean Distance: The distance between all pairs of points across two datasets $D_1$ and $D_2$, where $M_1$ and $M_2$ are the number of data points in the datasets, is given by

$$d_{\text{pairwise}} = \frac{1}{M_1 M_2} \sum_{j=1}^{M_1} \sum_{k=1}^{M_2} \left\| \tilde{\mathbf{x}}_j^{(1)} - \tilde{\mathbf{x}}_k^{(2)} \right\|_2.$$

Here, $\tilde{\mathbf{x}}_j^{(1)}$ and $\tilde{\mathbf{x}}_k^{(2)}$ represent the encoded points from datasets $D_1$ and $D_2$, respectively.

Cluster-Based Euclidean Distance: After clustering each dataset into $K_1$ and $K_2$ clusters, the centroid-based Euclidean distance is computed as

$$d_{\text{cluster}} = \frac{1}{K_1 K_2} \sum_{l=1}^{K_1} \sum_{m=1}^{K_2} \left\| \mathbf{c}_l^{(1)} - \mathbf{c}_m^{(2)} \right\|_2,$$

where $\mathbf{c}_l^{(1)}$ and $\mathbf{c}_m^{(2)}$ are the centroids of clusters $l$ and $m$ in datasets $D_1$ and $D_2$, respectively.

Average Euclidean Distance: This distance simplifies to the Euclidean distance between the mean vectors $\bar{\mathbf{x}}^{(1)}$ and $\bar{\mathbf{x}}^{(2)}$ of the two datasets:

$$d_{\text{average}} = \left\| \bar{\mathbf{x}}^{(1)} - \bar{\mathbf{x}}^{(2)} \right\|_2,$$

where $\bar{\mathbf{x}}^{(1)} = \frac{1}{M_1} \sum_{j=1}^{M_1} \tilde{\mathbf{x}}_j^{(1)}$ and similarly for $\bar{\mathbf{x}}^{(2)}$.

The advantage of Euclidean distances in UMAP spaces lies in their computational efficiency and ability to clearly reflect local dataset structure after dimensionality reduction, which is crucial for fast and intuitive dataset similarity evaluation.
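A minimal sketch of the three variants follows, operating on UMAP embeddings Z1 and Z2 of shape (M_i, d). It assumes scikit-learn's KMeans as the clustering step; that choice of clustering algorithm is illustrative, not prescribed by the framework:

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def pairwise_euclidean(Z1, Z2):
    # Mean of all M1 * M2 point-to-point distances
    return cdist(Z1, Z2).mean()

def cluster_euclidean(Z1, Z2, k1=8, k2=8):
    # Mean distance between the K-means centroids of each dataset
    c1 = KMeans(n_clusters=k1, n_init=10).fit(Z1).cluster_centers_
    c2 = KMeans(n_clusters=k2, n_init=10).fit(Z2).cluster_centers_
    return cdist(c1, c2).mean()

def average_euclidean(Z1, Z2):
    # Distance between the two dataset means
    return np.linalg.norm(Z1.mean(axis=0) - Z2.mean(axis=0))
```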

Wasserstein in UMAP Spaces: The Wasserstein distance [18, 19, 20], also known as the Earth Mover's distance, is a more powerful metric for measuring differences between two distributions. It calculates the minimal effort needed to transport the mass of one distribution to match the other across all dimensions of the latent space. For each dimension $n$, the one-dimensional Wasserstein distance between the cumulative distribution functions (CDFs) $F_n^{(1)}$ and $F_n^{(2)}$ of datasets $D_1$ and $D_2$ is

$$W_n = \int_0^1 \left| F_n^{(1)^{-1}}(t) - F_n^{(2)^{-1}}(t) \right| dt,$$

where $F_n^{(i)^{-1}}(t)$ is the inverse CDF (quantile function) for the $n$-th dimension of dataset $D_i$.

The overall Wasserstein distance across all $d$ dimensions of the latent space is then the average of these one-dimensional Wasserstein distances:

$$W = \frac{1}{d} \sum_{n=1}^{d} W_n.$$

Wasserstein distance is especially advantageous for capturing the global structure of datasets, providing superior sensitivity to both local and global distributional differences. This makes it highly effective in UMAP spaces, where preserving both local and global features is critical for accurate dataset similarity evaluation.
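A minimal sketch of this metric, using scipy.stats.wasserstein_distance for the one-dimensional transport problem in each latent dimension:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def umap_wasserstein(Z1, Z2):
    """Average 1-D Wasserstein distance over the d latent dimensions."""
    d = Z1.shape[1]
    return np.mean([wasserstein_distance(Z1[:, n], Z2[:, n])
                    for n in range(d)])
```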

UMAP Considerations: UMAP requires careful tuning of parameters, such as the number of neighbors and minimum distance in the latent space. A balance between local and global structures must be achieved for effective embedding. For wireless datasets, correlation distance has proven effective for UMAP, but Euclidean distance can be used when scalability is prioritized over performance. Leveraging UMAP for distance computation offers improved accuracy in assessing dataset similarities, providing a robust foundation for dataset distancing tasks.


Figure 3: Real (left) and rendered (right) top views of the ASU campus from the DeepMIMO dataset. The rendered view shows received power distribution using a standard DFT codebook at the base station, highlighting key effects like roof diffraction.

IV Dataset Similarity For CSI Compression Task

This section explores how various distance metrics correlate with model performance in the unsupervised CSI compression task, which is essential in wireless communications. CSI compression reduces high-dimensional channel matrices into low-dimensional representations to facilitate efficient feedback between user equipments and base stations. Since the same data serves as both input and output, this task is ideal for assessing correlations between dataset distances and model performance. We discuss the CSI compression task, the autoencoder model, and the dataset before presenting results of different distance metrics in both raw and latent spaces.

IV-A CSI Compression Task

CSI compression involves transforming a high-dimensional wireless channel matrix, $\mathbf{H} \in \mathbb{C}^{N_{BS} \times N_{sub}}$, into a compact, low-dimensional representation, where $N_{BS}$ is the number of basestation antennas and $N_{sub}$ the number of subcarriers. After applying a Fourier transform and truncating to 16 delay taps, the matrix is reduced to $32 \times 16$. The goal is to compress this matrix to $N_{enc} = 32$ dimensions, achieving a $64$-fold reduction. The compression performance is measured using the normalized mean squared error (NMSE) between the original matrix $\mathbf{H}$ and the reconstruction $\hat{\mathbf{H}}$:

$$\text{NMSE}_{\text{dB}}(\mathbf{H}, \hat{\mathbf{H}}) = 10 \log_{10} \frac{\|\mathbf{H} - \hat{\mathbf{H}}\|_F^2}{\|\mathbf{H}\|_F^2}.$$
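For concreteness, a minimal NumPy sketch of the delay-domain truncation and this NMSE metric follows; the truncation convention (keeping the first 16 taps, where most channel energy concentrates) is an assumption, not stated in the paper:

```python
import numpy as np

def to_delay_domain(H, n_taps=16):
    # IFFT over subcarriers, keeping the first n_taps delay taps
    # (assumed convention); a (32, N_sub) input becomes (32, 16)
    return np.fft.ifft(H, axis=-1)[..., :n_taps]

def nmse_db(H, H_hat):
    # NMSE in dB between original and reconstructed channel matrices
    err = np.linalg.norm(H - H_hat) ** 2  # squared Frobenius norm
    ref = np.linalg.norm(H) ** 2
    return 10 * np.log10(err / ref)
```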

IV-B Autoencoder Model


Figure 4: Architecture of the autoencoder model used for the unsupervised CSI compression task, inspired by CSINet+ [21].

The autoencoder (AE) used for CSI compression processes the input $32 \times 16$ matrix, split into real and imaginary parts, through convolutional layers that reduce it to a $32$-dimensional latent space. The decoder reconstructs the high-dimensional matrix from the latent space. The model is trained using MSE loss and is configured in two ways (a simplified architecture sketch follows the list):

  • AEs are trained separately on each area dataset. Each model achieves an NMSE below $-20$ dB when tested on its training area and a higher NMSE on untrained areas. These 20 models are used to assess the performance drops that should correlate with our distance metrics.

  • A single AE with five refinement nets (instead of three) is trained on data from all areas, resulting in an average NMSE of $-20$ dB across areas. This model's latent space is later used for computing dataset distances when analyzing the impact of dimensionality reduction methods.
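The following PyTorch sketch illustrates the overall encoder-decoder structure under these constraints. It is a simplified stand-in, not the exact CSINet+-style model [21] used in the experiments:

```python
import torch
import torch.nn as nn

class CsiAutoencoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        # Input: 2 x 32 x 16 (real/imag-split channel matrix)
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(8 * 32 * 16, latent_dim),  # compress to 32 dims
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 2 * 32 * 16),
            nn.Unflatten(1, (2, 32, 16)),
            nn.Conv2d(2, 2, kernel_size=3, padding=1),  # refinement stage
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training uses the MSE reconstruction loss, e.g.:
# loss = nn.MSELoss()(model(h), h)
```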

IV-C Dataset Description

The dataset used in this work is a raytraced ASU campus dataset generated with the DeepMIMO framework, containing around 90K users across an area of $410 \times 320$ meters with 1-meter resolution. The raytracing simulation captures various propagation effects, including LoS, reflections, diffuse scattering, and diffraction, providing a highly realistic environment. Figure 3 compares the real geographic view of the ASU campus with the digitally rendered version, illustrating the dataset's suitability for evaluating distance metrics in wireless channel compression tasks.

TABLE I: Correlation between dataset distances and model performance in the input space.

Category      Distance Name         Correlation   Compute Time (s)
Geometric     Pairwise Euclidean     0.36             11
              Clustered Euclidean    0.37            166
              Average Euclidean      0.34              3
              Cosine                -0.07           5255
Statistical   Jensen-Shannon         0.14             80
              Hellinger              0.15             81
              Wasserstein            0.52            562
              Kolmogorov-Smirnov     0.47            381
              Total Variation        0.15             78
              MMD (linear)          -0.08             79
              MMD (RBF)             -0.06            135
              Energy                 0.55            735
Subspace      Grassmann             -0.11          15223
              Chordal               -0.10          14810
              Asimov                -0.03          15594
Other         PAD                    0.64            952

IV-D Results: Distances in Input Space

Table I presents the correlations between different distance metrics and model performance for the CSI compression task in the raw input space. We evaluate three categories of distance metrics: geometric, statistical, and subspace-based. Key findings include:

  • Statistical distances outperform geometric ones. Wasserstein ($0.52$ correlation) and Energy ($0.55$) distances perform best because they capture distributional differences, unlike point-to-point metrics that struggle with high-dimensional data.

  • Geometric distances show lower correlations. Averaged and pairwise Euclidean distances achieve roughly $0.35$ correlation, owing to their limitations with high-dimensional wireless data.

  • Subspace distances perform poorly. Grassmann, Chordal, and Asimov metrics perform the worst and even yield negative correlations. This can be attributed to the similarity between projected subspaces, which results in practically constant principal angles and, thus, distances.

  • Computation time matters. While Wasserstein and Energy perform well, they are computationally expensive (562s and 735s, respectively). Simpler methods like centroid Euclidean (3s) are faster but offer lower correlations.

These results highlight that while statistical distances are more effective, their computational cost motivates the need for dimensionality reduction to make distance computations more practical.

IV-E UMAP for Improved Distance Computation

Dimensionality reduction helps mitigate the high computational costs of raw space distance computation by removing noise and redundancies. Fig. 5 compares several techniques, including PCA, t-SNE, and UMAP.

UMAP was found to be the most effective among dimensionality reduction techniques, striking a balance between preserving local and global structures. PCA, being linear, fails to differentiate between similar datasets, while t-SNE distorts global relationships by focusing on local clusters. UMAP retains essential geometric features, resulting in a latent space where distances better reflect dataset similarities.

IV-F Results: Distances in Latent Space


Figure 5: Visualization of latent spaces generated by different dimensionality reduction techniques. From left to right: the original space with proximity-based clustering, followed by PCA, t-SNE, UMAP, and a combination of PCA and UMAP.

Table II summarizes the performance of various distance metrics in different latent spaces obtained with PCA, t-SNE, UMAP, and an autoencoder (AE).

TABLE II: Correlation of distances with model performance in different latent spaces (dimensionality reduction method and latent dimension).

Category      Distance              PCA 32   t-SNE 2   UMAP 2   AE 32
Geometric     Pairwise Euclidean     0.37     0.58      0.83     0.87
              Clustered Euclidean    0.41     0.59      0.84     0.91
              Centroid Euclidean     0.35     0.59      0.86     0.93
              Cosine                 0.30     0.41      0.42     0.94
Statistical   KL Divergence          0.32     0.62      0.52     0.85
              Jensen-Shannon        -0.08     0.15      0.12     0.07
              Hellinger             -0.08     0.17      0.13     0.07
              Wasserstein            0.47     0.68      0.85     0.92
              Kolmogorov-Smirnov     0.57     0.32      0.46     0.22
              Total Variation       -0.06     0.13      0.10     0.04
              MMD (Linear)          -0.17     0.10      0.04     0.04
              MMD (RBF)             -0.13     0.06      0.04    -0.02
              Energy                 0.56     0.42      0.60     0.25
Subspace      Grassmann             -0.14     0.02     -0.22    -0.05
              Chordal               -0.14     0.02     -0.22    -0.05
              Asimov                -0.12     0.03     -0.23    -0.06
Other         PAD                    0.75     0.68      0.71     0.66

AE as an upper bound: The AE-based latent space achieves the highest correlation ($0.94$) between distances and model performance. This is unsurprising, as the AE architecture used for dimensionality reduction is similar to the model used for performance evaluation. However, the downside of this approach is its impracticality: training an AE for all datasets is computationally expensive and time-consuming, making it infeasible for real-world applications.

UMAP as a practical alternative: UMAP provides a close approximation to the AE’s performance, with correlations of around 0.85 for both Euclidean and Wasserstein distances. UMAP’s ability to balance local and global structures makes it a highly effective dimensionality reduction technique. Importantly, UMAP drastically reduces the computational complexity of distance computations, making it a practical choice for large-scale applications.

Euclidean and Wasserstein metrics perform best: In both the UMAP and AE spaces, Euclidean-based distances and Wasserstein distance achieve the highest correlations. This suggests that these metrics are well-suited to the latent spaces generated by UMAP and AEs, capturing meaningful dataset similarities that correlate with model performance.

Dimensionality reduction reduces computational costs: By projecting the data into lower-dimensional latent spaces, we reduce the computational burden of distance computation. For example, computing Wasserstein distance in the raw space takes 562s, but this time is significantly reduced when using UMAP. This makes UMAP a practical choice for improving both performance and computational efficiency.

The choice of dimensionality reduction method plays a crucial role in improving both the accuracy and efficiency of distance computations. While AE-based latent spaces provide the highest correlation with model performance, UMAP emerges as a more practical alternative, offering strong correlations with much lower computational overhead. UMAP, combined with simple Euclidean or Wasserstein distances, achieves correlations of around 0.85 with model performance, providing a close approximation to the upper bound set by the AE. This makes UMAP an attractive choice for real-world applications, where computational efficiency and scalability are critical considerations.

V Conclusion

In this work, we introduced a novel framework for dataset similarity evaluation in wireless communications, establishing one of the first links between dataset distances and model performance. By utilizing latent spaces derived from UMAP non-linear dimensionality reduction, we captured essential data structures for precise distance measurements in unsupervised tasks. Our results showed that the proposed metrics outperformed traditional methods, particularly in CSI compression. This framework enables smarter data selection, reduces model retraining, and supports more efficient machine learning deployment in wireless systems. Future work will extend these insights to broader wireless tasks and real-world datasets.

References

  • [1] C.-K. Wen, W.-T. Shih, and S. Jin, “Deep learning for massive MIMO CSI feedback,” IEEE Wireless Communications Letters, vol. 7, no. 5, pp. 748–751, 2018.
  • [2] A. Taha, M. Alrabeiah, and A. Alkhateeb, “Enabling large intelligent surfaces with compressive sensing and deep learning,” IEEE Access, vol. 9, pp. 44 304–44 321, 2021.
  • [3] A. Alkhateeb, S. Alex, P. Varkey, Y. Li, Q. Qu, and D. Tujkovic, “Deep learning coordinated beamforming for highly-mobile millimeter wave systems,” IEEE Access, vol. 6, pp. 37 328–37 348, 2018.
  • [4] M. Alrabeiah and A. Alkhateeb, “Deep learning for mmWave beam and blockage prediction using sub-6 GHz channels,” IEEE Transactions on Communications, vol. 68, no. 9, pp. 5504–5518, 2020.
  • [5] F. B. Mismar, B. L. Evans, and A. Alkhateeb, “Deep reinforcement learning for 5G networks: Joint beamforming, power control, and interference coordination,” IEEE Transactions on Communications, vol. 68, no. 3, pp. 1581–1592, 2020.
  • [6] S. Jiang, Q. Qu, X. Pan, A. Agrawal, R. Newcombe, and A. Alkhateeb, “Learnable wireless digital twins: Reconstructing electromagnetic field with neural representations,” 2024. [Online]. Available: https://arxiv.org/abs/2409.02564
  • [7] S. Alikhani, G. Charan, and A. Alkhateeb, “Large wireless model (LWM): A foundation model for wireless channels,” 2024. [Online]. Available: https://arxiv.org/abs/2411.08872
  • [8] M. R. Akdeniz, Y. Liu, M. K. Samimi, S. Sun, S. Rangan, T. S. Rappaport, and E. Erkip, “Millimeter wave channel modeling and cellular capacity evaluation,” IEEE Journal on Selected Areas in Communications, vol. 32, no. 6, pp. 1164–1179, 2014.
  • [9] “Platforms for advanced wireless research (PAWR),” https://www.advancedwireless.org/, accessed: 2024-10-06.
  • [10] 3rd Generation Partnership Project (3GPP), “3GPP TR 38.901 V16.1.0: Study on channel model for frequencies from 0.5 to 100 GHz,” ETSI, Technical Report V16.1.0, Dec. 2019. [Online]. Available: https://www.etsi.org/deliver/etsi_tr/138900_138999/138901/16.01.00_60/tr_138901v160100p.pdf, accessed: 2024-10-06.
  • [11] S. Sun, T. S. Rappaport, M. Shafi, J. Tang, H. Zhang, Y. X. Azar, and K. Wang, “A novel millimeter-wave channel simulator and applications for 5G wireless communications,” Proceedings of the IEEE, vol. 105, no. 12, pp. 2410–2435, 2017.
  • [12] Fraunhofer ISE, “Fraunhofer ISE annual report 2023–2024,” 2024, accessed: 2024-10-05. [Online]. Available: https://www.ise.fraunhofer.de/content/dam/ise/en/documents/annual_reports/fraunhofer-ise-annual-report-2023-2024.pdf
  • [13] “Wireless InSite ray-tracing software,” https://www.remcom.com/wireless-insite-em-propagation-software, accessed: 2024-10-06.
  • [14] “SionnaRT: High-fidelity ray tracing simulator for 6G research,” https://nvlabs.github.io/sionna-ray-tracing/, accessed: 2024-10-06.
  • [15] A. Alkhateeb, “DeepMIMO: A generic dataset for millimeter wave and massive MIMO applications,” arXiv preprint arXiv:1902.06435, 2019. [Online]. Available: https://arxiv.org/abs/1902.06435
  • [16] L. McInnes, J. Healy, and J. Melville, “UMAP: Uniform manifold approximation and projection for dimension reduction,” arXiv preprint arXiv:1802.03426, 2018.
  • [17] L. Van der Maaten and G. Hinton, “Visualizing data using t-SNE,” Journal of Machine Learning Research, 2008.
  • [18] C. Villani, Optimal Transport: Old and New. Springer-Verlag Berlin Heidelberg, 2008.
  • [19] G. Peyré and M. Cuturi, “Computational optimal transport,” Foundations and Trends in Machine Learning, vol. 11, no. 5–6, pp. 355–607, 2019.
  • [20] Y. Rubner, C. Tomasi, and L. Guibas, “The earth mover’s distance as a metric for image retrieval,” International Journal of Computer Vision, vol. 40, pp. 99–121, 2000.
  • [21] J. Guo, C.-K. Wen, S. Jin, and G. Y. Li, “Convolutional neural network based multiple-rate compressive sensing for massive MIMO CSI feedback: Design, simulation, and analysis,” 2019. [Online]. Available: https://arxiv.org/abs/1906.06007