A new classification framework for high-dimensional data
Abstract
Classification, a fundamental problem in many fields, faces significant challenges when handling a large number of features, a scenario commonly encountered in modern applications, such as identifying tumor subtypes from genomic data or categorizing customer attitudes based on online reviews. We propose a novel framework that utilizes the ranks of pairwise distances among observations and identifies consistent patterns in moderate- to high-dimensional data, which previous methods have overlooked. The proposed method exhibits superior performance across a variety of scenarios, from high-dimensional data to network data. We further explore a typical setting to investigate the key quantities that play essential roles in our framework, revealing its ability to distinguish differences in the first and/or second moments, as well as differences in higher moments.
Keywords: curse of dimensionality, ranks of pairwise distances, multi-class classification, network data
1 Introduction
In recent decades, the presence of high-dimensional data in classification has become increasingly common. For example, gene expression data with thousands of genes has been used to classify tumor subtypes (Golub et al., 1999; Ayyad et al., 2019), online review data with hundreds of features has been used to classify reviewers’ attitudes (Ye et al., 2009; Bansal and Srivastava, 2018), and speech signal data with thousands of utterances has been used to classify speakers’ sentiment (Burkhardt et al., 2010). To address the challenges in classifying high-dimensional data, many methods have been proposed.
One of the early methods used for high-dimensional classification was the support vector machine (SVM) (Boser et al., 1992; Brown et al., 1999; Furey et al., 2000; Schölkopf et al., 2001). Recently, Ghaddar and Naoum-Sawaya (2018) combined the SVM model with a feature selection approach to better handle high-dimensional data, and Hussain (2019) improved the traditional SVM by using a new semantic kernel. Another well-known approach is linear discriminant analysis (LDA) (Fisher, 1936; Rao, 1948), which has been extended to high-dimensional data by addressing the singularity of the covariance matrix. Extensions include generalized LDA (GLDA) (Li et al., 2005), dimension reduction with PCA followed by LDA in a low-dimensional space (Paliwal and Sharma, 2012), and regularized LDA (RLDA) (Yang and Wu, 2014). The k-nearest neighbor classifier is also a common approach (Cover and Hart, 1967) with many variants (Liu et al., 2006; Tang et al., 2011). Recently, Pal et al. (2016) applied the k-nearest neighbor criterion based on an inter-point dissimilarity measure that utilizes the mean absolute difference to address the issue of the concentration of pairwise distances. Other methods include the nearest centroids classifier with feature selection (Fan and Fan, 2008) and ensemble methods such as boosting and random forest (Freund et al., 1996; Buehlmann, 2006; Mayr et al., 2012; Breiman, 2001; Ye et al., 2013). There are also other methods available, such as partial least squares regression and multivariate adaptive regression splines, as reviewed in Fernández-Delgado et al. (2014).
In addition, neural network classification frameworks have shown promising results in various kinds of tasks, such as convolutional neural networks and their variants (LeCun et al., 1989; Ranzato et al., 2006; Shi et al., 2015; Cao et al., 2020) in image processing tasks, recurrent neural networks in sound classification (Deng et al., 2020; Zhang et al., 2021) and text classification (Liu et al., 2016), and generative adversarial networks in semi-supervised learning (Kingma et al., 2014; Zhou et al., 2020) and unsupervised learning (Radford et al., 2015; Kim and Hong, 2021).
While numerous methods exist for classification, a common underlying principle is that observations or their projections tend to be closer if they belong to the same class than if they come from different classes. This principle is effective in low-dimensional spaces but can fail in high-dimensional scenarios due to the curse of dimensionality. Consider a typical classification problem where observations are drawn from two distributions: and . Here and are unknown for the classification task, while and are observed and labeled by classes. The task is to classify a future observation, , as belonging to either class or class . In our simulation, we set , where , , , and , with . Similarly, , where , , and , with being a random vector from . By varying and , we can generate distributions that differ in mean and/or variance. Fifty new observations are generated from each of the two distributions and are classified in each trial. The average misclassification rate is calculated over fifty trials. While many classifiers are tested in this simple setting, we show results for a few representative ones from different categories that are either commonly used or have shown good performance: Generalized LDA (GLDA) (Li et al., 2005), support vector machine (SVM) (Schölkopf et al., 2001), random forest (RF) (Breiman, 2001), boosting (Freund et al., 1996), FAIR (Features Annealed Independence Rules) (Fan and Fan, 2008), NN-MADD (k-Nearest Neighbor classifier based on the Mean Absolute Difference of Distances) (Pal et al., 2016), and several top-rated methods based on the results in the review paper (Fernández-Delgado et al., 2014): decision tree (DT) (Salzberg, 1994), multivariate adaptive regression splines (MARS) (Leathwick et al., 2005), partial least squares regression (PLSR) (Martens and Naes, 1992), and extreme learning machine (ELM) (Huang et al., 2011). Among the various kinds of deep neural network structures, we choose the multilayer perceptron (MLP) (Popescu et al., 2009), which is not task-specific but still illustrates the core idea of deep neural networks. The structure of the MLP is discussed in Appendix A.
GLDA | SVM | RF | Boosting | FAIR | NN-MADD | DT | MARS | PLSR | ELM | MLP | ||
---|---|---|---|---|---|---|---|---|---|---|---|---|
4 | 1 | 0.328 | 0.430 | 0.327 | 0.504 | 0.476 | 0.512 | 0.316 | ||||
0 | 1.05 | 0.498 | 0.381 | 0.476 | 0.498 | 0.505 | 0.532 | 0.524 | 0.480 | 0.452 | 0.483 |
Table 1 presents the average misclassification rate for these classification methods when the distributions differ either in mean or variance. When there is only a mean difference, PLSR, ELM, SVM and GLDA perform the best. When there is only a variance difference, NN-MADD performs the best. However, NN-MADD performs poorly when there is only a mean difference. Therefore, there is a need to devise a new classification rule that works more universally.
The organization of the rest of the paper is as follows. In Section 2, we investigate the above example in more detail and propose new classification algorithms that are capable of distinguishing between the two classes in both scenarios. In Section 3, we evaluate the performance of the new algorithm in various simulation settings. To gain a deeper understanding of the new method, Section 4 explores key quantities and mechanisms underlying the new approach. The paper concludes with a brief discussion in Section 5.
2 Method and theory
2.1 Intuition
It is well-known that the equality of the two multivariate distributions, and , can be characterized by the equality of three distributions: the distribution of the inter-point distance between two observations from , the distribution of the inter-point distance between two observations from , and the distribution of the distance between one observation from and one from (Maa et al., 1996). We utilize this fact as the basis for our approach.
We begin by examining the inter-point distances of observations in both settings shown in Table 1. Heatmaps of these distances in a typical simulation run are shown in the top panel of Figure 1, where the data is arranged in the order of and . To better visualize the patterns, we also include data with larger differences in the bottom panel of Figure 1: (left) and (right).
[Figure 1: Heatmaps of the inter-point distances; top row: the mean difference (left) and variance difference (right) settings of Table 1; bottom row: the same settings with larger differences.]
We denote the distance between two observations both from class as , the distance between one observation from class and the other from class as , and the distance between two observations both from class as . We see that, under the mean difference setting, the between-group distance () tends to be larger than the within-group distances (left panel of Figure 1). However, under the variance difference setting, where class Y has a larger variance, we see that in general (right panel of Figure 1). This phenomenon is due to the curse of dimensionality: the volume of the space grows exponentially with the dimension, causing observations from a distribution with a larger variance to scatter farther apart than those from a distribution with a smaller variance. This behavior of high-dimensional data has been discussed in Chen and Friedman (2017).
Based on the observations above, we propose using and as summary statistics for class and , respectively. Specifically, under the mean difference setting, the between-group distance () tends to be larger than the within-group distance . This creates differences in both dimensions: between and , and between and . Under the variance difference setting, we observe (or if class has a larger variance), so differences exist in both dimensions as well. In either scenario, the summary statistic is distinguishable between the two classes.
This idea can also be extended to -class classification problems. For a -class classification problem with class labels , let be the distance between one observation from the class and one from the class, and let be the inter-point distance between two observations from the class. We can use as the summary statistic for the class. If two classes differ in distribution, then the corresponding summary statistics also differ in distribution. Hence, the summary statistic can distinguish each class from the others.
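To make this intuition concrete, the following is a minimal sketch (our illustration, not the paper's code) that contrasts the average within-class and between-class Euclidean distances for two Gaussian samples; the dimension, sample sizes, mean shift, and scale below are placeholder choices rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)


def mean_distances(x, y):
    """Average within-class (X-X, Y-Y) and between-class (X-Y) Euclidean distances."""
    d_xx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    d_yy = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    d_xy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    n, m = len(x), len(y)
    return (d_xx.sum() / (n * (n - 1)),  # mean within-class distance for X
            d_xy.mean(),                 # mean between-class distance
            d_yy.sum() / (m * (m - 1)))  # mean within-class distance for Y


d, n = 500, 100                                   # placeholder dimension and sample size
x = rng.normal(0.0, 1.0, size=(n, d))

# Mean difference only: the between-class distance tends to be the largest of the three.
y_mean = rng.normal(0.2, 1.0, size=(n, d))
print("mean difference:    ", np.round(mean_distances(x, y_mean), 2))

# Variance difference only: once the dimension is large, d_yy > d_xy > d_xx.
y_var = rng.normal(0.0, 1.2, size=(n, d))
print("variance difference:", np.round(mean_distances(x, y_var), 2))
```

The printed orderings mirror the patterns seen in Figure 1 and motivate using the per-class distance means as summary statistics.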
2.2 Proposed method
Let be a training set consisting of observations, where represents the -th observation and is its corresponding class label. We assume that all ’s are independent and unique, and that if , then is drawn from the distribution . Let denote the number of observations in the training set that belong to class . The goal is to classify a new observation that is generated from one of the distributions to one of the classes.
The distance-based approach and the rank-based approach are described in Algorithm 1 and Algorithm 2, respectively. In the distance-based approach, we first compute the pairwise distance matrix of the training set and the distance mean matrix. Then, we compute a distance vector, which contains all the pairwise distances from the new observation to the training set, and the group distance mean vector. The last step classifies the new observation by comparing its group distance mean vector with the distance mean matrix. In the rank-based approach, we add steps to compute the rank matrix and rank vector, along with the rank mean matrix and group rank mean vector, and in the last step classify the group rank mean vector based on the rank mean matrix.
Algorithm 1 (distance-based approach):

1. Construct a distance matrix:
2. Construct a distance mean matrix:
3. Construct a distance vector, where
4. Construct a group distance mean vector, where
5. Use QDA to classify:
where
Algorithm 2 (rank-based approach):

1. Construct a distance matrix, where
2. Construct a distance rank matrix, where
3. Construct a rank mean matrix:
4. Construct a distance vector, where
5. Construct a rank vector, where
6. Construct a group rank mean vector, where
7. Use QDA to classify:
where ,
Remark 1.
In the first step of both algorithms, the distance can be chosen as the Euclidean distance or any other suitable distance measure; in this paper, we default to the Euclidean distance. In the last step of both algorithms, the task is to classify the new observation's summary vector based on all the training summary vectors. Since this vector has dimension equal to the number of classes, other low-dimensional classification methods can also be used in place of QDA.
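As a reading aid, here is a minimal sketch of both algorithms as we understand the steps above. The function and variable names are ours, the per-row ranking in the rank-based branch is one plausible ranking scheme rather than the paper's exact definition, and QDA is taken from scikit-learn.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import rankdata
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis


def class_means(row, labels, classes, exclude=None):
    """Per-class means of one row of distances (or ranks): the K-dimensional summary."""
    keep = np.ones(len(labels), dtype=bool)
    if exclude is not None:
        keep[exclude] = False                      # leave out the observation itself
    return np.array([row[keep & (labels == k)].mean() for k in classes])


def fit_predict(x_train, y_train, x_new, rank_based=False):
    """Distance-based (Algorithm 1) or rank-based (Algorithm 2) classification sketch."""
    y_train = np.asarray(y_train)
    classes = np.unique(y_train)
    d_train = cdist(x_train, x_train)              # step 1: pairwise Euclidean distances
    d_new = cdist(x_new, x_train)                  # distances from new points to training set
    if rank_based:
        # Replace distances by their ranks within each row (one plausible scheme).
        d_train = np.apply_along_axis(rankdata, 1, d_train)
        d_new = np.apply_along_axis(rankdata, 1, d_new)
    # Steps 2-3: per-observation, per-class means form the low-dimensional summaries.
    s_train = np.vstack([class_means(d_train[i], y_train, classes, exclude=i)
                         for i in range(len(y_train))])
    s_new = np.vstack([class_means(r, y_train, classes) for r in d_new])
    # Last step: classify the K-dimensional summaries with QDA.
    qda = QuadraticDiscriminantAnalysis().fit(s_train, y_train)
    return qda.predict(s_new)
```

The `fit_predict` helper is reused in later sketches to illustrate the simulation protocol of Section 3.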
To illustrate how the first three steps perform in separating the different classes, for the four datasets in Figure 1, we plot the distance mean vector in Figure 2 and the rank mean vector in Figure 3. As shown in Figures 2 and 3, the classes are well separated, indicating the effectiveness of the first three steps.
[Figure 2: Distance mean vectors for the four datasets in Figure 1.]
[Figure 3: Rank mean vectors for the four datasets in Figure 1.]
Dist | Rank | GLDA | SVM | RF | FAIR | NN-MADD | PLSR | ELM | MLP | ||
---|---|---|---|---|---|---|---|---|---|---|---|
4 | 1 | 0.273 | 0.328 | 0.327 | 0.504 | 0.316 | |||||
0 | 1.05 | 0.498 | 0.381 | 0.476 | 0.505 | 0.357 | 0.480 | 0.452 | 0.483 |
We applied Algorithms 1 and 2 to the same two settings as in Table 1, and the results are presented in Table 2. We see that under the mean difference setting, the new method performs similarly to PLSR, ELM, SVM and GLDA, which are the best performers in this setting. Under the variance difference setting, the new approaches outperform all other methods.
3 Performance comparisons
Here, we examine the performance of the new methods by comparing them to other classification methods under various settings. Because DT and MARS are not much better than random guessing under either mean or variance differences, as indicated in Table 1, they are omitted from the following comparisons.
3.1 Two-class classification
In each trial, we generate two independent samples, and , with , and , , to be the training set. We set , with , , and with a random vector generated from . The testing samples are and . We consider a few different scenarios:
- Scenario S1: , ;
- Scenario S2: , ;
- Scenario S3: , ;
- Scenario S4: , .
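To make the evaluation protocol concrete, here is a hedged sketch of the trial loop, reusing the `fit_predict` helper sketched in Section 2.2 (assumed to be in scope); the dimension, sample sizes, mean shift, and scale are placeholders rather than the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(1)


def average_error(shift, scale, d=500, n_train=100, n_test=50, n_trials=50,
                  rank_based=True):
    """Average misclassification rate over repeated trials (placeholder parameters)."""
    errors = []
    for _ in range(n_trials):
        x_train = np.vstack([rng.normal(0.0, 1.0, size=(n_train, d)),
                             rng.normal(shift, scale, size=(n_train, d))])
        labels = np.repeat([0, 1], n_train)
        x_new = np.vstack([rng.normal(0.0, 1.0, size=(n_test, d)),
                           rng.normal(shift, scale, size=(n_test, d))])
        truth = np.repeat([0, 1], n_test)
        pred = fit_predict(x_train, labels, x_new, rank_based=rank_based)
        errors.append(np.mean(pred != truth))
    return float(np.mean(errors))


print(average_error(shift=0.1, scale=1.0))   # mean difference only
print(average_error(shift=0.0, scale=1.1))   # variance difference only
```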
Table 3 shows the average misclassification rate from trials. We see that when there is only a mean difference, the new methods (‘Dist’ and ‘Rank’) have a misclassification rate close to that of the best of the other methods. When there is only a variance difference, the new methods perform the best among all the methods. When there are both mean and variance differences, the new methods again have the lowest misclassification rate.
Dist | Rank | GLDA | SVM | RF | Boosting | FAIR | NN-MADD | PLSR | ELM | MLP | |||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
S1 | 6 | 1 | 0.115 | 0.354 | 0.077 | 0.343 | 0.146 | ||||||
S1 | 0 | 1.1 | 0.506 | 0.106 | 0.425 | 0.465 | 0.507 | 0.412 | 0.440 | 0.472 | |||
S1 | 6 | 1.1 | 0.044 | 0.107 | 0.368 | 0.107 | 0.024 | 0.064 | 0.076 | 0.201 | |||
S2 | 6 | 1 | 0.109 | 0.161 | 0.385 | 0.170 | 0.476 | 0.116 | 0.218 | ||||
S2 | 0 | 1.1 | 0.483 | 0.172 | 0.424 | 0.473 | 0.493 | 0.145 | 0.536 | 0.548 | 0.534 | ||
S2 | 6 | 1.1 | 0.128 | 0.162 | 0.389 | 0.197 | 0.140 | 0.176 | 0.188 | 0.249 | |||
S3 | 6 | 1 | 0.198 | 0.184 | 0.378 | 0.335 | 0.261 | 0.487 | 0.336 | 0.422 | |||
S3 | 0 | 1.1 | 0.478 | 0.141 | 0.345 | 0.460 | 0.495 | 0.092 | 0.504 | 0.532 | 0.473 | ||
S3 | 6 | 1.1 | 0.417 | 0.124 | 0.278 | 0.431 | 0.434 | 0.096 | 0.428 | 0.472 | 0.431 | ||
S4 | 0 | 1 | 0.500 | 0.380 | 0.483 | 0.487 | 0.506 | 0.361 | 0.476 | 0.536 | 0.485 |
3.2 Multi-class classification
In each trial, we randomly generate observations from four distributions , , as the training set, with , , , . We set , , and , with a random vector generated from . The and are set as follows: , ; , , . Under these settings, the four distributions have two different means and two different variances. The testing samples are , , . We consider the following scenarios:
- Scenario S5: ;
- Scenario S6: ;
- Scenario S7: .
Dist | Rank | GLDA | SVM | RF | Boosting | NN-MADD | PLSR | ELM | MLP | |
---|---|---|---|---|---|---|---|---|---|---|
S5 | 0.495 | 0.118 | 0.444 | 0.502 | 0.640 | 0.498 | 0.479 | 0.482 | ||
S6 | 0.495 | 0.190 | 0.456 | 0.513 | 0.603 | 0.506 | 0.488 | 0.478 | ||
S7 | 0.584 | 0.306 | 0.482 | 0.583 | 0.696 | 0.505 | 0.573 | 0.585 |
The average misclassification rates over trials are shown in Table 4 (FAIR can only be applied to two-class problems and is not included here). We see that the new method has the lowest misclassification rate across all these scenarios.
3.3 Network data classification
We generate random graphs using the configuration model , where is the number of vertices and is a vector containing the degrees of the vertices, with assigned to vertex . In each trial, we generate two independent samples, and , with degree vectors and , respectively. The testing samples are and . We consider the following scenarios:
- Scenario S8: , ; , , , ;
- Scenario S9: , , .
When comparing the performance of the methods, we convert the network data with 40 nodes into adjacency matrices, which are further converted into -dimensional vectors. We perform this conversion so that all methods in the comparison can be applied. Our approach, in contrast, can be applied to network data directly by using a distance defined on networks in step 1 of Algorithms 1 and 2.
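For concreteness, a minimal sketch of this preprocessing using networkx (our illustration; the degree sequences below are placeholders, not the scenario values):

```python
import networkx as nx
import numpy as np


def graph_to_vector(degrees, seed):
    """Sample a configuration-model graph and flatten its adjacency matrix."""
    g = nx.configuration_model(degrees, seed=seed)
    g = nx.Graph(g)                               # collapse parallel edges
    g.remove_edges_from(nx.selfloop_edges(g))     # drop self-loops
    return nx.to_numpy_array(g).ravel()           # one row per graph for the classifiers


n_nodes = 40
deg_x = [6] * n_nodes                             # placeholder degree sequences
deg_y = [4] * (n_nodes // 2) + [8] * (n_nodes // 2)

x_train = np.vstack([graph_to_vector(deg_x, seed=s) for s in range(100)])
y_train = np.vstack([graph_to_vector(deg_y, seed=1000 + s) for s in range(100)])
```

For the proposed approach, this vectorization step can be skipped by replacing the Euclidean distance in step 1 with a distance defined directly on graphs.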
The results are presented in Table 5. We see that the new method is among the best performers in all settings, while other good performers could work well under some settings but fail for others.
Dist | Rank | GLDA | SVM | RF | Boosting | NN-MADD | PLSR | ELM | MLP | ||
---|---|---|---|---|---|---|---|---|---|---|---|
S8 | 5 | 0.194 | 0.377 | 0.382 | 0.405 | 0.445 | 0.119 | 0.318 | 0.411 | 0.433 | |
S8 | 10 | 0.089 | 0.342 | 0.321 | 0.371 | 0.462 | 0.306 | 0.352 | 0.411 | ||
S8 | 15 | 0.017 | 0.331 | 0.326 | 0.401 | 0.483 | 0.376 | 0.294 | 0.427 | ||
S8 | 20 | 0.315 | 0.290 | 0.380 | 0.470 | 0.476 | 0.482 | 0.504 | |||
S9 | 4 | 0.118 | 0.151 | 0.220 | 0.404 | 0.386 | 0.254 | 0.458 | 0.482 | ||
S9 | 8 | 0.020 | 0.026 | 0.127 | 0.344 | 0.164 | 0.203 | 0.077 | 0.392 | ||
S9 | 12 | 0.059 | 0.289 | 0.080 | 0.088 | 0.208 | |||||
S9 | 16 | 0.041 | 0.271 | 0.045 | 0.059 | 0.213 |
3.4 Robustness analysis
If there are outliers in the data, the distance-based approach is much less robust than the rank-based approach. We consider the simulation setting of Scenario S1 (multivariate normal distribution) in Section 3.1, contaminated by outliers , , where is the number of outliers; all other observations are simulated in the same way as before. Table 6 shows the misclassification rates of the two approaches. We see that with outliers, the distance-based approach has a much higher misclassification rate than the rank-based approach. Therefore, the rank-based approach is recommended in practice for its robustness. However, under the ideal scenario of no outliers, the performance of the two approaches is similar, and we can study the distance-based approach as an approximation to the rank-based version, since the former is much easier to analyze.
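As an illustration of the contamination mechanism, here is a hedged sketch reusing the `fit_predict` helper from Section 2.2; the outlier magnitude and count, dimension, and sample sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_train, n_test, n_out = 500, 100, 50, 3       # placeholder sizes and outlier count

x_tr = rng.normal(0.0, 1.0, size=(n_train, d))
y_tr = rng.normal(0.0, 1.1, size=(n_train, d))
x_tr[:n_out] += 50.0                              # inject a few extreme outliers into class X

x_train = np.vstack([x_tr, y_tr])
labels = np.repeat([0, 1], n_train)
x_new = np.vstack([rng.normal(0.0, 1.0, size=(n_test, d)),
                   rng.normal(0.0, 1.1, size=(n_test, d))])
truth = np.repeat([0, 1], n_test)

for rank_based in (False, True):
    pred = fit_predict(x_train, labels, x_new, rank_based=rank_based)
    name = "rank-based" if rank_based else "distance-based"
    print(name, "error:", np.mean(pred != truth))
```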
 | | # outliers | Rank | Dist | | | # outliers | Rank | Dist
---|---|---|---|---|---|---|---|---|---
0 | 1.1 | 1 | 0.0243 | 0.3081 | 0 | 1.1 | 3 | 0.0350 | 0.3246 |
6 | 1 | 1 | 0.0303 | 0.1275 | 6 | 1 | 3 | 0.0439 | 0.1410 |
6 | 1.1 | 1 | 0.0114 | 0.1269 | 6 | 1.1 | 3 | 0.0262 | 0.1510 |
0 | 1.1 | 5 | 0.0433 | 0.3352 | 0 | 1.1 | 7 | 0.0396 | 0.3378 |
6 | 1 | 5 | 0.0530 | 0.1694 | 6 | 1 | 7 | 0.0708 | 0.2287 |
6 | 1.1 | 5 | 0.0451 | 0.1747 | 6 | 1.1 | 7 | 0.0657 | 0.2265 |
4 Explore quantities that play important roles
In this section, we aim to explore the key factors that play important roles in the framework. We consider the following two-class setting ():
We use this setting () because the difference between the two classes can be controlled via , , and . Our objective is to approximate the misclassification rate using quantities derived from , , and . Directly handling the rank-based approach is challenging due to the complexities related to ranks; therefore, we work with the distance-based approach, which offers similar performance but is more manageable.
Define
where , , , . Under Setting (), we can compute the expectation and covariance matrix of and through the following theorems. The proofs for these theorems are provided in the Supplemental materials.
Theorem 4.1.
Let , be generated from Setting (), we have
where the and :
and and defined in Lemma 4.2.
Lemma 4.2.
For , , where , with , and , with , , , , , with , , , we have
For a testing sample, suppose (the distribution of ’s) and (the distribution of ’s), we can also obtain the expectation and covariance matrix of and in a similar way, where with and .
Theorem 4.3.
Under Setting (), the expectation and covariance matrix of and are given by:
If we further add constraints on and , we can obtain the asymptotic distribution of the distance mean vector .
Theorem 4.4.
In Setting (), let , , ,
and . If and are band matrices, where ; , with a fixed number, , , with and fixed numbers, and , then converges to a normal distribution as .
Remark 2.
A special case for the condition to hold is when all ’s are the same: the non-zero elements in the ’s are the same and the non-zero elements in the ’s are the same, i.e., and , , , .
Remark 3.
We can also prove the asymptotic normality of , and by changing the conditions in Theorem 4.4. By substituting with , with , with , with and with , we can obtain the conditions for ; by substituting with , we can obtain the conditions for ; by substituting with , with , with , with and with , we can obtain the conditions for .
It should be noted that the distance vector depends on the pairwise distances in the distance matrix. Therefore, depends on the ’s. We studied an independent version of the distance-based approach in Supplement S3 and found that the dependency has minimal impact on the misclassification rate. Therefore, we continue to sample independently in our estimation. By sampling from and applying the decision rule to each with
we can estimate the misclassification rate by simulating the last step.
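A generic sketch of this last-step simulation under the normal approximation is given below; the moment values are hypothetical placeholders (in the paper they would come from Theorems 4.1-4.3), and the decision rule is taken to be a QDA-type comparison of Gaussian log-densities.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)

# Hypothetical moments of the two-dimensional group distance mean vector
# under class X and class Y (placeholders, not values from the theorems).
mu_x = np.array([31.6, 34.9])
cov_x = np.array([[0.05, 0.01], [0.01, 0.06]])
mu_y = np.array([34.9, 37.9])
cov_y = np.array([[0.06, 0.01], [0.01, 0.08]])


def estimated_error(n_draws=100_000):
    """Monte Carlo estimate of the misclassification rate (equal class priors)."""
    err = 0.0
    for mu, cov, mu_o, cov_o in [(mu_x, cov_x, mu_y, cov_y),
                                 (mu_y, cov_y, mu_x, cov_x)]:
        draws = rng.multivariate_normal(mu, cov, size=n_draws)
        own = multivariate_normal.logpdf(draws, mean=mu, cov=cov)
        other = multivariate_normal.logpdf(draws, mean=mu_o, cov=cov_o)
        err += 0.5 * np.mean(other > own)         # misclassified if the other class wins
    return err


print(estimated_error())
```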
We now test our estimation through numerical simulations. Under Setting (), we set , , , , , with , where , is a random vector generated from . The testing samples are and . We consider the following scenarios:
- Scenario S10: , ; fix and change ;
- Scenario S11: , ; fix and change ;
- Scenario S12: , ; fix and change ;
- Scenario S13: , ; fix and change .
[Figure 4: Analytic versus simulated misclassification rates under Scenarios S10-S13.]
Figure 4 compares the analytic misclassification rate, calculated using the formulas from this section, with the simulated misclassification rate, obtained from 50 simulation runs of Algorithms 1 and 2 under these scenarios. We see that the analytic misclassification rates closely match the simulated ones. This suggests that the formulas used for estimating the misclassification rate effectively capture the key quantities from the distributions that are critical for the proposed approaches.
By examining these formulas, we see that the first and second moments of and are particularly significant (through the function). This likely explains why the proposed approaches perform well in scenarios with differences in means and/or variances. Additionally, the third and fourth moments of and also contribute, albeit to a lesser extent, through the function.
5 Conclusion
We propose a novel framework for high-dimensional classification that utilizes common patterns found in high-dimensional data under both the mean difference and variance difference scenarios. This framework exhibits superior performance in high-dimensional data classification and network data classification. Additionally, we provide theoretical analysis to understand the key quantities that influence the method’s performance.
While the Euclidean distance is the default choice for computing pairwise distances in the proposed algorithms, the framework is not limited to this measure and can be extended to other types of distance measures. Exploring the performance of the method with different distances is an interesting avenue for future research, and we plan to investigate this further.
Appendix A Discussion about multilayer perceptron (MLP)
In this section, we examine the impact of different parameters in a multilayer perceptron (MLP) on high-dimensional classification tasks. All simulations are conducted under Setting () with , , , , , and . Here, , and is a random vector generated from . The testing samples are and .
For the MLP structure, we used three hidden layers, each with the same number of nodes and activation function, and an output layer with the softmax activation function. We first tested the performance of different activation functions with nodes in each layer and training samples. The results are shown in Table 7. We observed that the “relu” and “softplus” activation functions slightly outperformed the “gelu” function when classifying mean differences. However, only the “gelu” function works for classifying variance differences. Therefore, we selected the “gelu” function as the activation function in subsequent simulations.
gelu | relu | tanh | softplus | selu | ||
---|---|---|---|---|---|---|
6 | 1 | 0.028 | 0.195 | 0.173 | ||
0 | 1.1 | 0.502 | 0.496 | 0.498 | 0.488 | |
0 | 1.4 | 0.496 | 0.490 | 0.471 | 0.385 |
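A hedged Keras sketch of the structure described above: three equal-width hidden layers with the “gelu” activation and a softmax output layer, with the width of 1,000 nodes taken from the text; the optimizer, loss, and training epochs are our illustrative assumptions, since they are not specified here.

```python
import tensorflow as tf


def build_mlp(input_dim, n_classes, width=1000, activation="gelu"):
    """Three equal-width hidden layers and a softmax output layer."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(width, activation=activation),
        tf.keras.layers.Dense(width, activation=activation),
        tf.keras.layers.Dense(width, activation=activation),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    # Optimizer, loss, and epoch count are illustrative choices, not the paper's.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Example usage (hypothetical data):
# mlp = build_mlp(input_dim=500, n_classes=2)
# mlp.fit(x_train, labels, epochs=50, batch_size=32, verbose=0)
```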
Next, we examine the impact of different training set sizes on the performance of the MLP with 1,000 nodes per layer and the “gelu” activation function. The results are shown in Table 8. We observe that, compared to other classification methods, the MLP requires significantly larger sample sizes to achieve optimal performance. In practice, the performance of MLP may be constrained by the available training set size.
 | | 100 | 200 | 400 | 800
---|---|---|---|---|---
6 | 1 | 0.146 | 0.055 | 0.035 | 0.028 |
0 | 1.1 | 0.610 | 0.495 | 0.470 | 0.462 |
0 | 1.4 | 0.410 | 0.275 | 0.130 | 0.057 |
Table 9 presents the performance of other methods under the same settings as in Table 7. A comparison of these results reveals that the MLP performs comparably to the best-performing methods when classifying mean differences. However, when classifying variance differences, the MLP fails to perform effectively even when the signal is already large enough for other effective methods (second row of Table 9).
Dist | Rank | GLDA | SVM | RF | Boosting | FAIR | NN-MADD | PLSR | ELM | ||
---|---|---|---|---|---|---|---|---|---|---|---|
6 | 1 | 0.115 | 0.354 | 0.077 | 0.343 | ||||||
0 | 1.1 | 0.506 | 0.106 | 0.425 | 0.465 | 0.507 | 0.412 | 0.440 | |||
0 | 1.4 | 0.457 | 0.106 | 0.339 | 0.500 | 0.459 | 0.470 |
References
- Golub et al. [1999] Todd R Golub, Donna K Slonim, Pablo Tamayo, Christine Huard, Michelle Gaasenbeek, Jill P Mesirov, Hilary Coller, Mignon L Loh, James R Downing, Mark A Caligiuri, et al. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. science, 286(5439):531–537, 1999.
- Ayyad et al. [2019] Sarah M Ayyad, Ahmed I Saleh, and Labib M Labib. Gene expression cancer classification using modified k-nearest neighbors technique. Biosystems, 176:41–51, 2019.
- Ye et al. [2009] Qiang Ye, Ziqiong Zhang, and Rob Law. Sentiment classification of online reviews to travel destinations by supervised machine learning approaches. Expert systems with applications, 36(3):6527–6535, 2009.
- Bansal and Srivastava [2018] Barkha Bansal and Sangeet Srivastava. Sentiment classification of online consumer reviews using word vector representations. Procedia computer science, 132:1147–1153, 2018.
- Burkhardt et al. [2010] Felix Burkhardt, Martin Eckert, Wiebke Johannsen, and Joachim Stegmann. A database of age and gender annotated telephone speech. In LREC. Malta, 2010.
- Boser et al. [1992] Bernhard E Boser, Isabelle M Guyon, and Vladimir N Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the fifth annual workshop on Computational learning theory, pages 144–152, 1992.
- Brown et al. [1999] M Brown, William Noble Grundy, David Lin, Nello Cristianini, Charles Sugnet, Manuel Ares, and David Haussler. Support vector machine classification of microarray gene expression data. University of California, Santa Cruz, Technical Report UCSC-CRL-99-09, 1999.
- Furey et al. [2000] Terrence S Furey, Nello Cristianini, Nigel Duffy, David W Bednarski, Michel Schummer, and David Haussler. Support vector machine classification and validation of cancer tissue samples using microarray expression data. Bioinformatics, 16(10):906–914, 2000.
- Schölkopf et al. [2001] Bernhard Schölkopf, John C Platt, John Shawe-Taylor, Alex J Smola, and Robert C Williamson. Estimating the support of a high-dimensional distribution. Neural computation, 13(7):1443–1471, 2001.
- Ghaddar and Naoum-Sawaya [2018] Bissan Ghaddar and Joe Naoum-Sawaya. High dimensional data classification and feature selection using support vector machines. European Journal of Operational Research, 265(3):993–1004, 2018.
- Hussain [2019] Syed Fawad Hussain. A novel robust kernel for classifying high-dimensional data using support vector machines. Expert Systems with Applications, 131:116–131, 2019.
- Fisher [1936] Ronald A Fisher. The use of multiple measurements in taxonomic problems. Annals of eugenics, 7(2):179–188, 1936.
- Rao [1948] C Radhakrishna Rao. The utilization of multiple measurements in problems of biological classification. Journal of the Royal Statistical Society. Series B (Methodological), 10(2):159–203, 1948.
- Li et al. [2005] Haifeng Li, Keshu Zhang, and Tao Jiang. Robust and accurate cancer classification with gene expression profiling. In 2005 IEEE Computational Systems Bioinformatics Conference (CSB’05), pages 310–321. IEEE, 2005.
- Paliwal and Sharma [2012] Kuldip K Paliwal and Alok Sharma. Improved pseudoinverse linear discriminant analysis method for dimensionality reduction. International Journal of Pattern Recognition and Artificial Intelligence, 26(01):1250002, 2012.
- Yang and Wu [2014] Wuyi Yang and Houyuan Wu. Regularized complete linear discriminant analysis. Neurocomputing, 137:185–191, 2014.
- Cover and Hart [1967] Thomas Cover and Peter Hart. Nearest neighbor pattern classification. IEEE transactions on information theory, 13(1):21–27, 1967.
- Liu et al. [2006] Ting Liu, Andrew W Moore, Alexander Gray, and Claire Cardie. New algorithms for efficient high-dimensional nonparametric classification. Journal of Machine Learning Research, 7(6), 2006.
- Tang et al. [2011] Lu-An Tang, Yu Zheng, Xing Xie, Jing Yuan, Xiao Yu, and Jiawei Han. Retrieving k-nearest neighboring trajectories by a set of point locations. In International Symposium on Spatial and Temporal Databases, pages 223–241. Springer, 2011.
- Pal et al. [2016] Arnab K Pal, Pronoy K Mondal, and Anil K Ghosh. High dimensional nearest neighbor classification based on mean absolute differences of inter-point distances. Pattern Recognition Letters, 74:1–8, 2016.
- Fan and Fan [2008] Jianqing Fan and Yingying Fan. High dimensional classification using features annealed independence rules. Annals of statistics, 36(6):2605, 2008.
- Freund et al. [1996] Yoav Freund, Robert E Schapire, et al. Experiments with a new boosting algorithm. In ICML, volume 96, pages 148–156. Citeseer, 1996.
- Buehlmann [2006] Peter Buehlmann. Boosting for high-dimensional linear models. The Annals of Statistics, 34(2):559–583, 2006.
- Mayr et al. [2012] Andreas Mayr, Nora Fenske, Benjamin Hofner, Thomas Kneib, and Matthias Schmid. Generalized additive models for location, scale and shape for high dimensional data—a flexible approach based on boosting. Journal of the Royal Statistical Society: Series C (Applied Statistics), 61(3):403–427, 2012.
- Breiman [2001] Leo Breiman. Random forests. Machine learning, 45(1):5–32, 2001.
- Ye et al. [2013] Yunming Ye, Qingyao Wu, Joshua Zhexue Huang, Michael K Ng, and Xutao Li. Stratified sampling for feature subspace selection in random forests for high dimensional data. Pattern Recognition, 46(3):769–787, 2013.
- Fernández-Delgado et al. [2014] Manuel Fernández-Delgado, Eva Cernadas, Senén Barro, and Dinani Amorim. Do we need hundreds of classifiers to solve real world classification problems? The journal of machine learning research, 15(1):3133–3181, 2014.
- LeCun et al. [1989] Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541–551, 1989.
- Ranzato et al. [2006] Marc’Aurelio Ranzato, Christopher Poultney, Sumit Chopra, and Yann Cun. Efficient learning of sparse representations with an energy-based model. Advances in neural information processing systems, 19, 2006.
- Shi et al. [2015] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems, 28, 2015.
- Cao et al. [2020] Xiangyong Cao, Jing Yao, Zongben Xu, and Deyu Meng. Hyperspectral image classification with convolutional neural network and active learning. IEEE Transactions on Geoscience and Remote Sensing, 58(7):4604–4616, 2020.
- Deng et al. [2020] Muqing Deng, Tingting Meng, Jiuwen Cao, Shimin Wang, Jing Zhang, and Huijie Fan. Heart sound classification based on improved mfcc features and convolutional recurrent neural networks. Neural Networks, 130:22–32, 2020.
- Zhang et al. [2021] Zhichao Zhang, Shugong Xu, Shunqing Zhang, Tianhao Qiao, and Shan Cao. Attention based convolutional recurrent neural network for environmental sound classification. Neurocomputing, 453:896–903, 2021.
- Liu et al. [2016] Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. Recurrent neural network for text classification with multi-task learning. arXiv preprint arXiv:1605.05101, 2016.
- Kingma et al. [2014] Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. Advances in neural information processing systems, 27, 2014.
- Zhou et al. [2020] Huaji Zhou, Licheng Jiao, Shilian Zheng, Lifeng Yang, Weiguo Shen, and Xiaoniu Yang. Generative adversarial network-based electromagnetic signal classification: A semi-supervised learning framework. China Communications, 17(10):157–169, 2020.
- Radford et al. [2015] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
- Kim and Hong [2021] Dahye Kim and Byung-Woo Hong. Unsupervised feature elimination via generative adversarial networks: application to hair removal in melanoma classification. IEEE Access, 9:42610–42620, 2021.
- Salzberg [1994] Steven L Salzberg. C4.5: Programs for Machine Learning by J. Ross Quinlan. Morgan Kaufmann Publishers, Inc., 1993, 1994.
- Leathwick et al. [2005] JR Leathwick, D Rowe, J Richardson, Jane Elith, and T Hastie. Using multivariate adaptive regression splines to predict the distributions of new zealand’s freshwater diadromous fish. Freshwater Biology, 50(12):2034–2052, 2005.
- Martens and Naes [1992] Harald Martens and Tormod Naes. Multivariate calibration. John Wiley & Sons, 1992.
- Huang et al. [2011] Guang-Bin Huang, Hongming Zhou, Xiaojian Ding, and Rui Zhang. Extreme learning machine for regression and multiclass classification. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 42(2):513–529, 2011.
- Popescu et al. [2009] Marius-Constantin Popescu, Valentina E Balas, Liliana Perescu-Popescu, and Nikos Mastorakis. Multilayer perceptron and neural networks. WSEAS Transactions on Circuits and Systems, 8(7):579–588, 2009.
- Maa et al. [1996] Jen-Fue Maa, Dennis K Pearl, and Robert Bartoszyński. Reducing multidimensional two-sample data to one-dimensional interpoint comparisons. The annals of statistics, 24(3):1069–1074, 1996.
- Chen and Friedman [2017] Hao Chen and Jerome H Friedman. A new graph-based two-sample test for multivariate and object data. Journal of the American statistical association, 112(517):397–409, 2017.