Nonlinear subspace clustering by functional link neural networks
Abstract
Nonlinear subspace clustering based on a feed-forward neural network has been demonstrated to provide better clustering accuracy than some advanced subspace clustering algorithms. While this approach demonstrates impressive outcomes, it involves a balance between effectiveness and computational cost. In this study, we employ a functional link neural network to transform data samples into a nonlinear domain. Subsequently, we acquire a self-representation matrix through a learning mechanism that builds upon the mapped samples. As the functional link neural network is a single-layer neural network, our proposed method achieves high computational efficiency while ensuring desirable clustering performance. By incorporating a local similarity regularization to enhance the grouping effect, our proposed method further improves the quality of the clustering results. We name our method Functional Link Neural Network Subspace Clustering (FLNNSC). Furthermore, we propose a convex combination subspace clustering scheme that combines a linear subspace clustering method with the functional link neural network subspace clustering approach; this combined method, named Convex Combination Subspace Clustering (CCSC), allows for a dynamic balance between linear and nonlinear representations. Extensive experiments conducted on four widely used datasets, including Extended Yale B, USPS, COIL20, and ORL, demonstrate that both FLNNSC and CCSC outperform several state-of-the-art subspace clustering methods in terms of clustering accuracy. Our affinity graph experiments reveal that FLNNSC exhibits clear block-diagonal structures. We provide recommendations for the hyperparameters of FLNNSC by performing a parameter sensitivity analysis, and empirically verify the convergence of FLNNSC. Additionally, we show that FLNNSC has a lower computational cost compared to two high-performing methods.
keywords:
subspace clustering, functional link neural networks, grouping effect, nonlinear space, self-representation matrix, convex combination

1 Introduction
Subspace clustering involves the objective of identifying clusters within data points distributed across distinct subspaces in a high-dimensional space [1, 2]. Its practical applications encompass various scenarios, including image segmentation [3, 4], motion segmentation [5, 6], face grouping [7, 8], character identification [9, 10], among others. The categorization of subspace clustering methods generally comprises four classes: algebraic approaches [11], iterative techniques [12], statistical methodologies [13, 14], and methods based on spectral clustering [15, 16]. Notably, spectral clustering-based techniques have garnered considerable attention due to their superior clustering performance. In essence, spectral clustering-based methods entail two fundamental stages: 1) initial acquisition of a representation matrix to establish a similarity matrix derived from the original data points, and 2) subsequent execution of spectral decomposition and $k$-means clustering to derive the ultimate cluster assignments.
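For concreteness, the following is a minimal MATLAB sketch of this generic two-stage pipeline. The random matrix stands in for a learned representation, and the symmetrization rule is only one common choice rather than the construction of any particular method discussed below.

```matlab
% Minimal sketch of the two-stage spectral clustering pipeline (stand-in data).
n = 60; k = 3;
C = rand(n);                             % stand-in for a learned representation matrix
S = (abs(C) + abs(C')) / 2;              % one common way to form a similarity matrix
d = sum(S, 2);
Dh = diag(1 ./ sqrt(d));
Lsym = eye(n) - Dh * S * Dh;             % symmetric normalized graph Laplacian
Lsym = (Lsym + Lsym') / 2;               % guard against numerical asymmetry
[V, ~] = eigs(Lsym, k, 'smallestabs');   % spectral decomposition: k smallest eigenvectors
V = V ./ max(vecnorm(V, 2, 2), eps);     % row normalization (Ng-Jordan-Weiss style)
idx = kmeans(V, k, 'Replicates', 10);    % k-means on the spectral embedding
```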
Over the last decade, several subspace clustering techniques have emerged in the literature, all aiming to uncover the underlying data subspace structure. The most popular spectral clustering algorithms are Sparse Subspace Clustering (SSC) [3] and Low Rank Representation (LRR) [5], which enable sparse or minimum-rank representations of data samples, respectively. Specifically, SSC operates under the assumption that each data point can be expressed as a sparse linear combination of other points, while LRR aims to create a low-rank matrix that captures the overall data structure. To preserve both the global and local data structures, some researchers have explored the integration of both low-rank and sparse regularizations; typical works in this domain can be found in references [17] and [18]. Brbic et al. conducted an in-depth investigation of the $\ell_0$-norm for sparse regularization to address the over-penalization problem induced by the $\ell_1$-norm [19]. The Frobenius norm is applied to the representation matrix in the Least Squares Regression (LSR) algorithm [20], with the goal of establishing the block diagonal characteristic within both inter-cluster and intra-cluster affinities. The work in [21] merges SSC and LSR through the utilization of the trace Lasso technique; this adaptive approach enables the dynamic selection and clustering of correlated data. Lu et al. put forth an innovative regularization strategy relying on the block diagonal matrix, directly imposing the block diagonal structure onto the Block Diagonal Representation (BDR) [9]. Hu et al. defined the grouping effect as the extent to which a representation matrix reflects the cluster structure of data points and developed the Smooth Representation Clustering (SMR) model [22], which incorporates a smoothness regularizer to impose the grouping effect and a least squares error term to fit the data. Moreover, the authors also introduced a novel affinity measure that leverages the grouping effect, which shows better performance than the conventional one. To deal with the complex noise that often occurs in real-world scenarios, many subspace clustering methods take robust cost functions into account to enhance algorithm robustness. Correntropy, particularly the Maximum Correntropy Criterion (MCC), stands out as a popular robust criterion. It has found successful applications in signal processing and machine learning [23], owing to its ability to leverage high-order statistical information [24]. He et al. applied MCC directly to subspace clustering [25], while Guo et al. incorporated correntropy with low-rank regularization to enable robust motion segmentation [26]. Furthermore, Zhang et al. considered the Schatten p-norm and correntropy together for kernel subspace clustering [27]. In addition to the aforementioned research, more recent interest in subspace clustering has shifted towards the multi-view field, involving aspects such as tensor techniques [28], the information bottleneck principle [29], and latent representations [30].
Many existing techniques for subspace clustering rely on leveraging the linear connections between samples to learn the affinity matrix. Indeed, in many practical scenarios, data samples often exhibit nonlinear relationships. The previously mentioned subspace clustering methods may experience performance degradation when dealing with such nonlinear data samples, thereby motivating the development of nonlinear subspace clustering methods. As a typical category of nonlinear techniques, kernel-based subspace clustering methods have gained popularity. One such example is Kernel Sparse Subspace Clustering (KSSC) [31], which incorporates the kernel method into SSC, enabling the algorithm to find a nonlinear decision boundary that can separate the data more effectively. Yin et al. explored the self-expressive principle for symmetric positive definite matrices in the kernel subspace [32]. Yang et al. considered the benefits of correntropy and block diagonal regularization when designing a kernel subspace clustering method [33]. Zhen et al. enhanced the algorithm robustness and guaranteed the block diagonal property by truncating the trivial coefficients of the kernel matrix [34]. Wang et al. integrated coefficient discriminant information and kernel subspace clustering into a unified process [35]. Kang et al. introduced a comprehensive framework that unifies graph construction and kernel learning, harnessing the resemblance of the kernel matrix to enhance clustering performance [36]. Multi-kernel learning methods have been used to find the optimal kernel combination for better clustering results [37, 38]. In addition to the kernel method, Zhu et al. employed a feed-forward neural network for executing nonlinear mapping and subsequently introduced the Nonlinear Subspace Clustering (NSC) technique [39]. Deep Subspace Clustering (DSC) techniques, which combine the advantages of deep neural networks and subspace clustering methods, have drawn much attention [40]; they introduce a self-expression layer between the encoder and the decoder to learn a similarity graph. However, subspace clustering methods based on neural networks have limitations in computational cost when the number of layers is large, and may also be sensitive to the setting of hyperparameters [41]. For a clear comparison, we summarize the advantages and shortcomings of several typical methods in Table 1.
To address the aforementioned challenge, this paper introduces an innovative subspace clustering technique that utilizes the functional link neural network (FLNN). This approach is termed Functional Link Neural Network Subspace Clustering (FLNNSC). FLNN is a single-layer artificial neural network that applies nonlinear transformations to the input features to strengthen the learning capability of the network [42]. This network has found applications in diverse areas such as filter design [43] and solving the van der Pol-Duffing oscillator [44]. Compared with conventional neural networks, FLNN is able to achieve comparable function approximation performance with faster convergence and lower computational complexity. In practical scenarios, such as in the case of biological signals [45, 46], data often exhibit both linear and nonlinear structures. To fully exploit both the linear and nonlinear features of the data, we propose an extension of FLNNSC, introducing a convex combination subspace clustering (CCSC) method that simultaneously integrates linear and nonlinear representations. Our main contributions are summarized as follows:
1. Compared to existing subspace clustering methods, FLNNSC is computationally efficient in exploiting the nonlinear relations among data points, thanks to the lightweight structure of FLNN.
2. FLNNSC is capable of capturing local similarity through the grouping effect, as well as avoiding the overfitting issue by imposing a weight regularization mechanism.
3. Our subsequently proposed CCSC method allows for a more comprehensive exploration of the data’s diverse features, owing to the integration of both linear and nonlinear representations.
The organization of the paper is as follows. Section 2 offers a review of relevant subspace clustering methods. In Section 3, we introduce our novel subspace clustering approach, and Section 4 presents its convex combination extension. The evaluation of our methods’ performance across diverse datasets is conducted in Section 5. Conclusions are drawn in Section 6. The main notations used in the paper are summarized in Table 2.
Methods | Advantages | Shortcomings |
SSC [3] | sparse data selection | lacks grouping effect; weak on nonlinear data |
LRR [5] | grouping effect | no sparse subset selection; weak on nonlinear data |
LSR [20] | grouping effect | no sparse subset selection; weak on nonlinear data |
CASS [21] | balance between SSC and LSR | weak on nonlinear data |
BDR [9] | block diagonal property | weak on nonlinear data |
SMR [22] | grouping effect | weak on nonlinear data |
KSSC [31] | nonlinear mapping capability | performance sensitive to the kernel function |
NSC [39] | nonlinear mapping capability | performance sensitive to hyperparameters |
DSC [40] | strong capabilities for diverse data | high computational cost |
Notation | Description |
data matrix | |
self-representation matrix | |
-norm of a matrix | |
diagonal entries of a matrix | |
-norm of a matrix | |
nuclear norm | |
-norm of a matrix | |
trace of a matrix | |
element-wise product | |
transpose of a vector or a matrix |
2 Preliminaries
In this section, we briefly review SSC [3] and LRR [5]. Then, we present the basic principle of the functional link neural network.
2.1 Review of SSC and LRR
To facilitate our review, we first present some relevant definitions and notations. Let $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{d \times n}$ and $C \in \mathbb{R}^{n \times n}$ denote the data matrix and self-representation matrix, respectively, where $x_i$ is the $i$-th sample of $X$, $d$ denotes the data dimension, and $n$ is the number of samples.
The objective of SSC is to uncover the sparse linear structure of data points. This is achieved by imposing the constraint of sparsity upon the representation matrix, which can be expressed as follows:
$\min_{C}\ \|C\|_{1} \quad \text{s.t.} \quad X = XC,\ \operatorname{diag}(C) = \mathbf{0}$  (1)
where the utilization of the $\ell_1$-norm serves the purpose of pursuing a sparse representation, the vector $\operatorname{diag}(C)$ is constructed from the diagonal entries of $C$, and the constraint $\operatorname{diag}(C) = \mathbf{0}$ is imposed to prevent the emergence of trivial solutions. Solving the optimization problem stated in (1) can be achieved by implementing the Alternating Direction Method of Multipliers (ADMM) [47].
LRR utilizes the inherent property of low rank to capture the global features of data. The formulation of the corresponding objective function is presented as:
$\min_{C, E}\ \|C\|_{*} + \lambda\|E\|_{2,1} \quad \text{s.t.} \quad X = XC + E$  (2)
where the $\ell_{2,1}$-norm is employed to enforce row sparsity on the error matrix $E$, enabling the identification and removal of outliers that diverge from the underlying subspaces, the nuclear norm $\|C\|_{*}$ computes the summation of the singular values of $C$, and $\lambda$ is a balance parameter. A notable distinction between SSC and LRR lies in their methodologies: SSC relies on the $\ell_1$-norm to induce sparsity, whereas LRR utilizes the nuclear norm to encourage a low-rank structure.
2.2 Fundamentals of FLNN
It is seen from the second panel of Figure 1 that the FLNN is a single-layer neural network, comprising a functionally expanded input layer and an accompanying output layer; notably, there is no hidden layer in this configuration. The input layer can be expanded by nonlinear functions [42], such as Chebyshev polynomials, Legendre polynomials, and trigonometric functions, which strengthens the learning capability of the network. For example, each element of an input vector can be enhanced by a second-order trigonometric polynomial expansion. Since the FLNN has a simpler structure than the traditional feed-forward neural network, with no hidden layers, it reduces the computational complexity and the number of parameters that need to be trained.
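To illustrate, the sketch below implements a second-order trigonometric functional expansion in MATLAB. The particular basis {x, sin(πx), cos(πx), sin(2πx), cos(2πx)} is the form commonly used for trigonometric FLNNs and is assumed here for illustration, since the paper's exact expansion formula is not reproduced in this text; with it, a d-dimensional input becomes a 5d-dimensional expanded vector.

```matlab
function Phi = trig_expand(X)
% Second-order trigonometric functional expansion of each input element.
% X is d-by-n (one sample per column); Phi is 5d-by-n.
% The basis {x, sin(pi*x), cos(pi*x), sin(2*pi*x), cos(2*pi*x)} is assumed here.
Phi = [X;
       sin(pi*X); cos(pi*X);
       sin(2*pi*X); cos(2*pi*X)];
end
```

For example, `Phi = trig_expand(rand(10, 5));` returns a 50-by-5 matrix, i.e., each 10-dimensional sample is mapped to a 50-dimensional expanded vector.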
3 FLNN subspace clustering
This section first introduces the formulation of our novel model. Then, we present the procedures for optimization. Finally, the computational complexity of FLNNSC is analyzed. The framework of our proposed FLNNSC method is shown in Figure 1.
Figure 1: Framework of the proposed FLNNSC method.
3.1 Model formulation
As described above, we denote $X$ and $C$ as the data matrix and self-representation matrix, respectively. Each input sample is expanded by a nonlinear function and then passed through a layer of multiple neurons. With an activation function, the output is provided by
(3) |
where represents the output vector with being the expanded dimension via functional expansion, denotes the activation function, is the parameter matrix to be trained, which plays a pivotal role in generating outputs that closely approximate the actual results, and is the input vector after functional expansion. In this paper, we select a typical expansion function, namely the trigonometric polynomial. The choice of trigonometric polynomials is motivated by their inherent simplicity and computational efficiency. Moreover, they provide orthonormal bases, a feature that greatly simplifies the calculation of expansion coefficients, thereby enhancing the overall computational performance. Specifically, we employ a second-order expansion because it achieves a balance between performance and computational complexity. While higher-order expansions could potentially be used, they would significantly increase the computational complexity; we find that a second-order expansion is sufficient to achieve good performance while maintaining an acceptable level of complexity. Consequently, the dimension of the transformed data is determined by the expansion order.
By stacking each , the output matrix of the output layer is defined as
(4) |
Using the output matrix , our approach conducts subspace clustering in the nonlinear domain.
The objective function of FLNNSC is comprised of three components, i.e.,
(5) |
where the first term serves to learn the self-representation matrix, the second is used to impose the grouping effect, and the third is employed to avoid model over-fitting.
Specifically, we define as
(6) |
The second component is defined as
(7) |
where is a balance parameter, and the Laplacian matrix is computed from a similarity matrix measuring the data similarity and a diagonal degree matrix whose diagonal entries are the corresponding row sums of the similarity matrix. A prevalent approach to construct the similarity matrix is to use the $k$ nearest neighbor ($k$-nn) graph with a heat kernel or 0-1 weights [22]. Generally, the 0-1 weighted $k$-nn graph is most commonly used due to its simplicity and desirable performance. Typically, a default value of $k$ is set to 4 [22], and we follow this setting in our experiments. The third component is defined as
(8) |
where is a regularization parameter.
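To make the graph construction described above concrete, the following is a minimal MATLAB sketch of a 0-1 weighted $k$-nn graph and the resulting Laplacian. The symmetrization step and all variable names are illustrative choices, not the paper's exact implementation.

```matlab
function Lap = knn_laplacian(X, k)
% 0-1 weighted k-nearest-neighbour graph Laplacian for the grouping-effect
% regularizer (k = 4 by default, following [22]).  X is d-by-n (one sample
% per column).  Illustrative sketch; names are not those of the paper.
n  = size(X, 2);
sq = sum(X.^2, 1);
D2 = sq' + sq - 2*(X'*X);                 % pairwise squared Euclidean distances
D2(1:n+1:end) = inf;                      % exclude self-matches
[~, order] = sort(D2, 2);
S = zeros(n);
for i = 1:n
    S(i, order(i, 1:k)) = 1;              % 0-1 weights on the k nearest neighbours
end
S = max(S, S');                           % symmetrize the graph
Lap = diag(sum(S, 2)) - S;                % Laplacian: degree matrix minus similarity
end
```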
3.2 Optimization
In this section, we comprehensively detail the process of addressing the optimization problem specified in equation (9). Note that the optimization problem depends on two variables, and . In the subsequent derivation, we update and iteratively.
1) Update : To update , we keep and fixed and eliminate the irrelevant term, and arrive at
(10) |
A direct method to tackle the optimization problem stated in (10) is by employing the gradient descent algorithm. Upon calculating the derivative of the aforementioned objective function with respect to , we obtain
(11) |
where represents the derivative of the activation function . As a result, the parameter matrix is updated as
(12) |
where is the learning rate.
2) Update : To update , we keep and constant and ignore the irrelevant term, and arrive at
(13) |
Additionally, (13) can be written into an equivalent form
(14) |
Upon setting the derivative of (14) with respect to to zero, the result is as follows.
(15) |
It is noted that (15) is a continuous Lyapunov equation, which can be solved by calling the “lyap” function in MATLAB.
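As an illustration of this step, the snippet below solves a Sylvester/Lyapunov-type condition of the form A C + C B = Q with MATLAB's lyap, which solves A X + X B + Q0 = 0. The matrices here are random stand-ins, not the actual coefficient matrices of (15).

```matlab
% Illustrative solve of a Sylvester/Lyapunov-type condition A*C + C*B = Q.
% A, B, Q are stand-ins for matrices that would be assembled from the mapped
% data, the Laplacian and the balance parameter.
n = 50;
A = rand(n); A = A*A' + eye(n);          % make the stand-ins well conditioned
B = rand(n); B = B*B' + eye(n);
Q = rand(n);
C = lyap(A, B, -Q);                      % lyap solves A*X + X*B + (-Q) = 0
norm(A*C + C*B - Q, 'fro')               % residual check (should be close to 0)
```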
We execute iterative updates of and until the objective function converges to a stable state. Subsequently, we obtain the self-representation matrix and proceed with the construction of the graph as outlined in [22]. Finally, we apply spectral clustering on the resulting graph. A comprehensive description of the FLNNSC approach is presented in Table 3.
Input: |
The data matrix |
The parameters and |
Output: |
The neural network |
The self-representation matrix |
Initialize , and |
Compute the Laplacian matrix |
while not converged do |
for do |
randomly select a sample and let ; |
compute , update , compute ; |
end |
update |
end |
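For readers who prefer code, the listing below is a minimal MATLAB sketch of this alternating procedure. It assumes a least-squares self-expression term on the mapped samples, the Laplacian-based grouping-effect term, a squared-Frobenius weight penalty, a tanh activation, and full-batch (rather than per-sample) gradient steps; all variable names, constants, and the gradient expression are illustrative reconstructions rather than the paper's exact formulas, and the sketch reuses the trig_expand and knn_laplacian helpers given earlier.

```matlab
% Hedged sketch of the FLNNSC alternating updates (assumed objective:
%   0.5*||H - H*C||_F^2 + 0.5*alpha*trace(C*Lap*C') + 0.5*beta*||W||_F^2,
% with H = tanh(W*Phi)).  All names and constants are illustrative.
X     = rand(20, 100);                    % stand-in data: d = 20, n = 100
alpha = 0.1;  beta = 0.01;  eta = 1e-3;   % balance, regularization, learning rate
k     = 4;                                % neighbourhood size for the 0-1 graph

Phi = trig_expand(X);                     % functional expansion (sketch above)
Lap = knn_laplacian(X, k);                % grouping-effect Laplacian (sketch above)
[De, n] = size(Phi);
m  = 20;                                  % number of output neurons (illustrative)
W  = 0.01*randn(m, De);
C  = zeros(n);

for it = 1:50
    % --- update W by a gradient step (C fixed) ---
    H  = tanh(W*Phi);
    E  = H - H*C;                             % self-expression residual
    dH = E*(eye(n) - C)';                     % gradient of the least-squares term w.r.t. H
    gW = (dH .* (1 - H.^2))*Phi' + beta*W;    % chain rule through tanh, plus weight penalty
    W  = W - eta*gW;

    % --- update C in closed form (W fixed) ---
    H = tanh(W*Phi);
    A = H'*H + 1e-6*eye(n);                   % tiny ridge keeps the stand-in solvable
    C = lyap(A, alpha*Lap, -(H'*H));          % solves A*C + C*(alpha*Lap) = H'*H
end
```

After convergence, the self-representation matrix is used to build the affinity graph as in [22], and spectral clustering yields the final assignments.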
3.3 Computational complexity analysis
To initiate the convergence loop, we first compute the Laplacian matrix in time using the formula . Within the loop, we compute , update , and compute for all data points at each iteration. Given that a second-order trigonometric polynomial expansion for is used, the dimensionality after the expansion is , and the complexity of computing is . The update of involves two terms: ( and , with complexities of and , respectively. The complexity of computing is . For data points, the overall complexity within the loop is . The update of is based on solving (15) by invoking the “lyap” function. This requires a complexity of . Assuming iterations, the final complexity corresponding to the convergence loop is . When the condition is satisfied, the model complexity can be approximated as .
Upon reviewing the relevant literature, it becomes clear that several existing methods, including SSC, LRR, LRSC, and LSR, primarily differ in the norm constraint imposed on the regularization term. These methods share a similar structure of objective function, which leads to an approximate computational complexity of when . However, in comparing actual running time, SSC and LRR generally take longer than LSR. This is because problems induced by sparse and nuclear norms typically require more computational resources. From the corresponding literature of SMR, we know that its complexity is also . The computational cost of NSC is heavily dependent on the choice of feed-forward neural networks, with more layers resulting in higher costs. The KSSC method actually has a slightly higher complexity than SSC due to the additional kernel mapping. The BDR method, which introduces a block diagonal regularization, has better learning efficiency than SSC or LRR in some data scenarios. Despite this, both KSSC and BDR can be considered as having a complexity of from a qualitative perspective.
Therefore, we can qualitatively argue that many subspace clustering methods, including our proposed method, have a complexity of . However, it is worth noting that they exhibit discrepancies in their actual execution times.
4 Convex combination subspace clustering
Subspace clustering datasets often exhibit both linear and nonlinear data structures. In order to exploit the full characteristics of the data, in this section we present a CCSC method that integrates linear and nonlinear representations. In Section 3, we have presented the model formulation and optimization procedures in detail, accompanied by the essential concepts and definitions. Therefore, we directly formulate a novel objective function, given by
(16) |
where denotes the combination parameter, and represent the self-representation matrices corresponding to nonlinear and linear structures, respectively. The optimization problem in (16) depends on , and . In the following derivation, we update , and iteratively.
1) Update : To update , we maintain and as fixed variables while eliminating the irrelevant terms, and arrive at
(17) |
It is noted that (17) closely resembles (10) except for the extra combination parameter. By following a similar derivation process, we easily obtain the update for as follows:
(18) |
2) Update : To update , we maintain and as fixed variables while eliminating the irrelevant terms, leading to
(19) |
The optimization in (19) is fundamentally analogous to (15). Consequently, solving (19) yields a result that is equivalent to (15), such that
(20) |
3) Update : To update , we keep the other variables fixed while ignoring the irrelevant terms, which results in
(21) |
Solving (21) yields
(22) |
By invoking the “lyap” function in MATLAB, we obtain matrices and . The final self-representation matrix, which comprehensively considers both linear and nonlinear features, can then be expressed as
(23) |
Remark 1. The implementation procedures of the proposed CCSC method can be referenced from Table 3, with the exception that the updates for , , and need to be replaced with their corresponding updated versions. From (16), the CCSC method can effectively capture diverse data features by adjusting the combination parameter . A large value of focuses on exploiting the nonlinear data structure while reducing exploration of the linear data structure. Conversely, a small value of prioritizes exploiting the linear data structure while minimizing exploration of the nonlinear data structure. In particular, when , the CCSC method simplifies to the FLNNSC method. Thus, the CCSC method can be regarded as a generalization of FLNNSC.
5 Experiments
In this section, we conduct extensive experiments on diverse benchmark datasets to assess the performance of the FLNNSC and CCSC methods. This evaluation is conducted in comparison with several baseline methods. Experiments are carried out on a computer equipped with a 12th Generation Intel(R) Core(TM) i5-12490F CPU. MATLAB R2023a is employed as the primary software tool.
5.1 Experimental settings
1) Datasets: We consider four publicly available datasets: Extended Yale B [48], USPS [49], COIL20, and ORL [9]. We use PCA to perform dimensionality reduction on the raw datasets, reducing the data to a default dimensionality. Since the determination of the number of principal components is influenced by the number of clusters, we set the number of principal components (the default dimensionality) accordingly. A brief description of each dataset is provided below:
1) Extended Yale B: Extended Yale B is a widely known and challenging face recognition dataset extensively utilized in computer vision research. It comprises 2414 frontal face images categorized into 38 subjects, with each subject containing approximately 64 images taken under different illumination conditions.
2) USPS: USPS is a popular handwritten digit dataset widely used for handwriting recognition tasks. It consists of 9298 handwritten digit images covering ten digits (0–9), each image sized 16×16 pixels, with 256 gray levels per pixel.
3) COIL20: COIL20 has images of 20 different objects observed from various angles. Each object has 72 gray scale images, resulting in a total of 1440 images in the dataset.
4) ORL: ORL comprises 400 images of 40 distinct individuals. These images, taken at different times, showcase varying lighting conditions, facial expressions (open/closed eyes, smiling/not smiling), and facial details (with/without glasses).
2) Compared methods: We compare FLNNSC and CCSC with several baselines, including SSC [3], LRR [5], LRSC [50], NSC [39], SMR [22], KSSC [31], LSR1 and LSR2 [20], Kernel Truncated Regression Representation (KTRR) [34], BDR-B and BDR-Z [9].
3) Evaluation metrics: We evaluate the clustering performance by clustering accuracy (CA), normalized mutual information (NMI), adjusted Rand index (ARI), and F1 score; a sketch of how CA can be computed is given after Table 4. All methods are performed 20 times, and the resulting mean values of the metrics are recorded.
4) Parameter settings: For a fair comparison, we use the code (if available as open source) provided on the respective open-source platform for existing subspace clustering algorithms. For all competing algorithms, we carefully adjust the parameters to achieve optimal performance or adopt the recommended parameters as stated in the corresponding papers. For all methods, we tune the parameters in the range , and then select the parameter with the best performance. The optimal parameter selections of the various algorithms are presented in Table 4.
Datasets | SSC | LRR | LRSC | NSC | SMR | KSSC | LSR1 | LSR2 | KTRR | BDR-B | BDR-Z | FLNNSC |
Extended Yale B | ||||||||||||
USPS | ||||||||||||
COIL20 | ||||||||||||
ORL | ||||||||||||
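As referenced in the evaluation metrics above, the following is a minimal MATLAB sketch of how the CA metric can be computed by optimally matching predicted clusters to ground-truth classes; the function name and the use of matchpairs are illustrative choices, not the paper's evaluation code.

```matlab
function acc = clustering_accuracy(trueLab, predLab)
% Clustering accuracy (CA): best one-to-one matching between predicted
% clusters and ground-truth classes, solved as a linear assignment problem.
% Labels are assumed to be positive integers (1, 2, ...).
k = max([trueLab(:); predLab(:)]);
Cnt = zeros(k, k);                          % Cnt(i,j): #points in cluster i with true class j
for t = 1:numel(trueLab)
    Cnt(predLab(t), trueLab(t)) = Cnt(predLab(t), trueLab(t)) + 1;
end
M = matchpairs(-Cnt, 0);                    % maximize matched counts (minimize -Cnt)
map = zeros(k, 1);
map(M(:, 1)) = M(:, 2);                     % predicted cluster -> matched true class
acc = mean(map(predLab(:)) == trueLab(:));
end
```

NMI, ARI, and the F1 score can be derived from the same contingency table Cnt.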
5.2 Evaluation of clustering performance
In the following, we will conduct extensive experiments to validate our proposed model, including clustering performance comparison, affinity graph analysis, parameter sensitivity and convergence analysis, as well as execution time analysis.
5.2.1 Performance comparison
Datasets | Metrics | SSC | LRR | LRSC | NSC | SMR | KSSC | LSR1 | LSR2 | KTRR | BDR-B | BDR-Z | FLNNSC | CCSC |
10 subjects | CA | 82.66 | 88.02 | 91.34 | 81.72 | 88.95 | 85.41 | 82.73 | 82.05 | 93.74 | 94.30 | 90.15 | 95.16 | 96.88 |
NMI | 85.08 | 83.61 | 84.64 | 82.34 | 83.90 | 79.03 | 82.43 | 82.32 | 89.31 | 90.95 | 85.43 | 90.29 | 94.20 | |
ARI | 77.51 | 77.87 | 80.56 | 72.69 | 78.63 | 73.34 | 74.11 | 73.55 | 85.57 | 86.47 | 75.60 | 89.28 | 92.89 | |
F1 | 79.86 | 80.11 | 82.51 | 75.48 | 80.78 | 76.06 | 76.75 | 76.25 | 87.02 | 87.83 | 78.12 | 90.34 | 93.60 | |
15 subjects | CA | 76.71 | 81.21 | 74.50 | 74.14 | 81.18 | 78.90 | 74.40 | 84.10 | 80.00 | 92.14 | 84.48 | 93.71 | 96.08 |
NMI | 84.17 | 87.86 | 71.66 | 77.17 | 87.70 | 79.14 | 79.00 | 83.65 | 85.80 | 89.89 | 82.92 | 90.97 | 93.91 | |
ARI | 73.27 | 77.98 | 48.99 | 62.88 | 77.66 | 65.19 | 65.79 | 73.18 | 74.34 | 80.14 | 62.31 | 84.71 | 90.89 | |
F1 | 75.17 | 79.53 | 52.80 | 65.41 | 79.24 | 67.72 | 68.14 | 75.01 | 76.12 | 81.85 | 65.12 | 85.74 | 91.50 | |
20 subjects | CA | 74.33 | 84.71 | 70.44 | 75.31 | 85.54 | 74.68 | 74.77 | 80.91 | 90.02 | 84.90 | 82.09 | 90.49 | 90.49 |
NMI | 83.67 | 88.16 | 72.77 | 77.50 | 88.54 | 81.05 | 78.84 | 83.78 | 91.07 | 83.02 | 82.91 | 92.15 | 92.15 | |
ARI | 68.71 | 78.11 | 47.92 | 60.25 | 78.97 | 62.69 | 63.62 | 72.28 | 84.35 | 68.51 | 57.56 | 81.49 | 81.49 | |
F1 | 70.44 | 79.25 | 50.83 | 62.33 | 80.06 | 64.84 | 65.50 | 73.69 | 85.13 | 70.18 | 57.52 | 82.49 | 82.49 | |
Datasets | Metrics | SSC | LRR | LRSC | NSC | SMR | KSSC | LSR1 | LSR2 | KTRR | BDR-B | BDR-Z | FLNNSC | CCSC |
USPS | CA | 84.52 | 86.89 | 77.10 | 84.74 | 88.37 | 82.26 | 82.54 | 83.27 | 84.96 | 89.29 | 89.52 | 99.70 | 99.70 |
NMI | 77.35 | 79.38 | 78.29 | 77.12 | 80.02 | 80.98 | 71.92 | 72.53 | 82.75 | 81.34 | 82.26 | 99.27 | 99.27 | |
ARI | 69.75 | 73.66 | 69.11 | 70.37 | 76.16 | 72.88 | 65.76 | 66.98 | 76.89 | 76.87 | 77.14 | 99.33 | 99.33 | |
F1 | 72.81 | 76.32 | 72.36 | 73.34 | 78.54 | 75.77 | 69.20 | 70.29 | 79.26 | 79.20 | 79.46 | 99.40 | 99.40 | |
COIL20 | CA | 73.92 | 75.92 | 68.05 | 71.83 | 69.69 | 77.36 | 67.62 | 68.04 | 84.38 | 78.13 | 73.51 | 87.29 | 87.29 |
NMI | 87.79 | 84.98 | 76.31 | 81.64 | 77.27 | 94.03 | 78.08 | 78.15 | 93.25 | 87.92 | 81.37 | 92.80 | 92.80 | |
ARI | 65.90 | 68.80 | 59.50 | 65.73 | 58.05 | 77.87 | 61.75 | 62.15 | 80.20 | 70.12 | 64.73 | 81.90 | 81.90 | |
F1 | 67.87 | 70.47 | 61.52 | 67.46 | 60.30 | 79.11 | 63.68 | 64.06 | 81.29 | 71.76 | 66.56 | 82.85 | 82.85 | |
ORL | CA | 67.00 | 71.25 | 72.75 | 77.50 | 75.50 | 74.75 | 76.50 | 72.75 | 80.00 | 80.25 | 79.50 | 82.25 | 82.25 |
NMI | 85.02 | 85.33 | 85.35 | 88.96 | 90.19 | 87.60 | 88.20 | 86.12 | 91.93 | 90.06 | 89.95 | 91.43 | 91.43 | |
ARI | 56.13 | 61.25 | 62.34 | 68.80 | 69.16 | 65.33 | 66.37 | 62.19 | 74.63 | 70.41 | 71.97 | 74.92 | 74.92 | |
F1 | 57.31 | 62.15 | 63.22 | 68.54 | 69.93 | 66.18 | 67.20 | 63.11 | 75.22 | 71.13 | 72.64 | 75.52 | 75.52 | |
1) Performance on the Extended Yale B: We begin by comparing the proposed algorithms with existing subspace clustering methods on the Extended Yale B dataset for 10 subjects, 15 subjects, and 20 subjects, as shown in Table 5. To highlight significant results, we use bold font to indicate the best performance, and underline font for the second-best result. Notably, for our proposed FLNNSC and CCSC algorithms, the selection of and remains consistent. However, CCSC introduces an additional balance parameter, , which requires adjustment. Specifically, the optimal values for in CCSC are fixed at for 10 subjects and for 15 subjects.
It is seen that two variants based on BDR, namely BDR-B and BDR-Z, outperform other existing subspace clustering methods in performance. This superior performance can be attributed to their direct enforcement of a block diagonal structure, which theoretically assures desirable clustering results. Meanwhile, the SMR method, which incorporates a regularization for the grouping effect, also demonstrates admirable performance. Furthermore, we notice that KSSC outperforms SSC, which verifies the efficacy of kernel mapping in effectively exploring the nonlinear structure among data.
Across all experiments with different subjects, the proposed FLNNSC and CCSC algorithms consistently achieve higher clustering accuracy than all other compared subspace clustering methods. This can be attributed to the fact that both of our proposed methods incorporate considerations for nonlinear transformation and the grouping effect during the modeling process. As a result, they theoretically ensure optimal clustering performance when dealing with practical datasets. Furthermore, CCSC exhibits superior performance compared to FLNNSC, with an accuracy improvement of and for 10 subjects and 15 subjects, respectively. This enhanced performance is attributable to the combination parameter , which effectively balances the influence of linear and nonlinear representations. When considering 20 subjects, both CCSC and FLNNSC achieve the same clustering accuracy, indicating that the optimal performance of CCSC is attained when , thus demonstrating that CCSC reduces to the FLNNSC algorithm in this case. Regarding other metrics, such as NMI, ARI and F1 score, our proposed algorithms consistently outperform other subspace clustering methods, except that the ARI and F1-score metrics are inferior to those of KTRR in the case of 20 subjects.
2) Performance on the USPS, COIL20 and ORL: We subsequently compare our proposed algorithms with existing subspace clustering methods on the USPS, COIL20 and ORL datasets, as shown in Table 6. The experimental results demonstrate that our proposed algorithms consistently outperform all other compared subspace clustering methods across all evaluation metrics on the USPS dataset and on most metrics on the COIL20 dataset. Furthermore, they outperform the other algorithms on nearly all metrics on the ORL dataset. Particularly noteworthy is the performance of CCSC, which attains the same level of clustering accuracy as FLNNSC, indicating that CCSC fully exploits the nonlinear representations of the data on these datasets.
Figure 2: Evolution curves of the clustering metrics of CCSC with respect to the combination parameter.
For the purpose of exploring the influence of the combination parameter on CCSC, we plot the evolutionary curves of CCSC with respect to , while maintaining and fixed at the optimal parameters of FLNNSC. The plots are shown in Figure 2. From Figure 2(a), we observe that CCSC exhibits relatively low clustering accuracy for small values, where linear representations predominantly contribute to the results. As increases, the performance improves, reaching its peak when , where the nonlinear representations are fully exploited without any influence from linear representations. In this particular case where , CCSC actually reduces to FLNNSC. This implies that, on the USPS dataset, the optimal performance of CCSC aligns precisely with that of FLNNSC. This result further emphasizes the significance of investigating the nonlinear structure within data. Moreover, a similar observation can be made from Figure 2(b), except that NMI experiences a slight decline from to . Overall, focusing more on uncovering the nonlinear characteristics of data is beneficial to subspace clustering.
5.2.2 Affinity graphs analysis
Figure 3: Affinity graphs obtained by the compared subspace clustering algorithms.
To gain a visual understanding of the advantages of our proposed FLNNSC, we plot the affinity graphs of all compared subspace clustering algorithms, as depicted in Figure 3. The figures reveal that many of the compared subspace clustering methods exhibit issues with unclear block-diagonal structures or noisy off-block-diagonal components. For instance, Figures 3(a), (d), (e), (g), and (h) clearly show the presence of noisy off-block-diagonal components, while Figure 3(b) illustrates an unclear block-diagonal structure. In contrast, Figure 3(l) highlights the distinct block-diagonal structure of our proposed FLNNSC algorithm, along with less noisy off-block-diagonal components, which contributes to its eventually desirable clustering performance.
5.2.3 Parameter sensitivity and convergence analysis
1) Parameter sensitivity analysis: In order to investigate the sensitivity of parameters in the FLNNSC algorithm, we analyze the influences of the parameters and . By varying these parameters over a range of values, we examine the variability of FLNNSC. Specifically, we set and to vary from to . The experimental results are depicted in Figure 4.
On the Extended Yale B dataset, we observe that the parameter , when chosen within the range , exhibits little influence on the algorithm’s performance. However, when is set to be larger than 1, the clustering accuracy experiences significant variations. In contrast, FLNNSC shows little sensitivity to the parameter when its value is small. On the USPS dataset, the selection of has no obvious effect on CA and NMI. However, FLNNSC demonstrates sensitivity to large values of . Similarly, on the COIL20 dataset, we find that FLNNSC is not sensitive to the selection of , but it is sensitive to large values of .
Based on our observations, we recommend choosing small values for to achieve better clustering performance in FLNNSC.
2) Convergence analysis: We conduct an experimental analysis to examine the convergence behavior of the proposed algorithm, utilizing the metric , as shown in Figure 5. It is observed that FLNNSC achieves a steady state within 10 iterations. Specifically, on the Extended Yale B and COIL20 datasets, it reaches this state in fewer than 5 iterations. These results demonstrate that the proposed FLNNSC algorithm exhibits rapid convergence across all three datasets, reaching a steady state in a remarkably short time. This rapid convergence not only significantly reduces time consumption but also caters to the real-time requirements of practical scenarios, especially when processing large-scale data or dealing with complex tasks.
Figures 4 and 5: Parameter sensitivity results and convergence curves of FLNNSC.
5.2.4 Execution time analysis
In order to further validate the efficiency of our method, we choose two benchmark methods for comparison: NSC and BDR. We select NSC for comparison as our method is developed in response to the limitations inherent in this particular approach. On the other hand, BDR is selected as it is a well-established baseline method known for its impressive performance across a variety of datasets. Figure 6 shows the comparison results of execution time on different datasets. It is seen that as the number of clusters increases, our method consistently outperforms both NSC and BDR in terms of execution time, thereby validating the efficiency of our proposed method. It is expected that FLNNSC will exhibit promising performance in terms of execution efficiency in real-world applications.
Figure 6: Execution time comparison on different datasets.
5.3 Friedman test
We employ the Friedman test [51] to validate the superiority of the proposed model. Specifically, in terms of datasets, we select Extended Yale B with 10 subjects, USPS, COIL20, and ORL. Regarding methods, we choose SSC, LRR, LRSC, NSC, SMR, KSSC, LSR1, KTRR, BDR-B, and our FLNNSC. For each dataset, we sort the models in ascending order of their accuracy and use the index of the sorted values as the rank of the corresponding model; the resulting rank matrix is shown in Table 7.
Datasets | SSC | LRR | LRSC | NSC | SMR | KSSC | LSR1 | KTRR | BDR-B | FLNNSC |
Extended Yale B | 2 | 5 | 7 | 1 | 6 | 4 | 3 | 8 | 9 | 10 |
USPS | 4 | 7 | 1 | 5 | 8 | 2 | 3 | 6 | 9 | 10 |
COIL20 | 5 | 6 | 2 | 4 | 3 | 7 | 1 | 9 | 8 | 10 |
ORL | 1 | 2 | 3 | 7 | 5 | 4 | 6 | 8 | 9 | 10 |
After obtaining the rank matrix, we calculate the mean rank of each method. Let $r_{ij}$ be the rank of the $j$-th method on the $i$-th dataset; then the mean rank of the $j$-th method is:
$R_{j} = \frac{1}{N}\sum_{i=1}^{N} r_{ij}$  (24)
where $N$ denotes the number of datasets. Assuming that the rank of each method follows a normal distribution, the corresponding chi-square statistic is [52]:
$\chi^{2} = \frac{12N}{k(k+1)}\left[\sum_{j=1}^{k} R_{j}^{2} - \frac{k(k+1)^{2}}{4}\right]$  (25)
where $k$ is the number of methods. Finally, we can use the chi-square distribution to compute the $p$-value. By applying the friedman function in MATLAB, we calculate the $p$-value. As this value is lower than the significance level, we reject the null hypothesis, which suggests that there is a significant difference in the performance of the various methods. To visually represent these differences, we plot a bar chart of the mean ranks, as shown in Fig. 7. It is evident from the chart that KTRR, BDR-B, and FLNNSC significantly outperform the other methods in terms of mean rank (calculated using equation (24)). Notably, our method, FLNNSC, achieves a higher rank value than both KTRR and BDR-B.
Figure 7: Bar chart of the mean ranks of the compared methods.
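The sketch below reproduces this test from the rank matrix in Table 7 (rows are datasets, columns are methods in the order of the table). The manual statistic follows the standard Friedman formulas of (24)-(25); MATLAB's built-in friedman function (Statistics and Machine Learning Toolbox) offers an equivalent route.

```matlab
% Friedman test over the rank matrix of Table 7.
R = [ 2  5  7  1  6  4  3  8  9 10;    % Extended Yale B (10 subjects)
      4  7  1  5  8  2  3  6  9 10;    % USPS
      5  6  2  4  3  7  1  9  8 10;    % COIL20
      1  2  3  7  5  4  6  8  9 10];   % ORL
meanRank = mean(R, 1);                 % mean rank per method, cf. (24)
[N, k] = size(R);
chi2 = 12*N/(k*(k+1)) * (sum(meanRank.^2) - k*(k+1)^2/4);   % Friedman statistic, cf. (25)
p = 1 - chi2cdf(chi2, k - 1);          % p-value from the chi-square distribution
% Equivalent route: apply MATLAB's built-in test directly to the rank matrix.
p2 = friedman(R, 1, 'off');
```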
5.4 Discussion
In subspace clustering, when the data exhibits evident nonlinear characteristics, the combination method, i.e., CCSC, shows clustering performance comparable to FLNNSC under the optimal parameter settings. If one selects CCSC, it is crucial to carefully tune the extra balance parameter, since its selection impacts the performance. Therefore, in such cases, we recommend FLNNSC. If the data also exhibits significant linear characteristics that cannot be ignored, CCSC outperforms FLNNSC due to its ability to dynamically adjust the balance between linear and nonlinear representations. In this case, we recommend CCSC, with the tuning of the combination parameter over the range of (0.5, 1).
Moreover, the proposed FLNNSC algorithm, which uses a functional link neural network for nonlinear representation, has higher computational efficiency than traditional neural network-based subspace clustering methods. Notably, to enhance the capacity for capturing the data's nonlinearity, one can employ a higher-order functional expansion in FLNN. This may lead to performance improvements, but it introduces a trade-off by increasing the computational complexity.
6 Conclusions
We proposed a novel nonlinear subspace clustering method named FLNNSC in this paper. By utilizing a single-layer FLNN with remarkable nonlinearity approximation capability, FLNNSC is able to exploit the nonlinear features of data, resulting in enhanced clustering performance. Moreover, the proposed FLNNSC algorithm was regularized to enable the grouping effect. To exploit both linear and nonlinear characteristics, we further introduced the CCSC algorithm, which dynamically adjusts the balance between linear and nonlinear representations via the combination parameter. Experiments on benchmark datasets have shown that our methods outperform several advanced subspace clustering methods. Additionally, the affinity graph results have highlighted the grouping effect of FLNNSC. We conducted a parameter sensitivity analysis, yielding valuable suggestions for parameter selection, and empirically demonstrated the rapid convergence behavior of FLNNSC. It should be noted that in the CCSC method, the combination parameter, which balances linear and nonlinear representations, plays a crucial role. However, selecting an optimal value still poses a challenge, as an ideal choice would depend on the underlying data features, which are typically unavailable in practice. In our future work, we will investigate a dynamic scheme to adaptively adjust this parameter.
Acknowledgment
We sincerely thank Professor Chunguang Li from Beijing University of Posts and Telecommunications for his expert insights that significantly enhanced the quality of this academic work.
The work of Long Shi was partially supported by the National Natural Science Foundation of China under Grant 62201475 and Natural Science Foundation of Sichuan Province under Grant 2024NSFSC1436. The work of Badong Chen was supported by the National Natural Science Foundation of China under Grants U21A20485 and 61976175. The work of Yu Zhao was supported by the Sichuan Science and Technology Program under Grant No. 2023NSFSC0032.
References
- [1] R. Vidal, Subspace clustering, IEEE Signal Process. Mag. 28 (2) (2011) 52–68.
- [2] J. Tang, Q. Yi, S. Fu, Y. Tian, Incomplete multi-view learning: Review, analysis, and prospects, Applied Soft Computing (2024) 111278.
- [3] E. Elhamifar, R. Vidal, Sparse subspace clustering: Algorithm, theory, and applications, IEEE Trans. Pattern Anal. Mach. Intell. 35 (11) (2013) 2765–2781.
- [4] L. Cao, L. Shi, J. Wang, Z. Yang, B. Chen, Robust subspace clustering by logarithmic hyperbolic cosine function, IEEE Signal Process. Lett. 30 (2023) 508–512.
- [5] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, Y. Ma, Robust recovery of subspace structures by low-rank representation, IEEE Trans. Pattern Anal. Mach. Intell. 35 (1) (2012) 171–184.
- [6] P. Favaro, R. Vidal, A. Ravichandran, A closed form solution to robust subspace estimation and clustering, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 1801–1807.
- [7] G. Liu, Z. Lin, Y. Yu, Robust subspace segmentation by low-rank representation, in: Proceedings of the 27th International Conference on Machine Learning, 2010, pp. 663–670.
- [8] Y. Qin, X. Zhang, L. Shen, G. Feng, Maximum block energy guided robust subspace clustering, IEEE Trans. Pattern Anal. Mach. Intell. 45 (2) (2022) 2652–2659.
- [9] C. Lu, J. Feng, Z. Lin, T. Mei, S. Yan, Subspace clustering by block diagonal representation, IEEE Trans. Pattern Anal. Mach. Intell. 41 (2) (2018) 487–501.
- [10] V. M. Patel, H. Van Nguyen, R. Vidal, Latent space sparse subspace clustering, in: Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 225–232.
- [11] R. Vidal, Y. Ma, S. Sastry, Generalized principal component analysis (GPCA), IEEE Trans. Pattern Anal. Mach. Intell. 27 (12) (2005) 1945–1959.
- [12] J. Ho, M.-H. Yang, J. Lim, K.-C. Lee, D. Kriegman, Clustering appearances of objects under varying illumination conditions, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2003, pp. 11–18.
- [13] M. E. Tipping, C. M. Bishop, Mixtures of probabilistic principal component analyzers, Neural Comput. 11 (2) (1999) 443–482.
- [14] S. Rao, R. Tron, R. Vidal, Y. Ma, Motion segmentation in the presence of outlying, incomplete, or corrupted trajectories, IEEE Trans. Pattern Anal. Mach. Intell. 32 (10) (2009) 1832–1845.
- [15] C. Li, C. You, R. Vidal, Structured sparse subspace clustering: A joint affinity learning and subspace clustering framework, IEEE Trans. Image Process. 26 (6) (2017) 2988–3001.
- [16] J. Liu, Y. Chen, J. Zhang, Z. Xu, Enhancing low-rank subspace clustering by manifold regularization, IEEE Trans. Image Process. 23 (9) (2014) 4022–4030.
- [17] J. Wang, D. Shi, D. Cheng, Y. Zhang, J. Gao, Lrsr: low-rank-sparse representation for subspace clustering, Neurocomputing 214 (2016) 1026–1037.
- [18] X. Zhu, S. Zhang, Y. Li, J. Zhang, L. Yang, Y. Fang, Low-rank sparse subspace for spectral clustering, IEEE Trans. Knowl. Data Eng. 31 (8) (2018) 1532–1543.
- [19] M. Brbić, I. Kopriva, $\ell_0$-motivated low-rank sparse subspace clustering, IEEE Trans. Cybern. 50 (4) (2018) 1711–1725.
- [20] C. Lu, H. Min, Z. Zhao, L. Zhu, D. Huang, S. Yan, Robust and efficient subspace segmentation via least squares regression, in: Proceedings of the 12th European Conference on Computer Vision, 2012, pp. 347–360.
- [21] C. Lu, J. Feng, Z. Lin, S. Yan, Correlation adaptive subspace segmentation by trace lasso, in: Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 1345–1352.
- [22] H. Hu, Z. Lin, J. Feng, J. Zhou, Smooth representation clustering, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 3834–3841.
- [23] L. Shi, H. Zhao, Y. Zakharov, An improved variable kernel width for maximum correntropy criterion algorithm, IEEE Trans. Circuits Syst. II 67 (7) (2018) 1339–1343.
- [24] L. Shi, L. Shen, B. Chen, An efficient parameter optimization of maximum correntropy criterion, IEEE Signal Process. Lett. 30 (2023) 538–542.
- [25] R. He, Y. Zhang, Z. Sun, Q. Yin, Robust subspace clustering with complex noise, IEEE Trans. Image Process. 24 (11) (2015) 4001–4013.
- [26] L. Guo, X. Zhang, Z. Liu, Q. Wang, J. Zhou, Correntropy metric-based robust low-rank subspace clustering for motion segmentation, Int. J. Mach. Learn. Cybern. 13 (5) (2022) 1425–1440.
- [27] X. Zhang, B. Chen, H. Sun, Z. Liu, Z. Ren, Y. Li, Robust low-rank kernel subspace clustering based on the schatten p-norm and correntropy, IEEE Trans. Knowl. Data Eng. 32 (12) (2019) 2426–2437.
- [28] L. Fu, Z. Chen, Y. Chen, S. Wang, Unified low-rank tensor learning and spectral embedding for multi-view subspace clustering, IEEE Transactions on Multimedia 25 (2022) 4972–4985.
- [29] S. Wang, C. Li, Y. Li, Y. Yuan, G. Wang, Self-supervised information bottleneck for deep multi-view subspace clustering, IEEE Transactions on Image Processing 32 (2023) 1555–1567.
- [30] L. Shi, L. Cao, J. Wang, B. Chen, Enhanced latent multi-view subspace clustering, IEEE Transactions on Circuits and Systems for Video Technology (2024).
- [31] V. M. Patel, R. Vidal, Kernel sparse subspace clustering, in: IEEE International Conference on Image Processing, IEEE, 2014, pp. 2849–2853.
- [32] M. Yin, Y. Guo, J. Gao, Z. He, S. Xie, Kernel sparse subspace clustering on symmetric positive definite manifolds, in: Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2016, pp. 5157–5164.
- [33] C. Yang, Z. Ren, Q. Sun, M. Wu, M. Yin, Y. Sun, Joint correntropy metric weighting and block diagonal regularizer for robust multiple kernel subspace clustering, Inf. Sci. 500 (2019) 48–66.
- [34] L. Zhen, D. Peng, W. Wang, X. Yao, Kernel truncated regression representation for robust subspace clustering, Inf. Sci. 524 (2020) 59–76.
- [35] Z. Wang, J. Liu, Consensus kernel subspace clustering based on coefficient discriminant information, Applied Soft Computing 113 (2021) 107987.
- [36] Z. Kang, L. Wen, W. Chen, Z. Xu, Low-rank kernel learning for graph-based clustering, Knowl.-Based Syst. 163 (2019) 510–517.
- [37] X. Zhang, X. Xue, H. Sun, Z. Liu, L. Guo, X. Guo, Robust multiple kernel subspace clustering with block diagonal representation and low-rank consensus kernel, Knowl.-Based Syst. 227 (2021) 107243.
- [38] M. Sun, S. Wang, P. Zhang, X. Liu, X. Guo, S. Zhou, E. Zhu, Projective multiple kernel subspace clustering, IEEE Trans. Multimedia 24 (2021) 2567–2579.
- [39] W. Zhu, J. Lu, J. Zhou, Nonlinear subspace clustering for image clustering, Pattern Recognit. Lett. 107 (2018) 131–136.
- [40] X. Peng, J. Feng, J. T. Zhou, Y. Lei, S. Yan, Deep subspace clustering, IEEE Trans. Neural Netw. Learn. Syst. 31 (12) (2020) 5509–5521.
- [41] P. Ji, T. Zhang, H. Li, M. Salzmann, I. Reid, Deep subspace clustering networks, in: Advances in Neural Information Processing Systems, 2017, pp. 24–33.
- [42] J. C. Patra, A. C. Kot, Nonlinear dynamic system identification using chebyshev functional link artificial neural networks, IEEE Trans. Syst., Man, Cybern. B 32 (4) (2002) 505–511.
- [43] L. Lu, Y. Yu, X. Yang, W. Wu, Time delay chebyshev functional link artificial neural network, Neurocomputing 329 (2019) 153–164.
- [44] K. Yin, Y. Pu, L. Lu, Combination of fractional flann filters for solving the van der pol-duffing oscillator, Neurocomputing 399 (2020) 183–192.
- [45] T. Balli, R. Palaniappan, Classification of biological signals using linear and nonlinear features, Physiological measurement 31 (7) (2010) 903.
- [46] F. A. Elhaj, N. Salim, A. R. Harris, T. T. Swee, T. Ahmed, Arrhythmia recognition and classification using combined linear and nonlinear features of ecg signals, Computer methods and programs in biomedicine 127 (2016) 52–63.
- [47] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, et al., Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn. 3 (1) (2011) 1–122.
- [48] J. Wang, X. Wang, F. Tian, C. H. Liu, H. Yu, Constrained low-rank representation for robust subspace clustering, IEEE Trans. Cybern. 47 (12) (2016) 4534–4546.
- [49] J. Yang, J. Liang, K. Wang, P. L. Rosin, M.-H. Yang, Subspace clustering via good neighbors, IEEE Trans. Pattern Anal. Mach. Intell. 42 (6) (2019) 1537–1544.
- [50] R. Vidal, P. Favaro, Low rank subspace clustering (LRSC), Pattern Recognit. Lett. 43 (2014) 47–61.
- [51] N. E. Zamri, M. A. Mansor, M. S. M. Kasihmuddin, S. S. Sidik, A. Alway, N. A. Romli, Y. Guo, S. Z. M. Jamaludin, A modified reverse-based analysis logic mining model with weighted random 2 satisfiability logic in discrete hopfield neural network and multi-objective training of modified niched genetic algorithm, Expert Systems with Applications 240 (2024) 122307.
- [52] N. E. Zamri, S. A. Azhar, M. A. Mansor, A. Alway, M. S. M. Kasihmuddin, Weighted random k satisfiability for k= 1, 2 (r2sat) in discrete hopfield neural network, Applied Soft Computing 126 (2022) 109312.