Adaptive Collaborative Correlation Learning-based Semi-Supervised Multi-Label Feature Selection
Abstract
Semi-supervised multi-label feature selection has recently been developed to solve the curse-of-dimensionality problem in high-dimensional multi-label data in which some samples lack labels. Although many efforts have been made, most existing methods use a predefined graph approach to capture the sample similarity or the label correlation. In this manner, the presence of noise and outliers within the original feature space can undermine the reliability of the resulting sample similarity graph. It also fails to precisely depict the label correlation due to the existence of unknown labels. Besides, these methods only consider the discriminative power of the selected features, while neglecting their redundancy. In this paper, we propose an Adaptive Collaborative Correlation lEarning-based Semi-Supervised Multi-label Feature Selection (Access-MFS) method to address these issues. Specifically, a generalized regression model equipped with an extended uncorrelated constraint is introduced to select discriminative yet uncorrelated features while maintaining consistency between the predicted and ground-truth labels of the labeled data. Then, instance correlation and label correlation are integrated into the proposed regression model to adaptively learn both the sample similarity graph and the label similarity graph, which mutually enhance each other and thereby improve feature selection performance. Extensive experimental results demonstrate the superiority of the proposed Access-MFS over other state-of-the-art methods.
Index Terms:
feature selection, semi-supervised multi-label learning, generalized regression model, adaptive similarity graph learning.
1 Introduction
Multi-label data, where each instance is associated with multiple labels, widely exists in real-world applications [1, 2]. For example, a short video can be classified into several categories in the task of video classification [3], a news article may be related to several subjects in text categorization [4], and a painting may be tagged with a number of genres in the image annotation task [5]. In these applications, multi-label data is often represented in a high-dimensional feature space with redundant and noisy features, resulting in the "curse of dimensionality" problem and impairing the performance of subsequent tasks. Consequently, developing an effective method to reduce the dimensionality of multi-label data and thereby enhance downstream task performance has emerged as a pressing issue in practical applications.
Multi-label feature selection (MFS) deals with this issue by choosing a compact subset of discriminative features from the original high-dimensional feature space, thus eliminating redundant and noisy information [6, 7]. In recent years, numerous MFS methods have been developed, which can be broadly categorized into two groups. The first group converts multi-label data into multiple independent single-label datasets and then applies traditional feature selection methods to the decomposed data [8, 9, 10]. Typical feature selection methods for single-label data include the Laplacian Score [9], spectral feature selection [11] and group lasso-based feature selection [12]. These methods usually assess the importance of features based on specific criteria while maintaining the local manifold structure of the data, and subsequently select the top-ranked features. However, they fail to consider the correlations among different labels, which could enhance the performance of feature selection. Instead of decomposing multi-label data into several single-label datasets, the second group of MFS methods directly constructs a model from multi-label data to facilitate feature selection [13, 14, 15, 16]. MIFS (multi-label informed feature selection) is a typical method in the second group, which incorporates multi-label information into a low-dimensional subspace to identify discriminative features [13]. Besides, Zhang et al. proposed using both local and global label correlations to guide the feature selection process [17]. Moreover, Fan et al. developed a collaborative learning framework for multi-label feature selection, which learns the label correlation and the feature correlation simultaneously [16]. Despite the demonstrated efficacy of these methods in feature selection, they share a common underlying assumption, i.e., that all data has been comprehensively labeled. Unfortunately, obtaining large amounts of labeled data is expensive and time-consuming in real-world applications [18]. A more common scenario is that only a portion of the samples are labeled [19, 20]. The aforementioned supervised methods cannot be directly applied in this semi-supervised scenario. To address this issue, Chang et al. proposed a unified semi-supervised multi-label feature selection framework that integrates sparse feature learning with label prediction [21]. Guo et al. developed a multi-label propagation mechanism to learn unknown labels, which was then integrated into an extended linear discriminant analysis for feature selection [22]. Lv et al. combined adaptive graph structure learning and manifold learning to investigate the global structure of data and label correlations, and subsequently embedded them into the feature selection process [23]. Constructing or learning the similarity graph of samples is a promising technique in MFS [24]. However, current semi-supervised multi-label feature selection methods either fail to investigate the sample similarity graph or rely exclusively on predefined graph approaches. Noise and outliers in the original feature space can undermine the reliability of predefined sample similarity graphs. Additionally, these methods describe label correlations in a pre-computed manner, which can lead to inaccuracies and deficiencies, especially with large volumes of unlabeled data.
Existing methods that consider only the correlations between labels or samples overlook the potential for mutual enhancement, which can improve feature selection performance. Furthermore, these methods focus solely on the discriminative capability of selected features while ignoring their redundancy, ultimately degrading the performance of feature selection.
To address these problems, we present a novel MFS method for semi-supervised multi-label data, named Adaptive Collaborative Correlation lEarning-based Semi-Supervised Multi-label Feature Selection (Access-MFS). Specifically, Access-MFS first embeds the feature selection process into a generalized regression model equipped with an extended uncorrelated constraint, which enables the selection of discriminative yet uncorrelated features while preserving consistency between the predicted and ground-truth labels of the labeled data. We then integrate instance correlation and label correlation into the proposed regression model to adaptively learn the sample similarity graph and the label similarity graph. The adaptive collaborative correlation learning module preserves the sample similarity from the high-dimensional to the low-dimensional space, while simultaneously guaranteeing that similar samples are assigned similar labels and that strongly correlated labels lead to consistent predicted labels. It yields more accurate and reliable sample and label similarity graphs than those obtained through predefined graph methods. Finally, an iterative optimization algorithm is developed to solve our model. Extensive experimental results demonstrate that the proposed Access-MFS is superior to state-of-the-art methods.
To sum up, the main contributions of this paper are as follows: (i) To the best of our knowledge, we are the first to introduce adaptive collaborative correlation learning for feature selection on semi-supervised multi-label data. Our method can adaptively learn sample and label similarity graphs directly from the data, rather than relying on predefined graphs. The simultaneous learning of these two similarity graphs is integrated into the feature selection process, where they mutually enhance each other, leading to an improvement in feature selection performance. (ii) We propose a novel model for semi-supervised multi-label feature selection, incorporating a generalized regression module with an extended uncorrelated constraint and an adaptive collaborative correlation learning module. This model effectively selects features that are both discriminative and uncorrelated, while ensuring consistency between predicted labels and existing label data. (iii) We develop an efficient alternative optimization algorithm with guaranteed convergence to solve the proposed model. Comprehensive experiments are conducted on the real-world datasets to verify its superiority over state-of-the-art methods.
The remainder of this paper is organized as follows. Section 2 briefly reviews related work on MFS. We formulate the proposed method Access-MFS in Section 3 and provide an effective solution in Section 4. Section 5 presents an analysis of the proposed algorithm's convergence behavior and time complexity. Section 6 conducts a series of experiments on eight real-world datasets. Finally, we conclude this article in Section 7.

2 Related Work
In this section, we briefly introduce some representative works on MFS. MFS methods can be classified into two categories based on whether they decompose the multi-label data.
In the first category, multi-label data is transformed into multiple single-label datasets, upon which single-label feature selection methods are subsequently applied. Single-label feature selection methods are commonly classified into three groups: filter-based, wrapper-based, and embedded methods. Filter-based methods select informative features based on their importance, as defined by a specific evaluation criterion. The Laplacian Score [9], as a typical filter-based approach, identifies representative features by utilizing the Laplacian Score metric, which quantifies the locality-preserving ability of the features. This type of method does not take into account the efficacy of the selected features on a downstream learning task, resulting in unsatisfactory performance. Wrapper-based methods typically employ a learning algorithm to assess the selected features until the most suitable ones are identified. Laporte et al. introduced a wrapper-based feature selection method that learns the ranking of features using support vector machines with a sparse constraint [25]. The primary limitation of wrapper-based methods lies in their extensive time requirements and limited generalization capabilities. Embedded methods integrate feature selection into the model learning process, offering a middle ground between filter-based and wrapper-based methods. Liu et al. embedded the procedure of feature selection into the locally linear embedding algorithm based on the -norm reconstruction loss [26]. These single-label feature selection methods ignore label correlations, which results in inferior performance.
Unlike the methods in the first category, the second category explores label correlations and constructs the feature selection model directly on multi-label data. MIFS selects discriminative features by factorizing the label matrix to reveal label correlations and diminish the impact of imperfect label information [13]. In GLFS [17], LASSO-based sparse learning is used to select group-preserving, label-specific features, enabling the simultaneous exploration of label-group and instance-group correlations. However, these methods assume that all samples are labeled, which may not hold true in real-world scenarios. More often, only a subset of the samples is labeled. Multi-label feature selection within the semi-supervised learning paradigm has been proposed to address this issue. In LEDA [22], Guo et al. devised a semi-supervised multi-label propagation method to capture information from both labeled and unlabeled data simultaneously. Subsequently, the acquired label information is incorporated into an extended linear discriminant analysis module to select informative features. To explore the local manifold structures of both the feature and label spaces in semi-supervised multi-label data, SMLFS [27] uses a local logistic regression model that incorporates feature graph regularization and label graph regularization. In addition, the -norm is applied to the regression coefficient matrix to select important feature dimensions. In [28], Li et al. proposed a semi-supervised multi-label feature selection method that simultaneously utilizes label and sample correlations. By imposing binary hash constraints on the spectral embedding model, SFS-BLL [29] identifies pseudo-labels for unlabeled data and utilizes a self-weighted sparse regression model to select discriminative features, leveraging both existing labels and learned pseudo-labels.
However, the aforementioned semi-supervised multi-label feature selection methods are confronted with several issues. First, they use a predefined way to describe the sample similarity graph, which is easily influenced by noise and outliers. Second, these methods pre-compute label correlations based on the partially known labels, and thus fail to accurately capture the relationships between labels given the large number of labels that remain unknown in a semi-supervised scenario. Third, existing methods fail to simultaneously consider the correlations between samples and between labels, overlooking how these two correlations can mutually enhance feature selection performance. Last, they only take into account the discriminative capability of the chosen features and overlook their redundancy, which limits the performance of feature selection.
3 Proposed method
3.1 Notations
In this paper, we denote matrices with boldface uppercase letters and vectors with boldface lowercase letters. For an arbitrary matrix , indicates the th row and th column entry of , and denote its th row and th column, respectively. We use to denote the Frobenius norm of . The -norm of is defined as , where indicates the -norm of the th row vector . Let and respectively represent the trace and the transpose of . Besides, is a centering matrix, where is an identity matrix and is a column vector of ones.
Let be a given data matrix with dimensional features and instances, where is the labeled data, is the unlabeled data and . The corresponding label matrix of is denoted by with categories, where and denote the labels of and , respectively. In , indicates that the th instance is labeled with the th class; otherwise it is 0. The label matrix of the unlabeled data is set to . In semi-supervised multi-label feature selection, our aim is to select informative features from the given dataset, the number of which is much smaller than the total number of features. The framework of Access-MFS is shown in Fig. 1.
3.2 Formulation of Access-MFS
In order to learn a mapping function from samples with high-dimensional features to the desired information (such as classification labels or latent features), the least squares regression model has been extensively applied in various machine learning tasks, including classification, regression and dimensionality reduction [30, 31]. Given the input dataset , where and are the data matrix and the corresponding label matrix, respectively, the traditional least squares regression model is defined as follows:
(1)
where is a projection matrix, indicates certain regularization on , and is a regularization parameter. However, the above model cannot be directly applied to semi-supervised multi-label feature selection, since Eq. (1) cannot be solved when only a subset of the labels in the label matrix is known. Furthermore, to select discriminative features and avoid the trivial solution, a column full-rank constraint is typically imposed on in Eq. (1). This approach disregards the redundancy of the selected features. To deal with these two issues, we propose a generalized regression model with an extended uncorrelated constraint for semi-supervised multi-label feature selection, which is formulated as follows:
(2)
where is the predicted label matrix and is an extended uncorrelated constraint. In Eq. (2), is comprised of the predicted labels for the labeled data and for the unlabeled data . We employ the predicted label matrix as an optimization variable in Eq. (2), while simultaneously ensuring that the predicted labels are consistent with the ground-truth labels of . Besides, the -norm is applied to the projection matrix to induce row sparsity, facilitating feature selection. Then, we can select the top features by calculating the norm of each row of the projection matrix and sorting them in descending order.
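As a concrete illustration of this ranking rule, a minimal sketch is given below (it assumes the learned projection matrix is available as a NumPy array; the function and variable names are our own):

```python
import numpy as np

def select_top_features(W, num_selected):
    """Rank features by the l2-norm of the rows of the learned projection
    matrix W (one row per original feature) and return the indices of the
    top `num_selected` features, in descending order of importance."""
    row_norms = np.linalg.norm(W, axis=1)          # norm of each row of W
    return np.argsort(-row_norms)[:num_selected]   # descending order
```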
To select discriminative and uncorrelated features, we propose an extended uncorrelated constraint on , denoted as . Here, is defined as , where the matrix is diagonal, with the th diagonal entry given by ( is a small constant to avoid division by 0). Additionally, represents the Laplacian matrix of the instance similarity matrix , is the degree matrix of , and is a trade-off parameter. The proposed uncorrelated constraint consists of three terms obtained by expanding . The first term promotes the orthonormality of the projected dimensions, which makes the low-dimensional instances uncorrelated. The second term is added to the first to avoid the singularity of . The third term is advantageous in reducing the variance among samples within the same neighborhood under the graph structure. When , the proposed constraint reduces to the traditional uncorrelated constraint. The proposed constraint is beneficial for selecting discriminative features that are also uncorrelated.
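Because the displayed formula did not survive extraction, we note one plausible algebraic form consistent with the three-term description above; the composition is our assumption rather than the paper's verbatim definition, with $\mathbf{H}$ the centering matrix, $\mathbf{D}$ the diagonal matrix just mentioned, $\mathbf{L}_S$ the graph Laplacian, and $\delta,\lambda$ hypothetical trade-off parameters:

$$\mathbf{W}^{\top}\bigl(\mathbf{X}\mathbf{H}\mathbf{X}^{\top}+\delta\,\mathbf{D}+\lambda\,\mathbf{X}\mathbf{L}_{S}\mathbf{X}^{\top}\bigr)\mathbf{W}=\mathbf{I},$$

where the first term enforces orthonormality of the projected dimensions, the diagonal term guards against singularity, and the graph term penalizes variance within neighborhoods; dropping the graph term recovers a conventional uncorrelated constraint of the form $\mathbf{W}^{\top}\mathbf{X}\mathbf{H}\mathbf{X}^{\top}\mathbf{W}=\mathbf{I}$.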
For semi-supervised multi-label feature selection [22, 23, 16, 28], the sample similarity structure and the label correlations serve as two crucial priors. Most existing methods use a predefined graph approach to capture the sample similarity or the label correlations. However, the presence of noise and outliers within the original feature space can undermine the reliability of the resulting graph. Besides, in semi-supervised settings where some labels are unknown, this approach cannot fully ascertain the label correlations. In this paper, we maintain the sample similarity structure and investigate label correlations by adaptively learning both the instance similarity graph and the label similarity graph. Guided by two fundamental assumptions that similar instances tend to have similar labels, and that samples close to each other in high-dimensional space should also be close in low-dimensional space, we formulate the instance similarity graph learning as follows:
(3)
where is the instance similarity matrix, in the predicted label matrix denotes the label of the th instance , and is a regularization parameter that controls the sparsity of . The first two terms of Eq. (3) are designed to learn the instance similarity matrix from the perspectives of the label space and the feature space, respectively. The third term serves as a regularization to prevent the trivial solution in which each row of contains only one element with a value of 1, while all other elements are 0. By defining the Laplacian matrix , where is a diagonal matrix whose th diagonal element equals the sum of the th row of , we can simplify Eq. (3) as follows:
(4)
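For reference, one standard instantiation of the objective described above is sketched here in our own notation ($\mathbf{f}_i$ denotes the $i$-th row of the predicted label matrix, $\mathbf{W}^{\top}\mathbf{x}_i$ the projected representation of the $i$-th sample, and $\mu,\beta$ hypothetical trade-off weights):

$$\min_{\mathbf{S}}\;\sum_{i,j=1}^{n}\Bigl(\|\mathbf{f}_{i}-\mathbf{f}_{j}\|_{2}^{2}+\mu\,\|\mathbf{W}^{\top}\mathbf{x}_{i}-\mathbf{W}^{\top}\mathbf{x}_{j}\|_{2}^{2}\Bigr)s_{ij}+\beta\,\|\mathbf{S}\|_{F}^{2}\quad\text{s.t.}\;\;\mathbf{S}\mathbf{1}=\mathbf{1},\;s_{ij}\ge 0,$$

whose pairwise sums collapse into trace terms of the form $2\,\mathrm{Tr}(\mathbf{F}^{\top}\mathbf{L}_{S}\mathbf{F})$ and $2\mu\,\mathrm{Tr}(\mathbf{W}^{\top}\mathbf{X}\mathbf{L}_{S}\mathbf{X}^{\top}\mathbf{W})$ once the Laplacian $\mathbf{L}_{S}$ is introduced, mirroring the simplification leading to Eq. (4).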
Furthermore, in order to explore label correlations, we operate under the assumption that a stronger positive correlation between two labels leads to greater similarity in the predicted labels. This assumption guides the learning of the label similarity graph, which can be expressed as follows:
(5)
where denotes the label similarity matrix, is the th label vector corresponding to the th column of , and is a regularization parameter. From Eq. (5), it is evident that a larger value indicates greater similarity between and , and vice versa. We define the Laplacian matrix , where is a diagonal matrix whose th diagonal entry is . Then, Eq. (5) can be rewritten as
(6)
By jointly utilizing Eqs. (4) and (6), we are able to collaboratively learn both the sample similarity matrix and the label similarity matrix in an adaptive manner, rather than relying on predefined methods. This helps to enhance the performance of feature selection.
By combining Eqs. (2), (4) and (6), the proposed semi-supervised multi-label feature selection method Access-MFS is summarized as follows:
(7)
where , and and are two trade-off hyper-parameters. These two regularization parameters and can be automatically determined during the optimization process.
In our Access-MFS method, we integrate feature selection, the collaborative learning of the instance similarity graph and the label similarity graph, and label learning into a unified framework, enabling these different learning tasks to promote each other. Access-MFS offers two key benefits. First, the proposed extended regression model not only maintains consistency between the predicted labels and the ground-truth labels of the labeled data but also selects discriminative and uncorrelated features. Second, Access-MFS is capable of adaptively learning instance and label similarity graphs that are more reliable than those generated by predefined methods. This collaborative learning of the two similarity-induced graphs can preserve the local structures of the feature space and the label space concurrently.
4 Optimization and Algorithm
There are five variables that need to be optimized in Eq. (7). Since these variables are interrelated, it is difficult to solve for them simultaneously. To address this, an alternating iterative algorithm is proposed to solve the optimization problem by optimizing one variable at a time while keeping the others fixed.
Since there is no constraint on the variable , we can solve for it by setting the first-order derivative of the objective function in Eq. (7) w.r.t. it to 0, in accordance with the Karush-Kuhn-Tucker (KKT) conditions [32]. The optimal solution for can be easily obtained as . By substituting with the derived solution, Eq. (7) can be reformulated as follows:
(8)
In the following, we will describe the process for updating the variables , , and in Eq. (8).
4.1 Update by Fixing Other Variables
When other variables are fixed, we can disregard the terms that do not involve in Eq. (8). Then, can be obtained by solving the following problem:
(9)
According to [33], we have . Then, Eq. (9) can be transformed into the following equivalent form:
(10)
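The property of [33] invoked here is the standard reweighting of the $\ell_{2,1}$-norm: with a diagonal matrix $\mathbf{D}$ whose $i$-th entry is $\mathbf{D}_{ii}=\frac{1}{2\|\mathbf{w}^{i}\|_{2}+\varepsilon}$ ($\varepsilon$ a small constant, matching the diagonal matrix defined in Section 3.2; the symbols $\mathbf{W},\mathbf{w}^{i}$ are our own), one has

$$\frac{\partial\,\|\mathbf{W}\|_{2,1}}{\partial\,\mathbf{W}}=2\,\mathbf{D}\,\mathbf{W},$$

so minimizing $\|\mathbf{W}\|_{2,1}$ can be carried out by iteratively minimizing the surrogate $\mathrm{Tr}(\mathbf{W}^{\top}\mathbf{D}\mathbf{W})$, with $\mathbf{D}$ recomputed from the current $\mathbf{W}$ at each iteration.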
Based on matrix calculus, Eq. (10) can be further derived as follows:
(11)
Since the constraint and is fixed, Eq. (11) simplifies to the following problem.
(12)
Let and . Then, Eq. (12) is transformed into
(13)
4.2 Update by Fixing Other Variables
By ignoring the terms that involve only the fixed variables, we can solve the following problem to obtain .
(15)
Let be a diagonal matrix, where the th diagonal entry is defined as 1 if the th sample is labeled, and 0 otherwise. Then, we have . By following a deductive process similar to that outlined in (11), problem (15) can be transformed into the following equivalent trace form:
(16)
where , .
Taking the derivative of Eq. (16) w.r.t. and setting it to 0, we obtain
(17)
The optimal solution of Eq. (17) can be obtained by using the algorithm proposed in [35].
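Reference [35] is the classical Bartels-Stewart algorithm for linear matrix equations of the form AX + XB = Q. Assuming Eq. (17) can be arranged into this form (the matrices A, B and Q below are hypothetical stand-ins for the coefficient matrices assembled from the fixed variables, not the paper's actual terms), a standard solver suffices; a minimal sketch:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n, c = 50, 6                                   # number of samples and labels (illustrative)
# Random, well-conditioned placeholders for the coefficient matrices of Eq. (17).
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((c, c)); B = B @ B.T + c * np.eye(c)
Q = rng.standard_normal((n, c))

F = solve_sylvester(A, B, Q)                   # Bartels-Stewart: solves A F + F B = Q
assert np.allclose(A @ F + F @ B, Q)
```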
4.3 Update by Fixing Other Variables
While , and are fixed, the optimization problem in Eq. (8) is equivalent to solving Eq. (3). By defining a matrix , where , Eq. (3) is transformed into
(18)
Note that Eq. (18) is independent for each row. Hence, for each , we can rewrite Eq. (18) by completing the square as follows:
(19)
Then, the Lagrangian function of Eq. (19) is written as:
(20)
where and are the Lagrange multipliers. Based on the KKT conditions, we have and . Then, we can obtain , where .
In practice, each sample tends to be similar to its neighbors. Therefore, we aim for to have nonzero values, indicating that the th sample is similar to its closest neighbors. Assuming that are arranged in ascending order, we prefer and . As a result, we have . Besides, according to the constraint , we get . By substituting into the obtained , we can calculate the regularization parameter as . Then, we obtain the optimal solution as follows:
(21)
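Assuming Eq. (21) takes the usual adaptive-neighbor form described above (each row of the similarity matrix keeps nonzero weights only for its k closest samples and sums to one), a minimal sketch of the row-wise update is given below; the distance vector, the neighbor count k, and the function name are our own illustrative choices.

```python
import numpy as np

def update_similarity_row(d, k):
    """Closed-form row update: given squared distances d from sample i to all
    samples (with the self-distance set large or excluded), return a similarity
    row that is non-negative, sums to one, and has at most k nonzero entries."""
    idx = np.argsort(d)                      # ascending distances
    d_sorted = d[idx]
    d_k1 = d_sorted[k]                       # (k+1)-th smallest distance
    denom = k * d_k1 - d_sorted[:k].sum()
    s = np.zeros_like(d, dtype=float)
    if denom > 0:
        s[idx[:k]] = (d_k1 - d_sorted[:k]) / denom
    else:                                    # degenerate case: distribute uniformly
        s[idx[:k]] = 1.0 / k
    return s
```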
4.4 Update by Fixing Other Variables
With other variables fixed, can be optimized by solving the following problem:
(22)
Let be a matrix with its th entry . Then, Eq. (22) is transformed into
(23)
Given that the optimization of each row in is independent, Eq. (23) can be addressed by optimizing each individual subproblem as follows:
(24)
Following a similar procedure to that used in solving Eq. (19), we can determine the regularization parameter , where denotes the number of nonzero elements in . Furthermore, the optimal solution is formulated as follows:
(25)
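Under the same assumptions, the row-wise routine sketched for the instance similarity matrix can be reused for the label similarity matrix, with distances now measured between columns of the predicted label matrix (all names below are hypothetical, and `update_similarity_row` is the function sketched above):

```python
import numpy as np

# F is an n x q predicted label matrix, k_label the neighbor count for labels
# (both illustrative); C collects one row per label.
F = np.random.default_rng(1).random((50, 6))
k_label = 3
C = np.zeros((F.shape[1], F.shape[1]))
for l in range(F.shape[1]):
    d_label = ((F[:, [l]] - F) ** 2).sum(axis=0)   # squared distances between label columns
    d_label[l] = np.inf                            # exclude the label itself
    C[l] = update_similarity_row(d_label, k_label)
```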
Up to this point, we have obtained the update expressions of , , and . The detailed optimization procedure of Access-MFS is summarized in Algorithm 2.
Input:
1. The data matrix , and the centering matrix ;
2. The label matrix ;
3. The trade-off parameters and .
5 Convergence and Complexity Analysis
In this section, we provide a theoretical analysis of the proposed algorithm Access-MFS, which includes both the convergence analysis and the complexity analysis.
5.1 Convergence Analysis
Since the objective function in Eq. (8) is not jointly convex w.r.t. the four variables , , and , we divide it into four sub-objective functions, namely Eqs. (9), (15), (18), and (22), each of which is solved separately with the other variables fixed. Therefore, the convergence of Algorithm 2 can be demonstrated by proving that each sub-objective function decreases monotonically. We first present the following theorem to prove the monotonic decrease of the objective function in Eq. (9) when updating with , and fixed.
Theorem 1.
We can prove this theorem based on the property in [33] and the result in [34]. Due to space constraints, the detailed proof of Theorem 1 is provided in the supplementary material.
Besides, as Eq. (15) is equivalent to Eq. (16), we only need to demonstrate that the objective function in Eq. (16) decreases monotonically when updating . It is easy to verify that the matrix and the Laplacian matrix are both positive semidefinite. Consequently, the quadratic forms and are convex, and hence the objective function in Eq. (16) is convex. According to [35], the objective function in Eq. (16) decreases monotonically under the updating rules in Algorithm 2. Additionally, the objective functions in Eqs. (18) and (22) are convex w.r.t. and , respectively, and possess closed-form solutions, ensuring the convergence of the updates for and . Therefore, we can conclude that the objective function in Eq. (8) decreases monotonically with each iteration of Algorithm 2. Moreover, the convergence behavior of Algorithm 2 is further demonstrated through experiments.
5.2 Complexity Analysis
In Algorithm 2, four variables are updated alternately by solving Eqs. (9), (15), (18), and (22). The optimization of Eq. (9) is further transformed into solving Eq. (13). According to [34], the time complexity of updating in Eq. (13) is . For updating , the algorithm proposed in [35] is applied to solve Eq. (15); based on [35], the computational complexity of updating is . Eq. (18) is solved by updating row by row; according to the closed-form solution in Eq. (21), the time complexity of updating is . The optimization in Eq. (22) is similar to that of , and in the same way the time complexity of updating is . Therefore, the total computational complexity of Algorithm 2 is , where is the number of iterations.
6 Experiments
In this section, we compare Access-MFS with several state-of-the-art semi-supervised multi-label feature selection methods. We conduct extensive experiments on eight real-world benchmark datasets to demonstrate the effectiveness of our approach.
6.1 Experimental Schemes
6.1.1 Datasets
In our experiments, we collected eight real-world benchmark multi-label datasets, each with distinct statistical characteristics, to evaluate the performance of the various feature selection methods. These datasets consist of the biology dataset VirusGO (http://www.uco.es/kdis/mllresources), two medicine datasets, Parkinson (https://github.com/XSilverBullet/parkinson-) and Medical (http://www.uco.es/kdis/mllresources), as well as five text datasets: Langlog, Enron, Slashdot, Recreation and Computer (all available at http://www.uco.es/kdis/mllresources). The detailed statistics of these multi-label datasets are summarized in Table I. Within this table, Label card. refers to the average number of labels per instance, while Label dens. reflects the label density, calculated as Label card. divided by the total number of labels.
Datasets | Instances | Features | Labels | Label card. | Label dens. |
---|---|---|---|---|---|
VirusGO | 207 | 749 | 6 | 1.2174 | 0.2029 |
Parkinson | 401 | 91 | 5 | 1.2594 | 0.2519 |
Medical | 978 | 1449 | 45 | 1.2454 | 0.0277 |
Langlog | 1460 | 1004 | 75 | 1.1801 | 0.0157 |
Enron | 1702 | 1001 | 53 | 3.3784 | 0.0637 |
Slashdot | 3782 | 1079 | 22 | 1.1809 | 0.0537 |
Recreation | 5000 | 606 | 22 | 1.4232 | 0.0647 |
Computer | 5000 | 681 | 33 | 1.5082 | 0.0457 |
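As a quick illustration of the two statistics in Table I, both can be computed directly from a binary label matrix; a minimal sketch (the matrix name and toy values are illustrative):

```python
import numpy as np

def label_statistics(Y):
    """Y: n x q binary label matrix (1 = instance carries the label).
    Returns (label cardinality, label density)."""
    cardinality = Y.sum(axis=1).mean()       # average number of labels per instance
    density = cardinality / Y.shape[1]       # cardinality divided by number of labels
    return cardinality, density

# Example: a toy 4-instance, 3-label matrix
Y = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1], [1, 0, 1]])
print(label_statistics(Y))                   # (1.5, 0.5)
```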
6.1.2 Comparison Methods
To evaluate the performance of the proposed method Access-MFS, we compare it with several state-of-the-art multi-label feature selection methods. This includes nine semi-supervised methods (namely SMILE, FSSRDM, SCFS, LEDA, LSMR, SMLFS, SFAM, SMDR-IC, and SFS-BLL), one supervised method (LFFS), and one unsupervised method (UAFS-BH). The following is a brief introduction to the methods being compared.
All-Fea utilizes all original features for comparison.
SMILE [36] uses label correlations to infer labels for unlabeled samples and embeds them into an adapted neighborhood graph model for feature selection.
FSSRDM [37] selects features through an -norm-based sparse regression model that incorporates maximizing the dependence between labels and features.
SCFS [38] ensures consistency between feature and label spaces by leveraging spectral analysis to learn sample similarity during feature selection.
LEDA [22] integrates a multi-label propagation mechanism into an extended linear discriminant analysis for selecting informative features.
LSMR [39] constructs a multi-label semi-supervised regression model for feature selection.
SMLFS [27] incorporates the feature graph and label graph into a logistic regression model with regularization to learn the regression coefficient matrix.
SFAM [23] combines adaptive graph structure learning and local manifold learning in a unified feature selection framework.
SMDR-IC [28] selects important features by incorporating label and sample correlations into a sparse regression model equipped with -norm and -norm regularization.
SFS-BLL [29] utilizes binary hash learning to generate pseudo labels and integrates them into a self-weighted sparse regression model for feature selection.
LFFS [16] incorporates both global and local label correlation structural information into a sparse regression model to learn the feature selection matrix.
UAFS-BH [40] integrates binary hash constraints into the spectral embedding model to facilitate the learning of weakly-supervised labels and the selection of features.
6.1.3 Implementation Details
Several parameters need to be set in Access-MFS and the other compared methods. For Access-MFS, the objective function in Eq. (8) includes five parameters: , , , and . The regularization parameters and are automatically determined while solving and in Eqs. (21) and (25), respectively. The remaining three parameters , and are tuned using a grid search within the range . For all compared methods, the parameters are tuned in accordance with the original papers to achieve optimal results. For each dataset, following the approach of existing semi-supervised feature selection methods [29, 41], we randomly select of the instances as the training set from the entire dataset, using the remainder as the testing set. Additionally, the test data also serve as unlabeled data for the semi-supervised multi-label feature selection methods. As determining the optimal number of selected features is still an open problem [42], we set the range for the number of selected features between 100 and 200 with an increment of 10 across all datasets (except for the Parkinson dataset, which contains only 91 features). For the Parkinson dataset, we vary the number of selected features from 5 to 55 with an increment of 5. Then, Multi-Label K-Nearest Neighbor (MLKNN), as proposed in [43], is executed on the selected features. Four commonly used evaluation metrics, Average Precision (AP) [1], Macro-F1 (MaF) [1], Ranking Loss (RL) [1] and One Error (OE) [1], are employed to assess the performance of the different methods. For Average Precision and Macro-F1, a higher value indicates better performance, while for Ranking Loss and One Error, a lower value indicates better performance. All methods are implemented in MATLAB R2022b and executed on a desktop with an Intel Core i9-10900 CPU at 2.80 GHz and 64 GB RAM. The experiments are independently conducted five times and the average results are reported for comparison.
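For reference, the sketch below shows one way the four metrics could be computed from the ground-truth label matrix and the real-valued scores produced by the classifier trained on the selected features (the MLKNN step itself is omitted; the function name and the 0.5 threshold are our own illustrative choices, not the paper's protocol):

```python
import numpy as np
from sklearn.metrics import (f1_score, label_ranking_average_precision_score,
                             label_ranking_loss)

def evaluate(Y_true, scores, threshold=0.5):
    """Y_true: n x q binary label matrix; scores: n x q real-valued predictions."""
    ap = label_ranking_average_precision_score(Y_true, scores)    # Average Precision
    rl = label_ranking_loss(Y_true, scores)                       # Ranking Loss
    maf = f1_score(Y_true, scores >= threshold, average='macro')  # Macro-F1
    top = scores.argmax(axis=1)                                   # top-ranked label per sample
    oe = np.mean(Y_true[np.arange(len(Y_true)), top] == 0)        # One Error
    return ap, maf, rl, oe
```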
TABLE II: Experimental results (AP and MaF) of Access-MFS, All-Fea, SMILE, FSSRDM, SCFS, LEDA, LSMR, SMLFS, SFAM, SMDR-IC, SFS-BLL, LFFS and UAFS-BH on the VirusGO, Parkinson, Medical, Langlog, Enron, Slashdot, Recreation and Computer datasets (numerical entries omitted).




6.2 Experimental Results and Performance Analysis
In this section, we demonstrate the superior performance of the proposed Access-MFS compared with other competing methods in terms of AP, MaF, RL and OE. Table II summarizes the experimental results of the various methods on the eight multi-label datasets based on the AP and MaF metrics, with the ratio of labeled instances fixed at 40% and 30 features selected for the Parkinson dataset and 150 features for the remaining datasets. Due to space constraints, the experimental results regarding RL and OE can be found in Table 1 of the supplementary material. From Table II, we can see that the proposed Access-MFS consistently outperforms the other methods across all datasets. On the VirusGO, Parkinson and Recreation datasets, Access-MFS achieves an average improvement of over 10% in both AP and MaF compared to all other methods. On the Medical dataset, Access-MFS gains over 32% and 15% average improvement in terms of AP and MaF, respectively. On the Slashdot dataset, Access-MFS outperforms the other methods by more than 13% in average AP and MaF. On the Computer dataset, Access-MFS obtains over 2.5% average improvement in both AP and MaF. On the Langlog and Enron datasets, Access-MFS achieves an average improvement of over 3.6% in AP and still outperforms all other methods in MaF. Besides, Table 1 in the supplementary material shows that the proposed Access-MFS is also better than the other methods on all datasets in terms of RL and OE. Furthermore, compared to the baseline All-Fea, Access-MFS is superior across all datasets in terms of AP, MaF, RL and OE, demonstrating its effectiveness. Access-MFS also performs better than the supervised method LFFS on all datasets, indicating the effectiveness of the proposed semi-supervised learning strategy in enhancing feature selection performance.
Due to the difficulty in determining the optimal number of selected features, we also present the performance (AP, MaF, RL and OE) of the different methods as the number of selected features varies. Due to space limitations, we only report the experimental results for AP and MaF; the results for RL and OE can be found in Figs. 1 and 2 of the supplementary material. Figs. 2 and 3 respectively show the AP and MaF values for different numbers of selected features, with the labeled instance ratio fixed at 40%. As illustrated in these two figures, the proposed Access-MFS outperforms the other methods in most cases, with the number of selected features varying from 5 to 55 on the Parkinson dataset and from 100 to 200 on the other datasets. Furthermore, we also investigate the performance of the different methods with varying ratios of labeled instances. Figs. 4 and 5 respectively show the AP and MaF of the different methods when the ratio of labeled instances ranges from 10% to 50%. The experimental results for RL and OE with varying ratios of labeled instances can be found in Figs. 3 and 4 of the supplementary material, respectively. From these figures, we can see that our method achieves better performance in most cases. In addition, our method performs better as the ratio of labeled instances increases, indicating that our approach effectively leverages the labeled information to guide the collaborative learning of the sample and label similarity graphs, ultimately enhancing feature selection performance. The superior performance of the proposed method is attributed to the adaptive collaborative learning of similarity graphs from both the sample and label perspectives, which are embedded into a generalized regression model with an extended uncorrelated constraint and a consistency constraint between the predicted and existing labels.
6.3 Parameter Sensitivity Experiment
In this section, we conduct experiments to investigate the sensitivity of the proposed Access-MFS to three parameters, i.e., , , and . Fig. 6 shows the performance variations w.r.t. AP on the Enron and Recreation datasets when two of the parameters are varied while the third is kept fixed. We can observe that the parameters and are relatively more sensitive than the parameter . Besides, when is fixed, the proposed method achieves prominent performance with the parameter combinations of or and in most cases. Hence, we can empirically fine-tune and within this narrower range to achieve better performance.


6.4 Convergence Experiment
The theoretical convergence of the proposed Access-MFS algorithm has been established in Section 5.1. This subsection provides experimental validation of its convergence and analyzes the convergence speed. Fig. 7 displays the convergence curves of Access-MFS on the VirusGO, Parkinson, Enron and Recreation datasets, showing the variation of the objective function value with the number of iterations. As illustrated in Fig. 7, the convergence curves drop steeply within a few iterations and stabilize within about 10 iterations. These experimental results demonstrate that the proposed method converges effectively.
TABLE III: Ablation results (AP and MaF) of Access-MFS and its three variants (Access-MFS-I, Access-MFS-II and Access-MFS-III) on the VirusGO, Parkinson, Medical, Langlog, Enron, Slashdot, Recreation and Computer datasets (numerical entries omitted).
6.5 Ablation Study
In this section, we conduct an ablation study to evaluate the effects of the adaptive collaborative correlation learning module for the instance similarity graph and label similarity graph in the proposed method. To this end, three variant methods of Access-MFS are designed for comparison as follows:
(1) Access-MFS-I: It is only composed of the instance similarity graph learning module in Access-MFS.
(26)
(2) Access-MFS-II: It is only composed of the label similarity graph learning module in Access-MFS.
(27)
(3) Access-MFS-III: It performs feature selection without the adaptive collaborative learning of the instance similarity graph and the label similarity graph modules in Access-MFS.
(28)
Table III presents the ablation results comparing the proposed Access-MFS with its three variants in terms of AP and MaF. The experimental results regarding RL and OE can be found in Table 2 of the supplementary material. From these tables, we can see that our method outperforms the variants that use only instance similarity graph learning or only label similarity graph learning (namely, Access-MFS-I and Access-MFS-II). Additionally, the variants with only one type of similarity graph learning (Access-MFS-I and Access-MFS-II) perform better than the one without any similarity graph learning, i.e., Access-MFS-III. This demonstrates the effectiveness of the proposed adaptive collaborative correlation learning module, which can automatically capture the local geometric structure of the samples and accurately learn label correlations using the existing labels. These two processes mutually enhance each other, improving feature selection performance.
7 Conclusion
To address the high dimensionality of multi-label data and the difficulty of obtaining labels for all instances, we proposed a novel semi-supervised multi-label feature selection method named Access-MFS, which uses only a subset of labeled samples. Unlike existing methods that use predefined approaches to describe either sample correlations or label correlations separately, our method enhances the performance of semi-supervised multi-label feature selection by adaptively and collaboratively learning both sample correlations and label correlations. A generalized regression model equipped with an extended uncorrelated constraint was presented to select informative features from semi-supervised multi-label data. Meanwhile, the adaptive collaborative learning module of the instance similarity graph and the label similarity graph was integrated into the generalized regression model to preserve sample similarity from the high-dimensional to the low-dimensional space, assign similar labels to similar samples, and enforce consistent predictions for strongly correlated labels. An alternating iterative optimization algorithm was also developed to solve the proposed objective function. Experimental results on real-world semi-supervised multi-label data showed that Access-MFS outperforms the state-of-the-art methods.
Acknowledgments
This work was supported by the Natural Science Foundation Project of Sichuan Province (No. 24NSFSC0508), the Youth Fund Project of Humanities and Social Science Research of Ministry of Education (No. 21YJCZH045), and the National Natural Science Foundation of China (No. 62176221).
References
- [1] M.L. Zhang and Z.H. Zhou, ”A review on multi-label learning algorithms”, IEEE Trans. Knowl. Data Eng., 26 (8) (2014) 1819-1837.
- [2] M.L. Zhang and L. Wu, ”Lift: Multi-label learning with label-specific features,” IEEE Trans. Pattern Anal. Mach. Intell., 37 (1) (2015) 107-120.
- [3] W. Lu, D. Li, L. Nie, P. Jing and Y. Su, ”Learning dual low-rank representation for multi-label micro-video classification,” IEEE Trans. Multimedia, 25 (2023) 77-89.
- [4] T. Jiang, D. Wang, L. Sun, H. Yang, Z. Zhao, and F. Zhuang, ”LightXML: Transformer with dynamic negative sampling for high-performance extreme multi-label text classification,” AAAI Conf. Artif. Intell., 35 (9) (2021) 7987-7994.
- [5] C. Wang, S.C. Yan, L. Zhang and H.J. Zhang, ”Multi-label sparse coding for automatic image annotation,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 2009, pp. 1643-1650.
- [6] W. Qian, J. Huang, F. Xu, W. Shu and W. Ding, ”A survey on multi-label feature selection from perspectives of label fusion,” Inf. Fusion, 100 (2023) 101948.
- [7] L. Hu, L. Gao, Y. Li, P. Zhang and W. Gao, ”Feature-specific mutual information variation for multi-label feature selection,” Inf. Sci., 593 (2022) 449-471.
- [8] J. Li, K. Cheng, S. Wang, F. Morstatter, R. P. Trevino, J. Tang and H. Liu, ”Feature selection: A data perspective,” ACM Comput. Surv., 50 (6) (2017) 1-45.
- [9] X. He, D. Cai and P. Niyogi, ”Laplacian score for feature selection,” Proc. Adv. Neural Inf. Process. Syst., 2005, pp. 507-514.
- [10] Y. Fu, X. Liu, S. Sarkar and T. Wu, ”Gaussian mixture model with feature selection: An embedded approach,” Comput. Ind. Eng., 152 (2021) 107000.
- [11] Z. Zhao and L. Huan, ”Spectral feature selection for supervised and unsupervised learning,” Proc. 24th Int. Conf. Mach. Learn., 2007, pp. 1151-1157.
- [12] F. Wu, Y. Yuan and Y. Zhuang, ”Heterogeneous feature selection by group lasso with logistic regression,” Proc. 18th ACM Int. Conf. Multimedia, 2010, pp. 983-986.
- [13] L. Jian, J. Li, K. Shu and H. Liu, ”Multi-label informed feature selection,” Proc. Int. Joint Conf. Artif. Intell., 2016, pp. 1627-1633.
- [14] A. Braytee, W. Liu, D. R. Catchpoole and P. J. Kennedy, ”Multi-label feature selection using correlation information,” Proc. ACM Conf. Inf. Knowl. Management, 2017, pp. 1649-1656.
- [15] J. Zhang, Z. Luo, C. Li, C. Zhou and S. Li, ”Manifold regularized discriminative feature selection for multi-label learning,” Pattern Recognit., 95 (2019) 136–150.
- [16] Y.L. Fan, B.H. Chen, W.Q. Huang, J.H. Liu, W. Weng and W.Y. Lan, ”Multi-label feature selection based on label correlations and feature redundancy,” Knowl. Based Syst., 241 (2022) 108256.
- [17] J. Zhang, H. Wu, M. Jiang, J. Liu, S. Li, Y. Tang and J. Long, ”Group-preserving label-specific feature selection for multi-label learning,” Expert Syst. Appl., 213 (2023) 118861.
- [18] Z.Ma, F. Nie, Y. Yang, J. R. Uijlings and N. Sebe, ”Web image annotation via subspace-sparsity collaborated feature selection,” IEEE Trans. Multimedia, 14 (4) (2012) 1021-1030.
- [19] Z. Xu, I. King, M.R.T. Lyu and R. Jin, ”Discriminative semi-supervised feature selection via manifold regularization,” IEEE Trans. Neural Netw., 21 (7) (2010) 1033-1047.
- [20] N. Sun, T. Luo, W. Zhuge, H. Tao, C. Hou and D. Hu, ”Semi-supervised learning with label proportion,” IEEE Trans. Knowl. Data Eng., 35 (1) (2021) 877-890.
- [21] X. Chang, F.Nie, Y. Yang and H. Huang, ”A convex formulation for semi-supervised multi-label feature selection,” Proc. AAAI Conf. Artif. Intell., 2014, pp. 1171-1177.
- [22] B. Guo, H. Tao, C. Hou and D. Yi, ”Semi-supervised multi-label feature learning via label enlarged discriminant analysis,” Knowl. Inf. Syst., 62 (2020) 2383-2417.
- [23] S. Lv, S.F. Shi, H.Z. Wang and F. Li, ”Semi-supervised multi-label feature selection with adaptive structure learning and manifold learning,” Knowl. Based Syst., 214 (2021) 106757.
- [24] W. Liu, H. Wang, X. Shen and I.W. Tsang, ”The emerging trends of multi-label learning,” IEEE Trans. Pattern Anal. Mach. Intell., 44 (11) (2021) 7955-7974.
- [25] L. Laporte, R. Flamary, S. Canu, S. Déjean and J. Mothe, ”Nonconvex regularizations for feature selection in ranking with sparse SVM,” IEEE Trans. Neural Netw. Learn. Syst., 25 (6) (2013) 1118-1130.
- [26] Y. Liu, D. Ye, W. Li, H. Wang and Y. Gao, ”Robust neighborhood embedding for unsupervised feature selection,” Knowl. Based Syst. 193 (2020) 105462.
- [27] Y. Zhang, Y. Ma, X. Yang, H. Zhu and T. Yang, ”Semi-supervised multi-label feature selection with local logic information preserved,” Adv. Comput. Intell., 1 (7) (2021) 1-15.
- [28] R. Li, J. Du, J. Ding, L. Jia, Y. Chen and Z. Shang, ”Semi-supervised multi-label dimensionality reduction learning by instance and label correlations”, Mathematics, 11(3) (2023) 782.
- [29] D. Shi, L. Zhu, J. Li, Z. Cheng and Z. Liu, ”Binary label learning for semi-supervised feature selection,” IEEE Trans. Knowl. Data Eng., 35 (3) (2023) 2299-2312.
- [30] S. Xiang, F. Nie, G. Meng, C. Pan and C. Zhang, “Discriminative least squares regression for multiclass classification and feature selection”, IEEE Trans. Neural Netw. Learn. Syst., 23 (11) (2012) 1738-1754.
- [31] X. Chen, F. Nie, G. Yuan and J.Z. Huang, “Semi-supervised feature selection via rescaled linear regression”, Proc. AAAI Int. Joint Conf. Artif. Intell., 2017, pp. 1525-1531.
- [32] D.P. Bertsekas, ”Constrained optimization and Lagrange multiplier methods,” Academic Press, 2014.
- [33] F. Nie, H. Huang, X. Cai, and C. H. Ding, ”Efficient and robust feature selection via joint l2,1-norms minimization”, Proc. Adv. Neural Inf. Proc. Syst., 23 (2010) 1813-1821.
- [34] J. Huang, F.P. Nie, H. Huang and C. Ding, ”Robust manifold nonnegative matrix factorization,” ACM Trans. Knowl. Disc. Data, 8 (3) (2014) 1-21.
- [35] R. Bartels and G. Stewart, ”Solution of the matrix equation AX + XB = C,” Commun. ACM, 15 (9) (1972) 820-826.
- [36] Q. Tan, Y. Yu, G. Yu and J. Wang, ”Semi-supervised multi-label classification using incomplete label information,” Neurocomputing, 260 (2017) 192-202.
- [37] L. Jiang, J. Wang and G. Yu, ”Semi-supervised multi-label feature selection based on sparsity regularization and dependence maximization,” Int. Conf. Intell. Control Inf. Proc., 2018, pp. 325-332.
- [38] Y. Xu, J. Wang, S. An, J. Wei and J. Ruan, ”Semi-supervised multi-label feature selection by preserving feature-label space consistency,” Proc. 27th ACM Int. Conf. Inf. Knowl. Manag., 2018, pp. 783-792.
- [39] V. Kraus, K. Benabdeslem and B. Canitia, ”Laplacian-based semi-supervised multi-label regression,” Int. Joint Conf. Neural Netw., 2020, pp. 1-8.
- [40] D. Shi, L. Zhu, J. Li, Z. Zhang and X. Chang, ”Unsupervised adaptive feature selection with binary hashing,” IEEE Trans. Image Process., 32 (2023) 838-853.
- [41] X. Chen, G. Yuan, F. Nie and Z. Ming, ”Semi-supervised feature selection via sparse rescaled linear square regression,” IEEE Trans. Knowl. Data Eng., 32 (1) (2020) 165-176.
- [42] J. Li, K. Cheng, S. Wang, F. Morstatter, R. P. Trevino, J. Tang and H. Liu, ”Feature selection: A data perspective,” ACM Comput. Surv., 50 (6) (2017) 1-45.
- [43] M.L. Zhang and Z.H. Zhou, ”ML-KNN: A lazy learning approach to multi-label learning,” Pattern Recognit., 40 (7) (2007) 2038-2048.