
Modified Feature Selection for Improved Classification of Resting-State Raw EEG Signals in Chronic Knee Pain

Jean Li, Dirk De Ridder, Divya Adhia, Matthew Hall, Jeremiah D. Deng J Li is with the School of Computing, University of Otago, Dunedin 9054, New Zealand; D De Ridder, D Adhia and M Hall are with the Department of Surgical Sciences, School of Medicine, University of Otago, Dunedin 9054, New Zealand. *JD Deng is with the School of Computing, University of Otago (correspondence e-mail: [email protected]).
Abstract

Objective: Diagnosing pain in research and clinical practice still relies on self-report. This study aims to develop an automatic approach that works on resting-state raw EEG data for chronic knee pain prediction. Method: A new feature selection algorithm called “modified Sequential Floating Forward Selection” (mSFFS) is proposed. The improved feature selection scheme can better avoid local minima and explore alternative search routes. Results: The feature set selected by mSFFS displays better class separability, as indicated by Bhattacharyya distance measures, and better visualization results. It also outperforms selections generated by other benchmark methods, boosting the test accuracy to 97.5%. Conclusion: The improved feature selection searches out a compact, effective subset of connectivity features that produces competitive performance on chronic knee pain prediction. Significance: We have shown that an automatic approach can be employed to find a compact connectivity feature set that effectively predicts chronic knee pain from EEG. It may shed light on research into chronic pain and lead to future clinical solutions for diagnosis and treatment.

Index Terms:
Feature selection, resting-state EEG, chronic pain, knee pain.

I Introduction

Pain is defined as an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage [1]. While acute pain can be recognized as a symptom of an underlying problem, chronic pain, which extends beyond the period of healing from the original insult or injury, lacks the acute warning function of physiological nociception [2]. The International Association for the Study of Pain and the International Classification of Diseases (ICD-11) have defined chronic pain as pain that extends beyond three months, irrespective of the cause. Therefore, chronic pain can be considered not a mere symptom of another disease, but a health condition by itself [3, 2]. Indeed, chronic pain is not simply a temporal extension of acute pain but involves distinct mechanisms [4], and it is associated with multiple other symptoms. Besides being associated with physical conditions such as cardiovascular disease [5], chronic pain also correlates with cognitive dysfunction such as difficulties in attention, learning, memory, and decision making [6]. About a third of the patients who suffer from chronic pain display other symptoms such as a combination of irritability, depression, anxiety, and sleep problems [7, 5, 8].

Chronic pain and its co-morbidities lead to increased disability [9], carrying the highest global burden associated with disability [10]. The correlation between chronic pain and disability creates a huge cost to society ($560–635 billion per year), far more than heart disease, cancer, and diabetes [11], and back pain alone amounts to 1.7% of the gross national product every year [12]. Chronic pain is a worldwide problem. Studies have found that the prevalence of chronic pain is 20% in both the USA [13] and Europe [7], and around 30% in China [8], which is similar to the prevalence in low-income and middle-income countries [14]. High-impact chronic pain, which occurs in 8% of the population, frequently limits patients’ life or work activities [13]. Due to the lack of effective, specific, and safe therapies, chronic pain remains one of the major causes of human suffering [10]. Many of the currently available pain therapies are either inadequate or cause unpleasantness to patients [15], with some even causing deleterious side effects [16, 17].

In research and clinical practice, the gold standard for diagnosing pain is self-report. Self-reported pain diagnosis can be challenging in certain clinical contexts or conditions; in particular, reliance on self-report is problematic where there is no evidence of tissue damage, or where the presence of tissue damage may not be necessary for pain to be present. Moreover, an objective measure of pain could pave the way to improved diagnostic and treatment outcomes. Therefore, it is necessary to develop a simple, reliable, and clinically applicable method that can validate the pain experience and justify the allocation of relevant healthcare services and benefits.

The goal of this study is to find an automatic method to predict chronic knee pain through resting-state EEG. We focus on functional connectivity rather than activity, as recent studies have suggested the effectiveness of functional connectivity in EEG pain analysis [18, 19, 20], and this may be superior to detecting a pain-generating network [21].

Our main contribution is to propose a modified sequential floating forward feature selection algorithm (mSFFS) that explores alternative search paths to avoid potential local minima and searches out better feature selections. Our experiments show that mSFFS outperforms a wide range of existing feature selection solutions. Apart from classification performance evaluation, we also assess the quality of the feature schemes from multiple perspectives, including visualization, evaluation of the Jaccard index, and a separability index based on the Bhattacharyya distance.

The remainder of the paper is organized as follows. We first give a brief review of related work in Section II and introduce our EEG dataset in Section III. Methods for pre-processing of EEG recordings and feature extraction are outlined in Section IV, where our modified algorithm for feature selection, the approach for model evaluation, and the class separability index are also given. Section V presents the experimental results, where mSFFS shows clear competence. A 20-dimensional connectivity feature set is found, and its relevance to the recent literature is discussed. We conclude the paper in Section VI.

II Related work

Multiple previous studies have shown the effectiveness of pain prediction through EEG using machine learning methods. Vanneste et al. [22] yielded an accuracy of 92.53% on chronic pain through a support vector machine (SVM) trained on regions of interest. Nezam et al. [23] applied an SVM to selected band power features and reported accuracies of 83±5% and 62±6% for three and five pain levels, respectively. Vuckovic et al. [24] predicted central neuropathic pain based on resting-state EEG and reported an accuracy of 85% with a simple linear classifier using band power features. Vijayakumar et al. [25] investigated quantifying tonic thermal pain across healthy subjects; using time-frequency wavelet representations of independent components obtained from EEG, their random forest model achieved an accuracy of 89.45%. Features selected from a combination of power spectral density (PSD) and functional connectivity (FC) obtained accuracies of 86.2%–93.8% in classifying heat pain [26], and accuracies of 85%–100% were achieved when classifying stimulated pain versus non-pain states in the same individuals using SVM [27].

These previous approaches, however, relied on labor-intensive pre-processing for artefact removal, which is expensive and, therefore, cannot be used in routine clinical practice. Therefore, a user-friendly, clinically applicable system is needed that can automatically pick up the neural pain signature on raw, unprocessed data, using a low-cost device.

III Data Description

The data used for this study consist of EEG recordings of 37 healthy participants, and 37 patients with chronic knee pain. Data were acquired in Dunedin Hospital with 19 electrodes in the standard 10–20 placement (Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, O2). The study was approved by the Health and Disability Ethics Committee (HDEC), New Zealand (21CEN63), on 22 March 2021.

IV Methods

IV-A Pre-processing and feature extraction

All EEG recordings went through an automatic pre-processing procedure to remove potential artifacts and control volume conduction.

The first 10 seconds of all the recordings were discarded to avoid the “initial drift” noise. A notch filter at 50 Hz was applied to all data to attenuate the power-line interference, followed by a high-pass filter at 1 Hz and a low-pass filter at 40 Hz. Each recording was cut into 10-second epochs, which went through an auto-cleaning process that repairs epochs by interpolation or excludes them. This is achieved by first estimating an optimal global peak-to-peak threshold, then finding trial-wise bad sensors by estimating a separate rejection threshold for each sensor [28]. Lastly, the current source density transformation based on the spherical spline surface Laplacian [29, 30, 31, 32] was applied to control volume conduction.
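This pipeline maps onto standard MNE-Python and autoreject calls. The following is a minimal sketch under stated assumptions, not the authors' exact script: the file name is a placeholder, the montage name and autoreject settings are illustrative defaults.

    import mne
    from autoreject import AutoReject

    # Load one resting-state recording (file name is a placeholder)
    raw = mne.io.read_raw_edf("subject01.edf", preload=True)
    raw.set_montage("standard_1020")       # electrode positions; needed for CSD
    raw.crop(tmin=10.0)                    # discard the first 10 s ("initial drift")
    raw.notch_filter(freqs=50.0)           # attenuate 50 Hz power-line interference
    raw.filter(l_freq=1.0, h_freq=40.0)    # 1 Hz high-pass, 40 Hz low-pass

    # Cut into 10-second epochs, then auto-clean: estimate a global peak-to-peak
    # threshold and per-sensor thresholds, repairing or dropping bad epochs [28]
    epochs = mne.make_fixed_length_epochs(raw, duration=10.0, preload=True)
    epochs_clean = AutoReject(random_state=42).fit_transform(epochs)

    # Current source density (spherical-spline surface Laplacian) to control
    # volume conduction
    epochs_csd = mne.preprocessing.compute_current_source_density(epochs_clean)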

In this study, the EEG connectivity is based on the corrected imaginary phase locking value (ciPLV) [33], as this metric is robust in the presence of volume conduction and has proved capable of ignoring zero-lag connectivity while correctly estimating nonzero-lag connectivity [33]. In particular, for each pair of channels, the ciPLV across all epochs at a chosen frequency $\omega$ was computed based on the averages of the real and imaginary parts of the cross-spectral densities:

C(\omega)=\frac{\left|E\left[\frac{\mathrm{Im}(S_{xy}(\omega))}{|S_{xy}(\omega)|}\right]\right|}{\sqrt{1-\left|E\left[\frac{\mathrm{Re}(S_{xy}(\omega))}{|S_{xy}(\omega)|}\right]\right|^{2}}}, (1)

where $S_{xy}(\cdot)$ is the cross-spectral density for channels $x$ and $y$, and $E[\cdot]$ denotes averaging over all epochs within a sample.

For every pair of channels, the connectivity across all epochs was computed on the mean frequency value of 5 major brain wavebands. These frequency bands are delta (1–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–30 Hz), and gamma (30–40 Hz). Each sample is therefore represented by a feature vector of 855 dimensions (171 channel pairs × 5 bands). The MNE package [34] was applied to compute the spectral connectivity measures.
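Assuming the modern mne_connectivity API, the feature extraction can be sketched as follows; here ciPLV is averaged within each band (faverage=True), which approximates the per-band computation described above, and epochs_csd is the pre-processed epochs object from the previous step.

    import numpy as np
    from mne_connectivity import spectral_connectivity_epochs

    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
             "beta": (12, 30), "gamma": (30, 40)}
    fmin = tuple(lo for lo, hi in bands.values())
    fmax = tuple(hi for lo, hi in bands.values())

    con = spectral_connectivity_epochs(
        epochs_csd, method="ciplv", mode="multitaper",
        fmin=fmin, fmax=fmax, faverage=True, verbose=False)

    # One ciPLV value per channel pair and band; with 19 channels the lower
    # triangle holds 171 pairs, giving 171 x 5 = 855 features per sample
    dense = con.get_data(output="dense")            # shape (19, 19, 5)
    rows, cols = np.tril_indices(19, k=-1)
    features = np.concatenate([dense[rows, cols, b] for b in range(5)])
    print(features.shape)                           # (855,)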

IV-B Feature selection

There are two main challenges to selecting features for our study. First, we have a relatively small sample size of 74 and a large number of features at 855. This requires a feature selection approach that can effectively handle high-dimensional problems with limited data. Second, the signals collected from each channel represent a summation of neural activity from a broad region of the brain, resulting in high collinearity between features in sensor-space EEG analysis.

Various existing feature selection schemes were examined, including those based on filtering, greedy search, mutual information, random shuffling, and swarm intelligence. Unfortunately, none of these approaches yielded satisfactory results. As a result, we developed a customized feature selection scheme by modifying the sequential floating forward selection (SFFS) method [35]. Like SFFS, we use a classifier as a wrapper. Although our feature selection method is independent of the wrapper choice, we used a support vector machine (SVM) with radial basis function kernels as the classifier/wrapper. For the remainder of this article, we will refer to this modified SFFS scheme as “mSFFS”.

mSFFS starts with an empty feature set. In each search iteration, a forward selection is conducted, followed by a backward selection. In the forward selection step, assuming there are k−1 features in the current set, the feature that can increase the historical best score of k-feature sets is included to form a new set of k features. Similarly, in backward selection, features are excluded one by one if the reduced set results in a higher score than the historical best set. This process is repeated until k reaches the number of features required. Compared to the classical SFFS algorithm, mSFFS records the winning sets for all numbers of features and always starts forward selection from a historical best set, hence allowing for multiple search branches. The winning sets are updated along the way in both the forward and backward selection processes. Bookkeeping of all searched sets is carried out to speed up the search and avoid deadlocks. The feature selection algorithm, using an external classifier function to evaluate feature selections, is presented in Algorithm 1.

To narrow down the search space, a preselection procedure was employed. We calculated the SHAP values [36] using XGBoost [37], and only kept the features with a mean absolute SHAP value greater than a given threshold. We took a lenient setting here and had the threshold set to 0. Depending on the exact situation, a positive threshold value could be considered if a harsher preselection policy is intended.
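A minimal sketch of this preselection step, assuming a feature matrix X of shape (74, 855) and binary labels y; the zero threshold reproduces the lenient setting described above.

    import numpy as np
    import shap
    from xgboost import XGBClassifier

    model = XGBClassifier(eval_metric="logloss").fit(X, y)
    shap_values = shap.TreeExplainer(model).shap_values(X)  # (n_samples, n_features)

    mean_abs_shap = np.abs(shap_values).mean(axis=0)
    threshold = 0.0                            # lenient; raise for a harsher preselection
    preselected = np.flatnonzero(mean_abs_shap > threshold)
    X_pre = X[:, preselected]                  # features passed on to mSFFS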

The number of selected features to be used in modeling will contribute to the complexity of the model, which may affect its generalization ability. We used cross-validation in deciding the optimal number of features because cross-validation is an established, reliable approach in model selection [38]. In our experiment, we obtained the performance of feature selections using 1 to 50 features incrementally, and selected the best feature set corresponding to the best cross-validation accuracy.

Data: K: intended number of features; full feature set ℱ
External Function: score(W): returns the performance metric (e.g., accuracy) of the classifier given a feature scheme W
Result: Selected feature subset W_K
Preselection:
    Calculate the mean absolute SHAP value of every feature;
    Remove from ℱ the features whose mean absolute SHAP value does not exceed the threshold;
Initialization:
    k ← 1;
    for i ∈ 0:K do
        winning set W_i ← ∅;
        winning set score s_i ← 0;
    end for
    Feature set bookkeeping: ℬ ← ∅;
while k ≤ K do
    Forward selection:
        Find the best feature to add: f ← argmax_f score(W_{k−1} ∪ f), ∀f ∈ ℱ, (W_{k−1} ∪ f) ∉ ℬ;
        Update bookkeeping: ℬ ← ℬ ∪ {W_{k−1} ∪ f};
        if score(W_{k−1} ∪ f) > s_k then
            Update winning selection: W_k ← W_{k−1} ∪ f;
            Update winning score: s_k ← score(W_k);
        end if
    Backward selection:
        while k > 2 do
            Find the best feature to drop: f ← argmax_f score(W_k \ f), ∀f ∈ W_k, (W_k \ f) ∉ ℬ;
            Update bookkeeping: ℬ ← ℬ ∪ {W_k \ f};
            if score(W_k \ f) > s_k then
                Update winning selection: W_{k−1} ← W_k \ f;
                Update winning score: s_{k−1} ← score(W_{k−1});
                k ← k − 1;
            else
                break;
            end if
        end while
    k ← k + 1;
end while
Algorithm 1: Modified SFFS
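For concreteness, a compact Python sketch of Algorithm 1 follows. The score() wrapper shown, cross-validated accuracy of an RBF-kernel SVM on a candidate subset, is one possible instantiation consistent with Section IV-B; the comparisons and bookkeeping follow the pseudocode above.

    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def make_score(X, y):
        """Wrapper scorer: 10-fold CV accuracy of an RBF SVM on a feature subset."""
        def score(subset):
            if not subset:
                return 0.0
            return cross_val_score(SVC(kernel="rbf"), X[:, sorted(subset)], y, cv=10).mean()
        return score

    def msffs(score, features, K):
        W = {i: frozenset() for i in range(K + 1)}   # winning set per size
        s = {i: 0.0 for i in range(K + 1)}           # winning score per size
        B = set()                                    # bookkeeping of evaluated sets
        k = 1
        while k <= K:
            # Forward: extend the historical best (k-1)-set by the best unseen feature
            cands = [(score(W[k-1] | {f}), W[k-1] | {f}) for f in features
                     if f not in W[k-1] and (W[k-1] | {f}) not in B]
            if cands:
                best_score, best = max(cands, key=lambda c: c[0])
                B.add(best)
                if best_score > s[k]:
                    W[k], s[k] = best, best_score
            # Backward: float down while dropping a feature beats the k-dim best,
            # as in the comparison used in Algorithm 1
            while k > 2:
                drops = [(score(W[k] - {f}), W[k] - {f})
                         for f in W[k] if (W[k] - {f}) not in B]
                if not drops:
                    break
                drop_score, dropped = max(drops, key=lambda c: c[0])
                B.add(dropped)
                if drop_score > s[k]:
                    W[k-1], s[k-1] = dropped, drop_score
                    k -= 1
                else:
                    break
            k += 1
        return W, s

Calling msffs(make_score(X_pre, y), range(X_pre.shape[1]), K=50) would reproduce the search over 1 to 50 features described above; memoizing score() is advisable in practice, as subsets may be re-scored across iterations.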

IV-C Model training and evaluation

Candidate feature selections were evaluated by their classification performance. We performed nested cross-validation using support vector machine (SVM) classifiers to obtain the accuracy scores.

In each fold of the outer 10-fold cross-validation loop, the data points of 90% of the individuals were used for training, on which another 10-fold cross-validation was run for parameter tuning and cross-validation score generation. The remaining 10% of the data were then used to obtain the test scores.
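A sketch of this nested procedure using scikit-learn, where X holds the selected features and y the group labels; the hyperparameter grid is illustrative, as the paper does not list the tuned parameters.

    from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
    from sklearn.svm import SVC

    inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

    # Inner loop: parameter tuning + cross-validation score on each 90% training split
    grid = GridSearchCV(SVC(kernel="rbf"),
                        {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1]},
                        cv=inner)

    # Outer loop: test score on each held-out 10%
    test_scores = cross_val_score(grid, X, y, cv=outer)
    print(test_scores.mean(), test_scores.std())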

IV-D Feature selection evaluation

To examine the effectiveness of our proposed mSFFS feature selection algorithm, we also trained models and obtained test scores through the same process, using features selected by the classical SFFS (from the mlxtend package [39], denoted by SFFS) and the top-n features ranked by SHAP, using the same number of features (denoted by SHAP).

For a wider comparison, we also evaluated various swarm-based feature selection schemes, including binary particle swarm optimization (BPSO) [40], the binary Harris Hawks optimizer (BHH) [41], binary grey wolf optimization (BGW) [42], and the improved dragonfly algorithm (IDA) [43]. We used the open-source Zoofs library (available at https://jaswinder9051998.github.io/zoofs/) with default settings to perform these swarm-based feature selections. The classification performance of feature selections obtained from these algorithms was compared with that obtained by mSFFS.

Figure 1: Classification accuracy scores when using an increasing number of mSFFS-selected features. The shaded area gives the confidence intervals of the cross-validation scores.

Figure 2: Testing scores using an increasing number of features. The shaded areas represent the confidence intervals for 10-fold tests.

Obviously, there will be partial overlaps between various feature selections as the relevant algorithms all target some good features. To evaluate the similarity between selected feature sets, we used the Jaccard index. Given two feature selections F1F_{1} and F2F_{2}, the Jaccard index is given by

\mathcal{J}(F_{1},F_{2})=\frac{|F_{1}\cap F_{2}|}{|F_{1}\cup F_{2}|}. (2)

We expect the Jaccard indices will be moderate between selected feature sets given by different selection algorithms.
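Computing the index is straightforward; a minimal helper over Python sets:

    def jaccard(f1, f2):
        """Jaccard index of Eq. (2) between two feature selections."""
        f1, f2 = set(f1), set(f2)
        return len(f1 & f2) / len(f1 | f2)

Applied to the final 20-feature selections, this yields the values reported in Section V-A.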

To further assess the quality of selected feature schemes, and investigate the differences between mSFFS and SFFS, we took both a qualitative approach (through visualization) and a quantitative approach (by measuring class separabilities).

We chose the t-distributed stochastic neighbor embedding (t-SNE) algorithm [44] to visualize the data. The Scikit-learn [45] implementation of t-SNE is used in this study, with the perplexity set to 25.
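A minimal sketch of the embedding step, assuming X_sel holds the selected features and y the class labels:

    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    emb = TSNE(n_components=2, perplexity=25, random_state=0).fit_transform(X_sel)
    plt.scatter(emb[:, 0], emb[:, 1], c=y)   # color points by class: pain vs. healthy
    plt.show()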

We use the Bhattacharyya distance as an additional metric to measure the separability of the feature selection schemes, independent of classifier choice.

Assume two normal distributions $p=N(\bm{\mu}_{1},\bm{\Sigma}_{1})$ and $q=N(\bm{\mu}_{2},\bm{\Sigma}_{2})$. The Bhattacharyya distance between them is given by

D_{B}(p,q)=\frac{1}{8}(\bm{\mu}_{1}-\bm{\mu}_{2})^{T}\bm{\Sigma}^{-1}(\bm{\mu}_{1}-\bm{\mu}_{2})+\frac{1}{2}\ln\left(\frac{|\bm{\Sigma}|}{\sqrt{|\bm{\Sigma}_{1}||\bm{\Sigma}_{2}|}}\right), (3)

where $\bm{\Sigma}=(\bm{\Sigma}_{1}+\bm{\Sigma}_{2})/2$. From this definition, we can see that the Bhattacharyya distance extends the Mahalanobis distance by adding an extra term that informs on the dissimilarity between the two covariance matrices. The Bhattacharyya distance has been effectively used for optimizing feature extraction [46] and transfer learning [47].
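Eq. (3) can be evaluated from sample estimates of the per-class means and covariances; the sketch below, assuming two arrays of class samples (in Section V-C these are the 2-D t-SNE embeddings of the pain and healthy groups), illustrates the computation.

    import numpy as np

    def bhattacharyya(X1, X2):
        """Bhattacharyya distance of Eq. (3) from sample means and covariances."""
        mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
        S1, S2 = np.cov(X1, rowvar=False), np.cov(X2, rowvar=False)
        S = (S1 + S2) / 2
        d = mu1 - mu2
        term1 = d @ np.linalg.solve(S, d) / 8
        term2 = 0.5 * np.log(np.linalg.det(S) /
                             np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
        return term1 + term2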

k | mSFFS selected features | t-score | SFFS selected features | t-score
1 | P4-O1(θ) | 0.754 | P4-O1(θ) | 0.754
2 | P4-O1(θ), Fp1-F7(θ) | 0.725 | Fp1-F7(θ), P4-O1(θ) | 0.725
3 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ) | 0.838 | Fp1-F7(θ), P4-O1(θ), T4-Pz(θ) | 0.838
4 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β) | 0.862 | Fp1-F7(θ), P4-O1(θ), T4-Pz(θ), C3-T4(β) | 0.873
5 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), C3-T4(β) | 0.861 | Fp1-F7(θ), P4-O1(θ), T4-Pz(θ), Fp1-F4(β), C3-T4(β) | 0.820
6 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), Fp2-T4(θ), Fz-C4(δ) | 0.918 | Fp1-F7(θ), P4-O1(θ), T4-Pz(θ), Fp1-F4(β), Cz-T6(β), C3-T4(β) | 0.873
7 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), Fp2-T4(θ), Fz-C4(δ), F8-Cz(δ) | 0.932 | Fp1-F7(θ), P4-O1(θ), T4-Pz(θ), Fp1-F4(β), F4-C4(δ), Cz-T6(β), C3-T4(β) | 0.82
8 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), Fp2-T4(θ), Fz-C4(δ), F8-Cz(δ), T4-Pz(γ) | 0.891 | Fp1-F7(θ), P4-O1(θ), T4-Pz(θ), F8-T3(γ), Fp1-F4(β), F4-C4(δ), Cz-T6(β), C3-T4(β) | 0.848
9 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), Fp2-T4(θ), Fz-C4(δ), F8-Cz(δ), T4-Pz(γ), Fp1-F8(δ) | 0.934 | Fp1-F7(θ), P4-O1(θ), T4-Pz(θ), F8-T3(γ), Fp1-F4(β), F7-P4(γ), F4-C4(δ), Cz-T6(β), C3-T4(β) | 0.875
10 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), Fp2-T4(θ), Fz-C4(δ), F8-Cz(δ), T4-Pz(γ), Fp1-F8(δ), T3-P3(γ) | 0.907 | Fp1-F7(θ), P4-O1(θ), T4-Pz(θ), F8-T3(γ), Fp1-F4(β), F7-P4(γ), F4-C4(δ), Cz-T6(β), F8-T4(β), C3-T4(β) | 0.836
11 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), Fp2-T4(θ), Fz-C4(δ), F8-Cz(δ), T4-Pz(γ), Fp1-F8(δ), T3-P3(γ), Fp1-F4(β) | 0.907 | Fp1-F7(θ), P4-O1(θ), T4-Pz(θ), F8-T3(γ), Fp1-F4(β), F7-P4(γ), F4-C4(δ), Cz-T6(β), F8-T4(β), C3-T4(β), C3-P3(α) | 0.873
12 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), Fp2-T4(θ), Fz-C4(δ), F8-Cz(δ), T4-Pz(γ), Fp1-F8(δ), T3-P3(γ), Fp1-F4(β), C3-O1(δ) | 0.879 | Fp1-F7(θ), P4-O1(θ), T4-Pz(γ), T4-Pz(θ), F8-T3(γ), Fp1-F4(β), F7-P4(γ), F4-C4(δ), Cz-T6(β), F8-T4(β), C3-T4(β), C3-P3(α) | 0.873
13 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), Fp2-T4(θ), Fz-C4(δ), F8-Cz(δ), T4-Pz(γ), Fp1-F8(δ), T3-P3(γ), Fp1-F4(β), C3-O1(δ), Fp1-F7(α) | 0.893 | Fp1-F7(θ), P4-O1(θ), T4-Pz(γ), T4-Pz(θ), F4-C3(α), F8-T3(γ), Fp1-F4(β), F7-P4(γ), F4-C4(δ), Cz-T6(β), F8-T4(β), C3-T4(β), C3-P3(α) | 0.888
14 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), Fp2-T4(θ), Fz-C4(δ), F8-Cz(δ), T4-Pz(γ), Fp1-F8(δ), T3-P3(γ), Fp1-F4(β), C3-O1(δ), Fp1-F7(α), F7-P4(γ) | 0.879 | Fp1-F7(θ), P4-O1(θ), T4-Pz(γ), T4-Pz(θ), F4-C4(θ), F4-C3(α), F8-T3(γ), Fp1-F4(β), F7-P4(γ), F4-C4(δ), Cz-T6(β), F8-T4(β), C3-T4(β), C3-P3(α) | 0.875
15 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), Fp2-T4(θ), Fz-C4(δ), F8-Cz(δ), T4-Pz(γ), Fp1-F8(δ), T3-P3(γ), Fp1-F4(β), C3-O1(δ), Fp1-F7(α), F7-P4(γ), T5-O1(θ) | 0.907 | Fp1-F7(θ), P4-O1(θ), T4-Pz(γ), T4-Pz(θ), F4-C4(θ), Fz-P4(γ), F4-C3(α), F8-T3(γ), Fp1-F4(β), F7-P4(γ), F4-C4(δ), Cz-T6(β), F8-T4(β), C3-T4(β), C3-P3(α) | 0.852
16 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), F8-Cz(δ), Fp1-F4(β), T5-O1(θ), Fz-P4(γ), F8-T4(β), F3-F4(α), C3-T4(β), F4-F8(α), Fz-F4(α), F8-T5(δ), Fz-Cz(δ), Fp2-F7(θ) | 0.948 | Fp1-F7(θ), P4-O1(θ), T4-Pz(γ), T4-Pz(θ), F4-C4(θ), Fz-P4(γ), F4-C3(α), F8-T3(γ), Fp1-F4(β), F7-P4(γ), F4-C4(δ), Cz-T6(β), F4-P4(β), F8-T4(β), C3-T4(β), C3-P3(α) | 0.777
17 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), F8-Cz(δ), Fp1-F4(β), T5-O1(θ), F8-T3(γ), Fz-P4(γ), F8-T4(β), F3-F4(α), C3-T4(β), F4-F8(α), Fz-F4(α), F8-T5(δ), Fz-Cz(δ), Fp2-F7(θ) | 0.948 | Fp1-F7(θ), P4-O1(θ), T4-Pz(γ), T4-Pz(θ), F4-C4(θ), Fp2-T4(θ), Fz-P4(γ), F4-C3(α), F8-T3(γ), Fp1-F4(β), F7-P4(γ), F4-C4(δ), Cz-T6(β), F4-P4(β), F8-T4(β), C3-T4(β), F8-Cz(δ) | 0.877
18 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), F8-Cz(δ), Fp1-F4(β), T5-O1(θ), F8-T3(γ), Fz-P4(γ), F8-T4(β), F3-F4(α), T3-C3(γ), C3-T4(β), F4-F8(α), Fz-F4(α), F8-T5(δ), Fz-Cz(δ), Fp2-F7(θ) | 0.934 | Fp1-F7(θ), P4-O1(θ), T4-Pz(γ), T4-Pz(θ), F4-C4(θ), Fp2-T4(θ), Fz-P4(γ), F4-C3(α), F8-T3(γ), T3-C3(γ), Fp1-F4(β), F7-P4(γ), F4-C4(δ), Cz-T6(β), F4-P4(β), F8-T4(β), C3-T4(β), F8-Cz(δ) | 0.850
19 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), F8-Cz(δ), Fp1-F4(β), T5-O1(θ), T5-P3(δ), F8-T3(γ), Fz-P4(γ), F8-T4(β), F3-F4(α), T3-C3(γ), C3-T4(β), F4-F8(α), Fz-F4(α), F8-T5(δ), Fz-Cz(δ), Fp2-F7(θ) | 0.961 | Fp1-F7(θ), P4-O1(θ), T4-Pz(γ), T4-Pz(θ), F4-C4(θ), Fp2-T4(θ), Fz-P4(γ), F4-C3(α), F8-T3(γ), T3-C3(γ), Fz-F8(δ), Fp1-F4(β), F7-P4(γ), F4-C4(δ), Cz-T6(β), F4-P4(β), F8-T4(β), C3-T4(β), F8-Cz(δ) | 0.864
20 | P4-O1(θ), Fp1-F7(θ), T4-Pz(θ), F4-P4(β), F8-Cz(δ), Fp1-F4(β), T5-O1(θ), T3-T6(δ), T5-P3(δ), F8-T3(γ), Fz-P4(γ), F8-T4(β), F3-F4(α), T3-C3(γ), C3-T4(β), F4-F8(α), Fz-F4(α), F8-T5(δ), Fz-Cz(δ), Fp2-F7(θ) | 0.975 | Fp1-F7(θ), P4-O1(θ), T4-Pz(γ), T4-Pz(θ), F4-C4(θ), Fp2-T4(θ), Fz-P4(γ), F4-C3(α), F8-T3(γ), T3-C3(γ), Fz-F8(δ), Fp1-F4(β), F7-P4(γ), F4-C4(δ), C3-O1(δ), Cz-T6(β), F4-P4(β), F8-T4(β), C3-T4(β), F8-Cz(δ) | 0.836
TABLE I: Evolution of feature selection for mSFFS and SFFS as the number of features k increases from 1 to 20.

V Results

V-A Selected features

We produced feature selections of increasing dimensionality (from 1 to 50) and obtained their corresponding cross-validation scores (Fig. 1) and test accuracies (Fig. 2). According to the cross-validation curve, using 20 features produces the best mean validation score of 0.960 with a relatively small standard deviation of 0.018. This happens to be where the mean testing score peaks (0.975). We, therefore, use these 20 features as the final selected connectivity features. They are: P4-O1 theta, Fp1-F7 theta, T4-Pz theta, F4-P4 beta, F8-Cz delta, Fp1-F4 beta, T5-O1 theta, T3-T6 delta, T5-P3 delta, F8-T3 gamma, Fz-P4 gamma, F8-T4 beta, F3-F4 alpha, T3-C3 gamma, C3-T4 beta, F4-F8 alpha, Fz-F4 alpha, F8-T5 delta, Fz-Cz delta, and Fp2-F7 theta.

Evolution of feature selections

To gain some insight into the floating selection processes of SFFS and mSFFS and how the selection outcomes evolve, Table I displays the selected feature sets as the number of features increases from 1 to 20. Starting from the same first two chosen features, the feature selections of the two schemes begin to diverge from k = 4, yet maintain some similarity to each other.

Fig. 3 sheds some light on how the search paths of SFFS and mSFFS differ over time. As can be seen, for n = 1, …, 20, SFFS mainly follows incremental addition of the next best feature, with n = 17 being the only exception where this linear growth pattern is disrupted and two new features are selected in that round. For mSFFS, there is a similar small disruption at n = 6, and also a major disruption at n = 16, where nine new features make it into the selection. This indicates that mSFFS is more capable than SFFS of escaping local-minimum traps and branching out onto new search paths.

Correlating this behaviour with the validation and testing performance trends shown in Figs. 1 and 2, we can see that n = 16 is where mSFFS starts to gain a visible performance margin over SFFS, before reaching the performance peak at n = 20.

Figure 3: Incremental differences of feature selections by SFFS and mSFFS.

Figure 4: Jaccard index of the two evolving feature selections over time.

Jaccard indices

We further employ the Jaccard index between feature selection outcomes to quantify the agreement between two feature schemes. As shown in Fig. 4, as n increases from 1 to 20, the two feature schemes deviate from initial full overlap (𝒥(n) = 1 for n = 1, 2, 3), drop to the lowest score at n = 10, and then develop some moderate overlap again.

Upon reaching k = 20, the mSFFS selection gives a Jaccard index of 0.379 against that of SFFS. It overlaps less with the top-20 feature set (the 20 features with the highest SHAP values), with a Jaccard index of 0.333, while the same index between the SFFS and top-20 selections is 0.538. Hence mSFFS’s search route deviates further from the top-20 set, allowing it to explore and include alternative features that are less redundant with respect to existing features but contribute more to classification.

Figure 5: Mean absolute SHAP values of the features selected by three feature selection schemes: (a) top-20 SHAP; (b) SFFS, 20 features; (c) mSFFS, 20 features.

SHAP value comparison

The sorted mean absolute SHAP values of the features selected by the three feature selection schemes are shown in Fig. 5. The difference in mSFFS’s search outcome is also reflected here. Clearly, mSFFS does not simply favor features with large SHAP values, which may help it overcome the simple greedy nature of SFFS, allowing features with smaller SHAP values to be chosen and eventually contribute to better performance.

V-B Classification performance

We followed the nested cross-validation process described in Section IV-C. Using the mSFFS scheme, the average cross-validation score on the training set is 0.960 (±0.018), and the average test score is 0.975 (±0.050).

We compared the classification performance of the mSFFS scheme with SFFS, top-20 SHAP selections, selections given by the swarm-based schemes, Boruta [48], and Minimum Redundancy Maximum Relevance (mRMR) [49]. Table II gives the number of features and the test performance of each feature selection algorithm. None of the swarm-based feature selection schemes, including BPSO [50], BHH [41], BGW [42], and IDA [43], effectively reduced the feature dimension: all resulted in dimensionalities larger than 400, and the corresponding feature selections also performed poorly in classification. Boruta reduced the number of features to nine, but its testing performance is inadequate. The other methods, including mRMR, SHAP, and SFFS, can specify the number of features needed, but failed to produce optimal results, performing worse than mSFFS.

Algorithm No. features Accuracy F1 Precision Recall
BPSO 417 0.607 0.501 0.617 0.508
BHH 556 0.539 0.478 0.511 0.508
BGW 615 0.552 0.468 0.532 0.450
IDA 404 0.602 0.532 0.600 0.558
Boruta 9 0.784 0.807 0.757 0.875
mRMR 20 0.864 0.869 0.847 0.917
SHAP 20 0.795 0.799 0.838 0.808
SFFS 20 0.836 0.858 0.818 0.925
mSFFS 20 0.975 0.892 0.922 0.875
TABLE II: Feature selection schemes and their testing performance in comparison.
k mSFFS vs SFFS mSFFS vs SHAP
t-statistic p-value t-statistic p-value
1 0.00 1.00 1.36 0.19
2 0.00 1.00 0.00 1.00
3 0.00 1.00 1.09 0.29
4 -0.14 0.89 1.09 0.29
5 0.67 0.51 0.77 0.45
6 0.68 0.51 2.12 0.05
7 1.75 0.10 2.55 0.02
8 0.70 0.49 1.22 0.24
9 1.03 0.32 2.07 0.05
10 1.24 0.23 1.20 0.24
11 1.60 0.56 1.39 0.18
12 0.09 0.93 0.23 0.82
13 0.09 0.93 0.56 0.58
14 0.07 0.95 0.00 1.00
15 1.06 0.30 0.81 0.43
16 2.69 0.02 3.64 0.00
17 1.81 0.09 3.95 0.00
18 2.15 0.05 2.88 0.01
19 2.63 0.03 3.77 0.00
20 4.32 0.00 3.90 0.00
21 1.31 0.21 2.83 0.01
22 2.24 0.04 3.70 0.00
23 1.81 0.09 3.00 0.01
24 2.85 0.01 2.66 0.02
25 1.28 0.22 3.03 0.01
26 2.62 0.02 3.33 0.00
27 2.24 0.04 2.99 0.01
28 2.16 0.04 2.42 0.03
29 2.05 0.05 2.19 0.04
30 1.76 0.09 1.75 0.10
TABLE III: T-test results on the testing scores obtained from mSFFS selections versus those by SFFS and top SHAP values, over a range of selected feature numbers (k) up to 30; results for larger k were statistically insignificant and are hence omitted. mSFFS outperforms both SFFS and SHAP with statistical significance (p ≤ 0.05) at k = 16, 18–20, 22, 24, and 26–29.

Figure 6: Outcome of t-SNE visualization of data distributions using: (a) all 855 features; (b) 20 top-ranked SHAP features; (c) 20 features selected by SFFS; (d) 20 features selected by mSFFS.

V-C Detailed evaluations

Further analyses demonstrate the differences between mSFFS and SFFS. Specifically, t-tests show that the performance scores are significantly different; t-SNE illustrates a clear visual difference; and the Bhattacharyya distance evaluation suggests that the classes are better separated using the mSFFS features. To gain insight into how these two algorithms incorporate features with high importance, the top-ranked SHAP selection is also included in the comparison.

t-test

We conducted two-sided t-tests to compare the test scores obtained from each scheme using 20 features. The comparison between mSFFS and SFFS had a t-statistic of 4.316 and a p-value of 0.0004, indicating a statistically significant difference between the two schemes. Similarly, the comparison between mSFFS and the SHAP selection yielded a t-statistic of 3.896 with a p-value of 0.001, suggesting a significant difference between these two schemes as well. The t-test results over the range of selection dimensionalities (k = 1–30) are detailed in Table III. On 10 out of 30 occasions, the performance gains of mSFFS are statistically significant against both SFFS and SHAP.

t-SNE visualization

The testing performance was aligned with the t-SNE visualization outcome. Figure 6 reveals the distribution of the two-class data samples before and after feature selection. Of the three feature schemes (mSFFS, SFFS, and SHAP), the mSFFS-selected features show the clearest separation between pain and healthy participants, which demonstrates the superiority of mSFFS in selecting features that contribute to differentiating the classes.

Bhattacharyya separabilities

In addition to the visual assessment, we also measured the dissimilarity of the healthy and pain classes’ probability distributions using the Bhattacharyya distance ($D_B$). This is accomplished by taking the features selected through the different schemes, embedding them into a 2-dimensional space by t-SNE, and then calculating the distance. The result indicates that the data characterized by the mSFFS features exhibit considerably greater segregation ($D_B=1.492$) between pain and healthy individuals when compared with SFFS ($D_B=1.256$) and SHAP-rank ($D_B=0.223$).

To sum up, our results suggest that mSFFS is a more effective feature selection scheme than SFFS, SHAP-based selection, and state-of-the-art swarm-based algorithms. In general, our proposed mSFFS scheme provides feature selections with better visualization and better separability, and can achieve higher accuracies with only a few features.

V-D Discussion

Our basic approach of searching for optimal combinations of features is well aligned with a network-science approach to diagnosing pain in the brain [51]. By exploring coherence or interaction among sensor nodes, our feature selection algorithm deviates from top-ranked individual features and instead searches for top-performing combinations, as clearly shown in the search paths given in Table I and the final feature compositions given in Fig. 5. Comparing SFFS and mSFFS, it seems that mSFFS searches deeper in the diverse feature space and deviates further from the plain top-20 selection. Although weaker features are included in the mSFFS selection, the combination works best and leads to the improvements observed in the visualization (Fig. 6) and in the quantitative assessments of class separability and classification accuracy given in Section V.

The connectivity feature selection is identified in the sensor space using raw EEG signals instead of the source space, which would often require manual data cleaning and extra processing to solve the inverse problem of retrieving the brain sources that generate the activity detected by the EEG sensors. The sensors involved, however, might be related to activity coming from the somatosensory cortices (Pz and P4), i.e., the lateral pathway, and the anterior cingulate cortex (Fz and Cz), i.e., the medial pathway. The regions of the sensor signature we found overlap with some of the pain-predictive brain regions found in other studies [52, 15, 22]. The activity in the theta band (three top features selected by mSFFS, with sensors including frontal and right temporal sensors) is in agreement with the reported high theta band power in right frontal and temporal regions for subjects with high knee pain intensity [53]. Four beta band activities are selected into the 20-feature selection but with moderate importance, also partially overlapping (Fp1, F4, F8) with the reported beta activities [53].

It is interesting to see that sensors at regions related to emotional memory and cognitive processing (e.g., P4 and T4) are included in the connectivity feature selection, even though these seem not to have been reported in other studies.

VI Conclusion

Despite being an important healthcare issue, chronic pain has received little data-centred treatment in the research literature. In this paper, we have demonstrated that a simple, cheap, and clinically useful method of chronic knee pain prediction can operate on raw EEG data. This permits the development of a clinically applicable device that can detect and diagnose the presence of pain in an objective, purely data-driven way, with a test accuracy of 97.5%. This is highly relevant from both a research and a clinical point of view. We consider the effectiveness and compactness of the selected functional connectivity a significant advantage, as this may potentially lead to cheap hardware implementations that can carry out automatic diagnosis of chronic knee pain, replacing the current, subjective self-reporting practice. The finding of a compact, reliable sensor-level connectivity signature for chronic knee pain could also lead to potential neuromodulation applications for chronic knee pain relief and treatment.

The performance gain we have achieved is due to the modified SFFS algorithm, whose better feature selection quality relative to SFFS and the other state-of-the-art methods in comparison has been demonstrated from multiple perspectives: better cross-validation performance, enhanced external testing results, better t-SNE visualization outcomes, as well as improved quantitative class separability between the pain and healthy control groups. We have also analyzed the search behaviour of the feature selection algorithms, which reveals that our algorithm tends to have more dynamics to escape from local minima and arrive at more competitive feature selections.

We utilized EEG data acquired from two sources in this study. However, so-called “domain gaps” have been reported in the literature; namely, trained predictive models may have reduced generalization ability on EEGs acquired from diverse sources [54]. Another limitation of the current study is that we did not pursue a deep-learning solution, owing to the small amount of data we had.

For future work, we will seek to augment our datasets by including recordings from different sources. With larger and more diverse data available, subsequent investigations will explore deep domain adaptation techniques [55, 56] and develop classification models that perform steadily across acquisition sources.

References

  • [1] J. J. Bonica, “The need of a taxonomy.” Pain, vol. 6, no. 3, pp. 247–248, 1979.
  • [2] R.-D. Treede, W. Rief, A. Barke, Q. Aziz, M. I. Bennett, R. Benoliel, M. Cohen, S. Evers, N. B. Finnerup, M. B. First, M. A. Giamberardino, S. Kaasa, B. Korwisi, E. Kosek, P. Lavand’homme, M. Nicholas, S. Perrot, J. Scholz, S. Schug, B. H. Smith, P. Svensson, J. W. S. Vlaeyen, and S.-J. Wang, “Chronic pain as a symptom or a disease: the IASP classification of chronic pain for the international classification of diseases (ICD-11),” Pain, vol. 160, no. 1, pp. 19–27, Jan. 2019.
  • [3] J. Scholz, N. B. Finnerup, N. Attal, Q. Aziz, R. Baron, M. I. Bennett, R. Benoliel, M. Cohen, G. Cruccu, K. D. Davis, S. Evers, M. First, M. A. Giamberardino, P. Hansson, S. Kaasa, B. Korwisi, E. Kosek, P. Lavand’homme, M. Nicholas, T. Nurmikko, S. Perrot, S. N. Raja, A. S. C. Rice, M. C. Rowbotham, S. Schug, D. M. Simpson, B. H. Smith, P. Svensson, J. W. S. Vlaeyen, S.-J. Wang, A. Barke, W. Rief, R.-D. Treede, and Classification Committee of the Neuropathic Pain Special Interest Group (NeuPSIG), “The IASP classification of chronic pain for ICD-11: chronic neuropathic pain,” Pain, vol. 160, no. 1, pp. 53–59, Jan. 2019.
  • [4] R. Kuner and H. Flor, “Structural plasticity and reorganisation in chronic pain,” Nat. Rev. Neurosci., vol. 18, no. 1, pp. 20–30, Dec. 2016.
  • [5] S. E. E. Mills, K. P. Nicolson, and B. H. Smith, “Chronic pain: a review of its epidemiology and associated factors in population-based studies,” Br. J. Anaesth., vol. 123, no. 2, pp. e273–e283, Aug. 2019.
  • [6] O. Moriarty, B. E. McGuire, and D. P. Finn, “The effect of pain on cognitive function: a review of clinical and preclinical research,” Prog. Neurobiol., vol. 93, no. 3, pp. 385–404, Mar. 2011.
  • [7] H. Breivik, B. Collett, V. Ventafridda, R. Cohen, and D. Gallacher, “Survey of chronic pain in Europe: prevalence, impact on daily life, and treatment,” Eur. J. Pain, vol. 10, no. 4, pp. 287–333, May 2006.
  • [8] Y. Zheng, T. Zhang, X. Yang, Z. Feng, F. Qiu, G. Xin, J. Liu, F. Nie, X. Jin, and Y. Liu, “A survey of chronic pain in China,” Libyan J. Med., vol. 15, no. 1, p. 1730550, Dec. 2020.
  • [9] E. N. Mutubuki, Y. Beljon, E. T. Maas, F. J. P. M. Huygen, R. W. J. G. Ostelo, M. W. van Tulder, and J. M. van Dongen, “The longitudinal relationships between pain severity and disability versus health-related quality of life and costs among chronic low back pain patients,” Quality of Life Research, vol. 29, no. 1, pp. 275–287, Jan. 2020.
  • [10] GBD 2017 Disease and Injury Incidence and Prevalence Collaborators, “Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories, 1990-2017: a systematic analysis for the global burden of disease study 2017,” Lancet, vol. 392, no. 10159, pp. 1789–1858, Nov. 2018.
  • [11] D. J. Gaskin and P. Richard, “The economic costs of pain in the United States,” Journal of Pain, vol. 13, no. 8, pp. 715–724, Aug. 2012.
  • [12] M. W. van Tulder, B. W. Koes, and L. M. Bouter, “A cost-of-illness study of back pain in the Netherlands,” Pain, vol. 62, no. 2, pp. 233–240, Aug. 1995.
  • [13] J. Dahlhamer, J. Lucas, C. Zelaya, R. Nahin, S. Mackey, L. DeBar, R. Kerns, M. Von Korff, L. Porter, and C. Helmick, “Prevalence of chronic pain and high-impact chronic pain among adults - United States, 2016,” MMWR Morb. Mortal. Wkly. Rep., vol. 67, no. 36, pp. 1001–1006, Sep. 2018.
  • [14] T. Jackson, S. Thomas, V. Stabile, X. Han, M. Shotwell, and K. McQueen, “Prevalence of chronic pain in low-income and middle-income countries: a systematic review and meta-analysis,” Lancet, vol. 385 Suppl 2, p. S10, Apr. 2015.
  • [15] T. D. Wager, L. Y. Atlas, M. A. Lindquist, M. Roy, C.-W. Woo, and E. Kross, “An fMRI-based neurologic signature of physical pain,” N. Engl. J. Med., vol. 368, no. 15, pp. 1388–1397, Apr. 2013.
  • [16] C. L. Stucky, M. S. Gold, and X. Zhang, “Mechanisms of pain,” Proc. Natl. Acad. Sci. U. S. A., vol. 98, no. 21, pp. 11 845–11 846, Oct. 2001.
  • [17] M. P. Jensen, M. A. Day, and J. Miró, “Neuromodulatory treatments for chronic pain: efficacy and mechanisms,” Nat. Rev. Neurol., vol. 10, no. 3, pp. 167–178, Mar. 2014.
  • [18] J. A. Kim, R. L. Bosma, K. S. Hemington, A. Rogachov, N. R. Osborne, J. C. Cheng, B. T. Dunkley, and K. D. Davis, “Sex-differences in network level brain dynamics associated with pain sensitivity and pain interference,” Human Brain Mapping, vol. 42, no. 3, pp. 598–614, 2021.
  • [19] J. A. Kim, R. L. Bosma, K. S. Hemington, A. Rogachov, N. R. Osborne, J. C. Cheng, J. Oh, B. T. Dunkley, and K. D. Davis, “Cross-network coupling of neural oscillations in the dynamic pain connectome reflects chronic neuropathic pain in multiple sclerosis,” NeuroImage: Clinical, vol. 26, p. 102230, 2020.
  • [20] J. A. Kim and K. D. Davis, “Neural oscillations: understanding a neural code of pain,” The Neuroscientist, vol. 27, no. 5, pp. 544–570, 2021.
  • [21] D. De Ridder, S. Perera, and S. Vanneste, “State of the art: Novel applications for cortical stimulation,” Neuromodulation, vol. 20, no. 3, pp. 206–214, Apr. 2017.
  • [22] S. Vanneste, J.-J. Song, and D. De Ridder, “Thalamocortical dysrhythmia detected by machine learning,” Nature Communications, vol. 9, no. 1, pp. 1–13, 2018.
  • [23] T. Nezam, R. Boostani, V. Abootalebi, and K. Rastegar, “A novel classification strategy to distinguish five levels of pain using the EEG signal features,” IEEE Transactions on Affective Computing, vol. 12, no. 1, pp. 131–140, 2018.
  • [24] A. Vuckovic, V. J. F. Gallardo, M. Jarjees, M. Fraser, and M. Purcell, “Prediction of central neuropathic pain in spinal cord injury based on EEG classifier,” Clinical Neurophysiology, vol. 129, no. 8, pp. 1605–1617, 2018.
  • [25] V. Vijayakumar, M. Case, S. Shirinpour, and B. He, “Quantifying and characterizing tonic thermal pain across subjects from EEG data using random forest models,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 12, pp. 2988–2996, 2017.
  • [26] F.-J. Hsiao, W.-T. Chen, L.-L. H. Pan, H.-Y. Liu, Y.-F. Wang, S.-P. Chen, K.-L. Lai, and S.-J. Wang, “Machine learning–based prediction of heat pain sensitivity by using resting-state EEG,” Frontiers in Bioscience-Landmark, vol. 26, no. 12, pp. 1537–1547, 2021.
  • [27] C. Okolo and A. Omurtag, “Use of dry electroencephalogram and support vector for objective pain assessment,” Biomedical Instrumentation & Technology, vol. 52, no. 5, pp. 372–378, 2018.
  • [28] M. Jas, D. A. Engemann, Y. Bekhti, F. Raimondo, and A. Gramfort, “Autoreject: Automated artifact rejection for MEG and EEG data,” NeuroImage, vol. 159, pp. 417–429, 2017.
  • [29] F. Perrin, O. Bertrand, and J. Pernier, “Scalp current density mapping: value and estimation from potential data,” IEEE Transactions on Biomedical Engineering, no. 4, pp. 283–288, 1987.
  • [30] F. Perrin, J. Pernier, O. Bertrand, and J. F. Echallier, “Spherical splines for scalp potential and current density mapping,” Electroencephalography and Clinical Neurophysiology, vol. 72, no. 2, pp. 184–187, 1989.
  • [31] M. X. Cohen, Analyzing neural time series data: theory and practice.   MIT Press, 2014.
  • [32] J. Kayser and C. E. Tenke, “On the benefits of using surface Laplacian (current source density) methodology in electrophysiology,” International Journal of Psychophysiology, vol. 97, no. 3, p. 171, 2015.
  • [33] R. Bruña, F. Maestú, and E. Pereda, “Phase locking value revisited: teaching new tricks to an old dog,” Journal of neural engineering, vol. 15, no. 5, p. 056011, 2018.
  • [34] A. Gramfort, M. Luessi, E. Larson, D. A. Engemann, D. Strohmeier, C. Brodbeck, R. Goj, M. Jas, T. Brooks, L. Parkkonen, and M. S. Hämäläinen, “MEG and EEG data analysis with MNE-Python,” Frontiers in Neuroscience, vol. 7, no. 267, pp. 1–13, 2013.
  • [35] P. Pudil, J. Novovičová, and J. Kittler, “Floating search methods in feature selection,” Pattern Recognition Letters, vol. 15, no. 11, pp. 1119–1125, 1994.
  • [36] S. M. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds.   Curran Associates, Inc., 2017, pp. 4765–4774.
  • [37] T. Chen and C. Guestrin, “XGBoost: A scalable tree boosting system,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’16.   New York, NY, USA: Association for Computing Machinery, 2016, p. 785–794.
  • [38] G. James, D. Witten, T. Hastie, and R. Tibshirani, An Introduction to Statistical Learning: with Applications in R, 2nd ed.   Springer, 2021.
  • [39] S. Raschka, “MLxtend: Providing machine learning and data science utilities and extensions to python’s scientific computing stack,” The Journal of Open Source Software, vol. 3, no. 24, Apr. 2018.
  • [40] B. Xue, M. Zhang, and W. N. Browne, “Particle swarm optimization for feature selection in classification: A multi-objective approach,” IEEE Transactions on Cybernetics, vol. 43, no. 6, pp. 1656–1671, 2013.
  • [41] T. Thaher, A. A. Heidari, M. Mafarja, J. S. Dong, and S. Mirjalili, “Binary Harris Hawks optimizer for high-dimensional, low sample size feature selection,” Evolutionary Machine Learning Techniques: Algorithms and Applications, pp. 251–272, 2020.
  • [42] E. Emary, H. M. Zawbaa, and A. E. Hassanien, “Binary grey wolf optimization approaches for feature selection,” Neurocomputing, vol. 172, pp. 371–381, 2016.
  • [43] A. I. Hammouri, M. Mafarja, M. A. Al-Betar, M. A. Awadallah, and I. Abu-Doush, “An improved dragonfly algorithm for feature selection,” Knowledge-Based Systems, vol. 203, p. 106131, 2020.
  • [44] L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” Journal of Machine Learning Research, vol. 9, no. 86, pp. 2579–2605, 2008.
  • [45] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
  • [46] E. Choi and C. Lee, “Feature extraction based on the Bhattacharyya distance,” Pattern Recognition, vol. 36, no. 8, pp. 1703–1709, Aug. 2003.
  • [47] M. Pándy, A. Agostinelli, J. Uijlings, V. Ferrari, and T. Mensink, “Transferability estimation using Bhattacharyya class separability,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 9162–9172.
  • [48] M. B. Kursa, A. Jankowski, and W. R. Rudnicki, “Boruta–a system for feature selection,” Fundamenta Informaticae, vol. 101, no. 4, pp. 271–285, 2010.
  • [49] H. Peng, F. Long, and C. Ding, “Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1226–1238, 2005.
  • [50] B. Tran, B. Xue, and M. Zhang, “Overview of particle swarm optimisation for feature selection in classification,” in Simulated Evolution and Learning: 10th International Conference, SEAL 2014, Dunedin, New Zealand, December 15-18, 2014. Proceedings 10.   Springer, 2014, pp. 605–617.
  • [51] B. Deak, T. Eggert, A. Mayr, A. Stankewitz, F. Filippopulos, P. Jahn, V. Witkovsky, A. Straube, and E. Schulz, “Intrinsic network activity reflects the fluctuating experience of tonic pain,” Cereb. Cortex, Jan. 2022.
  • [52] L. Kohoutová, L. Y. Atlas, C. Büchel, J. T. Buhle, S. Geuter, M. Jepma, L. Koban, A. Krishnan, D. H. Lee, S. Lee et al., “Individual variability in brain representations of pain,” Nature Neuroscience, pp. 1–11, 2022.
  • [53] M. Simis, M. Imamura, K. Pacheco-Barrios, A. Marduy, P. S. de Melo, A. J. Mendes, P. E. P. Teixeira, L. Battistella, and F. Fregni, “EEG theta and beta bands as brain oscillations for different knee osteoarthritis phenotypes according to disease severity,” Scientific Reports, vol. 12, no. 1, Jan. 2022.
  • [54] X. Chai, Q. Wang, Y. Zhao, Y. Li, D. Liu, X. Liu, and O. Bai, “A fast, efficient domain adaptation technique for cross-domain electroencephalography(EEG)-based emotion recognition,” Sensors (Basel), vol. 17, no. 5, May 2017.
  • [55] J. Hou, X. Ding, J. D. Deng, and S. Cranefield, “Deep adversarial transition learning using cross-grafted generative stacks,” Neural Networks, vol. 149, pp. 172–183, 2022.
  • [56] Q. Wang, F. Meng, and T. P. Breckon, “Data augmentation with norm-ae and selective pseudo-labelling for unsupervised domain adaptation,” Neural Networks, vol. 161, pp. 614–625, 2023.