
Mental-Perceiver: Audio-Textual Multimodal Learning for Mental Health Assessment


Jinghui Qin1*, Changsong Liu2,3*, Tianchi Tang2, Dahuang Liu2, Minghao Wang2, Qianying Huang2, Yang Xu4, Rumin Zhang5

1 Guangdong University of Technology, 2 Shuye Intelligent Co., Ltd., 3 University of Toronto, 4 University of California, Berkeley, 5 Ningbo Institute of Digital Twin (EIAS)

* Equal contribution
Abstract

Mental disorders, such as anxiety and depression, have become a global issue that affects the regular lives of people across different ages. Without proper detection and treatment, anxiety and depression can hinder the sufferer’s study, work, and daily life. Fortunately, recent advancements in digital and AI technologies provide new opportunities for better mental health care, and many efforts have been made to develop automatic anxiety and depression assessment techniques. However, this field still lacks a publicly available large-scale dataset that can facilitate the development and evaluation of AI-based techniques. To address this limitation, we have constructed a new large-scale Multi-Modal Psychological assessment corpus (MMPsy) for anxiety and depression assessment of Mandarin-speaking adolescents. MMPsy contains audio recordings and extracted transcripts of responses from automated anxiety or depression assessment interviews, along with the participants’ self-reported anxiety or depression evaluations obtained using standard mental health assessment questionnaires. Our dataset contains over 7,700 post-processed interview recordings for anxiety assessment and over 4,200 recordings for depression assessment. Using this dataset, we have developed a novel deep-learning-based mental disorder estimation model, named Mental-Perceiver, to detect anxious/depressive mental states from recorded audio and transcript data. Extensive experiments on our MMPsy and the commonly used DAIC-WOZ datasets have shown the effectiveness and superiority of our proposed Mental-Perceiver model in anxiety and depression detection. The MMPsy dataset will be made publicly available later to facilitate the research and development of AI-based techniques in the mental health care field.

Introduction

Anxiety and depression are two common mental health disorders that significantly impact patients, characterized by a cluster of similar signs and symptoms. Individuals suffering from anxiety or depression often experience a persistent state of distress, excessive fear and worry, lack of interest in everyday activities, and diminished energy levels. Beyond emotional disturbances, people with anxiety or depression also exhibit various physiological symptoms, such as weight loss, appetite changes, insomnia, and menopause in female patients (Cowen, Harrison, and Burns 2012). Due to these psychological and physiological impacts, anxiety and depression can hinder their sufferers’ study, work, and daily life. These mental disorders have become a thorny issue in our modern society. According to a 2019 report by the World Health Organization (WHO) (de la Santé 2019), over 301 million people, including 58 million children and adolescents, suffer from anxiety disorders, while more than 280 million people, including 23 million children and adolescents, live with depression. Despite the widespread prevalence, these disorders are frequently under- or mis-diagnosed and the treatment rates remain low (Kessler 2012). This is largely due to the time-consuming and costly nature of diagnosis and treatment. Additionally, patients, particularly children and adolescents, may conceal their true mental state out of fear of stigma during mental health assessments such as self-report screenings or clinical interviews, leading to misdiagnosis.

Fortunately, recent advancements in digital and AI technologies provide new opportunities for better mental health care, and many efforts have been made to develop automatic anxiety and depression assessment techniques (Mao, Wu, and Chen 2023; Hecker et al. 2022). Despite the pioneering explorations and achievements in this field, however, several significant limitations hinder current research efforts. Firstly, many existing methods rely heavily on the expertise of psychologists during interviews for data collection, which incurs substantial costs and makes it difficult to construct large-scale annotated datasets. As a result, the size of current datasets related to anxiety and depression detection, such as DAIC-WoZ (Gratch et al. 2014) and AViD-Corpus (Valstar et al. 2014), is limited, typically involving only around 100 participants. Although Shen et al. (Shen, Yang, and Lin 2022) developed the EATD-Corpus, an emotional audio-textual depression detection dataset based on the Self-Rating Depression Scale (SDS), their dataset includes only 162 volunteers from a single university. Moreover, the validity of the SDS questionnaires completed by these volunteers was not verified, casting doubt on the reliability of the results. Consequently, constructing larger datasets at a lower cost remains a significant challenge for enhancing automatic anxiety and depression detection systems.

In this work, we address key challenges in the study of automatic anxiety and depression detection, aiming to extend research into more realistic settings. First, we introduce a novel large-scale Multi-Modal Psychological assessment corpus (MMPsy) focused on anxiety and depression in Mandarin-speaking adolescents. The MMPsy corpus includes audio recordings and corresponding transcripts of responses from both anxious/depressed and non-anxious/non-depressed adolescent volunteers, collected using a human-computer interaction system. Self-reported anxiety and depression rating scales were also collected and verified using the same system, and they serve as the gold standard for training and evaluating machine-learning-based techniques. After rigorous data preprocessing and cleaning, the corpus comprises 7,758 interviews for anxiety detection and 4,266 for depression detection. To the best of our knowledge, MMPsy is the first publicly available adolescent psychological assessment corpus that supports both anxiety and depression detection while providing both audio and text data in Chinese.

In addition, we propose a novel mental disorder estimation network, termed Mental-Perceiver, to automatically detect anxious or depressive mental states based on users’ audio inputs and corresponding transcripts. The Mental-Perceiver leverages attention mechanisms to map multimodal inputs and category semantic priors to a fixed-size multimodal feature, using a learnable, sharable embedding for effective multimodal fusion. It then applies a deep, fully attentional network to process the fused multimodal feature for in-depth multimodal feature extraction. Finally, the network decodes the extracted feature using a learnable query array to produce the final mental state estimation. Extensive experiments conducted on the MMPsy corpus and the public DAIC-WOZ dataset demonstrate the effectiveness and superiority of our proposed Mental-Perceiver model.

Our contributions can be summarized as follows:

  • We introduce MMPsy, a large-scale multimodal psychological assessment corpus containing 7,758 cleaned interviews for anxiety detection and 4,266 for depression detection.

  • We develop Mental-Perceiver, a novel mental disorder estimation network that automatically detects anxious or depressive mental states from audio and transcript data, utilizing a fully attentional network architecture for effective multimodal fusion.

  • Extensive experimental results on MMPsy and the public DAIC-WOZ dataset validate the effectiveness and superiority of our proposed Mental-Perceiver model in automatic mental disorder detection.

Related Work

Automatic Anxiety/Depression Detection

Early studies of automatic anxiety/depression detection were concentrated on extracting effective features from the responses to questions that were highly correlated with anxiety/depression. Williamson et al. (Williamson et al. 2016) used related semantic context cues entailed in the voice, video-based facial action units, and transcribed text of individuals and built a Gaussian Staircase Model to detect depression automatically. Yang et al. (Yang et al. 2016) selected depression-related questions after analyzing interview transcripts manually and constructed a decision tree with the selected questions to predict the participants’ depression states. Similarly, Sun et al. (Sun et al. 2017) first manually selected questions related to certain topics such as recent feelings and sleep quality by conducting content analysis based on the text transcripts of clinical interviews. Then, they extracted text features from these selected questions as the input of Random Forest to detect depression tendencies. Gong et al. (Gong and Poellabauer 2017) performed topic modeling to split the interviews into topic-related segments. Then, they maintained the most discriminating features by a feature selection algorithm. Giannakakis et al. (Giannakakis et al. 2017) focused on non-voluntary and semi-voluntary facial cues to estimate the emotion representation more objectively. They selected the most robust features from the features including eye-related events, mouth activity, head motion parameters, and heart rate estimated through camera-based photoplethysmography. Finally, they deployed a ranking transformation to investigate the correlation of facial parameters with a participant’s perceived amount of stress/anxiety.

With the advancement of deep learning, extracting and integrating multi-modal features through deep learning models has become particularly promising for anxiety/depression detection. Ma et al. (Ma et al. 2016) encoded depressive audio characteristics with a combination of CNN and LSTM to predict the presence of depression. Yang et al. (Yang et al. 2017) trained a deep Convolutional Neural Network (CNN)-based depression detection model with a set of specially designed audio and video descriptors. Al Hanai et al. (Al Hanai, Ghassemi, and Glass 2018) selected audio and text features strongly related to depression severity by computing Pearson coefficients and built a Long Short-Term Memory (LSTM) network to assess depression tendency. Haque et al. (Haque et al. 2018) proposed a causal CNN model to summarize acoustic, visual, and linguistic features into embeddings which were then used to predict depressive states. Shen et al. (Shen, Yang, and Lin 2022) proposed a depression detection approach utilizing speech characteristics and linguistic content from participants’ interviews. Lin et al. (Lin et al. 2022) combined biological information from speech with deep learning to build a rapid binary classification model of depression in the elderly. Agarwal et al. (Agarwal, Jindal, and Singh 2023) developed machine-learning solutions to diagnose anxiety disorders from audio journals of patients.

Anxiety/Depression Detection Datasets

Public anxiety/depression datasets are quite scarce (Gratch et al. 2014; Valstar et al. 2014; Shen, Yang, and Lin 2022) due to ethical issues. To the best of our knowledge, there are only three publicly available datasets (Gratch et al. 2014; Valstar et al. 2014; Shen, Yang, and Lin 2022) for depression detection, while there are no publicly available multi-modal datasets covering audio, facial, or text data for anxiety detection. The DAIC-WoZ dataset (Gratch et al. 2014) contains recordings and transcripts of 142 American participants who were clinically interviewed by a computer agent. AViD-Corpus (Valstar et al. 2014) contains audio and video of German participants answering a set of queries or reciting fables; however, text transcripts are not provided. EATD-Corpus (Shen, Yang, and Lin 2022) was released to facilitate research in depression detection. It consists of audio and text transcripts extracted from interviews with 162 student volunteers recruited from Tongji University. Although EATD-Corpus provides text transcripts, its data scale is small.

MMPsy: A New Benchmark

Since public anxiety/depression datasets are quite scarce, we construct and release a new Chinese multi-modal mental health dataset named MMPsy to facilitate research in both anxiety detection and depression detection. Like EATD-Corpus (Shen, Yang, and Lin 2022), MMPsy consists of audio and text transcripts extracted from interviews, in our case with more than 20 thousand primary and secondary school students in the Guangdong Province of China. All volunteers signed informed consent and tried their best to guarantee the authenticity of the information they provided. Each volunteer is required to answer 10 specially designed questions about anxiety and depression and to complete a GAD-7 questionnaire (Mossman et al. 2017) or a PHQ-9 questionnaire (Kroenke, Spitzer, and Williams 2001), depending on the disorder being assessed. The GAD-7 questionnaire consists of 7 items that measure the severity of anxiety and is commonly used by psychologists to screen for anxiety in practice; a raw GAD-7 score is summarized from the questionnaire, and for Chinese people a GAD-7 score greater than or equal to 10 indicates that the individual is experiencing anxiety. Similarly, the PHQ-9 questionnaire consists of 9 items that measure the severity of depression and is commonly used by psychologists to screen for depression; a raw PHQ-9 score is summarized from the questionnaire, and for Chinese people a PHQ-9 score greater than or equal to 10 indicates that the individual is experiencing depression. According to this criterion, there are 704 anxious volunteers and 7,032 non-anxious volunteers in the anxiety subset of MMPsy, and 853 depressed volunteers and 3,394 non-depressed volunteers in the depression subset. In the final dataset, the overall duration of the response audio about anxiety is about 145.9 hours and the overall duration of the response audio about depression is about 69.5 hours.
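To make the labeling criterion concrete, the rule can be sketched as follows; the function name and the hard-coded threshold argument are illustrative only, not part of the released corpus tooling.

```python
def label_participant(questionnaire_score: int, threshold: int = 10) -> int:
    """Binarize a GAD-7 (anxiety) or PHQ-9 (depression) score.

    Following the criterion above, a score of 10 or greater is labeled
    anxious/depressed (1); otherwise the participant is labeled normal (0).
    """
    return int(questionnaire_score >= threshold)

# Example: a GAD-7 score of 12 yields the anxious label, a PHQ-9 score of 7 does not.
assert label_participant(12) == 1
assert label_participant(7) == 0
```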

Part         Subset       Non-anxious/Non-depressed (×)   Anxious/Depressed (✓)   Avg Duration (Sec)
Anxiety      Train        5625                            563                     68.05
             Validation   704                             70                      65.97
             Test         703                             71                      68.67
Depression   Train        2715                            682                     59.05
             Validation   340                             85                      55.27
             Test         339                             86                      61.76
Table 1: Statistics of our MMPsy dataset. × means the label is non-anxious or non-depressed and ✓ denotes the label is anxious or depressed.

The construction of MMPsy can be summarized as two steps: data collection and data preprocessing.

  • Data collection. We develop a web app to conduct the interview and to collect the audio responses and the questionnaires. The web app asks each interviewee 10 questions and then has them complete the GAD-7 or PHQ-9 questionnaire. During the interview, the audio responses are recorded and uploaded to the server automatically, and the questionnaire results are uploaded as well. In this step, we collected 17,247 raw interviews about anxiety and 11,306 raw interviews about depression. To ensure the authenticity of the questionnaires, we added some extra but irrelevant questions to the questionnaires, which are used to check response reliability.

  • Data preprocessing. Since the raw interviews are noisy, we perform several preprocessing operations on the collected data. First, we filtered out inauthentic data by checking the responses to those extra questions in the questionnaires. Second, silent recordings and recordings shorter than 1 second were removed; we also removed the silent segments identified by voice activity detection in each recording. Third, background noise was eliminated using Spleeter (Hennequin et al. 2020). After that, Paraformer (Gao et al. 2022) was deployed to extract textual transcripts from the audio. To ensure the correctness of the extracted transcripts, we manually checked and corrected them by listening to the audio and verifying word-level consistency between the audio and the transcripts.

After data preprocessing, we obtained 7,758 cleaned interviews for anxiety detection and 4,266 for depression detection. We randomly split these data into a training set, a validation set, and a test set in an 8:1:1 ratio. In this way, the anxiety detection part of MMPsy consists of 6,188 training volunteers, 744 validation volunteers, and 744 test volunteers, while the depression detection part contains 3,397 training volunteers, 425 validation volunteers, and 425 test volunteers. The data statistics are listed in Table 1. For each volunteer, we pack their audio and transcripts according to their sequential order and re-segment the data into 60-second windows with a 10-second overlap, following the scheme in (Wei et al. 2022). Thus, the data of one volunteer can generate multiple samples, as illustrated in the sketch below.
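The sketch assumes a 16 kHz sampling rate and keeps a shorter final window as-is; both are assumptions of ours rather than details stated above.

```python
import numpy as np

def resegment(waveform: np.ndarray, sr: int = 16000,
              win_sec: int = 60, overlap_sec: int = 10) -> list:
    """Split a volunteer's packed audio into 60-second windows with a 10-second overlap."""
    win, hop = win_sec * sr, (win_sec - overlap_sec) * sr
    # Stop once the remaining tail lies entirely inside the previous window's overlap.
    starts = range(0, max(len(waveform) - overlap_sec * sr, 1), hop)
    return [waveform[s:s + win] for s in starts]

# Example: a 150-second recording at 16 kHz yields three overlapping segments,
# so a single volunteer contributes multiple training samples.
segments = resegment(np.zeros(150 * 16000))
assert len(segments) == 3
```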

Mental-Perceiver

Figure 1: Illustration of Mental-Perceiver.

This section describes the architecture of our proposed Mental-Perceiver, as shown in Figure 1. We first encode by applying an attention module that maps the multimodal input $x \in \mathbb{R}^{M \times D_{x}}$ to features in a latent space $z \in \mathbb{R}^{2 \times D_{z}}$ by interacting with the category prior $p \in \mathbb{R}^{2 \times D_{p}}$, which is obtained by computing the center points of the text representations of the different categories. Then, we conduct deep feature extraction on the latent features $z$ by applying a series of attention modules that take in and return features $z^{\prime}$ in this latent feature space. Finally, we decode by applying an attention module that maps the latent array $z^{\prime}$ and the query array $q \in \mathbb{R}^{2 \times D_{q}}$ to the final feature representation $y \in \mathbb{R}^{2 \times D_{y}}$. Based on the final feature representation $y$, we apply a linear layer to map $y$ into the class-wise logit outputs $y^{C_{0}} \in \mathbb{R}^{1 \times 2}$ and $y^{C_{1}} \in \mathbb{R}^{1 \times 2}$. Here, $M$ is the length of the multimodal input, while $D_{x}$, $D_{z}$, $D_{q}$, and $D_{y}$ denote feature dimensions. To predict the class $c^{\prime}$ of the input $x$ from the class-wise logits $y^{C_{0}}$ and $y^{C_{1}}$, we compute the mean of the corresponding vector elements of $y^{C_{0}}$ and $y^{C_{1}}$ to obtain the final logits $y^{\prime}$, followed by a Softmax function.

Basic Attention Module

Following the pioneering work PerceiverIO (Jaegle et al. 2021), all attention modules deployed in our Mental-Perceiver are implemented as Transformer-style attention (Vaswani et al. 2017). Each attention module applies a global query-key-value (QKV) attention operation followed by a multi-layer perceptron (MLP), where the MLP is applied independently to each element of the index dimension. Both the encoder and decoder take in two input feature arrays: the first array is used as input to the attention module’s key and value networks, and the other array is used as input to the module’s query network for interacting and fusing with the first array. The attention module’s output has the same index dimension (the same number of elements) as the query input. Therefore, the attention module can be modeled as follows:

\text{attention}(Q,K,V)=\text{MLP}\left(\text{Softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V\right) \quad (1)
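A minimal PyTorch sketch of such an attention module is shown below. The head count, the MLP width, and the omission of the residual connections and normalization used in Perceiver-style blocks are simplifications of ours rather than details specified by Equation (1).

```python
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    """Transformer-style cross-attention followed by a per-element MLP, as in Eq. (1).

    The key/value array is the first input; the query array determines the index
    dimension (number of elements) of the output.
    """
    def __init__(self, dim_q: int, dim_kv: int, dim: int, heads: int = 8):
        super().__init__()
        self.proj_q = nn.Linear(dim_q, dim)  # align the query width with embed_dim
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads,
                                          kdim=dim_kv, vdim=dim_kv, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, query: torch.Tensor, kv: torch.Tensor) -> torch.Tensor:
        # query: (B, L_q, dim_q), kv: (B, L_kv, dim_kv) -> output: (B, L_q, dim)
        attended, _ = self.attn(self.proj_q(query), kv, kv)
        return self.mlp(attended)
```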

Category Semantic Prior

Our Mental-Perceiver extracts deep features based on the latent features $z$, which are produced by fusing the multimodal input $x$ and the latent embedding $p$ with an attention module: $p$ is used as the query and $x$ is used as the key and value. This means that the output $z$ is the result of $p$ extracting and fusing semantic information from $x$. Therefore, the initialization of the latent embedding $p$ is crucial for learning discriminative features in the following steps to identify a user’s mental state from the user’s audio and transcript text. Semantic priors have been shown to greatly help learn more discriminative features (Teney, Abbasnejad, and Hengel 2019; Dai et al. 2023; Ding et al. 2023). Therefore, we build semantic priors for the different categories, one for the normal category and another for the mental disorder category. Then, we use these two semantic priors to initialize the latent embedding $p \in \mathbb{R}^{2 \times D}$ with a learnable MLP that maps the hidden size of the semantic priors to $D$. In Mental-Perceiver, we fix the parameters of these two semantic priors and only optimize the parameters of the learnable MLP. To obtain the semantic priors, we simply compute the center points of the different categories.

Formally, given the text representation set $\textbf{E}_{t} \in \mathbb{R}^{N \times H}$ of a category on the training set, where $N$ is the number of data samples for the current category and $H$ is the hidden size of a text representation, we can compute the semantic prior $p^{C_{i}} \in \mathbb{R}^{1 \times H}$ by averaging the normalized $\textbf{E}_{t}$ along the index dimension. This procedure can be modeled as follows:

p^{C_{i}}=Avg(Norm(\textbf{E}_{t})) \quad (2)

where $Avg$ is the averaging function and $Norm$ is the z-score normalization function that normalizes each vector in $\textbf{E}_{t}$ separately. Let $C_{0}$ and $C_{1}$ denote the classes of normal people and psychological patients, respectively. We first obtain two semantic priors $p^{C_{0}}$ and $p^{C_{1}}$ according to Equation (2). Then, we obtain $p$ by concatenating $p^{C_{0}}$ and $p^{C_{1}}$ along the index dimension, followed by a learnable MLP layer:

p=MLP(p^{C_{0}} \odot p^{C_{1}}) \quad (3)

where $\odot$ denotes concatenation.
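The prior construction of Equations (2)-(3) can be sketched as follows; the epsilon used for numerical stability and the example dimensions are our assumptions.

```python
import torch
import torch.nn as nn

def build_category_prior(e_c0: torch.Tensor, e_c1: torch.Tensor,
                         mlp: nn.Module, eps: float = 1e-6) -> torch.Tensor:
    """Compute p from the text representations of the two classes (Eqs. 2-3).

    e_c0, e_c1: (N_i, H) text representations of the normal / mental-disorder class.
    Each vector is z-score normalized separately, averaged over the index dimension
    to obtain the class center, and the two centers are concatenated and mapped to
    the latent width by a learnable MLP.
    """
    def center(e: torch.Tensor) -> torch.Tensor:
        e = (e - e.mean(dim=1, keepdim=True)) / (e.std(dim=1, keepdim=True) + eps)
        return e.mean(dim=0, keepdim=True)               # (1, H)

    p = torch.cat([center(e_c0), center(e_c1)], dim=0)   # (2, H)
    return mlp(p)                                        # (2, D)

# Example with H = 768 (e.g. BERT hidden size) and latent width D = 512.
mlp = nn.Sequential(nn.Linear(768, 512), nn.GELU(), nn.Linear(512, 512))
p = build_category_prior(torch.randn(100, 768), torch.randn(20, 768), mlp)
assert p.shape == (2, 512)
```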

Encoder

The encoder, consisting of a single attention module, maps the multimodal input $x$ into the latent features $z$ by using the category semantic prior-enhanced latent embedding $p$ to query the multimodal input $x$. In this way, the encoder fuses the multimodal input $x$ with the prior-enhanced latent embedding $p$ and builds prior-guided latent features $z$, which are used as input to the subsequent deep feature extraction. The encoder can be modeled as follows:

z=\text{attention}(p,x,x) \quad (4)

Deep Feature Extraction

Once we obtain the latent features $z$, we conduct deep feature extraction by applying a series of attention modules that iteratively take in and return latents $z_{i}$ in this latent space.

This module can be modeled as follows:

\begin{aligned}
z_{1} &= \text{attention}(z,z,z) \\
z_{2} &= \text{attention}(z_{1},z_{1},z_{1}) \\
&\;\;\vdots \\
z_{k} &= \text{attention}(z_{k-1},z_{k-1},z_{k-1})
\end{aligned} \quad (5)

where $k$ is a hyper-parameter that is simply set to 8.
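Reusing the AttentionModule sketched after Equation (1), the encoder of Equation (4) and this k-layer latent stack can be written roughly as below; instantiating k separate (unshared) attention modules is our assumption.

```python
import torch
import torch.nn as nn

class EncoderAndLatentStack(nn.Module):
    """Encoder of Eq. (4) followed by the deep feature extraction of Eq. (5).

    AttentionModule is the cross-attention + MLP block sketched after Eq. (1).
    """
    def __init__(self, dim_x: int = 768, dim: int = 512, k: int = 8):
        super().__init__()
        self.encode = AttentionModule(dim_q=dim, dim_kv=dim_x, dim=dim)   # p queries x
        self.layers = nn.ModuleList(AttentionModule(dim, dim, dim) for _ in range(k))

    def forward(self, p: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # p: (B, 2, dim) prior-initialized latent embedding, x: (B, M, dim_x) multimodal input
        z = self.encode(p, x)          # z = attention(p, x, x): (B, 2, dim)
        for layer in self.layers:      # z_i = attention(z_{i-1}, z_{i-1}, z_{i-1})
            z = layer(z, z)
        return z                       # z' = z_k
```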

Decoder

The goal of the decoder is to produce a final class-wise logit output of size $2 \times 2$ given a latent representation of size $2 \times D_{z}$. Let $z^{\prime}=z_{k}$. The decoder first applies an attention module to map the latents $z^{\prime}$ to output features $y$. Then, the decoder applies a linear layer to map $y$ into the final class-wise logit output $[y^{C_{0}},y^{C_{1}}] \in \mathbb{R}^{2 \times 2}$, where $y^{C_{0}} \in \mathbb{R}^{1 \times 2}$ and $y^{C_{1}} \in \mathbb{R}^{1 \times 2}$ are two logit outputs indicating whether the multimodal input $x$ matches the semantic priors $p^{C_{0}}$ and $p^{C_{1}}$, respectively. Finally, we compute the mean of these two logit vectors $y^{C_{0}}$ and $y^{C_{1}}$ along the index dimension as the final classification logits $y^{\prime}$. Therefore, the decoder can be modeled as follows:

\begin{aligned}
y &= \text{attention}(q,z^{\prime},z^{\prime}) \\
[y^{C_{0}},y^{C_{1}}] &= \text{Linear}(y) \\
y^{\prime} &= \text{Mean}(y^{C_{0}} \odot y^{C_{1}})
\end{aligned} \quad (6)
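A rough sketch of the decoder, again reusing the AttentionModule sketched after Equation (1); the query array q is learnable as stated in the architecture overview, while its initialization scale is an assumption of ours.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Decoder of Eq. (6): a learnable 2-element query array attends over the final
    latents, a linear head produces the class-wise logits, and their mean over the
    index dimension gives the final classification logits."""
    def __init__(self, dim_z: int = 512, dim_q: int = 512, dim_y: int = 512):
        super().__init__()
        self.query = nn.Parameter(torch.randn(2, dim_q) * 0.02)   # learnable query array q
        self.attn = AttentionModule(dim_q=dim_q, dim_kv=dim_z, dim=dim_y)
        self.head = nn.Linear(dim_y, 2)

    def forward(self, z_prime: torch.Tensor):
        # z_prime: (B, 2, dim_z)
        q = self.query.unsqueeze(0).expand(z_prime.size(0), -1, -1)
        y = self.attn(q, z_prime)            # y = attention(q, z', z'): (B, 2, dim_y)
        class_logits = self.head(y)          # [y^{C0}, y^{C1}]: (B, 2, 2)
        y_prime = class_logits.mean(dim=1)   # final logits y': (B, 2)
        return class_logits, y_prime
```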

Training Objectives

To optimize our Mental-Perceiver, we deploy two losses: the matching loss $\mathcal{L}_{match}$ and the classification loss $\mathcal{L}_{cls}$. Both are binary cross-entropy losses. The matching loss optimizes the degree to which the multimodal input $x$ matches its corresponding category semantic prior, while the classification loss optimizes the model's ability to identify the underlying mental state from the multimodal input $x$ with the help of the category prior $p$ and the category query $q$.

Formally, given a multimodal input $x$ and its class $C_{x} \in \{0, 1\}$, the training objectives can be modeled as follows:

\begin{aligned}
\mathcal{L}_{match} &= -C_{x}\left(\log y^{C_{0}}_{0}+\log y^{C_{1}}_{1}\right)-(1-C_{x})\left(\log y^{C_{0}}_{1}+\log y^{C_{1}}_{0}\right) \\
\mathcal{L}_{cls} &= -C_{x}\log y^{\prime}_{0}-(1-C_{x})\log y^{\prime}_{1} \\
\mathcal{L} &= \mathcal{L}_{match}+\mathcal{L}_{cls}
\end{aligned} \quad (7)

where $y^{C_{0}}_{0}$ and $y^{C_{0}}_{1}$ are the 0-th and 1-st elements of the probability distribution obtained from the class-wise logits $y^{C_{0}} \in \mathbb{R}^{1 \times 2}$ via the Softmax function, while $y^{C_{1}}_{0}$ and $y^{C_{1}}_{1}$ are the 0-th and 1-st elements of the probability distribution obtained from the class-wise logits $y^{C_{1}} \in \mathbb{R}^{1 \times 2}$ via the Softmax function. Similarly, $y^{\prime}_{0}$ and $y^{\prime}_{1}$ are the probabilities of the normal class (0) and the mental disorder class (1), obtained by applying the Softmax function to $y^{\prime}$.
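The two objectives can be sketched with cross-entropy on logits (which applies the Softmax internally, so it is equivalent to the log-probability form of Equation (7)). The matching targets follow Equation (7) as written; for the classification term we target the class indices defined in the paragraph above (0 = normal, 1 = mental disorder).

```python
import torch
import torch.nn.functional as F

def mental_perceiver_loss(class_logits: torch.Tensor, y_prime: torch.Tensor,
                          labels: torch.Tensor) -> torch.Tensor:
    """Matching + classification loss of Eq. (7).

    class_logits: (B, 2, 2) stacking y^{C0} and y^{C1}; y_prime: (B, 2) mean logits;
    labels: (B,) with 0 = normal and 1 = mental disorder.
    """
    y_c0, y_c1 = class_logits[:, 0, :], class_logits[:, 1, :]
    # Per Eq. (7), a disorder sample (C_x = 1) pushes y^{C0} toward index 0 and
    # y^{C1} toward index 1; a normal sample does the opposite.
    l_match = F.cross_entropy(y_c0, 1 - labels) + F.cross_entropy(y_c1, labels)
    # Classification loss on the averaged logits against the ground-truth class index.
    l_cls = F.cross_entropy(y_prime, labels)
    return l_match + l_cls

# Example usage with a batch of four segments.
logits = torch.randn(4, 2, 2)
loss = mental_perceiver_loss(logits, logits.mean(dim=1), torch.tensor([0, 1, 1, 0]))
```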

Experiments

Datasets

We conduct experiments on both subsets of MMPsy, for anxiety detection and depression detection; we use MMPsy-Anxiety and MMPsy-Depression to denote these two subsets. Besides, we also verify our Mental-Perceiver on the Distress Analysis Interview Corpus - Wizard of Oz (DAIC-WOZ) dataset (Gratch et al. 2014). DAIC-WOZ contains clinical interviews of 189 participants, designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and post-traumatic stress disorder (PTSD). During each interview, data in several formats and modalities are recorded simultaneously; however, only the acoustic recordings and transcriptions are used in this work for a fair comparison. The provided ground truth is the eight-item Patient Health Questionnaire depression scale (PHQ-8), which indicates the severity of depression; a PHQ-8 score ≥ 10 implies that the participant is undergoing a mental disorder.

Baselines

The main baselines to be compared are listed as follows:

  • SVM (Pedregosa et al. 2011): a robust shallow model that performs binary classification efficiently via the kernel trick, representing the data through pairwise similarity comparisons computed by a kernel function, which implicitly maps the data points into a higher-dimensional feature space.

  • RandomForest (Pedregosa et al. 2011): it is a robust ensemble learning method for classification by constructing a multitude of decision trees at training time. For classification tasks, the output of the random forest is the class selected by most trees.

  • XGBoost (Chen and Guestrin 2016): it is a robust toolbox for classification via an optimized distributed gradient boosting.

  • NUSD (Wang, Ravi, and Alwan 2023): it is an ECAPA-TDNN-based deep model enhanced with a speaker disentanglement method that utilizes a non-uniform mechanism of adversarial SID loss maximization.

  • ConvLSTM (Wei et al. 2022): it is a Convolutional Bidirectional LSTM with a sub-attention mechanism for linking heterogeneous information.

  • PerceiverIO (Jaegle et al. 2021): it is a general-purpose architecture that handles multimodal data with a fully attentional design and a flexible querying mechanism.

For the shallow models SVM, RandomForest, and XGBoost, we provide the following audio features as input: F0 statistics (mean), log-energy, zero-crossing rate, loudness, pitch period entropy, jitters, shimmers, harmonics-to-noise ratio, detrended fluctuation analysis, linear spectral coefficients-0, linear spectral frequencies-0, formants (F1), and amplitude Shannon entropy. All these features can be extracted with Surfboard (https://github.com/novoic/surfboard) (Lenain et al. 2020). In addition, we extract topic words via TF-IDF as text features for these shallow models. For the deep models, we use BERT (Devlin et al. 2019) to extract text features and use Mel-spectrograms to represent audio, as sketched below.
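The exact BERT checkpoint ("bert-base-chinese") and the Mel-spectrogram settings below are assumptions of ours, not choices stated above.

```python
import torch
import torchaudio
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")            # assumed checkpoint
bert = BertModel.from_pretrained("bert-base-chinese").eval()
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=80)  # assumed settings

def extract_deep_features(transcript: str, waveform: torch.Tensor):
    """Return BERT token embeddings for the transcript and a Mel-spectrogram for the audio."""
    tokens = tokenizer(transcript, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        text_feat = bert(**tokens).last_hidden_state     # (1, T_text, 768)
    audio_feat = mel(waveform)                           # (channels, n_mels, T_audio)
    return text_feat, audio_feat
```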

MMPsy-Anxiety (class-wise metrics reported for Normal and Anxious)
Methods                   Acc    UAR    Sens   Spec   Normal P/R/F1      Anxious P/R/F1
SVM                       0.83   0.64   0.41   0.87   0.94/0.87/0.90     0.24/0.41/0.30
RandomForest              0.82   0.63   0.39   0.87   0.93/0.87/0.90     0.23/0.39/0.29
XGBoost                   0.88   0.60   0.25   0.94   0.93/0.94/0.93     0.30/0.25/0.27
NUSD                      0.79   0.51   0.17   0.85   0.91/0.85/0.88     0.10/0.17/0.13
ConvLSTM                  0.83   0.55   0.21   0.90   0.92/0.90/0.91     0.18/0.21/0.19
PerceiverIO               0.83   0.74   0.63   0.84   0.95/0.84/0.90     0.29/0.63/0.40
Mental-Perceiver (Ours)   0.85   0.76   0.65   0.87   0.96/0.87/0.92     0.34/0.65/0.45

MMPsy-Depression (class-wise metrics reported for Normal and Depressed)
Methods                   Acc    UAR    Sens   Spec   Normal P/R/F1      Depressed P/R/F1
SVM                       0.76   0.71   0.63   0.79   0.89/0.79/0.84     0.43/0.63/0.51
RandomForest              0.76   0.63   0.41   0.85   0.85/0.85/0.85     0.41/0.41/0.41
XGBoost                   0.78   0.64   0.41   0.88   0.85/0.88/0.87     0.46/0.41/0.43
NUSD                      0.67   0.51   0.26   0.78   0.80/0.78/0.79     0.22/0.26/0.24
ConvLSTM                  0.77   0.53   0.14   0.93   0.81/0.93/0.86     0.32/0.14/0.20
PerceiverIO               0.81   0.77   0.66   0.85   0.91/0.85/0.88     0.53/0.66/0.59
Mental-Perceiver (Ours)   0.85   0.79   0.69   0.89   0.92/0.89/0.90     0.61/0.69/0.64

DAIC-WOZ (class-wise metrics reported for Normal and Depressed)
Methods                   Acc    UAR    Sens   Spec   Normal P/R/F1      Depressed P/R/F1
SVM                       0.38   0.35   0.29   0.42   0.58/0.42/0.49     0.17/0.29/0.22
RandomForest              0.70   0.54   0.14   0.94   0.72/0.94/0.82     0.50/0.14/0.22
XGBoost                   0.60   0.53   0.36   0.70   0.72/0.70/0.71     0.33/0.36/0.34
NUSD                      0.55   0.46   0.21   0.70   0.68/0.70/0.69     0.23/0.21/0.22
ConvLSTM                  0.40   0.57   0.88   0.22   0.83/0.22/0.35     0.30/0.88/0.45
PerceiverIO               0.70   0.58   0.29   0.88   0.74/0.88/0.81     0.50/0.29/0.36
Mental-Perceiver (Ours)   0.79   0.66   0.36   0.97   0.78/0.97/0.86     0.83/0.36/0.50

Table 2: Performance comparison of our Mental-Perceiver and various baselines on MMPsy and DAIC-WOZ. Each class-wise cell lists Precision/Recall/F1.

Metrics

In classification tasks, the True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts from the confusion matrix are used to measure the quality of model predictions. TP refers to instances that the model correctly predicts as the positive class, while TN refers to instances that the model correctly predicts as the negative class. Conversely, FP represents cases that the model incorrectly labels as positive, and FN denotes instances of the positive class that are incorrectly classified as negative.

Based on these concepts, several common performance metrics can be defined to assess the model’s performance:

  • Accuracy: Accuracy (Acc) represents the proportion of all predictions that are correctly predicted.

  • Recall: Recall measures the model’s ability to identify all true positive instances, that is, the proportion of actual positives that are correctly identified.

  • Precision: Precision denotes the proportion of samples predicted as positive by the model that are actually positive, focusing on the accuracy of positive predictions.

  • F1-Score: The F1 Score is the harmonic mean of precision and recall. It ranges from 0 to 1, with values closer to 1 indicating better model performance, and it serves as a more comprehensive evaluation metric when the distribution of positive and negative samples is imbalanced.

  • Sensitivity: Sensitivity (Sens), also termed the true positive rate, is the ratio of correctly predicted positives to the number of actual positives. It is hence identical to the recall of the positive class, Recall(1).

  • Specificity: Specificity (Spec) is the ratio of correctly predicted negatives to the number of actual negatives, and is therefore identical to Recall(0).

  • UAR: Although Acc is often used to evaluate model performance, it suffers from the data-imbalance issue: the stronger the imbalance, the more accuracy tends to reflect the performance of the majority class, since it is effectively a weighted accuracy. For this reason, in areas where relatively small datasets predominate, such as the bio-medical field or paralinguistics in general, researchers usually prefer and report the Unweighted Average Recall (UAR), which is defined as the average of Sensitivity and Specificity:

    UAR=\frac{Sensitivity+Specificity}{2} \quad (8)

Among these metrics, we use Accuracy (Acc), Unweighted Average Recall (UAR), Sensitivity (Sens), and Specificity (Spec) as the main metrics for evaluating overall model performance. Meanwhile, we also report Precision, Recall, and F1-score separately for each category. The sketch below shows how these metrics follow from the confusion-matrix counts.
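The helper below is illustrative only; it assumes non-zero denominators.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the evaluation metrics used above from confusion-matrix counts."""
    sens = tp / (tp + fn)            # Sensitivity = Recall of the positive class
    spec = tn / (tn + fp)            # Specificity = Recall of the negative class
    precision = tp / (tp + fp)       # Precision of the positive class
    return {
        "Acc": (tp + tn) / (tp + tn + fp + fn),
        "Sens": sens,
        "Spec": spec,
        "UAR": (sens + spec) / 2,    # Eq. (8)
        "Precision": precision,
        "Recall": sens,
        "F1": 2 * precision * sens / (precision + sens),
    }

# Example with arbitrary counts (not taken from our experiments).
metrics = classification_metrics(tp=30, tn=90, fp=10, fn=20)
```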

MMPsy-Anxiety (class-wise metrics reported for Normal and Anxious)
Modalities    Acc    UAR    Sens   Spec   Normal P/R/F1      Anxious P/R/F1
Audio         0.87   0.52   0.10   0.95   0.91/0.95/0.93     0.16/0.10/0.12
Text          0.85   0.74   0.61   0.88   0.96/0.88/0.92     0.34/0.61/0.43
Text+Audio    0.85   0.76   0.65   0.87   0.96/0.87/0.92     0.34/0.65/0.45

MMPsy-Depression (class-wise metrics reported for Normal and Depressed)
Modalities    Acc    UAR    Sens   Spec   Normal P/R/F1      Depressed P/R/F1
Audio         0.76   0.61   0.35   0.86   0.84/0.86/0.85     0.39/0.35/0.37
Text          0.81   0.74   0.62   0.86   0.90/0.86/0.88     0.53/0.62/0.57
Text+Audio    0.85   0.79   0.69   0.89   0.92/0.89/0.90     0.61/0.69/0.64

Table 3: Ablation study on different input modalities of our Mental-Perceiver on MMPsy. Each class-wise cell lists Precision/Recall/F1.

Implementation Details

We use PyTorch (http://pytorch.org) to implement our framework on Linux with two NVIDIA RTX 4090 GPU cards. The feature dimension $D_{x}$ is set to 768, and the other dimensions $D_{z}$, $D_{q}$, and $D_{y}$ are all set to 512. In each epoch, all training data is shuffled randomly and then cut into mini-batches. The text feature and audio feature are concatenated as the multimodal input. We deploy the AdamW (Loshchilov and Hutter 2017) optimizer for model optimization. We train models for 200 epochs with an initial learning rate of 0.00003 and use LambdaLR to adjust the learning rate during training. Early stopping with a patience of 15 is used to accelerate training. We use the validation set for model selection and report performance on the test set.
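A minimal sketch of this optimization setup is given below; the exact decay shape of the LambdaLR schedule and the validation metric used for model selection are assumptions, since they are not specified above.

```python
import torch
import torch.nn as nn

def fit(model: nn.Module, train_one_epoch, evaluate_on_val,
        epochs: int = 200, lr: float = 3e-5, patience: int = 15):
    """AdamW + LambdaLR training loop with early stopping (patience 15)."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda e: 0.97 ** e)
    best, bad_epochs, best_state = float("-inf"), 0, None
    for _ in range(epochs):
        train_one_epoch(model, optimizer)      # caller-supplied training step
        score = evaluate_on_val(model)         # e.g. validation UAR, used for model selection
        scheduler.step()
        if score > best:
            best, bad_epochs = score, 0
            best_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
        else:
            bad_epochs += 1
            if bad_epochs >= patience:         # early stopping with patience 15
                break
    if best_state is not None:
        model.load_state_dict(best_state)      # restore the best validation checkpoint
    return best
```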

Experiment Results

Main Results

The experimental results of our Mental-Perceiver and various baselines on the three datasets MMPsy-Anxiety, MMPsy-Depression, and DAIC-WOZ are shown in Table 2. From the results on the different datasets, we draw the following conclusions.

On the MMPsy-Anxiety dataset, our Mental-Perceiver achieves the best UAR and Sens among all methods while achieving competitive performance on Acc and Spec. This shows that our Mental-Perceiver achieves better overall anxiety detection performance with a better trade-off between Sensitivity and Specificity. For the individual categories, our Mental-Perceiver achieves relatively high performance in the normal category and the best performance in the anxious category.

On the MMPsy-Depression dataset, our Mental-Perceiver outperforms the baselines on Acc, UAR, and Sens while achieving competitive performance on Spec. This shows that our Mental-Perceiver achieves better overall depression detection performance with a better trade-off between Sensitivity and Specificity. For the individual categories, our Mental-Perceiver achieves the best precision and F1-score in the normal category and outperforms all baselines on all three metrics in the depressed category.

On the DAIC-WOZ dataset, a similar conclusion can be reached. Mental-Perceiver outperforms the baselines on Acc, UAR, and Spec. Although its sensitivity is lower than that of the ConvLSTM baseline, ConvLSTM has very poor specificity, indicating a high rate of false positives. According to the UAR, Mental-Perceiver strikes a better balance between the false positive rate (1 - Specificity) and the false negative rate (1 - Sensitivity). Besides, according to the category-wise metrics, Mental-Perceiver achieves the best overall performance in both categories.

Overall, our Mental-Perceiver achieves the best overall performance across datasets and mental disorders, showing its effectiveness and generality for detecting different mental disorders. Besides, the different models achieve varying degrees of performance, indicating the usefulness of our MMPsy dataset as a benchmark for developing and evaluating mental disorder detection models.

Ablation study on different modalities

To verify the benefit of combining text and audio for mental disorder detection, we conduct an ablation study using only audio, only text, and text+audio as input to the Mental-Perceiver. The experimental results are shown in Table 3. The Mental-Perceiver with multimodal input achieves the best performance across all metrics on MMPsy-Depression. On MMPsy-Anxiety, it achieves the best UAR, Sensitivity, Precision in the normal category, and Precision, Recall, and F1 in the anxious category, while remaining competitive on Acc, Specificity, and the Recall and F1 of the normal category. Overall, multimodal input helps detect mental disorders.

Datasets            Variants             Acc    UAR    Sens   Spec
MMPsy-Anxiety       Mental-Perceiver     0.85   0.76   0.65   0.87
                    - Category Prior     0.84   0.72   0.58   0.86
                    - Matching Loss      0.82   0.74   0.62   0.86
MMPsy-Depression    Mental-Perceiver     0.85   0.79   0.69   0.89
                    - Category Prior     0.82   0.68   0.49   0.88
                    - Matching Loss      0.80   0.72   0.57   0.86
Table 4: Ablation study on the effects of the category priors and the matching loss on MMPsy.

The effects of category priors and matching loss

To investigate the effects of the category priors and the matching loss, we conduct a study in which each component is removed in turn. The results are shown in Table 4. It can be seen that each component improves performance across various metrics, demonstrating the effectiveness of both the category priors and the matching loss.

Conclusion

In this work, we construct a new large-scale Multi-Modal Psychological assessment corpus (MMPsy) on anxiety and depression in Mandarin-speaking adolescents. MMPsy contains audio recordings and extracted transcripts of responses from anxious/depressed and non-anxious/non-depressed adolescent volunteers, comprising 7,758 cleaned interviews for anxiety detection and 4,266 for depression detection. We further propose a novel mental disorder estimation network, named Mental-Perceiver, to automatically detect anxious/depressive mental states from users’ audio and corresponding transcripts. Extensive experiments on MMPsy and the public DAIC-WOZ dataset show the effectiveness and superiority of our proposed Mental-Perceiver.

References

  • Agarwal, Jindal, and Singh (2023) Agarwal, P.; Jindal, A.; and Singh, S. 2023. Detecting anxiety from short clips of free-form speech. arXiv preprint arXiv:2312.15272.
  • Al Hanai, Ghassemi, and Glass (2018) Al Hanai, T.; Ghassemi, M. M.; and Glass, J. R. 2018. Detecting Depression with Audio/Text Sequence Modeling of Interviews. In Interspeech, 1716–1720.
  • Chen and Guestrin (2016) Chen, T.; and Guestrin, C. 2016. XGBoost: A Scalable Tree Boosting System. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
  • Cowen, Harrison, and Burns (2012) Cowen, P.; Harrison, P.; and Burns, T. 2012. Shorter Oxford textbook of psychiatry. Oxford University Press, USA.
  • Dai et al. (2023) Dai, Z.; Yi, J.; Yan, L.; Xu, Q.; Hu, L.; Zhang, Q.; Li, J.; and Wang, G. 2023. Pfemed: Few-shot medical image classification using prior guided feature enhancement. Pattern Recognition, 134: 109108.
  • de la Santé (2019) de la Santé, O. M. 2019. World health statistics 2019: monitoring health for the SDGs, sustainable development goals.
  • Devlin et al. (2019) Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Burstein, J.; Doran, C.; and Solorio, T., eds., Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171–4186. Minneapolis, Minnesota: Association for Computational Linguistics.
  • Ding et al. (2023) Ding, Z.; Wang, A.; Chen, H.; Zhang, Q.; Liu, P.; Bao, Y.; Yan, W.; and Han, J. 2023. Exploring structured semantic prior for multi label recognition with incomplete labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3398–3407.
  • Gao et al. (2022) Gao, Z.; Zhang, S.; McLoughlin, I.; and Yan, Z. 2022. Paraformer: Fast and accurate parallel transformer for non-autoregressive end-to-end speech recognition. arXiv preprint arXiv:2206.08317.
  • Giannakakis et al. (2017) Giannakakis, G.; Pediaditis, M.; Manousos, D.; Kazantzaki, E.; Chiarugi, F.; Simos, P. G.; Marias, K.; and Tsiknakis, M. 2017. Stress and anxiety detection using facial cues from videos. Biomedical Signal Processing and Control, 31: 89–101.
  • Gong and Poellabauer (2017) Gong, Y.; and Poellabauer, C. 2017. Topic modeling based multi-modal depression detection. In Proceedings of the 7th annual workshop on Audio/Visual emotion challenge, 69–76.
  • Gratch et al. (2014) Gratch, J.; Artstein, R.; Lucas, G. M.; Stratou, G.; Scherer, S.; Nazarian, A.; Wood, R.; Boberg, J.; DeVault, D.; Marsella, S.; et al. 2014. The distress analysis interview corpus of human and computer interviews. In LREC, 3123–3128. Reykjavik.
  • Haque et al. (2018) Haque, A.; Guo, M.; Miner, A. S.; and Fei-Fei, L. 2018. Measuring depression symptom severity from spoken language and 3D facial expressions. arXiv preprint arXiv:1811.08592.
  • Hecker et al. (2022) Hecker, P.; Steckhan, N.; Eyben, F.; Schuller, B. W.; and Arnrich, B. 2022. Voice analysis for neurological disorder recognition–a systematic review and perspective on emerging trends. Frontiers in Digital Health, 4: 842301.
  • Hennequin et al. (2020) Hennequin, R.; Khlif, A.; Voituret, F.; and Moussallam, M. 2020. Spleeter: a fast and efficient music source separation tool with pre-trained models. Journal of Open Source Software, 5(50): 2154.
  • Jaegle et al. (2021) Jaegle, A.; Borgeaud, S.; Alayrac, J.-B.; Doersch, C.; Ionescu, C.; Ding, D.; Koppula, S.; Zoran, D.; Brock, A.; Shelhamer, E.; et al. 2021. Perceiver io: A general architecture for structured inputs & outputs. arXiv preprint arXiv:2107.14795.
  • Kessler (2012) Kessler, R. C. 2012. The costs of depression. Psychiatric Clinics, 35(1): 1–14.
  • Kroenke, Spitzer, and Williams (2001) Kroenke, K.; Spitzer, R. L.; and Williams, J. B. 2001. The PHQ-9: validity of a brief depression severity measure. Journal of general internal medicine, 16(9): 606–613.
  • Lenain et al. (2020) Lenain, R.; Weston, J.; Shivkumar, A.; and Fristed, E. 2020. Surfboard: Audio Feature Extraction for Modern Machine Learning. In Interspeech.
  • Lin et al. (2022) Lin, Y.; Liyanage, B. N.; Sun, Y.; Lu, T.; Zhu, Z.; Liao, Y.; Wang, Q.; Shi, C.; and Yue, W. 2022. A deep learning-based model for detecting depression in senior population. Frontiers in Psychiatry, 13: 1016676.
  • Loshchilov and Hutter (2017) Loshchilov, I.; and Hutter, F. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
  • Ma et al. (2016) Ma, X.; Yang, H.; Chen, Q.; Huang, D.; and Wang, Y. 2016. Depaudionet: An efficient deep model for audio based depression classification. In Proceedings of the 6th international workshop on audio/visual emotion challenge, 35–42.
  • Mao, Wu, and Chen (2023) Mao, K.; Wu, Y.; and Chen, J. 2023. A systematic review on automated clinical depression diagnosis. npj Mental Health Research, 2(1): 20.
  • Mossman et al. (2017) Mossman, S. A.; Luft, M. J.; Schroeder, H. K.; Varney, S. T.; Fleck, D. E.; Barzman, D. H.; Gilman, R.; DelBello, M. P.; and Strawn, J. R. 2017. The Generalized Anxiety Disorder 7-item (GAD-7) scale in adolescents with generalized anxiety disorder: signal detection and validation. Annals of clinical psychiatry: official journal of the American Academy of Clinical Psychiatrists, 29(4): 227.
  • Pedregosa et al. (2011) Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Louppe, G.; Prettenhofer, P.; Weiss, R.; Weiss, R. J.; Vanderplas, J.; Passos, A.; Cournapeau, D.; Brucher, M.; Perrot, M.; and Duchesnay, E. 2011. Scikit-learn: Machine Learning in Python. ArXiv, abs/1201.0490.
  • Shen, Yang, and Lin (2022) Shen, Y.; Yang, H.; and Lin, L. 2022. Automatic depression detection: An emotional audio-textual corpus and a gru/bilstm-based model. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 6247–6251. IEEE.
  • Sun et al. (2017) Sun, B.; Zhang, Y.; He, J.; Yu, L.; Xu, Q.; Li, D.; and Wang, Z. 2017. A random forest regression method with selected-text feature for depression assessment. In Proceedings of the 7th annual workshop on Audio/Visual emotion challenge, 61–68.
  • Teney, Abbasnejad, and Hengel (2019) Teney, D.; Abbasnejad, E.; and Hengel, A. v. d. 2019. On incorporating semantic prior knowledge in deep learning through embedding-space constraints. arXiv preprint arXiv:1909.13471.
  • Valstar et al. (2014) Valstar, M.; Schuller, B.; Smith, K.; Almaev, T.; Eyben, F.; Krajewski, J.; Cowie, R.; and Pantic, M. 2014. Avec 2014: 3d dimensional affect and depression recognition challenge. In Proceedings of the 4th international workshop on audio/visual emotion challenge, 3–10.
  • Vaswani et al. (2017) Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L. u.; and Polosukhin, I. 2017. Attention is All you Need. In Guyon, I.; Luxburg, U. V.; Bengio, S.; Wallach, H.; Fergus, R.; Vishwanathan, S.; and Garnett, R., eds., Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
  • Wang, Ravi, and Alwan (2023) Wang, J.; Ravi, V.; and Alwan, A. 2023. Non-uniform Speaker Disentanglement For Depression Detection From Raw Speech Signals. Interspeech, 2023: 2343–2347.
  • Wei et al. (2022) Wei, P.-C.; Peng, K.; Roitberg, A.; Yang, K.; Zhang, J.; and Stiefelhagen, R. 2022. Multi-modal depression estimation based on sub-attentional fusion. In European Conference on Computer Vision, 623–639. Springer.
  • Williamson et al. (2016) Williamson, J. R.; Godoy, E.; Cha, M.; Schwarzentruber, A.; Khorrami, P.; Gwon, Y.; Kung, H.-T.; Dagli, C.; and Quatieri, T. F. 2016. Detecting depression using vocal, facial and semantic communication cues. In Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge, 11–18.
  • Yang et al. (2016) Yang, L.; Jiang, D.; He, L.; Pei, E.; Oveneke, M. C.; and Sahli, H. 2016. Decision tree based depression classification from audio video and language information. In Proceedings of the 6th international workshop on audio/visual emotion challenge, 89–96.
  • Yang et al. (2017) Yang, L.; Jiang, D.; Xia, X.; Pei, E.; Oveneke, M. C.; and Sahli, H. 2017. Multimodal measurement of depression using deep learning models. In Proceedings of the 7th annual workshop on audio/visual emotion challenge, 53–59.