Boosting Continuous Emotion Recognition with Self-Pretraining using Masked Autoencoders, Temporal Convolutional Networks, and Transformers

Weiwei Zhou, Jiada Lu, Chenkun Ling, Weifeng Wang, Shaowei Liu
Chinatelecom Cloud
{zhouweiwei,lujiada,lingchengk,wangweifeng,liusw12}@chinatelecom.cn
Abstract

Human emotion recognition holds a pivotal role in facilitating seamless human-computer interaction. This paper delineates our methodology for tackling the Valence-Arousal (VA) Estimation Challenge, Expression (Expr) Classification Challenge, and Action Unit (AU) Detection Challenge within the ambit of the 6th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW). Our study advocates a novel approach aimed at refining continuous emotion recognition. We achieve this by initially harnessing pre-training with Masked Autoencoders (MAE) on facial datasets, followed by fine-tuning on the Aff-Wild2 dataset annotated with expression (Expr) labels. The pre-trained model serves as an adept visual feature extractor, thereby enhancing the model’s robustness. Furthermore, we bolster the performance of continuous emotion recognition by integrating Temporal Convolutional Network (TCN) modules and Transformer Encoder modules into our framework.

1 Introduction

Facial Expression Recognition (FER) holds immense potential across a spectrum of applications, ranging from discerning emotions in videos to bolstering security through facial recognition systems, and even enriching virtual reality experiences. While significant strides have been made in various facial-related tasks, such as face and attribute recognition, the nuanced realm of emotional comprehension remains a challenge.

The intricacies of emotional expressions often present subtle differentiations that can introduce ambiguity or uncertainty in accurately perceiving emotions. Consequently, this complexity poses hurdles in effectively assessing an individual’s emotional state. One of the primary obstacles lies in the inadequacy of existing FER datasets to encapsulate the breadth and depth of human emotional expressions, hindering the development of robust models. Efforts to expand and diversify these datasets are imperative to enhance the efficacy and reliability of FER systems.

The appearance of the Aff-Wild and Aff-Wild2 datasets and the corresponding challenges [13, 4, 5, 3, 7, 10, 11, 6, 8, 9, 20, 12] has boosted the development of affective recognition research. The Aff-Wild2 dataset contains about 600 videos with around 3M frames. The dataset is annotated with three different affect attributes: a) dimensional affect with valence and arousal; b) the six basic expression categories; c) action units of facial muscles. To facilitate the utilization of the Aff-Wild2 dataset, the 6th ABAW [14] competition was organized for affective behavior analysis in the wild.

Motivated by the significant success of pre-training models such as MAE, we utilize the MAE pre-training method to build a visual feature extractor on facial expression datasets. Subsequently, we employ a TCN and a Transformer for continuous emotion recognition. Our approach yields a significant improvement in the evaluation accuracy of Valence-Arousal Estimation, Action Unit Detection, and Expression Classification.

The remaining parts of the paper are organized as follows: Sec 2 reviews the study of facial emotion recognition; Sec 3 describes our methodology; Sec 4 describes the experimental details and results; Sec 5 concludes the paper.

2 Related Work

Previous studies have proposed some useful networks on the Aff-Wild2 dataset. Kuhnke et al. [15] combined vision and audio information in the video and constructed a two-stream network for emotion recognition, achieving high performance. Jin et al. [2] proposed a transformer-based model to merge audio and visual features.

NetEase [21] utilized the visual information from a Masked Autoencoder (MAE) model that had been pre-trained on a large-scale face image dataset in a self-supervised manner. Next, the MAE encoder was fine-tuned on the image frames from Aff-Wild2 for the AU, EXPR, and VA tasks, which could be regarded as static and uni-modal training. Additionally, multi-modal and temporal information from the videos was leveraged, and a transformer-based framework was implemented to fuse the multi-modal features.

SituTech [17] utilized multi-modal feature combinations extracted by several different pre-trained models, which were applied to capture more effective emotional information.

The Temporal Convolutional Network (TCN) was proposed by Lea et al. [16], which hierarchically captures relationships at low-, intermediate-, and high-level time scales. Fan et al. [1] proposed a model with a spatial-temporal attention mechanism to capture dynamic internal correlations, with stacked TCN backbones extracting features from different window sizes.

The Transformer mechanism proposed by Vaswani et al. [19] has achieved high performance in many tasks, so many researchers exploited the Transformer for affective behavior studies. Zhao et al. [22] proposed a model with spatial and temporal Transformers for facial expression analysis. Jacob et al. [18] proposed a network to learn the relationship between action units with a transformer correlation module.

Inspired by the previous work, in this paper, we propose to use MAE as a feature extractor and design a model consisting of TCN and Transformer to enhance the performance of emotion recognition.

3 Methodology

In this section, we describe in detail our proposed method for tackling the three challenging tasks of affective behavior analysis in the wild that are addressed by the 6th ABAW Competition: Valence-Arousal Estimation, Expr Classification, and AU Detection. We explain how we design our model architecture, data processing, and training strategy for each task.

3.1 MAE Pre-training

Inspired by NetEase [21], we conduct pre-training of our MAE on a facial image dataset. To this end, we curate a large-scale dataset of facial expressions to learn facial features, consisting of AffectNet, RAF-DB, FER2013, and FER+. Subsequently, the MAE model is pre-trained on this dataset in a self-supervised manner. Specifically, our MAE consists of a ViT-Base encoder and a ViT decoder. The pre-training process of MAE follows a masked-reconstruction method, where images are first divided into a series of patches (16x16), with 75% of these patches randomly masked. The visible (unmasked) patches are then fed into the MAE encoder, while the MAE decoder is tasked with reconstructing the complete image. The loss function for MAE pre-training is a pixel-level L2 loss, aiming to minimize the difference between the reconstructed image and the target image. Once self-supervised learning is completed, the MAE decoder is removed and replaced with fully connected layers connected to the MAE encoder. Subsequently, the model is fine-tuned on frames with Expr labels to obtain a feature extractor better aligned with the distribution of the Aff-Wild2 data.
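As a minimal sketch of the masked-reconstruction objective described above (not the authors' implementation): the 16x16 patching, 75% mask ratio, and pixel-level L2 loss follow the text, while the helper names and the assumption that patches are already flattened to vectors are ours.

```python
# Minimal sketch of MAE-style masking and the pixel-level L2 reconstruction loss.
import torch

def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75):
    """Randomly keep (1 - mask_ratio) of the patches of each image."""
    B, N, D = patches.shape
    num_keep = int(N * (1.0 - mask_ratio))
    ids_keep = torch.rand(B, N).argsort(dim=1)[:, :num_keep]    # random subset of patch indices
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return visible, ids_keep                                     # the encoder sees only `visible`

def reconstruction_loss(pred_patches: torch.Tensor, target_patches: torch.Tensor) -> torch.Tensor:
    """Pixel-level L2 loss between the decoder output and the original patches."""
    return ((pred_patches - target_patches) ** 2).mean()

# Example with a 224x224 image split into 16x16 patches: 196 patches of 16*16*3 pixels each.
patches = torch.randn(2, 196, 16 * 16 * 3)
visible, _ = random_masking(patches)          # (2, 49, 768): the 25% visible patches
```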

3.2 Temporal Convolutional Network

Videos are first split into segments with a window size $w$ and stride $s$: a video with $n$ frames is split into $\lfloor n/s \rfloor + 1$ segments, where the $i$-th segment contains the frames $\{F_{(i-1)s+1}, \ldots, F_{(i-1)s+w}\}$.

In other words, videos are cut into some overlapping chunks, each with a fixed number of frames. The purpose of doing this is to break down the video into smaller parts that are easier to process and analyze. Each chunk has some degree of overlap with the previous and next ones so that no information in the video is missed.
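As a concrete illustration of this chunking scheme (illustrative only, not the authors' code), a minimal helper might look as follows; frame indices are 0-based here, whereas the paper's notation is 1-based.

```python
# Split a video of `num_frames` frames into overlapping segments of length w with stride s.
def split_into_segments(num_frames: int, w: int, s: int) -> list:
    segments = []
    start = 0
    while start < num_frames:
        segments.append(list(range(start, min(start + w, num_frames))))
        start += s
    return segments

# With the settings used later (w=300, s=200), consecutive segments share 100 frames.
print([(seg[0], seg[-1]) for seg in split_into_segments(700, w=300, s=200)])
# [(0, 299), (200, 499), (400, 699), (600, 699)]
```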

We denote by $f_i$ the visual features of the $i$-th segment, extracted by the pre-trained and fine-tuned ViT-Base encoder.

The visual features are fed into a dedicated Temporal Convolutional Network (TCN) for temporal encoding, which can be formulated as follows:

$g_i = \mathrm{TCN}(f_i)$

This means that we use a special type of neural network that can capture the temporal patterns and dependencies of the features over time. The TCN takes the input feature vector and applies a series of convolutional layers with different kernel sizes and dilation rates to produce an output feature vector. The output feature vector has the same length as the input feature vector but contains more information about the temporal context. For example, the TCN can learn how the image changes over time in each segment of the video.
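To make this concrete, the sketch below shows one possible stack of dilated 1-D convolutions of the kind a TCN uses; the kernel size, dilation rates, layer count, and channel widths are illustrative assumptions, since the text does not specify the authors' configuration (768 is the ViT-Base feature dimension).

```python
# A hedged sketch of a dilated temporal convolution stack in PyTorch.
import torch
import torch.nn as nn

class SimpleTCN(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, num_layers: int = 4, kernel_size: int = 3):
        super().__init__()
        layers = []
        channels = in_dim
        for i in range(num_layers):
            dilation = 2 ** i                          # exponentially growing receptive field
            layers += [
                nn.Conv1d(channels, hidden_dim, kernel_size,
                          padding=(kernel_size - 1) * dilation // 2, dilation=dilation),
                nn.ReLU(),
            ]
            channels = hidden_dim
        self.net = nn.Sequential(*layers)

    def forward(self, f):                              # f: (batch, frames, feature_dim)
        x = f.transpose(1, 2)                          # Conv1d expects (batch, channels, frames)
        g = self.net(x)
        return g.transpose(1, 2)                       # back to (batch, frames, hidden_dim)

tcn = SimpleTCN(in_dim=768, hidden_dim=256)
g_i = tcn(torch.randn(8, 300, 768))                    # a batch of 8 segments of 300 frames each
```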

3.3 Temporal Encoder

We utilize a transformer encoder to model the temporal information in the video segment as well, which can be formulated as follows:

$h_i = \mathrm{TransformerEncoder}(g_i).$

The Transformer encoder only models the context within a single segment, thereby ignoring the dependencies between frames across segments. To account for the context of different frames, overlapping between consecutive segments can be employed, thus enabling the capture of the dependencies between frames across segments, which means $s \leq w$.

We use another type of neural network that can learn the relationships and interactions among the features within each segment. The Transformer encoder takes the input feature vector and applies a series of self-attention layers and feed-forward layers to produce an output feature vector with more semantic meaning and representation power. For example, the Transformer encoder can learn how different parts of the image relate to each other within each segment of the video. However, it does not consider how different segments of the video are connected or influence each other. To address this, we let consecutive segments overlap so that some frames are shared by two or more segments, capturing information about how neighboring segments affect each other. The degree of overlap is controlled by the two parameters introduced above: the window size $w$ (the length of a segment) and the stride $s$. Whenever $s$ is smaller than $w$, consecutive segments overlap by $w - s$ frames.
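The per-segment temporal encoder can be sketched with PyTorch's built-in nn.TransformerEncoder as below; the model dimension, head count, and layer count are assumptions, not values reported in the paper.

```python
# Minimal per-segment temporal encoder using PyTorch's built-in Transformer encoder.
import torch
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
temporal_encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

g_i = torch.randn(8, 300, 256)     # (batch of segments, frames per segment w=300, feature dim)
h_i = temporal_encoder(g_i)        # self-attention is applied within each segment only
```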

3.3.1 Prediction

After the temporal encoder, the features $h_i$ are finally fed into an MLP for regression, which can be formulated as follows:

$y_i = \mathrm{MLP}(h_i)$

where $y_i$ denotes the predictions for the $i$-th segment and $l$ is the segment length. For the VA challenge, $y_i \in \mathbb{R}^{l \times 2}$; for the Expr challenge, $y_i \in \mathbb{R}^{l \times 8}$; for the AU challenge, $y_i \in \mathbb{R}^{l \times 12}$.

The prediction vector contains the values we want to estimate for each segment. The MLP consists of several layers of neurons that can learn non-linear transformations of the input, and it is trained to minimize the error between the prediction vector and the ground truth vector, which contains the true values for each segment. Depending on the challenge, the ground truth and prediction vectors take different forms. For the VA challenge, we predict two values: valence and arousal. Valence measures how positive or negative an emotion is; arousal measures how active or passive an emotion is. For the Expr challenge, we predict eight values: one for each basic expression (anger, disgust, fear, happiness, sadness, and surprise) plus neutral and other expressions. For the AU challenge, we predict twelve values: one for each action unit (AU1, AU2, AU4, AU6, AU7, AU10, AU12, AU15, AU23, AU24, AU25, AU26).
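As a sketch of the task-specific heads, only the output sizes 2, 8, and 12 come from the text; the input and hidden widths below are assumptions.

```python
# Illustrative prediction heads for the three tasks.
import torch.nn as nn

def make_head(in_dim: int, out_dim: int) -> nn.Sequential:
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

va_head   = make_head(256, 2)    # valence, arousal per frame
expr_head = make_head(256, 8)    # 6 basic expressions + neutral + other
au_head   = make_head(256, 12)   # AU1, AU2, AU4, AU6, AU7, AU10, AU12, AU15, AU23, AU24, AU25, AU26
```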

3.4 Loss Functions

VA challenge: We use the Concordance Correlation Coefficient (CCC) between the predictions and the ground truth labels as the measure, which is defined as in Eq 1. It measures the correlation between two sequences $x$ and $y$ and ranges between -1 and 1, where -1 means perfect anti-correlation, 0 means no correlation, and 1 means perfect correlation. The loss is calculated as in Eq 2.

$CCC(x,y) = \dfrac{2\operatorname{cov}(x,y)}{\sigma_x^2 + \sigma_y^2 + (\mu_x - \mu_y)^2}, \quad \text{where } \operatorname{cov}(x,y) = \sum (x - \mu_x)(y - \mu_y)$ (1)
$\mathcal{L}_{\text{VA}} = 1 - CCC$ (2)
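A direct implementation of Eq 1 and Eq 2 for a single emotion dimension might look as follows; here both the covariance and the variances are computed as means over frames, so the normalization constant cancels in the ratio.

```python
# CCC loss for one emotion dimension: x are predictions, y are labels over a batch of frames.
import torch

def ccc_loss(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(unbiased=False), y.var(unbiased=False)
    cov = ((x - mu_x) * (y - mu_y)).mean()
    ccc = 2 * cov / (var_x + var_y + (mu_x - mu_y) ** 2)
    return 1.0 - ccc
```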

Expr challenge: We use the cross-entropy loss as the loss function, which is defined as in Eq 3.

$\mathcal{L}_{\text{Expr}} = -\frac{1}{N}\sum_{i}\sum_{c=1}^{M} y_{ic}\log(p_{ic})$ (3)

where $y_{ic}$ is a binary indicator (0 or 1) of whether class $c$ is the correct classification for observation $i$, $p_{ic}$ is the predicted probability of observation $i$ belonging to class $c$, and $M$ is the number of classes. The multi-class cross-entropy loss measures how well the model predicts the true class probabilities for a given observation; it penalizes wrong predictions by taking the logarithm of the predicted probabilities, and the lower the loss, the better the model.
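In PyTorch, Eq 3 corresponds to the standard nn.CrossEntropyLoss applied to the per-frame logits; the batch size below is illustrative.

```python
# Multi-class cross-entropy over 8 expression classes, computed directly from logits.
import torch
import torch.nn as nn

logits = torch.randn(32, 8)            # a batch of 32 frames, 8 expression classes
labels = torch.randint(0, 8, (32,))    # integer class labels
loss = nn.CrossEntropyLoss()(logits, labels)
```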

AU challenge: We employ BCEWithLogitsLoss, which combines a sigmoid layer with the binary cross-entropy loss, as the loss function; it is defined as in Eq 4.

$\mathcal{L}_{\text{AU}} = -\frac{1}{N}\sum_{i}\left[\,y_i\log(\sigma(x_i)) + (1 - y_i)\log(1 - \sigma(x_i))\,\right]$ (4)

where $N$ is the number of samples, $y_i$ is the target label for sample $i$, $x_i$ is the input logit for sample $i$, and $\sigma$ is the sigmoid function. The advantage of using BCEWithLogitsLoss over BCELoss with a separate sigmoid is that it avoids numerical instability and improves performance.
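A minimal usage example for the multi-label AU targets (the batch size is illustrative; the 12 action units follow the text):

```python
# BCEWithLogitsLoss fuses the sigmoid with the binary cross-entropy via the log-sum-exp
# trick, which is the numerical-stability advantage mentioned above.
import torch
import torch.nn as nn

logits  = torch.randn(32, 12)                        # 12 action-unit logits per frame
targets = torch.randint(0, 2, (32, 12)).float()      # multi-label 0/1 targets
loss = nn.BCEWithLogitsLoss()(logits, targets)
```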

| Task    | Evaluation Metric | Method   | Fold 0 | Fold 1 | Fold 2 | Fold 3 | Fold 4 |
|---------|-------------------|----------|--------|--------|--------|--------|--------|
| Valence | CCC               | Ours     | 0.5385 | 0.6404 | 0.4926 | 0.5863 | 0.5403 |
|         |                   | Baseline | 0.24   | -      | -      | -      | -      |
| Arousal | CCC               | Ours     | 0.6224 | 0.5651 | 0.6015 | 0.6812 | 0.6342 |
|         |                   | Baseline | 0.20   | -      | -      | -      | -      |
| Expr    | F1-score          | Ours     | 0.4561 | 0.4478 | 0.4463 | 0.4583 | 0.4506 |
|         |                   | Baseline | 0.23   | -      | -      | -      | -      |
| AU      | F1-score          | Ours     | 0.5762 | 0.5566 | 0.5018 | 0.5556 | 0.5819 |
|         |                   | Baseline | 0.39   | -      | -      | -      | -      |

Table 1: Results for the five folds of the three tasks

4 Experiments and Results

4.1 Experiments Settings

All models were trained on two NVIDIA GeForce RTX 3090 GPUs, each with 24 GB of memory.

4.1.1 MAE Pre-training

We conducted an extensive pre-training of the MAE model on large-scale facial image datasets over 500 epochs, employing the AdamW optimizer. During this phase, we maintained a batch size of 1024 and set the learning rate to 0.0005. Subsequently, in the fine-tuning stage of MAE, we adjusted the batch size to 256 and lowered the learning rate to 0.0001, still leveraging the AdamW optimizer.

4.1.2 Task Training

We used the AdamW optimizer and a cosine learning rate schedule with a warmup over the first epoch. The learning rate was set to 3e-5, the weight decay to 1e-5, the dropout probability to 0.3, and the batch size to 32.
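A hedged sketch of this optimization setup is shown below; the number of training epochs, steps per epoch, and the placeholder model are assumptions, not reported values.

```python
# AdamW with linear warmup over the first epoch followed by cosine decay.
import math
import torch

model = torch.nn.Linear(768, 2)                      # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5, weight_decay=1e-5)

steps_per_epoch, num_epochs = 100, 30                # illustrative values
warmup_steps = steps_per_epoch                       # warm up over the first epoch
total_steps = steps_per_epoch * num_epochs

def lr_lambda(step: int) -> float:
    if step < warmup_steps:                          # linear warmup
        return (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay afterwards

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
# call scheduler.step() once per optimizer step
```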

Videos were split using a segment window of $w=300$ and a stride of $s=200$ for all three challenges. This meant we divided each video into segments of 300 frames with an overlap of 100 frames between consecutive segments. This approach helped capture the temporal dynamics of facial expressions and emotions.

4.2 Overall Results

Table 1 displays the experimental results of our proposed method on the validation sets of the VA, Expr, and AU Challenges. The Concordance Correlation Coefficient (CCC) is used as the evaluation metric for valence and arousal prediction, and the F1-score is used to evaluate the results of the Expr and AU challenges. As the table shows, our proposed method outperforms the baseline significantly. These results demonstrate that our approach, which combines MAE-based visual features with TCN and Transformer-based temporal modeling, effectively exploits temporal visual information for improved emotion recognition accuracy on this dataset.

5 Conclusion

Our proposed approach combines MAE-based visual features with a Temporal Convolutional Network (TCN) and a Transformer-based temporal encoder to improve the accuracy of emotion recognition. The TCN captures relationships at low-, intermediate-, and high-level time scales, while the Transformer encoder models the temporal context within each video segment. We conducted our experiments on the Aff-Wild2 dataset, a widely used benchmark for emotion recognition. Our results show that our method significantly outperforms the baseline.

References

  • Fan et al. [2021] Jin Fan, Ke Zhang, Yipan Huang, Yifei Zhu, and Baiping Chen. Parallel spatio-temporal attention-based tcn for multivariate time series prediction. Neural Computing and Applications, pages 1–10, 2021.
  • Jin et al. [2021] Yue Jin, Tianqing Zheng, Chao Gao, and Guoqiang Xu. A multi-modal and multi-task learning method for action unit and expression recognition. arXiv preprint arXiv:2107.04187, 2021.
  • Kollias [2022] Dimitrios Kollias. Abaw: Learning from synthetic data & multi-task learning challenges. arXiv preprint arXiv:2207.01138, 2022.
  • Kollias [2023] Dimitrios Kollias. Multi-label compound expression recognition: C-expr database & network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5589–5598, 2023.
  • Kollias and Zafeiriou [2019] Dimitrios Kollias and Stefanos Zafeiriou. Expression, affect, action unit recognition: Aff-wild2, multi-task learning and arcface. arXiv preprint arXiv:1910.04855, 2019.
  • Kollias and Zafeiriou [2021a] Dimitrios Kollias and Stefanos Zafeiriou. Affect analysis in-the-wild: Valence-arousal, expressions, action units and a unified framework. arXiv preprint arXiv:2103.15792, 2021a.
  • Kollias and Zafeiriou [2021b] Dimitrios Kollias and Stefanos Zafeiriou. Analysing affective behavior in the second abaw2 competition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3652–3660, 2021b.
  • Kollias et al. [2019a] Dimitrios Kollias, Viktoriia Sharmanska, and Stefanos Zafeiriou. Face behavior a la carte: Expressions, affect and action units in a single network. arXiv preprint arXiv:1910.11111, 2019a.
  • Kollias et al. [2019b] Dimitrios Kollias, Panagiotis Tzirakis, Mihalis A Nicolaou, Athanasios Papaioannou, Guoying Zhao, Björn Schuller, Irene Kotsia, and Stefanos Zafeiriou. Deep affect prediction in-the-wild: Aff-wild database and challenge, deep architectures, and beyond. International Journal of Computer Vision, pages 1–23, 2019b.
  • Kollias et al. [2020] D Kollias, A Schulc, E Hajiyev, and S Zafeiriou. Analysing affective behavior in the first abaw 2020 competition. In 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020)(FG), pages 794–800, 2020.
  • Kollias et al. [2021] Dimitrios Kollias, Viktoriia Sharmanska, and Stefanos Zafeiriou. Distribution matching for heterogeneous multi-task learning: a large-scale face study. arXiv preprint arXiv:2105.03790, 2021.
  • Kollias et al. [2023a] Dimitrios Kollias, Panagiotis Tzirakis, Alice Baird, Alan Cowen, and Stefanos Zafeiriou. Abaw: Valence-arousal estimation, expression recognition, action unit detection & emotional reaction intensity estimation challenges, 2023a.
  • Kollias et al. [2023b] Dimitrios Kollias, Panagiotis Tzirakis, Alice Baird, Alan Cowen, and Stefanos Zafeiriou. Abaw: Valence-arousal estimation, expression recognition, action unit detection & emotional reaction intensity estimation challenges. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5888–5897, 2023b.
  • Kollias et al. [2024] Dimitrios Kollias, Panagiotis Tzirakis, Alan Cowen, Stefanos Zafeiriou, Chunchang Shao, and Guanyu Hu. The 6th affective behavior analysis in-the-wild (abaw) competition. arXiv preprint arXiv:2402.19344, 2024.
  • Kuhnke et al. [2020] Felix Kuhnke, Lars Rumberg, and Jörn Ostermann. Two-stream aural-visual affect analysis in the wild. In 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), pages 600–605. IEEE, 2020.
  • Lea et al. [2016] Colin Lea, Rene Vidal, Austin Reiter, and Gregory D Hager. Temporal convolutional networks: A unified approach to action segmentation. In Computer Vision–ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part III 14, pages 47–54. Springer, 2016.
  • Liu et al. [2023] Chuanhe Liu, Xinjie Zhang, Xiaolong Liu, Tenggan Zhang, Liyu Meng, Yuchen Liu, Yuanyuan Deng, and Wenqiang Jiang. Multi-modal expression recognition with ensemble method. arXiv preprint arXiv:2303.10033, 2023.
  • Miriam Jacob and Stenger [2021] Geethu Miriam Jacob and Björn Stenger. Facial action unit detection with transformers. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7676–7685, 2021.
  • Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
  • Zafeiriou et al. [2017] Stefanos Zafeiriou, Dimitrios Kollias, Mihalis A Nicolaou, Athanasios Papaioannou, Guoying Zhao, and Irene Kotsia. Aff-wild: Valence and arousal ‘in-the-wild’ challenge. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on, pages 1980–1987. IEEE, 2017.
  • Zhang et al. [2023] Wei Zhang, Bowen Ma, Feng Qiu, and Yu Ding. Multi-modal facial affective analysis based on masked autoencoder. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5792–5801, 2023.
  • Zhao and Liu [2021] Zengqun Zhao and Qingshan Liu. Former-dfer: Dynamic facial expression recognition transformer. In Proceedings of the 29th ACM International Conference on Multimedia, pages 1553–1561, 2021.