
M3V: A multi-modal multi-view approach for Device-Directed Speech Detection

Abstract

With the goal of more natural and human-like interaction with virtual voice assistants, recent research has focused on a full-duplex interaction mode that does not rely on repeated wake-up words. This requires that, in scenes with complex sound sources, the voice assistant classify utterances as device-directed or non-device-directed. The dual-encoder structure, jointly modeling text and speech, has become the paradigm for device-directed speech detection. In practice, however, these models often produce incorrect predictions for unaligned input pairs due to the unavoidable errors of automatic speech recognition (ASR). To address this challenge, we propose M3V, a multi-modal multi-view approach for device-directed speech detection, which frames the problem as a multi-view learning task and introduces unimodal views and a text-audio alignment view into the network in addition to the multi-modal view. Experimental results show that M3V significantly outperforms models trained with only a single modality or multi-modality alone and, for the first time, surpasses human judgment performance on ASR error data.

Index Terms—  virtual assistants, device-directed speech detection, multi-modal, multi-view

1 Introduction

Virtual assistants (VAs) are becoming increasingly prominent in all aspects of our lives; representative examples include the mobile phone assistant Siri, the home smart speaker Alexa, and the in-vehicle voice assistant NOMI. Users typically trigger an interaction via a wake-up word, such as “Hi NOMI”, or by pressing a physical button on the smart device [1]. However, such an interaction is unnatural from a human dialogue perspective, since human voice interaction does not rely on wake-up words or physical triggers all the time. To make the interaction smoother and more human-like, many intelligent assistants have begun to explore a full-duplex interaction mode without repeated wake-up words [2].

In these cases, an obvious challenge is that the sound sources in the scene are complex. For example, in the in-vehicle scenario, the audio received by the intelligent device may come from the user addressing the device, from conversation between users, or from the device's own voice. Therefore, the VA must distinguish between device-directed and non-device-directed speech [3].

Many VAs address the above problems through multi-modal approaches that combine speech and text. Mallidi et al. [4] use acoustic features, ASR decoder features, and the ASR 1-best hypothesis to train a classifier that distinguishes device-directed queries from background speech in interactions with voice assistants. Gillespie et al. [5] combine acoustic features with word-level semantic lexical features to improve directedness classification. Vilaysouk et al. [6] integrate ASR decoder-based features and word embeddings as additional inputs to the final classification stage of the model. The dual-encoder structure, jointly modeling text and speech, has become the paradigm for device-directed speech detection.

Fig. 1: Overview architecture of M3V. Given a text-audio pair as input, M3V projects it into four views: the unimodal views ($V_a$, $V_t$), the multi-modal view ($V_m$), and the aligned view ($V_{align}$). The predicted probabilities of these views are arbitrated by a policy decision module.

Despite these advances, a crucial limitation of the above models is that they are mostly trained with supervised learning objectives only: each text-audio pair is optimized against labeled ground truth, so the models are never exposed to incorrect pairs during training, which hurts generalization. This problem is even more severe in real-world VA applications, since the input text is produced by ASR. Due to ASR error propagation, these models often produce incorrect predictions for unaligned input pairs. Additionally, the persistent modality gap between heterogeneous modalities further amplifies the impact of ASR transcription errors, increasing the difficulty of analyzing multi-modal data.

Given the above problems, we propose a multi-modal multi-view approach (M3V) for device-directed speech detection, which reduces the false dependence on ASR. Specifically, we use the pre-trained language model GPT2 [7] to model the text and the pre-trained model Wav2vec2 [8] to model the audio, so as to make full use of the information in each modality. We then frame the task as a multi-view learning problem and induce text-audio alignment information into the network using a contrastive loss function. In particular, we obtain multiple views, including unimodal, multi-modal, and alignment information. These views provide comprehensive, multi-faceted information that the decision-making module can combine to eliminate the influence of ASR errors. Experimental results show that M3V significantly outperforms models trained with only a single modality or multi-modality alone and surpasses human judgment performance for the first time.

2 Method

We propose a new framework that judges whether an utterance is device-directed from multiple modalities and multiple views. The architecture of the model is shown in Figure 1. M3V consists of two main stages: multi-modal learning based on the device-directed detection task, and multi-view learning that induces text-audio alignment information into the network through a contrastive loss function, yielding four views of information. In addition, M3V contains a policy decision module with two policies that use the four views for a comprehensive evaluation in downstream tasks. We describe each component in detail in this section.

2.1 Multi-Modal Learning

In multi-modal learning, we use text-audio pairs as the input of the model. Let the processed audio be $X_a$ and the text be $X_t$. Given a batch of $N$ audio-text pairs $\{X^{(i)}_a, X^{(i)}_t\}$, where $i \in \{1, 2, ..., N\}$, the task is a binary classification problem: classify each pair as device-directed or non-device-directed.

From each pair, the audio and the text are passed through an audio encoder and a text encoder, respectively. We choose Wav2vec2 [8] as the audio encoder and GPT2 [7], a transformer-based language model, as the text encoder. Let $f_a$ denote the audio encoder and $f_t$ the text encoder. For an audio-text pair in a batch:

$A_i = \text{Pooling}(f_a(X^{(i)}_a)); \quad T_i = \text{Pooling}(f_t(X^{(i)}_t)), \quad i \in [1, N]$  (1)

where $A_i \in \mathbb{R}^d$ is the audio representation of dimensionality $d$ and $T_i \in \mathbb{R}^v$ is the text representation of dimensionality $v$. A pooling layer (e.g., mean-pooling) is applied to aggregate the frame-level features into an utterance-level representation.
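As a concrete illustration of Eq. (1), the sketch below mean-pools frame-level encoder outputs into utterance-level vectors. The HuggingFace checkpoints facebook/wav2vec2-base and gpt2, and the masking details, are our own assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of Eq. (1): utterance-level pooling of the two encoders.
# Checkpoint names and masking details are illustrative assumptions.
import torch
from transformers import Wav2Vec2Model, GPT2Model

audio_encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")  # f_a
text_encoder = GPT2Model.from_pretrained("gpt2")                         # f_t

def pool(hidden, mask=None):
    """Mean-pool frame/token features into one utterance-level vector."""
    if mask is None:
        return hidden.mean(dim=1)
    mask = mask.unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)

def encode(waveforms, input_ids, text_mask):
    # Padding mask for the audio branch is omitted here for brevity.
    A = pool(audio_encoder(input_values=waveforms).last_hidden_state)        # A_i in R^d
    T = pool(text_encoder(input_ids=input_ids,
                          attention_mask=text_mask).last_hidden_state,
             text_mask)                                                      # T_i in R^v
    return A, T
```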

By combining the text representation and the audio representation, we obtain the multi-modal representation $M_i$:

$M_i = \text{Concat}(A_i, T_i), \quad i \in [1, N]$  (2)

Then we feed the text representation $T_i$, the audio representation $A_i$, and the multi-modal representation $M_i$ into their respective feed-forward networks (FFN) and, after a softmax, obtain the multi-modal view result $V_m$ and the unimodal view results $V_a$ and $V_t$:

$V_a = \text{FFN}_a(A_i); \quad V_t = \text{FFN}_t(T_i); \quad V_m = \text{FFN}_m(M_i)$  (3)
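A minimal sketch of Eqs. (2)-(3), assuming simple two-layer FFN heads; the hidden size and head depth are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class MultiViewHeads(nn.Module):
    """Sketch of Eqs. (2)-(3): concatenation fusion plus per-view FFN classifiers."""
    def __init__(self, d_audio, d_text, hidden=256, n_classes=2):
        super().__init__()
        def ffn(d_in):
            return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))
        self.ffn_a = ffn(d_audio)
        self.ffn_t = ffn(d_text)
        self.ffn_m = ffn(d_audio + d_text)

    def forward(self, A, T):
        M = torch.cat([A, T], dim=-1)                 # Eq. (2): M_i = Concat(A_i, T_i)
        V_a = torch.softmax(self.ffn_a(A), dim=-1)    # audio view
        V_t = torch.softmax(self.ffn_t(T), dim=-1)    # text view
        V_m = torch.softmax(self.ffn_m(M), dim=-1)    # multi-modal view
        return V_a, V_t, V_m
```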

2.2 Multi-view Learning

In the previous section, we introduced M3V from the multi-modal view, but in practical applications we found it to be limited: due to the unavoidable errors of ASR, such models often produce incorrect predictions for unaligned input pairs. To solve this problem, we frame the problem as a multi-view learning task that introduces unimodal views and a text-audio alignment view into the network besides the multi-modal view. As shown in Fig. 1, the left part of the model is the multi-modal learning network, which produces the multi-modal view and the unimodal view information. The right part is a contrastive model, which infers whether the text and the audio are consistent by exploiting text and audio information that is only available during training. For the contrastive model, we want the text and audio vectors of the same utterance to be close to each other. To this end, we follow CLAP [9] and employ the InfoNCE loss [10] to measure dependencies between the audio and text modalities. Denoting the audio and text representations of a mini-batch as $A_i$ and $T_i$, the loss of the contrastive model $\mathcal{L}_c$ is defined as:

$\textbf{z}^{(i)}_a = \phi_a(A_i), \quad \textbf{z}^{(i)}_t = \phi_t(T_i)$  (4)
$\mathcal{L}_c = -\log \dfrac{\exp(\mathrm{sim}(\textbf{z}^{(i)}_a, \textbf{z}^{(i)}_t)/\tau)}{\sum_{\textbf{z}^{(j)}_t \in \mathcal{B}} \exp(\mathrm{sim}(\textbf{z}^{(i)}_a, \textbf{z}^{(j)}_t)/\tau)}$  (5)

where $\mathcal{B} = \{\textbf{z}^{(1)}_\beta, \textbf{z}^{(2)}_\beta, ..., \textbf{z}^{(N)}_\beta\}$ ($\beta \in \{a, t\}$) is a set of hidden representations containing one positive sample $\textbf{z}^{(i)}_\beta$ and $N-1$ negative samples, $\phi_a$ and $\phi_t$ are learned projection functions, $\mathrm{sim}(\cdot)$ is a similarity function (e.g., the dot product), and $\tau$ is a temperature parameter that scales the range of the logits. After the above learning, we have four views, which jointly inform the downstream task decision-making.
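The in-batch InfoNCE loss of Eqs. (4)-(5) can be sketched as follows. The projection heads, the cosine-style normalization, and the temperature value are illustrative assumptions; the paper only specifies a similarity function (e.g., the dot product) and a temperature $\tau$.

```python
import torch
import torch.nn.functional as F

def info_nce(A, T, proj_a, proj_t, temperature=0.07):
    """Sketch of Eqs. (4)-(5): in-batch InfoNCE between audio and text.
    proj_a / proj_t are learned projection heads (phi_a, phi_t)."""
    z_a = F.normalize(proj_a(A), dim=-1)   # z_a^(i), normalized (cosine similarity assumed)
    z_t = F.normalize(proj_t(T), dim=-1)   # z_t^(i)
    logits = z_a @ z_t.t() / temperature   # sim(z_a^(i), z_t^(j)) / tau for all i, j
    targets = torch.arange(A.size(0), device=A.device)  # matched pairs lie on the diagonal
    return F.cross_entropy(logits, targets)
```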

2.3 Adaptive Learning

We use two disjoint networks that share their learned information through the loss function [11]. The overall model is trained by minimizing:

$\mathcal{L} = \lambda\mathcal{L}_a + \gamma\mathcal{L}_t + \alpha\mathcal{L}_m + \beta\mathcal{L}_c$  (6)

where $\lambda$, $\gamma$, $\alpha$, $\beta$ are interaction weights that determine the contribution of each component to the overall loss $\mathcal{L}$. Each of the component losses $\mathcal{L}_a$, $\mathcal{L}_t$, and $\mathcal{L}_m$ uses cross-entropy and is responsible for achieving the desired subspace properties. Specifically, we use the automatic weighted loss proposed by Liebel and Körner [12].
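A sketch of the automatically weighted objective in Eq. (6), following the uncertainty-style weighting of Liebel and Körner [12]; the exact parameterization used in the paper is our assumption.

```python
import torch
import torch.nn as nn

class AutoWeightedLoss(nn.Module):
    """Learned task weights in the spirit of Liebel & Koerner [12]:
    L = sum_i L_i / (2 * sigma_i^2) + log(1 + sigma_i^2)."""
    def __init__(self, n_tasks=4):
        super().__init__()
        self.sigma = nn.Parameter(torch.ones(n_tasks))  # one learnable weight per loss term

    def forward(self, losses):
        # losses = [L_a, L_t, L_m, L_c] from Eq. (6)
        total = 0.0
        for i, loss in enumerate(losses):
            total = total + loss / (2 * self.sigma[i] ** 2) \
                    + torch.log(1 + self.sigma[i] ** 2)
        return total
```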

2.4 Policy Decision Module

After training the above model, during inference we obtain four probability scores, one from each view. To better adapt to specific downstream tasks, we propose two applied strategies to improve the overall accuracy of device-directed detection. In the final results, $True$ means device-directed and $False$ means non-device-directed.

Policy 1: In Policy 1, we set five thresholds by evaluating on the validation set: $T_{align\text{-}low}$, $T_{align\text{-}high}$, $T_{audio}$, $T_{text}$, and $T_{multi}$. When the alignment score $S_{align}$ is greater than the high alignment threshold $T_{align\text{-}high}$, the algorithm trusts the result of the text head. When $S_{align}$ is less than the low alignment threshold $T_{align\text{-}low}$, the algorithm trusts the result of the audio head. When the alignment score lies between $T_{align\text{-}low}$ and $T_{align\text{-}high}$, the algorithm determines the final result according to the multi-modal fusion head.

Algorithm 1 The final decision of the model under Policy 1.
Input: The model outputs $V_{align}$, $V_{audio}$, $V_{text}$, $V_{multi}$, and the thresholds $T_{align\text{-}low}$, $T_{align\text{-}high}$, $T_{audio}$, $T_{text}$, $T_{multi}$
Output: The label after Policy 1.
if $V_{align} > T_{align\text{-}high}$ then
     return $V_{text} > T_{text}$
else if $V_{align} < T_{align\text{-}low}$ then
     return $V_{audio} > T_{audio}$
else
     return $V_{multi} > T_{multi}$
end if
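For concreteness, Algorithm 1's threshold arbitration can also be written as a small function; the threshold names follow the algorithm, and the values are assumed to be tuned on the validation set.

```python
def policy1(v_align, v_audio, v_text, v_multi, thr):
    """Runnable counterpart to Algorithm 1; `thr` is a dict of the five
    validation-tuned thresholds (align_high, align_low, audio, text, multi)."""
    if v_align > thr["align_high"]:
        return v_text > thr["text"]      # trust the text head
    elif v_align < thr["align_low"]:
        return v_audio > thr["audio"]    # trust the audio head
    else:
        return v_multi > thr["multi"]    # fall back to the multi-modal fusion head
```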
Algorithm 2 The final decision of the model under Policy 2.
Input: The model outputs $V_{align}$, $V_{audio}$, $V_{text}$, $V_{multi}$, and the fusion threshold $T_{fusion}$
Output: The label after Policy 2.
$V_{fusion}$ = SVM($V_{align}$, $V_{audio}$, $V_{text}$, $V_{multi}$)
if $V_{fusion} > T_{fusion}$ then
     return true
else
     return false
end if

Policy 2: In Policy 2, we train a classifier that takes the scores of the four views as input and determines whether the utterance is device-directed by comparing the classifier's prediction score $V_{fusion}$ with the fusion threshold $T_{fusion}$. We tried several machine learning classifiers, such as GBDT and SVM, and finally chose the SVM, which performed best.
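A minimal sketch of Policy 2's fusion step with scikit-learn, assuming an RBF-kernel SVM with probability outputs; the kernel choice and threshold handling are illustrative, not the paper's exact setup.

```python
# Sketch of Policy 2: fuse the four view scores with an SVM.
import numpy as np
from sklearn.svm import SVC

def train_fusion(valid_scores, valid_labels):
    # valid_scores: (N, 4) array of [V_align, V_audio, V_text, V_multi] on the validation set
    clf = SVC(kernel="rbf", probability=True)  # kernel choice is an assumption
    clf.fit(valid_scores, valid_labels)
    return clf

def policy2(clf, scores, t_fusion=0.5):
    # scores: the four view probabilities for one or more utterances
    v_fusion = clf.predict_proba(np.atleast_2d(scores))[:, 1]
    return v_fusion > t_fusion  # True: device-directed, False: non-device-directed
```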

3 Experiments

3.1 Datasets

We use real recordings of natural human interactions with the in-vehicle virtual assistant NOMI for training and testing the models. The training set consists of 340 hours of audio comprising 500k utterances. The normal test set consists of 3.6 hours of audio with 48k utterances. In addition, to show that our model remains beneficial when the ASR system is weak, we selected a batch of ASR error data from real in-vehicle recordings, consisting of 560 utterances with a character error rate (CER) of 55.60%. Model performance is evaluated in terms of equal error rate (EER) and accuracy.
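For reference, a common way to compute the reported EER is from the ROC curve, as in this sketch; it is our own illustration of the metric, not code from the paper.

```python
# EER: the operating point where false-acceptance and false-rejection rates meet.
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    fpr, tpr, _ = roc_curve(labels, scores)  # false-positive and true-positive rates
    fnr = 1 - tpr                            # false-negative (miss) rate
    idx = np.nanargmin(np.abs(fnr - fpr))    # point where the two rates are closest
    return (fpr[idx] + fnr[idx]) / 2
```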

Modality | View   | Model       | Align ACC | Text ACC | Audio ACC | Merge ACC | EER   | ▽
Single   | Single | GPT2        | ——        | 91.41    | ——        | ——        | 10.61 | ——
Single   | Single | Wav2vec2    | ——        | ——       | 92.41     | ——        | 12.12 | ——
Multi    | Single | Multi-modal | ——        | ——       | ——        | 95.27     | 6.59  | ——
Multi    | Multi  | M3V Model   | 85.48     | 91.43    | 95.62     | 96.27     | 4.94  | ↑ 1.65
Table 1: Performance comparison of M3V. The column marked ▽ shows the improvement of our model over the Multi-modal baseline in EER. A smaller EER indicates better model performance.

3.2 Multi-Modal and Multi-View Experiments

For comparison, we first train models with each single modality separately. We select the backbone from several commonly used encoders: text encoders including GPT2 [7], BERT [13], and RoBERTa [14], and audio encoders including Wav2vec2 [8], Whisper [15], and Speech-Transformer [16]. For each modality, we choose the encoder with the highest accuracy on the training data: Wav2vec2 to model the sequence of speech frames for the audio modality, and the transformer-based GPT2 for the text modality. We then combine text and audio by training the Multi-modal model as the baseline for the M3V comparison.

As shown in Table 1, by combining speech and recognized text, the multi-modal approach significantly outperforms the unimodal (text-only and audio-only) approaches, achieving 95.27% accuracy and 6.59% EER. Compared with this multi-modal baseline, the proposed approach outperforms direct concatenation, showing the advantage of the learned alignment between speech and text.

Our multi-view experiments use utterance-level representations to calculate the contrastive loss. Because the two disjoint networks share their learned information through the loss function, alignment information is learned by the multi-modal fusion head, resulting in a higher accuracy of 96.27%. On this basis, the unimodal and aligned view results are also output. The results from these multiple views are then used by the downstream strategies.

3.3 Policy Decision Experiments

In the policy evaluation, we select the 560 utterances with ASR errors to evaluate the generalization of the model in a challenging scenario. To quantify the improvement of the method, we first measured human performance: five annotators labeled these data, and we took the average as the accuracy of human performance.

As shown in Table 2, the accuracy of manual annotation is 92.60%. By combining speech and text, the multi-modal model improves on manual annotation by 0.79%, and the M3V model improves by 2.40%, reaching an accuracy of 95% on the ASR error data.

Model             | ACC   | ▽
Human performance | 92.60 | ——
Multi-modal       | 93.39 | ↑ 0.79
M3V Model         | 95.00 | ↑ 2.40
+ Policy I        | 95.54 | ↑ 2.94
+ Policy II       | 95.71 | ↑ 3.11
Table 2: Performance of the policies on the ASR error dataset. The column marked ▽ shows each model's improvement in accuracy over manual annotation.

By feeding the multi-view results of the model into the policy decision module, the accuracy of the task is further improved. Compared with manual annotation, Policy I improves accuracy by 2.94%, while Policy II improves it by 3.11%, achieving the best result on the ASR error dataset. These experiments show that combining the model with a decision strategy reduces the dependence on ASR quality and improves model robustness.

4 CONCLUSION

In this paper, we propose M3V, a multi-modal and multi-view approach for device-directed speech detection. The method not only learns the unimodal information and the fused multi-modal information, but also learns the alignment information between text and audio through disjoint networks. We show that the M3V outputs can be used for downstream decision-making and perform well on ASR error data compared with the unimodal models and the direct-concatenation multi-modal model. The model achieves an accuracy of 96.41% on the normal test set and 95.71% on the ASR error test set. In particular, M3V surpasses human judgment performance for the first time. As a continuation of this work, we are considering incorporating more view information into the model, such as higher-order dialogue features, to improve accuracy.

References

  • [1] Vineet Garg, Ognjen Rudovic, Pranay Dighe, Ahmed H Abdelaziz, Erik Marchi, Saurabh Adya, Chandra Dhir, and Ahmed Tewfik, “Device-directed speech detection: Regularization via distillation for weakly-supervised models,” arXiv preprint arXiv:2203.15975, 2022.
  • [2] Che-Wei Huang, Roland Maas, Sri Harish Mallidi, and Björn Hoffmeister, “A study for improving device-directed speech detection toward frictionless human-machine interaction.,” in INTERSPEECH, 2019, pp. 3342–3346.
  • [3] Ognjen Oggi Rudovic, Akanksha Bindal, Vineet Garg, Pramod Simha, Pranay Dighe, and Sachin Kajarekar, “Streaming on-device detection of device directed speech from voice and touch-based invocation,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022, pp. 491–495.
  • [4] Sri Harish Mallidi, Roland Maas, Kyle Goehner, Ariya Rastrow, Spyros Matsoukas, and Björn Hoffmeister, “Device-directed utterance detection,” Proc. Interspeech 2018, pp. 1225–1228, 2018.
  • [5] Kellen Gillespie, Ioannis C Konstantakopoulos, Xingzhi Guo, Vishal Thanvantri Vasudevan, and Abhinav Sethy, “Improving device directedness classification of utterances with semantic lexical features,” in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 7859–7863.
  • [6] Vilayphone Vilaysouk, Amr Nour-Eldin, and Dermot Connolly, “Improving identification of system-directed speech utterances by deep learning of asr-based word embeddings and confidence metrics,” in ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021, pp. 6379–6382.
  • [7] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever, “Language models are unsupervised multitask learners,” OpenAI Blog, vol. 1, no. 8, 2019.
  • [8] Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli, “wav2vec 2.0: A framework for self-supervised learning of speech representations,” Advances in Neural Information Processing Systems, vol. 33, pp. 12449–12460, 2020.
  • [9] Benjamin Elizalde, Soham Deshmukh, Mahmoud Al Ismail, and Huaming Wang, “Clap: Learning audio concepts from natural language supervision,” arXiv preprint arXiv:2206.04769, 2022.
  • [10] Aaron van den Oord, Yazhe Li, and Oriol Vinyals, “Representation learning with contrastive predictive coding,” arXiv preprint arXiv:1807.03748, 2018.
  • [11] Zheng Lian, Ya Li, Jianhua Tao, and Jian Huang, “Speech emotion recognition via contrastive loss under siamese networks,” in Proceedings of the Joint Workshop of the 4th Workshop on Affective Social Multimedia Computing and First Multi-Modal Affective Computing of Large-Scale Multimedia Data, 2018, pp. 21–26.
  • [12] Lukas Liebel and Marco Körner, “Auxiliary tasks in multi-task learning,” arXiv preprint arXiv:1805.06334, 2018.
  • [13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.
  • [14] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov, “Roberta: A robustly optimized bert pretraining approach,” arXiv preprint arXiv:1907.11692, 2019.
  • [15] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever, “Robust speech recognition via large-scale weak supervision,” Technical report, OpenAI, 2022. URL: https://cdn.openai.com/papers/whisper.pdf.
  • [16] Linhao Dong, Shuang Xu, and Bo Xu, “Speech-transformer: a no-recurrence sequence-to-sequence model for speech recognition,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 5884–5888.