
Federated Self-Learning with Weak Supervision for Speech Recognition

Abstract

Automatic speech recognition (ASR) models with a low footprint are increasingly being deployed on edge devices for conversational agents, which enhances privacy. We study the problem of federated continual incremental learning for recurrent neural network-transducer (RNN-T) ASR models in the privacy-enhancing scheme of learning on-device, without access to ground-truth human transcripts or machine transcriptions from a stronger ASR model. In particular, we study the performance of a self-learning based scheme, with a paired teacher model updated through an exponential moving average of ASR models. Further, we propose using possibly noisy weak-supervision signals, such as feedback scores and natural language understanding semantics, determined from user behavior across multiple turns in a session of interactions with the conversational agent. These signals are leveraged in a multi-task policy-gradient training approach to improve the performance of self-learning for ASR. Finally, we show how catastrophic forgetting can be mitigated by combining on-device learning with a memory-replay approach using selected historical datasets. These innovations allow for a 10% relative improvement in WER on new use cases with minimal degradation on other test sets in the absence of strong-supervision signals such as ground-truth transcriptions.

Index Terms: Automatic Speech Recognition, Weak Supervision, Self Learning, Federated Learning

1 Introduction

On-device deployment of voice technologies enables the use of conversational agents in settings without a reliable network connection to the cloud. It enables lower-latency responses by removing the need for utterances to be transmitted to the cloud for processing. Offline use, vehicular control, and healthcare are new use cases within this paradigm. When ASR is deployed on-device, models need to be adapted to acoustic and linguistic content specific to the deployment, as well as to distribution shifts in usage over time. In this work, we look at continually and incrementally updating ASR models under device memory and compute constraints in federated settings, i.e., with privacy-enhancing features where (1) utterances are not transmitted to the cloud, (2) persistent storage of audio is not required, and (3) human ground-truth annotations of the audio need not be obtained.

Privacy-preserving machine learning [1] can enable learning from user data while mitigating privacy risks. Federated learning (FL) [2] is one of the most popular privacy-preserving learning frameworks: models are trained on-device, and data does not leave the edge devices. In FL, model updates from a number of participating devices are aggregated securely on a central server at every round. FL has been demonstrated to perform well in speech applications such as speech recognition [3], keyword spotting [4], and speaker verification [5], among others. Mixed centralized and federated training was explored in [6] and layer-wise representation learning in [7]. However, the aforementioned works train a model from scratch rather than fine-tuning a well-trained model, and they consider static data that does not change across rounds. In contrast, we consider FL settings where the model is initialized from a well-trained model and where data streams onto devices without being persisted across rounds. In [8], the authors study domain adaptation of ASR in a federated setting; we additionally look at incorporating weak supervision to learn from alternate sources of feedback.

Semi-supervised learning (SSL) deals with training and improving ASR using unlabelled audio, such as the audio available on devices. Unsupervised approaches such as data2vec [9] or WavLM [10] use contrastive objective functions to pretrain speech models that are then finetuned. Alternatively, a common paradigm is to use a stronger teacher model to label unlabelled data [11]; however, this approach cannot be applied in the resource-constrained setting of on-device learning. Noisy student learning or iterative pseudo-labelling approaches [12, 13] use the ASR model to self-label clean audio, with the model trained to predict the same label given an augmented version of the audio. The audio can additionally be filtered to exclude utterances on which the model has low confidence. We build on the work in [14], where hybrid HMM-DNN and connectionist temporal classification (CTC) ASR models are updated using a paired teacher model maintained as an exponential moving average of the student model. These methods have not previously been applied to recurrent neural network-transducer (RNN-T) ASR models [15], which are streaming-compatible and widely used across ASR applications.

In this work, we combine self-learning with weak supervision. In conversational agents, users interact across multiple turns in a session. As shown in prior work [16], later interactions can be used to determine whether a request was handled correctly. If a user cancels or repeats their request, dissatisfaction is signalled, and the semantics of the terminal request can be used as feedback for the initial request. Although this is not the ground-truth transcription, we use such signals to update ASR models. Users can also be prompted for an explicit feedback signal, another example of a feedback score. We use the REINFORCE [17, 18, 19, 20] framework to update models using arbitrary rewards.

Contributions: We look at incremental updates to ASR models using unlabelled audio on edge devices under federated, compute, and memory constraints. We show on public and internal datasets that:

  • Self-learning with a paired teacher model, updated through an exponential moving average of the ASR model, can be used to improve the performance of RNN-T by 10% on new use cases;

  • Rehearsal training using historical datasets for generating model updates (pseudo-devices) on the cloud mitigates catastrophic forgetting [21] on other test sets in self-training;

  • Self-learning performance is improved by including weak supervision of NLU semantics or noisy feedback scores integrated through a policy-gradient approach.

2 Methods

2.1 RNN-T ASR model architecture

The RNN-T [15] architecture used for real-time speech recognition consists of a model that predicts the probability $P(\mathbf{y}|\mathbf{x})$ of labels $\mathbf{y}=(y_{1},\ldots,y_{U})$ given acoustic features $\mathbf{x}=(x_{1},\ldots,x_{T})$. It comprises an encoder, a prediction network, and a joint network. The encoder is analogous to an acoustic model: it takes a sequence of acoustic input features and outputs encoded hidden representations. The prediction network corresponds to a language model: it accepts the previous output label predictions and maps them to corresponding hidden representations. The joint network is a feed-forward network that takes both the encoder and prediction network hidden representations and predicts the final output label probabilities with softmax normalization. A model with parameters $w$ produces $m$ hypotheses $(\mathbf{y}_{1},\ldots,\mathbf{y}_{m})$ given input $\mathbf{x}$, with probability $p_{w}(\mathbf{y}_{i}|\mathbf{x})$.
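For concreteness, the sketch below shows how the three components interact in a minimal RNN-T forward pass. This is a PyTorch sketch with layer sizes loosely following Sec. 3; the class name and exact layer arrangement are illustrative, not the production configuration.

```python
import torch
import torch.nn as nn

class RNNT(nn.Module):
    """Minimal RNN-T sketch: encoder (acoustic model analogue), prediction
    network (LM analogue), and a feed-forward joint network with softmax.
    Dimensions loosely follow Sec. 3 but are illustrative only."""
    def __init__(self, feat_dim=192, hidden=1024, embed_dim=512, vocab=2500):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=5, batch_first=True)
        self.embed = nn.Embedding(vocab, embed_dim)
        self.prediction = nn.LSTM(embed_dim, hidden, num_layers=2, batch_first=True)
        self.joint = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.Tanh(), nn.Linear(hidden, vocab))

    def forward(self, x, y_prev):
        f, _ = self.encoder(x)                      # (B, T, H) acoustic encodings
        g, _ = self.prediction(self.embed(y_prev))  # (B, U, H) label encodings
        # The joint network combines every (t, u) pair; softmax over the
        # output labels gives the lattice of probabilities defining P(y | x).
        z = torch.cat([f.unsqueeze(2).expand(-1, -1, g.size(1), -1),
                       g.unsqueeze(1).expand(-1, f.size(1), -1, -1)], dim=-1)
        return self.joint(z).log_softmax(dim=-1)    # (B, T, U, vocab)
```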

Fig. 1: Workflow of federated self-learning with weak supervision: a paired teacher model produces a label with clean audio. The self-label loss enforces consistency for the student with augmented audio. Alternate weak supervision loss minimizes the expected feedback score for a hypothesis inferred from multi-turn session data.

2.2 Federated Self-Learning for ASR

Algorithm 1 Federated self-learning for updating ASR with optional weak supervision, including cloud pseudo-devices

Require: $N$ number of local steps per round; $\eta, \eta_{k}$ global and on-device learning rates; $\delta$ exponential moving average (EMA) rate; $u$ EMA update frequency; $\mathcal{L}(\cdot)$ RNN-T loss function, optionally including a weak supervision loss $\mathcal{L}_{weak}(\cdot)$ if such weak feedback is available on-device; $\mathcal{D}_{ht}$ data with ground-truth labels used for rehearsal training; $\mathcal{C}$ cloud pseudo-devices.
Ensure: $w_{\mathcal{G}}^{r}$ incrementally updated global model; $w_{\mathcal{T}}^{r}$ updated teacher model
1:  Init. $w_{\mathcal{G}}^{0}$ {start training with a pre-trained model}
2:  for each round $r=1,2,\dots$ do
3:     $\mathcal{S} \leftarrow$ (sample a subset of devices)
4:     for each device $k \in \mathcal{S} \cup \mathcal{C}$ in parallel do
5:        $w_{k}^{r} = w_{\mathcal{G}}^{r-1}$ {global model broadcast}
6:        if $k \in \mathcal{S}$ then
7:           $\mathcal{D}_{train} \leftarrow$ (draw from $\mathcal{D}_{ssl}$, filtered utterances on device $k$ transcribed by teacher model $w_{\mathcal{T}}^{r}$, including weak supervision information $z$ and audio augmentation)
8:        else if $k \in \mathcal{C}$ then
9:           $\mathcal{D}_{train} \leftarrow$ (draw from $\mathcal{D}_{ht}$ for ASR rehearsal training on cloud pseudo-devices)
10:       end if
11:       for batch $b_{i}$, $i \in \{1,\ldots,N\}$ from $\mathcal{D}_{train}$ do
12:          $w_{k}^{r} \leftarrow \texttt{optimizer}_{k}.\text{update}(\eta_{k}, \nabla\mathcal{L}(w_{k}^{r}; b_{i}))$
13:       end for
14:       $\Delta w_{k}^{r} = w_{k}^{r} - w_{\mathcal{G}}^{r-1}$ {transmit model delta to cloud}
15:    end for
16:    $w_{\mathcal{G}}^{r} \leftarrow \texttt{optimizer}.\text{update}\left(\eta, \frac{1}{|\mathcal{S} \cup \mathcal{C}|}\sum_{k \in \mathcal{S} \cup \mathcal{C}} \Delta w_{k}^{r}\right)$
17:    $w_{\mathcal{T}}^{r} \leftarrow \begin{cases} \delta w_{\mathcal{T}}^{r-1} + (1-\delta)\, w_{\mathcal{G}}^{r}, & r \equiv 0 \pmod{u} \\ w_{\mathcal{T}}^{r-1}, & \text{otherwise} \end{cases}$ {EMA update}
18: end for

Semi-supervised learning approaches typically employ a strong teacher model to machine-transcribe audio data, which enables learning in the absence of human-labeled supervised data. In compute-, communication-, and memory-constrained settings such as on-device federated learning, larger teacher models with higher resource requirements may not be feasible. In this work, we conform to the federated constraints and assume that the teacher model is of an equivalent configuration to the student model, can be stored and run on-device, and is used to process audio for machine labeling.

Algorithm 1 presents the details of the self-learning method. In each training round, we have unlabelled audio on the device, for which we obtain labels using the paired teacher model, filtered to exclude utterances with very low or very high confidence. Multiple local update steps may be taken on each device (similar to FedAvg [2]), or a single gradient update step may be taken (similar to FedSGD). The gradients are obtained using the unlabeled audio on-device, pairing an augmented form of the audio with the teacher label. The server update step uses the aggregated local model deltas as a pseudo-gradient for its update. Finally, at the end of each training round, based on an update frequency, the teacher model is updated using an exponential moving average (EMA [14]) of itself and the latest global student ASR model. This setup is illustrated in Fig. 1.
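A minimal sketch of one such training round follows, assuming PyTorch models and client callables that run the local steps (teacher-labelled, augmented audio on edge devices; historical transcribed data on cloud pseudo-devices). The helper names are ours, not the production implementation.

```python
import copy
import torch

@torch.no_grad()
def ema_update(teacher, student, delta=0.999):
    """EMA teacher update: w_T <- delta * w_T + (1 - delta) * w_G."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(delta).add_(p_s, alpha=1.0 - delta)

def federated_round(global_model, teacher, clients, server_opt, r,
                    ema_delta=0.999, ema_every=10):
    """One round of Algorithm 1 (sketch). Each `client` callable runs its
    N local update steps on a copy of the global model."""
    base = [p.detach().clone() for p in global_model.parameters()]
    deltas = [torch.zeros_like(p) for p in base]
    for client in clients:
        local = copy.deepcopy(global_model)
        client(local, teacher)                     # local steps on the copy
        for d, p_k, p_0 in zip(deltas, local.parameters(), base):
            d.add_((p_k.detach() - p_0) / len(clients))
    server_opt.zero_grad()
    for p, d in zip(global_model.parameters(), deltas):
        p.grad = -d            # averaged delta acts as a pseudo-gradient
    server_opt.step()
    if r % ema_every == 0:     # infrequent EMA teacher update
        ema_update(teacher, global_model, ema_delta)
```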

To help the model mitigate error feedback loops and catastrophic forgetting on older test sets, batches consisting of historical utterances with ground-truth transcriptions can be included alongside the self-learning updates that use unlabeled data. This process is termed rehearsal training. The rehearsal updates are performed on the cloud by treating cloud servers as pseudo-devices, and serve as a regularization term that prevents ASR performance from degrading.

2.3 Weak supervision

Weak supervision signals can be used to further improve the performance of the system by leveraging information beyond the unlabeled audio that self-learning relies on. This work exploits information weaker than the ground-truth ASR transcription, which can be recovered from user interactions with the conversational agent. For example, if a user stops, cancels, or repeats a request in the subsequent turn of a dialog, it indicates that the previous query was unsuccessfully processed by the device. We study updating ASR models with the help of such a feedback score, potentially indicating whether the user’s request was unsuccessful. Further, the correct natural language understanding (NLU) semantics, in the form of the correct slot values, may eventually be recovered, e.g., through an explicit re-invocation by the user. Hence, we also study leveraging weak feedback in the form of NLU slot labels. An example of weak supervision for an utterance is shown in Table 1.

In this work, we demonstrate the impact of weak supervision labels in two forms: (1) machine-generated NLU semantics, from an alternate spoken language understanding (SLU) system built from ASR\rightarrowNLU, as a proxy for semantics inferred from user session data; (2) synthetic user feedback scores, a proxy for real user corrections, available only for the hypothesis served to the user. This framework can accommodate many types of weak supervision information.

Table 1: Example of the weak supervision available for an utterance. Here, semantic cost (fraction of slots incorrect) is illustrated as the feedback signal.
Transcription:   play Halo by Beyonce in main speaker
ASR hypothesis:  play Hello by Beyond in main speaker
NLU semantics:   PlaySong; Artist: Beyonce; Song: Halo; Device: Main speaker
Semantic cost:   2/3

2.3.1 Weak Supervision: NLU semantics

Machine-generated NLU semantics from an alternative ASR and NLU model are used as a form of weak NLU feedback; e.g., prior work [16] has used NLU feedback generated by rewriting utterances. Treating the NLU semantics $z$, consisting of the slot types and values from this alternate system, as ground truth, we can compute a semantic cost metric $M(z,\mathbf{y}_{i})$ for an ASR hypothesis. The semantic cost is computed for a given hypothesis as the fraction of slots that have an error, where a slot is considered to have an error if the tokens within the slot are not all present in the hypothesis. For the purpose of experimentation, we also study the impact of using the alternate system's ASR transcript in addition to the NLU semantics. In this case, the cost $M$ can include the word error rate (WER) obtained by comparing $\mathbf{y}_{i}$ with the alternate transcript $z_{t}$. For ease of exposition, we consider $z$ to encapsulate both the semantics and the transcription $z_{t}$.
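A minimal sketch of this semantic cost, reproducing the Table 1 example under the stated slot-error definition (the function name and slot representation are illustrative):

```python
def semantic_cost(hypothesis: str, slots: dict) -> float:
    """Fraction of NLU slots with an error: a slot errs if its tokens
    are not all present in the hypothesis."""
    if not slots:
        return 0.0
    hyp_tokens = set(hypothesis.lower().split())
    errors = sum(
        any(tok not in hyp_tokens for tok in value.lower().split())
        for value in slots.values()
    )
    return errors / len(slots)

# Table 1 example: 2 of 3 slots ('Beyonce', 'Halo') are missing.
cost = semantic_cost(
    "play hello by beyond in main speaker",
    {"Artist": "Beyonce", "Song": "Halo", "Device": "main speaker"},
)  # -> 2/3
```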

To leverage feedback from these possibly erroneous NLU semantics, we train a model with weights $w$ where the self-learning loss is augmented (summed) with the following loss term from the weak NLU signal:

$$\mathcal{L}_{\textrm{weak}}(w,\mathbf{x},z) = \mathbb{E}_{\mathbf{y}\sim p_{w}(\mathbf{y}|\mathbf{x})}[M(\mathbf{y},z)] \approx \sum_{i}\hat{p}_{w}(\mathbf{y}_{i}|\mathbf{x})\,M(\mathbf{y}_{i},z) \qquad (1)$$
$$\implies \nabla_{w}\mathcal{L}_{\textrm{weak}}(w,\mathbf{x},z) \approx \sum_{i} M(\mathbf{y}_{i},z)\,\nabla_{w}\hat{p}_{w}(\mathbf{y}_{i}|\mathbf{x}),$$

where $\hat{p}_{w}(\mathbf{y}_{i}|\mathbf{x}) = p_{w}(\mathbf{y}_{i}|\mathbf{x}) / \sum_{j} p_{w}(\mathbf{y}_{j}|\mathbf{x})$ is the normalized probability of the hypothesis. By assuming in (1) that the probability mass is concentrated in the n-best hypotheses of the ASR model, the expectation can be approximated by considering only this subset of hypotheses [20]. We note that $p_{w}$ is a differentiable function of $w$, and hence a gradient $\nabla_{w}\mathcal{L}$ can be computed.
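As a sketch, the n-best approximation of Eq. (1) can be implemented as follows (PyTorch; the 4-best scores and costs are hypothetical):

```python
import torch

def weak_supervision_loss(nbest_log_probs, costs):
    """Sketch of Eq. (1): expected semantic cost over the n-best list.
    `nbest_log_probs` holds log p_w(y_i | x) for the n-best hypotheses
    (differentiable w.r.t. the model weights w); `costs` holds M(y_i, z).
    Softmax over log-probabilities yields the normalized p_hat_w."""
    p_hat = torch.softmax(nbest_log_probs, dim=-1)
    return (p_hat * costs).sum()   # ~ E_{y ~ p_w}[M(y, z)]

# Hypothetical 4-best list with semantic costs from a weak NLU signal:
nbest_log_probs = torch.tensor([-1.2, -2.0, -2.3, -3.1], requires_grad=True)
costs = torch.tensor([2 / 3, 0.0, 1 / 3, 1.0])
loss = weak_supervision_loss(nbest_log_probs, costs)
loss.backward()   # gradients flow through p_hat, as in Eq. (1)
```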

2.3.2 Weak Supervision: Feedback Scores

In Sec. 2.3.1, we assumed that we can obtain weak NLU semantics, and thus have feedback for any hypothesis $\mathbf{y}_{i}$. Here, we add the constraint that weak supervision is only available for the hypothesis served to the user. The formulation with this constraint, termed weak supervision based on feedback scores, more closely simulates real user feedback, where the user provides feedback only for the served recognition.

We study two forms of feedback scores: (1) the semantic cost detailed in Sec. 2.3.1, applied only to the served hypothesis, and (2) a binary feedback cost based on the sentence error with respect to the true transcription $z_{t}$, $M(\mathbf{y},z_{t})=\mathbb{1}(\mathbf{y}\neq z_{t})$ (a proxy for binary user corrections). To simulate an estimation error of the feedback from user interactions, we add a noise term to the feedback signal, i.e., $M^{\prime}(\mathbf{y},z)=M(\mathbf{y},z)+U$, with the random variable $U$ drawn from an arbitrary noise distribution. This helps capture asymmetry and non-uniformity in the feedback from user interactions.

The learning is performed with a policy-gradient setup. We use the n-best hypotheses to approximate the output lattice/space. A hypothesis (action) is selected from it by sampling based on the normalized n-best hypothesis probabilities. For the selected hypothesis, we use the feedback $M^{\prime}(\mathbf{y},z)$ described above as a reward for the policy-gradient method to update $w$, which in turn parameterizes $\hat{p}_{w}(\mathbf{y}_{i}|\mathbf{x})$. We use the REINFORCE [17, 20] trick in conjunction with the above to obtain gradients for updating $w$. Now,

$$\nabla_{w}\mathcal{L}_{\textrm{weak}}(w,\mathbf{x},z) = \mathbb{E}_{\mathbf{y}\sim p_{w}(\mathbf{y}|\mathbf{x})}[M(\mathbf{y},z)\,\nabla_{w}\log p_{w}(\mathbf{y}|\mathbf{x})]$$
$$\approx M^{\prime}(\mathbf{y},z)\,\nabla_{w}\log p_{w}(\mathbf{y}|\mathbf{x}), \quad \mathbf{y}\sim p_{w}(\cdot|\mathbf{x}),$$

where we take a sampling approximation of size 1 as an estimate of the expectation. With the above setup in place, this framework fits within Algorithm 1.
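A minimal PyTorch sketch of this sampled policy-gradient step (the feedback callable and numbers are illustrative):

```python
import torch

def reinforce_loss(nbest_log_probs, feedback):
    """Sketch of the policy-gradient step (Sec. 2.3.2): sample one served
    hypothesis from the normalized n-best distribution, score it with the
    (possibly noisy) feedback M', and weight its log-probability so that
    the gradient is M'(y, z) * grad_w log p_w(y | x)."""
    log_p_hat = torch.log_softmax(nbest_log_probs, dim=-1)
    with torch.no_grad():
        i = torch.multinomial(log_p_hat.exp(), num_samples=1).item()
    cost = feedback(i)          # scalar M'(y_i, z); no gradient through it
    return cost * log_p_hat[i]

# Hypothetical 3-best; binary cost 1(y != z_t) plus small feedback noise:
nbest_log_probs = torch.tensor([-0.9, -1.7, -2.5], requires_grad=True)
loss = reinforce_loss(nbest_log_probs, lambda i: float(i != 0) + 0.05)
loss.backward()
```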

3 Experiments

Data
Our federated continual training experiments are run from January to June 2021. We use an internal voice-assistant dataset with de-identified utterances totalling 4500 hours in this time period from 800K devices. We make only a single pass through this data, as one of the constraints is that persistent audio storage is not feasible.

We evaluate the models on in-house human-transcribed (HT) test sets. There is no speaker overlap between the train and evaluation datasets. General comprises a 37-hour test set from 2021 and older test sets from 2020. Delta comprises a 22-hour HT test set that records a change in frequency of words in 2021 over 2020. The transcriptions are filtered based on 1-grams, 2-grams, and 3-grams that are 5x more frequent in 2021 than in 2020. This test set captures changes in the data distribution, such as new use cases, and is crucial for measuring the impact of continual learning.

We also demonstrate results on public datasets. We use RNN-T models pretrained on the 960-hour Librispeech dataset [22] and finetuned using self-learning with weak supervision on the 56-hour SLURP dataset [23]. For SLURP, we evaluate on the test partition with 13K utterances.

Model
The RNN-T model used contains ~60M parameters, with a 5×1024 LSTM encoder, a 2×1024 LSTM prediction network, and a feed-forward joint network with tanh activation [24]. The input embeddings of the prediction network are 512-dimensional. We use a 2500 sub-word-piece tokenizer [25]. The audio features are 64-dimensional log-mel filter-bank energies computed on a 25 ms window with a 10 ms shift. SpecAugment [26] is applied to the audio features. Features from 3 consecutive 10 ms frames are stacked and sub-sampled, resulting in 192-dimensional features at a 30 ms frame rate, which are provided as input to the ASR model.
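The frame stacking and sub-sampling step can be sketched as follows (assuming a tensor of 10 ms log-mel frames; the helper name is ours):

```python
import torch

def stack_and_subsample(frames, stack=3):
    """Sketch of the front end: 64-dim log-mel frames at a 10 ms shift are
    stacked 3 at a time and sub-sampled, yielding 192-dim inputs at 30 ms."""
    T, D = frames.shape
    T = T - T % stack                 # drop trailing frames that don't fit
    return frames[:T].reshape(T // stack, D * stack)

frames = torch.randn(301, 64)         # ~3 s of 10 ms log-mel frames
inputs = stack_and_subsample(frames)  # -> shape (100, 192) at a 30 ms rate
```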

A 480K-hour dataset (of which 120K hours are human-transcribed and the rest machine-transcribed) is utilized for pre-training the baseline. Experiments using multiple losses weight the losses equally (no tuning). All results use FedSGD with 400 devices randomly chosen for each of 3000 training rounds, a batch size of 16, and a server-side Adam optimizer. For rehearsal training, 40 cloud pseudo-devices are additionally used with historic transcribed data.

Metric
The performance of these models on voice-assistant data is measured in terms of relative word error rate reduction (WERR) over the initial baseline model at the start of 2021. Positive WERR values represent improvements, while negative values indicate degradations. Absolute WER numbers are reported for the SLURP experiments.
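For clarity, WERR as used in the tables below can be computed as follows (a trivial sketch with hypothetical WER values):

```python
def werr(wer_baseline: float, wer_model: float) -> float:
    """Relative word error rate reduction (%); positive means improvement."""
    return 100.0 * (wer_baseline - wer_model) / wer_baseline

print(werr(10.0, 9.0))   # hypothetical WERs -> 10.0% relative improvement
```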

4 Results

Table 2: Performance of federated self-learning with weak supervision on the SLURP dataset, including examples of corrected utterances.

Setting | WER
Initial | 28.70
Oracle supervised finetuning | 16.95
Self-learning:
  Teacher not updated | 23.52
  Teacher updated with EMA | 18.95
    + weak supervision | 18.79

Truth: please help me turn on the robot vacuum cleaner
Initial: please tell me turn on the roblox i can clean
Self-learn: please tell me turn on the robot vacuum cleaner

Truth: look for this playback in audiobook and play for me
Initial: look for display light audiobook and play for me
Self-learn: look for this playback in audiobook and play for me

Truth: olly what else do i have on the list
Initial: what else do i have in the list
Self-learn: ollie what else do i have on the list
Table 3: Performance of federated self-learning with weak supervision on voice-assistant data. WERR numbers are relative to the WER of the initial model. Multiple forms of weak supervision are contrasted: ASR and NLU labels from an alternate SLU model, and NLU feedback scores for the served hypothesis.

Weak Supervision method | Teacher Update | General WERR | Delta WERR
- | - | -8.16 | -0.02
- | ✓ | -6.12 | 8.29
ASR | ✓ | -1.84 | 11.43
ASR + NLU | ✓ | -1.22 | 11.56
NLU feedback-score | ✓ | -1.64 | 12.06

Federated self-learning with weak supervision: Table 2 shows the performance of self-learning for a pretrained RNN-T model on the public SLURP dataset: self-learning improves performance by 19%, with additional gains from using weak supervision composed of NLU feedback scores. We note that the gains from weak supervision are limited, as SLURP has sparse annotations for transcript tokens, i.e., few slots per utterance. In the corrected examples shown, we see self-learning with weak supervision correcting deletion errors and even learning new words such as the keyword ‘olly’.

In Table 3, the performance of self-learning coupled with weak supervision is shown for continual learning with a single pass over the internal dataset. First, we observe that if we do not update the paired teacher model with EMA, performance on the new use case does not improve. If we only perform self-learning for ASR, there is an improvement of 8.3% on the new use case test set. Coupling this with ASR-based weak supervision (where each hypothesis gets a feedback score based on the WER computed against the alternate system's transcript), we see further improvement, which increases when the feedback includes the NLU component. We also see a similar improvement using only the NLU-based feedback score obtained for the served hypothesis, as opposed to obtaining a score for all possible hypotheses.

Noisy feedback: Table 4 shows the results of federated learning with only noisy feedback for a single served hypothesis from ASR. Here we consider noisy feedback of the form $M^{\prime}(\mathbf{y},z)=M(\mathbf{y},z)+(-1)^{M(\mathbf{y},z)}U^{\prime}$, where the random variable $U^{\prime}\sim p(U\,|\,U\in[0,1])$, $U\sim\mathcal{N}(0,\sigma^{2})$, is drawn from a normal distribution with variance $\sigma^{2}$ truncated to the range $[0,1]$. We then add different levels of noise to measure its impact. For a noisy version of a binary feedback score,

$$\mathbb{E}[M^{\prime}(\mathbf{y},z)] = \mathbb{E}[M(\mathbf{y},z)] + \mu\,\mathbb{E}[(-1)^{M(\mathbf{y},z)}] = (1-2\mu)\,\mathbb{E}[M(\mathbf{y},z)] + \mu$$
$$\implies \nabla_{w}\mathbb{E}[M^{\prime}(\mathbf{y},z)] = (1-2\mu)\,\nabla_{w}\mathbb{E}[M(\mathbf{y},z)],$$

where $\mu=\mathbb{E}[U^{\prime}]$. Thus, if the noise mean is less than 0.5, the gradient update with the noisy feedback is, in expectation, in the same direction as the gradient update with the true feedback. We demonstrate that even at a high noise level of $\sigma=0.4$ we are still able to improve the model significantly on the Delta dataset.
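A small simulation of this noise model (rejection sampling of the truncated normal; the function name and sample count are illustrative) confirms that $\mu$ stays below 0.5 even at $\sigma=0.4$:

```python
import torch

def noisy_feedback(m: int, sigma: float) -> float:
    """Sketch of M'(y, z) = M + (-1)^M * U', where U' is a N(0, sigma^2)
    sample truncated to [0, 1] (simple rejection sampling)."""
    while True:
        u = (torch.randn(()) * sigma).item()
        if 0.0 <= u <= 1.0:
            return m + (-1) ** m * u

# Empirically estimate mu = E[U']; the gradient direction is preserved
# as long as mu < 0.5 (see the derivation above):
mu = sum(noisy_feedback(0, 0.4) for _ in range(10_000)) / 10_000
print(f"estimated mu at sigma=0.4: {mu:.2f}")   # ~0.31, safely below 0.5
```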

Table 4: Performance of learning with only noisy feedback scores on voice-assistant data.

Setting | Delta WERR
binary feedback without noise | 14.45
binary feedback + noise (σ=0.1) | 9.05
binary feedback + noise (σ=0.2) | 7.41
binary feedback + noise (σ=0.4) | 4.40
Table 5: We study (i) the effect of rehearsal training in mitigating catastrophic forgetting (left) and (ii) the effect of hyperparameters (right) in self-learning on voice-assistant data.

Setting | Delta WERR | General (2020) WERR
Self-learning | 14.08 | -13.63
+ rehearsal training | 12.47 | -5.85

EMA rate δ, update frequency u | Delta WERR
0.999, 10 | 14.08
0.999, 100 | 10.38
0.999, 200 | 11.56
0.9999, 10 | 12.64
0.9999, 100 | 11.03
0.975, 1 | diverged

EMA hyperparameters and rehearsal training: In Table 5, we first see the impact of rehearsal training on mitigating catastrophic forgetting: we observe a reduced regression on the older 2020 test set at the expense of some performance on the new Delta test set. Delta test set results are not comparable across prior tables, as the amount of computation and degree of catastrophic forgetting differ. We also study the impact of the EMA hyperparameters: a higher δ places lower weight on new updates, and the update frequency u determines how often the teacher model is updated. Improved performance is seen for frequent updates with a lower EMA rate. We also observed training diverging when the teacher model is updated toward the student model after every step, suggesting that an error feedback loop takes place.

5 Conclusion

We focused on the federated continual learning problem for ASR, where an ASR model deployed on-device is updated while ensuring that (1) human ground-truth transcriptions are not available, (2) large device compute and memory are not required to run strong teacher models for labelling the audio, and (3) audio is not persisted or sent to the cloud. We demonstrated that using a paired teacher model to generate labels for the unlabelled audio, where the teacher is updated as an exponential moving average of the RNN-T model, can improve RNN-T performance by 10% on new use cases, with a larger improvement on the public SLURP dataset that is only 10% away from the fully supervised setting. Rehearsal training using historical datasets with ground-truth transcriptions mitigates catastrophic forgetting and error feedback loops. We made use of weak supervision signals, such as machine-generated NLU semantics or simulated noisy feedback scores from user interactions, in a policy-gradient approach that further improved the performance of self-learning.

Acknowledgments: We thank Gurpreet, Aaron, Buddha, Bach, Harish, Ehry and Shehzad for helpful discussions.

References

  • [1] M. Al-Rubaie and J. M. Chang, “Privacy-preserving machine learning: Threats and solutions,” IEEE Security & Privacy, vol. 17, no. 2, pp. 49–58, 2019.
  • [2] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Artificial Intelligence and Statistics.   PMLR, 2017, pp. 1273–1282.
  • [3] D. Guliani, F. Beaufays, and G. Motta, “Training speech recognition models with federated learning: A quality/cost framework,” in ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2021, pp. 3080–3084.
  • [4] A. Hard, K. Partridge, C. Nguyen, N. Subrahmanya, A. Shah, P. Zhu, I. L. Moreno, and R. Mathews, “Training keyword spotting models on non-iid data with federated learning,” arXiv preprint arXiv:2005.10406, 2020.
  • [5] F. Granqvist, M. Seigel, R. van Dalen, Á. Cahill, S. Shum, and M. Paulik, “Improving on-device speaker verification using federated learning with privacy,” arXiv preprint arXiv:2008.02651, 2020.
  • [6] A. Hard, K. Partridge, N. Chen, S. Augenstein, A. Shah, H. J. Park, A. Park, S. Ng, J. Nguyen, I. L. Moreno et al., “Production federated keyword spotting via distillation, filtering, and joint federated-centralized training,” arXiv preprint arXiv:2204.06322, 2022.
  • [7] Z. Huo, D. Hwang, K. C. Sim, S. Garg, A. Misra, N. Siddhartha, T. Strohman, and F. Beaufays, “Incremental layer-wise self-supervised learning for efficient unsupervised speech domain adaptation on device,” Proc. Interspeech 2022, pp. 4845–4849, 2022.
  • [8] J. Jia, J. Mahadeokar, W. Zheng, Y. Shangguan, O. Kalinli, and F. Seide, “Federated domain adaptation for asr with full self-supervision,” arXiv preprint arXiv:2203.15966, 2022.
  • [9] A. Baevski, W.-N. Hsu, Q. Xu, A. Babu, J. Gu, and M. Auli, “Data2vec: A general framework for self-supervised learning in speech, vision and language,” arXiv preprint arXiv:2202.03555, 2022.
  • [10] S. Chen, C. Wang, Z. Chen, Y. Wu, S. Liu, Z. Chen, J. Li, N. Kanda, T. Yoshioka, X. Xiao et al., “Wavlm: Large-scale self-supervised pre-training for full stack speech processing,” arXiv preprint arXiv:2110.13900, 2021.
  • [11] S. H. K. Parthasarathi and N. Strom, “Lessons from building acoustic models with a million hours of speech,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2019, pp. 6670–6674.
  • [12] Q. Xu, A. Baevski, T. Likhomanenko, P. Tomasello, A. Conneau, R. Collobert, G. Synnaeve, and M. Auli, “Self-training and pre-training are complementary for speech recognition,” in ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2021, pp. 3030–3034.
  • [13] Y. Chen, W. Wang, and C. Wang, “Semi-supervised asr by end-to-end self-training,” arXiv preprint arXiv:2001.09128, 2020.
  • [14] V. Manohar, T. Likhomanenko, Q. Xu, W.-N. Hsu, R. Collobert, Y. Saraf, G. Zweig, and A. Mohamed, “Kaizen: Continuously improving teacher using exponential moving average for semi-supervised speech recognition,” arXiv preprint arXiv:2106.07759, 2021.
  • [15] A. Graves, “Sequence transduction with recurrent neural networks,” arXiv preprint arXiv:1211.3711, 2012.
  • [16] P. Ponnusamy, A. R. Ghias, C. Guo, and R. Sarikaya, “Feedback-based self-learning in large-scale conversational ai agents,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 08, 2020, pp. 13180–13187.
  • [17] R. J. Williams, “Simple statistical gradient-following algorithms for connectionist reinforcement learning,” Machine learning, vol. 8, no. 3-4, pp. 229–256, 1992.
  • [18] K. Veselỳ, A. Ghoshal, L. Burget, and D. Povey, “Sequence-discriminative training of deep neural networks.” in Interspeech, vol. 2013, 2013, pp. 2345–2349.
  • [19] R. Prabhavalkar, T. N. Sainath, Y. Wu, P. Nguyen, Z. Chen, C.-C. Chiu, and A. Kannan, “Minimum word error rate training for attention-based sequence-to-sequence models,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2018, pp. 4839–4843.
  • [20] M. Rao, P. Dheram, G. Tiwari, A. Raju, J. Droppo, A. Rastrow, and A. Stolcke, “Do as i mean, not as i say: Sequence loss training for spoken language understanding,” in ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2021, pp. 7473–7477.
  • [21] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska et al., “Overcoming catastrophic forgetting in neural networks,” Proceedings of the national academy of sciences, vol. 114, no. 13, pp. 3521–3526, 2017.
  • [22] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, “Librispeech: an asr corpus based on public domain audio books,” in 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP).   IEEE, 2015, pp. 5206–5210.
  • [23] E. Bastianelli, A. Vanzo, P. Swietojanski, and V. Rieser, “Slurp: A spoken language understanding resource package,” arXiv preprint arXiv:2011.13205, 2020.
  • [24] A. Graves, A.-r. Mohamed, and G. Hinton, “Speech recognition with deep recurrent neural networks,” in 2013 IEEE international conference on acoustics, speech and signal processing.   IEEE, 2013, pp. 6645–6649.
  • [25] T. Kudo, “Subword regularization: Improving neural network translation models with multiple subword candidates,” in ACL, 2018.
  • [26] D. S. Park, W. Chan, Y. Zhang, C.-C. Chiu, B. Zoph, E. D. Cubuk, and Q. V. Le, “Specaugment: A simple data augmentation method for automatic speech recognition,” arXiv preprint arXiv:1904.08779, 2019.