A Deep-Bayesian Framework for Adaptive Speech Duration Modification
Abstract
We propose the first method to adaptively modify the duration of a given speech signal. Our approach uses a Bayesian framework to define a latent attention map that links frames of the input and target utterances. We train a masked convolutional encoder-decoder network to produce this attention map via a stochastic version of the mean absolute error loss function; our model also predicts the length of the target speech signal using the encoder embeddings. The predicted length determines the number of steps for the decoder operation. During inference, we generate the attention map as a proxy for the similarity matrix between the given input speech and an unknown target speech signal. Using this similarity matrix, we compute a warping path of alignment between the two signals. Our experiments demonstrate that, on both voice conversion and emotion conversion tasks, this adaptive framework produces results similar to dynamic time warping, which relies on a known target signal. We also show that our technique generates high-quality speech that is on par with state-of-the-art vocoders.
Index Terms:
Prosody, Encoder-Decoder, Attention, Adaptive Duration Modification, Dynamic Time Warping
I Introduction
Human speech is a rich and varied mode of communication that encompasses both linguistic/semantic information and the mood/intent of the speaker. The latter is primarily conveyed by prosodic features, such as pitch, energy, and speaking rate. Many applications require understanding and manipulating these prosodic features. Consider voice conversion systems: pitch and energy modifications are used to inject emotional cues into the speech or to change the overall speaking style [1, 2, 3, 4, 5]. Prosodic features are also used to evaluate the quality of human-machine dialog systems [6], and they play a significant role in speaker identification and recognition systems [7].
While there are many approaches for automated pitch and energy modification [8, 9, 10, 11, 12], comparatively little progress has been made in changing the speaking rate of an utterance. Yet, the speaking rate plays a crucial role in conveying emotion [13] and in diagnosing human speech pathologies [14]. The speaking rate is difficult to manipulate because, unlike pitch or energy, there is no explicit encoding of the signal duration. Rather, it is implicitly defined by a collection of frame-wise spectral representations (e.g., the short-time Fourier transform or Mel-frequency cepstral coefficients). As a result, existing duration modification algorithms are not adaptive; they either require considerable user supervision, or they are geared towards aligning two known speech signals.
Perhaps the earliest duration modification method is the time-domain pitch-synchronous overlap-add (TD-PSOLA) algorithm [15]. TD-PSOLA modifies the pitch and duration of a speech signal by replicating and interpolating between individual frames. However, the user must manually specify both the portion of speech to modify and the exact manner in which it should be altered. Hence, the method is neither automated nor adaptive. An alternative approach is dynamic time warping (DTW), which finds the optimal time alignment between two parallel speech utterances [16]. DTW constructs a pairwise similarity matrix between all frames of the two utterances and estimates a warping path between the starting and ending points of the utterances based on a Viterbi-like decoding of the similarity matrix. While simple, DTW requires both the source and target utterances to be known a priori. Hence, it cannot be used for on-the-fly modification of new signals.
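As a concrete reference point, the sketch below implements this classical DTW recipe in NumPy: pairwise frame distances are accumulated by dynamic programming and then backtracked to recover the warping path. It is a minimal illustration of the algorithm described above, not the implementation used in this work; the function name and the Euclidean frame distance are our own choices.

```python
import numpy as np

def dtw_align(source, target):
    """Classical DTW between two feature sequences of shape (T, d).
    Returns the accumulated-cost matrix and the warping path as a list
    of (source_frame, target_frame) index pairs."""
    Tx, Ty = len(source), len(target)
    # Pairwise Euclidean frame distances form the (dis)similarity matrix.
    cost = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=-1)

    # Accumulate costs with the standard step pattern (diagonal, up, left).
    acc = np.full((Tx, Ty), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(Tx):
        for j in range(Ty):
            if i == 0 and j == 0:
                continue
            best_prev = min(
                acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf,
                acc[i - 1, j] if i > 0 else np.inf,
                acc[i, j - 1] if j > 0 else np.inf,
            )
            acc[i, j] = cost[i, j] + best_prev

    # Backtrack from the end point to recover the optimal warping path.
    path, i, j = [(Tx - 1, Ty - 1)], Tx - 1, Ty - 1
    while i > 0 or j > 0:
        moves = []
        if i > 0 and j > 0:
            moves.append((acc[i - 1, j - 1], i - 1, j - 1))
        if i > 0:
            moves.append((acc[i - 1, j], i - 1, j))
        if j > 0:
            moves.append((acc[i, j - 1], i, j - 1))
        _, i, j = min(moves)
        path.append((i, j))
    return acc, path[::-1]
```

Crucially, both `source` and `target` must be available to this procedure, which is precisely the limitation our adaptive framework removes.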
Finally, recent advancements in deep learning have led to a new generation of neural vocoders, which disentangle the semantic content from the speaking style [17, 18, 19]. These vocoders can alter the speaking rate via the learned style embeddings. While these models represent seminal contributions to speech synthesis, the latent representations are learned in an unsupervised manner, which makes it difficult to control the output speaking voice. Another drawback of these methods is the computational overhead and data resources required to train the models and generate new speech [20].
In this paper, we introduce the first fully-automated adaptive speech duration modification scheme. Our approach combines the representation capabilities of deep neural networks with the structured simplicity of dynamic decoding. Namely, we model the alignment between a source and target utterance via a latent attention map; this map serves as the similarity matrix for backtracking. We train a masked convolutional encoder-decoder network to estimate these attention maps using a stochastic mean absolute error (MAE) formulation. We demonstrate our framework on a voice conversion task using the CMU-ARCTIC dataset [21] and on three multi-speaker emotion conversion tasks using the VESUS dataset [22]. Our experiments confirm that the proposed model can perform open-loop duration modification and produces high-quality speech. Finally, our approach differs fundamentally from the conventional DTW algorithm [16], which requires both the source and target utterances in order to warp one onto the other.
II Method

Fig. 1 illustrates our underlying generative process. Given an input utterance, we first estimate the length of the (unknown) target utterance and subsequently use it to construct a mask for the attention map. The mask restricts the domain of the attention vectors at each frame to mitigate distortion of the output speech. We use paired data to train an encoder-decoder network to generate the attention vectors. During testing, we first generate the attention map from the input utterance and then use it to produce the target speech.
II-A Loss Function
Let the input speech be represented by its filter-bank energies, i.e., a matrix whose rows index the filter-banks and whose columns index the temporal frames of the utterance. We represent the target speech in the same fashion. Notice that the number of frames in the target utterance may differ from that of the input.
Our generative process for the target speech is as follows:
(1)
where the first distribution governs the length of the target utterance, and the second governs the target features at each time frame. The parameters of these distributions are unknown; we estimate them implicitly via the weights of a deep neural network.
By treating the unknown parameters as functions of the input utterance, we obtain the following estimating equations for the target sequence length and the frame-wise filter-bank energies:
(2)
Both functions in Eq. (2) correspond to deep networks. At each step, an attention vector combines frame-wise features of the source utterance to generate the corresponding target frame. Notice that the residual, i.e., the portion of the target frame that cannot be explained by the input utterance, depends on the predictions at previous time steps. This autoregressive property allows the neural network to learn a time-varying component that can differentiate between speakers or emotions.
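As a small illustration of this construction, the snippet below forms one target frame as an attention-weighted combination of the source frames plus an autoregressive residual. The array shapes and function name are hypothetical; in our model both the attention vector and the residual are predicted by the network rather than given.

```python
import numpy as np

def generate_target_frame(source_feats, attention_t, residual_t):
    """Form one target frame as an attention-weighted combination of
    source frames plus an autoregressive residual term.

    source_feats : (d, Tx) filter-bank energies of the source utterance
    attention_t  : (Tx,)   attention weights for target step t (sum to 1)
    residual_t   : (d,)    residual predicted from previous target frames
    """
    return source_feats @ attention_t + residual_t
```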
During training, we use paired data to maximize the likelihood of the target speech signal with respect to the neural network weights. This likelihood can be written as:
(3)
where the second term of Eq. (3) can be expanded as follows:
(4)
The attention mask in Eq. (4) is introduced for convenience; it is a deterministic function of the source speech length and the estimated target length.
We use a variational free energy formulation [23] to derive an upper bound on the negative data log-likelihood (see the supplemental materials for the complete derivation). This bound can be translated into the following neural network loss function:
(5)
Here, the weighting terms are model hyperparameters that implicitly contain the variances of the Laplace distributions in Eq. (1). The attention distribution is a variational distribution, which is approximated by the fully convolutional neural network in Fig. 3.


II-B Masking
The mask is used to constrain the scope of the attention mechanism to be similar in time-scale to the input. This procedure is important for two reasons. From a speech quality perspective, large swings in speaking rate may generate unintelligible speech. From an estimation perspective, the utterances contain hundreds (sometimes thousands) of frames, and it is difficult to robustly train a deep network to generate such long attention vectors from smaller datasets.
We use masks derived from the Itakura parallelogram [24], as illustrated in Fig. 2. The Itakura parallelogram is commonly used to speed up DTW when the speaking rates of the source and target utterances are expected to be similar [24]. The slope of the parallelogram specifies the minimum and maximum speaking rates that the reconstructed utterance is allowed to possess relative to the input speech.
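The sketch below shows one way such a mask can be constructed. It is an illustrative NumPy construction that assumes the slope is rescaled by the ratio of the target and source lengths; it is not taken verbatim from our implementation.

```python
import numpy as np

def itakura_mask(Tx, Ty, slope=2.0):
    """Boolean mask of allowed (source, target) frame pairs inside an
    Itakura parallelogram with the given maximum slope.

    A cell (i, j) is kept when it lies inside both the forward cone
    anchored at (0, 0) and the backward cone anchored at (Tx-1, Ty-1).
    """
    scale = Ty / Tx                       # rescale slopes for unequal lengths
    mask = np.zeros((Tx, Ty), dtype=bool)
    for i in range(Tx):
        for j in range(Ty):
            in_fwd = (i / slope) * scale <= j <= (i * slope) * scale
            in_bwd = ((Tx - 1 - i) / slope) * scale <= (Ty - 1 - j) <= ((Tx - 1 - i) * slope) * scale
            mask[i, j] = in_fwd and in_bwd
    return mask
```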
II-C Neural Network Architecture
We adapt the neural network architecture from [25] by adding skip connections to the last layer and changing the configuration of the attention module. Fig. 3 shows the encoder, decoder and the new attention module of the convolutional neural network. The encoder is responsible for generating feature embeddings for the decoder and for predicting the relative length of target speech. The sample operation in Fig. 3 is responsible for generating a sample from the attention distribution required for reconstruction and backpropagation.

We train our model using the Adam optimizer [26] with a fixed learning rate. The input is an 80-dimensional vector of Mel-filterbank energies, which a projection layer expands to a higher-dimensional embedding. Both the encoder and decoder consist of stacked convolutional layers, each followed by a gated linear unit. We use data augmentation to stabilize training; specifically, we reverse the input-output sequences and randomly extract intervals from the full utterance. Our full model training procedure is described in the supplementary materials. The source code can be downloaded from: https://engineering.jhu.edu/nsa/links/.
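To convey the flavor of one encoder block, the following PyTorch sketch stacks 1-D convolutions, each followed by a gated linear unit, in the spirit of [25]. The hidden size, kernel width, and layer count shown here are placeholders rather than the values used in our experiments.

```python
import torch
import torch.nn as nn

class ConvGLUEncoder(nn.Module):
    """Stack of 1-D convolutions, each followed by a gated linear unit,
    applied to 80-dimensional Mel-filterbank frames.  Hidden size, kernel
    width, and layer count are illustrative placeholders."""

    def __init__(self, n_mels=80, hidden=256, n_layers=4, kernel=5):
        super().__init__()
        self.project = nn.Linear(n_mels, hidden)          # input projection
        self.convs = nn.ModuleList(
            # Each conv outputs 2*hidden channels so that GLU halves it back.
            nn.Conv1d(hidden, 2 * hidden, kernel, padding=kernel // 2)
            for _ in range(n_layers)
        )

    def forward(self, mels):                              # mels: (B, T, 80)
        x = self.project(mels).transpose(1, 2)            # (B, hidden, T)
        for conv in self.convs:
            x = nn.functional.glu(conv(x), dim=1) + x     # GLU + residual
        return x.transpose(1, 2)                          # (B, T, hidden)
```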
II-D DTW Back-Tracking
Our final step is to use the attention map produced by the decoder as a proxy for the DTW similarity matrix between the source and target speech frames. Effectively, we rely on the robust dynamic programming operation to obtain an alignment path within the mask boundary, rather than on the noisy spectral reconstruction (see Algorithm 1). To avoid skipping phonemes, the path is constrained to take at most one horizontal or vertical step consecutively while backtracking. We finally use this alignment as a lookup table to synthesize the target speech from the input via the WORLD vocoder [27].
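A minimal sketch of this constrained backtracking is given below. It assumes the attention-derived similarity map scores higher for better matches, forbids cells outside the mask, and disallows two consecutive horizontal or two consecutive vertical moves; the bookkeeping is illustrative and does not reproduce Algorithm 1 line for line.

```python
import numpy as np

def constrained_backtrack(similarity, mask):
    """Backtrack a warping path through an attention-derived similarity
    matrix, restricted to the masked region and forbidding two horizontal
    or two vertical moves in a row (so no phoneme is skipped)."""
    Tx, Ty = similarity.shape
    sim = np.where(mask, similarity, -np.inf)   # forbid cells outside the mask
    i, j = Tx - 1, Ty - 1
    path, last_move = [(i, j)], "diag"
    while i > 0 or j > 0:
        candidates = []
        if i > 0 and j > 0:
            candidates.append((sim[i - 1, j - 1], "diag", i - 1, j - 1))
            if last_move != "vert":             # no two vertical moves in a row
                candidates.append((sim[i - 1, j], "vert", i - 1, j))
            if last_move != "horz":             # no two horizontal moves in a row
                candidates.append((sim[i, j - 1], "horz", i, j - 1))
        elif i > 0:                             # only vertical moves remain
            candidates.append((sim[i - 1, j], "vert", i - 1, j))
        else:                                   # only horizontal moves remain
            candidates.append((sim[i, j - 1], "horz", i, j - 1))
        _, last_move, i, j = max(candidates)    # greedily follow the best score
        path.append((i, j))
    return path[::-1]
```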
III Experimental Results
We evaluate our model on two multi-speaker datasets: CMU-ARCTIC [21] and VESUS [22]. We query three properties of our model on four tasks, as described below.
III-A Data and Voice Morphing Tasks
CMU-ARCTIC has 4 American English speakers (two male, two female), who are paired according to gender for voice conversion. We train our duration modification framework on a subset of utterances from the database and use the remaining utterances to test its open-loop modification properties.
VESUS is an emotional speech corpus containing phrases read by multiple speakers in the following emotion classes: neutral, angry, happy, and sad. VESUS also contains crowd-sourced emotional annotations. Here, we primarily use those utterances that are correctly annotated by at least half of the listeners.
We train three duration models corresponding to the three neutral-to-emotion pairs. This results in the following splits:
- Neutral to Angry Conversion: 2385 utterances for training, 72 for validation, and 61 for testing.
- Neutral to Happy Conversion: 2431 utterances for training, 43 for validation, and 43 for testing.
- Neutral to Sad Conversion: 2371 utterances for training, 75 for validation, and 63 for testing.
Given the smaller sample size and shorter utterances in VESUS, we fine-tune the model trained on CMU-ARCTIC for each emotion conversion task in lieu of training the networks from scratch.


III-B Length Prediction
As seen in Fig. 3, we use the encoder embeddings to predict the length of the target utterance as a ratio of the source utterance length. Fig. 4 shows the error in predicting this ratio, measured in ms/sec. Notice that our framework mispredicts the utterance lengths by only a small margin on both CMU-ARCTIC and VESUS. Duration prediction is particularly challenging on VESUS due to the marked differences between neutral and emotional utterances. However, our framework performs well even in this challenging scenario, likely due to our fusion of deep representations and Bayesian regularization.
III-C Attention Alignment
Next, we compare the alignment between source and target speech frames produced by our method against that of the original DTW algorithm. Recall that DTW requires access to the target speech utterance, whereas our approach does not. To compare the warping paths, we code the horizontal, diagonal, and vertical moves of the backtracking procedure into three classes. We then compute the edit distance between the DTW alignment and the attention-map-based alignment. Fig. 5 illustrates the match ratio normalized by the average length. As seen in Fig. 5, the match ratio suggests that our approach captures the general characteristics of an unseen target utterance. To our knowledge, this is the first demonstration of an adaptive duration modification framework.
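The comparison can be summarized by the sketch below, which encodes each warping path as a sequence of move classes and converts the edit distance between two such sequences into a match ratio normalized by the average sequence length. Variable names and the exact normalization are our own reading of the procedure described above.

```python
def path_to_moves(path):
    """Encode a warping path (list of (i, j) pairs) as move classes:
    'D' diagonal, 'V' vertical (source advances), 'H' horizontal."""
    moves = []
    for (i0, j0), (i1, j1) in zip(path, path[1:]):
        moves.append("D" if (i1 > i0 and j1 > j0) else ("V" if i1 > i0 else "H"))
    return moves

def match_ratio(moves_a, moves_b):
    """Edit distance between two move sequences, converted to a match
    ratio normalized by the average sequence length."""
    la, lb = len(moves_a), len(moves_b)
    d = [[0] * (lb + 1) for _ in range(la + 1)]
    for i in range(la + 1):
        d[i][0] = i
    for j in range(lb + 1):
        d[0][j] = j
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            cost = 0 if moves_a[i - 1] == moves_b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    avg_len = 0.5 * (la + lb)
    return 1.0 - d[la][lb] / avg_len
```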
Fig. 7 shows the effect of modifying the slope of the Itakura parallelogram and the horizontal/vertical movement constraint during DTW. As expected, relaxing the slope constraint and increasing the number of horizontal/vertical moves provide more flexibility in adjusting the speaking rate of generated speech. However, this flexibility can lead to missing or distorted phonemes, suggesting a trade-off between changing the speaking rhythm and preserving naturalness. Our framework allows the user to tune these knobs for their own application.

III-D Reconstruction Quality
Finally, we crowd-source the mean opinion score (MOS) for the re-synthesized speech in the test set using Amazon Mechanical Turk (AMT). As seen in Fig. 6, the average MOS of our framework across the four tasks is on par with state-of-the-art neural vocoders trained on hundreds of hours of speech. We note that the CMU-ARCTIC task has the lowest MOS, perhaps due to its longer and more complex utterances. Interestingly, the MOS is unaffected by errors in length prediction, as evidenced by the VESUS neutral-to-angry conversion task. This suggests that our approach of combining the neural network attention weights with a structured DTW algorithm provides robustness to both the speech characteristics and estimation errors.
IV Conclusions
We have presented a novel deep-Bayesian framework for adaptive speech duration modification. Our model uses a convolutional encoder-decoder architecture to estimate attention maps that associate frames of the input speech with frames of the target. The attention maps are modeled as latent variables, which leads to a stochastic formulation of the MAE loss for model training. During testing, the attention map is used directly to approximate the similarity matrix for a DTW-style backtracking procedure. We evaluated our framework on one voice conversion task and three separate emotion conversion tasks. Overall, our framework produces duration modifications similar to vanilla DTW but without requiring access to the target utterance. Further, we show that the re-synthesized speech has similar quality to most state-of-the-art neural vocoders.
References
- [1] J. A. Russell, J.-A. Bachorowski, and J.-M. Fernandez-Dols, “Facial and vocal expressions of emotion,” Annual Review of Psychology, vol. 54, pp. 329–349, 11 2003.
- [2] D. Schacter, D. T. Gilbert, and D. M. Wegner, Psychology (2nd Edition). Worth Publishers, 2011.
- [3] R. Shankar, H.-W. Hsieh, N. Charon, and A. Venkataraman, “Automated Emotion Morphing in Speech Based on Diffeomorphic Curve Registration and Highway Networks,” in Proc. Interspeech 2019, 2019, pp. 4499–4503.
- [4] R. Shankar, J. Sager, and A. Venkataraman, “A Multi-Speaker Emotion Morphing Model Using Highway Networks and Maximum Likelihood Objective,” in Proc. Interspeech 2019, 2019, pp. 2848–2852.
- [5] R. Valle, J. Li, R. Prenger, and B. Catanzaro, “Mellotron: Multispeaker expressive voice synthesis by conditioning on rhythm, pitch and global style tokens,” 2019.
- [6] M. Swerts and E. Krahmer, “On the use of prosody for on-line evaluation of spoken dialogue systems,” 04 2000.
- [7] S. J. Park, C. Sigouin, J. Kreiman, P. Keating, J. Guo, G. Yeung, F.-Y. Kuo, and A. Alwan, “Speaker identity and voice quality: Modeling human responses and automatic speaker recognition,” in Interspeech 2016, 2016, pp. 1044–1048.
- [8] T. Toda, A. W. Black, and K. Tokuda, “Voice conversion based on maximum-likelihood estimation of spectral parameter trajectory,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 8, pp. 2222–2235, Nov 2007.
- [9] R. Aihara, R. Takashima, T. Takiguchi, and Y. Ariki, “GMM-based emotional voice conversion using spectrum and prosody features,” American Journal of Signal Processing, vol. 2, pp. 134–138, 12 2012.
- [10] T. Kaneko and H. Kameoka, “Parallel-data-free voice conversion using cycle-consistent adversarial networks,” CoRR, vol. abs/1711.11293, 2017.
- [11] R. Shankar, J. Sager, and A. Venkataraman, “Non-Parallel Emotion Conversion Using a Deep-Generative Hybrid Network and an Adversarial Pair Discriminator,” in Proc. Interspeech 2020, 2020, pp. 3396–3400.
- [12] R. Shankar, H.-W. Hsieh, N. Charon, and A. Venkataraman, “Multi-Speaker Emotion Conversion via Latent Variable Regularization and a Chained Encoder-Decoder-Predictor Network,” in Proc. Interspeech 2020, 2020, pp. 3391–3395.
- [13] J. Schmidt, E. Janse, and O. Scharenborg, “Perception of emotion in conversational speech by younger and older listeners,” Frontiers in Psychology, vol. 7, p. 781, 2016.
- [14] S. P. Bayerl, F. Hönig, J. Reister, and K. Riedhammer, “Towards automated assessment of stuttering and stuttering therapy,” 2020.
- [15] F. Charpentier and M. Stella, “Diphone synthesis using an overlap-add technique for speech waveforms concatenation,” ICASSP ’86. IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 11, pp. 2015–2018, 1986.
- [16] Dynamic Time Warping (DTW). Dordrecht: Springer Netherlands, 2008, pp. 570–570.
- [17] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. W. Senior, and K. Kavukcuoglu, “WaveNet: A generative model for raw audio,” CoRR, vol. abs/1609.03499, 2016.
- [18] J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, R. J. Skerry-Ryan, R. A. Saurous, Y. Agiomyrgiannakis, and Y. Wu, “Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions,” CoRR, vol. abs/1712.05884, 2017.
- [19] Y. Wang, R. J. Skerry-Ryan, Y. Xiao, D. Stanton, J. Shor, E. Battenberg, R. Clark, and R. A. Saurous, “Uncovering latent style factors for expressive speech synthesis,” CoRR, vol. abs/1711.00520, 2017.
- [20] Y. Yasuda, X. Wang, and J. Yamagishi, “Investigation of learning abilities on linguistic features in sequence-to-sequence text-to-speech synthesis,” 2020.
- [21] J. Kominek and A. W. Black, “The CMU Arctic speech databases,” in SSW5, 2004.
- [22] J. Sager, R. Shankar, J. Reinhold, and A. Venkataraman, “VESUS: A Crowd-Annotated Database to Study Emotion Production and Perception in Spoken English,” in Proc. Interspeech 2019, 2019, pp. 316–320.
- [23] M. J. Beal, “Variational algorithms for approximate Bayesian inference,” Ph.D. dissertation, UCL (University College London), 2003.
- [24] F. Itakura, “Minimum prediction residual principle applied to speech recognition,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 23, no. 1, pp. 67–72, 1975.
- [25] J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin, “Convolutional sequence to sequence learning,” 2017.
- [26] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” CoRR, vol. abs/1412.6980, 2015.
- [27] M. Morise, F. Yokomori, and K. Ozawa, “WORLD: A vocoder-based high-quality speech synthesis system for real-time applications,” IEICE Transactions on Information and Systems, vol. E99.D, pp. 1877–1884, 07 2016.
Supplemental Materials:
A Deep-Bayesian Framework for Adaptive Speech Duration Modification
IV-A Loss derivation
We use a convolutional neural network to predict the target utterance length and the target speech frames via the following expression:
(6)
We maximize the log-likelihood of the observed data to estimate the weights of the neural network. This data likelihood can be written as:
(7)
By expanding the second term of Eq. (7) and using the conditional independence implied by the graphical model, we have:
(8)
The attention mask is deterministically constructed from the source speech length and the estimated target length.
In this work, we encode the attention as a one-hot vector across the input frames of the source speech; therefore, it follows a multinomial distribution. For simplicity, we model the attention as conditionally independent of the utterance length given the mask and the input. Taking the logarithm of Eq. (7) and combining it with Eq. (8) yields:
(9)
The distribution above is an approximating distribution for the attention vectors, implemented by a convolutional network. The first inequality uses the convexity of the negative logarithm, and the second inequality follows from the non-negativity of entropy. Notice that we have implicitly assumed that the attention prior is uniform over the masked region. This is a reasonable assumption given that the masking process reduces the attention domain to a small region. However, the approximating distribution is not penalized for deviating from this uniform prior during training. This flexibility allows the network to learn realistic attention vectors during autoregressive decoding. Eq. (9) can easily be translated into a neural network loss function, which we minimize with respect to the network weights:
(10)
The weighting terms are model hyperparameters that adjust the trade-off between the two objectives and implicitly contain the variances of the Laplace distributions introduced in the main text. Notice that the loss in Eq. (10) computes an expectation over the attention maps. We use a Monte-Carlo estimate obtained by sampling from the attention map at each time step; the training procedure is therefore stochastic due to this random sampling. In the beginning of the training procedure, we mix this stochastic version with the maximum a posteriori (MAP) estimate of the attention vector with a probability of 0.1.
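A simplified sketch of this stochastic training step is given below. It assumes one-hot attention (so a sampled attention vector simply selects a source frame) and shows only the MAE reconstruction term; the tensor shapes, names, and mixing probability are placeholder assumptions rather than our exact implementation.

```python
import torch

def stochastic_mae_step(attn_logits, source, target, mask, p_sample):
    """One illustrative training step for the stochastic MAE loss.

    attn_logits : (Ty, Tx) unnormalized attention scores per target frame
    source      : (Tx, d)  source filter-bank energies
    target      : (Ty, d)  target filter-bank energies
    mask        : (Ty, Tx) boolean Itakura mask
    p_sample    : probability of using a sampled (stochastic) attention
                  instead of the MAP estimate; scheduled during training.
    """
    logits = attn_logits.masked_fill(~mask, float("-inf"))
    probs = torch.softmax(logits, dim=-1)                       # attention distribution

    if torch.rand(1).item() < p_sample:
        idx = torch.distributions.Categorical(probs).sample()   # Monte-Carlo sample
    else:
        idx = probs.argmax(dim=-1)                               # MAP attention

    predicted = source[idx]                                      # selected source frames, (Ty, d)
    return torch.mean(torch.abs(predicted - target))             # MAE reconstruction term
```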
IV-B Training Algorithm
We start with a small threshold in line 8 (i.e., a low contribution of the stochastic loss) to prevent the model from diverging in sub-optimal directions; the MAP estimate helps in this regard. Once the number of training epochs exceeds a fixed value, we increase the threshold to place more emphasis on the stochastic loss. Empirically, we found this to be extremely helpful in generating monotonic attention. We fix the slope of the attention mask in line 5 based on the relative differences in length observed in the training datasets.
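A minimal sketch of this schedule is shown below; the warm-up length and the two probability values are placeholders chosen for illustration, not the values used in our experiments.

```python
def stochastic_loss_prob(epoch, warmup_epochs=10, low=0.1, high=0.9):
    """Probability of using the sampled (stochastic) attention rather than
    the MAP estimate: kept small during the warm-up epochs and increased
    afterwards to place more emphasis on the stochastic loss."""
    return low if epoch < warmup_epochs else high
```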