
Unifying Cosine and PLDA Back-ends for Speaker Verification

Abstract

State-of-the-art speaker verification (SV) systems use a back-end model to score the similarity of speaker embeddings extracted from a neural network model. The commonly used back-end models are the cosine scoring and the probabilistic linear discriminant analysis (PLDA) scoring. With the recently developed neural embeddings, the theoretically more appealing PLDA approach is found to have no advantage over, or even be inferior to, the simple cosine scoring in terms of SV system performance. This paper investigates the relation between the two scoring approaches, aiming to explain the above counter-intuitive observation. It is shown that the cosine scoring is essentially a special case of PLDA scoring; in other words, by properly setting the parameters of PLDA, the two back-ends become equivalent. As a consequence, the cosine scoring not only inherits the basic assumptions of PLDA but also introduces additional assumptions on the properties of the input embeddings. Experiments show that the dimensional independence assumption required by the cosine scoring contributes most to the performance gap between the two methods under the domain-matched condition. When there is severe domain mismatch and the dimensional independence assumption does not hold, PLDA performs better than the cosine for domain adaptation.

Index Terms: speaker verification, cosine, PLDA, dimensional independence

1 Introduction

Speaker verification (SV) is the task of verifying the identity of a person from the characteristics of his or her voice. It has been widely studied for decades with significant performance advancement. State-of-the-art SV systems are predominantly embedding based, comprising a front-end embedding extractor and a back-end scoring model. The front-end module transforms input speech into a compact embedding representation of speaker-related acoustic characteristics. The back-end model computes the similarity of two input speaker embeddings and determines whether they are from the same person.

There are two commonly used back-end scoring methods. One is the cosine scoring, which assumes the input embeddings are angularly discriminative. The SV score is defined as the cosine similarity of two embeddings $x_1$ and $x_2$, which are mean-subtracted and length-normalized [1], i.e.,

x_i \leftarrow \frac{x_i - \mu}{\|x_i - \mu\|_2}, \text{ for } i = 1, 2    (1)

S_{\text{cos}}(x_1, x_2) = x_1^T x_2    (2)
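
For concreteness, the cosine back-end amounts to only a few lines of code. Below is a minimal NumPy sketch, assuming the global mean $\mu$ has been estimated offline from the training embeddings; the function name and arguments are illustrative, not taken from any specific toolkit.

```python
import numpy as np

def cosine_score(x1, x2, mu):
    """Cosine back-end (Eqs. 1-2): mean-subtract, length-normalize, dot product."""
    x1 = (x1 - mu) / np.linalg.norm(x1 - mu)
    x2 = (x2 - mu) / np.linalg.norm(x2 - mu)
    return float(np.dot(x1, x2))
```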

The other back-end scoring method is based on probabilistic linear discriminant analysis (PLDA) [2]. It assumes that the embeddings (also mean-subtracted and length-normalized) are, in general, Gaussian distributed.

It has been noted that the standard PLDA back-end performs significantly better than the cosine back-end on conventional i-vector embeddings [3]. Unfortunately, with the powerful neural speaker embeddings that are widely used nowadays [4], the superiority of PLDA vanishes and even turns into inferiority. This phenomenon has been evident in our experimental studies, especially when the front-end is trained with the additive angular margin softmax loss [5, 6].

The observation that PLDA is not as good as the cosine similarity goes against the common sense of back-end model design. Compared to the cosine, PLDA has more learnable parameters and incorporates additional speaker labels for training. Consequently, PLDA is generally considered to be more effective in discriminating speaker representations. This contradiction between experimental observation and theoretical expectation deserves thoughtful investigation of PLDA. In [7, 8, 9], Cai et al. argued that the problem arises from the neural speaker embeddings. It is noted that embeddings extracted from neural networks tend to be non-Gaussian for individual speakers, and the distributions across different speakers are non-homogeneous. These irregular distributions cause performance degradation of verification systems with the PLDA back-end. In relation to this perspective, a series of regularization approaches have been proposed to force the neural embeddings to be homogeneously Gaussian distributed, e.g., the Gaussian-constrained loss [7], the variational auto-encoder [8] and discriminative normalization flows [9, 10].

In this paper, we present and substantiate a point of view that differs from previous research. We argue that the suspected irregular distribution of speaker embeddings does not necessarily contribute to the inferiority of PLDA versus the cosine. Our view is based on the evidence that the cosine can be regarded as a special case of PLDA. This is indeed true, yet we have not found any prior work mentioning it; existing studies have treated the PLDA and cosine scoring methods separately. We provide a short proof to unify them. It is noted that the cosine scoring, as a special case of PLDA, also assumes speaker embeddings to be homogeneously Gaussian distributed. Therefore, if the neural speaker embeddings were distributed as irregularly as previously hypothesized, both back-ends should exhibit performance degradation.

By unifying the cosine and the PLDA back-ends, it can be shown that the cosine scoring puts stricter assumptions on the embeddings than PLDA. Details of these assumptions are explained in Section 3. Among them, the dimensional independence assumption is found to play the key role in explaining the performance gap between the two back-ends. This is evidenced by incorporating the dimensional independence assumption into the training of PLDA, leading to the diagonal PLDA (DPLDA). This variation of PLDA shows a significant performance improvement under the domain-matched condition. However, when severe domain mismatch exists and back-end adaptation is needed, PLDA performs better than both the cosine and DPLDA, because the dimensional independence assumption no longer holds. Analysis of the between-/within-class covariances of speaker embeddings supports these statements.

2 Review of PLDA

Theoretically, PLDA is a probabilistic extension of classical linear discriminant analysis (LDA) [11]. It places a Gaussian prior on the class centroids of LDA. Among the variants of PLDA, the two-covariance PLDA [12] has been commonly used in speaker verification systems. A straightforward way to explain the two-covariance PLDA is through its probabilistic graphical model [13].

2.1 Modeling

Figure 1: The probabilistic graphical model of two-covariance PLDA

Consider $N$ speech utterances coming from $M$ speakers, where the $m$-th speaker is associated with $n_m$ utterances. With a front-end embedding extractor, each utterance can be represented by an embedding of $D$ dimensions. The embedding of the $n$-th utterance from the $m$-th speaker is denoted as $x_{m,n}$. Let $\mathcal{X}=\{x_{m,n}\}_{1,1}^{M,n_m}$ represent these per-utterance embeddings. Additionally, PLDA supposes the existence of per-speaker embeddings $\mathcal{Y}=\{y_m\}_{m=1}^{M}$. They are referred to as latent speaker identity variables in [14].

With the graphical model shown in Fig. 1, these embeddings are generated as follows:

  • Randomly draw the per-speaker embedding $y_m \sim \mathcal{N}(y_m; \mu, B^{-1})$, for $m = 1, \cdots, M$;

  • Randomly draw the per-utterance embedding $x_{m,n} \sim \mathcal{N}(x_{m,n}; y_m, W^{-1})$, for $n = 1, \cdots, n_m$;

where $\theta = \{\mu, B, W\}$ denotes the model parameters of PLDA. Note that $B$ and $W$ are precision matrices. The joint distribution $p_\theta(\mathcal{X}, \mathcal{Y})$ can be derived as,

p_\theta(\mathcal{X}, \mathcal{Y}) \propto \exp\Bigl(-\frac{1}{2}\sum_{m=1}^{M}\Bigl[(y_m-\mu)^T B (y_m-\mu) + \sum_{n=1}^{n_m}(x_{m,n}-y_m)^T W (x_{m,n}-y_m)\Bigr]\Bigr)    (3)

2.2 Training

Estimation of the PLDA model parameters can be done with the iterative E-M algorithm described in Algorithm 1. The algorithm requires initialization of the model parameters. In Kaldi [15], the initialization strategy is to set $B = W = I$ and $\mu = 0$.

Algorithm 1 E-M training of two-covariance PLDA
  Input: per-utterance embeddings $\mathcal{X}=\{x_{m,n}\}_{1,1}^{M,n_m}$
  Initialization: $B = W = I$, $\mu = 0$
  repeat
     (E-step): Infer the latent variable $y_m|\mathcal{X}$
         $L_m = B + n_m W$
         $y_m|\mathcal{X} \sim \mathcal{N}(L_m^{-1}(B\mu + W\sum_{n=1}^{n_m} x_{m,n}),\ L_m^{-1})$
     (M-step): Update $\theta$ by $\max_\theta \mathbb{E}_{\mathcal{Y}} \log p_\theta(\mathcal{X}, \mathcal{Y})$
         $\mu = \frac{1}{M}\sum_m \mathbb{E}[y_m|\mathcal{X}]$
         $B^{-1} = \frac{1}{M}\sum_m \mathbb{E}[y_m y_m^T|\mathcal{X}] - \mu\mu^T$
         $W^{-1} = \frac{1}{N}\sum_m \sum_n \mathbb{E}[(y_m - x_{m,n})(y_m - x_{m,n})^T|\mathcal{X}]$
  until convergence
  Return $B$, $W$, $\mu$
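
As a concrete illustration of Algorithm 1, the following NumPy sketch implements the E-M updates of the two-covariance PLDA on small data. It assumes the embeddings are already grouped by speaker and omits the numerical safeguards found in toolkits such as Kaldi; it is a reference sketch, not the Kaldi implementation.

```python
import numpy as np

def train_plda(X_by_spk, n_iter=10):
    """E-M training of two-covariance PLDA (Algorithm 1).
    X_by_spk: list of (n_m, D) arrays, one per speaker.
    Returns the precision matrices B, W and the global mean mu."""
    D = X_by_spk[0].shape[1]
    M = len(X_by_spk)
    N = sum(X.shape[0] for X in X_by_spk)
    B, W, mu = np.eye(D), np.eye(D), np.zeros(D)   # initialization: B = W = I, mu = 0

    for _ in range(n_iter):
        # E-step: posterior of y_m given the data and current parameters
        Ey, Eyy, W_stat = [], [], np.zeros((D, D))
        for X in X_by_spk:
            n_m = X.shape[0]
            L = B + n_m * W
            Lin = np.linalg.inv(L)                  # posterior covariance L_m^{-1}
            m = Lin @ (B @ mu + W @ X.sum(axis=0))  # posterior mean
            Ey.append(m)
            Eyy.append(Lin + np.outer(m, m))        # E[y y^T] = Cov + mean mean^T
            # accumulate E[(y - x)(y - x)^T] summed over this speaker's utterances
            W_stat += (n_m * (Lin + np.outer(m, m))
                       - np.outer(X.sum(axis=0), m) - np.outer(m, X.sum(axis=0))
                       + X.T @ X)
        # M-step: closed-form updates of mu, B, W
        mu = np.mean(Ey, axis=0)
        B = np.linalg.inv(np.mean(Eyy, axis=0) - np.outer(mu, mu))
        W = np.linalg.inv(W_stat / N)
    return B, W, mu
```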

2.3 Scoring

Assuming the embeddings are mean-subtracted and length-normalized, we let $\mu \approx 0$ to simplify the scoring function. Given two per-utterance embeddings $x_i, x_j$, PLDA generates a log-likelihood ratio (LLR) that measures the relative likelihood of the two embeddings coming from the same speaker. The LLR is defined as,

S_{\text{PLDA}}(x_i, x_j) = \log\frac{p(x_i, x_j|\mathcal{H}_1)}{p(x_i, x_j|\mathcal{H}_0)} = \log\frac{p(x_i, x_j)}{p(x_i)p(x_j)}    (4)

where $\mathcal{H}_1$ and $\mathcal{H}_0$ represent the same-speaker and different-speaker hypotheses. To derive the score function, without loss of generality, consider a set of $n_1$ embeddings $\mathcal{X}_1 = \{x_{1,n}\}_{n=1}^{n_1}$ that come from the same speaker. It can be proved that

\log p(\mathcal{X}_1) = \frac{1}{2}\Bigl(n_1^2 \mu_1^T W (B + n_1 W)^{-1} W \mu_1 - \sum_{n=1}^{n_1} x_{1,n}^T W x_{1,n} + \log|B| + n_1\log|W| - \log|B + n_1 W| - n_1 D \log(2\pi)\Bigr)    (5)

where $\mu_1 = \frac{1}{n_1}\sum_{n=1}^{n_1} x_{1,n}$. By substituting Eq. 5 into Eq. 4, the LLR can be expressed as

S_{\text{PLDA}}(x_i, x_j) \doteq \frac{1}{2}\bigl(x_i^T Q x_i + x_j^T Q x_j + 2 x_i^T P x_j\bigr)    (6)

where $\doteq$ denotes equivalence up to a negligible additive constant, and

Q = W\bigl((B + 2W)^{-1} - (B + W)^{-1}\bigr)W    (7)

P = W(B + 2W)^{-1}W    (8)

Note that $Q \prec 0$ and $P + Q \succeq 0$.
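
Given the trained precision matrices, the pairwise LLR of Eq. 6 can be computed directly from Eqs. 7-8. A minimal sketch (the function and argument names are ours, not from any toolkit):

```python
import numpy as np

def plda_score(xi, xj, B, W):
    """Pairwise PLDA LLR of Eq. 6, with Q and P from Eqs. 7-8.
    xi, xj: mean-subtracted, length-normalized embeddings;
    B, W: precision matrices of a trained two-covariance PLDA."""
    Q = W @ (np.linalg.inv(B + 2 * W) - np.linalg.inv(B + W)) @ W
    P = W @ np.linalg.inv(B + 2 * W) @ W
    return 0.5 * (xi @ Q @ xi + xj @ Q @ xj + 2 * xi @ P @ xj)
```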

3 Cosine as a typical PLDA

Relating Eq. 6 to the cosine similarity in Eq. 2, it is noted that when $-Q = P = I$, the LLR of PLDA reduces to the cosine similarity, since $x_i^T x_i = 1$. The condition $-Q = P = I$ is, however, not strictly required: PLDA is equivalent to the cosine if and only if $Q = \alpha I$ and $P = \beta I$, where $\alpha < 0$ and $\alpha + \beta \geq 0$.

Given $W \succ 0$, we have

W = \frac{\beta(\beta - \alpha)}{-\alpha} I    (9)

B = \frac{\beta(\beta + \alpha)(\beta - \alpha)}{\alpha^2} I    (10)

Without loss of generality, we let $W = B = I$. In other words, the cosine is a typical PLDA with both the within-class covariance $W^{-1}$ and the between-class covariance $B^{-1}$ fixed to the identity matrix.
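
A quick numerical check of this equivalence, reusing the plda_score sketch from Section 2.3: with $B = W = I$ and unit-norm embeddings, Eqs. 7-8 give $Q = -\frac{1}{6}I$ and $P = \frac{1}{3}I$, so the LLR equals $\frac{1}{3}S_{\text{cos}} - \frac{1}{6}$, a monotonically increasing function of the cosine score.

```python
import numpy as np

# Sanity check: with B = W = I, the PLDA LLR is an affine, order-preserving
# function of the cosine score (here: cos/3 - 1/6), so both back-ends
# rank verification trials identically.
rng = np.random.default_rng(0)
D = 256
xi = rng.standard_normal(D); xi /= np.linalg.norm(xi)
xj = rng.standard_normal(D); xj /= np.linalg.norm(xj)
I = np.eye(D)
assert np.isclose(plda_score(xi, xj, I, I), (xi @ xj) / 3 - 1 / 6)
```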

So far, we have considered only the simplest pairwise scoring. In the general case of many-vs-many scoring, PLDA and the cosine are also closely related. For example, consider two sets of embeddings $\mathcal{X}_1$ and $\mathcal{X}_2$ of sizes $K_1$ and $K_2$, respectively, with centroids denoted by $\mu_1$ and $\mu_2$. It can be shown that

S_{\text{PLDA}}(\mathcal{X}_1, \mathcal{X}_2) = \frac{K_1 K_2}{1 + K_1 + K_2} S_{\text{cos}}(\mu_1, \mu_2) + \frac{1}{2} C(K_1, K_2)    (11)

C(K_1, K_2) = \frac{K_1^2 + K_2^2}{1 + K_1 + K_2} - \frac{K_1^2}{1 + K_1} - \frac{K_2^2}{1 + K_2} + \log\Bigl(1 + \frac{K_1 K_2}{1 + K_1 + K_2}\Bigr)    (12)

under the condition of $W = B = I$. The term $C(K_1, K_2)$ depends only on $K_1$ and $K_2$.
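
The set-level score of Eqs. 11-12 can be transcribed directly into code. The sketch below assumes $W = B = I$ and takes $S_{\text{cos}}(\mu_1, \mu_2)$ as the cosine of the two length-normalized centroids; it is a transcription of the stated formula, not a general PLDA scorer.

```python
import numpy as np

def plda_set_score(X1, X2):
    """Many-vs-many PLDA score of Eqs. 11-12 under W = B = I.
    X1, X2: (K1, D) and (K2, D) arrays of normalized embeddings."""
    K1, K2 = len(X1), len(X2)
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    cos = float(mu1 @ mu2) / (np.linalg.norm(mu1) * np.linalg.norm(mu2))
    C = ((K1**2 + K2**2) / (1 + K1 + K2) - K1**2 / (1 + K1) - K2**2 / (1 + K2)
         + np.log(1 + K1 * K2 / (1 + K1 + K2)))
    return K1 * K2 / (1 + K1 + K2) * cos + 0.5 * C
```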

The above derivation shows that the cosine puts more stringent assumptions on the input embeddings than PLDA, namely:

  1. (dim-indep) Dimensions of speaker embeddings are mutually uncorrelated or independent;

  2. Based on 1), all dimensions share the same variance.

As the embeddings are assumed to be Gaussian, dimensional uncorrelatedness is equivalent to dimensional independence.

3.1 Diagonal PLDA

With Gaussian-distributed embeddings, the dim-indep assumption implies that the speaker embeddings have diagonal covariance matrices. To analyse the significance of this assumption to the performance of the SV back-end, a diagonal constraint is applied when updating $B$ and $W$ in Algorithm 1, i.e.,

B^{-1} = \text{diag}\Bigl(\frac{1}{M}\sum_m \mathbb{E}[y_m^{\circ 2}|\mathcal{X}] - \mu^{\circ 2}\Bigr)    (13)

W^{-1} = \text{diag}\Bigl(\frac{1}{N}\sum_m \sum_n \mathbb{E}[(y_m - x_{m,n})^{\circ 2}|\mathcal{X}]\Bigr)    (14)

where $\circ 2$ denotes the Hadamard (element-wise) square. The PLDA trained in this way is named the diagonal PLDA (DPLDA). The relationship between DPLDA and PLDA is similar to that between the diagonal-covariance GMM and the full-covariance GMM.
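
Only the M-step changes for DPLDA. A minimal sketch of the modification, reusing the statistics accumulated in the train_plda sketch above (the helper name is ours):

```python
import numpy as np

def diagonal_m_step(Ey, Eyy, W_stat, N):
    """DPLDA M-step (Eqs. 13-14): keep only the diagonal of the covariance
    statistics before inverting, which enforces the dim-indep assumption."""
    mu = np.mean(Ey, axis=0)
    B = np.linalg.inv(np.diag(np.diag(np.mean(Eyy, axis=0) - np.outer(mu, mu))))
    W = np.linalg.inv(np.diag(np.diag(W_stat / N)))
    return B, W, mu
```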

4 Experimental setup

Experiments are carried out on the VoxCeleb1+2 [16] and CNCeleb1 [17] databases. A vanilla ResNet34 [18] model is trained with 1029K utterances from 5994 speakers in the training set of VoxCeleb2. Following a state-of-the-art training configuration (https://github.com/TaoRuijie/ECAPA-TDNN), data augmentation with speed perturbation, reverberation and spectrum augmentation [19] is applied. The AAM-softmax loss [5] is adopted to produce angular-discriminative speaker embeddings.

The input features to the ResNet34 are 80-dimensional filterbank coefficients with mean normalization over a sliding window of up to 3 seconds. Voice activity detection is carried out with the default configuration in Kaldi (https://github.com/kaldi-asr/kaldi/blob/master/egs/voxceleb/v2/conf). The front-end module is trained to generate 256-dimensional speaker embeddings, which are subsequently mean-subtracted and length-normalized. The PLDA back-end is implemented in Kaldi and modified into the DPLDA according to Eqs. 13-14.

Performance evaluation is carried out on the test sets of VoxCeleb1 and CNCeleb1. The evaluation metrics are the equal error rate (EER) and the minimum decision cost function (DCF) with $p_{\text{tar}} = 0.01$ or $0.001$.

4.1 Performance comparison between backends

As shown in Table 1, the performance gap between the cosine and PLDA back-ends can be observed in the experiment on VoxCeleb. The cosine outperforms PLDA by relative improvements of 51.61% in terms of equal error rate (EER) and 50.73% in terms of minimum decision cost function with $P_{\text{tar}} = 0.01$ (DCF0.01). The performance difference becomes much more pronounced with DCF0.001, e.g., 0.3062 by PLDA versus 0.1137 by the cosine. Similar results are observed on the other test sets of VoxCeleb1 (not listed here due to the page limit).

The conventional setting of using LDA to preprocess raw speaker embeddings before PLDA is also evaluated, labelled as LDA+PLDA in Table 1. Applying LDA appears to have a negative effect on PLDA. This may be due to the absence of the dim-indep constraint in LDA. We argue that it is unnecessary to apply LDA to regularize the embeddings, and the commonly used LDA preprocessing is therefore removed in the following experiments.

Table 1: Comparison of back-ends on VoxCeleb.

            EER%   DCF0.01   DCF0.001
  cos       1.06   0.1083    0.1137
  PLDA      1.86   0.2198    0.3062
  LDA+PLDA  2.17   0.2476    0.3715
  DPLDA     1.11   0.1200    0.1426

The DPLDA incorporates the dim-indep constraint into PLDA training. As shown in Table 1, it improves the EER of PLDA from 1.86% to 1.11%, which is comparable to the cosine. This clearly confirms the importance of dim-indep.

4.2 Performance degradation in Iterative PLDA training

According to the derivation in Section 3, the PLDA implemented in Algorithm 1 is initialized as the cosine, i.e., $B = W = I$. However, PLDA has been shown to be inferior to the cosine by the results in Table 1. It is therefore expected that the performance of PLDA degrades during the iterative E-M training. Fig. 2 plots the EERs against the number of training iterations. Initially, PLDA achieves exactly the same performance as the cosine. In the first iteration, the EER increases sharply from 1.06% to 1.707%. For DPLDA, the dim-indep constraint counteracts the degradation.

Figure 2: PLDA gets worse in its iterative E-M training

4.3 When domain mismatch exists

The superiority of the cosine over PLDA has been evidenced on the VoxCeleb dataset, for which both the training and test data come from the same domain, i.e., interviews collected from YouTube. In many real-world scenarios, domain mismatch between training and test data commonly exists. A practical solution is to acquire a certain amount of in-domain data and update the back-end accordingly. The following experiment analyses the effect of domain mismatch on the performance of the back-end models.

The CNCeleb1 dataset is adopted as the domain-mismatched data. It is a multi-genre dataset of Chinese speech with acoustic conditions very different from VoxCeleb. The ResNet34 trained on VoxCeleb is deployed to extract embeddings from the utterances in CNCeleb1. The back-ends are trained and evaluated on the training and test embeddings of CNCeleb1, respectively.

As shown in Table 2, the performance of both the cosine and DPLDA is inferior to that of PLDA. Since the dim-indep assumption no longer holds, the diagonal constraint on the covariances does not bring any performance improvement to the cosine and DPLDA.

Table 2: Comparison of back-ends on CNCeleb1.

           EER%   DCF0.01   DCF0.001
  cos     10.11   0.5308    0.7175
  PLDA     8.90   0.4773    0.6331
  DPLDA   10.24   0.5491    0.8277

4.4 Analysis of between-/within-class covariances

To analyze the correlation between individual dimensions of the embeddings, the between-class and within-class covariances, $B_0^{-1}$ and $W_0^{-1}$, are computed as follows,

B_0^{-1} = \frac{1}{M}\sum_{m=1}^{M} n_m y_m y_m^T - \mu_0 \mu_0^T    (15)

W_0^{-1} = \frac{1}{M}\sum_{m=1}^{M}\sum_{n=1}^{n_m} (x_{m,n} - y_m)(x_{m,n} - y_m)^T    (16)

where $\mu_0 = \frac{1}{N}\sum_{m=1}^{M}\sum_{n=1}^{n_m} x_{m,n}$ and $y_m = \frac{1}{n_m}\sum_{n=1}^{n_m} x_{m,n}$. These are the training equations of LDA and are closely related to the M-step of PLDA. Note that, for visualization, the elements of $B_0^{-1}$ and $W_0^{-1}$ are converted into their absolute values.
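
Eqs. 15-16 and the diagonal index of Fig. 3 can be computed with a few lines of NumPy. A sketch, assuming the embeddings are grouped by speaker (the function name is illustrative):

```python
import numpy as np

def covariance_diagnosis(X_by_spk):
    """Between-/within-class covariances (Eqs. 15-16) and their diagonal
    indices trace(G)/sum(G), computed on element-wise absolute values."""
    M = len(X_by_spk)
    N = sum(len(X) for X in X_by_spk)
    mu0 = sum(X.sum(axis=0) for X in X_by_spk) / N      # global mean
    ym = [X.mean(axis=0) for X in X_by_spk]             # per-speaker means
    B0_inv = sum(len(X) * np.outer(y, y)
                 for X, y in zip(X_by_spk, ym)) / M - np.outer(mu0, mu0)
    W0_inv = sum((X - y).T @ (X - y) for X, y in zip(X_by_spk, ym)) / M

    def diag_index(G):
        G = np.abs(G)                                   # as in Fig. 3
        return np.trace(G) / G.sum()

    return diag_index(B0_inv), diag_index(W0_inv)
```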

In Fig. 3, both the between-class and within-class covariances show clear diagonal patterns in the domain-matched case (top row). This provides additional evidence supporting the aforementioned dim-indep assumption. However, the assumption is broken on the strongly domain-mismatched CNCeleb data. As shown by the two sub-plots in the bottom row of Fig. 3, even though the within-class covariance (right) retains a clear diagonal pattern, the pattern tends to vanish for the between-class covariance (left): off-diagonal elements have large absolute values and a dimension-correlation pattern appears, suggesting that dim-indep is broken. The numerical diagonal index also confirms this observation.

Figure 3: Between-class (left) and within-class (right) covariances of embeddings on the training data of VoxCeleb (top) and CN-Celeb (bottom). The diagonal index is computed as $\text{trace}(G)/\text{sum}(G)$ for a non-negative covariance matrix $G$.

5 Conclusion

The reason why PLDA appears to be inferior to the cosine scoring with neural speaker embeddings has been explained with both theoretical and experimental evidence. It has been shown that the cosine scoring is essentially a special case of PLDA. Hence, the non-Gaussian distribution of speaker embeddings should not be held responsible for the performance difference between the PLDA and cosine back-ends. Instead, the difference should be attributed to the dimensional independence assumption made by the cosine, as evidenced in our experimental results and analysis. Nevertheless, this assumption fits well only in the domain-matched condition. When severe domain mismatch exists, the assumption no longer holds and PLDA can work better than the cosine. Further improvement of PLDA needs to take this assumption into consideration. It is also worth noting that the AAM-softmax loss appears to have the benefit of regularizing embeddings to be homogeneously Gaussian, considering the good performance of the cosine scoring.

References

  • [1] D. Garcia-Romero and C. Y. Espy-Wilson, “Analysis of i-vector length normalization in speaker recognition systems,” in Twelfth annual conference of the international speech communication association, 2011.
  • [2] S. Ioffe, “Probabilistic linear discriminant analysis,” in European Conference on Computer Vision.   Springer, 2006, pp. 531–542.
  • [3] N. Dehak, P. J. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, “Front-end factor analysis for speaker verification,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 788–798, 2010.
  • [4] H. Zeinali, S. Wang, A. Silnova, P. Matějka, and O. Plchot, “But system description to voxceleb speaker recognition challenge 2019,” arXiv preprint arXiv:1910.12592, 2019.
  • [5] J. Deng, J. Guo, N. Xue, and S. Zafeiriou, “Arcface: Additive angular margin loss for deep face recognition,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 4690–4699.
  • [6] X. Xiang, S. Wang, H. Huang, Y. Qian, and K. Yu, “Margin matters: Towards more discriminative deep neural network embeddings for speaker recognition,” in 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC).   IEEE, 2019, pp. 1652–1656.
  • [7] L. Li, Z. Tang, Y. Shi, and D. Wang, “Gaussian-constrained training for speaker verification,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2019, pp. 6036–6040.
  • [8] Y. Zhang, L. Li, and D. Wang, “Vae-based regularization for deep speaker embedding,” arXiv preprint arXiv:1904.03617, 2019.
  • [9] Y. Cai, L. Li, A. Abel, X. Zhu, and D. Wang, “Deep normalization for speaker vectors,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 733–744, 2020.
  • [10] L. Li, D. Wang, and T. F. Zheng, “Neural discriminant analysis for deep speaker embedding,” arXiv preprint arXiv:2005.11905, 2020.
  • [11] S. Balakrishnama and A. Ganapathiraju, “Linear discriminant analysis-a brief tutorial,” Institute for Signal and information Processing, vol. 18, no. 1998, pp. 1–8, 1998.
  • [12] A. Sizov, K. A. Lee, and T. Kinnunen, “Unifying probabilistic linear discriminant analysis variants in biometric authentication,” in Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR).   Springer, 2014, pp. 464–475.
  • [13] M. I. Jordan, “An introduction to probabilistic graphical models,” 2003.
  • [14] N. Brümmer and E. De Villiers, “The speaker partitioning problem.” in Odyssey, 2010, p. 34.
  • [15] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz et al., “The kaldi speech recognition toolkit,” in Proc. of ASRU, no. EPFL-CONF-192584.   IEEE Signal Processing Society, 2011.
  • [16] A. Nagrani, J. S. Chung, and A. Zisserman, “Voxceleb: a large-scale speaker identification dataset,” arXiv preprint arXiv:1706.08612, 2017.
  • [17] L. Li, R. Liu, J. Kang, Y. Fan, H. Cui, Y. Cai, R. Vipperla, T. F. Zheng, and D. Wang, “Cn-celeb: multi-genre speaker recognition,” Speech Communication, 2022.
  • [18] J. S. Chung, J. Huh, S. Mun, M. Lee, H. S. Heo, S. Choe, C. Ham, S. Jung, B.-J. Lee, and I. Han, “In defence of metric learning for speaker recognition,” arXiv preprint arXiv:2003.11982, 2020.
  • [19] D. S. Park, W. Chan, Y. Zhang, C.-C. Chiu, B. Zoph, E. D. Cubuk, and Q. V. Le, “Specaugment: A simple data augmentation method for automatic speech recognition,” arXiv preprint arXiv:1904.08779, 2019.