
Acoustic Model Optimization over Multiple Data Sources: Merging and Valuation

Victor Junqiu Wei, Weicheng Wang
Department of Computer Science and Engineering
The Hong Kong University of Science and Technology
[email protected], [email protected]
Di Jiang, Conghui Tan, Rongzhong Lian
AI Group, WeBank Co., Ltd, Shenzhen, China
{dijiang,martintan,ronlian}@webank.com
Abstract

Due to the rising awareness of privacy protection and the voluminous scale of speech data, it is becoming infeasible for Automatic Speech Recognition (ASR) system developers to train the acoustic model on complete data as before. For example, the data may be owned by different curators, and sharing it with others is not allowed. In this paper, we propose a novel paradigm to solve salient problems plaguing the ASR field. In the first stage, multiple acoustic models are trained on different subsets of the complete speech data, while in the second stage, two novel algorithms are utilized to generate a high-quality acoustic model from those trained on the data subsets. We first propose the Genetic Merge Algorithm (GMA), which is a highly specialized algorithm for optimizing acoustic models but suffers from low efficiency. We further propose the SGD-Based Optimizational Merge Algorithm (SOMA), which effectively alleviates the efficiency bottleneck of GMA and maintains superior model accuracy. Extensive experiments on public data show that the proposed methods significantly outperform the state-of-the-art. Furthermore, we introduce the Shapley Value to estimate the contribution score of the trained models, which is useful for evaluating the effectiveness of the data and providing fair incentives to their curators.

1 Introduction

This paper is an extended version of [1].

Automatic Speech Recognition (ASR) has already become an indispensable part of modern intelligent systems such as voice assistants and customer service robots. An effective ASR system relies on a robust acoustic model trained over a huge amount of speech data collected from a wide range of domains. However, in real-life scenarios, training an acoustic model with the complete data is increasingly infeasible for the following three reasons:

  • R1: Speech data from different domains are owned by distinct curators, who are usually unwilling to share these data due to privacy concerns. Meanwhile, to encourage curators to contribute to the training process, providing fair incentives for their participation is also challenging.

  • R2: Speech data of multiple curators may be distributed across different computing centers. Traditional distributed computing paradigms, e.g., ParameterServer, require frequent information exchange between different computing nodes. Thus, if the data are remotely distributed, the information exchange is problematic.

  • R3: The speech data from a single curator can be voluminous. Traditional optimization methods like Stochastic Gradient Descent (SGD) are slow, even in their parallel or asynchronous versions. Instead, one would like to process the data in a fully parallel way.

To tackle the issues mentioned above, we propose to optimize the acoustic model in a two-stage fashion: (1) we first train models on different parts of the data independently until they converge on local data; (2) after local training, we gather all separately trained models and merge them into a single one. By this means, only one round of communication is required to train the model. This technique has already been applied in some existing applications [2, 3, 4]. For example, [2] proposed to use this technique to solve the privacy and communication problems (i.e., R1 and R2) in federated learning. Kaldi [3], one of the most widely-used speech recognition toolkits, has adopted it as the default training scheme to deal with the efficiency challenge (i.e., R3) [4].

However, in terms of merging models, most of the existing work still relies on the simplistic technique of averaging all the models. Although some progress has been made in improving performance, a better model merging strategy remains an open problem. Besides, since the training data are collected from multiple curators, it is important to fairly measure the contributions of different data, e.g., to offer the curators corresponding rewards. Nevertheless, evaluating participants’ contributions is currently under-explored.

In this paper, we propose a new optimization paradigm. Unlike existing studies, we merge the models in a more data-efficient way, which improves the model quality with a limited amount of data. Moreover, the Shapley Value is introduced to evaluate the contribution of the data provided by each curator to the target model, so that curators can receive fair incentive rewards. In detail, we first propose a merge method based on the genetic algorithm, named GMA, which is capable of yielding models with high recognition accuracy. However, its practicality is heavily limited by its poor efficiency. To tackle this issue, we convert the model merging problem into a mathematical optimization problem via a novel formulation and develop a new optimization method based on SGD to solve it, which leads to a new method called SOMA.

This paper differs from the preliminary conference version [1] in the following ways. First of all, in order to evaluate the contributions of different data curators and offer them fair rewards, we propose a novel method for the valuation of the source models, which is presented in a newly added section (Section 7). We also conduct experiments on the valuation of the source models in Section 8.5. Secondly, an extra group of experiments has been conducted, where we empirically evaluate the co-contribution scores and study their relationships to other statistical quantities. Thirdly, the related work section (Section 2) has been expanded: we review the related literature more comprehensively, and works related to incentive mechanisms are also included. Fourthly, more details have been added to some existing technical parts to make them easier to read.

Experiments suggest that SOMA can produce models comparable to GMA's, but with much less computational cost. Without loss of generality, we focus on the scenario of merging several DNN acoustic models with homogeneous structures into a single one. This practice can be straightforwardly applied to state-of-the-art ASR systems based upon DNN-HMM or end-to-end architectures.

2 Related Work

2.1 Automatic Speech Recognition

Automatic speech recognition (ASR) is a technology for smoother interaction between people and machines. Its applications include mobile speech recognition, smart speakers, and other smart voice products. An ASR system converts speech fragments into text and typically includes an acoustic model (AM) and a language model (LM). Recently, there has been some research on end-to-end ASR that does not build the AM and LM explicitly. However, traditional ASR systems still dominate in practice due to their high performance and reliability.

The acoustic model aims to find the probability distribution over phoneme sequences given a piece of speech. Before the rise of deep learning, practical acoustic models were implemented with statistical models such as the classical Gaussian mixture model (GMM) and the hidden Markov model (HMM). GMM and HMM hold a critical position due to their mathematical elegance and their capability to model time-varying sequences [5]. However, they fail to capture complex sequential information and non-linear speech features. With the rise of deep learning, deep learning-based acoustic models, such as CNNs, LSTMs, encoder-decoder frameworks, and attention mechanisms, have further improved performance, becoming dominant in the field of speech recognition.

Subsequent researchers found that discriminative training can achieve relatively better results [6, 7, 8]. Some research proposed better feature extraction models to replace the GMM, including the neural network (NN), the restricted Boltzmann machine (RBM), the deep belief network (DBN), and the deep neural network (DNN) [9]. The outstanding performance of the hybrid model has also attracted much attention.

Deep learning methods enhance the speech signal representation compared to traditional acoustic models such as the GMM. Inspired by the GMM-HMM structure, researchers proposed to replace the GMM with a DNN, resulting in the popular DNN-HMM hybrid system, which combines the HMM's strength in modeling sequential information with the DNN's superior ability to capture speech representations. In 2012, DNNs trained on very large-scale data successfully reduced the word error rate (WER) [10], highlighting the potential of DNNs to learn the hierarchical structure of representations from the input data. Further advanced deep models, such as the recurrent neural network (RNN) (including long short-term memory (LSTM) and gated recurrent unit (GRU)) and the convolutional neural network (CNN), quickly surpassed the DNN, as they better capture the rich structural information in speech [11, 12, 13, 14, 15, 16, 17]. The lack of annotated data promotes research on unsupervised representation learning. For unsupervised representation learning of speech, the autoencoder (AE), the restricted Boltzmann machine (RBM), and the deep belief network (DBN) are widely used [18].

With the recent interest in generative models, the variational auto-encoder (VAE), the generative adversarial network (GAN), and deep autoregressive models are also used in AMs [19, 20]. Among them, VAE and GAN can achieve unsupervised acquisition of speech features [21, 22]. By exploiting the characteristics of data in practical applications, the representation learning capabilities of deep models under different settings have benefited speech recognition.

The adaptation of the acoustic model is also of interest in this paper. Existing methods of AM adaptation focus on addressing the feature shift (covariate shift) of speech, caused by, e.g., speaker, environmental noise, pitch, and loudness [23]. In federated learning scenarios, by contrast, we aim to merge the AMs from curators with different data distributions to form a stronger AM that fits all participants.

2.1.1 Language model

In the ASR system, the language model (LM) is another critical component, which facilitates converting the output of the AM into a logical and natural sentence. Language models have achieved significant performance improvements along with the development of natural language processing, leading to better ASR performance.

The primary language model is the bag-of-words (BoW) model, a one-hot form of text representation. This method is suitable for processing discrete data and extending features but does not consider the order between words [24]. Although this type of method is simple, its semantic representation is not exact. In ASR, the most commonly used language model is the $n$-gram model, which models the next-word probability by counting co-occurrences. Simple as the $n$-gram model is, its robustness and interpretability make it a worthy choice as the LM in ASR systems.

In contrast to statistical language models, Bengio et al. proposed the concept of neural network language models in 2003 [25]. The performance of language models has been effectively improved with the emergence of word vector models. The most representative ones are Word2Vec [26] and GloVe [27], which convert each word into a vector representation carrying richer semantic information. As contextual words are very informative, researchers [28] proposed to apply CNNs instead of $n$-gram models to capture context from a larger receptive field. Meanwhile, the success of RNNLM shows the significant importance of modeling long-range sequential context.

On the other side, pre-training has now become the main idea of language models [7, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44]. ELMo [45] uses an LSTM structure and a bidirectional setting to fully consider contextual information and reflect the characteristics of different dimensions. Pre-trained models can cope with different downstream tasks and have better overall performance. The Transformer [46] (and its variants [47, 48, 49, 50]), a powerful attention-based architecture, has proven its overwhelming ability to model deep structural information from data [51, 31]. The Transformer was soon applied to large-scale pre-trained language models, including GPT [51], BERT [32], GPT-2 [33] and the subsequent ALBERT [34], RoBERTa [35], etc. However, it may be inefficient to apply pre-trained language models such as BERT and GPT as the LM in ASR [52]. Therefore, how to utilize and merge LMs in federated ASR remains an open problem.

2.1.2 End-to-end ASR model

Some researchers consider ASR as a sequence recognition task. Inspired by other sequence modeling tasks such as dialogue generation and machine translation, it is reasonable to expect that ASR performance can be improved by replacing the module-based approach with an end-to-end one, which avoids error accumulation between modules. The end-to-end ASR model is elegant and straightforward, without introducing noise from separate AM and LM components; it consists of an encoder-decoder structure based on the attention mechanism [53, 54].

However, end-to-end ASR is still challenging in real applications. For example, errors in previous time steps may propagate to subsequent decisions, especially when the input speech is long. The mismatch between training and evaluation objectives usually brings suboptimal performance [55]. Tjandra et al. [56] propose integrating sequence-to-sequence approaches with reinforcement learning to close the gap between the training objective and the evaluation utility. Li et al. [57] introduce a new layer-wise optimizer for end-to-end speech recognition models, which enhances training convergence. Pham et al. [58] propose using self-attention via the Transformer architecture to replace the traditional end-to-end model and show better performance than previous approaches. However, as end-to-end ASR systems consume many computational resources, they are not suitable for deployment on edge devices or in federated learning scenarios.

2.2 Acoustic Model Optimization

Existing optimization methods for training deep neural networks can be divided into three categories: first-order optimization methods represented by the widely used stochastic gradient descent (SGD) method, higher-order optimization methods such as Newton's method, and heuristic optimization methods such as the coordinate descent method [59].

Among them, SGD [60, 61] is widely used for training acoustic models [62]. Since each iteration is independent of the total amount of data, SGD has achieved excellent performance in the training of sparsely distributed data [63]. Furthermore, variants of SGD such as Adam [64] have received a lot of attention and appreciation. The stochastic average gradient (SAG) and the stochastic variance-reduced gradient (SVRG), both based on gradient descent, were proposed to address slow convergence. Experiments in [65, 66, 67] suggest that SVRG matches basic SGD, but it needs to process the whole dataset over a number of iterations. Researchers have proposed schemes for applying SVRG in a distributed setting and observed better performance; nevertheless, such schemes are difficult to implement due to the operations over large amounts of data [68, 69].

Research on heuristic optimization methods has also made progress. Talbi et al. propose to use variation operators (e.g., mutation, crossover) to select and reproduce DNNs on their representations [70]. Cui et al. proposed a combined optimization strategy using the SGD algorithm and evolutionary learning [71]. As for the distributed computation setting, asynchronous SGD under the ParameterServer framework is widely adopted [72].

Moreover, the authors of [73] propose an asynchronous SGD (A-SGD) algorithm using lower-precision arithmetic, avoiding a variety of problems on modern hardware while maintaining the good performance of SGD. A-SGD has been further applied to industrial applications composed of large deep neural networks, improving the performance of large distributed machine learning clusters [74, 75]. However, the limited communication conditions between the participants and the central server prevent a federated system from achieving the best performance with conventional optimization methods. Some previous works help relieve the aforementioned limitations [76, 77, 78]. Povey et al. [4] proposed a method of training acoustic models on subsets of data independently and then merging them periodically.

Model combination in previous works [76, 77, 78] does not merge the models into a single one during training; instead, all the models are evaluated separately and their outputs are combined to produce the final results at inference time. Hence, these approaches are more related to ensemble learning [79]. Model ensembling increases the burden of prediction, such as the computational cost and the number of communication rounds, in the federated learning scenario.

2.3 Incentive Mechanism in Federated Scenario

Researchers have proposed many incentive mechanisms deliberately designed for federated learning (FL), which reward participants of different data quality based on contribution evaluation, in order to encourage more participants to contribute their high-quality data to the FL process. Researchers leverage ideas from game theory to address this problem [80, 81, 82, 83].

Some researchers apply Stackelberg-game-based incentive mechanisms to FL, since the parameter server and participants in FL can be regarded as the leader and followers in a Stackelberg game. Stackelberg games are used to analyze the federated scenario and to find the benefit equilibria between the organizer and the participants [84, 85]. Sarikaya et al. [86] propose to mitigate the delays in the completion of each training batch by analytically obtaining an equilibrium solution of a Stackelberg game. Khan et al. [87] model the incentive-based interaction between a global server and participants in FL via a Stackelberg game to motivate the participants to join the FL process. However, since the participants in FL do not upload data but upload parameters instead, the utility functions are invisible to the server. Thus, Stackelberg games only work to incentivize the contribution of network resources but cannot calculate each participant's contribution to the model quality.

Some researchers use incentive mechanisms inspired by contract theory to motivate participants to join FL [88, 89]. Kang et al. [90] accomplish pairwise contribution qualification by introducing reputation, letting participants evaluate each other, and combine contract theory to motivate participants with high-quality data. A hierarchical incentive mechanism design for FL is proposed in [91], which considers multiple model owners and the formation of multiple federations. It analyzes the equilibrium that the model reaches after iterations of merges and splits, applying solutions from Stackelberg games. For a fair distribution of payoffs, they adopt a coalitional game approach based on each model owner's marginal contribution to the federation. Zhan et al. [92] design a deep reinforcement learning (DRL)-based incentive mechanism for FL to motivate edge nodes to contribute to model training. VCG-based FL incentive mechanisms are designed to incentivize data owners to contribute all their data and truthfully report their costs in FL settings [93, 94]. Zeng et al. [95] and Le et al. [96] formulate the incentive mechanism between the participants of federated learning as an auction. Yu et al. propose the FL incentivizer (FLI), which dynamically divides a given budget in a context-aware manner among data owners in a federation by jointly maximizing the collective utility while minimizing the inequality among the data owners, in terms of the payoff received and the waiting time for receiving payoffs [97, 98]. Recently, researchers have introduced the Shapley Value or its approximations to evaluate the contributions of participants in FL [99, 100].

3 Problem Setup

Assume that we have $n$ acoustic models $\{M_{S,1},M_{S,2},\dots,M_{S,n}\}$ with the same structure but different parameters, since they are trained on different data. We call these $n$ acoustic models the source models. Our goal is to merge them into one target model $M_T$, which possesses the same structure as the source models but has better performance.

For each acoustic model $M_i$, we assume it has $L$ DNN layers, and the parameters of its $l$-th layer are denoted as $W_i^l$ ($1\leq l\leq L$). $W_i^l$ includes all types of trainable parameters on that layer, such as weights and biases. For notational simplicity, we use operations on model $M_i$ to denote operations on all its parameters $W_i^l$ with $l=1,\dots,L$. For example, $(M_i+M_j)/2$ is the model generated by averaging all the corresponding parameters of $M_i$ and $M_j$.

In order to select the best $M_T$, we need some data to evaluate the quality of $M_T$; we refer to those data as the validation data. In contrast, the data used for training $\{M_{S,1},M_{S,2},\dots,M_{S,n}\}$ are called the training data. Though extra validation data is required for our scheme, it will be shown later that the model quality can be greatly improved with very little validation data, which implies it is possible to obtain acoustic models of similar performance with less training data. To measure the performance of acoustic models, we utilize the widely used metric Word Error Rate (WER), which is defined as the minimum edit distance between the ASR hypothesis and the ground truth divided by the number of words in the ground truth.
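
To make the metric concrete, the following minimal Python sketch (illustrative only; production systems such as Kaldi use their own scoring tools) computes WER as the word-level edit distance divided by the reference length.

```python
# Minimal sketch: WER = word-level edit distance / number of reference words.

def edit_distance(ref, hyp):
    """Levenshtein distance between two word lists (substitutions, insertions, deletions)."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match or substitution
    return d[-1][-1]

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / max(len(ref), 1)

print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2 errors / 6 words = 0.333...
```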

4 Genetic Merge Algorithm

Since we aim at discovering a better target acoustic model from a set of source models, a relatively straightforward approach is to consider the source models as the initial population and apply a genetic algorithm [101]. Genetic algorithms are a class of heuristic search algorithms inspired by biological evolution. They optimize a group of candidates by repeatedly generating new individuals via operations like mutation and crossover, and then grant the individuals with the highest fitness the right to produce offspring, just as biological evolution works. A genetic algorithm has two important factors that determine its performance: the scheme of generating offspring and the strategy of selecting the fittest individuals. In the following, we propose the Genetic Merge Algorithm (GMA), which is calibrated for the scenario of acoustic model optimization in ASR systems.

In GMA, the scheme for generating offspring includes four different operators. The first three are classical in GA: reproduction, mutation and crossover. Specifically, we use the single-point mutation operator and the one-point crossover operator, respectively. Moreover, inspired by the phenomenon discovered in [2] that directly averaging two neural network models with the same initialization but trained on different data can lead to a better one, we propose a new operator called the linear interpolation operator. In detail, these four operators work as follows (a code sketch follows the list):

  • Reproduction directly copies the existing models into the next generation.

  • Mutation randomly changes one bit in the binary expression of the parameters for the selected model.

  • Crossover takes two parent models as input. It randomly draws an integer $l$ ($1\leq l<L$), and the first $l$ layers of the two models are swapped. For example, if $M_1$ and $M_2$ are the two input parent models, then the two generated offspring are:

    $$M_{\text{new},1}=\{W_{1}^{1},\dots,W_{1}^{l},W_{2}^{l+1},\dots,W_{2}^{L}\},\qquad M_{\text{new},2}=\{W_{2}^{1},\dots,W_{2}^{l},W_{1}^{l+1},\dots,W_{1}^{L}\}.$$
  • Linear interpolation linearly combines all the parameters of two parent models in a weighted way to generate one new model, i.e.,

    $$M_{\text{new}}=\lambda M_{1}+(1-\lambda)M_{2},$$

    where $\lambda$ is an interpolation coefficient randomly sampled from $(0,1)$. Obviously, this operator is an extension of the simple average.

For the normalization layers like batch normalization, we view the statistical variables on these layers (such as the batch mean and variance in batch normalization) as parameters, and apply genetic operators to these statistical variables.
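
To make the four operators concrete, here is a minimal Python/NumPy sketch of one possible implementation, assuming a model is represented simply as a list of per-layer parameter arrays; the helper names are ours, and the bit-flip mutation operates on the byte representation of a single parameter.

```python
import random
import numpy as np

# A "model" is represented as a list of per-layer parameter arrays [W^1, ..., W^L];
# statistical variables of normalization layers are treated as part of these arrays.

def reproduction(model):
    # Copy an existing model into the next generation unchanged.
    return [w.copy() for w in model]

def mutation(model):
    # Flip one random bit in the binary representation of one randomly chosen parameter.
    child = [w.copy() for w in model]
    layer = child[random.randrange(len(child))].reshape(-1)
    k = random.randrange(layer.size)
    raw = bytearray(layer[k:k + 1].tobytes())
    bit = random.randrange(8 * len(raw))
    raw[bit // 8] ^= 1 << (bit % 8)
    layer[k] = np.frombuffer(bytes(raw), dtype=layer.dtype)[0]
    return child

def crossover(m1, m2):
    # One-point crossover: swap the first l layers of the two parents (1 <= l < L).
    l = random.randrange(1, len(m1))
    child1 = [w.copy() for w in m1[:l] + m2[l:]]
    child2 = [w.copy() for w in m2[:l] + m1[l:]]
    return child1, child2

def linear_interpolation(m1, m2):
    # Weighted average of all parameters with a coefficient lambda drawn from (0, 1).
    lam = random.random()
    return [lam * w1 + (1.0 - lam) * w2 for w1, w2 in zip(m1, m2)]
```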

We utilize WER as the measurement of fitness, and a lower WER implies better fitness. For each generation, we evaluate the WER of each acoustic model on the validation set and choose the top-$K$ models with the lowest WERs as the parents of the next generation, where $K$ is a hyper-parameter. Furthermore, inspired again by the fact that the simple average of all the source models is already a good choice for $M_T$, we also include the averaged model $\sum_{i=1}^{n}M_{S,i}/n$ in the initial population. Such initialization provides a better start for GMA and reduces the time needed for convergence. Besides, it ensures that the final acoustic model generated by GMA is never worse than the simple average.

The complete workflow of GMA is presented in Algorithm 1. Besides $K$, GMA requires three additional hyperparameters: $p_1$, $p_2$ and $p_3$, which are the probabilities that the mutation, crossover and linear interpolation operators are applied to generate offspring, respectively. It is worth noting that larger $p_1$, $p_2$, $p_3$ and $K$ bring more diversity into the population and usually lead to better search results, but they also increase the computation cost.

input : source models $M_{S,1},M_{S,2},\dots,M_{S,n}$
1 Initialize $P=\{M_{S,1},\dots,M_{S,n},\ \sum_{i=1}^{n}M_{S,i}/n\}$
2 while not converged do
3       $P'=P$ // Reproduction
4       foreach $M_i$ in $P$ do
5             With probability $p_1$ let $P'=P'\cup\text{Mutation}(M_i)$
6       Randomly shuffle $P$
7       foreach adjacent models $M_i,M_{i+1}$ in $P$ do
8             With probability $p_2$ let $P'=P'\cup\text{Crossover}(M_i,M_{i+1})$
9             With probability $p_3$ let $P'=P'\cup\text{LinearInterpolation}(M_i,M_{i+1})$
10      Compute the WERs of the models in $P'$ on the validation set
11      Let $P$ be the set of top-$K$ models in $P'$
output : $M_T=$ model in $P$ with the lowest WER
Algorithm 1 Genetic Merge Algorithm (GMA)

5 SGD-Based Optimizational Merge Algorithm

Despite its superior performance in generating high-quality acoustic models, GMA suffers from extremely low efficiency. Processing even a few acoustic models with GMA on a small validation set (e.g., 5 source models on a validation set containing 10 hours of speech data) already requires several days, making it hardly applicable to real-life applications.

To tackle this issue, we propose the SGD-Based Optimizational Merge Algorithm (SOMA), which enjoys performance similar to GMA's but is much more efficient. The major challenge in developing this method is how to convert our acoustic model merging problem into a mathematical optimization problem to which SGD can be applied.

The key observation is that any model $M$ generated by GMA can be presented layer-wise by the following formula:

$$W^{l}=\sum_{i=1}^{n}\theta_{i}^{l}W_{S,i}^{l}+\Delta W^{l} \qquad (1)$$
$$\text{s.t. }\ \theta_{i}^{l}\geq 0,\ \sum_{i=1}^{n}\theta_{i}^{l}=1$$

for all its layers $W^l$ ($1\leq l\leq L$). Here the summation term $\sum_{i=1}^{n}\theta_{i}^{l}W_{S,i}^{l}$ corresponds to the linear interpolation and crossover operators, while the extra variable $\Delta W^l$ captures the change introduced by mutation. This fact is rigorously justified by the following proposition:

Proposition 1

$M$ in the form of (1) covers any model generated by GMA. Besides, the term $\Delta W^l$ is brought by the mutation operation. In other words, it always holds that $\Delta W^l=0$ if the mutation operator is not applied.

Proof 1

We prove this proposition by induction.

For any source model $M_{S,i}$, it is obvious that (1) can recover it by setting $\Delta W^l=0$, $\theta_i^l=1$ and all other $\theta_j^l=0$ with $j\neq i$. For the simple average $\sum_{i=1}^{n}M_{S,i}/n$, we can choose $\theta_i^l=1/n$ for all $i$ and also $\Delta W^l=0$ for all layers. Hence, all the models in the initial generation of GMA can be presented by (1) with $\Delta W^l=0$.

Assume all the models in one generation satisfy equation (1); we then proceed to show that the proposition also holds for the next generation:

It is trivial for the model generated by reproduction, since it is exactly the same as its parent.

For the model generated by the mutation operator, it can be formulated into (1) by changing the variable $\Delta W^l$ of its parent while inheriting the other terms.

Since the crossover operator just swaps some layers of two models, the internal structure of each layer is still preserved, and the constraints in (1) are imposed layer-wise; hence the resulting models still stick to the pattern of (1) once their parents do, but with the corresponding $\theta_i^l$ and $\Delta W^l$ swapped.

As for the linear interpolation operator, assume $M_1$ and $M_2$ are the two input parents; then their layers can be written as:

$$W_{1}^{l}=\sum_{i=1}^{n}\alpha_{i}^{l}W_{S,i}^{l}+\Delta W^{l}_{1},\qquad W_{2}^{l}=\sum_{i=1}^{n}\beta_{i}^{l}W_{S,i}^{l}+\Delta W^{l}_{2},$$

where $\alpha_i^l$ and $\beta_i^l$ are two groups of realizations of $\theta_i^l$ and satisfy the constraints on $\theta_i^l$ in (1). Thus, the parameters of the linearly interpolated model should be:

$$W_{\text{new}}^{l}=\lambda W_{1}^{l}+(1-\lambda)W_{2}^{l}=\lambda\Big(\sum_{i=1}^{n}\alpha_{i}^{l}W_{S,i}^{l}+\Delta W^{l}_{1}\Big)+(1-\lambda)\Big(\sum_{i=1}^{n}\beta_{i}^{l}W_{S,i}^{l}+\Delta W^{l}_{2}\Big)=\sum_{i=1}^{n}\underbrace{\big(\lambda\alpha_{i}^{l}+(1-\lambda)\beta_{i}^{l}\big)}_{\theta^{l}_{\text{new},i}}W^{l}_{S,i}+\underbrace{\lambda\Delta W^{l}_{1}+(1-\lambda)\Delta W^{l}_{2}}_{\Delta W^{l}_{\text{new}}}.$$

Next, we need to show that $\theta_{\text{new},i}^{l}$ satisfies the constraints in (1). Its non-negativity is obvious, considering that $\alpha_{i}^{l}\geq 0$, $\beta_{i}^{l}\geq 0$ and $\lambda\in(0,1)$. Besides,

$$\sum_{i=1}^{n}\theta_{\text{new},i}^{l}=\sum_{i=1}^{n}\big(\lambda\alpha_{i}^{l}+(1-\lambda)\beta_{i}^{l}\big)=\lambda\sum_{i=1}^{n}\alpha_{i}^{l}+(1-\lambda)\sum_{i=1}^{n}\beta_{i}^{l}=\lambda+(1-\lambda)=1,$$

where the facts $\sum_{i=1}^{n}\alpha_{i}^{l}=1$ and $\sum_{i=1}^{n}\beta_{i}^{l}=1$ are applied to the second equality. Hence, the model generated by linear interpolation also satisfies the desired pattern.

Now we can conclude that all the offspring produced by any operator still follow the pattern in (1), which completes the mathematical induction.

Finally, from the above argument, it can be seen that the $\Delta W^l$ of the models generated by both the crossover and linear interpolation operators must be 0 once the corresponding terms of their parents are 0. Along with the fact that $\Delta W^l=0$ for all the models in the initial generation, we can conclude that a non-zero $\Delta W^l$ can only be introduced by mutation.
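
As a quick numerical sanity check of the induction step for linear interpolation, the following NumPy snippet (illustrative only) builds two toy models in the form of (1), interpolates them, and verifies that the combined coefficients remain on the simplex and that the merged layer still matches the pattern of (1).

```python
import numpy as np

rng = np.random.default_rng(0)
n, shape = 3, (4, 4)                       # three source models, a single toy layer
W_S = [rng.normal(size=shape) for _ in range(n)]

def layer_in_form_1(theta, delta):
    # A layer written in the pattern of Eq. (1): sum_i theta_i * W_{S,i}^l + Delta W^l
    return sum(t * W for t, W in zip(theta, W_S)) + delta

alpha = np.array([0.2, 0.5, 0.3])          # parent 1 coefficients (on the simplex)
beta = np.array([0.6, 0.1, 0.3])           # parent 2 coefficients (on the simplex)
d1, d2 = 0.01 * rng.normal(size=shape), np.zeros(shape)
W1, W2 = layer_in_form_1(alpha, d1), layer_in_form_1(beta, d2)

lam = 0.7                                  # interpolation coefficient in (0, 1)
W_new = lam * W1 + (1 - lam) * W2          # the linear interpolation operator
theta_new = lam * alpha + (1 - lam) * beta # coefficients predicted by the proof
delta_new = lam * d1 + (1 - lam) * d2

assert np.all(theta_new >= 0) and np.isclose(theta_new.sum(), 1.0)
assert np.allclose(W_new, layer_in_form_1(theta_new, delta_new))
print("offspring still matches the pattern of Eq. (1)")
```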

Now that we have defined the pattern that the target model should follow, we can formulate our optimization problem as:

$$\begin{array}{cl}\min_{W^{l},\theta_{i}^{l},\Delta W^{l}} & \ell(M)\\ \text{s.t. } & W^{l}=\sum_{i=1}^{n}\theta_{i}^{l}W_{S,i}^{l}+\Delta W^{l}\\ & \theta_{i}^{l}\geq 0,\ \sum_{i=1}^{n}\theta_{i}^{l}=1,\end{array} \qquad (2)$$

where $M$ is the model consisting of parameters $\{W^{1},\dots,W^{L}\}$, and $\ell(M)$ is the loss function of model $M$ on the validation data. Any common training criterion for DNN-based acoustic models can be used as the loss function here, such as maximum mutual information (MMI) [102].

Finally, due to the different natures of SGD and genetic algorithms, it is much easier for SGD to overfit the validation data when solving (2). This is because $\Delta W^l$ can be arbitrary in the current formulation, and it is possible for $\Delta W^l$ to become large enough to dominate the other terms. To avoid such a problem, we impose an extra restriction on $\Delta W^l$: its magnitude cannot exceed that of the whole parameter $W^l$ up to a constant factor $\rho\geq 0$, e.g., $\rho=0.01$. As a result, our formal formulation for model merging becomes:

$$\begin{array}{cl}\min_{W^{l},\theta_{i}^{l},\Delta W^{l}} & \ell(M)\\ \text{s.t. } & W^{l}=\sum_{i=1}^{n}\theta_{i}^{l}W_{S,i}^{l}+\Delta W^{l}\\ & \theta_{i}^{l}\geq 0,\ \sum_{i=1}^{n}\theta_{i}^{l}=1\\ & \|\Delta W^{l}\|\leq\rho\|W^{l}\|.\end{array} \qquad (3)$$

It is possible to solve this problem with optimization methods other than SGD. We choose SGD for two reasons: 1) it is well known that SGD is much faster than traditional optimization methods; 2) our goal is to optimize a DNN, whose objective function is non-convex and has many local minima, and SGD has been shown to have a stronger ability to escape local minima [103] and usually converges to better solutions in practice.

5.1 Solving Optimization Problem

Though we have formulated our task as an optimization problem, it is still unclear how to solve it, because the problem looks complicated with its many constraints. In this subsection, we develop a new approach to solve it.

To solve (3), our basic strategy is not to update the variable $W^l$ by SGD directly. Instead, we only update $\theta_i^l$ and $\Delta W^l$, while the value of $W^l$ is inferred from these two groups of variables according to the constraint

$$W^{l}=\sum_{i=1}^{n}\theta_{i}^{l}W^{l}_{S,i}+\Delta W^{l}$$

at the beginning of each iteration. The latest estimate of $W^l$ then works as the bridge for updating $\theta_i^l$ and $\Delta W^l$.

To update $\theta_i^l$ and $\Delta W^l$, we first need to compute their gradients. According to the chain rule, the gradient of each $\theta_i^l$ can be derived as:

$$\frac{\partial\ell}{\partial\theta_{i}^{l}}=\frac{\partial\ell}{\partial W^{l}}\cdot\frac{\partial W^{l}}{\partial\theta_{i}^{l}}=\frac{\partial\ell}{\partial W^{l}}\cdot W^{l}_{S,i},$$

where $\cdot$ stands for the dot product of matrices, i.e., $A\cdot B=\sum_{i}\sum_{j}A_{ij}B_{ij}$. Therefore, we just need to compute the stochastic gradient of the current model $M=\{W^{1},\dots,W^{L}\}$ by back-propagation as usual, and then the derivative of $\theta_i^l$ follows trivially from the above equation. As for $\Delta W^l$, by the chain rule again, we can show that its gradient is exactly the same as the gradient of $W^l$.
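
In code, this gradient bookkeeping amounts to a few dot products. Below is a minimal NumPy sketch for a single layer, assuming the stochastic gradient grad_W of the loss with respect to $W^l$ has already been obtained by back-propagation (the function name is ours).

```python
import numpy as np

def soma_layer_gradients(grad_W, W_S_layer):
    """Gradients of theta_i^l and Delta W^l for one layer, given dl/dW^l.

    grad_W    : ndarray, stochastic gradient of the loss w.r.t. W^l (from back-propagation)
    W_S_layer : list of ndarrays, the l-th layer of each source model W_{S,i}^l
    """
    # dl/dtheta_i^l = <dl/dW^l, W_{S,i}^l>, the element-wise dot product of the two matrices
    grad_theta = np.array([np.sum(grad_W * W_i) for W_i in W_S_layer])
    # dl/dDelta W^l equals dl/dW^l by the chain rule
    grad_delta = grad_W
    return grad_theta, grad_delta

# One SGD step for this layer (eta is the learning rate):
# theta_l -= eta * grad_theta;  delta_l -= eta * grad_delta
```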

After conducting one step of SGD, we further need to do projection operations to ensure the other two constraints are still satisfied. The constraint

$$\theta_{i}^{l}\geq 0\quad\text{and}\quad\sum_{i=1}^{n}\theta_{i}^{l}=1, \qquad (4)$$

is the so-called simplex constraint, which is well studied in the optimization literature, and efficient methods for dealing with it are already known [104, 105]. Hence, we will not dive into the details of how this projection should be done. Instead, we directly use the projection method proposed in [105], and denote the resulting projection operator by $\Pi(\cdot)$. As for the constraint

$$\left\|\Delta W^{l}\right\|\leq\rho\left\|W^{l}\right\|, \qquad (5)$$

we just need to scale $\Delta W^l$ down once we find that it violates this constraint. Besides, it is trivial to show that the optimal scaling factor should be:

$$\gamma_{l}=\frac{\rho\|W^{l}\|}{\|\Delta W^{l}\|}.$$
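
For completeness, the following sketch shows one common sort-based Euclidean projection onto the simplex (a stand-in for the operator $\Pi(\cdot)$; the paper follows [105], whose exact procedure may differ) together with the rescaling of $\Delta W^l$ by $\gamma_l$ when constraint (5) is violated.

```python
import numpy as np

def project_to_simplex(theta):
    """Euclidean projection onto {x : x >= 0, sum(x) = 1} via the sort-based method."""
    u = np.sort(theta)[::-1]                      # sort in decreasing order
    css = np.cumsum(u)
    ks = np.arange(1, len(theta) + 1)
    k = ks[u + (1.0 - css) / ks > 0][-1]          # largest k with u_k - (css_k - 1)/k > 0
    tau = (css[k - 1] - 1.0) / k
    return np.maximum(theta - tau, 0.0)

def rescale_delta(delta, W, rho):
    """Enforce constraint (5): scale Delta W^l by gamma_l if its norm is too large."""
    norm_delta, norm_W = np.linalg.norm(delta), np.linalg.norm(W)
    if norm_delta > rho * norm_W and norm_delta > 0:
        delta = delta * (rho * norm_W / norm_delta)
    return delta

print(project_to_simplex(np.array([0.6, 0.5, -0.2])))   # -> [0.55 0.45 0.  ]
```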

Now we have completed all the building blocks of our algorithm. The full algorithm is summarized in Algorithm 2. Here $\eta>0$ is the step size (a.k.a. learning rate) hyperparameter for SGD. In the initialization stage, we simply set $\theta_i^l=1/n$ and $\Delta W^l=0$, which means we choose the model obtained by the simple average as the initial model. The motivation is the same as including the averaged model in the initial population of GMA: it provides faster convergence and guarantees producing a model no worse than the simple average baseline.

input : source models $M_{S,1},M_{S,2},\dots,M_{S,n}$
1 Initialize $\theta_i^l=1/n$ and $\Delta W^l=0$ for all $i$ and $l$
2 while not converged do
       // The following operations are done for all $i$ and $l$
3       Let $W^{l}=\sum_{i=1}^{n}\theta_{i}^{l}W_{S,i}^{l}+\Delta W^{l}$
4       Draw a batch of samples from the validation data, and compute $\frac{\partial\ell}{\partial W^{l}}$ by back-propagation
5       Update by SGD:
        $\theta_{i}^{l}=\theta_{i}^{l}-\eta\frac{\partial\ell}{\partial W^{l}}\cdot W_{S,i}^{l}$
        $\Delta W^{l}=\Delta W^{l}-\eta\frac{\partial\ell}{\partial W^{l}}$
6       Let $\theta^{l}=\Pi(\theta^{l})$ // $\Pi(\cdot)$ is the projection operator for (4)
7       if (5) does not hold then
8             Let $\Delta W^{l}=\Delta W^{l}\cdot\frac{\rho\|W^{l}\|}{\|\Delta W^{l}\|}$
output : $M_T$ with parameters $W^{l}_{T}=\sum_{i=1}^{n}\theta_{i}^{l}W_{S,i}^{l}+\Delta W^{l}$
Algorithm 2 SGD-Based Optimizational Merge Algorithm (SOMA)

GMA is slow because it works in a guess-and-trial fashion: it perturbs models completely at random and uses performance tests to determine whether a perturbation brings improvement or not. However, when dealing with a DNN, which has a complicated and delicate structure, most of the perturbations are unhelpful, so GMA wastes most of its time on useless experiments. In comparison, SOMA optimizes the model in a guided way: it always moves the model towards the negative gradient direction, which improves the model at every iteration with high probability.

6 Discussion on Privacy Issues

Privacy protection is a crucial issue in cooperative training over multiple parties. Each data curator desires a guarantee that the other participants cannot learn its private training data, neither via direct access nor by indirect inference. Though we do not have a special design for privacy protection in our framework, our methods, including GMA and SOMA, have innate advantages for avoiding privacy leakage during cooperative training:

  • First, it is obvious that our framework does not need to share private data, which avoids the direct exposure of private information.

  • Our methods aggregate the models in an unpredictable and complicated way. Many privacy-attacking methods (e.g., [106, 107]) assume the central server aggregates the clients’ information in a simple way, such as federated averaging or SGD, which just averages the clients’ models or gradients for updates. On the contrary, our merging strategies assign dynamic weights to different models and different layers when combining local models, or even mutate the parameters of the merged model. All these operations are invisible to all the participants except the central server. This nonlinear and black-box transformation is disastrous for many privacy attackers.

  • Our methods only require one round of training and merging. To the best of our knowledge, all the existing privacy-attacking methods for cooperative training either need to watch the models’ changes over different rounds [108, 107, 109], or inject a certain backdoor into the uploaded local model and observe the other participants’ reactions in the following rounds [107, 110]. Our one-round framework eliminates all these possibilities, making privacy attacks via these means impossible, even for the central server.

Of course, there exist other types of privacy attacks; e.g., it is possible to reverse-engineer the final model to recover some information about the training data [111]. To reduce this risk, we can add extra protection schemes such as differential privacy (DP) [112] to the local training stage. But as a work focusing on cooperative training, we will not discuss risks outside the training process in depth.

7 Source Model Valuation Based on the Shapley Value

A model’s performance on the test set is a widely used criterion for evaluating its quality. However, for the source models used in the merging paradigm, their test-set performance cannot represent their contribution to the entire training process. For example, one model may be trained on a dataset similar to other data we already have; even though this model may have a low WER, it is not helpful for improving our merged model. Therefore, a fairer way to evaluate a source model is to estimate how much excess benefit it can bring to the generated target model. In our solution, we adopt a concept from cooperative game theory, the Shapley value [113], to evaluate each source model for model merging.

We assume that $I=\{M_{S,1},\dots,M_{S,n}\}$ is the entire set of source models that participate in the merge process. The marginal contribution $\phi_i$ of model $M_{S,i}$ is defined as the difference between the performance of the target model generated with model $M_{S,i}$ and without $M_{S,i}$:

$$\phi_{i}=U(S\cup\{M_{S,i}\})-U(S) \qquad (6)$$

where $S\subseteq I\backslash\{M_{S,i}\}$, and $U(\cdot)$ represents the utility function. In our setting, $U(S)$ is defined as $1-\text{WER}$ of the model merged from $S$.

Therefore, the Shapley Value of model $M_{S,i}$ is defined as the average marginal contribution $\phi_i$ over all possible subsets $S$ consisting of the other models in $I$:

$$s_{i}=\sum_{S\subseteq I\backslash\{M_{S,i}\}}\frac{1}{n\binom{n-1}{|S|}}\left[U(S\cup\{M_{S,i}\})-U(S)\right] \qquad (7)$$
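
As a concrete reading of (7), the sketch below computes exact Shapley Values by enumerating all subsets; the toy utility values stand in for $1-\text{WER}$ of the SOMA-merged models and are purely hypothetical. This brute-force evaluation is only feasible for a handful of source models, which motivates the approximation that follows.

```python
from itertools import combinations
from math import comb

def exact_shapley(n, utility):
    """Exact Shapley Values per Eq. (7); utility maps a frozenset of model indices to U(S)."""
    values = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):                              # |S| ranges over 0, ..., n-1
            for S in combinations(others, size):
                weight = 1.0 / (n * comb(n - 1, size))     # 1 / (n * C(n-1, |S|))
                values[i] += weight * (utility(frozenset(S) | {i}) - utility(frozenset(S)))
    return values

# Toy utilities standing in for 1 - WER of the merged model (hypothetical numbers).
U = {frozenset(): 0.0, frozenset({0}): 0.70, frozenset({1}): 0.60, frozenset({0, 1}): 0.80}
print(exact_shapley(2, lambda S: U[S]))                    # -> [0.45, 0.35], sums to U(I)
```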

The complexity of the traditional SV calculation method is exponential and requires repeating a large number of experiments. We follow the method of Jia et al. [114], who adopt group testing to estimate the Shapley Value of each source model, significantly reducing the time required for valuation.

In the group testing approach, we first conduct a set of tests. In each test, we randomly select a subset of the source models $S\subset I=\{M_{S,1},\dots,M_{S,n}\}$, merge them into a target model using SOMA, and calculate the utility function $U(S)$. Next, we estimate the difference $\Delta U_{ij}$ between the Shapley Values of each pair of source models. Finally, thanks to the group rationality of the Shapley value, we can obtain $\hat{s}$, the estimated Shapley Value, by solving a feasibility problem. The specific steps are shown in Algorithm 3.

input : source models $I=\{M_{S,1},\dots,M_{S,n}\}$, the number of tests $T$
define : $U(S)=1-\mathrm{WER}(\mathrm{SOMA}(S))$ for any $S\subset I$
1 Let $Z=2\sum_{k=1}^{n-1}\frac{1}{k}$ and $q_{k}=\frac{1}{Z}\big(\frac{1}{k}+\frac{1}{n-k}\big)$ for $k=1,\dots,n-1$
2 Initialize $\beta_{ti}=0$ for $t=1,\dots,T$, $i=1,\dots,n$
3 for $t=1$ to $T$ do
4       Randomly sample $l_t$ from $\{1,\dots,n-1\}$ with probability $P(l_t=k)=q_k$
5       Sample a subset of source models $S\subset I$ with $|S|=l_t$
6       Let $\beta_{ti}=1$ for all $M_{S,i}\in S$
7       Compute $u_t=U(S)$
8 Compute $\Delta U_{ij}=\frac{Z}{T}\sum_{t=1}^{T}u_{t}(\beta_{ti}-\beta_{tj})$ for $i=1,\dots,n$, $j=1,\dots,n$ and $j\geq i$
9 Find $\hat{s}$ by solving the feasibility problem:
      $\sum_{i=1}^{n}\hat{s}_{i}=U(I)$, $\quad|(\hat{s}_{i}-\hat{s}_{j})-\Delta U_{ij}|\leq\frac{\epsilon}{2\sqrt{n}}$, $\quad\forall i,j\in\{1,\dots,n\}$
output : the Shapley Value estimation $\hat{s}$
Algorithm 3 SV Estimation of source models
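
The following Python sketch mirrors Algorithm 3, except that, for simplicity, it replaces the final feasibility problem with the least-squares-style closed form $\hat{s}_i=U(I)/n+\frac{1}{n}\sum_{j}\Delta U_{ij}$, which is consistent with group rationality; the utility callback and function name are our own illustrative choices.

```python
import random

def estimate_shapley_group_testing(n, utility, T, seed=0):
    """Group-testing estimate of the Shapley Values of n source models (illustrative)."""
    rng = random.Random(seed)
    Z = 2.0 * sum(1.0 / k for k in range(1, n))
    q = [(1.0 / k + 1.0 / (n - k)) / Z for k in range(1, n)]     # q_1, ..., q_{n-1}

    beta = [[0] * n for _ in range(T)]
    u = [0.0] * T
    for t in range(T):
        l_t = rng.choices(range(1, n), weights=q, k=1)[0]        # subset size ~ q_k
        S = rng.sample(range(n), l_t)                            # random subset of models
        for i in S:
            beta[t][i] = 1
        u[t] = utility(frozenset(S))                             # 1 - WER of the merged model

    # Delta U_{ij} estimates s_i - s_j
    dU = [[Z / T * sum(u[t] * (beta[t][i] - beta[t][j]) for t in range(T))
           for j in range(n)] for i in range(n)]

    # Closed-form surrogate for the feasibility step, consistent with group rationality.
    U_full = utility(frozenset(range(n)))
    return [U_full / n + sum(dU[i]) / n for i in range(n)]
```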

With a sufficient number of tests $T$, we can ensure that $\hat{s}$ is an $(\epsilon,\delta)$-approximation of the true Shapley value $s$, i.e., $P(\|\hat{s}-s\|\leq\epsilon)\geq 1-\delta$:

Theorem 1 (Theorem 3 of [114])

Algorithm 3 returns an $(\epsilon,\delta)$-approximation to the SV if the number of tests $T$ satisfies $T\geq 8\log\frac{n(n-1)}{2\delta}\Big/\Big((1-q_{tot}^{2})\,h\big(\frac{\epsilon}{Z\sqrt{n}(1-q^{2}_{tot})}\big)\Big)$, where $q_{tot}=\frac{n-2}{n}q_{1}+\sum_{k=2}^{n-1}q_{k}\left[1+\frac{2k(n-k)}{n(n-1)}\right]$ and $h(u)=(1+u)\log(1+u)-u$.

By simplifying the bound into its asymptotic form, this theorem implies that $T=\mathcal{O}\left(\frac{n}{\epsilon}(\log n)^{2}\log\frac{1}{\epsilon\delta}\right)$ tests are enough for an $(\epsilon,\delta)$-approximation. This is much more affordable than the naive computation of the SV, which needs $\mathcal{O}(2^{n})$ tests.

8 Experiments

8.1 Experimental Setup

In order to ensure the reproducibility of the experiments, we conduct all experiments on public datasets. Specifically, we collected five speech datasets from the OpenSLR website (http://www.openslr.org/resources.php): SLR18, SLR33, SLR38, SLR47 and SLR62. All of them contain Chinese speech recordings in wav format with a sampling rate of 16 kHz. Each dataset includes training, validation and test sets. For a dataset that does not provide such splits in advance, we randomly split it into training, validation and test sets with proportions 60%:20%:20%. The detailed statistics of all the datasets are presented in Table 1.

Table 1: Statistics of the datasets
Dataset | Name | Training (no. wav / duration, h) | Validation (no. wav / duration, h) | Test (no. wav / duration, h)
SLR18 | THCHS-30 | 7984 / 20.4 | 2657 / 6.7 | 2747 / 7.0
SLR33 | Aishell | 120418 / 151.2 | 14331 / 18.1 | 7176 / 10.0
SLR38 | Free ST Chinese Mandarin Corpus | 61698 / 65.9 | 20395 / 21.8 | 20507 / 22.0
SLR47 | Primewords Chinese Corpus Set 1 | 30366 / 59.6 | 10092 / 19.8 | 10212 / 20.1
SLR62 | aidatatang_200zh | 164905 / 139.9 | 24216 / 20.2 | 48144 / 40.2
Sum | | 385371 / 437.1 | 71691 / 86.7 | 88786 / 99.3

As a testbed, we develop a full-fledged ASR system with the open-source toolkit Kaldi [3]. Its built-in “Chain” model is used as the acoustic model of the ASR system. The DNN component of the “Chain” model is implemented by a Time Delay Neural Network (TDNN) [6], and the other components of the model, such as the HMM, are pre-trained. A backoff $n$-gram model with $n=3$ is used as the language model, which is trained with the SRILM toolkit [115]. The whole system is deployed on a machine running CentOS with a 72-core Intel Xeon CPU, an NVIDIA Tesla K80 GPU and 314 GB of memory.

Five TDNNs (i.e., the DNN components of the corresponding “Chain” models) with the same initialization are trained on the training sets of the five datasets, respectively. These five TDNNs play the role of the source models. Considering the slow speed of GMA, we sample a subset covering 10% of the combined validation data of the five datasets as the validation set. Both model merging methods work on this set to optimize the target model by default. All the reported WERs in our experiments are computed on the test set unless otherwise specified.

For the hyperparameters in GMA, we set $K=15$, $p_{1}=0.5$ and $p_{2}=p_{3}=0.1$. The parameter $\rho$ in SOMA is chosen to be $0.1$.

8.2 Effectiveness Evaluation

Figure 1: WERs of the source models and target models generated by different methods.

We first compare the performance of the models generated by different methods: direct average, GMA and SOMA. Both GMA and SOMA have been run long enough to converge. In detail, SOMA has been run for 10 iterations, where one iteration is defined as one pass through the sampled validation data. GMA has been run for 100 generations with a population size $K=15$, which results in more than 1000 iterations, since each generated model needs to be evaluated on the validation set once.

The WERs of all the models, including the source models, are reported in Figure 1. Due to the different sizes and qualities of the training data, the performance of the source models varies a lot, which brings challenges to model merging. Though the direct average achieves a WER lower than most of the source models, it is still slightly worse than the best one. In comparison, GMA and SOMA clearly outperform all of them. Between the two, GMA works better than SOMA, but their difference ($0.2\%$) is quite limited.

8.3 Efficiency Evaluation

Figure 2: Convergence curves of GMA and SOMA.
Figure 3: WERs of the target models optimized on different sizes of the validation set. A log scale is adopted for the $x$-axis.

Though GMA and SOMA have close performance in terms of the quality of the generated models, they differ substantially in efficiency. To demonstrate this, we draw the convergence curves of the first 1000 iterations in Figure 2, where the WER of the best model generated so far is reported at each iteration. It can be observed that SOMA converges quickly: the WER is greatly reduced after just one iteration, and it converges in fewer than 10 iterations. GMA, however, fails to generate a better model than the direct average within the first 30 iterations and improves the models very slowly; even after 1000 iterations, it still falls behind SOMA. GMA eventually becomes better than SOMA, but only after a very long optimization time. Therefore, we can conclude that GMA is impractical on large datasets due to its poor efficiency.

By taking both effectiveness and efficiency into consideration, we recommend using SOMA rather than GMA in practice.

8.4 Variation of Validation Data Size

Considering that both GMA and SOMA require an extra set of validation data for model merging compared to the direct average, in this part we vary the size of the validation set and observe how the two methods behave. We randomly sample subsets of different sizes from the complete validation set with proportions $\{1\%,2\%,5\%,10\%,20\%,50\%,100\%\}$, run GMA and SOMA on them, and finally evaluate the WERs on the test set. Due to the slow speed of GMA, it is not tested on the validation subsets larger than $10\%$. The results are reported in Figure 3.

Overall, we can observe that a larger validation set yields better target models for both GMA and SOMA. However, if the data size is already large enough, say 10% of the whole validation set, further increasing the data volume does not bring much additional benefit for SOMA. Moreover, we can see that both GMA and SOMA can beat the direct average with a very limited amount of validation data, such as the 1% proportion. Note that 1% of the validation data corresponds to only about 0.2% of the training data, yet it already helps reduce the test WER by more than one percentage point compared to the simple average. This fact provides a strong reason for using our methods.

8.5 Valuation of Source Models

Figure 4: Shapley Value and Average Weight of Source Models

We conduct experiments with Algorithm 3 to estimate the Shapley Values (i.e., how much benefit each source model can bring to the target model), which aims to fairly evaluate each source model fine-tuned on a different subset of data. We carried out $T=10$ group tests to estimate the Shapley Values.

Figure 4 depicts the Shapley Value of each source model. We also report the average weight of each source model $M_{S,i}$ in the final target model, i.e., $\frac{1}{L}\sum_{l=1}^{L}\theta_{i}^{l}$, which reveals the effect of these source models on the generated target model.

As the results show, each source model’s Shapley Value is generally positively correlated with its average weight in the target model. However, this is not always the case. The Shapley Value of the source model fine-tuned on SLR33 is slightly larger than that of the model fine-tuned on SLR62, but its average weight in the generated target model is smaller. The benefit that each source model brings to the target model does not entirely depend on its performance on the test set, as shown in Figure 1. Although some source models have higher WERs, such as the model fine-tuned on SLR33, they can play a more significant role when merged with others.

9 Conclusion

In this paper, we propose a novel optimization paradigm for optimizing the acoustic model in ASR in the case where data come from different sources. In our framework, we first train multiple acoustic models independently on distinct parts of the data. Then, instead of applying the simplistic averaging scheme for merging acoustic models, we propose two novel algorithms with significantly better performance: GMA and SOMA, where the former is based on a genetic algorithm and the latter adopts SGD with a novel mathematical formulation. Experiments show that both of them can greatly improve the acoustic model quality with a very limited amount of validation data. In particular, SOMA demonstrates superior efficiency and can be easily applied to large-scale speech data. Besides, we propose to use the Shapley Value to measure the contributions of different data curators, and empirically study the relationship between the Shapley Values and the merging weights.

References

  • [1] C. Tan, D. Jiang, J. Peng, X. Wu, Q. Xu, and Q. Yang, “A de novo divide-and-merge paradigm for acoustic model optimization in automatic speech recognition.” in IJCAI, 2020, pp. 3709–3715.
  • [2] H. B. McMahan, E. Moore, D. Ramage, S. Hampson et al., “Communication-efficient learning of deep networks from decentralized data,” arXiv preprint arXiv:1602.05629, 2016.
  • [3] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz et al., “The kaldi speech recognition toolkit,” in IEEE 2011 workshop on automatic speech recognition and understanding, no. CONF.   IEEE Signal Processing Society, 2011.
  • [4] D. Povey, X. Zhang, and S. Khudanpur, “Parallel training of deep neural networks with natural gradient and parameter averaging,” arXiv preprint arXiv:1410.7455, 2014.
  • [5] M. Gales and S. Young, “The application of hidden markov models in speech recognition,” 2008.
  • [6] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang, “Phoneme recognition using time-delay neural networks,” IEEE transactions on acoustics, speech, and signal processing, vol. 37, no. 3, pp. 328–339, 1989.
  • [7] D. Jiang, C. Zhang, and Y. Song, Probabilistic topic models: Foundation and application.   Springer, 2023.
  • [8] Y. Song, D. Jiang, X. Zhao, Q. Xu, R. C.-W. Wong, L. Fan, and Q. Yang, “L2rs: A learning-to-rescore mechanism for hybrid speech recognition,” in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 1157–1166.
  • [9] H. Purwins, B. Li, T. Virtanen, J. Schlüter, S.-Y. Chang, and T. Sainath, “Deep learning for audio signal processing,” IEEE Journal of Selected Topics in Signal Processing, vol. 13, no. 2, pp. 206–219, 2019.
  • [10] G. Hinton, L. Deng, D. Yu, G. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, B. Kingsbury et al., “Deep neural networks for acoustic modeling in speech recognition,” IEEE Signal processing magazine, vol. 29, 2012.
  • [11] S. Latif, M. Usman, R. Rana, and J. Qadir, “Phonocardiographic sensing using deep learning for abnormal heartbeat detection,” IEEE Sensors Journal, vol. 18, no. 22, pp. 9393–9400, 2018.
  • [12] Y. Chen, D. Jiang, C. Tan, Y. Song, C. Zhang, and L. Chen, “Neural moderation of asmr erotica content in social networks,” IEEE Transactions on Knowledge and Data Engineering, vol. 36, no. 1, pp. 275–280, 2023.
  • [13] M. Hong, C. J. Zhang, L. Yang, Y. Song, and D. Jiang, “Infantcrynet: A data-driven framework for intelligent analysis of infant cries,” arXiv preprint arXiv:2409.19689, 2024.
  • [14] Y. Song, R. Lian, Y. Chen, D. Jiang, X. Zhao, C. Tan, Q. Xu, and R. C.-W. Wong, “A platform for deploying the tfe ecosystem of automatic speech recognition,” in Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 6952–6954.
  • [15] X. Wu, D. Jiang, Y. Song, Q. Xu, and Q. Yang, “Enhance mono-modal sentiment classification with federated cross-modal transfer.” IEEE Data Eng. Bull., vol. 46, no. 1, pp. 158–169, 2023.
  • [16] C. Chen, D. Jiang, J. Peng, R. Lian, Y. Li, C. Zhang, L. Chen, and L. Fan, “Scalable identity-oriented speech retrieval,” IEEE Transactions on Knowledge and Data Engineering, vol. 35, no. 3, pp. 3261–3265, 2021.
  • [17] D. Jiang, C. Tan, J. Peng, C. Chen, X. Wu, W. Zhao, Y. Song, Y. Tong, C. Liu, Q. Xu et al., “A gdpr-compliant ecosystem for speech recognition with transfer, federated, and evolutionary learning,” ACM Transactions on Intelligent Systems and Technology (TIST), vol. 12, no. 3, pp. 1–19, 2021.
  • [18] M. Längkvist, L. Karlsson, and A. Loutfi, “A review of unsupervised feature learning and deep learning for time-series modeling,” Pattern Recognition Letters, vol. 42, pp. 11–24, 2014.
  • [19] A. Van Oord, N. Kalchbrenner, and K. Kavukcuoglu, “Pixel recurrent neural networks,” in International Conference on Machine Learning.   PMLR, 2016, pp. 1747–1756.
  • [20] DeepMind, “Wavenet: A generative model for raw audio,” blog post, retrieved on Sep. 12, 2016.
  • [21] B. Bollepalli, L. Juvela, and P. Alku, “Generative adversarial network-based glottal waveform model for statistical parametric speech synthesis,” arXiv preprint arXiv:1903.05955, 2019.
  • [22] W.-N. Hsu, Y. Zhang, and J. Glass, “Unsupervised learning of disentangled and interpretable representations from sequential data,” arXiv preprint arXiv:1709.07902, 2017.
  • [23] S. Ghorbani, S. Khorram, and J. H. Hansen, “Domain expansion in dnn-based acoustic models for robust speech recognition,” in 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).   IEEE, 2019, pp. 107–113.
  • [24] R. Baeza-Yates, D. Jiang, F. Silvestri, and B. Harrison, “Predicting the next app that you are going to use,” in Proceedings of the eighth ACM international conference on web search and data mining, 2015, pp. 285–294.
  • [25] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin, “A neural probabilistic language model,” Journal of machine learning research, vol. 3, no. Feb, pp. 1137–1155, 2003.
  • [26] T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of word representations in vector space,” arXiv preprint arXiv:1301.3781, 2013.
  • [27] J. Pennington, R. Socher, and C. D. Manning, “Glove: Global vectors for word representation,” in Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), 2014, pp. 1532–1543.
  • [28] S. Wang, M. Huang, and Z. Deng, “Densely connected cnn with multi-scale feature attention for text classification.” in IJCAI, 2018, pp. 4468–4474.
  • [29] J. Lu, R. Lian, D. Jiang, Y. Song, Z. Su, V. J. Wei, and L. Yang, “Pretraining enhanced rnn transducer,” CAAI Artificial Intelligence Research, vol. 3, 2024.
  • [30] X. Wu, R. Lian, D. Jiang, Y. Song, W. Zhao, Q. Xu, and Q. Yang, “A phonetic-semantic pre-training model for robust speech recognition,” CAAI Artificial Intelligence Research, vol. 1, no. 1, 2022.
  • [31] X. Zhou, L. Li, D. Dong, Y. Liu, Y. Chen, W. X. Zhao, D. Yu, and H. Wu, “Multi-turn response selection for chatbots with deep attention matching network,” in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018, pp. 1118–1127.
  • [32] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.
  • [33] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, “Language models are unsupervised multitask learners,” OpenAI blog, vol. 1, no. 8, p. 9, 2019.
  • [34] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut, “Albert: A lite bert for self-supervised learning of language representations,” arXiv preprint arXiv:1909.11942, 2019.
  • [35] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, “Roberta: A robustly optimized bert pretraining approach,” arXiv preprint arXiv:1907.11692, 2019.
  • [36] J. Wei, Q. Liu, Y. Guo, and X. Jiang, “Training multilingual pre-trained language model with byte-level subwords,” arXiv preprint arXiv:2101.09469, 2021.
  • [37] J. Wei, X. Ren, X. Li, W. Huang, Y. Liao, Y. Wang, J. Lin, X. Jiang, X. Chen, and Q. Liu, “Nezha: Neural contextualized representation for chinese language understanding,” arXiv preprint arXiv:1909.00204, 2019.
  • [38] Y. Li, D. Jiang, R. Lian, X. Wu, C. Tan, Y. Xu, and Z. Su, “Heterogeneous latent topic discovery for semantic text mining,” IEEE Transactions on Knowledge and Data Engineering, vol. 35, no. 1, pp. 533–544, 2021.
  • [39] D. Jiang, Y. Tong, Y. Song, X. Wu, W. Zhao, J. Peng, R. Lian, Q. Xu, and Q. Yang, “Industrial federated topic modeling,” ACM Transactions on Intelligent Systems and Technology (TIST), vol. 12, no. 1, pp. 1–22, 2021.
  • [40] X. Zhou, C. Tan, D. Jiang, B. Zhang, S. Li, Y. Xu, Q. Xu, and S. Gao, “Memetic federated learning for biomedical natural language processing,” in Natural Language Processing and Chinese Computing: 10th CCF International Conference, NLPCC 2021, Qingdao, China, October 13–17, 2021, Proceedings, Part II 10.   Springer, 2021, pp. 43–55.
  • [41] D. Jiang, Y. Song, Y. Tong, X. Wu, W. Zhao, Q. Xu, and Q. Yang, “Federated topic modeling,” in Proceedings of the 28th ACM international conference on information and knowledge management, 2019, pp. 1071–1080.
  • [42] Y. Song, Y. Tong, S. Bao, D. Jiang, H. Wu, and R. C.-W. Wong, “Topicocean: An ever-increasing topic model with meta-learning,” in 2020 IEEE International Conference on Data Mining (ICDM).   IEEE, 2020, pp. 1262–1267.
  • [43] M. Hong, Y. Song, D. Jiang, L. Wang, Z. Guo, and C. J. Zhang, “Expanding chatbot knowledge in customer service: Context-aware similar question generation using large language models,” arXiv preprint arXiv:2410.12444, 2024.
  • [44] D. Jiang, L. Shi, R. Lian, and H. Wu, “Latent topic embedding,” in Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, 2016, pp. 2689–2698.
  • [45] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, “Deep contextualized word representations,” arXiv preprint arXiv:1802.05365, 2018.
  • [46] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” arXiv preprint arXiv:1706.03762, 2017.
  • [47] J. Zhang, P. Zhang, B. Kong, J. Wei, and X. Jiang, “Continuous self-attention models with neural ode networks,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 16, 2021, pp. 14 393–14 401.
  • [48] S. Zhang, P. Zhang, X. Ma, J. Wei, N. Wang, and Q. Liu, “Tensorcoder: Dimension-wise attention via tensor representation for natural language modeling,” arXiv preprint arXiv:2008.01547, 2020.
  • [49] N. Wang, G. Gan, P. Zhang, S. Zhang, J. Wei, Q. Liu, and X. Jiang, “Clusterformer: Neural clustering attention for efficient and effective transformer,” in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022, pp. 2390–2402.
  • [50] S. Li, P. Zhang, G. Gan, X. Lv, B. Wang, J. Wei, and X. Jiang, “Hypoformer: Hybrid decomposition transformer for edge-friendly neural machine translation,” in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022, pp. 7056–7068.
  • [51] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, “Improving language understanding by generative pre-training,” 2018.
  • [52] C. Wang, M. Li, and A. J. Smola, “Language models with transformers,” CoRR, vol. abs/1904.09408, 2019. [Online]. Available: http://arxiv.org/abs/1904.09408
  • [53] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” arXiv preprint arXiv:1409.3215, 2014.
  • [54] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” arXiv preprint arXiv:1409.0473, 2014.
  • [55] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey et al., “Google’s neural machine translation system: Bridging the gap between human and machine translation,” arXiv preprint arXiv:1609.08144, 2016.
  • [56] A. Tjandra, S. Sakti, and S. Nakamura, “Sequence-to-sequence asr optimization via reinforcement learning,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2018, pp. 5829–5833.
  • [57] J. Li, V. Lavrukhin, B. Ginsburg, R. Leary, O. Kuchaiev, J. M. Cohen, H. Nguyen, and R. T. Gadde, “Jasper: An end-to-end convolutional neural acoustic model,” arXiv preprint arXiv:1904.03288, 2019.
  • [58] N.-Q. Pham, T.-S. Nguyen, J. Niehues, M. Müller, S. Stüker, and A. Waibel, “Very deep self-attention networks for end-to-end speech recognition,” arXiv preprint arXiv:1904.13377, 2019.
  • [59] S. Sun, Z. Cao, H. Zhu, and J. Zhao, “A survey of optimization methods from a machine learning perspective,” IEEE transactions on cybernetics, vol. 50, no. 8, pp. 3668–3681, 2019.
  • [60] H. Robbins and S. Monro, “A stochastic approximation method,” The annals of mathematical statistics, pp. 400–407, 1951.
  • [61] P. Jain, S. Kakade, R. Kidambi, P. Netrapalli, and A. Sidford, “Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification,” Journal of Machine Learning Research, vol. 18, 2018.
  • [62] D. Yu and L. Deng, Automatic Speech Recognition.   Springer, 2016.
  • [63] J. Konečnỳ, H. B. McMahan, D. Ramage, and P. Richtárik, “Federated optimization: Distributed machine learning for on-device intelligence,” arXiv preprint arXiv:1610.02527, 2016.
  • [64] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [65] S. J. Reddi, A. Hefny, S. Sra, B. Poczos, and A. Smola, “Stochastic variance reduction for nonconvex optimization,” in International conference on machine learning.   PMLR, 2016, pp. 314–323.
  • [66] R. Babanezhad, M. O. Ahmed, A. Virani, M. Schmidt, J. Konečnỳ, and S. Sallinen, “Stop wasting my gradients: Practical svrg,” arXiv preprint arXiv:1511.01942, 2015.
  • [67] R. Johnson and T. Zhang, “Accelerating stochastic gradient descent using predictive variance reduction,” Advances in neural information processing systems, vol. 26, pp. 315–323, 2013.
  • [68] J. D. Lee, Q. Lin, T. Ma, and T. Yang, “Distributed stochastic variance reduced gradient methods and a lower bound for communication complexity,” arXiv preprint arXiv:1507.07595, 2015.
  • [69] ——, “Distributed stochastic variance reduced gradient methods by sampling extra data with replacement,” The Journal of Machine Learning Research, vol. 18, no. 1, pp. 4404–4446, 2017.
  • [70] E.-G. Talbi, “Optimization of deep neural networks: a survey and unified taxonomy,” 2020.
  • [71] X. Cui and M. Picheny, “Acoustic model optimization based on evolutionary stochastic gradient descent with anchors for automatic speech recognition,” arXiv preprint arXiv:1907.04882, 2019.
  • [72] S. Zhang, C. Zhang, Z. You, R. Zheng, and B. Xu, “Asynchronous stochastic gradient descent for dnn training,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.   IEEE, 2013, pp. 6660–6663.
  • [73] C. De Sa, C. Zhang, K. Olukotun, and C. Ré, “Taming the wild: A unified analysis of hogwild!-style algorithms,” Advances in neural information processing systems, vol. 28, p. 2656, 2015.
  • [74] J. Dean, G. S. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. Ranzato, A. Senior, P. Tucker et al., “Large scale distributed deep networks,” 2012.
  • [75] T. Chilimbi, Y. Suzue, J. Apacible, and K. Kalyanaraman, “Project adam: Building an efficient and scalable deep learning training system,” in 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), 2014, pp. 571–582.
  • [76] K. Kumar and Y. Gong, “Static and dynamic state predictions for acoustic model combination,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2019, pp. 2782–2786.
  • [77] H. Meinedo and J. P. Neto, “Combination of acoustic models in continuous speech recognition hybrid systems,” in Sixth International Conference on Spoken Language Processing, 2000.
  • [78] W. Xiong, L. Wu, F. Alleva, J. Droppo, X. Huang, and A. Stolcke, “The microsoft 2017 conversational speech recognition system,” in 2018 IEEE international conference on acoustics, speech and signal processing (ICASSP).   IEEE, 2018, pp. 5934–5938.
  • [79] Z.-H. Zhou, Ensemble methods: foundations and algorithms.   Chapman and Hall/CRC, 2012.
  • [80] J. Huang, R. Talbi, Z. Zhao, S. Boucchenak, L. Y. Chen, and S. Roos, “An exploratory analysis on users’ contributions in federated learning,” arXiv preprint arXiv:2011.06830, 2020.
  • [81] J. Weng, J. Weng, H. Huang, C. Cai, and C. Wang, “Fedserving: A federated prediction serving framework based on incentive mechanism,” arXiv preprint arXiv:2012.10566, 2020.
  • [82] T. Nishio, R. Shinkuma, and N. B. Mandayam, “Estimation of individual device contributions for incentivizing federated learning,” arXiv preprint arXiv:2009.09371, 2020.
  • [83] C. Hasan, “Incentive mechanism design for federated learning: Hedonic game approach,” arXiv preprint arXiv:2101.09673, 2021.
  • [84] S. R. Pandey, N. H. Tran, M. Bennis, Y. K. Tun, Z. Han, and C. S. Hong, “Incentivize to build: A crowdsourcing framework for federated learning,” in 2019 IEEE Global Communications Conference (GLOBECOM).   IEEE, 2019, pp. 1–6.
  • [85] R. Hu and Y. Gong, “Trading data for learning: Incentive mechanism for on-device federated learning,” arXiv preprint arXiv:2009.05604, 2020.
  • [86] Y. Sarikaya and O. Ercetin, “Motivating workers in federated learning: A stackelberg game perspective,” IEEE Networking Letters, vol. 2, no. 1, pp. 23–27, 2019.
  • [87] L. U. Khan, S. R. Pandey, N. H. Tran, W. Saad, Z. Han, M. N. Nguyen, and C. S. Hong, “Federated learning for edge networks: Resource optimization and incentive mechanism,” IEEE Communications Magazine, vol. 58, no. 10, pp. 88–93, 2020.
  • [88] J. Kang, Z. Xiong, D. Niyato, H. Yu, Y.-C. Liang, and D. I. Kim, “Incentive design for efficient federated learning in mobile networks: A contract theory approach,” in 2019 IEEE VTS Asia Pacific Wireless Communications Symposium (APWCS).   IEEE, 2019, pp. 1–5.
  • [89] N. Ding, Z. Fang, and J. Huang, “Incentive mechanism design for federated learning with multi-dimensional private information,” in 2020 18th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOPT).   IEEE, 2020, pp. 1–8.
  • [90] J. Kang, Z. Xiong, D. Niyato, S. Xie, and J. Zhang, “Incentive mechanism for reliable federated learning: A joint optimization approach to combining reputation and contract theory,” IEEE Internet of Things Journal, vol. 6, no. 6, pp. 10 700–10 714, 2019.
  • [91] W. Y. B. Lim, Z. Xiong, C. Miao, D. Niyato, Q. Yang, C. Leung, and H. V. Poor, “Hierarchical incentive mechanism design for federated machine learning in mobile networks,” IEEE Internet of Things Journal, vol. 7, no. 10, pp. 9575–9588, 2020.
  • [92] Y. Zhan, P. Li, Z. Qu, D. Zeng, and S. Guo, “A learning-based incentive mechanism for federated learning,” IEEE Internet of Things Journal, vol. 7, no. 7, pp. 6360–6368, 2020.
  • [93] M. Cong, H. Yu, X. Weng, J. Qu, Y. Liu, and S. M. Yiu, “A vcg-based fair incentive mechanism for federated learning,” arXiv preprint arXiv:2008.06680, 2020.
  • [94] M. Cong, H. Yu, X. Weng, and S. M. Yiu, “A game-theoretic framework for incentive mechanism design in federated learning,” in Federated Learning.   Springer, 2020, pp. 205–222.
  • [95] R. Zeng, S. Zhang, J. Wang, and X. Chu, “Fmore: An incentive scheme of multi-dimensional auction for federated learning in mec,” arXiv preprint arXiv:2002.09699, 2020.
  • [96] T. H. T. Le, N. H. Tran, Y. K. Tun, M. N. Nguyen, S. R. Pandey, Z. Han, and C. S. Hong, “An incentive mechanism for federated learning in wireless cellular network: An auction approach,” arXiv preprint arXiv:2009.10269, 2020.
  • [97] H. Yu, Z. Liu, Y. Liu, T. Chen, M. Cong, X. Weng, D. Niyato, and Q. Yang, “A sustainable incentive scheme for federated learning,” IEEE Intelligent Systems, vol. 35, no. 4, pp. 58–69, 2020.
  • [98] ——, “A fairness-aware incentive scheme for federated learning,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 393–399.
  • [99] G. Wang, C. X. Dang, and Z. Zhou, “Measure contribution of participants in federated learning,” in 2019 IEEE International Conference on Big Data (Big Data).   IEEE, 2019, pp. 2597–2604.
  • [100] T. Song, Y. Tong, and S. Wei, “Profit allocation for federated learning,” in 2019 IEEE International Conference on Big Data (Big Data).   IEEE, 2019, pp. 2577–2586.
  • [101] J. H. Holland et al., Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence.   MIT press, 1992.
  • [102] L. R. Bahl, P. F. Brown, P. V. De Souza, and R. L. Mercer, “Maximum mutual information estimation of hidden markov model parameters for speech recognition,” in Proc. ICASSP, vol. 86, 1986, pp. 49–52.
  • [103] B. Kleinberg, Y. Li, and Y. Yuan, “An alternative view: When does sgd escape local minima?” in International Conference on Machine Learning.   PMLR, 2018, pp. 2698–2707.
  • [104] Y. Chen and X. Ye, “Projection onto a simplex,” arXiv preprint arXiv:1101.6081, 2011.
  • [105] L. Condat, “Fast projection onto the simplex and the $\ell_{1}$ ball,” Mathematical Programming, vol. 158, no. 1-2, pp. 575–585, 2016.
  • [106] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov, “How to backdoor federated learning,” in International Conference on Artificial Intelligence and Statistics.   PMLR, 2020, pp. 2938–2948.
  • [107] L. Melis, C. Song, E. De Cristofaro, and V. Shmatikov, “Exploiting unintended feature leakage in collaborative learning,” in 2019 IEEE Symposium on Security and Privacy (SP).   IEEE, 2019, pp. 691–706.
  • [108] L. Zhu, Z. Liu, and S. Han, “Deep leakage from gradients,” Advances in Neural Information Processing Systems, vol. 32, pp. 14 774–14 784, 2019.
  • [109] Z. Wang, M. Song, Z. Zhang, Y. Song, Q. Wang, and H. Qi, “Beyond inferring class representatives: User-level privacy leakage from federated learning,” in IEEE INFOCOM 2019-IEEE Conference on Computer Communications.   IEEE, 2019, pp. 2512–2520.
  • [110] B. Hitaj, G. Ateniese, and F. Perez-Cruz, “Deep models under the gan: information leakage from collaborative deep learning,” in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017, pp. 603–618.
  • [111] G. Ateniese, L. V. Mancini, A. Spognardi, A. Villani, D. Vitali, and G. Felici, “Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers,” International Journal of Security and Networks, vol. 10, no. 3, pp. 137–150, 2015.
  • [112] C. Dwork, A. Roth et al., “The algorithmic foundations of differential privacy.” Found. Trends Theor. Comput. Sci., vol. 9, no. 3-4, pp. 211–407, 2014.
  • [113] E. Winter, “The shapley value,” Handbook of game theory with economic applications, vol. 3, pp. 2025–2054, 2002.
  • [114] R. Jia, D. Dao, B. Wang, F. A. Hubis, N. Hynes, N. M. Gürel, B. Li, C. Zhang, D. Song, and C. J. Spanos, “Towards efficient data valuation based on the shapley value,” in The 22nd International Conference on Artificial Intelligence and Statistics.   PMLR, 2019, pp. 1167–1176.
  • [115] A. Stolcke, “Srilm-an extensible language modeling toolkit,” in Seventh international conference on spoken language processing, 2002.