QI-DPFL: Quality-Aware and Incentive-Boosted Federated Learning with Differential Privacy
*Corresponding Author.
This work was supported by National Natural Science Foundation of China under Grant No. 62206320.
Abstract
Federated Learning (FL) has increasingly been recognized as an innovative and secure distributed model training paradigm, aiming to coordinate multiple edge clients to collaboratively train a shared model without uploading their private datasets. The challenge of encouraging mobile edge devices to participate zealously in FL model training procedures, while mitigating the privacy leakage risks during wireless transmission, remains comparatively unexplored so far. In this paper, we propose a novel approach, named QI-DPFL (Quality-Aware and Incentive-Boosted Federated Learning with Differential Privacy), to address the aforementioned intractable issue. To select clients with high-quality datasets, we first propose a quality-aware client selection mechanism based on the Earth Mover’s Distance (EMD) metric. Furthermore, to attract high-quality data contributors, we design an incentive-boosted mechanism that constructs the interactions between the central server and the selected clients as a two-stage Stackelberg game, where the central server designs the time-dependent reward to minimize its cost by considering the trade-off between accuracy loss and total reward allocated, and each selected client decides the privacy budget to maximize its utility. The Nash Equilibrium of the Stackelberg game is derived to find the optimal solution in each global iteration. The extensive experimental results on different real-world datasets demonstrate the effectiveness of our proposed FL framework, by realizing the goal of privacy protection and incentive compatibility.
Index Terms:
Federated learning, Stackelberg game, differential privacy, client selection mechanism.
I Introduction
In the era of rapid advancements in science and technology, an unprecedented volume of data has been generated by edge devices. It is anticipated that the data volume in human society will experience geometric growth soon. Concurrently, the surge in private data is accompanied by escalating concerns over data privacy and security, drawing considerable focus from both academic and industrial sectors. Notably, the enactment of stringent data privacy regulations such as GDPR [1], poses a formidable challenge in accessing and utilizing high-quality private data for training artificial intelligence models. In addition, the huge communication costs associated with data transmission cannot be overlooked. These challenges pave the way for the emergence of innovative machine-learning technologies that mitigate the risk of privacy disclosure [2].
Federated learning emerges as a compelling distributed machine-learning paradigm, offering multiple benefits including privacy preservation and communication efficiency [3]. However, the widespread implementation of efficient FL systems still encounters several challenges that warrant further investigation [4]. Most research concentrates on enhancing FL model performance and assumes that FL models are adequately safe, whereas findings from [5] indicate potential risks of significant privacy breaches in gradient propagation schemes. Differential privacy (DP) [6] is a prevalent method for safeguarding data privacy. To alleviate the adverse effects of the strong noise injected while enhancing the protection level, advancements such as ρ-zCDP [7] and Rényi DP [8] have been proposed. Recent studies have integrated DP methods with FL, striving to strike a harmonious equilibrium between model performance and privacy preservation [9, 10]. However, most differential privacy-based FL approaches rely on the standard (ε, δ)-DP mechanism, which is susceptible to the Catastrophe Mechanism [11].
Additionally, an idealized assumption in current research posits that mobile devices will participate in FL model training unconditionally once invited. This notion often falls short in practical scenarios, as engaging in model training entails significant consumption of computational and communication resources; meanwhile, participants need to be wary of the potential risk of information leakage [12]. Without a well-designed economic incentive mechanism, egocentric mobile devices are likely to be reluctant to participate [13]. Recently, incentive mechanism-based federated network optimization and FL have gradually gained extensive attention. The works [14, 15] focus on modeling the interactions between clients and the central server as a Stackelberg game. Besides, a surge of auction-based FL algorithms has emerged [2, 9], and contract theory-based FL models have also been proposed [16, 13]. Nevertheless, most of the aforementioned works on incentive mechanism design overlook the security assurance during parameter transmission between the central server and edge nodes.
To incentivize the participation of mobile devices with high-quality data and eliminate the privacy threats associated with gradient disclosure, we propose a quality-aware and incentive-boosted federated learning framework based on the ρ-zero-concentrated differential privacy (ρ-zCDP) technique. We first design a client selection mechanism grounded in the Earth Mover’s Distance (EMD) metric, followed by a rigorous analysis of the differentially private federated learning (DPFL) framework, which introduces artificial Gaussian noise to obscure local model parameters and thereby addresses privacy concerns. Further, based on the DPFL framework, the interactions between the heterogeneous clients and the central server are modeled as a two-stage Stackelberg game, and the resulting framework is termed QI-DPFL. In Stage I, the central server devises a time-dependent reward for clients to jointly minimize the accuracy loss and the total reward. In Stage II, each selected client determines its optimal privacy budget according to the allocated reward to maximize its utility. The multi-fold contributions of our work are summarized as follows:
• Privacy preservation and incentive mechanism in FL: We propose a novel and efficient quality-aware and incentive-boosted federated learning framework based on the ρ-zCDP mechanism, named QI-DPFL. We first select the clients with high-quality data and then model the interactions between the central server and the selected clients as a two-stage Stackelberg game, which enables each participant to freely customize its privacy budget while achieving good model performance under the premise of protecting data privacy.
• Earth Mover’s Distance for client selection: We adopt the EMD metric in the client selection mechanism to screen geographically distributed participants with high-quality datasets and improve the training performance.
• Stackelberg Nash Equilibrium Analysis: By analyzing the interactions between the central server and the selected clients, we derive the optimal reward and the optimal privacy budget in Stage I and Stage II, respectively. Moreover, we demonstrate that the optimal strategy profile forms a Stackelberg Nash Equilibrium. Extensive experiments on different real-world datasets verify the effectiveness and security of our proposed differentially private federated learning framework.
II Preliminaries
II-A Standard Federated Learning Model
FL introduces an innovative decentralized machine learning paradigm in which a global model is collaboratively trained by numerous geographically distributed clients with locally collected data. Each client performs one or multiple epochs of mini-batch stochastic gradient descent (SGD) and subsequently transmits the updated local model to a central server for local model aggregation and global model update. Then, the central server dispatches the updated global parameter to all clients to trigger a fresh cycle of local training until the global model converges or a predefined maximum number of iterations is reached.
Suppose $N$ clients participate in FL training, and each client $i$ utilizes its local dataset $D_i$ with data size $|D_i|$ to contribute to model training. In the $t$-th global iteration, each client $i$ performs $E$ ($E \ge 1$) epochs of SGD training in parallel to update its local parameter:
$\omega_i^{t,e} = \omega_i^{t,e-1} - \eta_i \nabla F_i(\omega_i^{t,e-1}), \quad e = 1, \dots, E, \qquad (1)$
where $\eta_i$ is the learning rate of client $i$, $F_i$ is its local loss function, and we define the local model $\omega_i^t \triangleq \omega_i^{t,E}$. The central server averages the local models from participating clients to update the global model $\omega^t$:
$\omega^t = \sum_{i=1}^{N} \frac{|D_i|}{\sum_{j=1}^{N} |D_j|} \, \omega_i^t. \qquad (2)$
The goal of FL is to find the optimal global parameter $\omega^*$ that minimizes the global loss function $F(\omega)$:
$\omega^* = \arg\min_{\omega} F(\omega), \quad F(\omega) = \sum_{i=1}^{N} \frac{|D_i|}{\sum_{j=1}^{N} |D_j|} F_i(\omega). \qquad (3)$
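To make the training and aggregation rules of Eqs (1)–(3) concrete, the following minimal Python sketch shows one global iteration of standard federated averaging: each client runs $E$ local gradient-descent epochs from the current global model, and the server computes the data-size-weighted average. Function names and the full-batch gradient step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_update(w_global, data, labels, grad_fn, lr=0.01, epochs=1):
    """Eq. (1): E local epochs of gradient descent starting from the global model."""
    w = w_global.copy()
    for _ in range(epochs):
        w -= lr * grad_fn(w, data, labels)  # one (full-batch) gradient step
    return w

def fedavg_aggregate(local_models, data_sizes):
    """Eq. (2): data-size-weighted average of the clients' local models."""
    total = float(sum(data_sizes))
    return sum((n / total) * w for w, n in zip(local_models, data_sizes))
```

A full round calls local_update on every participating client and then fedavg_aggregate before broadcasting the new global model.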
For the sake of theoretical analysis in the later content, we employ the following commonly used assumptions [17, 18].
Assumption 1
[$L$-Lipschitz Smoothness] The local loss function $F_i$ is $L$-Lipschitz smooth for each participating client $i$ and any $\omega, \omega'$:
$\|\nabla F_i(\omega) - \nabla F_i(\omega')\| \le L \,\|\omega - \omega'\|. \qquad (4)$
Assumption 2
[$\mu$-Strong Convexity] The global loss function $F$ is $\mu$-strongly convex for any $\omega, \omega'$:
$F(\omega) \ge F(\omega') + \langle \nabla F(\omega'), \omega - \omega' \rangle + \frac{\mu}{2}\,\|\omega - \omega'\|^2. \qquad (5)$
II-B Differential Privacy Mechanism
To tackle attacks such as the gradient inversion attack [5], which may disclose the original training data without accessing the datasets, ρ-zero-concentrated differential privacy (ρ-zCDP) was proposed in [7]; it attains a tight composition bound and is more suitable for analyzing the end-to-end privacy loss of iterative algorithms [19].
Firstly, we define a metric to measure the privacy loss. Specifically, for a randomized mechanism $M$ with domain $\mathcal{D}$ and range $\mathcal{O}$, given any two adjacent datasets $D, D' \in \mathcal{D}$ of the same size that differ in only one sample, after observing an output $o \in \mathcal{O}$, the privacy loss is given as:
$Z = \ln \frac{\Pr[M(D) = o]}{\Pr[M(D') = o]}. \qquad (6)$
Then, the formal definition of ρ-zCDP is given as follows:
Definition 1
A randomized mechanism $M$ with domain $\mathcal{D}$ and range $\mathcal{O}$ satisfies $\rho$-zero-concentrated differential privacy ($\rho$-zCDP) if, for any $\alpha \in (1, \infty)$, we have:
$\mathbb{E}\!\left[e^{(\alpha - 1) Z}\right] \le e^{(\alpha - 1)\alpha \rho}. \qquad (7)$
Based on the Gaussian mechanism, given a query function $q$, the sensitivity of the query function is defined as $\Delta q = \max_{D, D'} \| q(D) - q(D') \|$ for any two adjacent datasets $D, D'$. Specifically, in the $t$-th global training iteration, by adding the artificial Gaussian noise $n_i^t \sim \mathcal{N}(0, (\sigma_i^t)^2 \mathbf{I})$, the transmitted parameter of client $i$ becomes:
$\tilde{\omega}_i^t = \omega_i^t + n_i^t, \qquad (8)$
where $\rho_i^t$-zCDP is satisfied with $(\sigma_i^t)^2 = \frac{(\Delta q_i^t)^2}{2 \rho_i^t}$ [6]. Based on the definition of the query function, we can easily derive the upper bound of the sensitivity as given in Corollary 1.
Corollary 1
In the $t$-th global iteration, by utilizing the Gaussian mechanism to perturb the transmitted parameter and implementing the $\rho_i^t$-zCDP mechanism for each participating client $i$, the sensitivity of the query function $q_i^t$ is bounded by $2C$, where $C$ is the clipping threshold.
Proof:
For client $i$ with any two adjacent datasets $D_i$ and $D_i'$, the sensitivity of the query function $q_i^t$ with inputs $D_i$ and $D_i'$ can be obtained as follows:
$\Delta q_i^t = \max_{D_i, D_i'} \left\| \omega_i^t(D_i) - \omega_i^t(D_i') \right\| \le \left\| \omega_i^t(D_i) \right\| + \left\| \omega_i^t(D_i') \right\| \le 2C, \qquad (9)$
where we assume that there exists a clipping threshold $C$ for the $i$-th client's local model in the $t$-th global iteration before adding artificial perturbation, i.e., $\|\omega_i^t\| \le C$. ∎
In the $t$-th global training iteration, based on Corollary 1 and noting that $(\sigma_i^t)^2 = \frac{(\Delta q_i^t)^2}{2 \rho_i^t}$, the variance of the Gaussian random noise of client $i$ can be derived as follows:
$(\sigma_i^t)^2 = \frac{(\Delta q_i^t)^2}{2 \rho_i^t} \le \frac{2 C^2}{\rho_i^t}. \qquad (10)$
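A minimal sketch of the ρ-zCDP Gaussian mechanism described by Eqs (8)–(10): the local model is clipped to the threshold $C$ assumed in Corollary 1, the resulting sensitivity bound gives the noise variance $(\Delta q)^2 / (2\rho)$, and Gaussian noise is added before upload. The function name and the flattened-parameter representation are illustrative assumptions.

```python
import numpy as np

def perturb_local_model(w_local, rho, clip=1.0, rng=None):
    """Gaussian mechanism for rho-zCDP (Eqs (8)-(10)).

    w_local : flattened local model parameters of one client
    rho     : privacy budget chosen by the client in this round
    clip    : clipping threshold C on the local model norm (Corollary 1)
    """
    rng = rng or np.random.default_rng()
    # Clip the local model so that ||w|| <= C, as assumed in Corollary 1.
    norm = np.linalg.norm(w_local)
    w_clipped = w_local * min(1.0, clip / (norm + 1e-12))
    # Sensitivity bound from Corollary 1 and noise std from Eq. (10):
    # sigma^2 = (Delta q)^2 / (2 * rho).
    sensitivity = 2.0 * clip
    sigma = sensitivity / np.sqrt(2.0 * rho)
    noise = rng.normal(0.0, sigma, size=np.shape(w_clipped))
    return w_clipped + noise  # Eq. (8): perturbed parameter sent to the server
```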
III System Model
We propose a two-layer federated learning framework based on the differential privacy technique, as shown in Fig. 1. In the client selection layer, the central server outsources FL tasks to all clients. The clients who are willing to participate submit their data distributions, containing only label-frequency information, to the central server, based on which the central server selects the participants with high-quality data according to the EMD metric. Then, in the second layer, the interactions between the central server and the selected clients are formulated as a two-stage Stackelberg game: the central server decides the optimal rewards for the selected clients to incentivize them to contribute data with a high privacy budget, and each selected client determines its privacy budget based on the rewards and adds artificial noise to its local parameters to avoid severe privacy leakage during gradient uploading.
III-A Quality-Aware Client Selection Mechanism
From the perspective of the central server, to minimize the cost while reaching the accuracy threshold, it is crucial to select clients with superior data quality using a metric that quantifies clients' potential contributions to the FL system. To reveal pertinent information about the local data while ensuring privacy preservation, we focus on a critical attribute of the local data, namely its distribution [20].
In FL, the data distribution varies owing to the distinct preferences of heterogeneous clients, leading to a non-independent and identically distributed (Non-IID) setting. The Non-IID characteristic of the data dominantly affects the model performance, such as the training accuracy [21]. As shown in [20], the accuracy attenuation is significantly affected by the weight divergence, which can be quantified by the Earth Mover’s Distance (EMD) metric. A larger EMD value indicates higher weight divergence and thus damages the global model quality.
In the first layer, we assume that there are a total of $N$ clients willing to participate in the FL model training process. Consider a $Y$-class classification task defined over a compact feature space $\mathcal{X}$ and a label space $\mathcal{Y} = \{1, \dots, Y\}$. Each data sample $(x, y)$ of client $i$ with $x \in \mathcal{X}$ and $y \in \mathcal{Y}$ follows the distribution $P_i$. Given the actual distribution $P_a$ for the whole population, we denote the EMD of client $i$ by $\mathrm{EMD}_i$, which can be calculated as follows:
$\mathrm{EMD}_i = \sum_{y \in \mathcal{Y}} \left\| P_i(y) - P_a(y) \right\|, \qquad (11)$
where the actual distribution $P_a$ is a reference distribution that can be public information or estimated from historical data. If the EMD value of client $i$ is larger than the pre-set threshold $\tau$ (i.e., $\mathrm{EMD}_i > \tau$), client $i$ is excluded from executing the FL task.
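A minimal sketch of the selection rule built on Eq. (11): each client reports only its label frequencies, the server computes the EMD against the reference distribution, and clients whose EMD exceeds the threshold are dropped. The helper names and the uniform reference in the usage example are illustrative assumptions.

```python
import numpy as np

def emd(client_label_freq, reference_freq):
    """Eq. (11): EMD between a client's label distribution and the reference."""
    p = np.asarray(client_label_freq, dtype=float)
    q = np.asarray(reference_freq, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return np.abs(p - q).sum()

def select_clients(label_freqs, reference_freq, threshold):
    """Keep the indices of clients whose EMD does not exceed the threshold."""
    return [i for i, f in enumerate(label_freqs)
            if emd(f, reference_freq) <= threshold]

# Illustrative usage: 3 clients, 10 classes, uniform reference distribution.
ref = np.ones(10) / 10
freqs = [np.ones(10) / 10,                 # perfectly balanced client
         np.array([0.5] + [0.5 / 9] * 9),  # moderately skewed client
         np.eye(10)[0]]                    # single-class client
print(select_clients(freqs, ref, threshold=0.8))  # -> [0, 1]
```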
III-B Incentive Mechanism Design with Stackelberg Game
Suppose that $K$ clients are selected by the central server. The interactions between the central server and the selected clients are modeled as a two-stage Stackelberg game. Specifically, at Stage I, the central server, which acts as the leader, decides the optimal payment $R^t$ in the $t$-th global iteration to minimize its cost $\mathcal{C}^t$. Then, at Stage II, based on the reward allocated by the central server, each selected client $i$, which acts as the follower, maximizes its utility function $U_i^t$ by determining the optimal privacy budget $\rho_i^t$.
III-B1 Central Server’s Cost (Stage I)
Before introducing the cost function of the central server, we first discuss how the privacy budget of each client affects the accuracy loss of the global model. From Eq (8), we can derive the global model with Gaussian random noise as follows:
$\tilde{\omega}^t = \sum_{i=1}^{K} \frac{|D_i|}{\sum_{j=1}^{K} |D_j|} \left( \omega_i^t + n_i^t \right). \qquad (12)$
Inspired by [22, 14], we assume that the global loss function attains an upper bound. Further, note that the Gaussian random noise has zero mean and $\mathbb{E}\|n_i^t\|^2 = d\,(\sigma_i^t)^2$, where $d$ represents the dimension of the input vector and the variance $(\sigma_i^t)^2$ is determined by client $i$'s privacy budget $\rho_i^t$ as shown in Eq (10). Thus, the corresponding upper bound can be derived as:
(13)
Suppose that the global loss function $F$ satisfies $L$-Lipschitz Smoothness (Assumption 1) and $\mu$-Strong Convexity (Assumption 2) and attains its minimum at $\omega^*$. Since the above upper bound varies with time, we define it per global iteration. Denoting the learning rate by $\eta$, the accuracy loss can be expressed as:
(14)
Denote the reward vector by $\boldsymbol{R} = \{R^1, \dots, R^T\}$ and the privacy budget vector by $\boldsymbol{\rho}^t = \{\rho_1^t, \dots, \rho_K^t\}$. Then, the central server's cost function can be expressed as the summation of the accuracy loss and the total reward:
(15)
where the discount factor $\gamma \in (0, 1)$ measures the decrement of the value of the reward over time.
III-B2 Client’s Utility (Stage II)
In the $t$-th global iteration, given the central server's reward $R^t$, the reward allocated to client $i$ is denoted by $R_i^t$. The utility of client $i$ in the $t$-th global iteration is defined as the difference between the reward received from the central server and the training cost $c_i^t$, i.e.,
(16) | |||
(17) |
where the weighting coefficients are positive. The training cost $c_i^t$ consists of the privacy cost, the local data cost, the computation cost, and the communication cost. Among all components of the training cost of client $i$, the privacy cost is closely related to the privacy budget $\rho_i^t$: a larger privacy budget signifies more precise data, yet it also corresponds to an increased vulnerability to privacy breaches. Specifically, inspired by [19], we denote the privacy cost of client $i$ as a function of $\rho_i^t$ scaled by the privacy value $v_i$, which is assumed to be publicly known. Here, we consider a linear privacy cost function for each client, i.e., $v_i \rho_i^t$.
The objective of the $i$-th client is to maximize its utility function by dynamically adjusting its privacy budget $\rho_i^t$ at each global iteration according to the reward from the central server. The goal of the central server is to minimize its cost function by adjusting the reward $R^t$ distributed to participating clients, under the premise of reaching the preset global model accuracy. Note that the reward to clients affects the clients' choices of privacy budgets, which in turn affect the central server's cost as shown in Eq (15). Thus, the two-stage Stackelberg game can be formulated as:
Stage I: $\min_{R^t} \; \mathcal{C}^t\!\left(R^t, \boldsymbol{\rho}^t\right), \qquad (18)$
Stage II: $\max_{\rho_i^t} \; U_i^t\!\left(\rho_i^t, \boldsymbol{\rho}_{-i}^t, R^t\right), \ \forall i, \qquad (19)$
where the privacy budgets over time form the strategy profile of client $i$, i.e., $\boldsymbol{\rho}_i = \{\rho_i^1, \dots, \rho_i^T\}$.
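Because the closed-form cost and utility expressions of Eqs (15)–(19) are not reproduced above, the following sketch only illustrates the structure of one round of the two-stage game under assumed functional forms: a reward share proportional to each client's privacy budget, the linear privacy cost $v_i \rho_i^t$ stated earlier, and a simple grid search standing in for the closed-form solutions of Section IV. It illustrates the leader-follower interaction, not the paper's derived equilibrium.

```python
import numpy as np

def client_utility(rho_i, rho_others, reward, v_i, fixed_cost):
    """Stage-II utility under assumed forms: a proportional reward share
    minus the linear privacy cost v_i * rho_i and a fixed training cost."""
    share = rho_i / (rho_i + rho_others + 1e-12)
    return share * reward - v_i * rho_i - fixed_cost

def best_response(rho_others, reward, v_i, fixed_cost, grid):
    """A client's best response: maximize its utility over a privacy-budget grid."""
    utils = [client_utility(r, rho_others, reward, v_i, fixed_cost) for r in grid]
    return grid[int(np.argmax(utils))]

def stackelberg_round(v, fixed_cost, reward_grid, rho_grid, accuracy_loss):
    """Stage I: the server searches over candidate rewards; for each candidate the
    clients' budgets are driven to an (approximate) equilibrium by best-response
    iteration, and the server keeps the reward minimizing accuracy loss + payment."""
    best_reward, best_cost, best_rho = None, np.inf, None
    for R in reward_grid:
        rho = np.full(len(v), rho_grid[len(rho_grid) // 2])  # initial guess
        for _ in range(50):  # best-response dynamics
            rho = np.array([best_response(rho.sum() - rho[i], R, v[i],
                                          fixed_cost, rho_grid)
                            for i in range(len(v))])
        cost = accuracy_loss(rho) + R  # server cost: accuracy loss plus reward
        if cost < best_cost:
            best_reward, best_cost, best_rho = R, cost, rho
    return best_reward, best_rho
```

Here accuracy_loss stands in for the accuracy-loss bound of Eq. (14), e.g., any callable decreasing in the total privacy budget.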
IV Stackelberg Nash Equilibrium Analysis
In this section, we will find the optimal strategy profile $\{R^{t*}, \boldsymbol{\rho}^{t*}\}$, with $t \in \{1, \dots, T\}$, for the central server and the selected clients in the two-stage Stackelberg game through backward induction. Firstly, we concentrate on the followers' operation and derive each selected client $i$'s optimal privacy budget $\rho_i^{t*}$ in the $t$-th global iteration under any given reward $R^t$. Then, considering the trade-off between the model accuracy loss and the payment to the clients, we deduce the central server's optimal strategy $R^{t*}$. Finally, we prove that the optimal solution forms a Stackelberg Nash Equilibrium.
IV-A Optimal Strategy Profile
We adopt a backward induction approach to derive the optimal strategies of the central server and each client, respectively. First of all, we analyze the optimal strategy of each selected client by determining its optimal privacy budget, as presented in the following theorem.
Theorem 1
In Stage II, given the payment $R^t$ in the $t$-th global iteration, the optimal privacy budget $\rho_i^{t*}$ of each client $i$ is:
(20)
Proof:
Firstly, we derive the first-order and second-order derivatives of each client $i$'s utility function with respect to the privacy budget $\rho_i^t$ as follows:
(21) | |||
(22) |
As the second-order derivative in Eq (22) is negative, the utility of each client $i$ is strictly concave in the feasible region of $\rho_i^t$. Then, we derive the optimal privacy budget of client $i$ in the $t$-th global iteration by solving the first-order condition $\partial U_i^t / \partial \rho_i^t = 0$, i.e.,
(23)
From Eq (21), since client $i$ cannot acquire the other clients' privacy budgets, as they are private information and there is no communication among clients during consecutive global training iterations, we need to derive the aggregate term involving all selected clients' privacy budgets. Thus, Eq (21) can be rewritten as follows:
(24)
After summing up both sides of Eq (24), we have:
(25)
By substituting this term into Eq (23), we obtain the expression of the optimal privacy budget given in Eq (20).
Hence, the theorem holds. ∎
Based on the optimal privacy budget $\rho_i^{t*}$, we derive the central server's optimal reward $R^{t*}$, as summarized in Theorem 2.
Theorem 2
In Stage I, based on the optimal privacy budget $\rho_i^{t*}$ of each client $i$ given in Eq (20), the optimal reward $R^{t*}$ of each global training iteration $t$ admits a closed form, where the discount factor $\gamma$ measures the decrement of the value of the reward over time and the remaining constants are determined by the system parameters.
Proof:
First of all, by substituting the optimal privacy budget in Eq (20) into the central server’s cost function in Eq (15), the cost function can be rewritten as follows:
(26)
where the constant terms follow from the substitution. The first-order derivative of the central server's cost function with respect to $R^t$ can be derived as:
(27)
Then, we need to consider the existence of a solution to Eq (27). As the reward $R^t > 0$, we have:
(28)
As shown in Theorem 2, the optimal reward $R^{t*}$ increases with $t$, indicating that the central server requires data of superior quality to attain the desired accuracy level when approaching the end of the time horizon $T$.
Definition 2
(Stackelberg Nash Equilibrium) The strategy profile $\{R^{t*}, \boldsymbol{\rho}^{t*}\}$, with $t \in \{1, \dots, T\}$, constitutes a Stackelberg Nash Equilibrium if, for any reward $R^t$ and any privacy budget $\rho_i^t$:
$\mathcal{C}^t\!\left(R^{t*}, \boldsymbol{\rho}^{t*}\right) \le \mathcal{C}^t\!\left(R^t, \boldsymbol{\rho}^{t*}\right), \qquad (30)$
$U_i^t\!\left(\rho_i^{t*}, \boldsymbol{\rho}_{-i}^{t*}, R^{t*}\right) \ge U_i^t\!\left(\rho_i^t, \boldsymbol{\rho}_{-i}^{t*}, R^{t*}\right), \quad \forall i. \qquad (31)$
Theorem 3
The above two-stage Stackelberg game possesses a Stackelberg Nash Equilibrium.
Proof:
Based on Theorem 1, we deduce that there exists an optimal privacy budget strategy $\rho_i^{t*}$ for each client $i$ given any reward $R^t$. Then, we need to prove that there exists an optimal reward $R^{t*}$ for the central server's cost function under the optimal privacy budget vector $\boldsymbol{\rho}^{t*}$. According to the proof of Theorem 2, the second-order analysis indicates that $\mathcal{C}^t$ is convex in $R^t$. Thus, an optimal reward $R^{t*}$ for the central server exists and is obtained by solving the first-order equation based on the optimal privacy budget vector $\boldsymbol{\rho}^{t*}$. In other words, the strategies $R^{t*}$ and $\boldsymbol{\rho}^{t*}$ are mutually optimal for the central server and each selected client. Thus, the optimal strategy profile $\{R^{t*}, \boldsymbol{\rho}^{t*}\}$, with $t \in \{1, \dots, T\}$, of the two-stage Stackelberg game constitutes a Stackelberg Nash Equilibrium. Hence, the theorem holds. ∎
Theorem 3 reveals that the optimal privacy budget of the selected clients in Theorem 1 and the optimal reward of the central server in Theorem 2 are mutually optimal, which leads to a steady state of the FL system. The overall framework is summarized in Algorithm 1.
Input: the total client size $N$, the hyperparameters, the discount factor $\gamma$, the number of local training epochs $E$, the number of global training iterations $T$, and the convex function.
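Since the body of Algorithm 1 does not survive the extraction above, the following Python-style outline is a hedged reconstruction of the overall QI-DPFL loop from Sections III-IV: one-off EMD-based selection, then in each global iteration the server posts its reward, each selected client chooses its privacy budget, trains locally, perturbs its model per Eqs (8)–(10), and the server aggregates. Here emd, perturb_local_model, and fedavg_aggregate refer to the sketches given earlier; the client/server objects and their methods are assumptions, not the authors' released code.

```python
def qi_dpfl(clients, reference_freq, emd_threshold, T, E, server):
    """Hedged outline of the QI-DPFL training procedure (cf. Algorithm 1)."""
    # Layer 1: quality-aware client selection via the EMD metric (Eq. (11)).
    selected = [c for c in clients
                if emd(c.label_freq, reference_freq) <= emd_threshold]

    w_global = server.init_model()
    for t in range(T):
        # Stage I: the server announces the time-dependent reward for this round.
        reward = server.optimal_reward(t)
        local_models, sizes = [], []
        for c in selected:
            # Stage II: the client picks its privacy budget given the reward.
            rho = c.optimal_privacy_budget(reward)
            # Local training (Eq. (1)) followed by rho-zCDP perturbation (Eq. (8)).
            w_local = c.local_update(w_global, epochs=E)
            w_noisy = perturb_local_model(w_local, rho, clip=c.clip)
            local_models.append(w_noisy)
            sizes.append(c.data_size)
        # The server aggregates the noisy local models (Eqs (2) and (12)).
        w_global = fedavg_aggregate(local_models, sizes)
    return w_global
```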
IV-B Range Analysis of the Reward
In this section, we will analyze the range of the reward under the Stackelberg Nash Equilibrium to guarantee the pre-set model accuracy while simultaneously minimizing the compensation provided to clients. Firstly, we introduce the additivity property of the privacy budget of each selected client as summarized in Lemma 1 [19].
Lemma 1
Suppose that two mechanisms satisfy $\rho_1$-zCDP and $\rho_2$-zCDP, respectively; then their composition satisfies $(\rho_1 + \rho_2)$-zCDP.
According to Lemma 1, in the $t$-th global iteration, the global model equivalently satisfies $\rho^t$-zCDP, where the global privacy budget equals the summation of the selected clients' privacy budgets (i.e., $\rho^t = \sum_{i=1}^{K} \rho_i^t$). Naturally, a larger privacy budget usually results in better model performance. To quantify the global model accuracy as a function of the privacy budget, we measure the test accuracy of the training model under different privacy budgets on different real-world datasets (i.e., the MNIST, Cifar10, and EMNIST datasets). Fig. 2 illustrates the MNIST and EMNIST datasets as an example. We observe that, in the $t$-th global iteration, the global model accuracy can be regarded as a simplified concave function of the global privacy budget $\rho^t$, parameterized by corresponding constants.
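The concrete concave accuracy-versus-budget function used in the paper is not recoverable from the text above, so the sketch below only illustrates the idea on synthetic points: accuracy measurements are simulated from a saturating curve, and an assumed logarithmic form $\theta_1 \log(1 + \theta_2 \rho)$ is fitted by least squares. Both the data-generating curve and the fitted form are assumptions for illustration, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic accuracy-vs-privacy-budget points from a saturating curve plus noise
# (purely illustrative; not measurements from the paper's experiments).
budgets = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
accuracy = 0.9 * (1.0 - np.exp(-0.6 * budgets)) + rng.normal(0.0, 0.01, budgets.size)

# Fit an assumed concave form acc(rho) ~= theta1 * log(1 + theta2 * rho):
# for each candidate theta2, theta1 has a closed-form least-squares solution.
best = None
for theta2 in np.linspace(0.05, 5.0, 200):
    x = np.log1p(theta2 * budgets)
    theta1 = float(x @ accuracy) / float(x @ x)
    err = float(np.sum((theta1 * x - accuracy) ** 2))
    if best is None or err < best[0]:
        best = (err, theta1, theta2)

err, theta1, theta2 = best
print(f"fitted: acc(rho) ~ {theta1:.3f} * log(1 + {theta2:.3f} * rho), sse = {err:.4f}")
```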
Remark 1
(Reward Range) Based on the optimal strategy of each selected client, the global privacy budget can be calculated as $\rho^{t*} = \sum_{i=1}^{K} \rho_i^{t*}$, based on which the model accuracy at the $t$-th global iteration can be derived as follows:
(32)
where the predefined model accuracy should be reached after $T$ iterations of model training. Thus, there exist a lower bound and an upper bound for the allocated reward $R^t$ in the $t$-th global iteration, i.e.,
(33)
V Numerical Experiments
In this section, we conduct extensive experiments to demonstrate the efficiency of our proposed framework on commonly used real-world datasets in federated learning.
V-A Experimental Settings
Our proposed approach is implemented on three real-world datasets (i.e., MNIST [23], Cifar10 [24], and EMNIST [25]) to demonstrate the generalization ability of our framework. We adopt the Dirichlet distribution [26] to partition the datasets into Non-IID subsets by setting the concentration parameter to 1.0. In addition, three classic machine-learning models with different structures are implemented, one for each dataset. Specifically, for the MNIST dataset, we leverage a linear model with a fully connected layer of 784 input and 10 output channels. For the Cifar10 dataset, the local training model is consistent with [21]. For the EMNIST dataset, we adopt a CNN similar in structure to LeNet-5 [23]. The basic dataset information and detailed parameter settings are summarized in Table I.
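The Non-IID partition described above can be sketched as follows: for each class, a Dirichlet vector with concentration parameter 1.0 [26] determines the fraction of that class's samples given to each client. Function names and the toy labels in the usage line are illustrative assumptions.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=1.0, seed=0):
    """Partition sample indices into Non-IID client shards via Dirichlet(alpha)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        # Fraction of this class assigned to every client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for client_id, shard in enumerate(np.split(cls_idx, cuts)):
            client_indices[client_id].extend(shard.tolist())
    return client_indices

# Illustrative usage with 100 toy labels over 10 classes and 5 clients.
parts = dirichlet_partition(np.repeat(np.arange(10), 10), num_clients=5, alpha=1.0)
print([len(p) for p in parts])
```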
V-B Experiments on Real Datasets
In the experiments, we denote the approach without client selection and the DP mechanism as FedAvg, the approach that only performs client selection as FedAvg-select, the approach that only adopts the DP mechanism as FedAvg-DP, and our proposed method with both client selection and the DP mechanism as QI-DPFL. Moreover, to verify the efficiency and effectiveness of our proposed FL framework, we compare it with two baselines, named Max and Random, following a comparison paradigm similar to [27]. Specifically, Max means that the central server chooses the largest possible reward value in each global iteration to guarantee the best performance, while Random refers to selecting a random reward to incentivize clients. Other incentive mechanisms such as [19] and [9] are designed with different objectives, namely maximizing the utility of both clients and the central server, and maximizing the profit of the model marketplace by designing an auction scenario for DPFL, respectively; they therefore cannot be compared directly with our QI-DPFL framework, and we exclude these two closely related methods from our comparative analysis.
EMNIST: For the IID data distribution, as shown in Fig. 3a, compared with FedAvg, FedAvg-DP applies the DP mechanism for privacy preservation and obtains the lowest model accuracy. Since FedAvg-select and QI-DPFL select clients with high-quality data, they improve the model performance in terms of convergence rate and accuracy. In Fig. 3b-3d, our proposed QI-DPFL achieves the lowest cost and allocated reward while guaranteeing model performance comparable to the Max-select method, and it achieves higher accuracy than FedAvg-DP. For the Non-IID data distribution, the superiority of QI-DPFL is more pronounced. In Fig. 4a, FedAvg-select and QI-DPFL improve the convergence rate and model accuracy by applying the client selection mechanism compared to FedAvg. FedAvg-DP obtains the lowest accuracy as artificial Gaussian random noise is added to avoid privacy leakage. Our proposed framework QI-DPFL, despite perturbing the local parameters, still achieves model performance similar to FedAvg-select, which shows the effectiveness of QI-DPFL. In Fig. 4b-4d, our algorithm attains model performance on par with the Max-select method while keeping costs and rewards minimal. Further, QI-DPFL outperforms FedAvg-DP by selecting participants with superior data quality. The experimental results on the MNIST dataset are similar to those on the EMNIST dataset.
TABLE I: Basic dataset information and parameter settings.
| Datasets | Training Set Size | Validation Set Size | Class | Image Size | | | Discount Factor |
| MNIST | 60,000 | 10,000 | 10 | 1 × 28 × 28 | 0.01 | 30 | 0.9429 |
| Cifar10 | 50,000 | 10,000 | 10 | 3 × 32 × 32 | 0.1 | 80 | 0.9664 |
| EMNIST | 731,668 | 82,587 | 62 | 1 × 28 × 28 | 0.01 | 30 | 0.901 |
Cifar10: For the IID datasets, as shown in Fig. 5a, compared with FedAvg, FedAvg-select achieves higher accuracy thanks to the client selection mechanism. To avoid privacy leakage, the artificial noise added to the transmitted parameters by the DP mechanism may deteriorate the model performance; accordingly, the accuracy of FedAvg-DP is lower than that of FedAvg, as shown in Fig. 5a. Our proposed algorithm QI-DPFL, despite perturbing the transmitted local parameters, still achieves a fast convergence rate and testing accuracy similar to FedAvg-select, revealing the effectiveness of QI-DPFL. Fig. 5b-5d indicate that our approach achieves model performance comparable to the Max-select method while maintaining the lowest cost and reward. Additionally, QI-DPFL achieves higher accuracy than FedAvg-DP by selecting clients with high data quality. Under the Non-IID data distribution, the advantages of QI-DPFL are even more conspicuous. In Fig. 6a, FedAvg-DP obtains the lowest model accuracy owing to the DP mechanism. Since FedAvg-select and QI-DPFL select clients with high-quality data, they exhibit enhanced performance in terms of both convergence rate and model accuracy. Moreover, although QI-DPFL adds artificial noise, it still achieves accuracy similar to FedAvg-select, which demonstrates the effectiveness of our model. In Fig. 6b-6d, QI-DPFL attains model accuracy on par with the Max-select scheme while keeping costs and rewards minimal. Further, QI-DPFL outperforms FedAvg-DP by selecting clients with superior data quality.
Based on the above experimental results, it becomes evident that the advantages of our proposed QI-DPFL are particularly pronounced on the Non-IID datasets, which is attributed to the heightened effectiveness of the client selection mechanism.
VI Conclusion
In this paper, we propose a novel federated learning framework called QI-DPFL to jointly solve the client selection and incentive mechanism problem on the premise of preserving clients’ data privacy. We adopt Earth Mover’s Distance (EMD) metric to select clients with high-quality data. Furthermore, we model the interactions between clients and the central server as a two-stage Stackelberg game and derive the Stackelberg Nash Equilibrium to describe the steady state of the system. Extensive experiments on MNIST, Cifar10, and EMNIST datasets for both IID and Non-IID settings demonstrate that QI-DPFL achieves comparable model accuracy and faster convergence rate with the lowest cost and reward for the central server.
References
- [1] W. G. Voss, “European union data privacy law reform: General data protection regulation, privacy shield, and the right to delisting,” The Business Lawyer, vol. 72, no. 1, pp. 221–234, 2016.
- [2] R. Zhou, J. Pang, Z. Wang, J. C. Lui, and Z. Li, “A truthful procurement auction for incentivizing heterogeneous clients in federated learning,” in 2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS). IEEE, 2021, pp. 183–193.
- [3] W. He, H. Yao, T. Mai, F. Wang, and M. Guizani, “Three-stage stackelberg game enabled clustered federated learning in heterogeneous uav swarms,” IEEE Transactions on Vehicular Technology, 2023.
- [4] J. S. Ng, W. Y. B. Lim, H.-N. Dai, Z. Xiong, J. Huang, D. Niyato, X.-S. Hua, C. Leung, and C. Miao, “Joint auction-coalition formation framework for communication-efficient federated learning in uav-enabled internet of vehicles,” IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 4, pp. 2326–2344, 2020.
- [5] L. Zhu, Z. Liu, and S. Han, “Deep leakage from gradients,” Advances in neural information processing systems, vol. 32, 2019.
- [6] C. Dwork, A. Roth et al., “The algorithmic foundations of differential privacy,” Foundations and Trends® in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–407, 2014.
- [7] M. Bun and T. Steinke, “Concentrated differential privacy: Simplifications, extensions, and lower bounds,” in Theory of Cryptography: 14th International Conference, TCC 2016-B, Beijing, China, October 31-November 3, 2016, Proceedings, Part I. Springer, 2016, pp. 635–658.
- [8] I. Mironov, “Rényi differential privacy,” in 2017 IEEE 30th computer security foundations symposium (CSF). IEEE, 2017, pp. 263–275.
- [9] P. Sun, X. Chen, G. Liao, and J. Huang, “A profit-maximizing model marketplace with differentially private federated learning,” in IEEE INFOCOM 2022-IEEE Conference on Computer Communications. IEEE, 2022, pp. 1439–1448.
- [10] X. Wu, Y. Zhang, M. Shi, P. Li, R. Li, and N. N. Xiong, “An adaptive federated learning scheme with differential privacy preserving,” Future Generation Computer Systems, vol. 127, pp. 362–372, 2022.
- [11] J. P. Near and C. Abuah, “Programming differential privacy,” URL: https://uvm, 2021.
- [12] J. Kang, Z. Xiong, D. Niyato, H. Yu, Y.-C. Liang, and D. I. Kim, “Incentive design for efficient federated learning in mobile networks: A contract theory approach,” in 2019 IEEE VTS Asia Pacific Wireless Communications Symposium (APWCS). IEEE, 2019, pp. 1–5.
- [13] M. Wu, D. Ye, J. Ding, Y. Guo, R. Yu, and M. Pan, “Incentivizing differentially private federated learning: A multidimensional contract approach,” IEEE Internet of Things Journal, vol. 8, no. 13, pp. 10 639–10 651, 2021.
- [14] Z. Yi, Y. Jiao, W. Dai, G. Li, H. Wang, and Y. Xu, “A stackelberg incentive mechanism for wireless federated learning with differential privacy,” IEEE Wireless Communications Letters, vol. 11, no. 9, pp. 1805–1809, 2022.
- [15] Y. Xu, M. Xiao, J. Wu, H. Tan, and G. Gao, “A personalized privacy preserving mechanism for crowdsourced federated learning,” IEEE Transactions on Mobile Computing, 2023.
- [16] N. Ding, Z. Fang, and J. Huang, “Optimal contract design for efficient federated learning with multi-dimensional private information,” IEEE Journal on Selected Areas in Communications, vol. 39, no. 1, pp. 186–200, 2020.
- [17] H. Wu and P. Wang, “Fast-convergent federated learning with adaptive weighting,” IEEE Transactions on Cognitive Communications and Networking, vol. 7, no. 4, pp. 1078–1088, 2021.
- [18] Y. Sun, H. Fernando, T. Chen, and S. Shahrampour, “On the stability analysis of open federated learning systems,” in 2023 American Control Conference (ACC). IEEE, 2023, pp. 867–872.
- [19] R. Hu and Y. Gong, “Trading data for learning: Incentive mechanism for on-device federated learning,” in GLOBECOM 2020-2020 IEEE Global Communications Conference. IEEE, 2020, pp. 1–6.
- [20] Y. Jiao, P. Wang, D. Niyato, B. Lin, and D. I. Kim, “Toward an automated auction framework for wireless federated learning services market,” IEEE Transactions on Mobile Computing, vol. 20, no. 10, pp. 3034–3048, 2020.
- [21] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Artificial intelligence and statistics. PMLR, 2017, pp. 1273–1282.
- [22] A. Rakhlin, O. Shamir, and K. Sridharan, “Making gradient descent optimal for strongly convex stochastic optimization,” arXiv preprint arXiv:1109.5647, 2011.
- [23] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
- [24] A. Krizhevsky, G. Hinton et al., “Learning multiple layers of features from tiny images,” Advances in neural information processing systems, 2009.
- [25] G. Cohen, S. Afshar, J. Tapson, and A. Van Schaik, “Emnist: Extending mnist to handwritten letters,” in 2017 international joint conference on neural networks (IJCNN). IEEE, 2017, pp. 2921–2926.
- [26] T.-M. H. Hsu, H. Qi, and M. Brown, “Measuring the effects of non-identical data distribution for federated visual classification,” arXiv preprint arXiv:1909.06335, 2019.
- [27] X. Kang, G. Yu, J. Wang, W. Guo, C. Domeniconi, and J. Zhang, “Incentive-boosted federated crowdsourcing,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, 2023, pp. 6021–6029.