LOCKS: User Differentially Private and Federated Optimal Client Sampling
Abstract.
With changes in privacy laws, there is often a hard requirement for client data to remain on the device rather than being sent to the server. Therefore, most processing happens on the device, and only an altered element is sent to the server. Such mechanisms are developed by leveraging differential privacy and federated learning. Differential privacy adds noise to the client outputs and thus deteriorates the quality of each iteration. This distributed setting adds a layer of complexity and additional communication and performance overhead. These costs are additive per round, so we need to reduce the number of iterations. In this work, we provide an analytical framework for studying the convergence guarantees of gradient-based distributed algorithms. We show that our private algorithm minimizes the expected gradient variance, with the number of required rounds depending on the dimensionality $d$ of the model. We discuss and suggest novel ways to improve the convergence rate to minimize the overhead using Importance Sampling (IS) and gradient diversity. Finally, we provide alternative frameworks that might be better suited to exploit client sampling techniques like IS and gradient diversity.
1. Introduction
Stochastic Gradient Descent (SGD) functions by randomly selecting a subset of data points per round and computing their gradients. The update step in SGD uses the average gradient to modify the weights from the previous round. Optimal model weights can improve the trained model and its inference capabilities on unseen data. For high-dimensional non-convex parameter spaces, obtaining optimal models is non-trivial and computationally heavy. Thus, improving the quality of gradients and accelerating the training process are fundamental goals for improving gradient algorithms.
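As a concrete reference point, the minimal sketch below shows one such mini-batch SGD step; the gradient function `grad_fn`, the dataset layout, and the learning rate `lr` are illustrative placeholders rather than part of our method.

```python
import numpy as np

def sgd_step(w, grad_fn, data, batch_size, lr, rng):
    """One SGD step: sample a random mini-batch and apply the average gradient."""
    idx = rng.choice(len(data), size=batch_size, replace=False)
    grads = np.stack([grad_fn(w, data[i]) for i in idx])  # per-sample gradients
    return w - lr * grads.mean(axis=0)                    # update with the batch average
```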
However, as we often see in practice, the information assimilated in each gradient can vary widely. After the first few training epochs, the gradients of specific data points lose their value (Katharopoulos and Fleuret, 2018). We could select data points randomly or batch the entire training set per round. Both methods are inefficient and do not consider the information contributed by the gradients or the data points.
A way to assess the importance of each data point could be based on the magnitude of the gradient or loss value contributed by the data point (Loshchilov and Hutter, 2015), (Alain et al., 2015). Gradient magnitude refers to metrics like the $\ell_1$-norm or $\ell_2$-norm computed on individual gradients. Larger magnitudes suggest higher levels of information in gradients. For high-dimensional parameter models, gradient norms can be costly to compute. Computing loss values is straightforward, but they need not accurately reflect the gradient update. In this work, we therefore focus on the magnitude of the gradient update. Such methods are based on Importance Sampling (IS) algorithms.
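The sketch below illustrates the generic IS recipe described above, assuming per-sample gradients are already available; the probability floor and the reweighting step are standard IS ingredients, not specifics of our algorithm.

```python
import numpy as np

def importance_probs(per_sample_grads, floor=1e-8):
    """Sampling probabilities proportional to per-sample l2 gradient norms."""
    norms = np.linalg.norm(per_sample_grads, axis=1)
    norms = np.maximum(norms, floor)      # keep every sample reachable
    return norms / norms.sum()

def importance_batch(per_sample_grads, batch_size, rng):
    """Draw a batch by gradient magnitude and reweight by 1/(n*p_i) for unbiasedness."""
    probs = importance_probs(per_sample_grads)
    idx = rng.choice(len(probs), size=batch_size, replace=True, p=probs)
    weights = 1.0 / (len(probs) * probs[idx])
    return idx, weights
```

Reweighting the selected gradients by the inverse of their sampling probabilities keeps the mini-batch estimate unbiased, the same kind of correction that reappears in the analysis later in this paper.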
The rationale behind randomly picking data points per round is to obtain an unbiased estimate of the actual gradient. However, this unbiasedness comes at the cost of higher gradient variance. In such cases, Stochastic Variance Reduction Gradient (SVRG) techniques have been suggested to reduce the variance of the gradient.
Besides the gradient magnitude, we should prioritize diverse gradients. If two gradients are similar under some similarity metric (e.g., cosine similarity), then we may only need to use one of them. Gradient diversity should therefore be another basis for client selection.
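As an illustration only, the following sketch greedily selects a diverse subset of gradients using cosine similarity; the similarity threshold and the norm-based visiting order are assumptions, not part of the algorithm analyzed later.

```python
import numpy as np

def diverse_subset(grads, k, threshold=0.95):
    """Greedily keep up to k gradients whose pairwise cosine similarity stays below threshold."""
    unit = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    selected = []
    for i in np.argsort(-np.linalg.norm(grads, axis=1)):   # visit large gradients first
        if all(float(unit[i] @ unit[j]) < threshold for j in selected):
            selected.append(int(i))
        if len(selected) == k:
            break
    return selected
```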
This work further focuses on distributed and private machine-learning settings. Big data technologies have enabled massive leaps in predictive learning. However, these techniques commonly access private personal client data that can potentially cause severe harm to the owner if leaked. Such leaks are preventable under Federated Learning (FL) and Differential Privacy (DP), which give owners tighter control over their data. For vanilla SGD, there are already techniques like FedAvg (Konečnỳ et al., 2016) and DP-SGD (Abadi et al., 2016) that can partially or fully protect private data. However, these algorithms' privacy and convergence analyses cannot be directly applied to the algorithms proposed in this work.
In this work, we study the problem of partial client availability in Federated Learning under differential privacy. We analyze the expected gradient variance due to uniform sampling and the noise added by DP with adaptive server learning rates. To the best of our knowledge, such an exact theoretical analysis does not exist. Therefore, we provide analytical results and convergence guarantees in Theorem 5.1 in terms of the required number of iterations. We also suggest a modified approach that reduces the number of communication rounds through simple scaling. We provide pseudo-code for the suggested algorithm. Finally, we motivate the identification of alternative frameworks due to significant drawbacks in the one suggested by the reference papers.
2. Motivation
Federated Learning involves training machine learning models on individual clients rather than sending the entire dataset to a central server. Clients process and train their datasets with an SGD variant on-device. At each round, the server selects a subset of its clients for training. Clients only send their computed gradients to the server. The server computes the average gradient over these client gradients and updates the model. After each round, the server sends back the updated model. These rounds are repeated until the model converges, similar to vanilla training. With more rounds, the communication cost incurred by the clients and the server substantially increases. Also, with limited computational resources and data, the clients must train longer and for more rounds. Therefore, we prefer to lower the number of training rounds.
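A minimal sketch of one such round is shown below, assuming a hypothetical client interface `local_gradient` that computes a gradient on local data; it only illustrates the communication pattern, not Algorithm 1.

```python
import numpy as np

def federated_round(server_weights, clients, sample_frac, server_lr, rng):
    """One FL round: sample clients, collect local gradients, average, update, broadcast."""
    m = max(1, int(sample_frac * len(clients)))
    chosen = rng.choice(len(clients), size=m, replace=False)
    grads = [clients[i].local_gradient(server_weights) for i in chosen]  # raw data stays on device
    new_weights = server_weights - server_lr * np.mean(grads, axis=0)
    return new_weights  # broadcast to all clients for the next round
```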
Differential Privacy (DP) is a mathematical framework over queries to a dataset that limits the leakage of a single client or data point in a dataset. However, even though we wish to learn aggregated values, the query outputs can disclose information about a particular individual in the dataset. Combining auxiliary public data with aggregated histograms or anonymized datasets can help us extract exact individual identities (Narayanan and Shmatikov, 2006). DP adds noise or performs specific sampling to limit the impact of a single client or data point on the query outputs, thus protecting individual privacy. When accessing data over multiple rounds, DP needs to add noise at each stage; thus, the DP cost is proportional to the number of rounds. Higher DP costs lead to practically trivial privacy for real-world applications. Thus, reducing the number of rounds is crucial to ensuring high privacy standards.
Therefore, an accelerated SGD algorithm will improve training quality and speeds and potentially lead to more private methods. Federated IS (FedIS) and Differentially Private IS (DPIS) leverage gradient magnitude and diversity to achieve this acceleration. We study approaches that combine some of these ideas and introduce new ones for private training.
3. Preliminaries
3.1. Differential Privacy
DP proposes a solution to publishing or sharing private datasets. DP mathematically defines the privacy leakage caused by computations over private data points. By providing an attacker-agnostic and information-agnostic defense, DP protects against worst-case scenarios. Thus, we do not need specific assumptions about the attacker and are protected against future information releases.
In the DP literature, we commonly consider a dataset $D$ as a multi-set of samples drawn from some sample space; for our purposes, each sample can be viewed as a vector in $\mathbb{R}^d$. The following section is based on preliminary definitions found in (Dwork et al., 2014).
To understand the mathematical notion of DP, we define pure and approximate DP.
Definition 3.1.
(Differential Privacy) A randomized mechanism $\mathcal{M}$ is considered to be $(\epsilon, \delta)$-DP if for all neighboring datasets $D, D'$ differing in a single sample and for all measurable output sets $S$, we have
(1)  $\Pr[\mathcal{M}(D) \in S] \le e^{\epsilon}\, \Pr[\mathcal{M}(D') \in S] + \delta.$
Differential privacy allows for pure DP ($\delta = 0$) or approximate DP ($\delta > 0$), each delivered by a different noise addition mechanism. Approximate DP allows for an infinitesimally small probability of data leakage. However, it composes (adds up) well over multiple iterations compared to pure DP. For most computations in this work, we will stick to approximate DP.
Definition 3.2.
($\ell_2$-Sensitivity) For a function $f$ mapping datasets to $\mathbb{R}^d$, we define its $\ell_2$-norm sensitivity (denoted as $\Delta_2 f$) over all neighboring datasets $D, D'$ differing in a single sample as
(2)  $\Delta_2 f = \max_{D, D'} \|f(D) - f(D')\|_2.$
Given the sensitivity of a function $f$, we can further define the Gaussian mechanism, which adds Gaussian noise to the output of $f$. The Gaussian mechanism is used to achieve approximate DP.
Lemma 3.3.
(Gaussian Mechanism) Consider an arbitrary function $f$ mapping datasets to $\mathbb{R}^d$ with $\ell_2$-sensitivity $\Delta_2 f$. Further, consider noise $Z \sim \mathcal{N}(0, \sigma^2 I_d)$ where $\sigma \ge \frac{\Delta_2 f \sqrt{2 \ln(1.25/\delta)}}{\epsilon}$. Then, the following randomized mechanism $\mathcal{M}$,
(3)  $\mathcal{M}(D) = f(D) + Z,$
(4)  $Z \sim \mathcal{N}(0, \sigma^2 I_d),$
satisfies approximate, i.e., $(\epsilon, \delta)$-DP.
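For concreteness, a minimal sketch of this mechanism is given below, using the classical calibration $\sigma = \Delta_2 f \sqrt{2\ln(1.25/\delta)}/\epsilon$ (valid for $\epsilon < 1$); the function and argument names are illustrative.

```python
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta, rng):
    """Release `value` with Gaussian noise calibrated to (epsilon, delta)-DP."""
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Example: privatize a clipped gradient norm whose sensitivity equals the clip bound.
rng = np.random.default_rng(0)
noisy_norm = gaussian_mechanism(value=3.2, l2_sensitivity=1.0, epsilon=0.5, delta=1e-5, rng=rng)
```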
We define a few other properties of DP.
Definition 3.4.
(Rényi Divergence) We use Rényi divergence to quantify the closeness of two probability distributions. Given two probability distributions $P$ and $Q$ with density functions $p$ and $q$ such that $q(x) > 0$ wherever $p(x) > 0$, the Rényi divergence of order $\alpha$ ($\alpha > 1$) is given as,
(5)  $D_{\alpha}(P \,\|\, Q) = \frac{1}{\alpha - 1} \log \mathbb{E}_{x \sim Q}\!\left[\left(\frac{p(x)}{q(x)}\right)^{\alpha}\right].$
Based on the Rényi divergence, we use the Rényi Differential Privacy (RDP) (Mironov, 2017) to quantify our proposed algorithms’ privacy cost. Although approximate DP is a stricter notion of privacy, its composition analysis is weaker than the newer forms of privacy like RDP, Concentrated Differential Privacy (CDP), and z-CDP. In this case, we choose to use RDP to be consistent with the privacy costs shown in (Wei et al., 2022).
Definition 3.5.
(Rényi Differential Privacy (RDP)) For any neighboring datasets $D, D'$, a randomized mechanism $\mathcal{M}$ satisfies $(\alpha, \epsilon)$-RDP if
(6)  $D_{\alpha}\big(\mathcal{M}(D) \,\|\, \mathcal{M}(D')\big) \le \epsilon.$
Definition 3.6.
($(\alpha, \epsilon)$-RDP to $(\epsilon', \delta)$-DP conversion) Suppose the randomized mechanism $\mathcal{M}$ satisfies $(\alpha, \epsilon)$-RDP. Then, for any $\delta \in (0, 1)$, $\mathcal{M}$ satisfies $(\epsilon', \delta)$-DP for,
(7)  $\epsilon' = \epsilon + \frac{\log(1/\delta)}{\alpha - 1}.$
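These two facts, the RDP curve of the Gaussian mechanism and the RDP-to-DP conversion above, can be combined in a few lines; the sketch below assumes no subsampling amplification and is only meant to make the bookkeeping concrete.

```python
import numpy as np

def gaussian_rdp(sigma, l2_sensitivity, alpha):
    """RDP of the Gaussian mechanism: eps(alpha) = alpha * Delta^2 / (2 * sigma^2)."""
    return alpha * l2_sensitivity**2 / (2.0 * sigma**2)

def rdp_to_dp(rdp_eps, alpha, delta):
    """Conversion from Definition 3.6: (alpha, eps)-RDP implies (eps + log(1/delta)/(alpha-1), delta)-DP."""
    return rdp_eps + np.log(1.0 / delta) / (alpha - 1.0)

def composed_epsilon(sigma, l2_sensitivity, steps, delta, alphas=np.arange(2, 65)):
    """Compose `steps` Gaussian releases additively under RDP, then take the best conversion over alpha."""
    return min(rdp_to_dp(steps * gaussian_rdp(sigma, l2_sensitivity, a), a, delta) for a in alphas)
```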
3.2. Federated Optimization
Federated Learning pursues distributed optimization by allowing clients to train models on their own data. The raw data never leaves the device, and each client only sends its gradient value to be aggregated on the server. Thus, the optimization objective is split across the $N$ clients and can be arranged as follows,
(8)  $\min_{w \in \mathbb{R}^d} F(w),$
(9)  $F(w) := \frac{1}{N} \sum_{i=1}^{N} F_i(w),$
(10)  $F_i(w) := \frac{1}{|\mathcal{D}_i|} \sum_{(x_j, y_j) \in \mathcal{D}_i} f\big(w; x_j, y_j\big).$
Here, $x_j$ and $y_j$ are the data sample and the response variable, respectively, and $\mathcal{D}_i$ is the local dataset of client $i$.
We assume a non-convex setting and make the following assumptions as made in (Wang et al., 2022b) (restated for completeness).
3.2.1. Assumptions
Assumption 1 (L-smooth).
Each client $i$ has a local objective function $F_i$ that is $L$-Lipschitz smooth ($L > 0$), i.e., for all $w, w' \in \mathbb{R}^d$,
(11)  $\|\nabla F_i(w) - \nabla F_i(w')\| \le L \|w - w'\|.$
Assumption 2.
(Unbiased Local Gradient Estimator and Local Variance) Suppose at round $t$ client $i$ draws a local sample $\xi_i^t$ and holds the model parameters $w^t$. Then for all clients $i$,
(12)  $\mathbb{E}_{\xi_i^t \sim \mathcal{U}(\mathcal{D}_i)}\big[\nabla F_i(w^t; \xi_i^t)\big] = \nabla F_i(w^t),$
where $\mathcal{D}_i$ is the set of all the data samples at client $i$ and $\mathcal{U}$ is the uniform distribution over $\mathcal{D}_i$.
Further, the local objective functions have bounded local variance, i.e.,
(13)  $\mathbb{E}_{\xi_i^t \sim \mathcal{U}(\mathcal{D}_i)}\big[\|\nabla F_i(w^t; \xi_i^t) - \nabla F_i(w^t)\|^2\big] \le \sigma_L^2.$
Assumption 3.
(Bounded Dissimilarity) (Wang et al., 2020) The following expression relates the expected norms of the local and global gradients,
(14)  $\frac{1}{N}\sum_{i=1}^{N} \|\nabla F_i(w)\|^2 \le A^2 \|\nabla F(w)\|^2 + \sigma_G^2,$
where $A \ge 1$ and $\sigma_G \ge 0$ are constants. In the case when $A = 1$ and $\sigma_G = 0$, all local loss functions are similar.
Assumption 4.
(Bounded Clipping) Suppose a vector $g$ is clipped to norm at most $C$, i.e., we define $\bar{g} := g \cdot \min\big(1, C / \|g\|\big)$. We say that the error introduced by clipping is bounded, i.e.,
(15)  $\mathbb{E}\big[\|\bar{g} - g\|^2\big] \le \sigma_C^2.$
The last assumption is a new one that suggests that the variance due to clipping is generally small. (Wang et al., 2022b) also assumes that the stochastic gradient's norm is uniformly bounded. However, since we will be clipping the local gradients before adding noise for DP, we do not require this explicit assumption. Rather, we use the clipping constant $C$ to bound each local gradient's norm.
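The clipping operation referred to throughout, DP-SGD-style norm clipping, is summarized in the short sketch below; the small tolerance constant is only there to avoid division by zero and is not part of the assumption.

```python
import numpy as np

def clip_gradient(grad, clip_norm):
    """Scale `grad` so that its l2 norm is at most `clip_norm` (small gradients are left untouched)."""
    norm = np.linalg.norm(grad)
    return grad * min(1.0, clip_norm / max(norm, 1e-12))
```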
4. Preliminary Work
Lemma 4.1.
Proof: The expected batch size under the sampling scheme of Algorithm 1 can be written as follows,
(16) | ||||
(17) | ||||
(18) |
Lemma 4.2.
The private gradient estimate at round $t$ in Algorithm 1 is an unbiased estimator of the aggregate clipped gradients.
Proof: Let us first examine the expression for the private gradient estimate. Note that the added Gaussian noise is zero-mean, and thus by definition its expected value is zero; step (1) below follows from this idea. Step (2) follows from introducing the net sampling probability of clients across the first and second sampling rounds (combined).
(19) | ||||
(20) | ||||
(21) | ||||
(22) | ||||
(23) | ||||
(24) | ||||
(25) | ||||
(26) |
5. Main Theorems
We need to find an optimal sampling mechanism for our proposed method that minimizes the overall variance. Additional variance is introduced by the partial participation of clients, which leads to an approximation of the overall objective. We first study the impact of uniformly sampling clients in Theorem 5.1.
Theorem 5.1.
Let the adaptive global learning rates be $\eta_t$. Under Assumptions 1-4, the model parameter sequence generated by Algorithm 1 satisfies the following:
(27) | ||||
(28) |
where the expectation is taken over the local dataset samples among clients, and the additional variance term captures the randomness due to client sampling and privacy noise. We uniformly sample clients as shown in Algorithm 1 with a fixed net probability (across both rounds of sampling). The variance bounds are introduced in Assumptions 1-4. Here, $d$ is the number of model parameters and $\sigma^2$ is the variance of the privacy noise added as per Algorithm 1. Furthermore,
(29) | ||||
(30) | ||||
(32) | ||||
(33) |
Proof: We define the aggregate update over the sampled clients. In the case where all clients participate, this aggregate coincides with the full update; however, for partial client participation it does not. Thus, the randomness in partial client participation consists of client sampling, stochastic gradients, and privacy noise.
Due to the smoothness assumption on the global objective (Assumption 1), we take the expectation over the client sampling and privacy noise at round $t$; note that the previous iterate is unaffected by the sampling at round $t$. Taking the expectation over the randomness of client sampling and the private Gaussian noise, and using that the noise is independent and zero-mean, the cross terms involving the noise vanish. Therefore,
(34) | ||||
Let us bound the first term. We note that, by our sampling technique, the expected batch size is given by Lemma 4.1.
(35) | ||||
(36) | ||||
(37) | ||||
(38) | ||||
(39) |
Here, (1) is from the law of total expectation (or tower property), (2) is due to the definition of the sampling probabilities, (3) is by Jensen's inequality for a concave function (the square root is concave), and (4) follows from Lemma 4.1.
Let us now bound the second term under the assumption that we sample with replacement.
(40) | ||||
(41) | ||||
(42) | ||||
(43) |
Here, (1) follows from a simple algebraic expansion. We first bound the first term. Note that clipping removes a bounded factor from each gradient, yielding its clipped form, and we assume the resulting clipping error is bounded (Assumption 4).
(44) | ||||
(45) | ||||
(46) |
Here, (1) rewrites the summation with the indicator-variable trick, and in (2) we take the expected value of the indicator function (which equals its inclusion probability). In (3), we use the stated inequality together with Assumption 2, and assume a bound on participation (achieved by clipping the number of clients per user). The server broadcasts both of these bounds. Clients limit their training to a fixed number of samples drawn with repetition and ensure that at least that many samples (with replacement) are trained each time.
Let us now bound the next term,
(47) | ||||
(48) | ||||
(49) | ||||
(50) | ||||
(51) |
where (1) follows from an analysis similar to the previous term and the cancellation of probabilities, and (2) uses the stated inequality together with Assumption 3.
Now, we only need to bound the remaining term.
(52) | ||||
(53) | ||||
(54) | ||||
(55) | ||||
(56) |
where (1) is a simple re-shuffling of constants and (2) uses the stated inequality together with the analysis of the previous term.
Putting everything together, we get that,
(57) | ||||
(58) | ||||
(59) | ||||
(60) |
Re-arranging the terms, taking expectations for all the steps, summing from $t = 0$ to $T - 1$ (and canceling the telescoping terms), and dividing by $T$, we next get the following,
(61) | |||
(62) | |||
(63) | |||
(64) | |||
(65) | |||
(66) | |||
(67) |
Here, in (1) we assume that the adaptive learning rates are bounded in terms of the base learning rate. Also, we know that, by definition, the global objective is lower-bounded by its optimal value. (2) follows from bounding the learning-rate terms.
We need to ensure that, for an appropriate constant and all values of $t$, the following condition holds. Note that the left-hand side is largest in the worst case, and so we essentially need,
(68) |
Finally, we notice that, to reduce the variance, we need a sampling policy that satisfies Equation 68 for all combinations such that
(69) |
If we choose this quantity proportional to a constant, then we get the following value at round $t$,
(70) |
Taking the summation over all values of $t$,
(71) | ||||
(72) |
Now, we can rewrite our expected variance function as,
(73) | |||
(74) | |||
(75) |
Substituting this back, our final convergence result is,
(76) | |||
(77) | |||
(78) | |||
(79) |
For this bound to be meaningful, and assuming the bounded variances are reasonably small, the biggest challenge is the additional term involving the model dimensionality $d$. For models with many parameters, we minimize the variance only when the number of rounds is sufficiently large. After that, we get the bound for uniform sampling as,
(80) |
This concludes the proof of the main theorem. The sampling probabilities in the denominator should be large; however, with dynamic sampling, some probabilities might be significantly small, leading to large values for the variance. Therefore, we look at alternative ways of tackling the same problem.
We notice that the unbiasedness of the gradient can be maintained without probability scaling by assuming a weighted gradient output.
Lemma 5.2.
The private unscaled gradient estimate at round $t$ is an unbiased estimator of the aggregate clipped gradients, under a condition on the per-round sampling and assuming that the resulting gradient is interpreted as a weighted aggregate.
Proof: Let us first examine the expression for the unscaled estimate.
(81) | ||||
(82) | ||||
(83) | ||||
(84) | ||||
(85) | ||||
(86) | ||||
(87) | ||||
(88) |
Further, the expected batch size remains the same as before. We believe this might be a better framework for analysis since it provides the following advantages:
• Avoids having dynamic probabilities in the denominator.
• The weights sum up to 1, providing an intuitive interpretation of the weighted gradients.
• Maintains an unbiased estimator and a small expected batch size.
The only minor inconvenience might be the additional factor in the numerator. We study the resulting variance factor as part of our future work.
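The contrast between the two aggregation styles can be made concrete with the sketch below; it is a conceptual comparison under assumed inputs (sampled gradients, inclusion probabilities, weights), not the exact estimators analyzed above.

```python
import numpy as np

def aggregate_inverse_prob(sampled_grads, sampled_probs, num_clients):
    """Inverse-probability (Horvitz-Thompson style) estimate of the average client gradient.
    Unbiased, but small inclusion probabilities in the denominator inflate the variance."""
    return sum(g / p for g, p in zip(sampled_grads, sampled_probs)) / num_clients

def aggregate_weighted(sampled_grads, weights):
    """Unscaled alternative: weights are normalized to sum to one, so no sampling
    probability appears in a denominator; unbiasedness is then argued with respect
    to a weighted view of the objective."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * g for wi, g in zip(w, sampled_grads))
```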
Theorem 5.3.
Algorithm 1 is differentially private.
Proof: We notice that in Algorithm 1, the only private quantities are the raw gradients and their norms. Since we bound each gradient by the clipping constant $C$, we have bounded sensitivity. Normally, we would add Gaussian noise with variance calibrated to the per-step privacy budgets. With strong composition (Kairouz et al., 2015), we get,
(89) | ||||
(90) | ||||
(91) |
We compose the privacy costs of the individual mechanisms using RDP (Mironov, 2017). However, the optimal value of the order $\alpha$ depends on the values of $\epsilon$ and $\delta$. Thus, given a target budget, we have the following,
(92) | ||||
(93) |
Here, the order $\alpha$ remains constant and can be replaced by the value that minimizes the privacy cost. Further, the expectation is taken over the sampling randomness at each step, and the resulting budget also depends on the number of steps.
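For reference, the strong (advanced) composition bound invoked above can be evaluated numerically as in the hedged sketch below; we state the simpler Dwork-style form, whereas Kairouz et al. (2015) give a tighter optimal bound, and the per-step budgets are placeholders.

```python
import numpy as np

def advanced_composition(eps_step, delta_step, num_steps, delta_slack):
    """T-fold composition of (eps, delta)-DP mechanisms satisfies
    (eps_total, T*delta + delta_slack)-DP with eps_total given below."""
    eps_total = (np.sqrt(2.0 * num_steps * np.log(1.0 / delta_slack)) * eps_step
                 + num_steps * eps_step * (np.exp(eps_step) - 1.0))
    return eps_total, num_steps * delta_step + delta_slack

# Example: 1000 steps of (0.01, 1e-7)-DP composed with slack 1e-5.
print(advanced_composition(0.01, 1e-7, 1000, 1e-5))
```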
6. Algorithms
For the LOCKS algorithm (Algorithm 1), we provide details and explanations about the client-server architecture and how each update is processed.
In the first round, the server randomly picks each client with a fixed probability. These clients compute a private estimate of their last gradient norm. If a noisy norm is negative, we clip it to a small minimal value. These norms are then converted into a probability distribution, and the clients with the highest probabilities are prioritized in the sampling. Thus, the sampling probability is proportional to the noisy gradient norm of the client. The clients picked in the second round submit their private gradients to the server. The server aggregates these gradients and broadcasts the updated model to all the clients.
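A minimal sketch of this second-stage selection is given below; the floor value, the use of sampling with replacement, and the variable names are assumptions, since Algorithm 1's pseudocode is not reproduced here.

```python
import numpy as np

def second_stage_probs(noisy_norms, norm_floor=1e-6):
    """Turn privately estimated gradient norms into sampling probabilities.
    Noisy norms can be negative, so they are first clipped to a small floor."""
    clipped = np.maximum(np.asarray(noisy_norms, dtype=float), norm_floor)
    return clipped / clipped.sum()

def select_clients(first_stage_ids, noisy_norms, num_selected, rng):
    """Second sampling round: prioritize clients with the largest private norms."""
    probs = second_stage_probs(noisy_norms)
    return rng.choice(first_stage_ids, size=num_selected, replace=True, p=probs)
```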
7. Experiments
We provide preliminary experiments to motivate further investigation in this space. We analyze a federated system with uniformly sampled clients. Here, each client only adds noise to its last gradient before sending it to the server. This provides user privacy and keeps the data on the device. For our analysis, we consider a pool of clients and assume that a subset of them is picked uniformly at random in each round. Each client runs several local epochs between the server updates. The noise multiplier in each round is picked after searching for the best value under RDP accounting using Google's TensorFlow Privacy library; we report the multiplier required for each target privacy budget. The smaller the multiplier, the lower the noise, but the higher the privacy loss, leading to significant leaks.
Note that none of the provided values is satisfactory. We expect this, since using a smaller noise multiplier lowers the gradient variance but proportionately increases the privacy loss.
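The multiplier search described above can be implemented as a simple binary search over an accountant; in the sketch below, `epsilon_of` is a caller-supplied placeholder (e.g., a wrapper around TensorFlow Privacy's RDP accounting), not a specific library call.

```python
def smallest_noise_multiplier(epsilon_of, target_eps, lo=0.3, hi=20.0, iters=40):
    """Binary-search the smallest noise multiplier whose accounted epsilon is at most target_eps.
    `epsilon_of(multiplier)` is assumed to be monotonically decreasing in the multiplier."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if epsilon_of(mid) <= target_eps:
            hi = mid   # feasible: try a smaller multiplier (less noise)
        else:
            lo = mid   # infeasible: need more noise
    return hi
```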
We also empirically analyze the optimal client sampling techniques suggested in (Wei et al., 2022) and (Li et al., 2019), as modified in Algorithm 1. In this scenario, we first randomly sample clients. We prioritize the clients with the highest private gradient norms in the second round, so the probability of picking these clients is proportional to their gradient norm. By prioritizing clients with large gradient norms, we see mild empirical speed-ups in the convergence rate, as shown in Figure 1. Though LOCKS generally performs better in high-noise / high-privacy settings, we still need to understand the underlying mechanism for its success.
We showcase our preliminary results for MNIST in Figure 1.
8. Conclusions
This work demonstrates the convergence of user differentially private and federated systems for non-convex objective functions with an adaptive learning rate and partial client participation. We provide a general two-pronged sampling framework that reduces the communication requirement for Federated systems per round. Our method maintains an unbiased gradient estimate while providing a flexible framework for further analysis. We also offer an alternative approach for analyzing the weighted sum of user gradients. There are, however, significant gaps in the literature that suggest multiple open problems.
• (Wang et al., 2022a) shows that gradient differences due to data heterogeneity do not impact empirical results. Therefore, we need to understand why gradient dissimilarity is important and how it impacts the convergence rate.
• Currently, our model requires a large number of steps to reasonably converge. A high number of steps adds significant communication and privacy costs, and future work should tackle this issue.
• Study how adaptive clipping and adaptive sampling affect convergence rates.
• Demonstrate empirical results for more complex datasets.
• Explore alternative approaches to gradient similarity using metrics such as angles, cosine similarity, and correlation.
References
- Abadi et al. (2016) Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential privacy. Proceedings of the 2016 ACM SIGSAC conference on computer and communications security (2016), 308–318.
- Alain et al. (2015) Guillaume Alain, Alex Lamb, Chinnadhurai Sankar, Aaron Courville, and Yoshua Bengio. 2015. Variance reduction in sgd by distributed importance sampling. arXiv preprint arXiv:1511.06481 (2015).
- Dwork et al. (2014) Cynthia Dwork, Aaron Roth, et al. 2014. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 9, 3-4 (2014), 211–407.
- Kairouz et al. (2015) Peter Kairouz, Sewoong Oh, and Pramod Viswanath. 2015. The composition theorem for differential privacy. In International conference on machine learning. PMLR, 1376–1385.
- Katharopoulos and Fleuret (2018) Angelos Katharopoulos and François Fleuret. 2018. Not all samples are created equal: Deep learning with importance sampling. In International conference on machine learning. PMLR, 2525–2534.
- Konečnỳ et al. (2016) Jakub Konečnỳ, H Brendan McMahan, Felix X Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. 2016. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492 (2016).
- Li et al. (2019) Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. 2019. On the convergence of fedavg on non-iid data. arXiv preprint arXiv:1907.02189 (2019).
- Loshchilov and Hutter (2015) Ilya Loshchilov and Frank Hutter. 2015. Online batch selection for faster training of neural networks. arXiv preprint arXiv:1511.06343 (2015).
- Mironov (2017) Ilya Mironov. 2017. Rényi differential privacy. In 2017 IEEE 30th computer security foundations symposium (CSF). IEEE, 263–275.
- Narayanan and Shmatikov (2006) Arvind Narayanan and Vitaly Shmatikov. 2006. How to break anonymity of the netflix prize dataset. arXiv preprint cs/0610105 (2006).
- Wang et al. (2022a) Jianyu Wang, Rudrajit Das, Gauri Joshi, Satyen Kale, Zheng Xu, and Tong Zhang. 2022a. On the unreasonable effectiveness of federated averaging with heterogeneous data. arXiv preprint arXiv:2206.04723 (2022).
- Wang et al. (2020) Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H Vincent Poor. 2020. Tackling the objective inconsistency problem in heterogeneous federated optimization. Advances in neural information processing systems 33 (2020), 7611–7623.
- Wang et al. (2022b) Lin Wang, YongXin Guo, Tao Lin, and Xiaoying Tang. 2022b. DELTA: Diverse Client Sampling for Fasting Federated Learning. https://doi.org/10.48550/ARXIV.2205.13925
- Wei et al. (2022) Jianxin Wei, Ergute Bao, Xiaokui Xiao, and Yin Yang. 2022. DPIS: An Enhanced Mechanism for Differentially Private SGD with Importance Sampling. arXiv preprint arXiv:2210.09634 (2022).