Privacy of Noisy Stochastic Gradient Descent:
More Iterations without More Privacy Loss
Abstract
A central issue in machine learning is how to train models on sensitive user data. Industry has widely adopted a simple algorithm: Stochastic Gradient Descent with noise (a.k.a. Stochastic Gradient Langevin Dynamics). However, foundational theoretical questions about this algorithm’s privacy loss remain open—even in the seemingly simple setting of smooth convex losses over a bounded domain. Our main result resolves these questions: for a large range of parameters, we characterize the differential privacy up to a constant factor. This result reveals that all previous analyses for this setting have the wrong qualitative behavior. Specifically, while previous privacy analyses increase ad infinitum in the number of iterations, we show that after a small burn-in period, running SGD longer leaks no further privacy.
Our analysis departs from previous approaches based on fast mixing, instead using techniques based on optimal transport (namely, Privacy Amplification by Iteration) and the Sampled Gaussian Mechanism (namely, Privacy Amplification by Sampling). Our techniques readily extend to other settings, e.g., strongly convex losses, non-uniform stepsizes, arbitrary batch sizes, and random or cyclic choice of batches.
1 Introduction
Convex optimization is a fundamental task in machine learning. When models are learnt on sensitive data, privacy becomes a major concern—motivating a large body of work on differentially private convex optimization [11, 5, 6, 20, 28, 26, 41, 7]. In practice, the most common approach for training private models is Noisy-SGD, i.e., Stochastic Gradient Descent with noise added each iteration. This algorithm is simple and natural, has optimal utility bounds [5, 6], and is implemented on mainstream machine learning platforms such as Tensorflow (TF Privacy) [1], PyTorch (Opacus) [48], and JAX [8].
Yet, despite the simplicity and ubiquity of this Noisy-SGD algorithm, we do not understand basic questions about its privacy loss (throughout, we follow the literature by writing “privacy loss” to refer to the (Rényi) differential privacy parameters)—i.e., how sensitive the output of Noisy-SGD is with respect to the training data. Specifically:
Question 1.1.
What is the privacy loss of Noisy-SGD as a function of the number of iterations?
Even in the seemingly simple setting of smooth convex losses over a bounded domain, this fundamental question has remained wide open. In fact, even more basic questions are open:
Question 1.2.
Does the privacy loss of Noisy-SGD increase ad infinitum in the number of iterations?
The purpose of this paper is to understand these fundamental theoretical questions. Specifically, we resolve Questions 1.1 and 1.2 by characterizing the privacy loss of Noisy-SGD up to a constant factor in this (and other) settings for a large range of parameters. Below, we first provide context by describing previous analyses in §1.1, and then describe our result in §1.2 and techniques in §1.3.
1.1 Previous approaches and limitations
Although there is a large body of research devoted to understanding the privacy loss of Noisy-SGD, all existing analyses have (at least) one of the following two drawbacks.
Privacy bounds that increase ad infinitum.
One type of analysis approach yields upper bounds on the DP loss (resp., Rényi DP loss) that scale in the number of iterations $T$ as $\sqrt{T}$ (resp., $T$). This includes the original analyses of Noisy-SGD, which were based on the techniques of Privacy Amplification by Sampling and Advanced Composition [5, 2], as well as alternative analyses based on the technique of Privacy Amplification by Iteration [19]. A key issue with these analyses is that they increase unboundedly in $T$. This limits the number of iterations that Noisy-SGD can be run within a reasonable privacy budget, typically leading to suboptimal optimization error in practice. Is this a failure of existing analysis techniques or an inherent fact about the privacy loss of Noisy-SGD?
Privacy bounds that apply for large $T$ but require strong additional assumptions.
The second type of analysis approach yields convergent upper bounds on the privacy loss, but requires strong additional assumptions. This approach is based on connections to sampling. The high-level intuition is that Noisy-SGD is a discretization of a continuous-time algorithm with bounded privacy loss. Specifically, Noisy-SGD can be interpreted as the Stochastic Gradient Langevin Dynamics (SGLD) algorithm [45], which is a discretization of a continuous-time Markov process whose stationary distribution is equivalent to the exponential mechanism [30] and thus is differentially private under certain assumptions.
However, making this connection precise requires strong additional assumptions and/or the resolution of longstanding open questions about the mixing time of SGLD (see §1.4 for details). Only recently did a large effort in this direction culminate in the breakthrough work of Chourasia et al. [12], which proves that full-batch Langevin dynamics (a.k.a. Noisy-GD rather than Noisy-SGD) has a privacy loss that converges as $T \to \infty$ in this setting, where the smooth losses are additionally assumed to be strongly convex. (Footnote 3: Recently, Ye and Shokri [47] and Ryffel et al. [38], in works concurrent to the present paper, extended this result of [12] to SGLD by removing the full-batch assumption; we also obtain the same result by a direct extension of our (completely different) techniques, see §5.1. Note that both these papers [47, 38] still require strongly convex losses, and in fact state in their conclusions that the removal of this assumption is an open problem that “would pave the way for wide adoption by data scientists.” Our main result resolves this question.)
Unfortunately, the assumption of strong convexity seems unavoidable with current techniques. Indeed, in the absence of strong convexity, it is not even known if Noisy-SGD converges to a private stationary distribution, let alone if this convergence occurs in a reasonable amount of time. (The tour-de-force work [10] shows mixing in (large) polynomial time, but only in total variation distance which does not have implications for privacy.) There are fundamental challenges for proving such a result. In short, SGD is only a weakly contractive process without strong convexity, which means that its instability increases with the number of iterations [25]—or in other words, it is plausible that Noisy-SGD could run for a long time while memorizing training data, which would of course mean it is not a privacy-preserving algorithm. As such, given state-of-the-art analyses in both sampling and optimization, it is unclear if the privacy loss of Noisy-SGD should even remain bounded; i.e., it is unclear what answer one should even expect for Question 1.2, let alone Question 1.1.
1.2 Contributions
The purpose of this paper is to resolve Questions 1.1 and 1.2. To state our result requires first recalling the parameters of the problem. Throughout, we prefer to state our results in terms of Rényi Differential Privacy (RDP); these RDP bounds are easily translated to DP bounds, as mentioned below in Remark 1.4. See the preliminaries section §2 for definitions of DP and RDP.
We consider the basic Noisy-SGD algorithm run on a dataset $x = (x_1, \dots, x_n)$, where each datapoint $x_i$ defines a convex, $L$-Lipschitz, and $M$-smooth loss function $f_{x_i}$ on a convex set $\mathcal{K} \subseteq \mathbb{R}^d$ of diameter $D$. For any stepsize $\eta$, batch size $b$, and initialization $w_0 \in \mathcal{K}$, we iterate $T$ times the update
$$w_{t+1} \;=\; \Pi_{\mathcal{K}}\big[\,w_t - \eta\, g_t + Z_t\,\big],$$
where $g_t = \frac{1}{b}\sum_{i \in B_t} \nabla f_{x_i}(w_t)$ denotes the average gradient vector on a random batch $B_t$ of size $b$, $Z_t \sim \mathcal{N}(0, \sigma^2 I_d)$ is an isotropic Gaussian (note that different papers have different notational conventions for the total noise $Z_t$: its variance is $\sigma^2$ here and in, e.g., [19], whereas other papers such as [12] and [5] scale the noise differently; to translate results, simply rescale $\sigma$), and $\Pi_{\mathcal{K}}$ denotes the Euclidean projection onto $\mathcal{K}$.
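To make the update above concrete, the following is a minimal NumPy sketch of the loop, under the convention written above (per-step noise of total variance $\sigma^2$ added after the averaged gradient step) and with an $\ell_2$-ball constraint set. All names and parameter placements here are illustrative assumptions, not an implementation from the paper or from any DP library.

```python
# Illustrative sketch of Noisy-SGD on an l2 ball of diameter D (hypothetical helper).
import numpy as np

def project_ball(w, D):
    """Euclidean projection onto the centered ball of diameter D."""
    r = D / 2
    norm = np.linalg.norm(w)
    return w if norm <= r else w * (r / norm)

def noisy_sgd(grad_f, data, w0, T, eta, b, sigma, D, rng):
    """grad_f(x_i, w): gradient of the loss of datapoint x_i at model w."""
    n, w = len(data), np.array(w0, dtype=float)
    for _ in range(T):
        batch = rng.choice(n, size=b, replace=False)              # random batch B_t
        g = np.mean([grad_f(data[i], w) for i in batch], axis=0)  # averaged gradient
        Z = rng.normal(0.0, sigma, size=w.shape)                  # Z_t ~ N(0, sigma^2 I)
        w = project_ball(w - eta * g + Z, D)                      # projected noisy step
    return w  # only the final iterate is released
```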
Known privacy analyses of Noisy-SGD give an $(\alpha, \varepsilon)$-RDP upper bound of
$$\varepsilon \;=\; O\!\left(\frac{\alpha\, T\, \eta^2 L^2}{n^2 \sigma^2}\right), \qquad (1.1)$$
which increases ad infinitum as the number of iterations $T \to \infty$ [5, 2, 19]. Our main result is a tight characterization of the privacy loss, answering Question 1.1. This result also answers Question 1.2 and shows that previous analyses have the wrong qualitative behavior: after a small burn-in period of $\bar{T}$ iterations, where $\bar{T} \asymp Dn/(\eta L)$, Noisy-SGD leaks no further privacy. See Figure 1 for an illustration.

Theorem 1.3 (Informal statement of main result: tight characterization of the privacy of Noisy-SGD).
For a large range of parameters, Noisy-SGD satisfies $(\alpha, \varepsilon)$-RDP for
$$\varepsilon \;=\; \Theta\!\left(\frac{\alpha\, \eta^2 L^2}{n^2 \sigma^2}\,\min\big\{T,\ \bar{T}\big\}\right), \qquad \text{where } \bar{T} \asymp \frac{Dn}{\eta L}, \qquad (1.2)$$
and moreover this bound is tight up to a constant factor.
Observe that the privacy bound in Theorem 1.3 is identical to the previous bound (1.1) when the number of iterations $T$ is small, but stops increasing once $T \gtrsim \bar{T}$. Intuitively, $\bar{T}$ can be interpreted as the smallest number of iterations required for two SGD processes run on adjacent datasets to, with reasonable probability, reach opposite ends of the constraint set. (Details: Noisy-SGD updates on adjacent datasets differ by at most $2\eta L/b$ if the differing datapoint is in the current batch, i.e., with probability $b/n$, and otherwise are identical. Thus, in expectation and with high probability, it takes $\Theta(Dn/(\eta L))$ iterations for two Noisy-SGD processes to differ by distance $D$.) In other words, $\bar{T}$ is effectively the smallest number of iterations before the final iterates could be maximally distinguishable.
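As a quick sanity check of this heuristic, the following back-of-the-envelope snippet plugs made-up parameter values into the reasoning of the parenthetical above; the per-step drift bound and the resulting horizon are order-of-magnitude quantities, not the constants of the formal proof.

```python
# Heuristic burn-in horizon: expected per-step drift between the coupled
# processes is (b/n) * (2*eta*L/b) = 2*eta*L/n, so reaching distance D takes
# roughly D*n/(2*eta*L) iterations. All parameter values below are made up.
D, n, b, eta, L = 1.0, 10_000, 100, 1e-2, 1.0
per_step_drift = (b / n) * (2 * eta * L / b)
T_bar = D / per_step_drift
print(f"burn-in horizon ~ {T_bar:.0f} iterations")   # 500000 for these values
```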
To prove Theorem 1.3, we show a matching upper bound (in §3) and lower bound (in §4). These two bounds are formally stated as Theorems 3.1 and 4.1, respectively. See §1.3 for an overview of our techniques and how they depart from previous approaches.
We conclude this section with several remarks about Theorem 1.3.
Remark 1.4 (Tight DP characterization).
While Theorem 1.3 characterizes the privacy loss of Noisy-SGD in terms of RDP, our results can be restated in terms of standard DP bounds. Specifically, by a standard conversion from RDP to DP (Lemma 2.5), if $\delta$ is not smaller than exponentially small in the problem parameters and if the resulting bound is $O(1)$, it follows that Noisy-SGD is $(\varepsilon, \delta)$-DP for
$$\varepsilon \;=\; \Theta\!\left(\frac{\eta L}{n\sigma}\,\sqrt{\min\{T, \bar{T}\}\,\log(1/\delta)}\right). \qquad (1.3)$$
A matching DP lower bound, for the same large range of parameters as in Theorem 1.3, is proved along the way when we establish our RDP lower bound in §4.


Discussion of extensions.
Our analysis techniques immediately extend to other settings:
• Strongly convex losses. If the convexity assumption on the losses is replaced by $m$-strong convexity, then Noisy-SGD enjoys better privacy. Specifically, the bound in Theorem 1.3 extends identically, except that the threshold for no further privacy loss improves from linear to logarithmic in $D$, namely $\bar{T} = \tilde{\Theta}(1/(\eta m))$. For example, if the stepsize $\eta = 1/M$, then $\bar{T} = \tilde{\Theta}(\kappa)$, where $\kappa = M/m$ denotes the condition number of the losses. This matches the independent results of [38, 47] (see Footnote 3) and uses completely different techniques from them. Details in §5.1; see also Figure 2(b) for an illustration of the privacy bound.
• Cyclic batch updates. Noisy-SGD is sometimes run with batches that are chosen cyclically rather than randomly. The bound in Theorem 1.3 extends identically, except for an initial transient phase in which there is unavoidably a large privacy loss, since the algorithm may choose to update using a sensitive datapoint in an early batch. Details in §5.2; see also Figure 2(a) for an illustration of the privacy bound.
• Non-uniform stepsizes. The bound in Theorem 1.3 also extends to non-uniform stepsize schedules; details in §5.3.
Discussion of assumptions.
• Boundedness. All previous privacy analyses are oblivious to any sort of diameter bound and therefore unavoidably increase ad infinitum (for strongly convex losses, boundedness is unnecessary for convergent privacy; see §5.1), c.f. our lower bound in Theorem 1.3. Our analysis is the first to exploit boundedness: whereas previous analyses can only argue that having a smaller constraint set does not worsen the privacy loss, we show that it strictly improves the privacy loss, in fact making it convergent as $T \to \infty$. See the techniques section §1.3.
We emphasize that this diameter bound is a mild constraint and is in common with all existing theoretical bounds for utility/optimization. Indeed, every utility guarantee for (non-strongly) convex losses also has a dependence on some similar diameter bound; this is inevitable simply due to the difference between the initialization and optimum. We also mention that one can solve an unconstrained problem by solving constrained problems with norm bounds, paying only a small logarithmic overhead on the number of solves and a constant overhead in the privacy loss using known techniques [29]. Moreover, in many optimization problems, the solution set is naturally constrained either from the problem formulation or application.
• Smoothness. The smoothness assumption on the losses can be relaxed if one runs Noisy-SGD on a smoothed version of the objective. This can be done using standard smoothing techniques, e.g., Gaussian convolution smoothing [19, §5.5], or Moreau–Yosida smoothing by replacing gradient steps with proximal steps [6, §4].
• Convexity. Convexity appears to be essential for privacy bounds that do not increase ad infinitum in $T$. Our analysis exploits convexity to ensure that Noisy-SGD is a contractive process. However, convexity is too restrictive when training deep networks, and it is an interesting question whether the assumption of convexity can be relaxed. Any such result appears to require new techniques—if it is even true. The key challenge is that for any iterative process whose set of reachable fixed points is non-convex, there will be non-contractivity at the boundary between basins of attraction—and this appears to preclude arguments based on Privacy Amplification by Iteration, which at present is the only non-sampling-based technique with a hope of establishing convergent privacy bounds (see the techniques section §1.3).
• Lipschitzness. This assumption can be safely removed, as the (necessary) assumptions of smoothness and bounded diameter already imply Lipschitzness (with a parameter on the order of $MD$). We write our result in this way in order to isolate where this dependence comes from more simply, and also because the Lipschitz parameter $L$ may be much smaller than $MD$.
Our analysis only uses Lipschitzness to bound gradient sensitivity, just like all previous analyses that use Privacy Amplification by Sampling. For details on this, as well as an example where gradient sensitivity is more useful in practice than Lipschitzness, see §5.4.
• Mild assumptions on parameters. The lower bound in Theorem 1.3 makes the mild assumption that the diameter $D$ of the decision set is asymptotically larger than both the movement size $\eta L$ of one gradient step and the standard deviation $\sigma$ of the random Gaussian noise increments, so that the noise does not overwhelm the learning.
The upper bound uses the technique of Privacy Amplification by Sampling, and thus inherits two mild assumptions on the parameters that are also inherent in all previous analyses of Noisy-SGD. One is an upper bound assumption on the Rényi parameter $\alpha$, and the other is a lower bound assumption on the noise $\sigma$ so that one step of Noisy-SGD provides at least constant privacy to any user. These restrictions affect neither the numerical bounds, which can be computed for any $\alpha$ and $\sigma$ (see §2.3), nor the asymptotic $(\varepsilon,\delta)$-DP bounds in most parameter regimes of interest. We also remark that our results require no such restriction if the batches are chosen cyclically, and thus also if full-batch gradients are used (see Theorem 5.8).
1.3 Techniques
Upper bound on privacy.
Our analysis technique departs from recent approaches based on fast mixing. This allows us to bypass the many longstanding technical challenges discussed in §1.1.
Instead, our analysis combines the techniques of Privacy Amplification by Iteration and Privacy Amplification by Sampling. As discussed in §1.1, previous analyses based on these techniques yield loose privacy bounds that increase ad infinitum in $T$. Indeed, bounds based on Privacy Amplification by Sampling inevitably diverge since they “pay” for releasing the entire sequence of iterates, each of which leaks more information about the private data [5, 2]. Privacy Amplification by Iteration avoids releasing the entire sequence by directly arguing about the final iterate; however, previous arguments based on this technique are unable to exploit a diameter bound on the constraint set, and therefore inevitably lead to privacy bounds which grow unboundedly in $T$ [19].
Our analysis hinges on the observation that when the iterates are constrained to a bounded domain, we can combine the techniques of Privacy Amplification by Iteration and Privacy Amplification by Sampling in order to only pay for the privacy loss from the final $\tau$ iterations. To explain this, first recall that by definition, differential privacy of Noisy-SGD means that the final iterate of Noisy-SGD is similar when run on two (adjacent but) different datasets from the same initialization. At an intuitive level, we establish this by arguing about the privacy of the following three scenarios:
(i) Run Noisy-SGD for $T$ iterations on different datasets from the same initialization.
(ii) Run Noisy-SGD for the final $\tau$ iterations on different datasets from different initializations.
(iii) Run Noisy-SGD for the final $\tau$ iterations on the same dataset from different initializations.
In order to analyze (i), which as mentioned is the definition of Noisy-SGD being differentially private, we argue that no matter how different the two input datasets are, the two Noisy-SGD iterates at iteration $T - \tau$ are within distance $D$ of each other, since they lie within the constraint set $\mathcal{K}$ of diameter $D$. Thus, in particular, the privacy of scenario (i) is at most the privacy of scenario (ii), whose initializations can be arbitrarily different so long as they are within distance $D$ of each other. In this way, we arrive at a scenario which is independent of $T$. This is crucial for obtaining privacy bounds which do not diverge in $T$; however, previous privacy analyses could not proceed in this way because they could not argue about different initializations.
In order to analyze (ii), we use the technique of Privacy Amplification by Sampling—but only on these final $\tau$ iterations. In words, this enables us to argue that the noisy gradients that Noisy-SGD uses in the last $\tau$ iterations are indistinguishable up to a certain privacy loss (to be precise, the RDP scales linearly in $\tau$), despite the fact that they are computed on different input datasets. This enables us to reduce analyzing the privacy of scenario (ii) to scenario (iii).
In order to analyze (iii), we use a new diameter-aware version of Privacy Amplification by Iteration (Proposition 3.2). In words, this shows that running the same Noisy-SGD updates on two different initializations masks their initial difference, due to both the noise and the contraction maps that define each Noisy-SGD update. Of course, this indistinguishability improves the longer Noisy-SGD is run (to be precise, we show that the RDP scales inversely in $\tau$). (This step of the analysis has a natural interpretation in terms of the mixing time of the Langevin Algorithm to its biased stationary distribution; this connection and its implications are explored in detail in [3].)
We arrive, therefore, at a privacy loss which is the sum of the bounds from the Privacy Amplification by Sampling argument in step (ii) and the Privacy Amplification by Iteration argument in step (iii). This privacy loss has a natural tradeoff in the parameter $\tau$: the former bound increases in $\tau$ (since it pays for the release of $\tau$ noisy gradients), whereas the latter bound decreases in $\tau$ (since more iterations of the same Noisy-SGD process enable better masking of the different initializations). Balancing these two privacy bounds leads to the final choice $\tau \asymp \bar{T}$. (We remark that this quantity is not just an artefact of our analysis, but in fact is the true answer for when the privacy loss stops increasing, as established by our matching lower bound.)
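To make the balancing step concrete, the sketch below minimizes a generic tradeoff of the form $A\tau + B/\tau$; here $A$ and $B$ are abstract stand-ins for the Sampling and Iteration bounds (they are not the paper's exact expressions), and the optimizer $\tau = \sqrt{B/A}$ is what drives the choice of the burn-in horizon.

```python
# Generic balancing of an increasing term A*tau and a decreasing term B/tau.
import sympy as sp

A, B, tau = sp.symbols("A B tau", positive=True)
total = A * tau + B / tau
tau_star = sp.solve(sp.diff(total, tau), tau)[0]           # sqrt(B/A)
print(tau_star, sp.simplify(total.subs(tau, tau_star)))    # optimum value 2*sqrt(A*B)
```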
At a high-level, our analysis proceeds by carefully running these arguments in parallel.
We emphasize that this analysis is the first to show that the privacy loss of Noisy-SGD can strictly improve if the constraint set is made smaller. In contrast, previous analyses only argue that restricting the constraint set cannot worsen the privacy loss (e.g., by using a post-processing inequality to analyze the projection step). The key technical challenge in exploiting a diameter bound is dealing with the complicated non-linearities that arise when interleaving projections with noisy gradient updates. Our techniques enable such an analysis.
Lower bound on privacy.
We construct two adjacent datasets for which the corresponding Noisy-SGD processes are random walks—one symmetric, one biased—that are confined to an interval of length $D$. In this way, we reduce the question of how large the privacy loss of Noisy-SGD is to the question of how distinguishable a constrained symmetric random walk is from a constrained biased random walk. The core technical challenge is that the distributions of the iterates of these random walks are intractable to reason about explicitly—due to the highly non-linear interactions between the projections and random increments. Briefly, our key technique here is a way to modify these processes so that, on the one hand, their distinguishability is essentially the same, and, on the other hand, no projections occur with high probability—allowing us to explicitly compute their distributions and thus also their distinguishability. Details in §4.
1.4 Other related work
Private sampling.
The mixing time of (stochastic) Langevin Dynamics has been extensively studied in recent years, starting with [13, 15]. A recent focus in this vein is analyzing mixing in more stringent notions of distance, such as the Rényi divergence [43, 22], in part because this is necessary for proving privacy bounds. In addition to the aforementioned results, several other works study sampling from the exponential-mechanism distribution, or regularized versions of it, from a privacy viewpoint. [31] proposed a mechanism of this kind that works for unbounded domains and established differential privacy guarantees for it. Recently, [24] gave better privacy bounds for such a regularized exponential mechanism, and designed an efficient sampler based only on function evaluation. Also, [23] showed that the continuous Langevin Diffusion has optimal utility bounds for various private optimization problems.
As alluded to in §1.1, there are several core issues with trying to prove DP bounds for Noisy-SGD by directly combining “fast mixing” bounds with “private once mixed” bounds. First, mixing results typically do not apply, e.g., since DP requires mixing in stringent divergences like Rényi, or because realistic settings with constraints, stochasticity, and lack of strong convexity are difficult to analyze—indeed, understanding the mixing time for such settings is a challenging open problem. Second, even when fast mixing bounds do apply, directly combining them with “private once mixed” bounds unavoidably leads to DP bounds that are loose to the point of being essentially useless (e.g., the inevitable dimension dependence in mixing bounds to the stationary distribution of the continuous-time Langevin Diffusion would lead to dimension dependence in DP bounds, which should not occur—as we show). Third, even if a Markov chain were private after mixing, one cannot conclude from this that it is private beforehand—indeed, there are simple Markov chains which are private after mixing, yet are exponentially non-private beforehand [21].
Utility bounds.
In the field of private optimization, one separately analyzes two properties of algorithms: (i) the privacy loss as a function of the number of iterations, and (ii) the utility (a.k.a., optimization error) as a function of the number of iterations. These two properties can then be combined to obtain privacy-utility tradeoffs. The purpose of this paper is to completely resolve (i); this result can then be combined with any bound on (ii).
Utility bounds for SGD are well understood [39], and these analyses have enabled understanding utility bounds for Noisy-SGD in empirical [5] and population [6] settings. However, there is a big difference between the minimax-optimal utility bounds in theory and what is desired in practice. Indeed, while in theory a single pass of Noisy-SGD achieves the minimax-optimal population risk [20], in practice Noisy-SGD benefits from running longer to get more accurate training. In fact, this gap between theory and practice is well documented for non-private SGD as well, where one epoch is minimax-optimal in theory, but in practice more epochs help. Said simply, this is because typical problems are not worst-case problems (i.e., minimax-optimal theoretical bounds are typically not representative of practice). For these practical settings, in order to run Noisy-SGD longer, it is essential to have privacy bounds which do not increase ad infinitum. Our paper resolves this.
1.5 Organization
Section 2 recalls relevant preliminaries. Our main result Theorem 1.3 is proved in Section 3 (upper bound) and Section 4 (lower bound). Section 5 contains extensions to strongly convex losses, cyclic batch updates, and non-uniform stepsizes. §6 concludes with future research directions motivated by these results. For brevity, several proofs are deferred to Appendix A. For simplicity, we do not optimize constants in the main text; this is done in Appendix B.
2 Preliminaries
In this section, we recall relevant preliminaries about convex optimization (§2.1), differential privacy (§2.2), and two by-now-standard techniques for analyzing the privacy of optimization algorithms—namely, Privacy Amplification by Sampling (§2.3) and Privacy Amplification by Iteration (§2.4).
Notation.
We write $\mathrm{Law}(X)$ to denote the law of a random variable $X$, and $\mathrm{Law}(X \mid E)$ to denote the law of $X$ given the event $E$. We write $X_{\le t}$ as shorthand for the vector concatenating $X_0, \dots, X_t$. We write $f_{\#}\mu$ to denote the pushforward of a distribution $\mu$ under a function $f$, i.e., the law of $f(X)$ where $X \sim \mu$. We abuse notation slightly by writing $f_{\#}\mu$ even when $f$ is a “random function”, by which we mean a Markov transition kernel. We write $\mu * \nu$ to denote the convolution of probability distributions $\mu$ and $\nu$, i.e., the law of $X + Y$ where $X \sim \mu$ and $Y \sim \nu$ are independently distributed. We write $(1-q)\mu + q\nu$ to denote the mixture distribution that is $\mu$ with probability $1-q$ and $\nu$ with probability $q$. We write $\mathcal{A}(x)$ to denote the output of an algorithm $\mathcal{A}$ run on input $x$; this is a probability distribution if $\mathcal{A}$ is a randomized algorithm.
2.1 Convex optimization
Throughout, the loss function corresponding to a data point $x_i$ or $x'_i$ is denoted by $f_{x_i}$ or $f_{x'_i}$, respectively. While the dependence on the data point is arbitrary, the loss functions are assumed to be convex in the argument $w$ (a.k.a. the machine learning model we seek to train). Throughout, the set $\mathcal{K}$ of possible models is a convex subset of $\mathbb{R}^d$. In order to establish both optimization and privacy guarantees, two additional assumptions are required on the loss functions. Recall that a differentiable function $f : \mathbb{R}^d \to \mathbb{R}$ is said to be:
• $L$-Lipschitz if $|f(w) - f(v)| \le L\,\|w - v\|$ for all $w, v$.
• $M$-smooth if $\|\nabla f(w) - \nabla f(v)\| \le M\,\|w - v\|$ for all $w, v$.
An important implication of smoothness is the following well-known fact; for a proof, see e.g., [35].
Lemma 2.1 (Small gradient steps on smooth convex losses are contractions).
Suppose $f$ is a convex, $M$-smooth function. For any stepsize $\eta \le 2/M$, the mapping $w \mapsto w - \eta\nabla f(w)$ is a contraction.
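As a quick numerical sanity check of Lemma 2.1 (not part of the proof), one can verify non-expansiveness of the gradient-step map on a randomly generated convex quadratic, where smoothness is easy to control:

```python
# Check that w -> w - eta*grad f(w) is nonexpansive for a convex, M-smooth
# quadratic f(w) = 0.5 * w^T H w (so grad f(w) = H w) whenever eta <= 2/M.
import numpy as np

rng = np.random.default_rng(0)
d, M = 5, 4.0
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
H = Q @ np.diag(rng.uniform(0, M, size=d)) @ Q.T     # PSD, spectrum in [0, M]
eta = 2.0 / M
step = lambda w: w - eta * (H @ w)

u, v = rng.normal(size=d), rng.normal(size=d)
print(np.linalg.norm(step(u) - step(v)) <= np.linalg.norm(u - v) + 1e-12)   # True
```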
2.2 Differential privacy and Rényi differential privacy
Over the past two decades, differential privacy (DP) has become a standard approach for quantifying how much sensitive information an algorithm leaks about the dataset it is run upon. Differential privacy was first proposed in [17], and is now widely used in industry (e.g., at Apple [36], Google [18], Microsoft [14], and LinkedIn [37]), as well as in government data collection such as the 2020 US Census [44]. In words, DP measures how distinguishable the output of an algorithm is when run on two “adjacent” datasets—i.e., two datasets which differ on at most one datapoint. (This is sometimes called “replace adjacency”. The other canonical notion of dataset adjacency is “remove adjacency”, which means that one dataset is the other plus an additional datapoint. All results in this paper extend immediately to both settings; for details see Remark B.2.)
Definition 2.2 (Differential Privacy).
A randomized algorithm $\mathcal{A}$ satisfies $(\varepsilon, \delta)$-DP if for any two adjacent datasets $x, x'$ and any measurable event $E$,
$$\mathbb{P}[\mathcal{A}(x) \in E] \;\le\; e^{\varepsilon}\,\mathbb{P}[\mathcal{A}(x') \in E] + \delta.$$
In order to prove DP guarantees, we work with the related notion of Rényi Differential Privacy (RDP) introduced by [32], since RDP is more amenable to our analysis techniques and is readily converted to DP guarantees. To define RDP, we first recall the definition of Rényi divergence.
Definition 2.3 (Rényi divergence).
The Rényi divergence between two probability measures $\mu$ and $\nu$ of order $\alpha \in (1, \infty)$ is
$$D_{\alpha}(\mu \,\|\, \nu) \;:=\; \frac{1}{\alpha - 1}\,\log \int \left(\frac{d\mu}{d\nu}\right)^{\alpha} d\nu$$
if $\mu \ll \nu$, and is $+\infty$ otherwise. Here we adopt the standard convention that $0/0 = 0$ and $x/0 = \infty$ for $x > 0$. The Rényi divergences of order $\alpha \in \{1, \infty\}$ are defined by continuity.
Definition 2.4 (Rényi Differential Privacy).
A randomized algorithm $\mathcal{A}$ satisfies $(\alpha, \varepsilon)$-RDP if for any two adjacent datasets $x, x'$,
$$D_{\alpha}\big(\mathcal{A}(x) \,\|\, \mathcal{A}(x')\big) \;\le\; \varepsilon.$$
It is straightforward to convert an RDP bound to a DP bound as follows [32, Proposition 1].
Lemma 2.5 (RDP-to-DP conversion).
Suppose that an algorithm satisfies $(\alpha, \varepsilon)$-RDP for some $\alpha > 1$. Then the algorithm satisfies $(\varepsilon', \delta)$-DP for any $\delta \in (0, 1)$ and $\varepsilon' = \varepsilon + \frac{\log(1/\delta)}{\alpha - 1}$.
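For concreteness, the conversion in Lemma 2.5 is a one-line formula; the helper below is a hypothetical utility, not an API of any DP library.

```python
# RDP-to-DP conversion of Lemma 2.5: (alpha, eps)-RDP implies
# (eps + log(1/delta)/(alpha - 1), delta)-DP for any delta in (0, 1).
import math

def rdp_to_dp(alpha: float, eps_rdp: float, delta: float) -> float:
    assert alpha > 1 and 0 < delta < 1
    return eps_rdp + math.log(1.0 / delta) / (alpha - 1.0)

print(rdp_to_dp(alpha=10.0, eps_rdp=0.5, delta=1e-6))   # ~2.04
```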
In what follows, we recall three basic properties of RDP that we use repeatedly in our analysis. The first property regards convexity. While the KL divergence (the case $\alpha = 1$) is jointly convex in its two arguments, for $\alpha > 1$ the Rényi divergence is only jointly quasi-convex. See, e.g., [42, Theorem 13] for a proof and a further discussion of partial convexity properties of the Rényi divergence.
Lemma 2.6 (Joint quasi-convexity of Rényi divergence).
For any Rényi parameter $\alpha \ge 1$, any mixing probability $q \in [0, 1]$, and any two pairs of probability distributions $(\mu_0, \nu_0)$ and $(\mu_1, \nu_1)$,
$$D_{\alpha}\big((1-q)\mu_0 + q\mu_1 \;\big\|\; (1-q)\nu_0 + q\nu_1\big) \;\le\; \max\big\{D_{\alpha}(\mu_0 \,\|\, \nu_0),\; D_{\alpha}(\mu_1 \,\|\, \nu_1)\big\}.$$
The second property states that pushing two measures forward through the same (possibly random) function cannot increase the Rényi divergence. This is the direct analog of the classic “data-processing inequality” for the KL divergence. See e.g., [42, Theorem 9] for a proof.
Lemma 2.7 (Post-processing property of Rényi divergence).
For any Rényi parameter $\alpha \ge 1$, any (possibly random) function $h$, and any probability distributions $\mu, \nu$,
$$D_{\alpha}(h_{\#}\mu \,\|\, h_{\#}\nu) \;\le\; D_{\alpha}(\mu \,\|\, \nu).$$
The third property is the appropriate analog of the chain rule for the KL divergence (the direct analog of the chain rule does not hold for Rényi divergences of order $\alpha > 1$). This bound has appeared in various forms in previous work (e.g., [2, 32, 16]); we state the following two equivalent versions of this bound since at various points in our analysis it will be more convenient to reference one or the other. A proof of the former version is in [32, Proposition 3]. The proof of the latter is similar to [2, Theorem 2]; for the convenience of the reader, we provide a brief proof in Section A.1.
Lemma 2.8 (Strong composition for RDP, v1).
Suppose algorithms $\mathcal{A}_1$ and $\mathcal{A}_2$ satisfy $(\alpha, \varepsilon_1)$-RDP and $(\alpha, \varepsilon_2)$-RDP, respectively. Let $\mathcal{A}$ denote the algorithm which, given a dataset $x$ as input, outputs the composition $\big(\mathcal{A}_1(x),\ \mathcal{A}_2(x, \mathcal{A}_1(x))\big)$. Then $\mathcal{A}$ satisfies $(\alpha, \varepsilon_1 + \varepsilon_2)$-RDP.
Lemma 2.9 (Strong composition for RDP, v2).
For any Rényi parameter $\alpha \ge 1$ and any two sequences of random variables $(X_1, \dots, X_k)$ and $(Y_1, \dots, Y_k)$,
$$D_{\alpha}\big(\mathrm{Law}(X_1, \dots, X_k) \,\big\|\, \mathrm{Law}(Y_1, \dots, Y_k)\big) \;\le\; \sum_{i=1}^{k}\; \sup_{y_1, \dots, y_{i-1}} D_{\alpha}\Big(\mathrm{Law}\big(X_i \mid X_{<i} = y_{<i}\big) \,\Big\|\, \mathrm{Law}\big(Y_i \mid Y_{<i} = y_{<i}\big)\Big).$$
2.3 Privacy Amplification by Sampling
A core technique in the DP literature is Privacy Amplification by Sampling [27] which quantifies the idea that a private algorithm, run on a small random sample of the input, becomes more private. There are several ways of formalizing this. For our purpose of analyzing Noisy-SGD, we must understand how distinguishable a noisy stochastic gradient update is when run on two adjacent datasets. This is precisely captured by the “Sampled Gaussian Mechanism”, which is a composition of two operations: subsampling and additive Gaussian noise. Below we recall a convenient statement of this in terms of RDP from [33]. We start with a preliminary definition in one dimension.
Definition 2.10 (Rényi Divergence of the Sampled Gaussian Mechanism).
For Rényi parameter $\alpha \ge 1$, mixing probability $q \in [0, 1]$, and noise parameter $\sigma > 0$, define
$$S_{\alpha}(q, \sigma) \;:=\; D_{\alpha}\Big((1-q)\,\mathcal{N}(0, \sigma^2) + q\,\mathcal{N}(1, \sigma^2) \;\Big\|\; \mathcal{N}(0, \sigma^2)\Big).$$
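The quantity $S_\alpha(q, \sigma)$ can be evaluated by one-dimensional numerical quadrature, which is what makes numerically computed bounds (c.f. Remark B.1) possible. The sketch below assumes the convention written above (mixture as the first argument of the divergence) and is only an illustration, not the paper's numerical procedure.

```python
# Numerically evaluate S_alpha(q, sigma) = D_alpha(mixture || N(0, sigma^2))
# by quadrature in log-space (to avoid overflow in the tails).
import numpy as np
from scipy import integrate, stats

def sgm_renyi(alpha: float, q: float, sigma: float) -> float:
    logp0 = lambda x: stats.norm.logpdf(x, 0.0, sigma)
    logp1 = lambda x: stats.norm.logpdf(x, 1.0, sigma)
    logmix = lambda x: np.logaddexp(np.log1p(-q) + logp0(x), np.log(q) + logp1(x))
    integrand = lambda x: np.exp(alpha * logmix(x) + (1 - alpha) * logp0(x))
    val, _ = integrate.quad(integrand, -20 * sigma, 20 * sigma, limit=200)
    return float(np.log(val)) / (alpha - 1)

# For small q and moderate alpha this is on the order of alpha*q^2/sigma^2
# (c.f. the asymptotic bound recalled in Lemma 2.12 below).
print(sgm_renyi(alpha=8.0, q=0.01, sigma=2.0))
```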
Next, we provide a simple lemma that extends this notion to higher dimensions. In words, the proof simply argues that the worst-case distribution is a Dirac supported on a point of maximal norm, and then reduces the multivariate setting to the univariate setting via rotational invariance. The proof is similar to [33, Theorem 4]; for convenience, details are provided in Section A.2.
Lemma 2.11 (Extrema for Rényi Divergence of the Sampled Gaussian Mechanism).
For any Rényi parameter $\alpha \ge 1$, mixing probability $q \in [0, 1]$, noise level $\sigma > 0$, dimension $d$, and radius $r > 0$,
$$\sup_{\mu \in \mathcal{P}_r(\mathbb{R}^d)} \; D_{\alpha}\Big((1-q)\,\mathcal{N}(0, \sigma^2 I_d) + q\,\big(\mu * \mathcal{N}(0, \sigma^2 I_d)\big) \;\Big\|\; \mathcal{N}(0, \sigma^2 I_d)\Big) \;=\; S_{\alpha}\!\big(q, \sigma/r\big),$$
where above $\mathcal{P}_r(\mathbb{R}^d)$ denotes the set of Borel probability distributions that are supported on the ball of radius $r$ in $\mathbb{R}^d$.
Finally, we recall the tight bound [33, Theorem 11] on these quantities. This bound restricts to Rényi parameters $\alpha$ at most a threshold $\bar{\alpha} = \bar{\alpha}(q, \sigma)$, which is defined to be the largest $\alpha$ satisfying certain explicit conditions in terms of $q$ and $\sigma$ (see [33, Theorem 11]). While we use Lemma 2.12 to prove the asymptotics in our theoretical results, we emphasize that (i) our bounds can be computed numerically for any $\alpha$, see Remark B.1; and (ii) this upper bound on $\alpha$ does not preclude the typical parameter regimes of interest, see the discussion in §1.2.
Lemma 2.12 (Bound on Rényi Divergence of the Sampled Gaussian Mechanism).
Consider Rényi parameter $\alpha > 1$, mixing probability $q \le 1/5$, and noise level $\sigma \ge 4$. If $\alpha \le \bar{\alpha}(q, \sigma)$, then
$$S_{\alpha}(q, \sigma) \;\le\; \frac{2\alpha q^2}{\sigma^2}.$$
2.4 Privacy Amplification by Iteration
The Privacy Amplification by Iteration technique of [19] bounds the privacy loss of an iterative algorithm without “releasing” the entire sequence of iterates—unlike arguments based on Privacy Amplification by Sampling, c.f., the discussion in §1.3. This technique applies to processes which are generated by a Contractive Noisy Iteration (CNI). We begin by recalling this definition, albeit in a slightly more general form that allows for two differences. The first difference is allowing the contractions to be random; albeit simple, this generalization is critical for analyzing Noisy-SGD because a stochastic gradient update is random. The second difference is that we project each iterate; this generalization is solely for convenience as it simplifies the exposition. (Since projections are contractive, these processes could be dubbed Contractive Noisy Contractive Iterations (CNCI); however, we use Definition 2.13 as it more closely mirrors previous usages of CNI. Alternatively, since the composition of contractions is a contraction, the projection could be combined with the contraction maps in order to obtain a bona fide CNI; however, this requires defining auxiliary shifted processes, which leads to a more complicated analysis (this was done in the original arXiv version v1).)
Definition 2.13 (Contractive Noisy Iteration).
Consider a (random) initial state $W_0 \in \mathbb{R}^d$, a sequence of (random) contractions $\phi_1, \dots, \phi_T$ from $\mathbb{R}^d$ to $\mathbb{R}^d$, a sequence of noise distributions $\xi_1, \dots, \xi_T$, and a convex set $\mathcal{K} \subseteq \mathbb{R}^d$. The Projected Contractive Noisy Iteration $\mathrm{CNI}_{\mathcal{K}}(W_0, \{\phi_t\}, \{\xi_t\})$ is the law of the final iterate $W_T$ of the process
$$W_{t+1} \;=\; \Pi_{\mathcal{K}}\big[\phi_{t+1}(W_t) + Z_{t+1}\big],$$
where $Z_{t+1}$ is drawn independently from $\xi_{t+1}$.
Privacy Amplification by Iteration is based on two key lemmas. We recall both below in the case of Gaussian noise since this suffices for our purposes. We begin with a preliminary definition.
Definition 2.14 (Shifted Rényi Divergence; Definition 8 of [19]).
Let $\mu, \nu$ be probability distributions over $\mathbb{R}^d$. For parameters $z \ge 0$ and $\alpha \ge 1$, the shifted Rényi divergence is
$$D_{\alpha}^{(z)}(\mu \,\|\, \nu) \;:=\; \inf_{\mu' \,:\; W_{\infty}(\mu, \mu') \le z} D_{\alpha}(\mu' \,\|\, \nu).$$
(Recall that the $\infty$-Wasserstein metric $W_{\infty}(\mu, \mu')$ between two distributions $\mu$ and $\mu'$ is the smallest real number $z$ for which the following holds: there exists a joint distribution $(X, X')$ with first marginal $\mu$ and second marginal $\mu'$, under which $\|X - X'\| \le z$ almost surely.)
The shift-reduction lemma [19, Lemma 20] bounds the shifted Rényi divergence between two distributions that are convolved with Gaussian noise.
Lemma 2.15 (Shift-reduction lemma).
Let $\mu, \nu$ be probability distributions on $\mathbb{R}^d$. For any $z, a \ge 0$ and noise level $\sigma > 0$,
$$D_{\alpha}^{(z)}\big(\mu * \mathcal{N}(0, \sigma^2 I_d) \,\big\|\, \nu * \mathcal{N}(0, \sigma^2 I_d)\big) \;\le\; D_{\alpha}^{(z+a)}(\mu \,\|\, \nu) + \frac{\alpha a^2}{2\sigma^2}.$$
The contraction-reduction lemma [19, Lemma 21] bounds the shifted Rényi divergence between the pushforwards of two distributions through similar contraction maps. Below we state a slight generalization of [19, Lemma 21] that allows for random contraction maps. The proof of this generalization is similar, except that here we exploit the quasi-convexity of the Rényi divergence to handle the additional randomness; details in Section A.3.
Lemma 2.16 (Contraction-reduction lemma, for random contractions).
Suppose $\phi, \phi'$ are random functions from $\mathbb{R}^d$ to $\mathbb{R}^d$ such that (i) each of $\phi, \phi'$ is a contraction almost surely; and (ii) there exists a coupling of $(\phi, \phi')$ under which $\sup_{w} \|\phi(w) - \phi'(w)\| \le s$ almost surely. Then for any probability distributions $\mu$ and $\nu$ on $\mathbb{R}^d$ and any $z \ge 0$,
$$D_{\alpha}^{(z+s)}\big(\phi_{\#}\mu \,\big\|\, \phi'_{\#}\nu\big) \;\le\; D_{\alpha}^{(z)}(\mu \,\|\, \nu).$$
The original Privacy Amplification by Iteration argument combines these two lemmas to establish the following bound. (Strictly speaking, Proposition 2.17 is a generalization of [19, Theorem 22], since it allows for randomized contractions and projections in the CNI (c.f. Definition 2.13). However, the proof is identical, modulo replacing the original contraction-reduction lemma with its randomized generalization (Lemma 2.16) and analyzing the projection step using the contraction-reduction lemma once more.)
Proposition 2.17 (Original PABI bound).
Let $X_T$ and $X'_T$ denote the outputs of $\mathrm{CNI}_{\mathcal{K}}(X_0, \{\phi_t\}, \{\xi_t\})$ and $\mathrm{CNI}_{\mathcal{K}}(X_0, \{\phi'_t\}, \{\xi_t\})$, where $\xi_t = \mathcal{N}(0, \sigma^2 I_d)$. Let $s_t \ge \sup_{w}\|\phi_t(w) - \phi'_t(w)\|$, and consider any sequence $a_1, \dots, a_T \ge 0$ such that $z_t := \sum_{i \le t}(s_i - a_i)$ is non-negative for all $t$ and satisfies $z_T = 0$. Then
$$D_{\alpha}(X_T \,\|\, X'_T) \;\le\; \frac{\alpha}{2\sigma^2}\sum_{t=1}^{T} a_t^2.$$
3 Upper bound on privacy
In this section, we prove the upper bound in Theorem 1.3. The formal statement of this result is as follows; see §1.2 for a discussion of the mild assumptions on $\alpha$ and $\sigma$.
Theorem 3.1 (Privacy upper bound for Noisy-SGD).
Let $\mathcal{K}$ be a convex set of diameter $D$, and consider optimizing convex losses over $\mathcal{K}$ that are $L$-Lipschitz and $M$-smooth. For any number of iterations $T$, dataset size $n$, batch size $b \le n$, stepsize $\eta \le 2/M$, noise parameter $\sigma$, and initialization $w_0 \in \mathcal{K}$, Noisy-SGD satisfies $(\alpha, \varepsilon)$-RDP for all $\alpha$ up to the threshold of Lemma 2.12 and
$$\varepsilon \;=\; O\!\left(\frac{\alpha\, \eta^2 L^2}{n^2 \sigma^2}\,\min\big\{T,\ \bar{T}\big\}\right),$$
where $\bar{T} := \frac{Dn}{\eta L}$.
Below, in §3.1 we first isolate a simple ingredient in our analysis as it may be of independent interest. Then in §3.2 we prove Theorem 3.1.
3.1 Privacy Amplification by Iteration bounds that are not vacuous as $T \to \infty$
Recall from the preliminaries section §2.4 that Privacy Amplification by Iteration arguments, while tight for a small number of iterations $T$, provide vacuous bounds as $T \to \infty$ (c.f., Proposition 2.17). The following proposition overcomes this by establishing privacy bounds which are independent of the number of iterations $T$. This result only requires additionally assuming that the distance between the two processes is bounded at some intermediate time. This is a mild assumption that is satisfied automatically if, e.g., both CNI processes lie in a constraint set of bounded diameter.
Proposition 3.2 (New PABI bound that is not vacuous as $T \to \infty$).
Let $X_T$, $X'_T$, and $s_t$ be as in Proposition 2.17. Consider any $\tau \in \{1, \dots, T\}$ and sequence $a_{T-\tau+1}, \dots, a_T \ge 0$ such that $z_t := D + \sum_{T-\tau < i \le t}(s_i - a_i)$ is non-negative for all $t > T - \tau$ and satisfies $z_T = 0$. If $\mathcal{K}$ has diameter $D$, then:
$$D_{\alpha}(X_T \,\|\, X'_T) \;\le\; \frac{\alpha}{2\sigma^2}\sum_{t=T-\tau+1}^{T} a_t^2.$$
In words, the main idea behind Proposition 3.2 is simply to change the original Privacy Amplification by Iteration argument—which bounds the shifted divergence at iteration $T$ by the shifted divergence at iteration $T-1$, and so on, all the way to the shifted divergence at iteration $0$—by instead stopping the induction earlier. Specifically, we only unroll to iteration $T - \tau$, and then use boundedness of the iterates to control the shifted divergence at that intermediate time $T - \tau$.
We remark that this new version of PABI uses the shift in the shifted Rényi divergence for a different purpose than previous work: rather than just using the shift to bound the bias incurred from updating on two different losses, here we also use the shift to exploit the boundedness of the constraint set.
Proof of Proposition 3.2.
Bound the divergence at iteration $T$ by the shifted divergence at iteration $T-1$ as follows:
Above, the first step is because the final shift $z_T$ vanishes; the second step is by the iterative construction of the CNI; the third and final steps are by the contraction-reduction lemma (Lemma 2.16); and the penultimate step is by the shift-reduction lemma (Lemma 2.15).
By repeating the above argument, from iteration $T-1$ all the way down to iteration $T-\tau$, we obtain:
Now observe that the shifted Rényi divergence on the right-hand side vanishes, because the shift there is the diameter $D$, which upper bounds the $W_\infty$ distance between the two iterates at time $T-\tau$. ∎
3.2 Proof of Theorem 3.1
Step 1: Coupling the iterates
Suppose and are adjacent datasets; that is, they agree on all indices except for at most one index . Denote the corresponding loss functions by and , where except possibly . Consider running Noisy-SGD on either dataset or for iterations—call the resulting iterates and , respectively—where we start from the same point and couple the sequence of random batches and the random noise injected in each iteration. That is,
for all $t$, where we have split the Gaussian noise $Z_t$ into independent terms $Z_t = Z'_t + Z''_t$ with $Z'_t \sim \mathcal{N}(0, \sigma'^2 I_d)$ and $Z''_t \sim \mathcal{N}(0, \sigma''^2 I_d)$, for any numbers $\sigma', \sigma''$ satisfying $\sigma'^2 + \sigma''^2 = \sigma^2$. (Simply choosing $\sigma'^2 = \sigma''^2 = \sigma^2/2$ yields our minimax-optimal asymptotics, and we do this at the end of the proof. However, optimizing the constants in this “noise splitting” is helpful in practice (details in Appendix B), and as such we leave $\sigma'$ and $\sigma''$ as variables until the end of the proof.) In words, this noise-splitting enables us to use the noise for both the Privacy Amplification by Sampling and Privacy Amplification by Iteration arguments below.
Importantly, notice that in the definition of the second process, the gradient is taken w.r.t. loss functions corresponding to dataset $x$ rather than $x'$; this is then corrected via a bias in the noise term. Notice also that this bias term is only realized (i.e., is possibly non-centered) with probability $b/n$, because the probability that the differing index $i^*$ is in a random size-$b$ subset of $\{1, \dots, n\}$ is
$$\frac{b}{n}. \qquad (3.1)$$
Step 2: Interpretation as conditional CNI sequences
Observe that conditional on the event that are equal (call their value ), then
where
(3.2) |
Since the following lemma establishes that is contractive, we conclude that conditional on the event that for all , then and are projected CNI (c.f., Definition 2.13) with respect to the same update functions. Here, is a parameter that will be chosen shortly. Intuitively, is the horizon for which we bound all previous privacy leakage only through the fact that are within distance from each other, see the proof overview in §1.3.
Observation 3.3.
The function defined in (3.2) is contractive.
Proof.
For any ,
by plugging in the definition of , using the triangle inequality, and then using the fact that the stochastic gradient update is a contraction (Lemma 2.1). ∎
Step 3: Bounding the privacy loss
Recall that we seek to upper bound . We argue that:
(3.3) |
Above, the first step is by the post-processing inequality for the Rényi divergence (Lemma 2.7), and the second step is by the strong composition rule for the Rényi divergence (Lemma 2.9).
Step 3a: Bounding , using Privacy Amplification by Sampling.
We argue that
(3.4) |
Above, the first step is the definition of . The second step is by the strong composition rule for the Rényi divergence (Lemma 2.9). The third step is because for any , the law of conditional on is the Gaussian distribution ; and by (3.1), the law of conditional on is the mixture distribution that is with probability , and otherwise is where . The final step is by the bound in Lemma 2.11 on the Rényi divergence of the Sampled Gaussian Mechanism, combined with the observation that , which is immediate from the triangle inequality and the -Lipschitz smoothness of the loss functions.
Step 3b: Bounding , using Privacy Amplification by Iteration. As argued in step 2, conditional on the event that for all , then and are projected CNI with respect to the same update functions. Note also that since the iterates lie in the constraint set which has diameter . Therefore we may apply the new Privacy Amplification by Iteration bound (Proposition 3.2) with and to obtain:
(3.5) |
Step 4: Putting the bounds together
By plugging into (3.3) the bound (3.4) on and the bound (3.5) on , we conclude that the algorithm is -RDP for
(3.6) |
By Lemma 2.12, the Sampled Gaussian term above is bounded as claimed for the stated range of $\alpha$ and $\sigma$. (While Lemma 2.12 requires additional assumptions on the parameters, the excluded case admits an alternate proof that does not require Lemma 2.12, or any of its assumptions: specifically, replace (3.4) with the upper bound obtained from the well-known formula for the Rényi divergence between Gaussians. This is tight up to a constant factor, and the rest of the proof proceeds identically.)
By setting , we have that up to a constant factor,
Bound this minimization by
where above the first step is by setting , and the second step is by setting (this can be done if ). Therefore, by combining the above two displays, we obtain
Here the first term in the minimization comes from the simple bound (1.1), which scales linearly in $T$. This completes the proof of Theorem 3.1.
4 Lower bound on privacy
In this section, we prove the lower bound in Theorem 1.3. This is stated formally below and holds even for linear loss functions in one dimension, in fact even when all but one of the loss functions are zero. See §1.2 for a discussion of the mild assumptions on the diameter.
Theorem 4.1 (Privacy lower bound for Noisy-SGD).
There exist universal constants (whose explicit values, given in the proof, we make no attempt to optimize) and a family of $L$-Lipschitz linear loss functions over an interval of diameter $D$ such that the following holds. Consider running Noisy-SGD from an arbitrary initialization with any parameters for which the diameter $D$ is at least a constant multiple of both the gradient-step size $\eta L$ and the noise level $\sigma$. Then Noisy-SGD is not $(\alpha, \varepsilon)$-RDP for
(4.1) |
where .
Proof sketch of Theorem 4.1.
(Full details in Section A.4.)
Construction. Consider datasets $x$ and $x'$ which differ only in one datapoint, with corresponding loss functions that are all zero, except that the differing datapoint in $x'$ has the linear loss $w \mapsto -Lw$. Clearly these functions are linear and $L$-Lipschitz. The intuition behind this construction is that running Noisy-SGD on $x$ or $x'$ generates a random walk that is clamped to stay within the interval—but with the key difference that running Noisy-SGD on dataset $x$ generates a symmetric random walk $W_t$, whereas running Noisy-SGD on dataset $x'$ generates a biased random walk $W'_t$ that biases upward with probability $b/n$ in each step. That is,
the processes are initialized at the same point, each random increment is an independent Gaussian, and each bias is $\eta L/b$ with probability $b/n$ and otherwise is $0$.
Key obstacle. The high-level intuition behind this construction is simple to state: the bias of the random walk $W'_t$ makes it distinguishable (to the minimax-optimal extent, as we show) from the symmetric random walk $W_t$. However, making this intuition precise is challenging because the distributions of the iterates are intractable to reason about explicitly—due to the highly non-linear interactions between the projections and the random increments. Thus we must establish the distinguishability of the final iterates without reasoning explicitly about their distributions.
Key technical ideas. A first, simple observation is that it suffices to prove Theorem 4.1 in the constant RDP regime of , since the linear RDP regime of then follows by the strong composition rule for RDP (Lemma 2.8), as described in Appendix A.4. Thus it suffices to show that the final iterates are distinguishable after iterations.
A natural attempt to distinguish the two processes for large $T$ is to test positivity. This is intuitively plausible because $\Pr[W_T > 0] = 1/2$ by symmetry, whereas we might expect $\Pr[W'_T > 0]$ to be bounded away from $1/2$ since the bias of $W'_T$ pushes it to the top half of the interval. Such a discrepancy would establish an $(\varepsilon, \delta)$-DP lower bound that, by the standard RDP-to-DP conversion in Lemma 2.5, would imply the desired $(\alpha, \varepsilon)$-RDP lower bound in Theorem 4.1. However, the issue is how to prove the latter statement without an explicit handle on the distribution of $W'_T$.
To this end, the key technical insight is to define an auxiliary process which initializes some number of iterations before the final iteration at the lowest point of the interval, and then updates in an analogously biased way as the process $W'$, except without projections at the bottom of the interval. That is,
the point is that, on one hand, $W'_T$ stochastically dominates the final iterate of this auxiliary process, so that it suffices to show the auxiliary process ends up positive with the required probability. And on the other hand, the auxiliary process is easy to analyze because, as we show, with overwhelming probability no projections occur. This lack of projections means that, modulo a low-probability event which is irrelevant for the purpose of $(\varepsilon, \delta)$-DP bounds, the final auxiliary iterate is the sum of (biased) independent Gaussian increments—hence it has a simple, explicitly computable distribution: it is Gaussian.
It remains to establish that (i) with high probability no projections occur in the auxiliary process, so that the aforementioned Gaussian approximates its law, and (ii) this Gaussian is positive with the required probability, so that we may conclude the desired DP lower bound. Briefly, the former amounts to bounding the hitting time of a Gaussian random walk, which is a routine application of martingale concentration. And the latter amounts to computing the parameters of the Gaussian: a back-of-the-envelope calculation shows that over the relevant final stretch of iterations the total accumulated bias is on the order of the diameter $D$, which dominates the standard deviation of the accumulated noise, and therefore the Gaussian is positive with the required probability, as desired. Full proof details are provided in Section A.4.
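The following Monte-Carlo sketch illustrates (but does not prove) the distinguishability argument: it simulates the clamped symmetric and biased walks with made-up parameter values, using a bias of magnitude $\eta L/b$ applied with probability $b/n$ per step, and compares the probabilities of the final iterate being positive.

```python
# Simulate the symmetric walk W (dataset x) and the biased walk W' (dataset x'),
# both clamped to [-D/2, D/2], and estimate P[final iterate > 0] for each.
import numpy as np

rng = np.random.default_rng(1)
D, sigma, eta, L, b, n, T, trials = 1.0, 0.05, 0.05, 1.0, 1, 10, 400, 20_000
bias = eta * L / b                          # drift when the differing datapoint is sampled
clamp = lambda w: np.clip(w, -D / 2, D / 2)

W, Wp = np.zeros(trials), np.zeros(trials)
for _ in range(T):
    Z = rng.normal(0.0, sigma, size=trials)  # shared Gaussian increments
    hit = rng.random(trials) < b / n         # differing datapoint in the batch?
    W = clamp(W + Z)
    Wp = clamp(Wp + Z + bias * hit)

print("P[W_T  > 0] ~", (W > 0).mean())       # close to 1/2 by symmetry
print("P[W'_T > 0] ~", (Wp > 0).mean())      # noticeably larger than 1/2
```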
5 Extensions
This section provides details for the extensions mentioned in §1.2. Specifically, we show how our techniques readily extend to strongly convex losses (§5.1), to cyclic batch updates (§5.2), to non-uniform stepsizes (§5.3), and to regularized losses (§5.4). Our techniques readily extend to any combinations of these settings, e.g. Noisy-SGD with cyclic batches and non-uniform stepsizes on strongly convex losses; the details are straightforward and omitted for brevity.
5.1 Strongly convex losses
Here we show a significantly improved privacy loss if the functions are $m$-strongly convex. In particular, we show that after only $\tilde{O}(1/(\eta m))$ iterations, there is no further privacy loss. Throughout, the $\tilde{O}$ notation suppresses logarithmic factors in the relevant parameters.
Theorem 5.1 (Privacy upper bound for Noisy-SGD, in strongly convex setting).
Let $\mathcal{K}$ be a convex set of diameter $D$, and consider optimizing losses over $\mathcal{K}$ that are $L$-Lipschitz, $m$-strongly convex, and $M$-smooth. For any number of iterations $T$, dataset size $n$, batch size $b$, noise parameter $\sigma$, and initialization $w_0 \in \mathcal{K}$, Noisy-SGD with stepsize $\eta \le 2/(m+M)$ satisfies $(\alpha, \varepsilon)$-RDP for $\alpha$ up to the threshold of Lemma 2.12 and
(5.1) |
where and .
We make two remarks about this result.
Remark 5.2 (Interpretation of the contraction coefficient and the burn-in time).
The operational interpretation of $c$ is that it is the contraction coefficient for a gradient descent iteration with stepsize $\eta$ (c.f., Lemma 5.5 below). In the common setting of a stepsize $\eta \le 1/M$, we have $c = 1 - \eta m$, whereby the burn-in time is $\tilde{\Theta}(1/(\eta m))$.
Another important example is $\eta = 2/(m+M)$, i.e., when the stepsize is optimized to minimize the contraction coefficient $c$. In this case $c = \frac{\kappa - 1}{\kappa + 1}$, where $\kappa := M/m$ denotes the condition number, and thus the burn-in time is $\tilde{\Theta}(\kappa)$.
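The following tiny calculation illustrates the heuristic behind this remark, under the interpretation of $c$ as a per-step contraction factor: the influence of the initial condition decays geometrically, so it falls below any fixed tolerance after logarithmically many steps—about $\kappa$ times a logarithm when $\eta = 1/M$. The numerical values are made up.

```python
# Geometric decay of the initial condition under a c-contractive update:
# it drops below tol after ~ log(1/tol) / log(1/c) ~ log(1/tol)/(eta*m) steps.
import math

m, M, tol = 0.1, 10.0, 1e-6        # made-up strong convexity / smoothness / tolerance
eta = 1.0 / M
c = 1 - eta * m
steps = math.ceil(math.log(1 / tol) / math.log(1 / c))
print(c, steps, (M / m) * math.log(1 / tol))   # steps is close to kappa * log(1/tol)
```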
Remark 5.3 (Bounded diameter is unnecessary for convergent privacy in the strongly convex setting).
Unlike the convex setting, in this strongly convex setting the bounded diameter assumption can be removed. Specifically, for the purposes of DP (rather than RDP), the logarithmic dependence on the diameter $D$ (hidden in the $\tilde{O}$ above) can be replaced by a logarithmic dependence on the remaining problem parameters, since with high probability the SGD trajectories move only a bounded distance from the initialization point, and so this distance can act as the “effective diameter.” The intuition behind why a diameter assumption is needed for the convex setting but not for the strongly convex setting is that without strong convexity, SGD is only weakly contractive, which means that its instability increases with the number of iterations; whereas in the strongly convex setting, SGD is strongly contractive, which means that movements from far enough in the past are effectively “erased”—a pre-requisite for convergent privacy.
The proof of Theorem 5.1 (the setting of strongly convex losses) is similar to the proof of Theorem 3.1 (the setting of convex losses), except for two key differences:
(i) Strong convexity of the losses ensures that the update function in the Contractive Noisy Iterations is strongly contractive. (Formalized in Observation 5.4.)
(ii) This strong contractivity of the update function ensures exponentially better bounds in the Privacy Amplification by Iteration argument. (Formalized in Proposition 5.6.)
We first formalize the change (i).
Observation 5.4 (Analog of Observation 3.3 for strongly convex losses). The update function defined in (3.2) is $c$-contractive, for $c = 1 - \eta m$.
Proof.
Lemma 5.5 (Analog of Lemma 2.1 for strongly convex losses).
Suppose $f$ is an $m$-strongly convex, $M$-smooth function. For any stepsize $\eta \le 2/(m+M)$, the mapping $w \mapsto w - \eta\nabla f(w)$ is $c$-contractive, for $c = 1 - \eta m$.
Next we formalize the change (ii).
Proposition 5.6 (Analog of Proposition 3.2 for strongly convex losses).
Consider the setup in Proposition 3.2, and additionally assume that the update functions are almost surely $c$-contractive. Consider any $\tau \in \{1, \dots, T\}$ and any reals $a_{T-\tau+1}, \dots, a_T \ge 0$ such that the corresponding shift sequence is non-negative for all $t$ and vanishes at the final iteration. Then:
Proof.
Lemma 5.7 (Analog of Lemma 2.16 for strongly convex losses).
Consider the setup of Lemma 2.16, except additionally assuming that $\phi, \phi'$ are each $c$-contractive almost surely. Then
Proof.
We now combine the changes (i) and (ii) to prove Theorem 5.1.
Proof of Theorem 5.1.
We record the differences to Steps 1-4 of the proof in the convex setting (Theorem 3.1). Step 1 is identical for coupling the iterates. Step 2 is identical for constructing the conditional CNI, except that in this strongly convex setting, the update functions are not simply contractive (Observation 3.3), but in fact -contractive (Observation 5.4). We use this to establish an improved bound on the term in Step 3. Specifically, invoke Proposition 5.6 with for all , for all and to obtain
(5.2) |
This allows us to conclude the following analog of (3.6) for the present strongly convex setting:
(5.3) |
where . Simplifying this in Step 4 requires the following modifications since the asymptotics are different in the present strongly convex setting. Specifically, bound the above by
by setting , valid if . By setting , and using the bound on in the proof of Theorem 3.1, we conclude the desired bound
∎
5.2 Cyclic batch updates
Here we show how our results can be extended to Noisy-SGD if the batches are chosen cyclically in round-robin fashion rather than randomly. We call this algorithm Noisy-CGD. The previous best privacy bound for this algorithm is an $(\alpha, \varepsilon)$-RDP upper bound with
(5.4) |
which can be obtained from the standard Privacy Amplification by Iteration argument if one extends the proof of [19, Theorem 35] to arbitrary numbers of iterations $T$ and batch sizes $b$. The key difference in our privacy bound, below, is that it does not increase ad infinitum as $T \to \infty$. This is enabled by the new diameter-aware Privacy Amplification by Iteration bound (Proposition 3.2).
Theorem 5.8 (Privacy upper bound for Noisy-CGD).
Let $\mathcal{K}$ be a convex set of diameter $D$, and consider optimizing convex losses over $\mathcal{K}$ that are $L$-Lipschitz and $M$-smooth. For any number of iterations $T$, stepsize $\eta \le 2/M$, noise parameter $\sigma$, initialization $w_0 \in \mathcal{K}$, dataset size $n$, and batch size $b$, Noisy-CGD satisfies $(\alpha, \varepsilon)$-RDP for any $\alpha \ge 1$ and
(5.5) |
where .
Our analysis of Noisy-CGD is simpler than that of Noisy-SGD: because Noisy-CGD updates in a deterministic order, its analysis requires neither Privacy Amplification by Sampling arguments nor the conditioning in the CNI argument in the proof of Theorem 3.1. This also obviates any assumptions on the Rényi parameter $\alpha$ and the noise $\sigma$, because the Privacy Amplification by Sampling bound (Lemma 2.12) is not needed here. On the other hand, despite Noisy-CGD being easier to analyze, this algorithm yields substantially weaker privacy guarantees for small numbers of iterations $T$.
Proof of Theorem 5.8.
We prove Theorem 5.8 in the case of large $T$ because, as mentioned above, the complementary case follows from (5.4), which is a straightforward extension of [19, Theorem 35].
Suppose and are adjacent datasets; that is, they agree on all indices except for at most one index . Denote the corresponding loss functions by and , where except possibly . Consider running Noisy-CGD on either dataset or for iterations—call the resulting iterates and , respectively—where we start from the same point and couple the sequence of random batches and the random noise injected in each iteration. Then
where and
Observe that are contractions since a stochastic gradient update on a smooth function is contractive (by Observation 3.3 and the triangle inequality). Therefore the SGD iterates are generated by Projected Contractive Noisy Iterations and where the noise distributions are .
Therefore we may invoke the new diameter-aware Privacy Amplification by Iteration bound (Proposition 3.2) using the diameter bound . To this end, we now set the parameters , , and in that proposition. Bound by the -Lipschitz assumption on the losses. Set and assume for simplicity of exposition that the batch size divides the dataset size , and also that divides so that each datapoint is updated the same number of times in iterations; the general case is a straightforward extension that just requires additional floors/ceilings. Then set for the first batches before updating the different datapoint , followed by for the next batches before the final update on , and then until the final iteration . Observe that then satisfies for all and has final value , which lets us invoke Proposition 3.2. Therefore
(5.6) |
To simplify, plug in the definition of , use the elementary bound , and collect terms to obtain
Combining the above two displays yields
This proves Theorem 5.8 in the case , as desired. ∎
5.3 Non-uniform stepsizes
The analysis of Noisy-SGD in Theorem 1.3 readily extends to non-uniform stepsize schedules $\eta_1, \dots, \eta_T$. Assume $\eta_t \le 2/M$ for all $t$ so that each gradient update is contractive (Lemma 2.1). Then the analysis in §3 goes through unchanged, and can be improved by choosing the shift sequence non-uniformly in the Privacy Amplification by Iteration argument. Specifically, since the bound in Proposition 3.2 scales as
the proper choice of is no longer the uniform choice , but instead the non-uniform choice . This yields the following generalization of the privacy bound (3.6): Noisy-SGD satisfies -RDP for
(5.7)
where .
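To see where the non-uniform choice comes from, here is a minimal derivation sketch under the illustrative assumption that the Privacy Amplification by Iteration term scales as $\sum_t a_t^2/\eta_t^2$ subject to a fixed total shift $\sum_t a_t = Z$; the symbols $a_t$, $\eta_t$, $Z$ and the index range are our placeholder notation, not the paper's. By Cauchy–Schwarz,
\[
Z^2 \;=\; \Bigl(\sum_{t=\tau+1}^{T} \frac{a_t}{\eta_t}\cdot \eta_t\Bigr)^{\!2}
\;\le\; \Bigl(\sum_{t=\tau+1}^{T} \frac{a_t^2}{\eta_t^2}\Bigr)\Bigl(\sum_{t=\tau+1}^{T} \eta_t^2\Bigr)
\qquad\Longrightarrow\qquad
\sum_{t=\tau+1}^{T} \frac{a_t^2}{\eta_t^2} \;\ge\; \frac{Z^2}{\sum_{t=\tau+1}^{T} \eta_t^2},
\]
with equality when $a_t \propto \eta_t^2$. Under this assumed scaling, the resulting bound therefore depends on the stepsizes only through $\sum_{t>\tau}\eta_t^2$.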
Further simplifying this RDP bound (5.7) requires optimizing over , and this optimization depends on the decay rate of the stepsize schedule . As an illustrative example, we demonstrate this optimization for the important special case of for , which generalizes the constant stepsize result in Theorem 3.1.
Example 5.9 (Privacy of Noisy-SGD with polynomially decaying stepsize schedule).
Consider stepsize schedule for . Then up to a constant factor, the privacy bound (5.7) gives
by a similar optimization over as in the proof of Theorem 3.1, with the only differences here being that we take where and note that since . We therefore obtain the final privacy bound of
where the first term in the minimization is from the simple bound (1.1) which scales linearly in .
5.4 Regularization
In order to improve generalization error, a common approach for training machine learning models—whether differentially private or not—is to regularize the loss functions. Specifically, solve
for some convex regularization function . In order to apply existing optimization and privacy analyses for the regularized problem, one could simply use the relevant parameters (Lipschitzness, smoothness, strong convexity) of the regularized losses rather than of . A standard issue is that the Lipschitz constant degrades after regularization; that is, typically the Lipschitz constant of can only be bounded by the Lipschitz constant of plus that of .
The purpose of this brief subsection is to point out that when using our privacy analysis, the Lipschitz constant does not degrade after regularization. Specifically, although applying our privacy bounds requires using the strong convexity and smoothness constants of the regularized function (as is standard in all existing privacy/optimization analyses), one can use the Lipschitz constant of rather than that of . This is because our analysis only uses Lipschitzness in Step 3a to bound the “gradient sensitivity” —a.k.a. the amount that gradients can differ when using losses from adjacent datasets—and this quantity is invariant under regularization because the regularizer’s gradient cancels in the difference.
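In symbols (writing $f_i, f_i'$ for the losses of the differing datapoint on adjacent datasets, $r$ for the regularizer, and $L$ for the Lipschitz constant of the unregularized losses; this notation is ours):
\[
\sup_x \bigl\| \nabla (f_i + r)(x) - \nabla (f_i' + r)(x) \bigr\|
\;=\; \sup_x \bigl\| \nabla f_i(x) - \nabla f_i'(x) \bigr\|
\;\le\; 2L,
\]
since the gradient of $r$ appears in both terms and cancels.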
6 Discussion
The results of this paper suggest several natural directions for future work:
•
Clipped gradients? In practical settings, Noisy-SGD implementations sometimes “clip” gradients to force their norms to be small, see e.g., [2]. In the case of generalized linear models, the clipped gradients can be viewed as gradients of an auxiliary convex loss [40], in which case our results can be applied directly. However, in general, clipped gradients do not correspond to gradients of a convex loss, in which case our results (as well as all other works in the literature that aim at proving convergent privacy bounds) do not apply. Can this be remedied?
•
Average iterate? Can similar privacy guarantees be established for the average iterate rather than the last iterate? There are fundamental difficulties with trying to prove this: indeed, the average iterate is provably not as private for Noisy-CGD [6].
•
Adaptive stepsizes? Can similar privacy guarantees be established for optimization algorithms with adaptive stepsizes? The main technical obstacle is that the early gradients influence the entire trajectory through the adaptivity, and one would need to control the privacy loss accumulated due to this. This appears to preclude using our analysis techniques, at least in their current form.
•
Beyond convexity? Convergent privacy bounds break down without convexity, which precludes applicability to deep neural networks. Is there any hope of establishing similar results under some mild form of non-convexity? Due to simple non-convex counterexamples where the privacy of Noisy-SGD diverges, any such extension would have to make additional structural assumptions on the non-convexity (and possibly also change the Noisy-SGD algorithm), although it is unclear how this would even look. Moreover, this appears to require significant new machinery, as our techniques are the only known way to solve the convex problem and they break down in the non-convex setting (see also the discussion in §1.2).
•
General techniques? Can the analysis techniques developed in this paper be used in other settings? Our techniques readily generalize to any iterative algorithm that interleaves contractive steps and noise convolutions. Such algorithms are common in differentially private optimization, and it would be interesting to apply our techniques to variants of Noisy-SGD.
Acknowledgements.
We are grateful to Hristo Paskov for many insightful conversations.
Appendix A Deferred proofs
A.1 Proof of Lemma 2.9
For ,
The remaining case of follows by continuity (or by the Chain Rule for KL divergence).
A.2 Proof of Lemma 2.11
Observe that the mixture distribution can be further decomposed as the mixture distribution
where this integral is with respect to the weak topology of measures. Now argue as follows:
Above, the second step is by the quasi-convexity of the Rényi divergence (Lemma 2.6), the third step is by the rotation-invariance of the Gaussian distribution, and the fifth step is by a change of variables. Finally, observe that all steps in the above display hold with equality if is taken to be the Dirac distribution at a point of norm .
A.3 Proof of Lemma 2.16
Let be a probability distribution that certifies ; that is, satisfies and .
We claim that
(A.1)
To establish this, first use the triangle inequality for the Wasserstein metric to bound
For the first term, pushforward under the promised coupling of in order to form a feasible coupling for the optimal transport distance that certifies . For the second term, use the fact that is almost surely a contraction to bound .
Now that we have established (A.1), it follows that is a feasible candidate for the optimization problem . That is,
By the quasi-convexity of the Rényi divergence (Lemma 2.6), the post-processing inequality for the Rényi divergence (Lemma 2.7), and then the construction of ,
Combining the last two displays completes the proof.
A.4 Proof of Theorem 4.1
Assume , because once the theorem is proven in this setting, the remaining setting follows by the strong composition rule for RDP (Lemma 2.8): simply re-interpret the algorithm Noisy-SGD run once with many iterations as multiple instantiations of Noisy-SGD, each run with few iterations. Consider the choice of constants in Footnote 13; then for , the quantity in (4.1) is greater than . So it suffices to show that Noisy-SGD does not satisfy -RDP. By the conversion from RDP to DP (Lemma 2.5), plus the calculation , it therefore suffices to show that Noisy-SGD does not satisfy -DP.
Consider the construction of datasets , functions , and processes in §4. Then, in order to show that Noisy-SGD does not satisfy -DP, it suffices to show that . We simplify both sides: for the left hand side, note that stochastically dominates for all ; and on the right hand side, note that by symmetry of the process around . Therefore it suffices to prove that
(A.2)
To this end, we make two observations that collectively formalize the Gaussian approximation described in the proof sketch in §4. Below, let denote the total bias in the process , and let denote the event that both
(i)
Concentration of bias in the process : it holds that , for .
(ii)
No projections in the process : it holds that .
Observation A.1 ( occurs with large probability).
.
Proof.
For item (i) of , note that is a binomial random variable with trials, each of which has probability , so has expectation . Thus, by a standard Chernoff bound (see, e.g., [34, Corollary 4.6]), the probability that (i) does not hold is at most
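A standard form of this multiplicative Chernoff bound (cf. [34, Corollary 4.6]), stated for a sum $X$ of independent Poisson trials with mean $\mu$ and any $0 < \delta < 1$, reads
\[
\Pr\bigl[\, |X - \mu| \ge \delta\mu \,\bigr] \;\le\; 2\exp\!\bigl(-\mu\delta^2/3\bigr).
\]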
Next, we show that conditional on (i), item (ii) fails with low probability. To this end, note that (ii) is equivalent to the event that for all . Thus because conditional on event (i), we have that
Now the latter expression has a simple interpretation: it is the probability that a random walk of length with i.i.d. increments never surpasses . By a standard concentration inequality on the hitting time of a random walk (e.g., see the application of Doob’s Submartingale Inequality on [46, Page 139]), this probability is at most
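The generic form of the bound obtained from Doob's Submartingale Inequality applied to the exponentiated walk—stated here, in our illustrative notation, for a walk $S_k = \sum_{i \le k} X_i$ with i.i.d. mean-zero Gaussian increments of variance $v$—is
\[
\Pr\Bigl[\max_{1\le k\le T} S_k \ge \lambda\Bigr]
\;\le\; \inf_{\theta>0} e^{-\theta\lambda}\,\mathbb{E}\bigl[e^{\theta S_T}\bigr]
\;=\; \inf_{\theta>0} \exp\!\Bigl(\tfrac{\theta^2 T v}{2} - \theta\lambda\Bigr)
\;=\; \exp\!\Bigl(-\tfrac{\lambda^2}{2Tv}\Bigr),
\]
where the first step uses that $e^{\theta S_k}$ is a nonnegative submartingale.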
Putting the above bounds together, we conclude the desired claim:
∎
Observation A.2 (Gaussian approximation of conditional on ).
Denote . Conditional on the event , it holds that .
Proof.
By item (ii) of the event , no projections occur in the process , thus . By item (i), the bias . ∎
Next, we show how to combine these two observations in order to approximate by a Gaussian, and from this conclude a lower bound on the probability that is positive. We argue that
where above the second step is by Observation A.2, and the final step is by Observation A.1. Since is a centered Gaussian with variance , we have
By combining the above two displays, we conclude that
This establishes the desired DP lower bound (A.2), and therefore proves the theorem.
Appendix B Non-asymptotics for numerics
In this paper we have established asymptotic expressions for the privacy loss of Noisy-SGD (and variants thereof). While our results are tight up to a constant factor (proven in §4), constant factors are important for the numerical implementation of private machine learning algorithms. As such, we describe here how to numerically optimize these constants—first for full-batch noisy gradient descent in Section B.1 since this case is simplest, then for Noisy-SGD in Section B.2, and finally for strongly convex losses in Section B.3.
We make two remarks that apply to all of the non-asymptotic bounds in this section.
Remark B.1 (Full range of parameters).
Remark B.2 (Alternative notions of dataset adjacency).
Differential privacy refers to the indistinguishability of an algorithm’s output when run on “adjacent” datasets, and thus privacy bounds differ depending on the notion of adjacency between datasets. There are two standard such notions: two datasets are “replace adjacent” if they agree on all but one datapoint, and are “remove adjacent” if one dataset equals the other plus an additional datapoint. Throughout this paper, we proved bounds for “replace adjacency”; for “remove adjacency”, the bounds improve by a small constant factor. Specifically, replace occurrences of by in the bounds in this section. (This factor of improvement is due to the fact that the term in the Privacy Amplification by Iteration argument can be improved to under “remove adjacency”, as opposed to under “replace adjacency”.)
B.1 Full-batch updates
We stop the proof of Theorem 5.8 before the final asymptotic simplifications which lose small constant factors. Then full-batch GD satisfies -RDP for
(B.1)
Here, the outer minimization combines two bounds. The second bound comes from (5.6) by applying the new Privacy Amplification by Iteration argument (Proposition 3.2) with , , and , where . The first bound comes from simply applying the original Privacy Amplification by Iteration argument (Proposition 2.17) there with .
Numerical computation of privacy bound.
This bound (B.1) can be efficiently computed by taking to be , modulo a possible floor or ceiling, if this is at most .
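As an illustration of the floor/ceiling handling described above, here is a minimal Python sketch; the bound itself is passed in as a placeholder callable, and tau_star denotes the closed-form real-valued minimizer, which is an assumed input rather than the paper's formula.

import math

def minimize_over_tau(eps, tau_star, T):
    """Evaluate the RDP bound eps(tau) at the integers neighboring the
    real-valued minimizer tau_star (clipped to the valid range {0, ..., T})
    and return the best choice."""
    candidates = {0, T}
    for t in (math.floor(tau_star), math.ceil(tau_star)):
        if 0 <= t <= T:
            candidates.add(int(t))
    best_tau = min(candidates, key=eps)
    return best_tau, eps(best_tau)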
B.2 Small-batch updates
We stop the proof of Theorem 3.1 before the final asymptotic simplifications which lose small constant factors. Then Noisy-SGD satisfies -RDP for
where . Above, the outermost minimization is from the noise-splitting in Step 1 of the proof of Theorem 3.1. The next minimization combines two bounds. The latter bound is (3.6), modulo a change of variables . The former bound is by noting that if , then the two Contractive Noisy Iterations in the proof of Theorem 3.1 are identical, whereby . Now, by swapping the outermost minimizations in the above bound, we may simplify to
(B.2)
Numerical computation of privacy bound.
This bound (B.2) can be efficiently computed via ternary search over , where for each candidate we compute to high precision using the numerically stable formula in [33] and then evaluate the resulting minimization over by setting it to , modulo a possible floor or ceiling. In order to solve for the smallest noise given an RDP budget, one can use the aforementioned computation of the RDP budget given in order to binary search for (since the RDP is increasing in ).
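The two searches described above can be implemented in a few lines. The following Python sketch uses placeholder callables (a unimodal objective for the ternary search, and rdp_of_sigma for the full bound as a function of the noise), so the function names and brackets are illustrative assumptions rather than the paper's code.

def ternary_search_min(f, lo, hi, iters=100):
    """Minimize a unimodal function f over the interval [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) <= f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def smallest_sigma(rdp_of_sigma, budget, sigma_lo=1e-3, sigma_hi=1.0, tol=1e-6):
    """Binary search for the smallest noise sigma whose RDP bound meets the
    budget, using the monotonicity of the bound in sigma noted above."""
    while rdp_of_sigma(sigma_hi) > budget:   # grow the bracket until feasible
        sigma_hi *= 2.0
    while sigma_hi - sigma_lo > tol:
        mid = 0.5 * (sigma_lo + sigma_hi)
        if rdp_of_sigma(mid) <= budget:
            sigma_hi = mid    # feasible: try less noise
        else:
            sigma_lo = mid    # infeasible: need more noise
    return sigma_hi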
B.3 Strongly convex losses
We stop the proof of Theorem 5.1 before the final asymptotic simplifications which lose small constant factors, and optimize the parameters in the application of the new Privacy Amplification by Iteration argument (Proposition 5.6). (Specifically, rather than simply setting and the rest to , we improve the bound on in (5.2) to times the minimal value of when optimizing over subject to ; this has a closed-form solution , where , yielding value .) Then, as argued in Section B.2, Noisy-SGD satisfies -RDP for
(B.3)
where .
Numerical computation of privacy bound.
This bound (B.3) can be efficiently computed via ternary search over , where for each candidate we compute to high precision using the numerically stable formula in [33] and then evaluate the resulting minimization over by using first-order optimality to set it to
modulo possibly taking a floor/ceiling on , and taking the minimum with . In order to solve for the smallest noise given an RDP budget, one can use the aforementioned computation of the RDP budget given in order to binary search for (since the RDP is increasing in ).
We remark that this bound exploits strong convexity via the contraction factor , which exponentially decays the last term in (B.3). This contraction is best when is minimal, i.e., when the stepsize is , and it appears that one pays a price in privacy for any deviation from this choice of stepsize.
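For reference, the contraction estimate underlying this remark—written for a gradient step on a function that is $m$-strongly convex and $M$-smooth, in our notation for the suppressed constants—is
\[
\bigl\| \bigl(x - \eta\nabla f(x)\bigr) - \bigl(y - \eta\nabla f(y)\bigr) \bigr\|
\;\le\; c\,\|x - y\|,
\qquad
c \;=\; \max\bigl\{\, |1 - \eta m|,\; |1 - \eta M| \,\bigr\},
\]
which is minimized at stepsize $\eta = 2/(m+M)$, where it equals $c = \tfrac{M-m}{M+m}$.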
References
- Abadi et al. [2016a] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. In Symposium on Operating Systems Design and Implementation, pages 265–283, 2016a.
- Abadi et al. [2016b] Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Conference on Computer and Communications Security, pages 308–318, 2016b.
- Altschuler and Talwar [2022a] Jason M Altschuler and Kunal Talwar. Resolving the mixing time of the Langevin Algorithm to its stationary distribution for log-concave sampling. arXiv preprint, 2022a.
- Altschuler and Talwar [2022b] Jason M Altschuler and Kunal Talwar. Privacy of noisy stochastic gradient descent: More iterations without more privacy loss. NeurIPS, 2022b.
- Bassily et al. [2014] Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In Symposium on Foundations of Computer Science, pages 464–473. IEEE, 2014.
- Bassily et al. [2019] Raef Bassily, Vitaly Feldman, Kunal Talwar, and Abhradeep Thakurta. Private stochastic convex optimization with optimal rates. In Advances in Neural Information Processing Systems, volume 32, pages 11282–11291, 2019.
- Bassily et al. [2021] Raef Bassily, Cristobal Guzman, and Anupama Nandi. Non-Euclidean differentially private stochastic convex optimization. arXiv:2103.01278, 2021.
- Bradbury et al. [2018] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.
- Bubeck [2015] Sébastien Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning, 8(3-4):231–357, 2015.
- Bubeck et al. [2018] Sébastien Bubeck, Ronen Eldan, and Joseph Lehec. Sampling from a log-concave distribution with Projected Langevin Monte Carlo. Discrete & Computational Geometry, 59(4):757–783, 2018.
- Chaudhuri et al. [2011] Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12:1069–1109, 2011.
- Chourasia et al. [2021] Rishav Chourasia, Jiayuan Ye, and Reza Shokri. Differential privacy dynamics of Langevin diffusion and noisy gradient descent. In Advances in Neural Information Processing Systems, volume 34, pages 14771–14781, 2021.
- Dalalyan [2017] Arnak Dalalyan. Theoretical guarantees for approximate sampling from smooth and log-concave densities. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79, 6 2017.
- Ding et al. [2017] Bolin Ding, Janardhan Kulkarni, and Sergey Yekhanin. Collecting telemetry data privately. Advances in Neural Information Processing Systems, 30, 2017.
- Durmus and Moulines [2019] Alain Durmus and Eric Moulines. High-dimensional Bayesian inference via the unadjusted Langevin algorithm. Bernoulli, 25:2854–2882, 11 2019.
- Dwork and Rothblum [2016] Cynthia Dwork and Guy N Rothblum. Concentrated differential privacy. arXiv preprint arXiv:1603.01887, 2016.
- Dwork et al. [2006] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Conference on Theory of Cryptography, pages 265–284, 2006.
- Erlingsson et al. [2014] Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In Conference on Computer and Communications Security, pages 1054–1067, 2014.
- Feldman et al. [2018] Vitaly Feldman, Ilya Mironov, Kunal Talwar, and Abhradeep Thakurta. Privacy amplification by iteration. In Symposium on Foundations of Computer Science, pages 521–532. IEEE, 2018.
- Feldman et al. [2020] Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private stochastic convex optimization: optimal rates in linear time. In Symposium on the Theory of Computing, pages 439–449, 2020.
- [21] Arun Ganesh and Kunal Talwar. Personal communication.
- Ganesh and Talwar [2020] Arun Ganesh and Kunal Talwar. Faster differentially private samplers via Rényi divergence analysis of discretized Langevin MCMC. In Advances in Neural Information Processing Systems, volume 33, pages 7222–7233, 2020.
- Ganesh et al. [2022] Arun Ganesh, Abhradeep Thakurta, and Jalaj Upadhyay. Langevin diffusion: An almost universal algorithm for private Euclidean (convex) optimization. arXiv:2204.01585v2, 2022.
- Gopi et al. [2022] Sivakanth Gopi, Yin Tat Lee, and Daogao Liu. Private convex optimization via exponential mechanism. arXiv:2203.00263v1, 2022.
- Hardt et al. [2016] Moritz Hardt, Ben Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. In International Conference on Machine Learning, pages 1225–1234, 2016.
- Jain and Thakurta [2014] Prateek Jain and Abhradeep Guha Thakurta. (Near) dimension independent risk bounds for differentially private learning. In International Conference on Machine Learning, pages 476–484. PMLR, 2014.
- Kasiviswanathan et al. [2011] Shiva Prasad Kasiviswanathan, Homin K Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. What can we learn privately? SIAM Journal on Computing, 40(3):793–826, 2011.
- Kifer et al. [2012] Daniel Kifer, Adam Smith, and Abhradeep Thakurta. Private convex empirical risk minimization and high-dimensional regression. In Conference on Computational Learning Theory, pages 25–1, 2012.
- Liu and Talwar [2019] Jingcheng Liu and Kunal Talwar. Private selection from private candidates. In Symposium on Theory of Computing, pages 298–309, 2019.
- McSherry and Talwar [2007] Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In Symposium on Foundations of Computer Science, pages 94–103, 2007.
- Minami et al. [2016] Kentaro Minami, Hitomi Arai, Issei Sato, and Hiroshi Nakagawa. Differential privacy without sensitivity. In Advances in Neural Information Processing Systems, volume 29, 2016.
- Mironov [2017] Ilya Mironov. Rényi differential privacy. In Computer Security Foundations Symposium, pages 263–275. IEEE, 2017.
- Mironov et al. [2019] Ilya Mironov, Kunal Talwar, and Li Zhang. Rényi differential privacy of the Sampled Gaussian Mechanism. arXiv preprint arXiv:1908.10530, 2019.
- Mitzenmacher and Upfal [2017] Michael Mitzenmacher and Eli Upfal. Probability and computing: Randomization and probabilistic techniques in algorithms and data analysis. Cambridge University Press, 2017.
- Nesterov [2003] Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2003.
- [36] Apple Press. Apple previews iOS 10, the biggest iOS release ever, 2016. https://www.apple.com/pr/library/2016/06/13Apple-Previews-iOS-10-The-Biggest-iOS-Release-Ever.html.
- Rogers et al. [2020] Ryan Rogers, Subbu Subramaniam, Sean Peng, David Durfee, Seunghyun Lee, Santosh Kumar Kancha, Shraddha Sahay, and Parvez Ahammad. LinkedIn’s audience engagements API: A privacy preserving data analytics system at scale. arXiv preprint arXiv:2002.05839, 2020.
- Ryffel et al. [2022] Theo Ryffel, Francis Bach, and David Pointcheval. Differential privacy guarantees for stochastic gradient Langevin dynamics. arXiv:2201.11980, 2022.
- Shalev-Shwartz et al. [2009] Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, and Karthik Sridharan. Stochastic convex optimization. In Conference on Computational Learning Theory, 2009.
- Song et al. [2021] Shuang Song, Thomas Steinke, Om Thakkar, and Abhradeep Thakurta. Evading the curse of dimensionality in unconstrained private GLMs. In International Conference on Artificial Intelligence and Statistics, pages 2638–2646. PMLR, 2021.
- Talwar et al. [2015] Kunal Talwar, Abhradeep Thakurta, and Li Zhang. Nearly optimal private Lasso. In Advances in Neural Information Processing Systems, volume 28, pages 3025–3033, 2015.
- Van Erven and Harremos [2014] Tim Van Erven and Peter Harremos. Rényi divergence and Kullback-Leibler divergence. IEEE Transactions on Information Theory, 60(7):3797–3820, 2014.
- Vempala and Wibisono [2019] Santosh Vempala and Andre Wibisono. Rapid convergence of the unadjusted Langevin algorithm: Isoperimetry suffices. In Advances in Neural Information Processing Systems 32, pages 8092–8104. 2019.
- Wang [2021] Hansi Wang. For the U.S. census, keeping your data anonymous and useful is a tricky balance. https://www.npr.org/2021/05/19/993247101/for-the-u-s-census-keeping-your-data-anonymous-and-useful-is-a-tricky-balance, 2021.
- Welling and Teh [2011] Max Welling and Yee Whye Teh. Bayesian learning via stochastic gradient Langevin dynamics. In International Conference on Machine Learning, page 681–688, Madison, WI, USA, 2011.
- Williams [1991] David Williams. Probability with martingales. Cambridge University Press, 1991.
- Ye and Shokri [2022] Jiayuan Ye and Reza Shokri. Differentially private learning needs hidden state (or much faster convergence). arXiv:2203.05363, 2022.
- Yousefpour et al. [2021] Ashkan Yousefpour, Igor Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, Mani Malek, John Nguyen, Sayan Ghosh, Akash Bharadwaj, Jessica Zhao, et al. Opacus: User-friendly differential privacy library in PyTorch. arXiv preprint arXiv:2109.12298, 2021.