Private Convex Optimization in General Norms
1 Introduction
The study of convex optimization in spaces where the natural geometry is non-Euclidean, beyond being a natural question of independent interest, has resulted in many successes across algorithm design. A basic example is the celebrated multiplicative weights (or exponentiated gradient) method [AHK12], which caters to the $\ell_1$ geometry and has numerous applications in learning theory and algorithms. Moreover, optimization in real vector spaces equipped with different norms has found use in sparse recovery [CRT06], combinatorial optimization [KLOS14, KPSW19], multi-armed bandit problems [BC12], fair resource allocation [DFO20], and more (see e.g. [AKPS19, DG21] and references therein). Furthermore, optimization in Schatten-$p$ norm geometries (the natural generalization of $\ell_p$ norms to matrix spaces) has resulted in improved algorithms for matrix completion [ANW10] and outlier-robust PCA [JLT20]. In addition to $\ell_p$ and Schatten-$p$ norms, the theory of non-Euclidean geometries has been very useful in settings such as linear and semidefinite programming [Nem04] and optimization on matrix spaces [AGL+18], amongst others.
The main result of this paper is a framework for differentially private convex optimization in general normed spaces under a Lipschitz parameter bound. Differential privacy [DKM+06, DMNS06] has been adopted as the standard privacy notion for data analysis in both theory and practice, and differentially private algorithms have been deployed in many important settings in industry as well as the U.S. census [EPK14, Abo16, Tea17, BEM+17, DKY17]. Consequently, differentially private optimization is an increasingly important and fundamental primitive in modern machine learning applications [BST14, ACG+16]. However, despite an extensive body of theoretical work providing privacy-utility tradeoffs (and more) for optimization in the Euclidean norm geometry, e.g. [CM08, CMS11, KST12, JT14, BST14, KJ16] (and many other follow-up works), more general settings have been left relatively unexplored. This state of affairs prohibits the application of private optimization theory to problems where the natural geometry is non-Euclidean. Recent works [AFKT21, BGN21, BGM21] have investigated special cases of private convex optimization, e.g. for $\ell_p$ spaces or polyhedral sets, under smoothness assumptions, or under structured losses. However, the systematic study of private convex optimization in general normed spaces in the most fundamental setting of Lipschitz losses has been left open, a gap that our work addresses.
Our framework for private convex optimization is simple: we demonstrate strong privacy-utility tradeoffs for a regularized exponential mechanism when optimizing a loss over a set $\mathcal X$ equipped with a norm $\|\cdot\|$. More concretely, our algorithms sample from densities
$$\pi(x) \propto \exp\left(-k\left(\hat F_{\mathcal D}(x) + \mu r(x)\right)\right) \text{ on } \mathcal X,$$
where $k, \mu > 0$ are tunable parameters, $\hat F_{\mathcal D}$ is a (data-dependent) empirical risk, and $r$ is a strongly convex regularizer in $\|\cdot\|$ with bounded range over $\mathcal X$. In the analogous non-private Lipschitz convex optimization setting, most theoretical developments (namely, mirror descent frameworks) have focused on precise applications where such an $r$ is readily available [Sha12, Bub15]. In this sense, our framework directly extends existing Lipschitz convex optimization theory to the private setting (and indeed, recovers existing non-private guarantees obtained by mirror descent [NY83]).
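To make the mechanism concrete, the following toy sketch samples from a density of this form with a simple Metropolis random walk. It is purely illustrative: the $\ell_1$-ball domain, the particular losses and regularizer, and the random-walk sampler are placeholder choices (the efficient samplers we actually rely on, following [LST21, GLL22], are discussed in Sections 1.1 and 1.3); only the target density $\propto \exp(-k(\hat F_{\mathcal D}(x) + \mu r(x)))$ reflects the framework.

```python
import numpy as np

def regularized_exp_mechanism(losses, r, k, mu, radius, d, n_steps=20000, step=0.05, seed=0):
    """Toy Metropolis sampler for pi(x) ∝ exp(-k (F_hat(x) + mu * r(x))) on an
    ell_1 ball of the given radius (a stand-in for the constraint set X).
    Only illustrates the target density; it is not the efficient sampler
    referenced in the paper."""
    rng = np.random.default_rng(seed)
    F_hat = lambda x: np.mean([f(x) for f in losses])       # empirical risk
    neg_log = lambda x: k * (F_hat(x) + mu * r(x))           # unnormalized negative log-density
    x = np.zeros(d)                                          # start at the center of the ball
    for _ in range(n_steps):
        y = x + step * rng.standard_normal(d)
        if np.sum(np.abs(y)) > radius:                       # reject proposals outside the domain
            continue
        if np.log(rng.uniform()) < neg_log(x) - neg_log(y):  # Metropolis accept/reject
            x = y
    return x

# Toy usage: n = 50 samples in d = 3, absolute-deviation losses, quadratic regularizer.
rng = np.random.default_rng(1)
data = rng.standard_normal((50, 3))
losses = [lambda x, s=s: np.mean(np.abs(x - s)) for s in data]
x_priv = regularized_exp_mechanism(losses, r=lambda x: 0.5 * x @ x, k=20.0, mu=0.1, radius=2.0, d=3)
print("sampled point:", x_priv)
```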
In the remainder of the introduction, we summarize our results (Section 1.1), highlight our technical contributions (Section 1.2), and situate our paper in the context of prior work (Section 1.3).
1.1 Our results
We study both the empirical risk minimization (ERM) problem and the stochastic convex optimization (SCO) problem in this paper; the goal in the latter case is to minimize the population risk. We formalize this under the following assumption, which parameterizes the space we are optimizing and the (empirical and population) risks we aim to minimize.
Assumption 1.
We make the following assumptions.
-
(1)
There is a compact, convex set $\mathcal X \subseteq \mathbb R^d$ equipped with a norm $\|\cdot\|$.
-
(2)
There is a $1$-strongly convex (in $\|\cdot\|$) function $r: \mathcal X \to \mathbb R$, and $\max_{x \in \mathcal X} r(x) - \min_{x \in \mathcal X} r(x) \le \Theta$.
-
(3)
There is a set $\mathcal S$ such that for any $s \in \mathcal S$, there is a convex function $f(\cdot; s): \mathcal X \to \mathbb R$ which is $G$-Lipschitz in $\|\cdot\|$.
For definitions used above, see Section 2. We remark that by strong convexity, the parameter $\Theta$ must scale at least proportionally to the squared diameter of $\mathcal X$ with respect to $\|\cdot\|$; in many cases of interest, we may also upper bound $\Theta$ by the squared diameter up to a logarithmic factor.
Finally, throughout the paper when working under Assumption 1, $\mathcal D = \{s_i\}_{i \in [n]}$ denotes a dataset of $n$ samples drawn independently from $\mathcal P$, a distribution supported on $\mathcal S$, and we define the empirical risk $\hat F_{\mathcal D}$ and the population risk $F_{\mathcal P}$ by
$$\hat F_{\mathcal D}(x) := \frac{1}{n} \sum_{s \in \mathcal D} f(x; s), \qquad F_{\mathcal P}(x) := \mathbb E_{s \sim \mathcal P}\left[f(x; s)\right]. \tag{1}$$
Private ERM and SCO.
We first present the following general results under Assumption 1.
Minimizing the non-private population risk under the same setting as Assumption 1 is a very well-studied problem, with matching upper and lower bounds in many cases of interest, such as $\ell_p$ norms [NY83, ABRW12, DJW14]. The population utility achieved by our regularized exponential mechanism in Theorem 1 (namely, its loss as the privacy parameter $\varepsilon \to \infty$) matches the rate obtained by the classical mirror descent algorithm [NY83], which to our knowledge has not been previously observed. Finally, in Appendix A we provide an analog of Theorem 1 under the stronger assumption that the sample losses are strongly convex, bypassing the need for explicit regularization. Our results in Appendix A recover the optimal rate in the Euclidean case, matching known lower bounds [BST14].
$\ell_p$ and Schatten-$p$ norms.
In Corollaries 2, 3, and 4, we combine known (optimal) uniform convexity estimates for $\ell_p$ spaces [BCL94] with the algorithms of Theorems 3 and 4 to obtain privacy-utility tradeoffs summarized in Table 1. Interestingly, we achieve all these bounds with a single algorithmic framework, which in all cases matches or partially matches known lower bounds.
Table 1: Summary of our privacy-utility tradeoffs for $\ell_p$ norm geometries, listing the ERM loss, the SCO loss, and the optimality gap relative to known lower bounds in each regime of $p$.
We now contextualize our results with regard to the existing literature. In the following discussion, the "privacy-dependent" loss term is the term in the SCO loss that scales with the privacy parameter $\varepsilon$, and the "privacy-independent" loss term is the SCO loss in the limit $\varepsilon \to \infty$.
In the case of constant $p \in (1, 2)$, our Corollary 2 sharpens Theorem 5 of [AFKT21] in the privacy-dependent loss term, and is the first to match the lower bounds of [BGN21, LL22]. It also improves the bounds of [BGN21] on both parts of the SCO loss; the latter further lose an additional factor on the privacy-dependent loss and require additional smoothness assumptions.
In the important case of $p = 1$, of fundamental interest due to its applications in sparse recovery [CRT06] as well as online learning [Sha12, AHK12], our Corollary 3 improves the privacy-dependent loss term of [AFKT21], and matches the privacy-independent loss lower bound in the SCO literature [DJW14], i.e. the rate of entropic mirror descent. The privacy-dependent loss term incurs an additional overhead compared to existing lower bounds. However, just as the lower bounds on the privacy-independent loss grow logarithmically with the dimension, it is plausible that the upper bound obtained by Corollary 3 is optimal, which we leave as an interesting open direction.
In the remaining regime of $p$, prior work by [BGN21] obtains a rate matched by Corollary 4. The non-private population risk term in (15) is again known to be optimal [ABRW12]. We again find it an interesting open direction to close the gap between the upper bound (13) and known lower bounds for private convex optimization in this regime, e.g. [BGN21, LL22].
We further demonstrate in Corollary 5 that all of these results have direct analogs in the case of optimization over matrix spaces equipped with Schatten norm geometries. To the best of our knowledge, this is the first such result in the private optimization literature; we believe this showcases the generality and versatility of our approach.
Finally, we mention that all of these results are algorithmic and achieved by samplers with polynomial query complexity and runtime, following developments of [LST21, GLL22]. In all cases, by simple norm equivalence relations, the query complexity of our samplers is at most a dimension-dependent factor worse than the $\ell_2$ case, with improvements as $p \to 2$. It is an exciting direction to develop efficient high-accuracy samplers catering to structured densities relevant to the setups considered in this paper, e.g. those whose negative log-likelihoods are strongly convex in $\ell_p$ norms. The design of sampling algorithms for continuous distributions has been an area of intense research activity in the machine learning community, discussed in greater detail in Section 1.3. We mention here that our hope is that our results and general optimization framework serve as additional motivation for the pursuit of efficient structured sampling algorithms working directly in non-Euclidean geometries.
1.2 Our techniques
Our results essentially build on the recent work of [GLL22], who observed that a regularized exponential mechanism achieves optimal privacy-utility tradeoffs for empirical and population risks when losses are Lipschitz in the $\ell_2$ norm. Under a Euclidean specialization of Assumption 1, [GLL22] provided variants of Theorem 1 using the regularizer $r(x) = \frac{1}{2}\|x\|_2^2$, i.e. reweighting by a Gaussian.
We demonstrate that several key tools used in [GLL22] have non-Euclidean extensions, using a simple, general approach based on a convex geometric tool known as localization. For example, the starting point of our developments is relating the privacy curves of two nearby strongly logconcave densities to the privacy curve of Gaussians (see Section 2 for definitions).
Theorem 2.
Let $\mathcal X \subseteq \mathbb R^d$ be compact and convex, let $F_1, F_2: \mathcal X \to \mathbb R$ be $\mu$-strongly convex in $\|\cdot\|$, and let $\pi_1 \propto \exp(-F_1)$ and $\pi_2 \propto \exp(-F_2)$ on $\mathcal X$. Suppose $F_1 - F_2$ is $G$-Lipschitz in $\|\cdot\|$. For all $\varepsilon \ge 0$,
$$\delta_{\pi_1, \pi_2}(\varepsilon) \le \delta_{\mathcal N(0, 1), \mathcal N(G/\sqrt{\mu}, 1)}(\varepsilon).$$
An analog of Theorem 2 when the norm is Euclidean was proven as Theorem 4.1 of [GLL22]. Moreover, the analog of Theorem 1 in [GLL22] follows from combining Theorem 4.1 of that work, and their Theorem 6.10, a reduction from the SCO problem to the ERM problem (containing a generalization error bound). These proofs in [GLL22] rely on powerful inequalities from probability theory, which were initially studied in the Gaussian (Euclidean norm regularizer) setting. For example, Theorem 4.1 applied the Gaussian isoperimetric inequality of [ST74, Bor75a] (see also Theorem 1.1, [Led99]), which states that strongly logconcave distributions in the Euclidean norm have expansion quality at least as good as a corresponding Gaussian. Moreover, the generalization error bound in Theorem 6.10 was proven based on a Euclidean norm log-Sobolev inequality and transportation inequality, relating Wasserstein distances, KL divergences, and Lipschitz bounds on negative log-densities. Fortunately, it turns out that all of these inequalities have non-Euclidean generalizations (possibly losing constant factors). For example, a non-Euclidean log-Sobolev inequality was shown by Proposition 3.1 of [BL00], and a non-Euclidean transport inequality sufficient for our purposes is proved as Proposition 1 of [CE17]. Finally, variants of the Gaussian isoperimetric inequality in general norms are given by [MS08, Kol11]. Plugging these tools into the proofs of [GLL22] allows us to recover Theorems 1 and 2, as well as our applications.
In this work, we take a different (and in our opinion, simpler) strategy to proving the probability-theoretic inequalities required by Theorems 1 and 2, yielding an alternative to the proofs in [GLL22] which we believe may be of independent intellectual interest to the privacy community. In particular, our technical insight is the simple observation that several of the definitions in differential privacy are naturally cast in the language of localization [KLS95, FG04], which characterizes extremal logconcave densities subject to linear constraints (see our proof of Lemma 2 for an example of this). This observation allows us to reduce the proofs of key technical tools used in Theorems 1 and 2 to proving these tools in one dimension, where all norms are equivalent up to constant factor rescaling. (The one-dimensional case can then typically be handled by more straightforward "combinatorial" arguments; see e.g. Section 2.b of [LS93] or Appendix B.3 of [CDWY20] for examples.) After deriving several extensions of basic localization arguments in Section 3.1, we follow this reduction approach to give a more unified proof of Theorems 1 and 2. To our knowledge, this is the first direct application of localization techniques in differential privacy.
The interplay between the privacy and probability theory communities is an increasingly active area of exploration [DRS21, GLL22, GTU22] (discussed in more detail in Section 1.3). We are hence optimistic that localization-based proof strategies will have further applications in the privacy literature, especially in situations (beyond this paper) where probability theoretic tools used in the Euclidean case do not have non-Euclidean variants in full generality. In such settings, it may be a valuable endeavor to see if necessary inequalities may be directly recast in the language of localization.
1.3 Prior work
Private optimization in Euclidean norm.
Many prior works on private convex optimization have focused on variants of the ERM and SCO problems studied in this work, under Lipschitz losses and bounded domains, such as [CMS11, KST12, BST14, BFTT19, BFGT20]. The optimal information-theoretic rate for these private optimization problems was given by [BST14], which was matched algorithmically up to constant factors by [BFTT19, BFGT20].
From an algorithmic perspective, a topic of recent interest in the Euclidean case is the problem of attaining optimal privacy-utility tradeoffs in nearly-linear time, namely, using a number of gradient queries nearly linear in the dataset size [FKT20, AFKT21, KLL21]. Under additional smoothness assumptions, this goal was achieved by [FKT20]; however, achieving near-optimal gradient oracle query rates in the general Lipschitz case remains open. We note that under value oracle access, a near-optimal bound was recently achieved by [GLL22]. This paper primarily focuses on the information-theoretic problem of achieving optimal privacy-utility tradeoffs for a given dataset size. However, we believe the corresponding problem of designing algorithms with near-optimal query complexities and runtimes (under value or gradient oracle access) is also an important open direction in the case of general norm geometries.
Private optimization in non-Euclidean norms.
The study of private convex optimization in non-Euclidean geometries was recently initiated by [AFKT21, BGN21], who focused primarily on developing algorithms for $\ell_p$ geometries under regularity assumptions and bounded domains. In follow-up work, [BGM21] gave improved guarantees for the family of generalized linear losses. We discuss the rates we achieve for $\ell_p$ norm geometries compared to [AFKT21, BGN21] in Section 1.1; in short, we improve prior results by logarithmic factors in the case $p \le 2$, and match them when $p > 2$. Independently from our work, [HLL+22] designed an algorithm for private optimization in $\ell_p$ geometries improving upon [BGN21] in gradient query complexity (matching their privacy-utility tradeoffs); both [BGN21, HLL+22] require further smoothness assumptions on the loss functions.
One of the main motivations for this work was to develop a general theory for private convex optimization under non-Euclidean geometries, beyond $\ell_p$ setups. In particular, [BGN21] designed a generalized Gaussian mechanism for the $\ell_p$ case, where gradients were perturbed by a noise distribution catering to the geometry. However, how to design a corresponding mechanism for more general norms may be less clear. The algorithm of [AFKT21] in the non-smooth case was based on a (Euclidean norm) Gaussian mechanism; again, this strategy is potentially more specialized to $\ell_p$ geometries. Beyond giving a general algorithmic framework for non-Euclidean convex optimization based on structured logconcave sampling, we hope that the information-theoretic properties we show regarding regularized exponential mechanisms (e.g. Theorem 2) may find use in designing "generalized Gaussian mechanisms" beyond $\ell_p$ norms.
Connections between privacy and sampling.
Our work extends a line of work exploring privacy-utility tradeoffs for the exponential mechanism, a general strategy for designing private algorithms introduced by [MT07] (see additional discussion in [GLL22]). For example, the regularized exponential mechanisms we design are similar in spirit to the exponential mechanism "in the $K$-norm" designed by [HT10, BDKT12] (that is, the norm induced by a convex body $K$, not to be confused with the norm $\|\cdot\|$ of Assumption 1). Moreover, our work continues a recent interface between the sampling and privacy literatures, where (continuous and discrete-time) sampling algorithms are shown to efficiently obtain strong privacy-utility tradeoffs for optimization problems [GLL22, GTU22]. This work further develops this interface, motivating the design of efficient samplers for densities satisfying non-Euclidean regularity assumptions.
The design of sampling algorithms under general geometries (e.g. "mirrored Langevin algorithms") has been a topic of great recent interest, independently from applications in private optimization. Obtaining mixing guarantees under regularity assumptions naturally found in applications is a notoriously challenging problem in the recent algorithmic sampling literature [HKRC18, ZPFP20, AC21, Jia21, LTVW22]. For example, it has been observed both theoretically and empirically that without (potentially restrictive) relationships between regularity parameters, natural discretizations of the mirrored Langevin dynamics may not even have vanishing bias [ZPFP20, Jia21, LTVW22]. Recently, [LST21] gave an alternative strategy (to discretizing Langevin dynamics) for designing sampling algorithms in the Euclidean case, used in [GLL22] to obtain private algorithms for $\ell_2$-structured ERM and SCO problems (see Proposition 8). Our work suggests a natural non-Euclidean generalization of these sampling problems, which is useful to study from an algorithmic perspective. We are optimistic that a non-Euclidean variant of [LST21] may shed light on these mysteries and yield new efficient private algorithms. More generally (beyond the particular [LST21] framework), we state the direction of designing efficient samplers for densities of the form $\exp(-k(\hat F_{\mathcal D} + \mu r))$ satisfying Assumption 1 as an important open research endeavor, with implications for both sampling and private optimization, the latter of which this paper demonstrates.
2 Preliminaries
General notation.
Throughout, $\widetilde O$ hides logarithmic factors in problem parameters when clear from the context. For $n \in \mathbb N$, $[n]$ refers to the naturals $\{1, 2, \ldots, n\}$. We use $\mathcal X$ to denote a compact convex subset of $\mathbb R^d$. The standard ($\ell_2$) Euclidean norm is denoted $\|\cdot\|_2$. We will be concerned with optimizing functions over $\mathcal X$, and $\|\cdot\|$ will refer to a norm on $\mathbb R^d$. The diameter of such a set with respect to $\|\cdot\|$ is denoted $\mathrm{diam}_{\|\cdot\|}(\mathcal X) := \max_{x, x' \in \mathcal X} \|x - x'\|$. We let $\mathcal N(\mu, \Sigma)$ be the Gaussian density of specified mean and covariance. We denote the convex hull of a set $S$ (when well-defined) by $\mathrm{conv}(S)$. When $x, x' \in \mathbb R^d$, we abuse notation and let $[x, x']$ be the line segment between $x$ and $x'$.
Norms.
For $p \ge 1$, we let $\|\cdot\|_p$ applied to a vector-valued variable be the $\ell_p$ norm, namely $\|x\|_p := (\sum_{i \in [d]} |x_i|^p)^{1/p}$ for $x \in \mathbb R^d$; the $\ell_\infty$ norm is the maximum absolute value. We will use the well-known inequality
$$\|x\|_q \le \|x\|_p \le d^{\frac{1}{p} - \frac{1}{q}} \|x\|_q \text{ for all } x \in \mathbb R^d \text{ and } 1 \le p \le q \le \infty. \tag{2}$$
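As a quick sanity check (our own illustration, not part of the development), the following snippet verifies inequality (2) numerically on random vectors, including the $q = \infty$ case.

```python
import numpy as np

def lp_norm(x, p):
    """ell_p norm of a vector, with p = np.inf meaning the maximum absolute value."""
    return np.max(np.abs(x)) if np.isinf(p) else np.sum(np.abs(x) ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
d = 50
for p, q in [(1, 2), (1.5, 3), (2, np.inf)]:          # any pair with p <= q
    factor = d ** (1.0 / p - (0.0 if np.isinf(q) else 1.0 / q))
    for _ in range(1000):
        x = rng.standard_normal(d)
        lo, hi = lp_norm(x, q), lp_norm(x, p)
        # ||x||_q <= ||x||_p <= d^(1/p - 1/q) * ||x||_q
        assert lo <= hi + 1e-9 and hi <= factor * lo + 1e-9
print("inequality (2) verified on random vectors")
```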
Matrices will be denoted in boldface throughout, and $\|\cdot\|_p$ applied to a matrix-valued variable is the Schatten-$p$ norm, i.e. the $\ell_p$ norm of the singular values of the matrix.
Optimization.
In the following discussion, fix some norm $\|\cdot\|$ on $\mathbb R^d$. We say $f: \mathcal X \to \mathbb R$ is $G$-Lipschitz in $\|\cdot\|$ if for all $x, x' \in \mathcal X$, $|f(x) - f(x')| \le G\|x - x'\|$. We say $f$ is $\mu$-strongly convex in $\|\cdot\|$ if for all $x, x' \in \mathcal X$ and $t \in [0, 1]$,
$$f(tx + (1 - t)x') \le t f(x) + (1 - t) f(x') - \frac{\mu t (1 - t)}{2}\|x - x'\|^2.$$
Probability.
For two densities $\pi, \pi'$, we define their total variation distance by $\|\pi - \pi'\|_{\mathrm{TV}} := \frac{1}{2}\int |\pi(x) - \pi'(x)| \, dx$ and (when the Radon-Nikodym derivative exists) their KL divergence by $D_{\mathrm{KL}}(\pi \| \pi') := \int \pi(x) \log \frac{\pi(x)}{\pi'(x)} \, dx$. We define the $2$-Wasserstein distance by
$$W_2(\pi, \pi') := \left(\inf_{\gamma \in \Gamma(\pi, \pi')} \int \|x - x'\|^2 \, d\gamma(x, x')\right)^{1/2},$$
where $\Gamma(\pi, \pi')$ is the set of couplings of $\pi$ and $\pi'$. We note $W_2$ satisfies the following inequality.
Fact 1.
Let $L$ be the Lipschitz constant in the norm $\|\cdot\|$ of a function $f: \mathcal X \to \mathbb R$. Then, for densities $\pi, \pi'$ supported on $\mathcal X$,
$$\left|\mathbb E_{x \sim \pi}[f(x)] - \mathbb E_{x \sim \pi'}[f(x)]\right| \le L \cdot W_2(\pi, \pi').$$
Proof.
This follows from the dual characterization of the $1$-Wasserstein distance (which shows $|\mathbb E_{\pi} f - \mathbb E_{\pi'} f| \le L \cdot W_1(\pi, \pi')$), and convexity of the square (which gives $W_1 \le W_2$). ∎
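For intuition, Fact 1 can be checked numerically in one dimension, where the Wasserstein distance between two empirical distributions with equally many samples is computed by matching sorted samples; the snippet below is a standalone illustration with a hypothetical pair of Gaussians and the $1$-Lipschitz test function $f(x) = |x|$.

```python
import numpy as np

def wasserstein_1d(xs, ys, p=2):
    """W_p between two 1-D empirical distributions with equally many samples:
    sorting both sample sets and matching them in order is the optimal coupling."""
    xs, ys = np.sort(xs), np.sort(ys)
    return float(np.mean(np.abs(xs - ys) ** p) ** (1.0 / p))

rng = np.random.default_rng(0)
n = 100_000
pi = rng.normal(0.0, 1.0, n)          # samples from pi
pi_prime = rng.normal(0.5, 1.2, n)    # samples from pi'

f = np.abs                            # a 1-Lipschitz test function, so L = 1
lhs = abs(f(pi).mean() - f(pi_prime).mean())
rhs = 1.0 * wasserstein_1d(pi, pi_prime, p=2)
print(f"|E_pi f - E_pi' f| = {lhs:.4f} <= L * W_2 = {rhs:.4f}")
assert lhs <= rhs
```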
We use $\propto$ to indicate proportionality, e.g. if $\pi$ is a density and we write $\pi \propto \exp(-F)$, we mean $\pi(x) = \frac{\exp(-F(x))}{\int \exp(-F(x')) \, dx'}$, where the integration is over the support of $\pi$.
We say that a measure $\pi$ on $\mathbb R^d$ is logconcave if for any $\lambda \in [0, 1]$ and compact $A, B \subseteq \mathbb R^d$,
$$\pi(\lambda A + (1 - \lambda) B) \ge \pi(A)^{\lambda} \pi(B)^{1 - \lambda}.$$
We have the following equivalent characterization of logconcave measures.
Proposition 1 ([Bor75b]).
Let $\pi$ be a measure on $\mathbb R^d$. Let $H$ be the least affine subspace containing the support of $\pi$, and let $\lambda_H$ be the Lebesgue measure on $H$. Then $\pi$ is logconcave if and only if $d\pi = g \, d\lambda_H$ for some $g$ which is nonnegative and locally integrable, and $-\log g$ is convex.
In particular, Proposition 1 shows that the measure according to $\pi$ of any proper affine subspace of $H$ is zero. If in the characterization of [Bor75b] the function $-\log g$ is affine, we say $\pi$ is logaffine. Following [Bor75b], we analogously define strong logconcavity with respect to a norm.
Definition 1 (strong logconcavity).
Let $\pi$ be a measure on $\mathbb R^d$. Let $H$ be the least affine subspace containing the support of $\pi$, and let $\lambda_H$ be the Lebesgue measure on $H$. We say $\pi$ is $\mu$-strongly logconcave with respect to $\|\cdot\|$ if $d\pi = g \, d\lambda_H$ for some $g$ which is nonnegative and locally integrable, and $-\log g$ is $\mu$-strongly convex in $\|\cdot\|$.
Privacy.
Throughout, $\mathcal M$ denotes a mechanism, and $\mathcal D$ denotes a dataset. We say $\mathcal D$ and $\mathcal D'$ are neighboring if they differ in one entry. We say a mechanism $\mathcal M$ satisfies $(\varepsilon, \delta)$-differential privacy if it has output space $\Omega$ and for any neighboring $\mathcal D, \mathcal D'$ and measurable $S \subseteq \Omega$,
$$\Pr[\mathcal M(\mathcal D) \in S] \le e^{\varepsilon} \Pr[\mathcal M(\mathcal D') \in S] + \delta.$$
We define the privacy curve of two random variables $X, Y$ supported on $\Omega$ by
$$\delta_{X, Y}(\varepsilon) := \sup_{\text{measurable } S \subseteq \Omega} \left(\Pr[X \in S] - e^{\varepsilon} \Pr[Y \in S]\right).$$
We say $\mathcal M$ has a privacy curve $\delta(\cdot)$ if for all neighboring $\mathcal D$, $\mathcal D'$ and all $\varepsilon \ge 0$, $\delta_{\mathcal M(\mathcal D), \mathcal M(\mathcal D')}(\varepsilon) \le \delta(\varepsilon)$. For any $\varepsilon \ge 0$, it is clear that such a $\mathcal M$ is $(\varepsilon, \delta(\varepsilon))$-differentially private. We will frequently compare to the privacy curve of a Gaussian, so we recall the following bound from prior work.
Fact 2 (Gaussian privacy curve, Lemma 6.3, [GLL22]).
Let $s \ge 0$ and $\delta \in (0, 1)$. For any $\varepsilon \ge \frac{s^2}{2} + s\sqrt{2\log\frac{1}{\delta}}$, we have $\delta_{\mathcal N(0, 1), \mathcal N(s, 1)}(\varepsilon) \le \delta$.
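For concreteness, the snippet below evaluates the Gaussian privacy curve via its standard closed form in terms of the normal CDF $\Phi$ (a standard expression we supply for illustration, not a quotation from [GLL22]), and numerically checks a sufficient condition of the form appearing in Fact 2.

```python
from math import erf, sqrt, exp, log

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gaussian_privacy_curve(s, eps):
    """delta_{N(0,1), N(s,1)}(eps) via the standard closed form."""
    if s == 0:
        return 0.0
    return Phi(s / 2 - eps / s) - exp(eps) * Phi(-s / 2 - eps / s)

# Check: eps >= s^2/2 + s*sqrt(2*log(1/delta)) implies delta_{N(0,1),N(s,1)}(eps) <= delta.
for s in [0.1, 0.5, 1.0, 2.0]:
    for delta in [1e-3, 1e-6]:
        eps = s ** 2 / 2 + s * sqrt(2 * log(1 / delta))
        assert gaussian_privacy_curve(s, eps) <= delta
        print(f"s={s}, delta={delta:.0e}: eps={eps:.3f}, curve={gaussian_privacy_curve(s, eps):.2e}")
```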
3 Gaussian differential privacy in general norms
In this section, we give a generalization of Theorem 4.1 of [GLL22], which demonstrates that a regularized exponential mechanism for (Euclidean norm) Lipschitz losses achieves privacy guarantees comparable to an analogous instance of the Gaussian mechanism. The proof from [GLL22] was specialized to the Euclidean setup; to show our more general result, we draw upon the localization technique from convex geometry [LS93, KLS95]. We provide the relevant localization tools we will use in Section 3.1, and prove our Gaussian differential privacy result in Section 3.2.
3.1 Localization
We recall the localization lemma from [FG04]. We remark that the statement in [FG04] is more refined than our statement (in that [FG04] gives a complete characterization of extreme points, whereas we give a superset), but the following form of the [FG04] result suffices for our purposes.
Proposition 2 (Theorem 1, [FG04]).
Let $\mathcal X \subseteq \mathbb R^d$ be compact and convex, and let $g: \mathcal X \to \mathbb R$ be upper semi-continuous. Let $\mathcal P_g$ be the set of logconcave densities supported in $\mathcal X$ satisfying $\int g \, d\pi \ge 0$. The set of extreme points of $\mathcal P_g$ satisfies one of the following.
-
•
$\pi$ is a Dirac measure at $x \in \mathcal X$ such that $g(x) \ge 0$.
-
•
$\pi$ is logaffine and supported on a segment $[x, x'] \subseteq \mathcal X$ such that $\int g \, d\pi \ge 0$.
We next derive several extensions of Proposition 2.
Lemma 1 (Strongly logconcave localization).
Let be compact and convex, let be continuous, and let be upper semi-continuous. Let be the set of probability densities such that is -strongly logconcave with respect to and supported in , such that is also -strongly logconcave, and . The set of extreme points of satisfy one of the following.
-
•
is a Dirac measure at such that .
-
•
is supported on .
Proof.
Clearly, Dirac measures at with are extreme points, so it suffices to consider other extreme points. Given any extreme point which is not a Dirac measure, we prove the least affine subspace containing the support of has dimension one, i.e. denoting the least affine subspace containing the support of by , we prove .
Assume for the sake of contradiction that . There exists in the relative interior of the support of and a two-dimensional subspace such that . Let be the unit circle in , and for any denote and . Finally, define by , such that .
By Proposition 1, we know . Moreover, is continuous since every hyperplane has . Since , by the intermediate value theorem there exists such that . We can hence rewrite as a convex combination of its restrictions to and , both of which are -strongly logconcave, and whose (renormalized) multiplications by are also -strongly logconcave. Since both of these restrictions belong to , contradicting extremality of . ∎
We briefly remark that the proof technique used in Lemma 1 is quite general, and the only property we used about is that it is a subset of logconcave densities, and it is closed under restrictions to convex subsets. Similar arguments hold for other density families with these properties. Further, we note that restrictions to compact sets are upper semi-continuous; it is straightforward to verify our applications of Lemma 1 satisfy the upper semi-continuity assumption.
We prove the following two technical lemmas using Lemma 1.
Lemma 2.
Following notation of Lemma 1, fix a continuous function and a subset . For any probability density on , define the renormalized density . Finally, let
Then where is the subset of densities satisfying one of the following.
-
•
is a Dirac measure at .
-
•
is supported on .
Proof.
Let be the set of such that . We have
where is the (super)set of extreme points of given by the strongly logconcave localization lemma (Lemma 1). These candidate extreme points are Dirac measures at such that , or are supported in . Hence, , and we conclude
(3)
(4)
(5)
The first inequality used that for , and the second used that for any . Since , we have the claim. ∎
Lemma 3.
Following notation of Lemma 1, fix continuous function and upper semi-continuous function . For any probability density on , define to be a renormalized density on . Finally, let
Then where is the subset of densities satisfying one of the following.
-
•
is a Dirac measure at .
-
•
is supported on .
3.2 Gaussian differential privacy
Using an instantiation of the localization lemma, we prove Gaussian differential privacy in general norms by first reducing to one dimension and then using the result of [GLL22] to handle the one-dimensional case. Gaussian differential privacy was introduced by [DRS21] and is a useful tool to compare privacy curves. We first recall the ($\ell_2$) Gaussian differential privacy result of [GLL22].
Proposition 3 (Theorem 4.1, [GLL22]).
Let $\mathcal X \subseteq \mathbb R^d$ be compact and convex, let $F_1, F_2: \mathcal X \to \mathbb R$ be $\mu$-strongly convex in $\|\cdot\|_2$, and let $\pi_1 \propto \exp(-F_1)$ and $\pi_2 \propto \exp(-F_2)$ on $\mathcal X$. Suppose $F_1 - F_2$ is $G$-Lipschitz in $\|\cdot\|_2$. For all $\varepsilon \ge 0$,
$$\delta_{\pi_1, \pi_2}(\varepsilon) \le \delta_{\mathcal N(0, 1), \mathcal N(G/\sqrt{\mu}, 1)}(\varepsilon).$$
We next give a simple comparison result between norms.
Lemma 4.
For , fix , and let be the restriction of to .
-
(1)
If is -Lipschitz in , is -Lipschitz in .
-
(2)
If is -strongly convex in , is -strongly convex in .
Proof.
To see the first claim, let and for . We have by Lipschitzness of that
Similarly, to see the second claim, by strong convexity of ,
∎
We now present our main result on Gaussian differential privacy with respect to arbitrary norms.
Theorem 2 (restated).
Proof.
Throughout this proof, fix some which is -Lipschitz in by assumption. We first claim that amongst all -strongly convex (in ) functions such that is also -strongly convex, defining and , some maximizing is either a Dirac measure or supported on . We will prove this by contradiction.
Suppose otherwise, and let be a -strongly convex function that maximizes defined above. Define and . Let be the set achieving
By Lemma 2, there is another -strongly logconcave where the renormalized density is also -strongly logconcave, such that (following notation of Lemma 2) , where is either a Dirac or supported on . We conclude that (since the maximizing set for is at least as good as ), a contradiction.
4 Private ERM and SCO in general norms
In this section, we derive our results for private ERM and SCO in general norms. We will state our results for private ERM (Section 4.1) and SCO (Section 4.2) with respect to an arbitrary compact convex subset of a $d$-dimensional normed space, satisfying Assumption 1. We then use this to derive guarantees for a variety of settings of import in Section 4.3.
4.1 Private ERM under Assumption 1
To develop our private ERM algorithms, we recall the following risk guarantee for sampling from Gibbs distributions, due to [dKL18] (improving upon [KV06, BST14]).
Proposition 4 ([dKL18], Corollary 1).
Let $\mathcal X \subseteq \mathbb R^d$ be compact and convex, let $F: \mathcal X \to \mathbb R$ be convex, and let $\pi \propto \exp(-kF)$ on $\mathcal X$ for some $k > 0$. If $k \ge \frac{d}{\eta}$ for some $\eta > 0$,
$$\mathbb E_{x \sim \pi}[F(x)] - \min_{x' \in \mathcal X} F(x') \le \eta.$$
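As a one-dimensional illustration (our own toy check, with $F(x) = |x|$ on $\mathcal X = [-1, 1]$), the excess risk of the Gibbs distribution is indeed bounded by $d/k$, the equivalent form of Proposition 4:

```python
import numpy as np

# Numerical check of Proposition 4 in d = 1: for pi ∝ exp(-k F) on a convex set,
# E_pi[F] - min F <= d / k.  Toy instance: F(x) = |x| on X = [-1, 1].
def gibbs_excess_risk(k, grid=np.linspace(-1, 1, 200001)):
    F = np.abs(grid)
    w = np.exp(-k * (F - F.min()))          # unnormalized density, stabilized
    w /= w.sum()
    return float(np.dot(w, F) - F.min())

for k in [1.0, 10.0, 100.0]:
    excess = gibbs_excess_risk(k)
    print(f"k={k:6.1f}: E[F] - min F = {excess:.4f} <= d/k = {1.0 / k:.4f}")
    assert excess <= 1.0 / k + 1e-6
```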
We conclude by a simple combination of Proposition 4 (providing a risk guarantee) and Theorem 2 (providing a privacy guarantee), which yields our main result on private ERM.
Theorem 3 (Private ERM).
Proof.
Let be the realization of (1) when is replaced with a neighboring dataset which agrees in all entries except some sample . By Assumption 1, we have is -Lipschitz, and both and are -strongly convex (all with respect to ). Hence, combining Theorem 2 and Fact 2 shows the mechanism is -differentially private, since
(6) |
Let . We obtain the risk guarantee by the calculation (see Proposition 4)
and plugging in our choices of and . ∎
4.2 Private SCO under Assumption 1
We first give a generic comparison result between population risk and empirical risk under Assumption 1. To do so, we use two helper results from prior work. The first was derived in [GLL22] by combining a transportation inequality and a log-Sobolev inequality (see e.g. [OV00]).
Proposition 5 ([GLL22], Theorem 6.7, Lemma 6.8).
Let $\mathcal X \subseteq \mathbb R^d$ be compact and convex, let $F_1, F_2: \mathcal X \to \mathbb R$ be $\mu$-strongly convex in $\|\cdot\|$, and let $\pi_1 \propto \exp(-F_1)$ and $\pi_2 \propto \exp(-F_2)$ on $\mathcal X$. Suppose $F_1 - F_2$ is $G$-Lipschitz in $\|\cdot\|$. Then, $W_2(\pi_1, \pi_2) \le \frac{G}{\mu}$.
Corollary 1.
Let be compact and convex, and let be -Lipschitz and -Lipschitz respectively in . Let be the set of densities over such that is -strongly logconcave with respect to , and is also -strongly logconcave. For any define where . Then, .
Proof.
By Lemma 3 (and following its notation), it suffices to show for all . Clearly this is true for a Dirac measure as then , so consider the other case where is supported on , such that and is -strongly convex in . Further, define , so that is also -strongly convex and supported on .
The second relates the population risk to the empirical risk on an independent sample.
Proposition 6 (Lemma 7, [BE02]).
Suppose $\mathcal D = \{s_i\}_{i \in [n]}$ is drawn independently from $\mathcal P^n$, let $s'$ be drawn independently from $\mathcal P$, and for $i \in [n]$ let $\mathcal D^{(i)}$ be $\mathcal D$ where $s_i$ is swapped with $s'$. Then, for any symmetric mechanism $\mathcal M$ (a symmetric mechanism is one which only depends on the set of inputs rather than their order),
$$\mathbb E\left[F_{\mathcal P}(\mathcal M(\mathcal D)) - \hat F_{\mathcal D}(\mathcal M(\mathcal D))\right] = \frac{1}{n} \sum_{i \in [n]} \mathbb E\left[f(\mathcal M(\mathcal D^{(i)}); s_i) - f(\mathcal M(\mathcal D); s_i)\right],$$
where expectations are over and the randomness used in producing and .
By applying Corollary 1 and Proposition 6 (which bound the generalization error of our mechanism), we provide the following extension of Theorem 3, our main result on private SCO.
Theorem 4 (Private SCO).
Proof.
For the given choice of , the privacy proof follows identically to Theorem 3, so we focus on the risk proof. We follow the notation of Proposition 6 and let independently from . By exchanging the expectation and minimum and using that ,
where we bounded the empirical risk in the proof of Theorem 3. Next, let be the density . Our mechanism is symmetric, and hence by Proposition 6,
where the outer expectations are over the randomness of drawing . Finally, for any fixed realization of , the densities satisfy the assumption of Corollary 1 with , and is -Lipschitz, so Corollary 1 shows that
Combining the above three displays bounds the population risk by
for our given parameter choices. The conclusion follows by optimizing the resulting tradeoff (yielding the stated risk bound), and using the scalar inequality $\sqrt{a + b} \le \sqrt{a} + \sqrt{b}$ for nonnegative $a, b$. ∎
4.3 Applications
To derive our private optimization algorithms for $\ell_p$-norm and Schatten-$p$ norm geometries, we recall the following results on the existence of bounded strongly convex regularizers.
Proposition 7 ([BCL94]).
For $p \in (1, 2]$, letting $\|x\|_p$ be the $\ell_p$ norm of a vector $x$, $\frac{1}{2}\|x\|_p^2$ is $(p-1)$-strongly convex in $\|\cdot\|_p$. Similarly, for $p \in (1, 2]$, letting $\|\mathbf X\|_p$ be the Schatten-$p$ norm of a matrix $\mathbf X$, $\frac{1}{2}\|\mathbf X\|_p^2$ is $(p-1)$-strongly convex in $\|\cdot\|_p$.
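The following snippet numerically probes the first claim of Proposition 7 (a sanity check we add for illustration), using the midpoint form of $(p-1)$-strong convexity of $\frac{1}{2}\|\cdot\|_p^2$ in $\|\cdot\|_p$:

```python
import numpy as np

def midpoint_gap(x, y, p):
    """Surplus in the midpoint form of (p-1)-strong convexity of f(z) = 0.5*||z||_p^2
    with respect to ||.||_p:  0.5 f(x) + 0.5 f(y) - f((x+y)/2) - ((p-1)/8)*||x-y||_p^2."""
    f = lambda z: 0.5 * np.sum(np.abs(z) ** p) ** (2.0 / p)
    lp = lambda z: np.sum(np.abs(z) ** p) ** (1.0 / p)
    return 0.5 * f(x) + 0.5 * f(y) - f((x + y) / 2) - ((p - 1) / 8.0) * lp(x - y) ** 2

rng = np.random.default_rng(0)
for p in [1.1, 1.5, 1.9]:
    worst = min(midpoint_gap(rng.standard_normal(20), rng.standard_normal(20), p)
                for _ in range(2000))
    print(f"p={p}: worst midpoint surplus = {worst:.3e} (nonnegative up to floating point)")
```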
We state a useful result on efficiently sampling from Lipschitz, strongly logconcave densities under value oracle access given by [GLL22] (building upon the framework of [LST21]). We slightly specialize the result of [GLL22] by giving a rephrasing sufficient for our purposes.
Proposition 8 ([GLL22], Theorem 2.3).
Let be compact and convex with . Let and let such that all are -Lipschitz in and convex, and is -strongly convex in . For , we can generate a sample within total variation of the density in value queries to some and samples from densities for some , , where
$\ell_p$ norms.
We state our results on private convex optimization under $\ell_p$ geometry. As a preliminary, we combine the norm equivalence bounds (2) and Proposition 8 to give the following algorithmic result on sampling from a logconcave distribution under value oracle access in $\ell_p$ geometry.
Proposition 9.
Let and let be compact and convex with . Let and let such that all are -Lipschitz in and convex, and is -strongly convex in . For , we can generate a sample within total variation of the density in value queries to some and samples from densities for some , , where
Proof.
For , note that each is -Lipschitz in the norm by combining (2) and the definition of Lipschitzness. Moreover, because the norm is larger than the norm, remains -strongly convex in the norm. The diameter is only affected by factors when converting norms, which is accounted for by the logarithmic term. Hence, the complexity bound follows by applying Proposition 8 under this change of parameters. For the other case of , the Lipschitz bound is , and the strong convexity bound is by a similar argument. ∎
In the following discussion, we primarily focus on the value oracle query complexity of our samplers. Generic results on logconcave sampling (see e.g. [LV07], or more recent developments by [JLLV21, Che21, KL22]) imply that the required samples from these densities can be produced in polynomial time, for all the negative log-densities that are relevant in our applications (which are all squared distances). We expect samplers which run in nearly-linear time (in $d$) may be designed for applications where $\mathcal X$ is structured, such as an $\ell_p$ ball, but for brevity we omit this discussion.
Corollary 2.
Let be a constant, and let , . Let have , and let where all are convex and -Lipschitz in . Finally, let independently and .
-
(1)
There is an -differentially private algorithm which produces such that
(7) using
(8) -
(2)
There is an -differentially private algorithm which produces such that
(9) using
Proof.
We will parameterize Assumption 1 with the function , where is an arbitrary point, and strong convexity follows from Proposition 7. By assumption, we may set . The conclusions follow by combining Theorem 3, Theorem 4. To obtain -differential privacy, it suffices to run the mechanism with privacy level , run to total variation using Proposition 9, and take a union bound. For both ERM and SCO, note that our choices of and satisfy the relation (6), namely . Since both the Lipschitz and strong convexity parameters are scaled up by in our application of Proposition 9, we have the leading-order term is which yields the conclusion. ∎
For any $p$ such that $p - 1$ is bounded away from $0$, Corollary 2 matches the information-theoretic lower bound of [BGN21] (and its subsequent sharpening by [LL22]). When this is not the case, we use the norm equivalence (2) to obtain a weaker bound.
Corollary 3.
Let , . Let have , and let where all are convex and -Lipschitz in . Finally, let independently and .
-
(1)
There is an -differentially private algorithm which produces such that
(10) using
(11) -
(2)
There is an -differentially private algorithm which produces such that
(12) using
Proof.
The term scaling as in (12), namely the non-private population risk, is known to be optimal from existing lower bounds on SCO [DJW14]. Up to a factor, the non-private empirical risk is optimal with respect to current private optimization lower bounds [BGN21, LL22].
Corollary 4.
Let , and let , . Let have , and let where all are convex and -Lipschitz in . Finally, let independently and .
-
(1)
There is an -differentially private algorithm which produces such that
(13) using
(14) -
(2)
There is an -differentially private algorithm which produces such that
(15) using
Schatten-$p$ norms.
Our results extend immediately to matrix spaces equipped with Schatten- norm geometries. We record our relevant results in the following.
Corollary 5.
Let , , , and let have . Let have , and let where all are convex and -Lipschitz in . Finally, let independently and .
-
(1)
For constant , there is an -differentially private algorithm which produces such that
The value oracle complexity of the algorithm is bounded as in (8) for in the non-logarithmic term, and in the logarithmic term.
-
(2)
For , there is an -differentially private algorithm which produces such that
The value oracle complexity of the algorithm is bounded as in (11) for in the non-logarithmic term, and in the logarithmic term.
-
(3)
For , there is an -differentially private algorithm which produces such that
The value oracle complexity of the algorithm is bounded as in (14) for in the non-logarithmic term, and in the logarithmic term.
Proof.
The privacy and utility proofs follow identically to Corollaries 2, 3, and 4, where we use the second portion of Proposition 7 instead of the first. We note that the "dimension-dependent" term in the risk inherited from Proposition 4 scales with the dimensionality of the matrix space. However, the terms in the risk due to the size of the regularizers (inherited from the tradeoffs in (2)) scale as a power of the maximum dimension of the matrices. To obtain the value oracle complexity, we note that by definition of the Schatten norm, it satisfies the relationship (2) as well. Moreover, the Schatten-$2$ norm agrees with the vector $\ell_2$ norm (when the matrix is flattened into a vector), since they are both the Frobenius norm. Hence, we may directly apply Proposition 8 after paying a norm conversion, in the same way as was done in Proposition 9. ∎
Remark on high-probability bounds.
One advantage of using a sampling-based algorithm is an immediate high-probability bound which follows due to the good concentration of Lipschitz functions over samples from strongly logconcave distributions, stated below.
Lemma 5 (Concentration of Lipschitz functions, [Led99], Section 2.3 and [BL00], Proposition 3.1).
Let $f: \mathcal X \to \mathbb R$ be a $G$-Lipschitz function and $\pi \propto \exp(-F)$ for a $\mu$-strongly convex function $F$, all with respect to the same norm $\|\cdot\|$. For all $t \ge 0$,
$$\Pr_{x \sim \pi}\left[f(x) \ge \mathbb E_{x \sim \pi}[f(x)] + t\right] \le \exp\left(-\frac{\mu t^2}{2 G^2}\right).$$
In particular, we have demonstrated that the population and empirical risks (which are Lipschitz) have good expectations. Naïvely combining Lemma 5 and our main results on the expectation utility bound then yields tight concentration around the mean in some parameter regimes, but we suspect the resulting bound is loose in general. We leave it as an interesting open problem to obtain tight high-probability bounds in all parameter regimes.
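To illustrate Lemma 5 in the simplest (Euclidean) special case, the following Monte Carlo snippet compares the empirical upper tail of a $1$-Lipschitz function under a $\mu$-strongly logconcave Gaussian against the bound $\exp(-\mu t^2/(2G^2))$; the specific density and test function are our own toy choices.

```python
import numpy as np

# Monte Carlo illustration of Lemma 5 in the Euclidean special case:
# pi = N(0, (1/mu) I) is mu-strongly logconcave in ell_2, and f(x) = ||x||_2
# is 1-Lipschitz (G = 1). The empirical tail should sit below exp(-mu t^2 / (2 G^2)).
rng = np.random.default_rng(0)
d, mu, G, n = 20, 4.0, 1.0, 200_000
samples = rng.normal(scale=1.0 / np.sqrt(mu), size=(n, d))
f_vals = np.linalg.norm(samples, axis=1)
mean_f = f_vals.mean()

for t in [0.2, 0.4, 0.6]:
    empirical = np.mean(f_vals >= mean_f + t)
    bound = np.exp(-mu * t ** 2 / (2 * G ** 2))
    print(f"t={t}: empirical tail {empirical:.4f} <= bound {bound:.4f}")
```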
Acknowledgments
We would like to thank the anonymous SODA reviewers for their helpful suggestions. We would also like to thank a reviewer and Cristóbal Guzmán for pointing out a mistake in the utility guarantees in Corollary 5 in the original version, which is corrected in this version.
References
- [Abo16] John M. Abowd. The challenge of scientific reproducibility and privacy protection for statistical agencies. Technical report, Census Scientific Advisory Committee, 2016.
- [ABRW12] Alekh Agarwal, Peter L. Bartlett, Pradeep Ravikumar, and Martin J. Wainwright. Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization. IEEE Trans. Inf. Theory, 58(5):3235–3249, 2012.
- [AC21] Kwangjun Ahn and Sinho Chewi. Efficient constrained sampling via the mirror-langevin algorithm. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 28405–28418, 2021.
- [ACG+16] Martín Abadi, Andy Chu, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Edgar R. Weippl, Stefan Katzenbeisser, Christopher Kruegel, Andrew C. Myers, and Shai Halevi, editors, Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, October 24-28, 2016, pages 308–318. ACM, 2016.
- [AFKT21] Hilal Asi, Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private stochastic convex optimization: Optimal rates in L1 geometry. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 393–403. PMLR, 2021.
- [AGL+18] Zeyuan Allen-Zhu, Ankit Garg, Yuanzhi Li, Rafael Mendes de Oliveira, and Avi Wigderson. Operator scaling via geodesically convex optimization, invariant theory and polynomial identity testing. In Ilias Diakonikolas, David Kempe, and Monika Henzinger, editors, Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, Los Angeles, CA, USA, June 25-29, 2018, pages 172–181. ACM, 2018.
- [AHK12] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta-algorithm and applications. Theory Comput., 8(1):121–164, 2012.
- [AKPS19] Deeksha Adil, Rasmus Kyng, Richard Peng, and Sushant Sachdeva. Iterative refinement for p-norm regression. In Timothy M. Chan, editor, Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 1405–1424. SIAM, 2019.
- [ANW10] Alekh Agarwal, Sahand N. Negahban, and Martin J. Wainwright. Fast global convergence rates of gradient methods for high-dimensional statistical recovery. In John D. Lafferty, Christopher K. I. Williams, John Shawe-Taylor, Richard S. Zemel, and Aron Culotta, editors, Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Proceedings of a meeting held 6-9 December 2010, Vancouver, British Columbia, Canada, pages 37–45. Curran Associates, Inc., 2010.
- [BC12] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Found. Trends Mach. Learn., 5(1):1–122, 2012.
- [BCL94] Keith Ball, Eric A. Carlen, and Elliott H. Lieb. Sharp uniform convexity and smoothness estimates for trace norms. Inventiones mathematicae, 115(1):463–482, 1994.
- [BDKT12] Aditya Bhaskara, Daniel Dadush, Ravishankar Krishnaswamy, and Kunal Talwar. Unconditional differentially private mechanisms for linear queries. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing, pages 1269–1284, 2012.
- [BE02] Olivier Bousquet and André Elisseeff. Stability and generalization. J. Mach. Learn. Res., 2:499–526, 2002.
- [BEM+17] Andrea Bittau, Úlfar Erlingsson, Petros Maniatis, Ilya Mironov, Ananth Raghunathan, David Lie, Mitch Rudominer, Ushasree Kode, Julien Tinnés, and Bernhard Seefeld. Prochlo: Strong privacy for analytics in the crowd. In Proceedings of the 26th Symposium on Operating Systems Principles, Shanghai, China, October 28-31, 2017, pages 441–459. ACM, 2017.
- [BFGT20] Raef Bassily, Vitaly Feldman, Cristóbal Guzmán, and Kunal Talwar. Stability of stochastic gradient descent on nonsmooth convex losses. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
- [BFTT19] Raef Bassily, Vitaly Feldman, Kunal Talwar, and Abhradeep Guha Thakurta. Private stochastic convex optimization with optimal rates. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 11279–11288, 2019.
- [BGM21] Raef Bassily, Cristóbal Guzmán, and Michael Menart. Differentially private stochastic optimization: New results in convex and non-convex settings. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 9317–9329, 2021.
- [BGN21] Raef Bassily, Cristóbal Guzmán, and Anupama Nandi. Non-euclidean differentially private stochastic convex optimization. In Mikhail Belkin and Samory Kpotufe, editors, Conference on Learning Theory, COLT 2021, 15-19 August 2021, Boulder, Colorado, USA, volume 134 of Proceedings of Machine Learning Research, pages 474–499. PMLR, 2021.
- [BL00] Sergey G Bobkov and Michel Ledoux. From brunn-minkowski to brascamp-lieb and to logarithmic sobolev inequalities. GAFA, Geometric and Functional Analysis, 10:1028–1052, 2000.
- [Bor75a] Christer Borell. The Brunn-Minkowski inequality in Gauss space. Inventiones mathematicae, 30:207–216, 1975.
- [Bor75b] Christer Borell. Convex set functions in d-space. Periodica Mathematica Hungarica, 6(2):111–136, 1975.
- [BST14] Raef Bassily, Adam D. Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In 55th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2014, Philadelphia, PA, USA, October 18-21, 2014, pages 464–473. IEEE Computer Society, 2014.
- [Bub15] Sébastien Bubeck. Convex optimization: Algorithms and complexity. Found. Trends Mach. Learn., 8(3-4):231–357, 2015.
- [CDWY20] Yuansi Chen, Raaz Dwivedi, Martin J. Wainwright, and Bin Yu. Fast mixing of metropolized hamiltonian monte carlo: Benefits of multi-step gradients. J. Mach. Learn. Res., 21:92:1–92:72, 2020.
- [CE17] Dario Cordero-Erausquin. Transport inequalities for log-concave measures, quantitative forms, and applications. Canada J. Math, 69(3):481–501, 2017.
- [Che21] Yuansi Chen. An almost constant lower bound of the isoperimetric coefficient in the kls conjecture. GAFA, Geometric and Functional Analysis, 31:34–61, 2021.
- [CM08] Kamalika Chaudhuri and Claire Monteleoni. Privacy-preserving logistic regression. In Daphne Koller, Dale Schuurmans, Yoshua Bengio, and Léon Bottou, editors, Advances in Neural Information Processing Systems 21, Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 8-11, 2008, pages 289–296. Curran Associates, Inc., 2008.
- [CMS11] Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. Differentially private empirical risk minimization. J. Mach. Learn. Res., 12:1069–1109, 2011.
- [CRT06] Emmanuel J. Candès, Justin K. Romberg, and Terence Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory, 52(2):489–509, 2006.
- [DFO20] Jelena Diakonikolas, Maryam Fazel, and Lorenzo Orecchia. Fair packing and covering on a relative scale. SIAM J. Optim., 30(4):3284–3314, 2020.
- [DG21] Jelena Diakonikolas and Cristóbal Guzmán. Complementary composite minimization, small gradients in general norms, and applications to regression problems. CoRR, abs/2101.11041, 2021.
- [DJW14] John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. Privacy aware learning. J. ACM, 61(6):38:1–38:57, 2014.
- [dKL18] Etienne de Klerk and Monique Laurent. Comparison of lasserre’s measure-based bounds for polynomial optimization to bounds obtained by simulated annealing. Math. Oper. Res., 43(4):1317–1325, 2018.
- [DKM+06] Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In Serge Vaudenay, editor, Advances in Cryptology - EUROCRYPT 2006, 25th Annual International Conference on the Theory and Applications of Cryptographic Techniques, St. Petersburg, Russia, May 28 - June 1, 2006, Proceedings, volume 4004 of Lecture Notes in Computer Science, pages 486–503. Springer, 2006.
- [DKY17] Bolin Ding, Janardhan Kulkarni, and Sergey Yekhanin. Collecting telemetry data privately. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 3571–3580, 2017.
- [DMNS06] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam D. Smith. Calibrating noise to sensitivity in private data analysis. In Shai Halevi and Tal Rabin, editors, Theory of Cryptography, Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4-7, 2006, Proceedings, volume 3876 of Lecture Notes in Computer Science, pages 265–284. Springer, 2006.
- [DRS21] Jinshuo Dong, Aaron Roth, and Weijie J. Su. Gaussian differential privacy. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 84(1):3–37, 2021.
- [EPK14] Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. RAPPOR: randomized aggregatable privacy-preserving ordinal response. In Gail-Joon Ahn, Moti Yung, and Ninghui Li, editors, Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, Scottsdale, AZ, USA, November 3-7, 2014, pages 1054–1067. ACM, 2014.
- [FG04] Matthieu Fradelizi and Olivier Guédon. The extreme points of subsets of s-concave probabilities and a geometric localization theorem. Discrete & Computational Geometry, 31(2):327–335, 2004.
- [FKT20] Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private stochastic convex optimization: optimal rates in linear time. In Konstantin Makarychev, Yury Makarychev, Madhur Tulsiani, Gautam Kamath, and Julia Chuzhoy, editors, Proccedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020, Chicago, IL, USA, June 22-26, 2020, pages 439–449. ACM, 2020.
- [GLL22] Sivakanth Gopi, Yin Tat Lee, and Daogao Liu. Private convex optimization via exponential mechanism. In Po-Ling Loh and Maxim Raginsky, editors, Conference on Learning Theory, 2-5 July 2022, London, UK, volume 178 of Proceedings of Machine Learning Research, pages 1948–1989. PMLR, 2022.
- [GTU22] Arun Ganesh, Abhradeep Thakurta, and Jalaj Upadhyay. Langevin diffusion: An almost universal algorithm for private euclidean (convex) optimization. CoRR, abs/2204.01585, 2022.
- [HKRC18] Ya-Ping Hsieh, Ali Kavis, Paul Rolland, and Volkan Cevher. Mirrored langevin dynamics. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 2883–2892, 2018.
- [HLL+22] Yuxuan Han, Zhicong Liang, Zhipeng Liang, Yang Wang, Yuan Yao, and Jiheng Zhang. Private streaming SCO in ℓp geometry with applications in high dimensional online decision making. In International Conference on Machine Learning, pages 8249–8279. PMLR, 2022.
- [HT10] Moritz Hardt and Kunal Talwar. On the geometry of differential privacy. In Proceedings of the forty-second ACM symposium on Theory of computing, pages 705–714, 2010.
- [Jia21] Qijia Jiang. Mirror langevin monte carlo: the case under isoperimetry. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 715–725, 2021.
- [JLLV21] He Jia, Aditi Laddha, Yin Tat Lee, and Santosh S. Vempala. Reducing isotropy and volume to KLS: an o*(n) volume algorithm. In Samir Khuller and Virginia Vassilevska Williams, editors, STOC ’21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, Virtual Event, Italy, June 21-25, 2021, pages 961–974. ACM, 2021.
- [JLT20] Arun Jambulapati, Jerry Li, and Kevin Tian. Robust sub-gaussian principal component analysis and width-independent schatten packing. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
- [JT14] Prateek Jain and Abhradeep Guha Thakurta. (near) dimension independent risk bounds for differentially private learning. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, volume 32 of JMLR Workshop and Conference Proceedings, pages 476–484. JMLR.org, 2014.
- [KJ16] Shiva Prasad Kasiviswanathan and Hongxia Jin. Efficient private empirical risk minimization for high-dimensional learning. In Maria-Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pages 488–497. JMLR.org, 2016.
- [KL22] Bo’az Klartag and Joseph Lehec. Bourgain’s slicing problem and kls isoperimetry up to polylog. arXiv preprint arXiv:2203.15551, 2022.
- [KLL21] Janardhan Kulkarni, Yin Tat Lee, and Daogao Liu. Private non-smooth ERM and SCO in subquadratic steps. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 4053–4064, 2021.
- [KLOS14] Jonathan A. Kelner, Yin Tat Lee, Lorenzo Orecchia, and Aaron Sidford. An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations. In Chandra Chekuri, editor, Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 217–226. SIAM, 2014.
- [KLS95] Ravi Kannan, László Lovász, and Miklós Simonovits. Isoperimetric problems for convex bodies and a localization lemma. Discrete & Computational Geometry, 13(3):541–559, 1995.
- [Kol11] Alexander V. Kolesnikov. Mass transportation and contractions. arXiv preprint arXiv:1103.1479, 2011.
- [KPSW19] Rasmus Kyng, Richard Peng, Sushant Sachdeva, and Di Wang. Flows in almost linear time via adaptive preconditioning. In Moses Charikar and Edith Cohen, editors, Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26, 2019, pages 902–913. ACM, 2019.
- [KST12] Daniel Kifer, Adam D. Smith, and Abhradeep Thakurta. Private convex optimization for empirical risk minimization with applications to high-dimensional regression. In Shie Mannor, Nathan Srebro, and Robert C. Williamson, editors, COLT 2012 - The 25th Annual Conference on Learning Theory, June 25-27, 2012, Edinburgh, Scotland, volume 23 of JMLR Proceedings, pages 25.1–25.40. JMLR.org, 2012.
- [KV06] Adam Tauman Kalai and Santosh S. Vempala. Simulated annealing for convex optimization. Math. Oper. Res., 31(2):253–266, 2006.
- [Led99] Michel Ledoux. Concentration of measure and logarithmic Sobolev inequalities. Seminaire de probabilities XXXIII, 1999.
- [LL22] Daogao Liu and Zhou Lu. Lower bounds for differentially private erm: Unconstrained and non-euclidean. arXiv preprint arXiv:2105.13637, 2022.
- [LS93] László Lovász and Miklós Simonovits. Random walks in a convex body and an improved volume algorithm. Random structures & algorithms, 4(4):359–412, 1993.
- [LST21] Yin Tat Lee, Ruoqi Shen, and Kevin Tian. Structured logconcave sampling with a restricted gaussian oracle. In Conference on Learning Theory, pages 2993–3050. PMLR, 2021.
- [LTVW22] Ruilin Li, Molei Tao, Santosh S. Vempala, and Andre Wibisono. The mirror langevin algorithm converges with vanishing bias. In Sanjoy Dasgupta and Nika Haghtalab, editors, International Conference on Algorithmic Learning Theory, 29-1 April 2022, Paris, France, volume 167 of Proceedings of Machine Learning Research, pages 718–742. PMLR, 2022.
- [LV07] László Lovász and Santosh S. Vempala. The geometry of logconcave functions and sampling algorithms. Random Struct. Algorithms, 30(3):307–358, 2007.
- [MS08] Emanuel Milman and Sasha Sodin. An isoperimetric inequality for uniformly log-concave measures and uniformly convex bodies. Journal of Functional Analysis, 254(5):1235–1268, 2008.
- [MT07] Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2007), October 20-23, 2007, Providence, RI, USA, Proceedings, pages 94–103. IEEE Computer Society, 2007.
- [Nem04] Arkadi Nemirovski. Interior point polynomial time methods in convex programming. Lecture notes, 42(16):3215–3224, 2004.
- [NY83] A. Nemirovski and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, 1983.
- [OV00] Felix Otto and Cédric Villani. Generalization of an inequality by talagrand and links with the logarithmic sobolev inequality. Journal of Functional Analysis, 173(2):361–400, 2000.
- [Sha12] Shai Shalev-Shwartz. Online learning and online convex optimization. Found. Trends Mach. Learn., 4(2):107–194, 2012.
- [ST74] Vladimir Sudakov and Boris Tsirelson. Extremal properties of half-spaces for spherically invariant measures. J. Soviet Math, 9:9–18, 1974.
- [Tea17] Apple Differential Privacy Team. Learning with privacy at scale. Technical report, Apple, 2017.
- [ZPFP20] Kelvin Shuangjian Zhang, Gabriel Peyré, Jalal Fadili, and Marcelo Pereyra. Wasserstein control of mirror langevin monte carlo. In Jacob D. Abernethy and Shivani Agarwal, editors, Conference on Learning Theory, COLT 2020, 9-12 July 2020, Virtual Event [Graz, Austria], volume 125 of Proceedings of Machine Learning Research, pages 3814–3841. PMLR, 2020.
Appendix A Private ERM and SCO under strong convexity
In this section, we derive our results for private ERM and SCO in general norms under the assumption that the sample losses are strongly convex. We will state our results for private ERM (Theorem 5) and SCO (Theorem 6) with respect to an arbitrary compact convex subset of a -dimensional normed space, satisfying the following Assumption 2.
Assumption 2.
We make the following assumptions.
-
(1)
There is a compact, convex set $\mathcal X \subseteq \mathbb R^d$ equipped with a norm $\|\cdot\|$.
-
(2)
There is a set $\mathcal S$ such that for any $s \in \mathcal S$, there is a function $f(\cdot; s): \mathcal X \to \mathbb R$ which is $G$-Lipschitz and $\lambda$-strongly convex in $\|\cdot\|$.
Theorem 5 (Private ERM).
Proof.
Let be the realization of (1) when is replaced with a neighboring dataset which agrees in all entries except some sample . By Assumption 2, we have is -Lipschitz, and both and are -strongly convex (all with respect to ). Hence, combining Theorem 2 and Fact 2 shows the mechanism is -differentially private, since
Let . We obtain the risk guarantee by the calculation (see Proposition 4)
∎
Theorem 6 (Private SCO).
Proof.
For the given choice , the privacy proof follows identically to Theorem 3, so we focus on the risk proof. We follow the notation of Proposition 6 and let independently from . By exchanging the expectation and minimum and using that ,
where we bounded the empirical risk in the proof of Theorem 5. Next, let be the density . Our mechanism is symmetric, and hence by Proposition 6,
where the outer expectations are over the randomness of drawing . Finally, for any fixed realization of , the densities satisfy the assumption of Corollary 1 with , and is -Lipschitz, so Corollary 1 shows that
Combining the above three displays bounds the population risk by
for our given value of . ∎