Stability of
Stochastic Gradient Descent
on Nonsmooth Convex Losses
Abstract
Uniform stability is a notion of algorithmic stability that bounds the worst-case change in the model output by the algorithm when a single data point in the dataset is replaced. An influential work of Hardt et al. (2016) provides strong upper bounds on the uniform stability of the stochastic gradient descent (SGD) algorithm on sufficiently smooth convex losses. These results led to important progress in understanding the generalization properties of SGD and to several applications to differentially private convex optimization for smooth losses.
Our work is the first to address uniform stability of SGD on nonsmooth convex losses. Specifically, we provide sharp upper and lower bounds for several forms of SGD and full-batch GD on arbitrary Lipschitz nonsmooth convex losses. Our lower bounds show that, in the nonsmooth case, (S)GD can be inherently less stable than in the smooth case. On the other hand, our upper bounds show that (S)GD is sufficiently stable for deriving new and useful bounds on generalization error. Most notably, we obtain the first dimension-independent generalization bounds for multi-pass SGD in the nonsmooth case. In addition, our bounds allow us to derive a new algorithm for differentially private nonsmooth stochastic convex optimization with optimal excess population risk. Our algorithm is simpler and more efficient than the best known algorithm for the nonsmooth case (Feldman et al., 2020).
1 Introduction
Successful applications of a machine learning algorithm require the algorithm to generalize well to unseen data. Thus understanding and bounding the generalization error of machine learning algorithms is an area of intense theoretical interest and practical importance. The single most popular approach to modern machine learning relies on the use of continuous optimization techniques to optimize the appropriate loss function, most notably the stochastic (sub)gradient descent (SGD) method. Yet the generalization properties of SGD are still not well understood.
Consider the setting of stochastic convex optimization (SCO). In this problem, we are interested in the minimization of the population risk $F(w) \doteq \mathbb{E}_{z\sim\mathcal{D}}[f(w,z)]$, where $\mathcal{D}$ is an arbitrary and unknown distribution over a set $\mathcal{Z}$ for which we have access to an i.i.d. sample of size $n$, $S=(z_1,\ldots,z_n)$; and $f(\cdot,z)$ is convex and $L$-Lipschitz for all $z\in\mathcal{Z}$. The performance of an algorithm $\mathcal{A}$ is quantified by its expected excess population risk, $\mathbb{E}[F(\mathcal{A}(S))] - \min_{w\in\mathcal{W}} F(w)$,
where the expectation is taken with respect to the randomness of the sample and internal randomness of . A standard way to bound the excess risk is given by its decomposition into optimization error (a.k.a. training error) and generalization error (see eqn. (3) in Sec. 2). The optimization error can be easily measured empirically but assessing the generalization error requires access to fresh samples from the same distribution. Thus bounds on the generalization error lead directly to provable guarantees on the excess population risk.
Classical analysis of SGD yields bounds on the excess population risk of one-pass SGD. In particular, with an appropriately chosen step size, SGD gives a solution with expected excess population risk of $O(1/\sqrt{n})$ (for $L, R = O(1)$), and this rate is optimal (Nemirovsky and Yudin, 1983). However, this analysis does not apply to multi-pass SGD, which is ubiquitous in practice.
In an influential work, Hardt et al. (2016) gave the first bounds on the generalization error of general forms of SGD (such as those that make multiple passes over the data). Their analysis relies on algorithmic stability, a classical tool for proving bounds on the generalization error. Specifically, they gave strong bounds on the uniform stability of several variants of SGD on convex and smooth losses (with $\beta$-smoothness sufficing when all the step sizes are at most $2/\beta$). Uniform stability bounds the worst-case change in loss of the model output by the algorithm on the worst-case point when a single data point in the dataset is replaced (Bousquet and Elisseeff, 2002). Formally, for a randomized algorithm $\mathcal{A}$, a loss function $f$, neighboring datasets $S\simeq S'$, and a point $z$, where $S\simeq S'$ denotes that the two datasets differ only in a single data point, we say $\mathcal{A}$ is $\gamma$-uniformly stable if
$$\sup_{S\simeq S'}\ \sup_{z}\ \mathbb{E}\big[f(\mathcal{A}(S),z)-f(\mathcal{A}(S'),z)\big] \;\le\; \gamma,$$
where the expectation is over the internal randomness of $\mathcal{A}$. Stronger notions of stability can also be considered, e.g., bounding the probability, over the internal randomness of $\mathcal{A}$, that the loss difference exceeds $\gamma$. Using stability, Hardt et al. (2016) showed that several variants of SGD simultaneously achieve the optimal tradeoff between the excess empirical risk and stability, with both being $O(1/\sqrt{n})$. Several works have used this approach to derive new generalization properties of SGD (London, 2017; Chen et al., 2018; Feldman and Vondrák, 2019).
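For reference, the classical route from uniform stability to generalization, used repeatedly in what follows (Bousquet and Elisseeff, 2002; Hardt et al., 2016): a $\gamma$-uniformly stable algorithm satisfies, in expectation over the sample and the algorithm's randomness,

```latex
\mathbb{E}\big[F(\mathcal{A}(S)) - F_S(\mathcal{A}(S))\big] \;\le\; \gamma ,
```

where $F$ and $F_S$ denote the population and empirical risks, so a stability bound immediately bounds the (expected) generalization error.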
The key insight of Hardt et al. (2016) is that a gradient step on a sufficiently smooth convex function is a nonexpansive operator (that is, it does not increase the distance between points). Unfortunately, this property does not hold for nonsmooth losses such as the hinge loss. As a result, no non-trivial bounds on the uniform stability of SGD were previously known in this case.
Uniform stability is also closely related to the notion of differential privacy (DP). DP upper bounds the worst-case change in the output distribution of an algorithm when a single data point in the dataset is replaced (Dwork et al., 2006b). This connection has been exploited in the design of several DP algorithms for SCO. In particular, bounds on the uniform stability of SGD from (Hardt et al., 2016) have been crucial in the design and analysis of new DP-SCO algorithms (Wu et al., 2017; Dwork and Feldman, 2018; Feldman et al., 2018; Bassily et al., 2019; Feldman et al., 2020).
1.1 Our Results
We establish tight bounds on the uniform stability of the (stochastic) subgradient descent method on nonsmooth convex losses. These results demonstrate that in the nonsmooth case SGD can be substantially less stable. At the same time we show that SGD has strong stability properties even in the regime when its iterations can be expansive.
For convenience, we describe our results in terms of uniform argument stability (UAS), which bounds the sensitivity of the output in the $\ell_2$-norm w.r.t. an arbitrary change in a single data point. Formally, a (randomized) algorithm $\mathcal{A}$ has $\delta$-UAS if
$$\sup_{S\simeq S'}\ \mathbb{E}\,\big\|\mathcal{A}(S)-\mathcal{A}(S')\big\| \;\le\; \delta. \qquad (1)$$
This notion is implicit in existing analyses of uniform stability (Bousquet and Elisseeff, 2002; Shalev-Shwartz et al., 2010; Hardt et al., 2016) and was explicitly defined by Liu et al. (2017). In this work, we prove stronger, high-probability, upper bounds on the random variable $\|\mathcal{A}(S)-\mathcal{A}(S')\|$ (in fact, for both GD and fixed-permutation SGD we obtain upper bounds that hold with probability 1, whereas for sampling-with-replacement SGD we obtain a high-probability upper bound), and we provide matching lower bounds for the weaker, in-expectation, notion of UAS (1). A summary of our bounds is in Table 1. For simplicity, they are provided for constant step size; general step sizes (for upper bounds) are provided in Section 3.
| Algorithm | H.p. upper bound | Exp. upper bound | Exp. lower bound |
|---|---|---|---|
| GD (full batch) | | | |
| SGD (w/ replacement) | | | |
| SGD (fixed permutation) | | | |
Compared to the smooth case (Hardt et al., 2016), the main difference is the presence of the additional $\eta L\sqrt{T}$ term. This term has important implications for the generalization bounds derived from UAS. The first is that the standard step size used in single-pass SGD leads to a vacuous stability bound. Unfortunately, as shown by our lower bounds, this is unavoidable (at least in high dimension). However, by decreasing the step size and increasing the number of steps, one obtains a variant of SGD with a nearly optimal balance between the UAS and the excess empirical risk.
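In symbols, and up to constant factors, the contrast can be summarized as follows (a hedged reading of Table 1 for constant step size $\eta$, $T$ iterations, and $L$-Lipschitz losses):

```latex
\underbrace{\delta_{\mathrm{smooth}} \;\lesssim\; \frac{\eta L T}{n}}_{\text{Hardt et al. (2016)}}
\qquad\text{vs.}\qquad
\delta_{\mathrm{nonsmooth}} \;\lesssim\; \eta L\sqrt{T} + \frac{\eta L T}{n} .
```

With the single-pass choice $\eta=\Theta(1/\sqrt{T})$ and $T=n$, the new $\eta L\sqrt{T}$ term is a constant, which is what makes the standard step size vacuous for stability.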
We highlight two major consequences of our bounds:
- **Generalization bounds for multi-pass nonsmooth SGD.** We prove dimension-independent bounds on the generalization error of multi-pass SGD with $K$ passes (Theorem 5.1). This result can be easily combined with training error guarantees to provide excess risk bounds for this algorithm. Since training error can be measured directly, our generalization bounds immediately yield strong guarantees on the excess risk in practical scenarios where small training error can be certified.
- **Differentially private stochastic convex optimization for nonsmooth losses.** We show that a variant of standard noisy SGD (Bassily et al., 2014) with constant step size and a suitable number of iterations yields the optimal excess population risk for convex nonsmooth losses under $(\varepsilon,\delta)$-differential privacy. The best previous algorithm for this problem is substantially more involved: it relies on a multi-phase regularized SGD with decreasing step sizes and variable noise rates, and it uses a larger number of gradient computations (Feldman et al., 2020).
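To fix ideas, here is a minimal sketch of the noisy-SGD template the second bullet refers to. All specifics are illustrative placeholders: the loss $f(w,z)=|w-z|$, the step count, and the noise scale `sigma` are assumptions, and the calibration of `sigma` to an actual $(\varepsilon,\delta)$-DP guarantee (following Bassily et al., 2014) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sgd(data, eta, T, sigma, R=1.0):
    # Noisy projected SGD on the nonsmooth 1-Lipschitz loss f(w, z) = |w - z|.
    # sigma is a per-step Gaussian noise scale; choosing it to satisfy
    # (eps, delta)-DP is a separate calibration step not shown here.
    n, w, avg = len(data), 0.0, 0.0
    for _ in range(T):
        z = data[rng.integers(n)]
        g = np.sign(w - z) + sigma * rng.standard_normal()  # noisy subgradient
        w = float(np.clip(w - eta * g, -R, R))              # projection step
        avg += w / T                                        # average iterate
    return avg

data = rng.uniform(-1, 1, 100)
w_priv = noisy_sgd(data, eta=0.01, T=10_000, sigma=1.0)
assert -1.0 <= w_priv <= 1.0   # output stays in the feasible set
```

The averaged iterate is returned because, as in the non-private analysis, averaging preserves the stability and optimization guarantees.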
1.2 Overview of Techniques
- **Upper bounds.** When gradient steps are nonexpansive, upper-bounding UAS requires simply summing the differences between the gradients on the neighboring datasets at the steps when the replaced data point is used (Hardt et al., 2016). This gives the bound of $O(\eta L T/n)$ in the smooth case.
By contrast, in the nonsmooth case, UAS may increase even when the gradient step is performed on the same function. As a result it may increase in every single iteration. However, we use the fact that the difference in the subgradients has negative inner product with the difference between the iterates themselves (by monotonicity of the subgradient). Thus the increase in distance satisfies a recurrence with a quadratic and a linear term. Solving this recurrence leads to our upper bounds.
- **Lower bounds.** The lower bounds are based on a function with highly nonsmooth behavior around the origin. More precisely, it is the maximum of linear functions plus a small linear drift that is controlled by a single data point. We show that, when starting the algorithm from the origin, the presence of the linear drift pushes the iterate into a trajectory in which each subgradient step is orthogonal to the current iterate. Thus, when the dimension is sufficiently large, we get the $\eta L\sqrt{T}$ increase in UAS. Our lower bounds are also robust to averaging of the iterates.
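Writing $\delta_t = \|x_t - y_t\|$ for the distance between the two trajectories and $a_t$ for the subgradient discrepancy at step $t$, the recurrence behind the upper bounds (our reconstruction; it is solved formally in Lemma 3.1) reads:

```latex
\delta_{t+1}^2 \;\le\; \delta_t^2 + 2\eta_t a_t \delta_t + 4\eta_t^2 L^2
\qquad\Longrightarrow\qquad
\delta_T \;\le\; 2L\sqrt{\sum_{t<T}\eta_t^2} \;+\; \sum_{t<T}\eta_t a_t .
```

The quadratic term $4\eta_t^2 L^2$ is exactly the expansiveness that is absent in the smooth case, and it accumulates to the $\eta L\sqrt{T}$ term in the bounds.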
1.3 Other Related Work
Stability is a classical approach to proving generalization bounds, pioneered by Rogers and Wagner (1978) and Devroye and Wagner (1979a,b). It is based on analysis of the sensitivity of the learning algorithm to changes in the dataset, such as leaving one of the data points out or replacing it with a different one. The choice of how to measure the effect of the change and various ways to average over multiple changes give rise to a variety of stability notions (e.g., Bousquet and Elisseeff, 2002; Mukherjee et al., 2006; Shalev-Shwartz et al., 2010). Uniform stability was introduced by Bousquet and Elisseeff (2002) in order to derive general bounds on the generalization error that hold with high probability. These bounds have been significantly improved in a recent sequence of works (Feldman and Vondrák, 2018, 2019; Bousquet et al., 2019). A long line of work focuses on the relationship between various notions of stability and learnability in the supervised setting (see Kearns and Ron, 1999; Poggio et al., 2004; Shalev-Shwartz et al., 2010 for an overview). These works employ relatively weak notions of average stability and derive a variety of asymptotic equivalence results. Chen et al. (2018) establish limits of stability in the smooth convex setting, proving that accelerated methods must satisfy strong stability lower bounds. Stability-based data-dependent generalization bounds for continuous losses were studied in (Maurer, 2017; Kuzborskij and Lampert, 2018).
First applications of uniform stability in the context of stochastic convex optimization relied on the stability of the empirical minimizer for strongly convex losses (Bousquet and Elisseeff, 2002). Therefore, a natural approach to achieving uniform stability (and also UAS) is to add a strongly convex regularizer and solve the ERM to high accuracy (Shalev-Shwartz et al., 2010). Recent applications of this approach can be found, for example, in (Koren and Levy, 2015; Charles and Papailiopoulos, 2018; Feldman et al., 2020). In contrast, our approach does not require strong convexity and applies to all iterates of SGD, not only to a highly accurate empirical minimizer.
The classical approach to generalization relies on uniform convergence of the empirical risk to the population risk. Unfortunately, without additional structural assumptions on the convex functions, a lower bound of $\Omega(\sqrt{d/n})$ on the rate of uniform convergence for convex SCO is known (Shalev-Shwartz et al., 2010; Feldman, 2016). The dependence on the dimension $d$ makes the bound obtained via the uniform-convergence approach vacuous in the high-dimensional settings common in modern applications.
Differentially private convex optimization has been studied extensively for over a decade (see, e.g., Chaudhuri and Monteleoni, 2008; Chaudhuri et al., 2011; Jain et al., 2012; Kifer et al., 2012; Smith and Thakurta, 2013; Bassily et al., 2014; Ullman, 2015; Jain and Thakurta, 2014; Talwar et al., 2015; Bassily et al., 2019; Feldman et al., 2020). However, until recently, the research focused on minimization of the empirical risk. Population risk for DP-SCO was first studied by Bassily et al. (2014), who gave an upper bound on the excess risk (Bassily et al., 2014, Sec. F). A recent work of Bassily et al. (2019) established that the optimal rate of the excess population risk for $(\varepsilon,\delta)$-DP SCO algorithms is $\Theta\big(\tfrac{1}{\sqrt{n}} + \tfrac{\sqrt{d}}{\varepsilon n}\big)$. Their algorithms are relatively inefficient, especially in the nonsmooth case. Subsequently, Feldman et al. (2020) gave several new algorithms for DP-SCO with the optimal population risk. For sufficiently smooth losses, their algorithms use a linear number of gradient computations. In the nonsmooth case, as mentioned earlier, their algorithm requires a superlinear number of gradient computations and is significantly more involved than the algorithm shown here.
2 Notation and Preliminaries
Throughout we work on the Euclidean space $\mathbb{R}^d$, endowed with the Euclidean norm, which we denote unambiguously by $\|\cdot\|$. Vectors are denoted by lower case letters, e.g. $x$. Random variables (either scalar or vector) are denoted by boldface letters, e.g. $\mathbf{x}$. We denote the Euclidean ball of radius $r>0$ centered at $x$ by $B(x,r)$. In what follows, $\mathcal{W}\subseteq\mathbb{R}^d$ is a compact convex set, and we assume we know its Euclidean radius $R$, i.e., $\mathcal{W}\subseteq B(0,R)$. Let $\Pi_{\mathcal{W}}$ be the Euclidean projection onto $\mathcal{W}$, which is nonexpansive. A convex function $f:\mathcal{W}\to\mathbb{R}$ is $L$-Lipschitz if
$$|f(w)-f(v)| \;\le\; L\,\|w-v\| \qquad \forall\, w, v \in \mathcal{W}. \qquad (2)$$
Functions with these properties are guaranteed to be subdifferentiable. Moreover, in the convex case, property (2) is “almost” equivalent to having subgradients bounded as $\|g\| \le L$ for all $g\in\partial f(w)$ and $w\in\mathcal{W}$. (For the equivalence to hold it is necessary that the function is well-defined and satisfies (2) over an open set containing $\mathcal{W}$; see Thm. 3.61 in Beck (2017). We will assume this is the case, which can be done w.l.o.g.) With slight abuse of notation, given a convex function $f$, we will denote by $\nabla f(w)$ an arbitrary choice of subgradient $g\in\partial f(w)$. In this work, we focus on the class of convex $L$-Lipschitz functions defined over a compact convex set $\mathcal{W}$. Since the Euclidean radius of $\mathcal{W}$ is bounded by $R$, we will assume that the range of these functions lies in $[-LR, LR]$.
Nonsmooth stochastic convex optimization: We study the standard setting of nonsmooth stochastic convex optimization,
$$\min_{w\in\mathcal{W}}\ \Big\{ F(w) \doteq \mathbb{E}_{\mathbf{z}\sim\mathcal{D}}[f(w,\mathbf{z})] \Big\}.$$
Here, $\mathcal{D}$ is an unknown distribution supported on a set $\mathcal{Z}$, and $f(\cdot,z)$ is convex and $L$-Lipschitz for all $z\in\mathcal{Z}$. In the stochastic setting, we assume access to an i.i.d. sample from $\mathcal{D}$, denoted as $\mathbf{S}=(\mathbf{z}_1,\ldots,\mathbf{z}_n)$. Here, we will use the bold symbol $\mathbf{S}$ to denote a random sample from the unknown distribution. A fixed (not random) dataset from $\mathcal{Z}^n$ will be denoted as $S=(z_1,\ldots,z_n)$.
A stochastic optimization algorithm is a (randomized) mapping $\mathcal{A}:\mathcal{Z}^n\to\mathcal{W}$. When the algorithm is randomized, $\mathcal{A}(\mathbf{S})$ is a random variable depending on both the sample and its own random coins. The performance of $\mathcal{A}$ is quantified by its excess population risk $F(\mathcal{A}(\mathbf{S})) - \min_{w\in\mathcal{W}} F(w)$.
Note that the excess population risk is a random variable (due to randomness in the sample and any possible internal randomness of the algorithm). Our guarantees on the excess population risk will be expressed in terms of upper bounds on this quantity that hold with high probability over the randomness of both the sample and the random coins of the algorithm.
Empirical risk minimization (ERM) is one of the most standard approaches to stochastic convex optimization. In the ERM problem, we are given a sample $S=(z_1,\ldots,z_n)$, and the goal is to find $\min_{w\in\mathcal{W}} F_S(w)$, where $F_S(w) \doteq \frac{1}{n}\sum_{i=1}^n f(w,z_i)$ is the empirical risk.
One way to bound the excess population risk is to solve the ERM problem and appeal to uniform convergence; however, uniform convergence rates in this case are dimension-dependent, $\Theta(\sqrt{d/n})$ in the worst case (Feldman, 2016).
Risk decomposition: Guaranteeing low excess population risk for a general algorithm is a nontrivial task. A common way to bound it is by decomposing it into generalization, optimization and approximation error:
$$F(\mathcal{A}(S)) - \min_{w\in\mathcal{W}} F(w) \;=\; \underbrace{F(\mathcal{A}(S)) - F_S(\mathcal{A}(S))}_{\text{generalization}} + \underbrace{F_S(\mathcal{A}(S)) - \min_{w\in\mathcal{W}} F_S(w)}_{\text{optimization}} + \underbrace{\min_{w\in\mathcal{W}} F_S(w) - \min_{w\in\mathcal{W}} F(w)}_{\text{approximation}} \qquad (3)$$
Here, the optimization error corresponds to the empirical optimization gap, which can be bounded by standard optimization convergence analysis. The expected value of the approximation error is at most zero. One can show, e.g., by Hoeffding’s inequality, that the approximation error is bounded by $O(LR\sqrt{\log(1/\beta)/n})$ with probability at least $1-\beta$ (see Lemma 2.1 below). Therefore, to establish bounds on the excess risk it suffices to upper bound the optimization and generalization errors.
Lemma 2.1.
For any $\beta\in(0,1)$, with probability at least $1-\beta$, the approximation error is bounded as
$$\min_{w\in\mathcal{W}} F_{\mathbf{S}}(w) - \min_{w\in\mathcal{W}} F(w) \;\le\; LR\sqrt{\frac{2\log(1/\beta)}{n}}.$$
Proof.
First, note that $\mathbb{E}[F_{\mathbf{S}}(w)] = F(w)$ for any fixed $w\in\mathcal{W}$. Hence, by independence and the fact that $|f(w,\mathbf{z})| \le LR$ with probability 1 for all $w\in\mathcal{W}$, the following follows from Hoeffding’s inequality: with probability at least $1-\beta$, $F_{\mathbf{S}}(w^\star) - F(w^\star) \le LR\sqrt{2\log(1/\beta)/n}$, where $w^\star \in \arg\min_{w\in\mathcal{W}} F(w)$.
Finally, note that by definition of the empirical minimum, we have $\min_{w\in\mathcal{W}} F_{\mathbf{S}}(w) \le F_{\mathbf{S}}(w^\star)$. Combining this with the above bound completes the proof.
∎
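The Hoeffding step above can be illustrated numerically on a hypothetical instance (our choice: $f(w,z)=|w-z|$ with $z\sim\mathrm{Uniform}(-1,1)$, so $L=R=1$, the population minimizer is $w^\star=0$, and $F(w^\star)=\mathbb{E}|z|=1/2$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, beta = 400, 200, 0.01
F_star = 0.5   # F(w*) = E|z| = 1/2 for z ~ Uniform(-1, 1)

# Fluctuation of F_S(w*) around F(w*) across many independent samples S
devs = [np.mean(np.abs(rng.uniform(-1, 1, n))) - F_star for _ in range(trials)]

# Hoeffding-scale deviation: L * R * sqrt(2 * log(1/beta) / n)
assert max(devs) <= np.sqrt(2 * np.log(1 / beta) / n)
```

The observed fluctuations are on the order of $1/\sqrt{n}$, comfortably inside the Hoeffding envelope for $\beta=0.01$.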
We say that two datasets $S, S'\in\mathcal{Z}^n$ are neighboring, denoted $S\simeq S'$, if they differ in a single entry only; i.e., there exists $i\in[n]$ s.t. $z_j = z_j'$ for all $j\ne i$.
Uniform argument stability (UAS): Given an algorithm $\mathcal{A}$ and neighboring datasets $S\simeq S'$, we define the uniform argument stability (UAS) random variable as $\delta_{\mathcal{A}}(S,S') \doteq \|\mathcal{A}(S) - \mathcal{A}(S')\|$.
The randomness here is due to any possible internal randomness of $\mathcal{A}$. For any $L$-Lipschitz function $f(\cdot,z)$, we have that $|f(\mathcal{A}(S),z) - f(\mathcal{A}(S'),z)| \le L\,\delta_{\mathcal{A}}(S,S')$. Hence, upper bounds on UAS can be easily transformed into upper bounds on uniform stability.
In this work, we will consider two types of bounds on UAS.
2.1 High-probability guarantees on UAS
In Section 3, we give upper bounds on UAS for three variants of the (stochastic) gradient descent algorithm, namely, (i) full-batch gradient descent, (ii) sampling-with-replacement stochastic gradient descent, and (iii) fixed-permutation stochastic gradient descent. Variant (i) is deterministic (and hence UAS is a deterministic quantity). For variant (ii), for any pair of neighboring datasets, we give an upper bound on the UAS random variable that holds with high probability over the algorithm’s internal randomness (the sampling with replacement). For variant (iii), we give an upper bound on UAS that holds for an arbitrary choice of permutation; in particular, for any random permutation our upper bound on the UAS random variable holds with probability 1.
High-probability upper bounds on UAS lead to high-probability upper bounds on the generalization error. We will use the following theorem, which follows in a straightforward fashion from (Feldman and Vondrák, 2019, Theorem 1.1), to derive generalization-error guarantees for our results in Sections 5 and 6 based on our UAS upper bounds in Section 3.
Theorem 2.2 (follows from Theorem 1.1 in Feldman and Vondrák (2019)).
Let $\mathcal{A}$ be a randomized algorithm. For any pair of neighboring datasets $S\simeq S'$, suppose that the UAS random variable of $\mathcal{A}$ satisfies:
Then there is a constant $c>0$ such that for any distribution $\mathcal{D}$ over $\mathcal{Z}$ and any $\beta\in(0,1)$, we have
where $F$ and $F_{\mathbf{S}}$ are as defined earlier.
2.2 Expectation guarantees on UAS
Our results also include upper and lower bounds on $\sup_{S\simeq S'}\mathbb{E}[\delta_{\mathcal{A}}(S,S')]$; that is, the supremum of the expected value of the UAS random variable, where the supremum is taken over all pairs of neighboring datasets. In Section 3.3.1, we provide an upper bound on this quantity for the sampling-with-replacement stochastic gradient descent. The upper bounds for the other two variants of the gradient descent method hold in the strongest sense (they hold with probability 1). Moreover, in Appendix F, we give slightly tighter expectation guarantees on UAS for both sampling-with-replacement SGD and fixed-permutation SGD with a uniformly random permutation.
In Section 4, we give lower bounds on this quantity for the two variants of the stochastic subgradient method, together with a deterministic lower bound for the full-batch variant.
3 Upper Bounds on Uniform Argument Stability
3.1 The Basic Lemma
We begin by stating a key lemma that encompasses the UAS bound analysis of multiple variants of (S)GD. In particular, all of our UAS upper bounds are obtained by almost a direct application of this lemma. In the lemma we consider two gradient descent trajectories associated to different sequences of objective functions. The degree of concordance of the two sequences, quantified by the distance between the subgradients at the current iterate, controls the deviation between the trajectories. We note that this distance condition is satisfied for all (S)GD variants we study in this work.
Lemma 3.1.
Let $(x_t)_{0\le t\le T}$ and $(y_t)_{0\le t\le T}$, with $x_0 = y_0$, be online gradient descent trajectories for convex $L$-Lipschitz objectives $(f_t)_{t<T}$ and $(g_t)_{t<T}$, respectively; i.e.,
$$x_{t+1} = \Pi_{\mathcal{W}}\big(x_t - \eta_t \nabla f_t(x_t)\big), \qquad y_{t+1} = \Pi_{\mathcal{W}}\big(y_t - \eta_t \nabla g_t(y_t)\big)$$
for all $t<T$. Suppose that for every $t<T$, $\|\nabla f_t(y_t) - \nabla g_t(y_t)\| \le a_t$ for scalars $a_t\ge 0$. Then,
$$\|x_T - y_T\| \;\le\; 2L\sqrt{\sum_{t<T}\eta_t^2} \;+\; \sum_{t<T}\eta_t a_t.$$
Proof.
Let $Z_t \doteq \|x_t - y_t\|^2$. By definition of the trajectories, it is clear that $Z_0 = 0$. For $t<T$, by nonexpansiveness of the projection we have that $Z_{t+1} \le \big\|x_t - \eta_t\nabla f_t(x_t) - y_t + \eta_t\nabla g_t(y_t)\big\|^2$.
Now, we derive a recurrence for $Z_t$:
$$Z_{t+1} \;\le\; Z_t - 2\eta_t\big\langle \nabla f_t(x_t) - \nabla g_t(y_t),\, x_t - y_t\big\rangle + \eta_t^2\big\|\nabla f_t(x_t) - \nabla g_t(y_t)\big\|^2 \;\le\; Z_t - 2\eta_t\big\langle \nabla f_t(y_t) - \nabla g_t(y_t),\, x_t - y_t\big\rangle + 4\eta_t^2 L^2,$$
where at the last step we use the monotonicity of the subgradient. Note that
$$-\big\langle \nabla f_t(y_t) - \nabla g_t(y_t),\, x_t - y_t\big\rangle \;\le\; a_t\sqrt{Z_t}.$$
Hence,
$$Z_{t+1} \;\le\; Z_t + 2\eta_t a_t\sqrt{Z_t} + 4\eta_t^2 L^2. \qquad (4)$$
Now we prove the following bound by induction (notice this claim proves the result):
$$\sqrt{Z_t} \;\le\; 2L\sqrt{\sum_{s<t}\eta_s^2} \;+\; \sum_{s<t}\eta_s a_s.$$
Indeed, the claim is clearly true for $t=0$. For the inductive step, we assume it holds for some $t$. To prove the result we consider two cases: first, when $Z_{t+1} \le Z_t$, by the induction hypothesis we have
In the other case, $Z_{t+1} > Z_t$, we use (4)
which is equivalent to
Taking the square root of this inequality, and using the subadditivity of the square root, we obtain the inductive step, and therefore the result. ∎
3.2 Upper Bounds for the Full Batch GD
As a direct corollary of Lemma 3.1, we derive the following upper bound on UAS for the batch gradient descent algorithm.
Theorem 3.2.
Let $T>0$ and $(\eta_t)_{t<T}$ be arbitrary positive step sizes. The full-batch gradient descent (Algorithm 1) has uniform argument stability
$$\delta_{\mathcal{A}}(S,S') \;\le\; \min\Big\{2R,\; 2L\sqrt{\sum_{t<T}\eta_t^2} + \frac{2L}{n}\sum_{t<T}\eta_t\Big\}.$$
Proof.
The bound of $2R$ is obtained directly from the diameter bound on $\mathcal{W}$. Therefore, we focus exclusively on the second term. Let $S\simeq S'$ be arbitrary neighboring datasets differing in the $i$-th entry, and consider the trajectories $(x_t)_t$, $(y_t)_t$ associated with the batch GD method on datasets $S$ and $S'$, respectively. We use Lemma 3.1 with $f_t = F_S$ and $g_t = F_{S'}$, for all $t$. Notice that
$$\|\nabla F_S(y) - \nabla F_{S'}(y)\| = \frac{1}{n}\big\|\nabla f(y,z_i) - \nabla f(y,z_i')\big\| \le \frac{2L}{n},$$
since $f(\cdot,z)$ is $L$-Lipschitz for every $z$; in particular, Lemma 3.1 applies with $a_t = 2L/n$. We conclude by Lemma 3.1 that for all $t\le T$,
$$\|x_t - y_t\| \;\le\; 2L\sqrt{\sum_{s<t}\eta_s^2} \;+\; \frac{2L}{n}\sum_{s<t}\eta_s.$$
Hence, the stability bound holds for all the iterates, and thus for the averaged iterate by the triangle inequality. ∎
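To make the theorem concrete, here is a small numerical check on a hypothetical one-dimensional instance with the nonsmooth loss $f(w,z)=|w-z|$ (so $L=1$); the bound tested is the hedged form $2\eta\sqrt{T} + 2\eta T/n$ from the statement above, and all names are illustrative.

```python
import numpy as np

def full_batch_gd(data, eta, T, R=1.0):
    # Projected subgradient descent on F_S(w) = (1/n) * sum_i |w - z_i|
    w, traj = 0.0, []
    for _ in range(T):
        g = np.mean(np.sign(w - data))          # average subgradient at w
        w = float(np.clip(w - eta * g, -R, R))  # projection onto [-R, R]
        traj.append(w)
    return np.array(traj)

n, T, eta = 50, 200, 0.01
S = np.linspace(-0.5, 0.5, n)
S_prime = S.copy()
S_prime[0] = 0.9   # neighboring dataset: a single point replaced

# Worst-case deviation between the two trajectories (empirical UAS)
dev = np.max(np.abs(full_batch_gd(S, eta, T) - full_batch_gd(S_prime, eta, T)))
bound = 2 * eta * np.sqrt(T) + 2 * eta * T / n
assert dev <= bound
```

On this benign instance the observed deviation is far below the worst-case bound; the lower bounds in Section 4 use a carefully designed high-dimensional construction to saturate it.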
3.3 Upper Bounds for SGD
Next, we state and prove upper bounds on UAS for two variants of stochastic gradient descent: sampling-with-replacement SGD (Section 3.3.1) and fixed-permutation SGD (Section 3.3.2). Here, we give strong upper bounds that hold with high probability (for sampling-with-replacement SGD) and with probability 1 (for fixed-permutation SGD). In Appendix F, we derive tighter upper bounds for these two variants of SGD in certain regimes of the number of iterations relative to the number of samples in the dataset; however, the bounds derived in that case hold only in expectation.
3.3.1 Sampling-with-replacement SGD
Next, we study the uniform argument stability of the sampling-with-replacement stochastic gradient descent (Algorithm 2). This algorithm has the benefit that each iteration is extremely cheap compared to Algorithm 1. Despite these savings, we will show that the same bound on UAS holds with high probability.
We now state and prove our upper bound for sampling-with-replacement SGD.
Theorem 3.3.
Let $T>0$ and $(\eta_t)_{t<T}$ be arbitrary positive step sizes. The uniform argument stability of the sampling-with-replacement SGD (Algorithm 2) satisfies:
$$\sup_{S\simeq S'}\ \mathbb{E}\big[\delta_{\mathcal{A}}(S,S')\big] \;\le\; \min\Big\{2R,\; 2L\sqrt{\sum_{t<T}\eta_t^2} + \frac{2L}{n}\sum_{t<T}\eta_t\Big\}.$$
Moreover, if $\eta_t = \eta$ for all $t$ then, for any pair of neighboring datasets, with probability at least $1-\rho$ (over the algorithm’s internal randomness), the UAS random variable is bounded as
$$\delta_{\mathcal{A}}(S,S') \;\le\; 2L\eta\sqrt{T} + \frac{2L\eta T}{n} + O\bigg(L\eta\sqrt{\frac{T\log(1/\rho)}{n}}\bigg).$$
Proof.
The bound of $2R$ trivially follows from the diameter bound on $\mathcal{W}$. We thus focus on the second term of the bound. Let $S\simeq S'$ be arbitrary neighboring datasets differing in the $i$-th entry, and consider the trajectories $(x_t)_t$, $(y_t)_t$ associated with the sampling-with-replacement stochastic subgradient method on datasets $S$ and $S'$, respectively. We use Lemma 3.1 with $f_t = f(\cdot, z_{\mathbf{i}_t})$ and $g_t = f(\cdot, z'_{\mathbf{i}_t})$, where $\mathbf{i}_t$ is the index sampled at step $t$. Let us define $a_t \doteq 2L\,\mathbf{1}\{\mathbf{i}_t = i\}$. Note that at every step $t$, $a_t = 2L$ with probability $1/n$, and $a_t = 0$ otherwise. Moreover, note that $(a_t)_t$ is an independent sequence of (scaled) Bernoulli random variables. Finally, note that $\mathbb{E}\big[\sum_{t<T}\eta_t a_t\big] = \frac{2L}{n}\sum_{t<T}\eta_t$.
Hence, by Lemma 3.1, for any realization of the trajectories of the SGD method, we have
$$\|x_t - y_t\| \;\le\; 2L\sqrt{\sum_{s<t}\eta_s^2} \;+\; \sum_{s<t}\eta_s a_s \qquad (5)$$
where $a_s = 2L\,\mathbf{1}\{\mathbf{i}_s = i\}$. Taking expectation of (5), we have
This establishes the upper bound on UAS, but only in expectation. Now, we proceed to prove the high-probability bound. Here, we assume that the step size is fixed; that is, $\eta_t = \eta$ for all $t$. Note that each $a_t$ has variance at most $4L^2/n$. Hence, by Chernoff’s bound (applied to scaled Bernoulli random variables, with the exponent expressed in terms of the variance), we have
Therefore, with probability at least $1-\rho$, we have
Putting this together with (5), with probability at least $1-\rho$, we have
Finally, by the triangle inequality, we get that with probability at least $1-\rho$, the same stability bound holds for the average of the iterates, $\bar{x}_T = \frac{1}{T}\sum_{t\le T} x_t$. ∎
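The theorem can be illustrated on the same hypothetical one-dimensional instance used for full-batch GD ($f(w,z)=|w-z|$, $L=1$); the bound tested is a hedged reading of Theorem 3.3 with $\rho=0.1$, and the shared index sequence plays the role of the coupled internal randomness.

```python
import numpy as np

rng = np.random.default_rng(1)

def sgd_with_replacement(data, eta, T, idx, R=1.0):
    # One SGD trajectory; idx fixes the algorithm's coins so that the two
    # neighboring runs are coupled through the same sampled indices
    w, traj = 0.0, []
    for t in range(T):
        w = float(np.clip(w - eta * np.sign(w - data[idx[t]]), -R, R))
        traj.append(w)
    return np.array(traj)

n, T, eta, rho = 50, 200, 0.01, 0.1
idx = rng.integers(n, size=T)          # shared internal randomness
S = np.linspace(-0.5, 0.5, n)
S_prime = S.copy()
S_prime[0] = 0.9                       # neighboring dataset

dev = np.max(np.abs(sgd_with_replacement(S, eta, T, idx)
                    - sgd_with_replacement(S_prime, eta, T, idx)))
bound = 2*eta*np.sqrt(T) + 2*eta*T/n + 2*eta*np.sqrt(T*np.log(1/rho)/n)
assert dev <= bound
```

The trajectories only diverge at the few steps where the replaced index is sampled, plus the nonsmooth expansiveness in between, which is exactly the structure exploited in the proof.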
3.3.2 Upper Bounds for the Fixed Permutation SGD
In Algorithm 3, we describe the fixed-permutation stochastic gradient descent. This algorithm works in epochs, where each epoch is a single pass over the data. The order in which the data is used is the same across epochs and is given by a permutation $\pi$. The algorithm can alternatively be described without the epoch loop simply by
$$x_{t+1} \;=\; \Pi_{\mathcal{W}}\big(x_t - \eta_t\,\nabla f(x_t,\, z_{\pi(t \bmod n)})\big). \qquad (6)$$
We show that the same UAS bound as for batch gradient descent and sampling-with-replacement SGD holds for the fixed-permutation SGD. We also observe that a slightly tighter bound can be achieved if we consider the expectation guarantee on UAS when the permutation $\pi$ is chosen uniformly at random. We leave these details to Theorem F.2 in the Appendix.
In the next result, we assume that the sequence of step sizes is non-increasing, which is indeed the case for almost all known variants of SGD.
Theorem 3.4.
Let $T>0$, $(\eta_t)_{t<T}$ be positive step sizes, and $\pi$ be any permutation over $[n]$. Suppose the step sizes form a non-increasing sequence. Then the uniform argument stability of the fixed-permutation SGD (Algorithm 3) is bounded as
$$\delta_{\mathcal{A}}(S,S') \;\le\; \min\Big\{2R,\; 2L\sqrt{\sum_{t<T}\eta_t^2} + \frac{2L}{n}\sum_{t<T}\eta_t\Big\}.$$
Proof.
Again, the bound of $2R$ is trivial. Now, we show the second term of the bound. Let $S\simeq S'$ be arbitrary neighboring datasets differing in the $i$-th entry, and consider the trajectories $(x_t)_t$, $(y_t)_t$ associated with the fixed-permutation stochastic subgradient method on datasets $S$ and $S'$, respectively. Since the datasets are arbitrary, we may assume without loss of generality that $\pi$ is the identity, whereas the perturbed coordinate $i$ is arbitrary. We use Lemma 3.1 with $f_t = f(\cdot, z_{\pi(t \bmod n)})$ and $g_t = f(\cdot, z'_{\pi(t \bmod n)})$. It is easy to see then that $a_t = 2L\,\mathbf{1}_t$, where $\mathbf{1}_t$ is the indicator of the event that the perturbed data point is used at step $t$. Hence, by Lemma 3.1, we have
where at the last step we used the fact that $(\eta_t)_t$ is non-increasing.
Since the bound holds for all the iterates, using the triangle inequality, it also holds for the output averaged over the iterates from the epochs. ∎
3.4 Discussion of the upper bounds: examples of specific instantiations
The upper bounds on stability from this section all behave very similarly. Let us explore the consequences of the obtained rates in terms of generalization bounds for different choices of the step size sequence. As a case study, we will consider excess risk bounds for the full-batch subgradient method (Algorithm 1), but similar conclusions hold for all the variants that we studied. We emphasize that prior to this work, no dimension-independent bounds on the excess risk were known for this method (specifically, for nonsmooth losses and without explicit regularization).
To bound the excess risk, we will use the risk decomposition, eqn. (3). For simplicity, we will only study excess risk bounds in expectation (in Section 6 we consider stronger, high-probability, bounds). In this case, the stability-implies-generalization result (Theorem 2.2) simplifies to (Bousquet and Elisseeff, 2002; Hardt et al., 2016)
$$\Big|\mathbb{E}\big[F(\mathcal{A}(S)) - F_S(\mathcal{A}(S))\big]\Big| \;\le\; L\,\sup_{S\simeq S'}\mathbb{E}\big[\delta_{\mathcal{A}}(S,S')\big].$$
Finally, the approximation error (Lemma 2.1) simplifies as well: it is upper bounded by 0 in expectation.
- **Fixed stepsize:** Let $\eta_t = \eta$ for all $t$. By Thm. 3.2, UAS is bounded by $2L\big(\eta\sqrt{T} + \eta T/n\big)$. On the other hand, the standard analysis of subgradient descent guarantees that $\mathbb{E}\big[F_S(\bar{x}_T)\big] - \min_{w\in\mathcal{W}} F_S(w) \le \frac{R^2}{2\eta T} + \frac{\eta L^2}{2}$. Therefore, by the expected risk decomposition (3)
If we consider the standard choice, $\eta = \Theta(1/\sqrt{T})$ and $T = n$, the bound above is at least a constant (due to the first term). Consequently, upper bounds obtained from this approach are vacuous.
In order to deal with the $\eta L\sqrt{T}$ term, we need to substantially decrease our stepsize while running the algorithm for longer. For example, $\eta = \Theta(T^{-3/4})$ gives $\eta\sqrt{T} = \Theta(T^{-1/4})$, so by choosing $T = n^2$ we obtain an expected excess risk bound of $O(1/\sqrt{n})$, which is optimal. We will see next that it is not possible to obtain the same rates from this bound if $T = o(n^2)$, for any choice of stepsize. It is also an easy observation that, at least for constant stepsize, it is not possible to recover the optimal excess risk if $T = o(n^2)$.
- **Varying stepsize:** For a general sequence of stepsizes $(\eta_t)_{t<T}$, the optimization guarantees of Algorithm 1 are the following
From the risk decomposition, we have
In fact, we can show that any choice of step sizes that makes the quantity above $O(1/\sqrt{n})$ must necessarily have $T = \Omega(n^2)$. Indeed, notice that in such case
Therefore, by the Cauchy–Schwarz inequality,
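The fixed-stepsize tradeoff discussed above can be checked numerically. The bound forms below are our hedged reading of the discussion (optimization term $R^2/(\eta T) + \eta L^2$ plus the UAS-based generalization term $L(\eta\sqrt{T} + \eta T/n)$, constants suppressed); the function name and parameter choices are illustrative.

```python
import math

def risk_bound(eta, T, n, L=1.0, R=1.0):
    # Hedged overall bound: optimization term + stability-based generalization
    optimization = R * R / (eta * T) + eta * L * L
    stability = L * (eta * math.sqrt(T) + eta * T / n)
    return optimization + stability

n = 1000
single_pass = risk_bound(eta=1 / math.sqrt(n), T=n, n=n)  # eta ~ 1/sqrt(T), T = n
slow_long = risk_bound(eta=n ** -1.5, T=n * n, n=n)       # eta ~ T^(-3/4), T = n^2

# The single-pass choice is stuck at a constant (eta * sqrt(T) = 1),
# while the slow-and-long choice scales like 1/sqrt(n).
assert slow_long < 0.2 < single_pass
```

This is exactly the $\Omega(n^2)$ iteration-complexity phenomenon: recovering the optimal $O(1/\sqrt{n})$ rate forces a quadratic number of gradient steps.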
The high iteration complexity required to obtain optimal bounds motivates studying whether it is possible to improve our uniform argument stability bounds. We will show that, unfortunately, they are sharp up to absolute constant factors.
4 Lower Bounds on Uniform Argument Stability
In this section we provide matching lower bounds for the previously studied first-order methods. These lower bounds show that our analyses are tight, up to absolute constant factors.
We note that it is possible to prove a general-purpose lower bound on stability by appealing to sample complexity lower bounds for stochastic convex optimization (Nemirovsky and Yudin, 1983). This approach was first studied in the smooth convex case in (Chen et al., 2018), where these lower bounds are sharp. However, in the nonsmooth case they are very far from the bounds in the previous section. The idea is that for a sufficiently small step size, a first-order method must incur nontrivial uniform stability. Details of this lower bound can be found in Appendix C. This reasoning leads to a lower bound on uniform argument stability that can be added to any other lower bound we prove for specific algorithms that enjoy rates similar to those of gradient descent.
Next, we prove finer lower bounds on the UAS of specific algorithms. For this, note that the objective functions we use are polyhedral, so the subdifferential is a polytope at any point. Since the algorithm should work for any subgradient oracle, we will let the subgradients provided to the algorithm be extreme points of the subdifferential. Moreover, we can make adversarial choices of the chosen subgradient.
4.1 Lower Bounds for Full Batch GD
Theorem 4.1.
Let $T>0$ and $\eta>0$. For the full-batch gradient descent (Alg. 1) with constant step size $\eta$, there exist neighboring datasets $S\simeq S'$ such that the UAS is lower bounded as
The proof of this result is deferred to Appendix D, due to space considerations.
4.2 Lower Bounds for SGD Sampled with Replacement
We use a construction similar to that of the previous result to prove a sharp lower bound on the uniform argument stability of stochastic gradient descent where the sampling is with replacement.
Theorem 4.2.
Let $T>0$, $\eta>0$, and $n>0$. For the sampling-with-replacement stochastic gradient descent (Algorithm 2) with constant step size $\eta$, there exist neighboring datasets $S\simeq S'$ such that the uniform argument stability satisfies
Proof.
Let , and , . Consider and define
where (i.e., supported on the first coordinates). Let the random sequence of indices used by the algorithm: . Let and be neighboring datasets, and denote by and the respective stochastic gradient descent trajectories on and , initialized at . It is easy to see that under , we have for all . Now, suppose that . Then, we only have for all , where . After time , , and consequently for all , where . Note that conditioned on any fixed value for .
Let . Hence, we have . Let from now on. Note that conditioned on any value for is a binomial random variable taking values in . Hence, conditioned on , by the binomial tail, we always have for all (in particular, this conditional probability is zero when ). Also, note that the same upper bound is valid without conditioning on . Hence, by the law of total expectation, we have
where . Hence,
We choose sufficiently large such that . Hence, we have
This gives a lower bound on . Proving that satisfies the same lower bound is analogous to the proof of Theorem 4.1. Finally, can be added to the lower bound by the argument in Appendix C. ∎
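The binomial counting at the heart of this proof is easy to check numerically. In the illustrative sketch below, K counts how often the differing index appears among T with-replacement draws from an index set of size n; K is Binomial(T, 1/n) with mean T/n, matching the conditioning step in the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, trials = 20, 100, 20000

# K = number of times the differing index (say, index n-1) appears among
# T i.i.d. uniform draws from {0, ..., n-1}; K ~ Binomial(T, 1/n).
draws = rng.integers(0, n, size=(trials, T))
K = (draws == n - 1).sum(axis=1)

# Monte Carlo mean of K should be close to T / n = 5.
mean_K = K.mean()
```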
4.3 Lower Bounds for the Fixed Permutation Stochastic Gradient Descent
Finally, we study fixed permutation SGD. The proof of this result is deferred to Appendix E.
Theorem 4.3.
Let , and . For the fixed permutation stochastic gradient descent (Algorithm 3) with constant step size , there exist such that the uniform argument stability is lower bounded by
5 Generalization Guarantees for Multi-pass SGD
One important implication of our stability bounds is that they provide non-trivial generalization error guarantees for multi-pass SGD on nonsmooth losses. Multi-pass SGD is one of the most extensively used settings of SGD in practice, where SGD is run for passes (epochs) over the dataset (namely, the number of iterations ). To the best of our knowledge, aside from the dimension-dependent bounds based on uniform convergence (Shalev-Shwartz et al.,, 2010), no generalization error guarantees are known for the multi-pass setting on general nonsmooth convex losses. Given our uniform stability upper bounds, we can prove the following generalization error guarantees for the multi-pass setting of sampling-with-replacement SGD. Analogous results can be obtained for fixed-permutation SGD.
Theorem 5.1.
Running Algorithm 2 for passes (i.e., for iterations) with constant stepsize yields the following generalization error guarantees:
and there exists , such that for any , with probability ,
Proof.
First, by the expectation guarantee on UAS given in Theorem 3.3 together with the fact that the losses are -Lipschitz, it follows that Algorithm 2 (when run for passes with constant stepsize ) is -uniformly stable, where . Then, by (Hardt et al.,, 2016, Thm. 2.2), we have
For the high-probability bound, we combine the high-probability guarantee on UAS given in Theorem 3.3 with Theorem 2.2 to get the claimed bound. ∎
These bounds on generalization error can be used to obtain excess risk bounds using the standard risk decomposition (see (3)). In practical scenarios where one can certify small optimization error for multi-pass SGD, Thm. 5.1 can be used to readily estimate the excess risk. In Section 6.2 we provide a worst-case analysis showing that multi-pass SGD is guaranteed to attain the optimal excess risk of within passes (with an appropriately chosen constant stepsize).
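As a toy illustration of the multi-pass setting (our own instance; the data, stepsize, and pass count are not tied to the theorem's choices), the sketch below runs sampling-with-replacement SGD for several passes on the nonsmooth 1-Lipschitz losses f(x; z) = |x − z|, whose empirical risk is minimized at the sample median.

```python
import numpy as np

def multipass_sgd(data, passes, eta, x0=0.0, rng=None):
    """Multi-pass SGD with replacement (Algorithm 2 style) on the toy
    nonsmooth 1-Lipschitz losses f(x; z) = |x - z|.  Runs T = passes * n
    iterations with constant stepsize eta and returns the averaged iterate."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(data)
    T = passes * n
    x, avg = x0, 0.0
    for _ in range(T):
        z = data[rng.integers(n)]        # sample with replacement
        x = x - eta * np.sign(x - z)     # subgradient step on |x - z|
        avg += x / T
    return avg

data = np.array([1.0] * 60 + [-1.0] * 40)  # empirical minimizer: median = 1.0
x_bar = multipass_sgd(data, passes=5, eta=0.05)
```

With the constant stepsize eta, the iterates hover within an O(eta) band of the empirical minimizer once they reach it, which is the tradeoff the stepsize choice in Thm. 5.1 balances against stability.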
6 Implications of Our Stability Bounds
6.1 Differentially Private Nonsmooth Stochastic Convex Optimization
Now we show an application of our stability upper bound to differentially private stochastic convex optimization (DP-SCO). Here, the input sample to the stochastic convex optimization algorithm is a sensitive and private dataset, so the algorithm is required to satisfy the notion of -differential privacy. A randomized algorithm is -differentially private if, for any pair of datasets , and for all events in the output range of , we have , where the probability is taken over the random coins of (Dwork et al., 2006a; Dwork et al., 2006b). For meaningful privacy guarantees, the typical settings of the privacy parameters are and .
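The definition can be checked exactly on the simplest private mechanism. The sketch below (a standard textbook example, not part of the paper) verifies pure (eps, 0)-differential privacy for randomized response on a single bit: for every output o and every pair of neighboring inputs, the ratio of output probabilities is at most e^eps.

```python
import math

# Randomized response on one bit: report the true bit with probability
# e^eps / (1 + e^eps), otherwise flip it.  (eps, 0)-DP requires
#   P[A(b) = o] <= e^eps * P[A(b') = o]
# for every output o and neighboring inputs b, b'.
eps = math.log(3.0)                       # => report truth w.p. 3/4
p_true = math.exp(eps) / (1 + math.exp(eps))

def out_prob(bit, o):
    """Probability that randomized response on input `bit` outputs `o`."""
    return p_true if o == bit else 1 - p_true

worst_ratio = max(
    out_prob(b, o) / out_prob(1 - b, o) for b in (0, 1) for o in (0, 1)
)
# For this mechanism the worst-case ratio equals e^eps exactly.
```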
Using our UAS upper bounds, we show that a simple variant of noisy SGD (Bassily et al.,, 2014), which requires only gradient computations, yields the optimal excess population risk for DP-SCO. In terms of running time, this is a small improvement over the algorithm of Feldman et al., (2020) for the nonsmooth case, which requires gradient computations. More importantly, our algorithm is substantially simpler. For comparison, the algorithm in (Feldman et al.,, 2020) is based on a multi-phase SGD, where in each phase a separate regularized ERM problem is solved. To ensure privacy, the output of each phase is perturbed with an appropriately chosen amount of noise before being used as the initial point for the next phase.
The description of the algorithm is given in Algorithm 4.
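Since the precise pseudocode lives in Algorithm 4, here is a heavily simplified Python sketch of the same template: noisy projected SGD, where Gaussian noise is added to each sampled subgradient before the projected step. All parameters (eta, sigma, T) are placeholders of our own; in particular, sigma must be calibrated to the target (eps, delta) budget via the moments accountant, which we do not reproduce here.

```python
import numpy as np

def noisy_projected_sgd(data, loss_grad, R, eta, sigma, T, rng=None):
    """Sketch in the spirit of Algorithm 4: at each step, add Gaussian
    noise to a with-replacement sampled subgradient, then project back
    onto the Euclidean ball of radius R.  The noise scale sigma must be
    calibrated (e.g., via the moments accountant) to the desired
    (eps, delta) privacy budget; that calibration is omitted here."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = data.shape
    x = np.zeros(d)
    iterates = []
    for _ in range(T):
        z = data[rng.integers(n)]                       # sample with replacement
        g = loss_grad(x, z) + sigma * rng.standard_normal(d)
        x = x - eta * g
        nrm = np.linalg.norm(x)
        if nrm > R:                                     # projection onto the ball
            x *= R / nrm
        iterates.append(x)
    return np.mean(iterates, axis=0)                    # averaged iterate

# Toy nonsmooth 1-Lipschitz loss: f(x; z) = ||x - z||_2.
def grad(x, z):
    diff = x - z
    nrm = np.linalg.norm(diff)
    return diff / nrm if nrm > 0 else np.zeros_like(diff)

data = np.full((50, 3), 0.5)
x_out = noisy_projected_sgd(data, grad, R=1.0, eta=0.05, sigma=0.1, T=200)
```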
Theorem 6.1 (Privacy guarantee of ).
Algorithm 4 is -differentially private.
The proof of the theorem follows the same lines as (Bassily et al.,, 2014, Theorem 2.1), but we replace their privacy analysis of the Gaussian mechanism with the tighter Moments Accountant analysis of Abadi et al., (2016).
Theorem 6.2 (Excess risk of ).
In Algorithm 4, let . Then, for any , with probability at least over the randomness in both the sample and the algorithm, we have
Proof.
Fix any confidence parameter . First, for any data set and any step size , by Lemma H.1 in Appendix H, we have the following high-probability guarantee on the training error of :
With probability at least we have
where the probability is over the sampling in step 4 and the independent Gaussian noise vectors . Given the setting of in the theorem, we get
(7) |
Next, it is not hard to show that attains the same UAS bound as (Theorem 3.3). Indeed, the only difference is the noise addition in the gradient step; however, this does not impact the stability analysis, because the sequence of noise vectors is the same for the trajectories corresponding to the pair of neighboring datasets. Hence, the argument follows the same lines as the proof of Theorem 3.3, since the noise terms cancel out. Thus, we conclude that for any pair of neighboring datasets, with probability at least (over the randomness of ), the uniform argument stability of is bounded as: where . Given the setting of in the theorem, this bound reduces to .
Hence, by Theorem 2.2, with probability at least (over the randomness in both the i.i.d. dataset and the algorithm), the generalization error of is bounded as
(8) |
where in the first bound is a universal constant.
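The noise-cancellation step in the stability argument above can be seen concretely in a toy coupled simulation (our own, with linear losses so that per-example gradients are constant): running full-batch noisy GD on two neighboring datasets with a shared noise sequence, the trajectory difference equals the noiseless divergence exactly.

```python
import numpy as np

# Neighboring datasets of linear losses f(x; z) = <z, x>, so the
# per-example gradient is the constant vector z.  With a *shared* noise
# sequence, the noise cancels in the difference of the two (full-batch)
# noisy-GD trajectories, leaving exactly the noiseless divergence
# eta * T * (z_n' - z_n) / n.
rng = np.random.default_rng(1)
n, d, T, eta = 10, 4, 30, 0.1

Z = rng.standard_normal((n, d))
Zp = Z.copy()
Zp[-1] = rng.standard_normal(d)      # neighboring dataset: last point replaced

noise = rng.standard_normal((T, d))  # the same noise drives both runs
x, xp = np.zeros(d), np.zeros(d)
for t in range(T):
    x = x - eta * (Z.mean(axis=0) + noise[t])
    xp = xp - eta * (Zp.mean(axis=0) + noise[t])

expected_gap = eta * T * (Zp[-1] - Z[-1]) / n
```

For general convex losses the cancellation happens in the difference recursion rather than in closed form, but the mechanism is the same: the shared noise vector drops out of each per-step comparison.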
Remark 6.3.
Using the expectation guarantee on UAS given in Theorem 3.3 and following similar steps of the analysis above, we can also show that the expected excess population risk of is bounded as:
6.2 Nonsmooth Stochastic Convex Optimization with Multi-pass SGD
Another application of our results concerns obtaining optimal excess risk for stochastic nonsmooth convex optimization via multi-pass SGD. It is known that one-pass SGD is guaranteed to have optimal excess risk, which can be shown via martingale arguments that trace back to the stochastic approximation literature (Robbins and Monro,, 1951; Kiefer and Wolfowitz,, 1952).
Using our UAS bound, we show that Algorithms 2 and 3 can recover nearly-optimal high-probability excess risk bounds by making passes over the data. Analogous bounds hold for Algorithm 1; however, these are less interesting from a computational efficiency perspective.
In short, we prove the following excess population risk bounds; their analyses are deferred to Appendix G.
7 Discussion and Open Problems
In this work we provide sharp upper and lower bounds on uniform argument stability for the (stochastic) subgradient method in stochastic nonsmooth convex optimization. Our lower bounds show inherent limitations of stability bounds compared to the smooth convex case; however, we can still derive optimal population risk bounds by reducing the step size and running the algorithms for a larger number of iterations. We provide applications of this idea to differentially private noisy SGD and to two versions of SGD (sampling-with-replacement and fixed-permutation SGD).
The first open problem regards lower bounds that are robust to general forms of algorithmic randomization. Unfortunately, the methods presented here are not robust in this respect, since random initialization would prevent the trajectories from reaching the region where the objective is highly nonsmooth (or would reach it in a way that does not increase the UAS). One may try to strengthen the lower bound by using a random rotation of the objective; however, this leads to an uninformative lower bound. Finding distributional constructions for lower bounds against randomization is a very interesting future direction.
Our privacy application provides optimal risk for an algorithm that runs for steps, which is impractical for large datasets. Other algorithms, e.g. in (Feldman et al.,, 2020), run into similar limitations. Proving that quadratic running time is necessary for general nonsmooth DP-SCO is a very interesting open problem that can be formalized in terms of the oracle complexity of stochastic convex optimization (Nemirovsky and Yudin,, 1983) under stability and/or privacy constraints.
Acknowledgements
Part of this work was done while the authors were visiting the Simons Institute for the Theory of Computing during the “Data Privacy: Foundations and Applications” program. RB’s research is supported by NSF Awards AF-1908281, SHF-1907715, Google Faculty Research Award, and OSU faculty start-up support. Work by CG was partially funded by the Millennium Science Initiative of the Ministry of Economy, Development, and Tourism, grant “Millennium Nucleus Center for the Discovery of Structures in Complex Data.” CG would like to thank Nicolas Flammarion and Juan Peypouquet for extremely valuable discussions at early stages of this work.
References
- Abadi et al., (2016) Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., and Zhang, L. (2016). Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308–318. ACM.
- Bassily et al., (2019) Bassily, R., Feldman, V., Talwar, K., and Thakurta, A. G. (2019). Private stochastic convex optimization with optimal rates. In Advances in Neural Information Processing Systems, pages 11279–11288.
- Bassily et al., (2014) Bassily, R., Smith, A., and Thakurta, A. (2014). Private empirical risk minimization: Efficient algorithms and tight error bounds. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science (full version available at arXiv:1405.7085), pages 464–473. IEEE.
- Beck, (2017) Beck, A. (2017). First-Order Methods in Optimization. MOS-SIAM Series on Optimization. Society for Industrial and Applied Mathematics.
- Bousquet and Elisseeff, (2002) Bousquet, O. and Elisseeff, A. (2002). Stability and generalization. JMLR, 2:499–526.
- Bousquet et al., (2019) Bousquet, O., Klochkov, Y., and Zhivotovskiy, N. (2019). Sharper bounds for uniformly stable algorithms. CoRR, abs/1910.07833.
- Charles and Papailiopoulos, (2018) Charles, Z. and Papailiopoulos, D. (2018). Stability and generalization of learning algorithms that converge to global optima. In Dy, J. and Krause, A., editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 745–754, Stockholmsmässan, Stockholm Sweden. PMLR.
- Chaudhuri and Monteleoni, (2008) Chaudhuri, K. and Monteleoni, C. (2008). Privacy-preserving logistic regression. In NIPS.
- Chaudhuri et al., (2011) Chaudhuri, K., Monteleoni, C., and Sarwate, A. D. (2011). Differentially private empirical risk minimization. Journal of Machine Learning Research, 12(Mar):1069–1109.
- Chen et al., (2018) Chen, Y., Jin, C., and Yu, B. (2018). Stability and convergence trade-off of iterative optimization algorithms. CoRR, abs/1804.01619.
- Devroye and Wagner, (1979a) Devroye, L. and Wagner, T. J. (1979a). Distribution-free inequalities for the deleted and holdout error estimates. IEEE Trans. Information Theory, 25(2):202–207.
- Devroye and Wagner, (1979b) Devroye, L. and Wagner, T. J. (1979b). Distribution-free performance bounds with the resubstitution error estimate (corresp.). IEEE Trans. Information Theory, 25(2):208–210.
- Dwork and Feldman, (2018) Dwork, C. and Feldman, V. (2018). Privacy-preserving prediction. CoRR, abs/1803.10266. Extended abstract in COLT 2018.
- Dwork et al., (2006a) Dwork, C., Kenthapadi, K., McSherry, F., Mironov, I., and Naor, M. (2006a). Our data, ourselves: Privacy via distributed noise generation. In EUROCRYPT.
- Dwork et al., (2006b) Dwork, C., McSherry, F., Nissim, K., and Smith, A. (2006b). Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265–284. Springer.
- Feldman, (2016) Feldman, V. (2016). Generalization of ERM in stochastic convex optimization: The dimension strikes back. CoRR, abs/1608.04414. Extended abstract in NIPS 2016.
- Feldman et al., (2020) Feldman, V., Koren, T., and Talwar, K. (2020). Private stochastic convex optimization: Optimal rates in linear time. Extended abstract in STOC 2020.
- Feldman et al., (2018) Feldman, V., Mironov, I., Talwar, K., and Thakurta, A. (2018). Privacy amplification by iteration. In FOCS, pages 521–532.
- Feldman and Vondrák, (2018) Feldman, V. and Vondrák, J. (2018). Generalization bounds for uniformly stable algorithms. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada, pages 9770–9780.
- Feldman and Vondrák, (2019) Feldman, V. and Vondrák, J. (2019). High probability generalization bounds for uniformly stable algorithms with nearly optimal rate. In Conference on Learning Theory, COLT 2019, 25-28 June 2019, Phoenix, AZ, USA, pages 1270–1279.
- Hardt et al., (2016) Hardt, M., Recht, B., and Singer, Y. (2016). Train faster, generalize better: Stability of stochastic gradient descent. In ICML, pages 1225–1234.
- Jain et al., (2012) Jain, P., Kothari, P., and Thakurta, A. (2012). Differentially private online learning. In 25th Annual Conference on Learning Theory (COLT), pages 24.1–24.34.
- Jain and Thakurta, (2014) Jain, P. and Thakurta, A. (2014). (near) dimension independent risk bounds for differentially private learning. In ICML.
- Kearns and Ron, (1999) Kearns, M. J. and Ron, D. (1999). Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. Neural Computation, 11(6):1427–1453.
- Kiefer and Wolfowitz, (1952) Kiefer, J. and Wolfowitz, J. (1952). Stochastic estimation of the maximum of a regression function. Ann. Math. Statist., 23(3):462–466.
- Kifer et al., (2012) Kifer, D., Smith, A., and Thakurta, A. (2012). Private convex empirical risk minimization and high-dimensional regression. In Conference on Learning Theory, pages 25–1.
- Koren and Levy, (2015) Koren, T. and Levy, K. (2015). Fast rates for exp-concave empirical risk minimization. In NIPS, pages 1477–1485.
- Kuzborskij and Lampert, (2018) Kuzborskij, I. and Lampert, C. H. (2018). Data-dependent stability of stochastic gradient descent. In ICML, volume 80 of Proceedings of Machine Learning Research, pages 2820–2829. PMLR.
- Liu et al., (2017) Liu, T., Lugosi, G., Neu, G., and Tao, D. (2017). Algorithmic stability and hypothesis complexity. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 2159–2167.
- London, (2017) London, B. (2017). A pac-bayesian analysis of randomized learning with application to stochastic gradient descent. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems 30, pages 2931–2940. Curran Associates, Inc.
- Maurer, (2017) Maurer, A. (2017). A second-order look at stability and generalization. In COLT, pages 1461–1475.
- Mukherjee et al., (2006) Mukherjee, S., Niyogi, P., Poggio, T., and Rifkin, R. (2006). Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization. Advances in Computational Mathematics, 25(1-3):161–193.
- Nedic and Bertsekas, (2001) Nedic, A. and Bertsekas, D. P. (2001). Incremental subgradient methods for nondifferentiable optimization. SIAM Journal on Optimization, 12(1):109–138.
- Nemirovsky and Yudin, (1983) Nemirovsky, A. and Yudin, D. (1983). Problem Complexity and Method Efficiency in Optimization. J. Wiley & Sons, New York.
- Poggio et al., (2004) Poggio, T., Rifkin, R., Mukherjee, S., and Niyogi, P. (2004). General conditions for predictivity in learning theory. Nature, 428(6981):419–422.
- Rigollet, (2015) Rigollet, P. (2015). Lecture Notes. 18.S997: High Dimensional Statistics. MIT Courses/Mathematics. https://ocw.mit.edu/courses/mathematics/18-s997-high-dimensional-statistics-spring-2015.
- Robbins and Monro, (1951) Robbins, H. and Monro, S. (1951). A stochastic approximation method. Ann. Math. Statist., 22(3):400–407.
- Rogers and Wagner, (1978) Rogers, W. H. and Wagner, T. J. (1978). A finite sample distribution-free performance bound for local discrimination rules. The Annals of Statistics, 6(3):506–514.
- Shalev-Shwartz et al., (2010) Shalev-Shwartz, S., Shamir, O., Srebro, N., and Sridharan, K. (2010). Learnability, stability and uniform convergence. The Journal of Machine Learning Research, 11:2635–2670.
- Smith and Thakurta, (2013) Smith, A. and Thakurta, A. (2013). Differentially private feature selection via stability arguments, and the robustness of the LASSO. In Conference on Learning Theory (COLT), pages 819–850.
- Talwar et al., (2015) Talwar, K., Thakurta, A., and Zhang, L. (2015). Nearly optimal private LASSO. In Proceedings of the 28th International Conference on Neural Information Processing Systems, volume 2, pages 3025–3033.
- Ullman, (2015) Ullman, J. (2015). Private multiplicative weights beyond linear queries. In Proceedings of the 34th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, pages 303–312. ACM.
- Wu et al., (2017) Wu, X., Li, F., Kumar, A., Chaudhuri, K., Jha, S., and Naughton, J. (2017). Bolt-on differential privacy for scalable stochastic gradient descent-based analytics. In SIGMOD. ACM.
- Zinkevich, (2003) Zinkevich, M. (2003). Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on International Conference on Machine Learning, ICML’03, page 928–935. AAAI Press.
Appendix A Proof of Theorem 3.2
Proof.
The bound of is obtained directly from the diameter bound on . Therefore, we focus exclusively on the second term. Let be arbitrary neighboring datasets, , and consider the trajectories associated with the batch GD method on datasets and , respectively. We use Lemma 3.1 with and , for all . Notice that
since ; in particular, , with . We conclude by Lemma 3.1 that for all
Hence, the stability bound holds for all the iterates, and thus for by the triangle inequality. ∎
Appendix B Proof of Theorem 3.4
Proof.
The stability bound of is implied directly by the diameter of the feasible set. Let , and let be the trajectories of Algorithm 3 on and , respectively, with .
Notice that since is a random permutation, we may assume w.l.o.g. that is the identity, whereas the perturbed coordinate between is . The rest of the proof is a stability analysis conditioned on (which fixes all the randomness of the algorithm), but from the observation above this is the same as conditioning on the random perturbed coordinate .
Let , and so that . Conditioned on , we have that for all ,
Indeed, for all , . For , we have
where we used , and that both gradients are bounded in norm by . Finally, when , we have , and therefore we can leverage the monotonicity of the subgradients
Unravelling this recursion, we get , so by the law of total expectation:
where the first inequality holds by the Jensen inequality, the second inequality comes from the bound on the conditional expectation, the third inequality from the non-increasing stepsize assumption, and the fourth inequality is from Cauchy-Schwarz. Since averaging can only improve stability, we conclude the result. ∎
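The monotonicity step used in the recursion above can be sanity-checked numerically. The sketch below (a toy instance of our own) verifies, for the nonsmooth convex function f(x) = ||x||_1 with the valid subgradient selection sign(x), both monotonicity of the subdifferential and the resulting one-step expansion bound.

```python
import numpy as np

# Numerical check of the two facts driving the recursion:
#   (i)  monotonicity of the subdifferential of a convex function:
#        <g(x) - g(y), x - y> >= 0;
#   (ii) hence a subgradient step with stepsize eta satisfies
#        ||(x - eta*g(x)) - (y - eta*g(y))||^2
#          <= ||x - y||^2 + eta^2 * ||g(x) - g(y)||^2.
# Checked for f(x) = ||x||_1, whose subgradient selection is sign(x).
rng = np.random.default_rng(2)
eta = 0.1
min_inner, max_excess = np.inf, -np.inf
for _ in range(1000):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    gx, gy = np.sign(x), np.sign(y)
    min_inner = min(min_inner, np.dot(gx - gy, x - y))
    lhs = np.linalg.norm((x - eta * gx) - (y - eta * gy)) ** 2
    rhs = np.linalg.norm(x - y) ** 2 + eta ** 2 * np.linalg.norm(gx - gy) ** 2
    max_excess = max(max_excess, lhs - rhs)
```

Inequality (ii) follows from (i) by expanding the square, which is why the per-step divergence in the nonsmooth case is additive in eta^2 rather than contractive as in the smooth case.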
Appendix C General Lower Bound on Stability of SGD
Let be a -uniformly stable stochastic convex optimization algorithm with , where is increasing and . By the lower bound on the optimal risk of nonsmooth convex optimization, , where is a universal constant (Nemirovsky and Yudin,, 1983). This, combined with the risk decomposition (3), implies that
By our assumption on , for sufficiently large, there always exists such that
which leads to , where is a universal constant.
If algorithm is based on subgradient iterations with constant step size (these could be either stochastic, batch, or minibatch), then by standard analysis, the optimization guarantee of such an algorithm is . Both bounds in combination give
If we further assume that (notice minimizes the optimization error), then . We also emphasize that all the choices of step size we make to control the generalization error will lie in this range.
Appendix D Proof of Theorem 4.1
Proof.
Let , and . We consider , and the objective function
where (i.e., supported on the first coordinates). Notice that for normalization purposes, we need ; furthermore, we will choose sufficiently large such that . Consider the data sets , leading to the empirical objectives:
Let and be the trajectories of the algorithm over datasets and , respectively, initialized from . Clearly, for all . Now ; choosing , we have , and hence . Subsequently, the method will perform cumulative subgradient steps on . In particular, for any we have
By orthogonality of the subgradients and given our choice of , we conclude that
and further subgradient steps are only given by the linear term, , which are negligible perturbations.
We finish by arguing that averaging does not help. First, in the case :
And second, in the case :
Finally, the additional term in the lower bound is obtained by Appendix C. ∎
Appendix E Proof of Theorem 4.3
Proof.
We consider the same function class as in Thm. 4.1, and neighboring datasets , . We will assume in what follows that , is sufficiently large, and . Let and be the trajectories of Algorithm 3 over datasets , respectively, both initialized at . Let now . Arguing as in Thm. 4.1, we have that for all , whereas
Later iterations will satisfy if (and otherwise the algorithm stops earlier). Therefore, for all ,
Notice that we used above that is such that . Analogously to Thm. 4.1, we can obtain the same conclusion for . The lower bound of can be added by the argument in Appendix C, so the result follows. ∎
Appendix F Upper bounds on UAS of SGD when
Theorem F.1.
Let and . Suppose . The sampling-with-replacement stochastic gradient descent (Algorithm 2) satisfies uniform argument stability
Proof.
The bound of is obtained directly from the diameter bound on . Therefore, we focus exclusively on the second term. Let , and let be the entry where both datasets differ. Let be the trajectories of Algorithm 2 on and , respectively, with .
Let denote the event that for some ; that is, is the event that the index is sampled at least once in the first iterations. We note that
(10) |
For the rest of the proof we bound . To do this, we derive a recurrence for . Note that is the union of two mutually exclusive events: and , where is the complement of , i.e., the event “index is never sampled in the first iterations.” Hence,
(11) | |||||
Now, conditioned on the past sampled coordinates , we have
where the last inequality is obtained from convexity and Lipschitzness of the objective. Now, squaring we get
From this formula we derive bounds for the two conditional expectations:
(12) | |||||
(13) |
where (12) holds by independence of and , and in (13) we used that conditioned on .
Combining (12) and (13) in (11), we get
With this last bound we can proceed inductively to show that
The base case, , is evident, and the inductive step can be considered in two separate cases; namely, the case where , which can be obtained by the induction hypothesis; and the case where , for which
Then
and by the Jensen inequality
proving the inductive step. Finally, putting this together with (10) completes the proof. ∎
Theorem F.2.
Let , , be a uniformly random permutation over , and be a non-increasing sequence. Let . The fixed-permutation stochastic gradient descent (Algorithm 3) satisfies uniform argument stability .
Appendix G Generalization in Stochastic Optimization with Data Reuse
G.0.1 Risk Bounds for Sampling-with-Replacement Stochastic Gradient Descent
Theorem G.1.
Let and . The sampling with replacement stochastic gradient descent (Algorithm 2) with iterations and satisfies for any
It should be noted that, similarly to Remark 6.3, if we are only interested in expectation risk bounds, one can shave off the polylogarithmic factor above; the resulting bound is optimal for the expected excess risk.
Proof.
Let be an i.i.d. random sample for the stochastic convex program, and apply the algorithm to these data with constant step size and iterations.
We consider such that . Notice that the sampling-with-replacement stochastic gradient is a bounded first-order stochastic oracle for the empirical objective. It is straightforward to verify that the assumptions of Lemma H.1 are satisfied with . Hence, by Lemma H.1, we have that, with probability at least
On the other hand, Theorem 3.3 together with Theorem 2.2, guarantees that with probability at least , we have
Finally, Lemma 2.1 ensures that with probability
By the union bound and the excess risk decomposition (3), we have that, with probability ,
where only in the last step we substituted the choice of step size and number of iterations from the statement.
∎
G.0.2 Risk Bounds for Fixed-Permutation Stochastic Gradient Descent
As a final application, we provide a population risk bound based on the UAS of Algorithm 3. As in the case of sampling-with-replacement SGD, we need an optimization error analysis, which is provided in Appendix I for completeness; it is based on the analysis of the incremental subgradient method (Nedic and Bertsekas,, 2001).
Interestingly, combining the incremental-method analysis for an arbitrary permutation (Nedic and Bertsekas,, 2001) with our novel stability bounds, which also hold for an arbitrary permutation, yields generalization bounds for fixed-permutation SGD without the need for reshuffling, or indeed any form of randomization. We believe this could be of independent interest.
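As an illustration of this deterministic guarantee, the sketch below runs a fixed-permutation (incremental) subgradient method in the spirit of Algorithm 3 on toy losses f(x; z) = |x − z|, using the identity permutation and no randomization at all; the data, stepsize, and epoch count are our own choices.

```python
import numpy as np

def incremental_subgradient(data, epochs, eta, x0=0.0):
    """Fixed-permutation (incremental) subgradient method, Algorithm 3
    style: each epoch makes one in-order pass over the data -- no
    reshuffling or randomization -- on the losses f(x; z) = |x - z|.
    Returns the average of all iterates."""
    x, iterates = x0, []
    for _ in range(epochs):
        for z in data:                       # fixed order: identity permutation
            x = x - eta * np.sign(x - z)     # subgradient step on |x - z|
            iterates.append(x)
    return np.mean(iterates)

data = np.linspace(-1.0, 1.0, 21)    # empirical minimizer: median = 0.0
x_bar = incremental_subgradient(data, epochs=10, eta=0.02)
```

The run is fully deterministic, yet the averaged iterate lands within an O(eta * n) band of the empirical minimizer, consistent with the reshuffling-free guarantee above.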
Theorem G.2.
Algorithm 3 with constant step size and epochs is such that for every ,
where is an absolute constant.
Similarly to the previous result, we can remove the polylogarithmic factor if we are only interested in expected excess risk guarantees.
Proof.
By Corollary I.2
by our choice of . On the other hand, Theorem 3.4 guarantees the algorithm is -UAS with probability 1, where . Therefore, by Theorem 2.2, we have that w.p.
Finally, Lemma 2.1 ensures that with probability
By the union bound and the excess risk decomposition (3), we have that, with probability at least ,
∎
Appendix H High-probability Bound on Optimization Error of SGD with Noisy Gradient Oracle
It is known that the standard online-to-batch conversion technique can be used to provide a high-probability bound on the optimization error (i.e., the excess empirical risk) of stochastic gradient descent. For the sake of completeness and to make the paper more self-contained, we reproduce this technique here for stochastic gradient descent with a noisy gradient oracle. This is done in the following lemma, which is used in the proofs of our results in Section 6.
Lemma H.1 (Optimization error of SGD with noisy gradient oracle).
Let be a dataset. Let be the empirical risk associated with , where for every , is a convex, -Lipschitz function over for some . Consider the stochastic (sub)gradient method:
with output , where are drawn uniformly from with replacement, and for all , is a random map (referred to as a noisy gradient oracle) that satisfies the following conditions:
-
1.
Unbiasedness: For every , , where the expectation is taken over the randomness in the gradient oracle .
-
2.
Sub-Gaussian gradient noise: There is such that for every , is a -sub-Gaussian random vector; that is, for every , is a -sub-Gaussian random variable.
-
3.
Independence of the gradient noise across iterations: conditioned on any fixed realization of , the sequence of random maps is independent. (Here, the randomness comes only from the gradient oracle.)
Then, for any , with probability at least , the optimization error (i.e., the excess empirical risk) of this method is bounded as
Proof.
Let . By convexity of the empirical loss, we have
(14) |
Since are sampled uniformly with replacement from , we have
for all . Moreover, since the range of lies in it follows that is a martingale with bounded differences (namely, bounded by ). Therefore, by Azuma’s inequality, the first term in (14) satisfies
(15) |
By Hoeffding’s inequality, the second term in (14) also satisfies the same bound; namely,
(16) |
Using an analysis similar to the standard online gradient descent analysis of Zinkevich, (2003), the last term in (14) can be bounded as
(17) |
By the properties of the gradient oracle stated in the lemma, we can see that for any fixed realization of , the second term in (17) is -sub-Gaussian random variable. Hence,
(18) |
Let . Note that . Moreover, observe (e.g., by (Rigollet,, 2015, Lemma 1.12)) that for any fixed realization of , is a sub-exponential random variable with parameter ; namely, . Hence, by a standard concentration argument (e.g., Bernstein’s inequality), we have
(19) |
Putting (18) and (19) together, and noticing that , we conclude that with probability at least the third term of (14) is bounded as
Hence, by the union bound, we conclude that with probability at least the excess empirical risk of the stochastic subgradient method is bounded as
∎
Appendix I Empirical Risk of Fixed-Permutation SGD
Our optimization error analysis is based on (Nedic and Bertsekas,, 2001, Lemma 2.1).
Lemma I.1.
Let us consider the fixed permutation stochastic gradient descent (Algorithm 3), for an arbitrary permutation (i.e., not necessarily random) and with constant step size over each epoch (i.e., for all , ). Then
Proof.
First, since the permutation is arbitrary, w.l.o.g. is the identity (we make this choice only for notational convenience). Let now . At each round, the recursion of SGD implies that
Let . Summing up these inequalities from
Re-arranging terms we obtain the result. ∎
Using the previous lemma, it is straightforward to derive the optimization accuracy of the method.
Corollary I.2.
The fixed permutation stochastic gradient descent (Algorithm 3), for an arbitrary permutation (i.e., not necessarily random) and with constant step size over each epoch (i.e., for all , ), satisfies