µ²-SGD: Stable Stochastic Optimization via a Double Momentum Mechanism
Abstract
We consider stochastic convex optimization problems where the objective is an expectation over smooth functions. For this setting we suggest a novel gradient estimate that combines two recent mechanisms related to the notion of momentum. We then design an SGD-style algorithm, as well as an accelerated version, that make use of this new estimator, and demonstrate the robustness of these new approaches to the choice of the learning rate. Concretely, we show that these approaches obtain the optimal convergence rates for both the noiseless and the noisy case with the same choice of fixed learning rate. Moreover, for the noisy case we show that these approaches achieve the same optimal bound for a very wide range of learning rates.
1 Introduction
Stochastic Convex Optimization (SCO) is a fundamental framework that captures several classical Machine Learning (ML) problems, such as linear regression, logistic regression and SVMs (Support Vector Machines), amongst others. In the past two decades, SCO has been extensively explored and has highly influenced the field of ML: it has popularized the use of Stochastic Gradient Descent (SGD) as the standard workhorse for training ML models; see e.g. (Shalev-Shwartz et al., 2007; Welling and Teh, 2011; Mairal et al., 2009; Recht et al., 2011); and it has led to the design of sophisticated SGD variants that play a central role in training modern large scale models (Duchi et al., 2011; Kingma and Ba, 2015).
One practical difficulty in applying SGD-type methods is the need to tune their learning rate, among other hyperparameters, and it is well known that the performance of such algorithms crucially relies on the right choice of the learning rate. As a remedy, several adaptive SGD variants have been designed throughout the years (Duchi et al., 2011; Kingma and Ba, 2015; Levy et al., 2018; Kavis et al., 2019; Jacobsen and Cutkosky, 2022); however, in practice such methods still require hyperparameter tuning, which might be very costly in huge-scale scenarios.
In this paper we focus on the prevalent SCO setting where the objective (expected loss) is an Expectation Over Smooth losses (SCO-EOS); this applies e.g. to linear and logistic regression problems (but not to SVMs). In this case, it is well known that SGD requires a careful tuning of the learning rate to obtain the optimal performance. For example, in the noiseless case, SGD (or GD in this case) should employ a learning rate of $\eta = O(1/L)$, where $L$ is the smoothness parameter of the objective. Nevertheless, if we apply this $\eta$ in the noisy setting, the guarantees of SGD become vacuous. To obtain the optimal SGD guarantees, we should roughly decrease the learning rate by a factor of $\sigma\sqrt{T}$, where $T$ is the total number of SGD iterates (and samples), and $\sigma^2$ is the variance of the noise in the gradient estimates. This illustrates the sensitivity of SGD to the choice of $\eta$. The same applies to stochastic accelerated methods such as (Lan, 2012; Hu et al., 2009; Xiao, 2010).
Contributions.
We introduce a novel gradient estimate for SCO-EOS problems, and show that its square error, $\mathbb{E}\|d_t - \nabla f(w_t)\|^2$, shrinks with the number of updates $t$, where $w_t$ is the query point at iterate $t$. This is in contrast to the standard SGD estimator, where usually $\mathbb{E}\|g_t - \nabla f(x_t)\|^2 \approx \sigma^2$ for all $t$. Our new estimate blends two recent mechanisms that are related to the notion of momentum: Anytime Averaging, which is due to (Cutkosky, 2019); and a corrected momentum technique (Cutkosky and Orabona, 2019). We therefore denote our estimate by µ², which stands for Momentum² (a double use of momentum).
We further design an SGD variant called µ²-SGD, as well as an accelerated version called µ²-ExtraSGD, that employ our new estimate, and demonstrate their stability with respect to the choice of the learning rate $\eta$. Concretely,
• Upon using the exact same learning rate of $\eta = \Theta\!\left(1/(LT)\right)$ (where $T$ is the total number of iterates/data-samples), µ²-SGD enjoys a convergence rate of $O(LD^2/T)$ in the noiseless case, and a rate of $O\!\left(LD^2/T + \sigma D/\sqrt{T}\right)$ in the noisy case.
Moreover, in the noisy case, µ²-SGD enjoys the same convergence rate as that of optimally tuned SGD, i.e. $O\!\left(LD^2/T + \sigma D/\sqrt{T}\right)$, for a wide range of learning rate choices.
• Upon using the exact same learning rate of $\eta = \Theta(1/L)$, µ²-ExtraSGD enjoys an optimal convergence rate of $O(LD^2/T^2)$ in the noiseless case, and an optimal rate of $O\!\left(LD^2/T^2 + \sigma D/\sqrt{T}\right)$ in the noisy case.
Moreover, in the noisy case, µ²-ExtraSGD enjoys the same optimal convergence rate of $O\!\left(LD^2/T^2 + \sigma D/\sqrt{T}\right)$ for an extremely wide range of learning rate choices.
Related Work:
The Gradient Descent (GD) algorithm and its stochastic counterpart SGD (Robbins and Monro, 1951) are cornerstones of ML and Optimization. Their wide adoption in ML, as well as in other fields, has led to the development of numerous elegant and useful variants (Duchi et al., 2011; Kingma and Ba, 2015; Ge et al., 2015). Curiously, many of the variants that serve in the practical training of non-convex learning models were originally designed under the framework of SCO.
As we mention in the Introduction, the performance of SGD crucially relies on the choice of the learning rate. There is a plethora of work on designing methods that implicitly and optimally adapt the learning rate throughout the learning process (Duchi et al., 2011; Kingma and Ba, 2015; Kavis et al., 2019; Antonakopoulos et al., 2022; Jacobsen and Cutkosky, 2022); and some of these methods have been widely adopted among practitioners.
Momentum (Polyak, 1964) is another widely used practical technique (Sutskever et al., 2013), and it is interestingly related to the accelerated method of Nesterov (Nesterov, 1983) – a seminal approach that obtains faster convergence rates compared to GD for smooth and convex objectives. While Nesterov’s accelerated method is fragile to noise, Lan (2012); Hu et al. (2009); Xiao (2010) have designed stochastic accelerated variants that obtain a convergence rate interpolating between the fast rate in the noiseless case and the standard SGD rate in the noisy case (depending on the noise magnitude).
Our work builds on two recent mechanisms that are related to the notion of momentum: (i) an Anytime averaging mechanism (Cutkosky, 2019), which relies on averaging the query points of the gradient oracle; (ii) a corrected momentum technique (Cutkosky and Orabona, 2019), which relies on averaging the gradients themselves throughout the learning process (while introducing a correction term). It is interesting to note that the Anytime mechanism has proven to be extremely useful in designing adaptive and accelerated methods (Cutkosky, 2019; Kavis et al., 2019; Antonakopoulos et al., 2022). The corrected momentum mechanism has mainly found use in designing optimal and adaptive algorithms for stochastic non-convex problems (Cutkosky and Orabona, 2019; Levy et al., 2021).
2 Setting
We consider stochastic optimization problems where the objective $f: K \to \mathbb{R}$ is convex and is of the following form,
$$\min_{x \in K} f(x) := \mathbb{E}_{z \sim \mathcal{D}}\left[f(x; z)\right],$$
where $K \subseteq \mathbb{R}^d$ is a compact convex set, and $\mathcal{D}$ is an unknown distribution from which we may draw i.i.d. samples $z \sim \mathcal{D}$. We consider first order optimization methods that iteratively employ such samples in order to generate a sequence of query points and eventually output a solution $x_{\mathrm{output}}$. Our performance measure is the expected excess loss,
$$\mathrm{ExcessLoss} := \mathbb{E}\left[f(x_{\mathrm{output}})\right] - \min_{x \in K} f(x),$$
where the expectation is w.r.t. the randomization of the samples.
More concretely, at every iteration $t$ such methods maintain a query point $x_t \in K$, which is computed based on the past query points $\{x_\tau\}_{\tau < t}$ and past samples $\{z_\tau\}_{\tau < t}$. Then, the next query point $x_{t+1}$ is computed based on $x_t$ and on a gradient estimate $g_t$ that is derived by drawing a fresh sample $z_t \sim \mathcal{D}$, independently of the past samples, and computing $g_t := \nabla f(x_t; z_t)$;
note that the above derivative is w.r.t. $x$. The independence between samples implies that $g_t$ is an unbiased estimate of $\nabla f(x_t)$ in the following sense: $\mathbb{E}\left[g_t \mid x_t\right] = \nabla f(x_t)$. It is often convenient to think of the computation of $g_t$ as a (noisy) gradient oracle that, upon receiving a query point $x \in K$, outputs a vector $g$ which is an unbiased estimate of $\nabla f(x)$.
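For concreteness, the following minimal Python sketch illustrates such a noisy gradient oracle; the per-sample least-squares loss $f(x; z) = \tfrac{1}{2}(a\cdot x - b)^2$ with $z = (a, b)$ and all function names are our illustrative choices, not taken from the paper.

```python
import numpy as np

def sample_z(rng, dim):
    """Draw one sample z = (a, b) from a synthetic distribution D."""
    a = rng.standard_normal(dim)
    b = a @ np.ones(dim) + 0.1 * rng.standard_normal()
    return a, b

def stochastic_grad(x, z):
    """Unbiased gradient estimate g = grad_x f(x; z) for f(x; z) = 0.5 * (a @ x - b) ** 2."""
    a, b = z
    return (a @ x - b) * a

# Usage: querying the noisy oracle at a point x.
rng = np.random.default_rng(0)
x = np.zeros(5)
g = stochastic_grad(x, sample_z(rng, dim=5))  # E[g | x] = grad f(x)
```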
Assumptions.
We will make the following assumptions,
Bounded Diameter: we assume there exists $D > 0$ such that $\max_{x, y \in K}\|x - y\| \le D$.
Bounded variance: there exists $\sigma > 0$ such that,
$$\mathbb{E}_{z \sim \mathcal{D}}\left\|\nabla f(x; z) - \nabla f(x)\right\|^2 \le \sigma^2, \quad \forall x \in K. \tag{1}$$
Expectation over smooth functions: we assume that $f$ is an expectation of smooth functions, i.e. there exists $L > 0$ such that,
$$\left\|\nabla f(x; z) - \nabla f(y; z)\right\| \le L\|x - y\|, \quad \forall x, y \in K,\ \forall z \in \mathrm{support}(\mathcal{D}). \tag{2}$$
The above assumption also implies that the expected loss is smooth.
Bounded Smoothness Variance.
Note that the assumption that we make in Eq. (2) implies that there exists $\sigma_L \ge 0$ such that,
$$\mathbb{E}_{z \sim \mathcal{D}}\left\|\left(\nabla f(x; z) - \nabla f(y; z)\right) - \left(\nabla f(x) - \nabla f(y)\right)\right\|^2 \le \sigma_L^2\,\|x - y\|^2, \quad \forall x, y \in K. \tag{3}$$
Clearly, in the deterministic setting where $f(\cdot; z) = f(\cdot)$ for all $z$, we have $\sigma_L = 0$. In App. A, we show how Eq. (2) implies Eq. (3).
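One quick way to see why a bound of this form follows from Eq. (2) is the following variance argument (this is our own short derivation, which only shows that some $\sigma_L \le L$ suffices; the paper's own argument is the one given in App. A):
$$\mathbb{E}_{z}\big\|\left(\nabla f(x;z)-\nabla f(y;z)\right) - \left(\nabla f(x)-\nabla f(y)\right)\big\|^{2}
\;\le\; \mathbb{E}_{z}\big\|\nabla f(x;z)-\nabla f(y;z)\big\|^{2}
\;\le\; L^{2}\,\|x-y\|^{2},$$
where the first inequality holds since a variance is at most a second moment (recall $\mathbb{E}_z[\nabla f(x;z)] = \nabla f(x)$), and the second follows from Eq. (2). In the deterministic case the left-hand side vanishes, so indeed $\sigma_L = 0$.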
Notation:
In the rest of this manuscript, $\nabla f(x; z)$ relates to gradients with respect to $x$, i.e., $\nabla f(x; z) := \nabla_x f(x; z)$. We use $\|\cdot\|$ to denote the Euclidean norm. Given a sequence $\{a_\tau\}_\tau$ we denote $a_{1:t} := \sum_{\tau=1}^{t} a_\tau$. For a positive integer $N$ we denote $[N] := \{1, \ldots, N\}$. We also let $\Pi_K$ denote the orthogonal projection onto $K$, i.e. $\Pi_K(x) := \arg\min_{y \in K}\|y - x\|^2$.
3 Momentum Mechanisms
In this section we provide background regarding two mechanisms that can be related to the notion of momentum. Curiously, both of these approaches are related to averaging of different elements of the learning algorithm. Our approach that we present in the following section builds on a combination of these aforementioned mechanisms.
3.1 Mechanism I: Anytime-GD
The mechanism that we discuss here is related to averaging the query points for the noisy gradient oracle. Basically, while in standard SGD we query the gradients at the iterates that we compute, in Anytime-SGD (Cutkosky, 2019), we query the gradients at weighted averages of the iterates that we compute.
More concretely, the Anytime-SGD algorithm (Cutkosky, 2019; Kavis et al., 2019) that we describe in Equations (4) and (5) employs a learning rate $\eta > 0$ and a sequence of non-negative weights $\{\alpha_t\}_t$. The algorithm maintains two sequences $\{x_t\}_t$ and $\{w_t\}_t$. At initialization $w_1 = x_1 \in K$, and then we update as follows, $\forall t$:
$$x_{t+1} = \Pi_K\left(x_t - \eta\,\alpha_t g_t\right), \quad \text{where } g_t := \nabla f(w_t; z_t), \tag{4}$$
and $z_t \sim \mathcal{D}$ is a fresh sample. Then Anytime-SGD updates,
$$w_{t+1} = \frac{\alpha_{1:t}\,w_t + \alpha_{t+1}\,x_{t+1}}{\alpha_{1:t+1}}. \tag{5}$$
The above implies that the $w_t$'s are weighted averages of the $x_t$'s, i.e. that $w_t = \frac{1}{\alpha_{1:t}}\sum_{\tau=1}^{t}\alpha_\tau x_\tau$. Thus, at every iterate, the gradient is queried at $w_t$, which is a weighted average of past iterates, and then $x_{t+1}$ is updated similarly to GD, with a weight $\alpha_t$ on the gradient $g_t$.
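To make the two-sequence structure concrete, here is a minimal Python sketch of the updates (4)-(5); the weights $\alpha_t = t$, the Euclidean-ball constraint set, and the oracle interface are illustrative choices of ours.

```python
import numpy as np

def project_ball(v, radius=1.0):
    """Orthogonal projection onto the Euclidean ball K = {x : ||x|| <= radius}."""
    norm = np.linalg.norm(v)
    return v if norm <= radius else (radius / norm) * v

def anytime_sgd(grad_oracle, x1, T, eta, rng):
    """Anytime-SGD sketch (Eqs. (4)-(5)) with weights alpha_t = t.

    grad_oracle(w, rng) must return an unbiased estimate of grad f(w).
    Returns the final weighted average of the iterates (the algorithm's output).
    """
    x = np.array(x1, dtype=float)
    w = x.copy()
    alpha_sum = 0.0
    for t in range(1, T + 1):
        alpha_t = float(t)
        alpha_sum += alpha_t                              # alpha_{1:t}
        g = grad_oracle(w, rng)                           # gradient queried at the average w_t
        x = project_ball(x - eta * alpha_t * g)           # Eq. (4): x_{t+1}
        alpha_next = float(t + 1)
        w = (alpha_sum * w + alpha_next * x) / (alpha_sum + alpha_next)  # Eq. (5): w_{t+1}
    return w
```

For instance, it can be run with the least-squares oracle sketched in Section 2 by passing `lambda w, rng: stochastic_grad(w, sample_z(rng, dim=5))`.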
Curiously, it was shown in Wang et al. (2021) that a very similar algorithm to Anytime-GD can be related to the classical Heavy-Ball method (Polyak, 1964), and the latter incorporates momentum in its iterates (see the corresponding algorithm in Wang et al. (2021)).
It was shown in Cutkosky (2019) that Anytime-SGD obtains the same convergence rates as SGD for convex loss functions (both smooth and non-smooth), and this technique was found to be extremely useful in designing universal accelerated methods (Cutkosky, 2019; Kavis et al., 2019).
The next theorem is crucial in analyzing Anytime-SGD, and actually applies more broadly,
Theorem 3.1 (Rephrased from Theorem 1 in Cutkosky (2019)).
Let $f: K \to \mathbb{R}$ be a convex function with a minimum $x^* \in \arg\min_{x \in K} f(x)$. Also let $\{\alpha_t\}_t$ be a sequence of non-negative weights, and let $\{x_t\}_t, \{w_t\}_t \subseteq K$ be such that $w_t$ is an $\{\alpha_\tau\}$-weighted average of $\{x_\tau\}_{\tau \le t}$, i.e. such that for any $t$, $w_t = \frac{1}{\alpha_{1:t}}\sum_{\tau=1}^{t}\alpha_\tau x_\tau$.
Then the following holds for any $t$,
$$\alpha_{1:t}\left(f(w_t) - f(x^*)\right) \le \sum_{\tau=1}^{t}\alpha_\tau\,\nabla f(w_\tau)\cdot(x_\tau - x^*).$$
Note that the above theorem holds for any sequence $\{x_t\}_t \subseteq K$, and as a special case it holds for the Anytime-GD algorithm that we have described. Thus the above theorem relates the excess loss of a given algorithm that computes the sequences $\{x_t\}_t, \{w_t\}_t$ to its weighted regret,
$$R_t(x^*) := \sum_{\tau=1}^{t}\alpha_\tau\,\nabla f(w_\tau)\cdot(x_\tau - x^*).$$
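For intuition, the heart of the argument behind Thm. 3.1 is the following one-step convexity-plus-telescoping bound (our rendering of the argument in Cutkosky (2019)):
$$\alpha_{1:t} f(w_t) - \alpha_{1:t-1} f(w_{t-1}) - \alpha_t f(x^*)
= \alpha_{1:t-1}\big(f(w_t)-f(w_{t-1})\big) + \alpha_t\big(f(w_t)-f(x^*)\big)
\le \nabla f(w_t)\cdot\big(\alpha_{1:t}\,w_t - \alpha_{1:t-1}\,w_{t-1} - \alpha_t x^*\big)
= \alpha_t\,\nabla f(w_t)\cdot(x_t - x^*),$$
where the inequality uses convexity, and the last equality uses $\alpha_{1:t} w_t = \alpha_{1:t-1} w_{t-1} + \alpha_t x_t$; summing over $t$ and telescoping yields the statement of the theorem.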
3.2 Mechanism II: Recursive Corrected Averaging
The mechanism that we discuss here is related to averaging the gradient estimates that we compute throughout the learning process, which is a common and often crucial technique in practical applications. While such straightforward averaging might introduce bias into the gradient estimates, it was suggested in (Cutkosky and Orabona, 2019) to add a bias correction mechanism named STORM (STochastic Recursive Momentum), and it was shown that this mechanism leads to a powerful variance reduction effect.
Concretely, STORM maintains an estimate $d_t$, which is a corrected weighted average of past stochastic gradients, and then it performs an SGD-style update step,
$$x_{t+1} = x_t - \eta\,d_t. \tag{6}$$
The corrected momentum estimates $d_t$ are updated as follows,
$$d_t = \nabla f(x_t; z_t) + (1 - a_t)\left(d_{t-1} - \nabla f(x_{t-1}; z_t)\right), \tag{7}$$
where $a_t \in [0, 1]$ is a momentum parameter.
Note that the correction term $-(1 - a_t)\nabla f(x_{t-1}; z_t)$ implies that $\mathbb{E}[d_t] = \mathbb{E}[\nabla f(x_t)]$. Nevertheless, in general $\mathbb{E}\left[d_t \mid x_t\right] \neq \nabla f(x_t)$, so $d_t$ is conditionally biased (in contrast to standard SGD gradient estimates). Moreover, the choice of the same sample $z_t$ in the two gradient terms of the above expression is crucial for the variance reduction effect.
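The following minimal Python sketch shows one STORM round (Eqs. (6)-(7)); the function names and the way the momentum parameter $a_t$ is passed in are our illustrative choices.

```python
def storm_step(x, x_prev, d_prev, z, per_sample_grad, eta, a_t):
    """One STORM round (Eqs. (6)-(7)).

    per_sample_grad(x, z) is the gradient of f(.; z) at x.  The SAME sample z is
    evaluated at both x and x_prev, which is what drives the variance-reduction effect.
    """
    d = per_sample_grad(x, z) + (1.0 - a_t) * (d_prev - per_sample_grad(x_prev, z))  # Eq. (7)
    x_next = x - eta * d                                                             # Eq. (6)
    return x_next, d
```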
4 Our Approach: Double Momentum Mechanism
Our approach is to combine together the two momentum mechanisms that we described above. Our algorithm is therefore named µ²-SGD (Momentum²-Stochastic Gradient Descent), and we describe it in Alg. 1. Intuitively, each of these momentum (averaging) techniques stabilizes the algorithm, and their combination leads to a method that is almost as stable as offline GD.
We first describe Algorithm 1, and then present our main result in Theorem 4.1, showing that the error of the gradient estimates of our approach shrinks as we progress. We show the benefit of this result in Thm. 4.2, which demonstrates that µ²-SGD obtains the same convergence rate as standard SGD for a very wide range of learning rates (in contrast to SGD).
Next we elaborate on the ingredients of Alg. 1:
Update rule: note that for generality we allow a broad family of update rules in Eq. (9) of Alg. 1. The only constraint on the update rule is that its iterates always belong to $K$. Later, we will specifically analyze the natural SGD-style update rule,
$$x_{t+1} = \Pi_K\left(x_t - \eta\,\alpha_t d_t\right). \tag{8}$$
Momentum computation: from Eq. (11) in Alg. 1 we can see that the momentum $d_t$ is updated similarly to the STORM update in Eq. (7), albeit with two main differences: (i) first, we incorporate importance weights $\{\alpha_t\}_t$ and recursively update the weighted momentum $d_t$; more importantly, (ii) we query the noisy gradients at the averages $w_t$ rather than at the iterates themselves, and the averages (Eq. (10)) are computed in the spirit of Anytime-SGD. This can be seen in the computation of $\nabla f(w_t; z_t)$ and $\nabla f(w_{t-1}; z_t)$, which query the gradients at the averages rather than at the iterates.
$$x_t \leftarrow \text{Update based on past information/computations up to time } t - 1, \quad \text{s.t. } x_t \in K. \tag{9}$$
$$w_t = \frac{\alpha_{1:t-1}\,w_{t-1} + \alpha_t\,x_t}{\alpha_{1:t}}. \tag{10}$$
$$d_t = \nabla f(w_t; z_t) + \frac{\alpha_{1:t-1}}{\alpha_{1:t}}\left(d_{t-1} - \nabla f(w_{t-1}; z_t)\right), \quad \text{where } z_t \sim \mathcal{D} \text{ is a fresh sample}. \tag{11}$$
Thus, as promised, our algorithm combines two different momentum mechanisms. Next we present our main result, which shows that Alg. 1 yields estimates with a very small error. Note that the only limitation on the update rule in Eq. (9) is that its iterates always belong to $K$.
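Below is a minimal Python sketch of Alg. 1 instantiated with the SGD-style rule of Eq. (8). The concrete choices here — weights $\alpha_t = t$, corrected-momentum weight $\alpha_{1:t-1}/\alpha_{1:t}$, a Euclidean-ball constraint set, and the oracle interface — are our reading of the algorithm and should be treated as assumptions rather than a verbatim transcription.

```python
import numpy as np

def project_ball(v, radius=1.0):
    """Orthogonal projection onto K = {x : ||x|| <= radius}."""
    n = np.linalg.norm(v)
    return v if n <= radius else (radius / n) * v

def mu2_sgd(per_sample_grad, sample_z, x1, T, eta, rng):
    """Sketch of Alg. 1 with the SGD-style rule of Eq. (8).

    per_sample_grad(w, z): gradient of f(.; z) at w.   sample_z(rng): fresh sample z_t.
    Weights alpha_t = t and momentum weight alpha_{1:t-1}/alpha_{1:t} are assumptions.
    """
    x = np.array(x1, dtype=float)
    w = x.copy()
    d = None
    alpha_sum = 0.0
    for t in range(1, T + 1):
        alpha_t = float(t)
        prev_sum = alpha_sum                     # alpha_{1:t-1}
        alpha_sum += alpha_t                     # alpha_{1:t}
        w_prev = w
        w = x.copy() if t == 1 else (prev_sum * w_prev + alpha_t * x) / alpha_sum   # Eq. (10)
        z = sample_z(rng)
        if t == 1:
            d = per_sample_grad(w, z)            # d_1 is a plain stochastic gradient
        else:
            beta_t = prev_sum / alpha_sum
            # Eq. (11): STORM-style correction with gradients queried at the averages.
            d = per_sample_grad(w, z) + beta_t * (d - per_sample_grad(w_prev, z))
        x = project_ball(x - eta * alpha_t * d)  # Eq. (8): SGD-style update
    return w
```

As in Eq. (11), the same fresh sample $z_t$ appears in both gradient evaluations of the correction term, which is what drives the variance reduction.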
Theorem 4.1.
So, according to the above theorem, the overall error of $d_t$ compared to the exact gradient $\nabla f(w_t)$ shrinks as we progress. Conversely, in standard SGD (as well as in Anytime-SGD) the expected square error is fixed, namely $\mathbb{E}\|g_t - \nabla f(w_t)\|^2 \approx \sigma^2$ for all $t$.
Based on the above theorem, we are now ready to analyze Alg. 1 with the specific SGD-type update rule that we present in Eq. (8).
Theorem 4.2 (µ²-SGD Guarantees).
The Stability of µ²-SGD.
The above theorem shows that µ²-SGD obtains the optimal SGD convergence rates for both the offline (noiseless) and the noisy case with the same choice of fixed learning rate $\eta$, which does not depend on the noise level $\sigma$. This is in contrast to SGD, which requires either reducing the offline learning rate by a factor that depends on $\sigma$ and $T$, or using sophisticated adaptive learning rates (Duchi et al., 2011; Levy et al., 2018).
Moreover, it can be seen that in the noisy case our approach enables employing learning rates in an extremely wide range, while still obtaining the same optimal SGD convergence rate of $O\!\left(LD^2/T + \sigma D/\sqrt{T}\right)$.
4.1 Proof Sketch of Thm. 4.1
Proof.
First note that the $w_t$'s always belong to $K$, since they are weighted averages of the $x_t$'s. Our first step is to bound the difference between consecutive queries $w_t, w_{t-1}$. The definition of $w_t$ (Eq. (10)) implies,
yielding,
(12)
where we have used implying for any , we also used which holds since , finally we use .
Notation: Prior to going on with the proof we shall require some notation. We will denote , and recall the following notation from Alg. 1: . We will also denote,
Now, recalling Eq. (11), and combining it with the above definition of enables to derive the following recursive relation,
where we denote . Now, using , and then it can be shown that ,and . Moreover, . Using these relations in the equation above gives,
(13)
where for any we denote , as well as . Unrolling the above equation yields an explicit expression for any :
Noticing that the sequence is a martingale difference sequence with respect to the natural filtration induced by the history of the samples up to time $t$, we can bound the expected square error as follows,
(14)
Now, using Eq. (3) together with Eq. (12) allows us to bound,
Plugging the above back into Eq. (4.1) and summing establishes the theorem. ∎
5 Accelerated Version: µ²-ExtraSGD
In this section we present an accelerated version that makes use of a double momentum mechanism, as we do for µ²-SGD. Our approach here relies on an algorithmic template named ExtraGradient (Korpelevich, 1976; Nemirovski, 2004; Juditsky et al., 2011). The latter technique has already been combined with the Anytime-SGD mechanism in Cutkosky (2019); Kavis et al. (2019), where it was shown that it leads to acceleration. Here, we further blend an appropriate µ² mechanism, leading to a new method that we name µ²-ExtraSGD. Our main result for this section is presented in Thm. 5.2.
Optimistic OGD, Extragradient and UnixGrad:
The extragradient technique is related to an algorithmic template named Optimistic Online GD (Optimistic OGD) (Rakhlin and Sridharan, 2013). In this algorithm we receive a sequence of (possibly arbitrary) loss vectors $\{g_t\}_t$ in an online manner, and our goal is to sequentially compute a sequence of iterates (or decision points) $\{x_t\}_t \subseteq K$, where $K$ is a given convex set. Note that we may pick $x_t$ only based on past information $\{g_\tau\}_{\tau < t}$. Our goal is to ensure a low weighted regret for any $x^* \in K$, where the latter is defined as follows,
$$R_T(x^*) := \sum_{t=1}^{T}\alpha_t\,g_t \cdot (x_t - x^*),$$
and $\{\alpha_t\}_t$ is a sequence of predefined weights. Now, in the optimistic setting we assume that we may access another sequence of “hint vectors” $\{\tilde g_t\}_t$, and we assume that prior to picking $x_t$ we may also access $\tilde g_t$. It was shown in Rakhlin and Sridharan (2013) that if the hints are useful, in the sense that $\tilde g_t \approx g_t$, then one can substantially reduce the regret by properly incorporating the hints. Thus, in Optimistic-OGD we maintain two sequences: a decision point sequence $\{x_t\}_t$ and an auxiliary sequence $\{y_t\}_t$ that are updated as follows,
Optimistic OGD:
$$x_t = \Pi_K\left(y_{t-1} - \eta\,\alpha_t \tilde g_t\right), \qquad y_t = \Pi_K\left(y_{t-1} - \eta\,\alpha_t g_t\right). \tag{15}$$
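In Python, one round of this template can be sketched as follows; the Euclidean-ball projection and the function names are our illustrative choices.

```python
import numpy as np

def project_ball(v, radius=1.0):
    n = np.linalg.norm(v)
    return v if n <= radius else (radius / n) * v

def optimistic_ogd_round(y_prev, hint, loss_vec, alpha_t, eta):
    """One round of Optimistic OGD (Eq. (15)): play x_t using the hint vector,
    then update the auxiliary point y_t once the actual loss vector is revealed."""
    x_t = project_ball(y_prev - eta * alpha_t * hint)      # decision point, uses the hint
    y_t = project_ball(y_prev - eta * alpha_t * loss_vec)  # auxiliary point, uses the loss
    return x_t, y_t
```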
It was shown in (Rakhlin and Sridharan, 2013; Kavis et al., 2019) that the above algorithm enjoys the following regret bound for any $x^* \in K$.
Theorem 5.1 (See e.g. the proof of Thm. 1 in (Kavis et al., 2019)).
Let $\eta > 0$, and let $K$ be a convex set with bounded diameter $D$. Then the Optimistic-OGD algorithm that appears above enjoys the following regret bound for any $x^* \in K$,
The Extragradient algorithm (Nemirovski, 2004) aims to minimize a convex function $f$. To do so, it applies the Optimistic-OGD template with the following choices of loss and hint vectors:
$g_t := \nabla f(x_t)$, and $\tilde g_t := \nabla f(y_{t-1})$.
The UnixGrad algorithm (Kavis et al., 2019) can be seen as an Anytime version of Extragradient, where again we aim to minimize a convex function $f$. In the spirit of Anytime-GD, UnixGrad maintains two sequences of weighted averages, based on the sequences $\{x_t\}_t, \{y_t\}_t$, as follows,
$$\tilde w_t = \frac{\alpha_{1:t-1}\,w_{t-1} + \alpha_t\,y_{t-1}}{\alpha_{1:t}}, \qquad w_t = \frac{\alpha_{1:t-1}\,w_{t-1} + \alpha_t\,x_t}{\alpha_{1:t}}. \tag{16}$$
Then, based on the above averages, UnixGrad sets the loss and hint vectors as follows: $g_t := \nabla f(w_t)$, and $\tilde g_t := \nabla f(\tilde w_t)$. Note that the above averaging rule implies that the $w_t$'s are weighted averages of the $x_t$'s, i.e. $w_t = \frac{1}{\alpha_{1:t}}\sum_{\tau=1}^{t}\alpha_\tau x_\tau$. The latter enables to utilize the Anytime guarantees of Thm. 3.1.
Note that there also exist stochastic versions of the above approaches where we may only query noisy rather than exact gradient estimates.
Our Approach.
We suggest to employ the Optimistic-OGD template together with a µ² (corrected momentum) mechanism on top of the Anytime mechanism employed by UnixGrad. Specifically, we shall maintain the same weighted averages $w_t, \tilde w_t$ as in Eq. (16), and define momentum estimates as follows:
At round $t$ draw a fresh sample $z_t \sim \mathcal{D}$, and based on it compute
$$\nabla f(\tilde w_t; z_t), \quad \nabla f(w_t; z_t), \quad \text{and} \quad \nabla f(w_{t-1}; z_t). \tag{17}$$
Based on the above, compute the (corrected momentum) loss and hint vectors as follows,
$$\tilde d_t = \nabla f(\tilde w_t; z_t) + \frac{\alpha_{1:t-1}}{\alpha_{1:t}}\left(d_{t-1} - \nabla f(w_{t-1}; z_t)\right), \qquad d_t = \nabla f(w_t; z_t) + \frac{\alpha_{1:t-1}}{\alpha_{1:t}}\left(d_{t-1} - \nabla f(w_{t-1}; z_t)\right). \tag{18}$$
And then update according to Optimistic OGD in Eq. (15), with $\tilde d_t$ playing the role of the hint vector and $d_t$ playing the role of the loss vector. Notice that the update rule for $d_t$ is the exact same update that we use in Alg. 1; additionally, as we have already commented, the sequence $\{w_t\}_t$ in Eq. (16) is a weighted average of the $\{x_t\}_t$ sequence. Therefore, if we pick the same weights $\{\alpha_t\}_t$ as in Alg. 1, then we can invoke Thm. 4.1, implying that the error $\mathbb{E}\|d_t - \nabla f(w_t)\|^2$ shrinks with $t$.
The pseudo-code in Alg. 2 depicts our algorithm with the appropriate computational order. It can be seen that it combines Optimistic-OGD updates (Eq. (15)), together with appropriate Anytime averaging (Eq. (16)), and with the µ² updates for the corrected momentum estimates (Eq. (18)).
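The following Python sketch shows how these pieces could fit together in one loop. The weight choices ($\alpha_t = t$, corrected-momentum weight $\alpha_{1:t-1}/\alpha_{1:t}$) and the Euclidean-ball constraint set are our assumptions, so this should be read as an illustration of the template rather than a verbatim transcription of Alg. 2.

```python
import numpy as np

def project_ball(v, radius=1.0):
    n = np.linalg.norm(v)
    return v if n <= radius else (radius / n) * v

def mu2_extra_sgd(per_sample_grad, sample_z, x1, T, eta, rng):
    """Illustrative sketch: Optimistic-OGD steps (Eq. (15)), Anytime averaging (Eq. (16)),
    and STORM-style corrected loss/hint vectors (Eq. (18)).  Weight choices are assumptions."""
    y = np.array(x1, dtype=float)    # auxiliary point y_{t-1}
    w = y.copy()                     # running weighted average of the decision points
    d = None                         # corrected momentum (loss vector) from the previous round
    alpha_sum = 0.0
    for t in range(1, T + 1):
        alpha_t = float(t)
        new_sum = alpha_sum + alpha_t
        beta_t = alpha_sum / new_sum                 # alpha_{1:t-1}/alpha_{1:t}; zero at t = 1
        z = sample_z(rng)
        corr = 0.0 if t == 1 else beta_t * (d - per_sample_grad(w, z))   # correction at w_{t-1}
        # Hint: provisional average with y_{t-1} standing in for x_t (Eq. (16)).
        w_tilde = (alpha_sum * w + alpha_t * y) / new_sum
        d_tilde = per_sample_grad(w_tilde, z) + corr                     # hint vector (Eq. (18))
        x = project_ball(y - eta * alpha_t * d_tilde)                    # optimistic step (Eq. (15))
        # Loss: actual average with x_t (Eq. (16)) and the same corrected recursion (Eq. (18)).
        w = (alpha_sum * w + alpha_t * x) / new_sum
        d = per_sample_grad(w, z) + corr                                 # loss vector (as in Alg. 1)
        y = project_ball(y - eta * alpha_t * d)                          # OGD step (Eq. (15))
        alpha_sum = new_sum
    return w
```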
We are now ready to state the guarantees of µ²-ExtraSGD (Thm. 5.2).
The Stability of µ²-ExtraSGD.
Similarly to µ²-SGD, the above theorem shows that µ²-ExtraSGD obtains the optimal rates for both the offline (noiseless) and the noisy case with the same choice of fixed learning rate $\eta$, which only depends on the smoothness of the objective. This is in contrast to existing accelerated methods, which require either reducing the offline learning rate by a factor that depends on the noise level and $T$ (Xiao, 2010), or employing sophisticated adaptive learning rates (Cutkosky, 2019; Kavis et al., 2019).
Moreover, it can be seen that in the noisy case our approach enables employing learning rates in an extremely wide range, while still obtaining the same optimal convergence rate of $O\!\left(LD^2/T^2 + \sigma D/\sqrt{T}\right)$.
5.1 Proof of Thm. 5.2
Proof of Thm. 5.2.
The proof decomposes according to the techniques that µ²-ExtraSGD employs.
Part I: Anytime Guarantees. Since the $w_t$'s are weighted averages of the $x_t$'s, we can invoke Thm. 3.1, which implies,
$$\alpha_{1:T}\left(f(w_T) - f(x^*)\right) \le \sum_{t=1}^{T}\alpha_t\,\nabla f(w_t)\cdot(x_t - x^*), \tag{19}$$
where we denote $x^* \in \arg\min_{x \in K} f(x)$.
Part II: Optimistic OGD Guarantees. Since the update rule for $x_t, y_t$ satisfies the Optimistic-OGD template w.r.t. the sequences of loss and hint vectors $\{d_t\}_t, \{\tilde d_t\}_t$, we can apply Thm. 5.1 to bound the weighted regret in Eq. (19) as follows,
(20)
where the first line uses Eq. (19) together with Thm. 5.1; the second line uses which follows by Eq. (18); and the third line follows by the definitions of , as well as from , which holds since ; the fourth line follows by our assumption in Eq. (2);
the fifth line holds since which holds due to Eq. (16); the sixth line holds since ;
and the last line follows since which holds since we assume .
Part III: µ² Guarantees.
Notice that our definitions for $d_t$ and $w_t$ satisfy the exact same conditions of Thm. 4.1. This immediately implies that the error bound of Thm. 4.1 applies to $\mathbb{E}\|d_t - \nabla f(w_t)\|^2$. Using this, and taking the expectation of Eq. (5.1), yields,
(21)
where the second inequality uses Jensen’s Inequality: which holds for any random variable ; the last inequality follows from , implying that .
Dividing the above equation by $\alpha_{1:T}$ and recalling that $\alpha_{1:T} = \Theta(T^2)$ concludes the proof. ∎
6 Conclusion
By blending two recent momentum techniques, we have designed a new shrinking-error gradient estimate for the SCO-EOS setting. Based on it, we have presented two algorithms that rely on the SGD and on the Extragradient templates, and have shown that the latter are extremely stable w.r.t. the choice of the learning rate. For future work, it will be interesting to study our approach in the more general case of an expectation over Hölder-smooth functions, as well as to extend our approach to the strongly-convex case.
Acknowledgments
This work has received funding from the Israeli Science Foundation (grant No. 447/20).
References
- Antonakopoulos et al. (2022) K. Antonakopoulos, D. Q. Vu, V. Cevher, K. Levy, and P. Mertikopoulos. Undergrad: A universal black-box optimization method with almost dimension-free convergence rate guarantees. In International Conference on Machine Learning, pages 772–795. PMLR, 2022.
- Cutkosky (2019) A. Cutkosky. Anytime online-to-batch, optimism and acceleration. In International Conference on Machine Learning, pages 1446–1454. PMLR, 2019.
- Cutkosky and Orabona (2019) A. Cutkosky and F. Orabona. Momentum-based variance reduction in non-convex sgd. Advances in neural information processing systems, 32, 2019.
- Duchi et al. (2011) J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research, 12(7), 2011.
- Ge et al. (2015) R. Ge, F. Huang, C. Jin, and Y. Yuan. Escaping from saddle points—online stochastic gradient for tensor decomposition. In Proceedings of The 28th Conference on Learning Theory, pages 797–842, 2015.
- Hazan et al. (2016) E. Hazan et al. Introduction to online convex optimization. Foundations and Trends® in Optimization, 2(3-4):157–325, 2016.
- Hu et al. (2009) C. Hu, W. Pan, and J. T. Kwok. Accelerated gradient methods for stochastic optimization and online learning. In Advances in Neural Information Processing Systems, pages 781–789, 2009.
- Jacobsen and Cutkosky (2022) A. Jacobsen and A. Cutkosky. Parameter-free mirror descent. arXiv preprint arXiv:2203.00444, 2022.
- Juditsky et al. (2011) A. Juditsky, A. Nemirovski, and C. Tauvel. Solving variational inequalities with stochastic mirror-prox algorithm. Stochastic Systems, 1(1):17–58, 2011.
- Kavis et al. (2019) A. Kavis, K. Y. Levy, F. Bach, and V. Cevher. Unixgrad: a universal, adaptive algorithm with optimal guarantees for constrained optimization. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, pages 6260–6269, 2019.
- Kingma and Ba (2015) D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In Y. Bengio and Y. LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6980.
- Korpelevich (1976) G. M. Korpelevich. The extragradient method for finding saddle points and other problems. Matecon, 12:747–756, 1976.
- Lan (2012) G. Lan. An optimal method for stochastic composite optimization. Mathematical Programming, 133(1-2):365–397, 2012.
- Levy et al. (2018) K. Y. Levy, A. Yurtsever, and V. Cevher. Online adaptive methods, universality and acceleration. In NeurIPS, 2018.
- Levy et al. (2021) K. Y. Levy, A. Kavis, and V. Cevher. Storm+: Fully adaptive sgd with recursive momentum for nonconvex optimization. In NeurIPS, 2021.
- Mairal et al. (2009) J. Mairal, F. R. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. In A. P. Danyluk, L. Bottou, and M. L. Littman, editors, Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, Montreal, Quebec, Canada, June 14-18, 2009, volume 382 of ACM International Conference Proceeding Series, pages 689–696. ACM, 2009. doi: 10.1145/1553374.1553463. URL https://doi.org/10.1145/1553374.1553463.
- Nemirovski (2004) A. Nemirovski. Prox-method with rate of convergence $O(1/t)$ for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1):229–251, 2004.
- Nesterov (1983) Y. Nesterov. A method of solving a convex programming problem with convergence rate $O(1/k^2)$. In Soviet Mathematics Doklady, volume 27, pages 372–376, 1983.
- Polyak (1964) B. T. Polyak. Some methods of speeding up the convergence of iteration methods. Ussr computational mathematics and mathematical physics, 4(5):1–17, 1964.
- Rakhlin and Sridharan (2013) S. Rakhlin and K. Sridharan. Optimization, learning, and games with predictable sequences. In Advances in Neural Information Processing Systems, pages 3066–3074, 2013.
- Recht et al. (2011) B. Recht, C. Ré, S. J. Wright, and F. Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. C. N. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011. Proceedings of a meeting held 12-14 December 2011, Granada, Spain, pages 693–701, 2011. URL https://proceedings.neurips.cc/paper/2011/hash/218a0aefd1d1a4be65601cc6ddc1520e-Abstract.html.
- Robbins and Monro (1951) H. Robbins and S. Monro. A stochastic approximation method. The annals of mathematical statistics, pages 400–407, 1951.
- Shalev-Shwartz et al. (2007) S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for svm. In Proceedings of the 24th international conference on Machine learning, pages 807–814, 2007.
- Sutskever et al. (2013) I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pages 1139–1147. PMLR, 2013.
- Wang et al. (2021) J.-K. Wang, J. Abernethy, and K. Y. Levy. No-regret dynamics in the fenchel game: A unified framework for algorithmic convex optimization. arXiv preprint arXiv:2111.11309, 2021.
- Welling and Teh (2011) M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient langevin dynamics. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 681–688. Citeseer, 2011.
- Xiao (2010) L. Xiao. Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11(Oct):2543–2596, 2010.
Appendix A Explaining the Bounded Smoothness Variance Assumption
Fixing , then Eq. (2) implies that for any there exists such that,
Similarly there exists such that,
And clearly in the deterministic case we have . Therefore,
where we have used , and we denote . This notation implies that , and clearly in the deterministic case for all . Thus, if we denote,
Then satisfies Eq. (3) and is equal to in the deterministic (noiseless) case.
Appendix B Proof of Thm. 4.1
Proof of Thm. 4.1.
First note that the $w_t$'s always belong to $K$, since they are weighted averages of the $x_t$'s. Next we bound the difference between consecutive queries $w_t, w_{t-1}$. By definition,
Implying,
(22)
where we have used implying for any , we also used which holds since , finally we use .
Notation: Prior to going on with the proof we shall require some notation. We will denote , and recall the following notation from Alg. 1: , and we will also denote, , and
Now, recalling Eq. (11),
Combining the above with the definition of yields the following recursive relation,
where we denoted . Now, using , and then it can be shown that ,and . Moreover, . Using these relations in the equation above gives,
(23)
where for any we denote , as well as . Unrolling the above equation yields an explicit expression for any ,
Now, notice that the sequence is a martingale difference sequence with respect to the natural filtration induced by the history of the samples up to time $t$. Indeed,
Thus, using Lemma B.1 below gives,
here the first inequality uses which holds for any ; the second inequality uses the bounded variance assumption; the third inequality uses Eq. (3), and the last inequality uses Eq. (22).
Dividing the above inequality by the lemma follows,
Lemma B.1.
Let $\{Z_t\}_t$ be a martingale difference sequence with respect to a filtration $\{\mathcal{F}_t\}_t$. Then the following holds for any $T$,
$$\mathbb{E}\left\|\sum_{t=1}^{T} Z_t\right\|^2 = \sum_{t=1}^{T}\mathbb{E}\left\|Z_t\right\|^2.$$
∎
B.0.1 Proof of Lemma B.1
Proof of Lemma B.1.
We shall prove the lemma by induction over $T$. The base case $T = 1$ clearly holds.
Now, for the induction step, let us assume that the equality holds for $T - 1$, and let us prove that it holds for $T$. Indeed,
where the third line follows from the induction hypothesis, as well as from the law of total expectation; the fourth line follows since $Z_1, \ldots, Z_{T-1}$ are measurable w.r.t. $\mathcal{F}_{T-1}$; and the fifth line follows since $\mathbb{E}\left[Z_T \mid \mathcal{F}_{T-1}\right] = 0$. Thus, we have established the induction step and therefore the lemma holds. ∎
Appendix C Proof of Thm. 4.2
Proof of Thm. 4.2.
The proof is a direct combination of Thm. 4.1 with the standard regret bound of OGD (Online Gradient Descent), which in turn enables us to utilize the Anytime guarantees of Thm. 3.1.
Part 1: Regret Bound.
Standard regret analysis of the update rule in Eq. (8) implies the following for any $x^* \in K$ (Hazan et al., 2016),
$$\sum_{t=1}^{T}\alpha_t\,d_t\cdot(x_t - x^*) \le \frac{D^2}{2\eta} + \frac{\eta}{2}\sum_{t=1}^{T}\alpha_t^2\left\|d_t\right\|^2. \tag{24}$$
Part 2: Anytime Guarantees.
Since the $w_t$'s are weighted averages of the $x_t$'s, we may invoke Thm. 3.1, which ensures for any $t$,
$$\alpha_{1:t}\left(f(w_t) - f(x^*)\right) \le \sum_{\tau=1}^{t}\alpha_\tau\,\nabla f(w_\tau)\cdot(x_\tau - x^*),$$
where we denote $x^* \in \arg\min_{x \in K} f(x)$.
Part 3: Combining Guarantees.
Combining the above Anytime guarantees together with the bound in Eq. (24) yields,
(25)
where the first inequality follows from the Cauchy–Schwarz inequality; the second inequality holds since , as well as from using , which holds for any ; the third inequality follows from the self-bounding property for smooth functions (see Lemma C.1 below), implying that ; and the fourth inequality follows due to , which holds since .
Lemma C.1 (Self-bounding property). Let $f: \mathbb{R}^d \to \mathbb{R}$ be an $L$-smooth function, and let $f^* := \min_{x \in \mathbb{R}^d} f(x)$. Then for any $x$, $\|\nabla f(x)\|^2 \le 2L\left(f(x) - f^*\right)$.
Next, we will take expectation over Eq. (C), yielding,
(26)
where the second line is due to Jensen’s inequality, implying that $\mathbb{E}\sqrt{X} \le \sqrt{\mathbb{E}X}$ for any non-negative random variable $X$; the third line follows from the bound of Thm. 4.1; the fifth line holds since ; the sixth line follows since , and the last line follows since we pick .
To obtain the final bound we will apply the lemma below to Eq. (C),
Lemma C.2.
Let be a sequence of non-negative elements and , and assume that for any ,
Then the following bound holds,
Taking and provides the following explicit bound,
Dividing by and recalling gives,
which concludes the proof. ∎
C.1 Proof of Lemma C.2
Proof of Lemma C.2.
Summing the inequality over gives,
Re-ordering we obtain,
Plugging this back into the original inequality and taking gives,
which concludes the proof. ∎