Non-stationary Online Convex Optimization with Arbitrary Delays
Abstract
Online convex optimization (OCO) with arbitrary delays, in which gradients or other information of functions could be arbitrarily delayed, has received increasing attention recently. Different from previous studies that focus on stationary environments, this paper investigates delayed OCO in non-stationary environments, and aims to minimize the dynamic regret with respect to any sequence of comparators. To this end, we first propose a simple algorithm, namely DOGD, which performs a gradient descent step for each delayed gradient according to their arrival order. Despite its simplicity, our novel analysis shows that the dynamic regret of DOGD can be automatically bounded by $O(\sqrt{\bar{d}T}(P_T+1))$ under mild assumptions, and by $O(\sqrt{dT}(P_T+1))$ in the worst case, where $\bar{d}$ and $d$ denote the average and maximum delay respectively, $T$ is the time horizon, and $P_T$ is the path-length of comparators. Furthermore, we develop an improved algorithm, which reduces those dynamic regret bounds achieved by DOGD to $O(\sqrt{\bar{d}T(P_T+1)})$ and $O(\sqrt{dT(P_T+1)})$, respectively. The key idea is to run multiple DOGD with different learning rates, and utilize a meta-algorithm to track the best one based on their delayed performance. Finally, we demonstrate that our improved algorithm is optimal in the worst case by deriving a matching lower bound.
1 Introduction
Online convex optimization (OCO) has become a popular paradigm for solving sequential decision-making problems (Shalev-Shwartz, 2011; Hazan, 2016; Orabona, 2019). In OCO, an online player acts as the decision maker, and chooses a decision $\mathbf{x}_t$ from a convex set $\mathcal{K}$ at each round $t\in[T]$. After the decision is committed, the player suffers a loss $f_t(\mathbf{x}_t)$, where $f_t(\cdot):\mathcal{K}\mapsto\mathbb{R}$ is a convex function selected by an adversary. To improve the performance in subsequent rounds, the player needs to update the decision by exploiting information about loss functions in previous rounds. Plenty of algorithms and theories have been introduced to guide the player (Zinkevich, 2003; Shalev-Shwartz & Singer, 2007; Hazan et al., 2007).
However, most existing studies assume that the information about each function $f_t$ is revealed at the end of round $t$, which is not necessarily satisfied in many real applications. For example, in online advertising (McMahan et al., 2013; He et al., 2014), each loss function depends on whether a user clicks an ad or not, which may remain unknown even after the user has observed the ad for a long period of time. To tackle this issue, there has been a surge of research interest in OCO with arbitrary delays (Joulani et al., 2013; McMahan & Streeter, 2014; Quanrud & Khashabi, 2015; Joulani et al., 2016; Flaspohler et al., 2021; Wan et al., 2022a, b, 2023a), where the information about $f_t$ is revealed at the end of round $t+d_t-1$, and $d_t\geq 1$ denotes the delay. However, these studies focus on developing algorithms to minimize the static regret of the player, i.e., $\sum_{t=1}^{T}f_t(\mathbf{x}_t)-\min_{\mathbf{x}\in\mathcal{K}}\sum_{t=1}^{T}f_t(\mathbf{x})$, which is only meaningful for stationary environments where at least one fixed decision can minimize the cumulative loss well, and thus cannot handle non-stationary environments where the best decision is drifting over time.
To address this limitation, we investigate the delayed OCO with a more suitable performance metric called dynamic regret (Zinkevich, 2003):
$$\text{D-Regret}(\mathbf{u}_1,\dots,\mathbf{u}_T)=\sum_{t=1}^{T}f_t(\mathbf{x}_t)-\sum_{t=1}^{T}f_t(\mathbf{u}_t),$$
which compares the player against any sequence of changing comparators $\mathbf{u}_1,\dots,\mathbf{u}_T\in\mathcal{K}$. It is well known that in the non-delayed setting, online gradient descent (OGD) can attain a dynamic regret bound of $O(\sqrt{T}(P_T+1))$ (Zinkevich, 2003), where $P_T=\sum_{t=2}^{T}\|\mathbf{u}_t-\mathbf{u}_{t-1}\|_2$ is the path-length of comparators, and multiple instances of OGD with different learning rates can be combined to achieve an optimal dynamic regret bound of $O(\sqrt{T(P_T+1)})$ by using a meta-algorithm (Zhang et al., 2018a). Thus, it is natural to ask whether these algorithms and dynamic regret bounds can be generalized to the setting with arbitrary delays.
In this paper, we provide an affirmative answer to the above question. Specifically, we first propose delayed online gradient descent (DOGD), and provide a novel analysis of its dynamic regret. In the literature, Quanrud & Khashabi (2015) have developed a delayed variant of OGD for minimizing the static regret, which performs a gradient descent step by using the sum of gradients received in each round. Different from their algorithm, our DOGD performs a gradient descent step for each delayed gradient according to their arrival order, which allows us to exploit an In-Order property (i.e., delays do not change the arrival order of gradients) to reduce the dynamic regret. Let $\bar{d}$ and $d$ denote the average and maximum delay, respectively. Our analysis shows that the dynamic regret of DOGD can be automatically bounded by $O(\sqrt{\bar{d}T}(P_T+1))$ under mild assumptions such as the In-Order property, and by $O(\sqrt{dT}(P_T+1))$ in the worst case.
Furthermore, inspired by Zhang et al. (2018a), we propose an improved algorithm based on DOGD, namely multiple delayed online gradient descent (Mild-OGD). The essential idea is to run multiple instances of DOGD, each with a different learning rate that enjoys small dynamic regret for a specific path-length, and combine them with a meta-algorithm. Compared with Zhang et al. (2018a), the key challenge is that the performance of each DOGD instance is required by the meta-algorithm, but it is also arbitrarily delayed. To address this difficulty, our meta-algorithm is built upon the delayed Hedge, a technique for prediction with delayed expert advice (Korotin et al., 2020), which can track the best DOGD instance based on their delayed performance. We prove that the dynamic regret of Mild-OGD can be automatically bounded by $O(\sqrt{\bar{d}T(P_T+1)})$ under mild assumptions such as the In-Order property, and by $O(\sqrt{dT(P_T+1)})$ in the worst case. In the special case without delay, both bounds reduce to the optimal $O(\sqrt{T(P_T+1)})$ bound achieved by Zhang et al. (2018a). Finally, we demonstrate that our Mild-OGD is optimal in the worst case by deriving a matching lower bound.
2 Related Work
In this section, we briefly review related work on OCO with arbitrary delays and the dynamic regret.
2.1 OCO with Arbitrary Delays
To deal with arbitrary delays, Joulani et al. (2013) first propose a black-box technique, which can extend any non-delayed OCO algorithm into the delayed setting. The main idea is to maintain a pool of multiple instances of the non-delayed algorithm, each of which runs over a subsequence of rounds that satisfies the non-delayed assumption. Moreover, Joulani et al. (2013) show that if the non-delayed algorithm has a static regret bound of $R(T)$, this technique can attain a static regret bound of $O(dR(T/d))$, where $d$ denotes the maximum delay. Notice that in the non-delayed setting, there exist plenty of algorithms with an $O(\sqrt{T})$ static regret bound, such as OGD (Zinkevich, 2003). As a result, combined with OGD, this technique can achieve a static regret bound of $O(\sqrt{dT})$. However, despite the generality of this technique, it needs to run multiple instances of the non-delayed algorithm, which could be prohibitively resource-intensive (Quanrud & Khashabi, 2015; Joulani et al., 2016). For this reason, instead of adopting the technique of Joulani et al. (2013), subsequent studies extend many specific non-delayed OCO algorithms into the delayed setting by only running a single instance of them with the delayed information about all loss functions.
Specifically, Quanrud & Khashabi (2015) propose a delayed variant of OGD, and reduce the static regret to $O(\sqrt{\bar{d}T})$, which depends on the average delay $\bar{d}$, instead of the maximum delay $d$. By additionally assuming that the In-Order property holds, McMahan & Streeter (2014) develop a delayed variant of the adaptive gradient (AdaGrad) algorithm (McMahan & Streeter, 2010; Duchi et al., 2011), and establish a data-dependent static regret bound, which could be tighter for sparse data. Later, Joulani et al. (2016) propose another delayed variant of AdaGrad, which can attain a data-dependent static regret bound without the In-Order property. Recently, Flaspohler et al. (2021) develop delayed variants of optimistic algorithms (Rakhlin & Sridharan, 2013; Joulani et al., 2017), which can make use of “hints” about expected future loss functions to improve the static regret. Wan et al. (2022a) extend the delayed variant of OGD (Quanrud & Khashabi, 2015) to further exploit the strong convexity of functions. Wan et al. (2022b, 2023a) develop a delayed variant of online Frank-Wolfe (Hazan & Kale, 2012), and establish a corresponding delay-dependent static regret bound. Their algorithm is projection-free and can be efficiently implemented over complex constraints. We also notice that Korotin et al. (2020) consider the problem of prediction with expert advice, a special case of OCO with linear functions and simplex decision sets, and propose a delayed variant of Hedge (Freund & Schapire, 1997) to achieve a delay-dependent static regret bound.
2.2 Dynamic Regret
Dynamic regret of OCO is first introduced by Zinkevich (2003), who demonstrates that OGD can attain a dynamic regret bound of $O(\sqrt{T}(P_T+1))$ by simply utilizing a constant learning rate. Later, Zhang et al. (2018a) establish a lower bound of $\Omega(\sqrt{T(P_T+1)})$ for the dynamic regret. Moreover, to improve the upper bound, Zhang et al. (2018a) propose a novel algorithm that runs multiple instances of OGD with different learning rates in parallel, and tracks the best one via Hedge (Freund & Schapire, 1997). Although the strategy of maintaining multiple learning rates is originally proposed to adaptively minimize the static regret for multiple types of functions (van Erven & Koolen, 2016; van Erven et al., 2021), Zhang et al. (2018a) extend it to achieve an optimal dynamic regret bound of $O(\sqrt{T(P_T+1)})$. Subsequent studies achieve tighter dynamic regret bounds for special types of data (Cutkosky, 2020) and functions (Zhao et al., 2020; Baby & Wang, 2021, 2022, 2023), and reduce the computational complexity for handling complex constraints (Zhao et al., 2022; Wang et al., 2024). Besides, there also exist plenty of studies (Jadbabaie et al., 2015; Besbes et al., 2015; Yang et al., 2016; Mokhtari et al., 2016; Zhang et al., 2017, 2018b; Baby & Wang, 2019; Wan et al., 2021, 2023b; Zhao & Zhang, 2021; Wang et al., 2021, 2023) that focus on a restricted form of the dynamic regret, in which each comparator is fixed to a minimizer of the corresponding loss function, i.e., $\mathbf{u}_t\in\operatorname{argmin}_{\mathbf{x}\in\mathcal{K}}f_t(\mathbf{x})$. However, as discussed by Zhang et al. (2018a), the restricted dynamic regret is too pessimistic and less flexible than the general one.
2.3 Discussions
Although both arbitrary delays and the dynamic regret have attracted much research interest, it is still unclear how arbitrary delays affect the dynamic regret. Recently, Wang et al. (2021, 2023) have demonstrated that under a fixed and knowable delay $d$, simply performing OGD with the delayed gradient is able to achieve a restricted dynamic regret bound, provided that the quantities needed to tune its learning rate are also knowable. (Note that Wang et al. (2021, 2023) aim to handle a special decision set with long-term constraints, and thus their algorithm is more complicated than OGD with the delayed gradient. Here, we omit other details of their algorithm because such a decision set is beyond the scope of this paper.) However, their algorithm and theoretical results do not apply to the general dynamic regret under arbitrary delays. Moreover, one may try to extend existing algorithms with dynamic regret bounds into the delayed setting via the black-box technique of Joulani et al. (2013). However, we want to emphasize that they focus on the static regret, and their analysis cannot directly yield a dynamic regret bound. In addition, since their technique does not achieve the $O(\sqrt{\bar{d}T})$ static regret, it seems also unable to achieve the $O(\sqrt{\bar{d}T(P_T+1)})$ dynamic regret even under the In-Order assumption.
3 Main Results
In this section, we first introduce necessary assumptions, and then present our DOGD and Mild-OGD. Finally, we provide a matching lower bound to demonstrate the optimality of our Mild-OGD in the worst case.
3.1 Assumptions
Assumption 3.1.
The gradients of all functions are bounded by $G$, i.e., $\|\nabla f_t(\mathbf{x})\|_2\leq G$ for any $\mathbf{x}\in\mathcal{K}$ and $t\in[T]$.
Assumption 3.2.
The decision set $\mathcal{K}$ contains the origin $\mathbf{0}$, and its diameter is bounded by $R$, i.e., $\|\mathbf{x}-\mathbf{y}\|_2\leq R$ for any $\mathbf{x},\mathbf{y}\in\mathcal{K}$.
Assumption 3.3.
Delays do not change the arrival order of gradients, i.e., the gradient $\nabla f_t(\mathbf{x}_t)$ is received before the gradient $\nabla f_{t'}(\mathbf{x}_{t'})$, for any $1\leq t<t'\leq T$.
Remark: The first two assumptions have been commonly utilized in previous studies on OCO (Shalev-Shwartz, 2011; Hazan, 2016). To further justify the rationality of Assumption 3.3, we notice that parallel and distributed optimization (McMahan & Streeter, 2014; Zhou et al., 2018) is also a representative application of delayed OCO. For parallel optimization with many threads, the delay is mainly caused by the computing time of gradients. Thus, as in McMahan & Streeter (2014), it is reasonable to assume that these delays satisfy the In-Order assumption, because the gradient computed first is more likely to be obtained first. Even for general parallel and distributed optimization, polynomially growing delays, which are non-decreasing over time and thus satisfy the In-Order assumption, have received much attention in recent years (Zhou et al., 2018; Ren et al., 2020; Zhou et al., 2022). Moreover, we want to emphasize that Assumption 3.3 is only utilized to achieve the dynamic regret bound depending on the average delay $\bar{d}$, and the case without this assumption is also considered.
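To make the In-Order assumption concrete, the following sketch (a hypothetical helper, not part of the paper's algorithms) checks whether a given delay sequence preserves the arrival order, assuming the convention that the gradient queried at round $t$ arrives at the end of round $t+d_t-1$.

```python
def satisfies_in_order(delays):
    """Check Assumption 3.3 for a delay sequence d_1, ..., d_T (each d_t >= 1).

    Under the convention assumed here, the gradient queried at round t arrives
    at the end of round t + d_t - 1, so the In-Order property holds if and only
    if these arrival times are non-decreasing in t.
    """
    arrivals = [t + d - 1 for t, d in enumerate(delays, start=1)]
    return all(a <= b for a, b in zip(arrivals, arrivals[1:]))

# Non-decreasing delays always satisfy the property (e.g., polynomially growing delays).
print(satisfies_in_order([1, 1, 2, 2, 3]))  # True
print(satisfies_in_order([3, 1, 1]))        # False: the first gradient arrives last
```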
3.2 DOGD with Dynamic Regret
In the following, we first introduce detailed procedures of DOGD, and then present its theoretical guarantees.
3.2.1 Detailed Procedures
Recall that in the non-delayed setting, the classical OGD algorithm (Zinkevich, 2003) at each round $t$ updates the decision as
$$\mathbf{x}_{t+1}=\Pi_{\mathcal{K}}\left(\mathbf{x}_t-\eta\nabla f_t(\mathbf{x}_t)\right), \qquad (1)$$
where $\eta$ is a learning rate and $\Pi_{\mathcal{K}}(\cdot)$ denotes the projection onto the set $\mathcal{K}$. To handle the setting with arbitrary delays, Quanrud & Khashabi (2015) have proposed a delayed variant of OGD that replaces $\nabla f_t(\mathbf{x}_t)$ with the sum of gradients received in round $t$. However, it ignores the arrival order of gradients, and thus cannot benefit from the In-Order property when minimizing the dynamic regret.
To address this limitation, we propose a new delayed variant of OGD, which performs a gradient descent step for each delayed gradient according to their arrival order. Specifically, our algorithm is named delayed online gradient descent (DOGD) and outlined in Algorithm 1, where $\tau$ records the number of generated decisions and $\mathbf{y}_\tau$ denotes the $\tau$-th generated decision. Initially, we set $\tau=1$ and $\mathbf{y}_1=\mathbf{0}$. At each round $t\in[T]$, we first play the latest decision $\mathbf{x}_t=\mathbf{y}_\tau$ and query the gradient $\nabla f_t(\mathbf{x}_t)$.
After that, due to the effect of arbitrary delays, we receive a set of delayed gradients $\{\nabla f_k(\mathbf{x}_k)\mid k\in\mathcal{F}_t\}$, where
$$\mathcal{F}_t=\{k\in[t]\mid k+d_k-1=t\}$$
denotes the set of time-stamps of the gradients arriving at the end of round $t$.
For each $k\in\mathcal{F}_t$, inspired by (1), we perform the following update
$$\mathbf{y}_{\tau+1}=\Pi_{\mathcal{K}}\left(\mathbf{y}_\tau-\eta\nabla f_k(\mathbf{x}_k)\right), \qquad (2)$$
and then set $\tau=\tau+1$. Moreover, to utilize the In-Order property, the elements in the set $\mathcal{F}_t$ are sorted and traversed in ascending order.
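The following Python sketch illustrates the DOGD update loop described above. The class and function names (`DelayedOGD`, `project`), the ball-shaped decision set, and the feedback interface are our own illustrative assumptions rather than the paper's pseudocode.

```python
import numpy as np

def project(x, radius):
    """Euclidean projection onto an l2-ball of the given radius (a stand-in
    for the projection onto a general convex set K)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

class DelayedOGD:
    """A sketch of DOGD: one descent step per delayed gradient, in arrival order."""

    def __init__(self, dim, eta, radius):
        self.eta = eta
        self.radius = radius
        self.y = np.zeros(dim)  # the latest generated decision y_tau

    def play(self):
        # The decision x_t played at round t is simply the latest y_tau.
        return self.y.copy()

    def update(self, delayed_feedback):
        # delayed_feedback: list of (time_stamp, gradient) pairs received at the
        # end of the current round.  Unlike the summed-gradient variant of
        # Quanrud & Khashabi (2015), DOGD takes one step per gradient, and the
        # time-stamps are sorted so the In-Order property can be exploited.
        for _, grad in sorted(delayed_feedback, key=lambda pair: pair[0]):
            self.y = project(self.y - self.eta * grad, self.radius)
```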
3.2.2 Theoretical Guarantees
We notice that due to the effect of delays, there could exist some gradients that arrive after round $T$. Although our DOGD does not need to utilize these gradients, they are useful for facilitating our analysis and discussion. Therefore, in the analysis of DOGD, we virtually set $\mathcal{F}_t=\{k\in[T]\mid k+d_k-1=t\}$ and perform the update steps of Algorithm 1 at some additional rounds $t=T+1,T+2,\dots$. In this way, all queried gradients are utilized to generate decisions $\mathbf{y}_1,\dots,\mathbf{y}_{T+1}$.
Moreover, we denote the time-stamp of the $\tau$-th utilized gradient by $c_\tau$, i.e., the $\tau$-th utilized gradient is $\nabla f_{c_\tau}(\mathbf{x}_{c_\tau})$. To help understanding, one can imagine that DOGD also plays the latest decision at the beginning of these additional virtual rounds.
Theorem 3.4.
Remark: The value of $m_t$ actually counts the number of gradients that have been queried, but still not received, at the end of round $t$. Since the gradient $\nabla f_t(\mathbf{x}_t)$ will only be counted as an unreceived gradient in $d_t-1$ rounds, it is easy to verify that
$$\sum_{t=1}^{T}m_t\leq\sum_{t=1}^{T}(d_t-1)\leq\sum_{t=1}^{T}d_t=\bar{d}T. \qquad (4)$$
Therefore, the first two terms on the right side of (3) are upper bounded by $O(\sqrt{\bar{d}T}(P_T+1))$ so long as
$$\eta=O\left(\frac{R}{G\sqrt{T+\sum_{t=1}^{T}m_t}}\right). \qquad (5)$$
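As a quick sanity check of the delay accounting in (4), the snippet below computes $m_t$ directly under the same arrival convention as before and compares its sum with the total delay; the random simulation is only illustrative.

```python
import random

T = 50
delays = [random.randint(1, 8) for _ in range(T)]           # d_1, ..., d_T >= 1
arrival = [t + d - 1 for t, d in enumerate(delays, start=1)]

# m_t = number of gradients queried by round t but not yet received at its end.
m = [sum(1 for s in range(1, t + 1) if arrival[s - 1] > t) for t in range(1, T + 1)]

# Each gradient queried at round t stays unreceived for at most d_t - 1 rounds,
# which is exactly the counting argument behind (4).
assert sum(m) <= sum(d - 1 for d in delays) <= sum(delays)
print(sum(m), sum(d - 1 for d in delays), sum(delays))
```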
However, we still need to bound the last term on the right side of (3), which reflects the “comparator drift” caused by arbitrary delays, and has never appeared in previous studies on delayed feedback or the dynamic regret.
To this end, we establish the following lemma regarding the comparator drift.
Lemma 3.5.
Remark: Since Algorithm 1 utilizes the received gradients in ascending order of their time-stamps, the value of $M$ counts the number of delays that are not in order. Therefore, Lemma 3.5 implies that the comparator drift can be upper bounded by $O(\sqrt{dT}(P_T+1))$ in the worst case because of $M=O(dT)$, and vanishes if the In-Order property holds, i.e., $M=0$. To facilitate discussions, we mainly focus on these two extremes, though the comparator drift also admits an intermediate bound in the case $0<M<dT$.
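One minimal way to compute such an out-of-order count for a given delay sequence is sketched below; the pair-counting definition used here is our own illustration, and the exact quantity appearing in Lemma 3.5 may differ in details.

```python
def out_of_order_count(delays):
    """Count pairs of rounds (s, t) with s < t whose gradients arrive out of order,
    assuming the gradient queried at round t arrives at the end of round t + d_t - 1."""
    arrival = [t + d - 1 for t, d in enumerate(delays, start=1)]
    return sum(
        1
        for s in range(len(arrival))
        for t in range(s + 1, len(arrival))
        if arrival[s] > arrival[t]
    )

print(out_of_order_count([1, 1, 2, 2]))  # 0: the In-Order property holds
print(out_of_order_count([5, 1, 1, 1]))  # 3: the first gradient arrives after the next three
```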
Corollary 3.6.
Remark: From Corollary 3.6, our DOGD enjoys a dynamic regret bound that is adaptive to the upper bound of the comparator drift. First, the dynamic regret of DOGD can be bounded by $O(\sqrt{dT}(P_T+1))$ in the worst case, which magnifies the $O(\sqrt{T}(P_T+1))$ dynamic regret of OGD (Zinkevich, 2003) in the non-delayed setting by a coefficient depending on the maximum delay $d$. Second, when the comparator drift is dominated by the other terms, the dynamic regret of DOGD automatically reduces to $O(\sqrt{\bar{d}T}(P_T+1))$, which depends on the average delay. According to (6), this condition can be simply satisfied for all possible comparator sequences when the In-Order property holds or the number of out-of-order delays is sufficiently small. Third, by substituting fixed comparators $\mathbf{u}_1=\dots=\mathbf{u}_T$ (i.e., $P_T=0$) into Corollary 3.6, we find that DOGD can attain a static regret bound of $O(\sqrt{\bar{d}T})$ for arbitrary delays, which matches the best existing result (Quanrud & Khashabi, 2015).
Remark: At first glance, Corollary 3.6 needs to set the learning rate as in (5), which may become a limitation of DOGD, because the value of $\sum_{t=1}^{T}m_t$ is generally unknown in practice. However, we note that Quanrud & Khashabi (2015) also face this issue when minimizing the static regret of OCO with arbitrary delays, and have introduced a simple solution by utilizing the standard “doubling trick” (Cesa-Bianchi et al., 1997) to adaptively adjust the learning rate. The main insight behind this solution is that the value of $\sum_{t=1}^{T}m_t$ can be calculated on the fly. The details about DOGD with the doubling trick are provided in the appendix.
3.3 Mild-OGD with Improved Dynamic Regret
One unsatisfactory point of DOGD is that its dynamic regret depends linearly on the path-length. Notice that if only a specific path-length $P_T$ is considered, from Theorem 3.4, we can tune the learning rate as
$$\eta=O\left(R\sqrt{\frac{P_T+1}{G^2\bar{d}T}}\right)$$
and obtain a dynamic regret sublinear in $P_T$. However, our goal is to minimize the dynamic regret with respect to any possible path-length $P_T$. To address this dilemma, inspired by Zhang et al. (2018a), we develop an algorithm that runs multiple instances of DOGD as experts, each with a different learning rate for a specific path-length, and combines them with a meta-algorithm.
It is worth noting that the meta-algorithm of Zhang et al. (2018a) is incompatible with the delayed setting studied here. To this end, we adopt the delayed Hedge (Korotin et al., 2020), an expert-tracking method under arbitrary delays, to design our meta-algorithm. Moreover, there exist two options for the meta-algorithm to maintain these expert-algorithms: running them over the original functions $f_t(\mathbf{x})$ or the surrogate functions $\ell_t(\mathbf{x})$, where
$$\ell_t(\mathbf{x})=\langle\nabla f_t(\mathbf{x}_t),\mathbf{x}-\mathbf{x}_t\rangle, \qquad (7)$$
and $\mathbf{x}_t$ is the decision of the meta-algorithm. In this paper, we choose the second option, because the surrogate functions allow the expert-algorithms to reuse the gradient of the meta-algorithm, and thus can avoid inconsistent delays between the meta-algorithm and the expert-algorithms. Specifically, our algorithm is named multiple delayed online gradient descent (Mild-OGD), and stated below.
Meta-algorithm
Let $\mathcal{H}=\{\eta_1,\dots,\eta_N\}$ denote a set of $N$ learning rates for experts. We first activate a set of experts $\{E^{\eta}\mid\eta\in\mathcal{H}\}$ by invoking the expert-algorithm for each learning rate $\eta\in\mathcal{H}$. Let $\eta_i$ be the $i$-th smallest learning rate in $\mathcal{H}$. Following Zhang et al. (2018a), the initial weight of each expert $E^{\eta_i}$ is set as
$$w_1^{\eta_i}=\frac{N+1}{N\,i(i+1)}.$$
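As a concrete illustration of this initialization, the snippet below builds a geometric grid of candidate learning rates and prior weights proportional to $1/(i(i+1))$, in the spirit of Zhang et al. (2018a); the grid endpoints and constants are placeholders, since the exact values prescribed by Theorem 3.7 depend on $G$, $R$, $T$, and the delays.

```python
import math

def build_experts(eta_min, eta_max):
    """Geometric grid of expert learning rates and their prior weights.

    The endpoints eta_min and eta_max are placeholders for the values
    dictated by the problem parameters (G, R, T, and the delays).
    """
    N = max(1, math.ceil(math.log2(eta_max / eta_min)) + 1)
    etas = [eta_min * 2 ** i for i in range(N)]
    weights = [(N + 1) / (N * i * (i + 1)) for i in range(1, N + 1)]
    # The prior weights sum to (N + 1)/N * (1 - 1/(N + 1)) = 1.
    return etas, weights

etas, weights = build_experts(eta_min=0.01, eta_max=1.0)
print(len(etas), sum(weights))  # the prior weights sum to 1
```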
In each round $t$, our meta-algorithm receives a decision $\mathbf{x}_t^{\eta}$ from each expert $E^{\eta}$, and then plays the weighted decision
$$\mathbf{x}_t=\sum_{\eta\in\mathcal{H}}w_t^{\eta}\mathbf{x}_t^{\eta}.$$
After that, it queries the gradient $\nabla f_t(\mathbf{x}_t)$, but only receives the delayed gradients $\{\nabla f_k(\mathbf{x}_k)\mid k\in\mathcal{F}_t\}$ due to the effect of arbitrary delays. Then, according to the delayed Hedge (Korotin et al., 2020), we update the weight of each expert as
$$w_{t+1}^{\eta}=\frac{w_t^{\eta}\,e^{-\alpha\sum_{k\in\mathcal{F}_t}\ell_k(\mathbf{x}_k^{\eta})}}{\sum_{\mu\in\mathcal{H}}w_t^{\mu}\,e^{-\alpha\sum_{k\in\mathcal{F}_t}\ell_k(\mathbf{x}_k^{\mu})}}, \qquad (8)$$
where $\alpha$ is a parameter and $\ell_k(\cdot)$ is defined in (7). This is the critical difference between our meta-algorithm and that of Zhang et al. (2018a), which updates the weight according to the vanilla Hedge (Cesa-Bianchi et al., 1997).
Finally, we send the received gradients $\{\nabla f_k(\mathbf{x}_k)\mid k\in\mathcal{F}_t\}$ to each expert so that they can update their own decisions without querying additional gradients. The detailed procedures of our meta-algorithm are summarized in Algorithm 2.
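A minimal sketch of the meta-algorithm's weight update in (8) is given below. It assumes the linear surrogate losses of (7), and the function name and data layout (`delayed_hedge_step`, a dictionary of stored expert decisions) are our own; the normalization and the parameter $\alpha$ follow a standard exponentially weighted (Hedge-style) update over whatever feedback has arrived so far.

```python
import numpy as np

def delayed_hedge_step(weights, expert_decisions, delayed_grads, alpha):
    """One meta-round of a delayed-Hedge-style weight update (a sketch of (8)).

    weights:           current normalized weights (numpy array), one per expert
    expert_decisions:  dict mapping a round k to an array of shape
                       (num_experts, dim) holding the experts' decisions at round k
    delayed_grads:     list of pairs (k, grad_k) received at the end of this
                       round, where grad_k is the gradient queried at round k
    alpha:             learning rate of the meta-algorithm
    """
    losses = np.zeros_like(weights)
    for k, grad_k in delayed_grads:
        # The surrogate loss (7) is linear, so each expert's loss is just the inner
        # product of the delayed gradient with that expert's decision at round k
        # (the constant term shared by all experts cancels after normalization).
        losses += expert_decisions[k] @ grad_k
    new_weights = weights * np.exp(-alpha * losses)
    return new_weights / new_weights.sum()
```

In the full algorithm, the same delayed gradients are then forwarded to the experts, which perform the DOGD step (9) on the surrogate losses.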
Expert-algorithm
The expert-algorithm is instantiated by running DOGD over the surrogate loss functions defined in (7), instead of the real loss functions. To emphasize this difference, we present its procedures in Algorithm 3. The input and initialization are the same as those in DOGD. At each round $t$, the expert-algorithm first submits the decision $\mathbf{x}_t^{\eta}=\mathbf{y}_\tau^{\eta}$ to the meta-algorithm, and then receives the gradients $\{\nabla f_k(\mathbf{x}_k)\mid k\in\mathcal{F}_t\}$ from the meta-algorithm. For each $k\in\mathcal{F}_t$, it updates the decision as
$$\mathbf{y}_{\tau+1}^{\eta}=\Pi_{\mathcal{K}}\left(\mathbf{y}_\tau^{\eta}-\eta\nabla f_k(\mathbf{x}_k)\right), \qquad (9)$$
and sets $\tau=\tau+1$.
We have the following theoretical guarantee for the dynamic regret of Mild-OGD.
Theorem 3.7.
Remark: Theorem 3.7 shows that Mild-OGD can attain a dynamic regret bound that is also adaptive to the upper bound of the comparator drift. It is easy to verify that this dynamic regret bound becomes $O(\sqrt{dT(P_T+1)})$ in the worst case. Moreover, it reduces to $O(\sqrt{\bar{d}T(P_T+1)})$ when the comparator drift is dominated by the other terms, which can be satisfied for all possible comparator sequences when the In-Order property holds or the number of out-of-order delays is sufficiently small. Compared with the dynamic regret of DOGD, Mild-OGD reduces the linear dependence on $P_T$ to a sublinear one. Moreover, compared with the optimal $O(\sqrt{T(P_T+1)})$ bound achieved in the non-delayed setting (Zhang et al., 2018a), Mild-OGD magnifies it by a coefficient depending on the delays. We also notice that although Theorem 3.7 requires the value of $\sum_{t=1}^{T}m_t$ to tune parameters, as previously discussed, this requirement can be removed by utilizing the doubling trick. The details about Mild-OGD with the doubling trick are provided in the appendix.
3.4 Lower Bound
Finally, we show that our Mild-OGD is optimal in the worst case by establishing the following lower bound.
Theorem 3.8.
Remark: From Theorem 3.8, if $d\geq T$, there exists an $\Omega(T)$ lower bound on the dynamic regret, which can be trivially matched by any OCO algorithm including our Algorithm 2. As a result, we mainly focus on the case $d<T$, and notice that Theorem 3.8 essentially establishes an $\Omega(\sqrt{dT(P_T+1)})$ lower bound, which matches the dynamic regret of our Mild-OGD in the worst case. To the best of our knowledge, this is the first lower bound on the dynamic regret of the delayed OCO.
4 Analysis
In this section, we prove Theorem 3.4, Lemma 3.5, Theorem 3.7, and Theorem 3.8 by introducing some lemmas. The omitted proofs can be found in the appendix.
4.1 Proof of Theorem 3.4
It is easy to verify that
where the inequality is due to the convexity of functions.
Let . Then, combining the above inequality with the fact that is a permutation of , we have
(10) |
where the first equality is due to in Algorithm 1, and the last inequality is due to Assumption 3.1.
Let . For the first term in the right side of (10), we have
(11) |
where the first inequality is due to Assumption 3.1, the second inequality is due to and , and the last inequality is due to Assumption 3.2.
Then, we proceed to bound the second term in the right side of (10). Note that before round , Algorithm 1 has received gradients, and thus has generated . Moreover, let . It is easy to verify that , and thus Algorithm 1 has also generated before round . Since the gradient is used to update in round , we have
(12) |
From (12), we have
(13) |
where the last inequality is due to Assumption 3.1.
Moreover, because of the definitions of and , we have
(14) |
where the second equality is due to the fact that $c_1,\dots,c_T$ is a permutation of $1,\dots,T$.
4.2 Proof of Lemma 3.5
Since is the -th used gradient and arrives at the end of round , it is not hard to verify that
(16) |
for any , and there are at most arrived gradients before round . Notice that gradients queried at rounds must have arrived at the end of round . Therefore, we also have , which implies that
(17) |
If and , according to (16), we have
(18) |
Otherwise, if and , according to (17), we have
(19) |
Therefore, combining (18) and (19), we have
where the equality is due to the fact that $c_1,\dots,c_T$ is a permutation of $1,\dots,T$.
Then, we complete this proof by further noticing that Assumption 3.2 and the definition of can ensure
4.3 Proof of Theorem 3.7
Let , where . From Assumption 3.2, we have
which implies that
Therefore, for any possible value of $P_T$, there must exist a learning rate $\eta_{i^\ast}\in\mathcal{H}$ such that
(20) |
where .
Then, the dynamic regret can be upper bounded as follows
(21) |
To bound the first term on the right side of (21), we introduce the following lemma.
Combining Lemma 4.1 with and , under Assumptions 3.1 and 3.2, we have
where the last inequality is due to (4).
Note that each expert is actually equivalent to running Algorithm 1 with its own learning rate over the surrogate losses, where each gradient is delayed to the end of round $k+d_k-1$. Therefore, combining Theorem 3.4 with Lemma 3.5 and the definition of $M$ in (6), under Assumptions 3.1 and 3.2, we have
where the second inequality is due to (20), and the last inequality is due to the definition of and (4).
Finally, we complete this proof by combining (21) with the above two inequalities.
4.4 Proof of Theorem 3.8
Inspired by the proof of the lower bound in the non-delayed setting (Zhang et al., 2018a), we first need to establish a lower bound on the static regret in the delayed setting. Although the seminal work of Weinberger & Ordentlich (2002) has already provided such a lower bound, it only holds in the special case that the delay $d$ divides $T$. To address this limitation, we establish a lower bound on the static regret for any $d$ and $T$, which is presented in the following lemma.
Lemma 4.2.
Let . We then divide the total rounds into blocks, where the length of the first blocks is and that of the last block is . In this way, we can define the set of rounds in the block as
Moreover, we define the feasible set of as
and construct a subset of as
Notice that the connection is derived by the fact that the comparator sequence in only changes times, and thus its path-length does not exceed .
5 Conclusion and Future Work
In this paper, we study the dynamic regret of OCO with arbitrary delays. To this end, we first propose a simple algorithm called DOGD, the dynamic regret of which can be automatically bounded by $O(\sqrt{\bar{d}T}(P_T+1))$ under mild assumptions such as the In-Order property, and by $O(\sqrt{dT}(P_T+1))$ in the worst case. Furthermore, based on DOGD, we develop an improved algorithm called Mild-OGD, which can automatically enjoy an $O(\sqrt{\bar{d}T(P_T+1)})$ dynamic regret bound under mild assumptions such as the In-Order property, and an $O(\sqrt{dT(P_T+1)})$ dynamic regret bound in the worst case. Finally, we provide a matching lower bound to show the optimality of our Mild-OGD in the worst case.
It is worth noting that there are still several directions for future research, which are discussed in the appendix due to space limitations.
Acknowledgements
This work was partially supported by the National Natural Science Foundation of China (62306275, 62122037), the Zhejiang Province High-Level Talents Special Support Program “Leading Talent of Technological Innovation of Ten-Thousands Talents Program” (No. 2022R52046), the Key Research and Development Program of Zhejiang Province (No. 2023C03192), and the Open Research Fund of the State Key Laboratory of Blockchain and Data Security, Zhejiang University. The authors would also like to thank the anonymous reviewers for their helpful comments.
Impact Statement
This paper presents work whose goal is to advance the field of Machine Learning. There are some potential societal consequences of our work, but we feel that none of them must be specifically highlighted here.
References
- Abernethy et al. (2008) Abernethy, J. D., Bartlett, P. L., Rakhlin, A., and Tewari, A. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the 21st Annual Conference on Learning Theory, pp. 415–424, 2008.
- Baby & Wang (2019) Baby, D. and Wang, Y.-X. Online forecasting of total-variation-bounded sequences. In Advances in Neural Information Processing Systems 32, pp. 11071–11081, 2019.
- Baby & Wang (2021) Baby, D. and Wang, Y.-X. Optimal dynamic regret in exp-concave online learning. In Proceedings of 34th Annual Conference on Learning Theory, pp. 359–409, 2021.
- Baby & Wang (2022) Baby, D. and Wang, Y.-X. Optimal dynamic regret in proper online learning with strongly convex losses and beyond. In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics, pp. 1805–1845, 2022.
- Baby & Wang (2023) Baby, D. and Wang, Y.-X. Second order path variationals in non-stationary online learning. In Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, pp. 9024–9075, 2023.
- Besbes et al. (2015) Besbes, O., Gur, Y., and Zeevi, A. Non-stationary stochastic optimization. Operations Research, 63(5):1227–1244, 2015.
- Cesa-Bianchi & Lugosi (2006) Cesa-Bianchi, N. and Lugosi, G. Prediction, Learning, and Games. Cambridge University Press, 2006.
- Cesa-Bianchi et al. (1997) Cesa-Bianchi, N., Freund, Y., Haussler, D., Helmbold, D. P., Schapire, R. E., and Warmuth, M. K. How to use expert advice. Journal of the ACM, 44(3):427–485, 1997.
- Cutkosky (2020) Cutkosky, A. Parameter-free, dynamic, and strongly-adaptive online learning. In Proceedings of the 37th International Conference on Machine Learning, pp. 2250–2259, 2020.
- Duchi et al. (2011) Duchi, J. C., Agarwal, A., and Wainwright, M. J. Dual averaging for distributed optimization: Convergence analysis and network scaling. IEEE Transactions on Automatic Control, 57(3):592–606, 2011.
- Flaspohler et al. (2021) Flaspohler, G. E., Orabona, F., Cohen, J., Mouatadid, S., Oprescu, M., Orenstein, P., and Mackey, L. Online learning with optimism and delay. In Proceedings of the 38th International Conference on Machine Learning, pp. 3363–3373, 2021.
- Freund & Schapire (1997) Freund, Y. and Schapire, R. E. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
- Hazan (2016) Hazan, E. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3–4):157–325, 2016.
- Hazan & Kale (2012) Hazan, E. and Kale, S. Projection-free online learning. In Proceedings of the 29th International Conference on Machine Learning, pp. 1843–1850, 2012.
- Hazan et al. (2007) Hazan, E., Agarwal, A., and Kale, S. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2):169–192, 2007.
- He et al. (2014) He, X., Pan, J., Jin, O., Xu, T., Liu, B., Xu, T., Shi, Y., Atallah, A., Herbrich, R., Bowers, S., and Candela, J. Q. Practical lessons from predicting clicks on ads at facebook. In Proceedings of the 8th International Workshop on Data Mining for Online Advertising, pp. 1–9, 2014.
- Hoeffding (1963) Hoeffding, W. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
- Jadbabaie et al. (2015) Jadbabaie, A., Rakhlin, A., Shahrampour, S., and Sridharan, K. Online optimization: Competing with dynamic comparators. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, pp. 398–406, 2015.
- Joulani et al. (2013) Joulani, P., György, A., and Szepesvári, C. Online learning under delayed feedback. In Proceedings of the 30th International Conference on Machine Learning, pp. 1453–1461, 2013.
- Joulani et al. (2016) Joulani, P., György, A., and Szepesvári, C. Delay-tolerant online convex optimization: Unified analysis and adaptive-gradient algorithms. Proceedings of the 30th AAAI Conference on Artificial Intelligence, pp. 1744–1750, 2016.
- Joulani et al. (2017) Joulani, P., György, A., and Szepesvári, C. A modular analysis of adaptive (non-)convex optimization: Optimism, composite objectives, and variational bounds. In Proceedings of the 28th International Conference on Algorithmic Learning Theory, pp. 681–720, 2017.
- Korotin et al. (2020) Korotin, A., V’yugin, V., and Burnaev, E. Adaptive hedging under delayed feedback. Neurocomputing, 397:356–368, 2020.
- McMahan & Streeter (2010) McMahan, H. B. and Streeter, M. Adaptive bound optimization for online convex optimization. In Proceedings of the 23rd Conference on Learning Theory, pp. 244–256, 2010.
- McMahan & Streeter (2014) McMahan, H. B. and Streeter, M. Delay-tolerant algorithms for asynchronous distributed online learning. In Advances in Neural Information Processing Systems 27, pp. 2915–2923, 2014.
- McMahan et al. (2013) McMahan, H. B., Holt, G., Sculley, D., Young, M., Ebner, D., Grady, J., Nie, L., Phillips, T., Davydov, E., Golovin, D., Chikkerur, S., Liu, D., Wattenberg, M., Hrafnkelsson, A. M., Boulos, T., and Kubica, J. Ad click prediction: a view from the trenches. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1222–1230, 2013.
- Mokhtari et al. (2016) Mokhtari, A., Shahrampour, S., Jadbabaie, A., and Ribeiro, A. Online optimization in dynamic environments: Improved regret rates for strongly convex problems. In Proceedings of 55th Conference on Decision and Control, pp. 7195–7201, 2016.
- Orabona (2019) Orabona, F. A modern introduction to online learning. arXiv:1912.13213, 2019.
- Quanrud & Khashabi (2015) Quanrud, K. and Khashabi, D. Online learning with adversarial delays. In Advances in Neural Information Processing Systems 28, pp. 1270–1278, 2015.
- Rakhlin & Sridharan (2013) Rakhlin, A. and Sridharan, K. Online learning with predictable sequences. In Proceedings of the 26th Annual Conference on Learning Theory, pp. 993–1019, 2013.
- Ren et al. (2020) Ren, Z., Zhou, Z., Qiu, L., Deshpande, A., and Kalagnanam, J. Delay-adaptive distributed stochastic optimization. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, pp. 5503–5510, 2020.
- Shalev-Shwartz (2011) Shalev-Shwartz, S. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
- Shalev-Shwartz & Singer (2007) Shalev-Shwartz, S. and Singer, Y. A primal-dual perspective of online learning algorithm. Machine Learning, 69(2–3):115–142, 2007.
- van Erven & Koolen (2016) van Erven, T. and Koolen, W. M. MetaGrad: Multiple learning rates in online learning. In Advances in Neural Information Processing Systems 29, pp. 3666–3674, 2016.
- van Erven et al. (2021) van Erven, T., Koolen, W. M., and van der Hoeven, D. MetaGrad: Adaptation using multiple learning rates in online learning. Journal of Machine Learning Research, 22(161):1–61, 2021.
- Wan et al. (2021) Wan, Y., Xue, B., and Zhang, L. Projection-free online learning in dynamic environments. In Proceedings of the 35th AAAI Conference on Artificial Intelligence, pp. 10067–10075, 2021.
- Wan et al. (2022a) Wan, Y., Tu, W.-W., and Zhang, L. Online strongly convex optimization with unknown delays. Machine Learning, 111(3):871–893, 2022a.
- Wan et al. (2022b) Wan, Y., Tu, W.-W., and Zhang, L. Online Frank-Wolfe with arbitrary delays. In Advances in Neural Information Processing Systems 35, 2022b.
- Wan et al. (2023a) Wan, Y., Wang, Y., Yao, C., Tu, W.-W., and Zhang, L. Projection-free online learning with arbitrary delays. arXiv:2204.04964v2, 2023a.
- Wan et al. (2023b) Wan, Y., Zhang, L., and Song, M. Improved dynamic regret for online Frank-Wolfe. In Proceedings of the 36th Annual Conference on Learning Theory, 2023b.
- Wang et al. (2021) Wang, J., Liang, B., Dong, M., Boudreau, G., and Abou-Zeid, H. Delay-tolerant constrained OCO with application to network resource allocation. In Proceedings of the 2021 IEEE Conference on Computer Communications, pp. 1–10, 2021.
- Wang et al. (2023) Wang, J., Dong, M., Liang, B., Boudreau, G., and Abou-Zeid, H. Delay-tolerant OCO with long-term constraints: Algorithm and its application to network resource allocation. IEEE/ACM Transactions on Networking, 31(1):147–163, 2023.
- Wang et al. (2024) Wang, Y., Yang, W., Jiang, W., Lu, S., Wang, B., Tang, H., Wan, Y., and Zhang, L. Non-stationary projection-free online learning with dynamic and adaptive regret guarantees. In Proceedings of the 38th AAAI Conference on Artificial Intelligence, pp. 15671–15679, 2024.
- Weinberger & Ordentlich (2002) Weinberger, M. J. and Ordentlich, E. On delayed prediction of individual sequences. IEEE Transactions on Information Theory, 48(7):1959–1976, 2002.
- Yang et al. (2016) Yang, T., Zhang, L., Jin, R., and Yi, J. Tracking slowly moving clairvoyant: Optimal dynamic regret of online learning with true and noisy gradient. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
- Zhang et al. (2017) Zhang, L., Yang, T., Yi, J., Jin, R., and Zhou, Z.-H. Improved dynamic regret for non-degenerate functions. In Advances in Neural Information Processing Systems 30, pp. 732–741, 2017.
- Zhang et al. (2018a) Zhang, L., Lu, S., and Zhou, Z.-H. Adaptive online learning in dynamic environments. In Advances in Neural Information Processing Systems 31, pp. 1323–1333, 2018a.
- Zhang et al. (2018b) Zhang, L., Yang, T., Jin, R., and Zhou, Z.-H. Dynamic regret of strongly adaptive methods. In Proceedings of the 35th International Conference on Machine Learning, pp. 5877–5886, 2018b.
- Zhao & Zhang (2021) Zhao, P. and Zhang, L. Improved analysis for dynamic regret of strongly convex and smooth functions. In Proceedings of the 3rd Conference on Learning for Dynamics and Control, pp. 48–59, 2021.
- Zhao et al. (2020) Zhao, P., Zhang, Y.-J., Zhang, L., and Zhou, Z.-H. Dynamic regret of convex and smooth functions. In Advances in Neural Information Processing Systems 33, pp. 12510–12520, 2020.
- Zhao et al. (2022) Zhao, P., Xie, Y.-F., Zhang, L., and Zhou, Z.-H. Efficient methods for non-stationary online learning. In Advances in Neural Information Processing Systems 35, pp. 11573–11585, 2022.
- Zhou et al. (2018) Zhou, Z., Mertikopoulos, P., Bambos, N., Glynn, P., Ye, Y., Li, L.-J., and Fei-Fei, L. Distributed asynchronous optimization with unbounded delays: How slow can you go? In Proceedings of the 35th International Conference on Machine Learning, pp. 5970–5979, 2018.
- Zhou et al. (2022) Zhou, Z., Mertikopoulos, P., Bambos, N., Glynn, P. W., and Ye, Y. Distributed stochastic optimization with large delays. Mathematics of Operations Research, 47(3):2082–2111, 2022.
- Zinkevich (2003) Zinkevich, M. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning, pp. 928–936, 2003.
Appendix A Detailed Discussions on Future Work
First, we notice that the $O(\sqrt{\bar{d}T})$ static regret bound can be achieved under arbitrary delays (Quanrud & Khashabi, 2015). Thus, it is natural to ask whether the $O(\sqrt{\bar{d}T(P_T+1)})$ dynamic regret bound can also be achieved without additional assumptions. However, from Theorem 3.4, compared with the static regret, it is more challenging to minimize the dynamic regret in the delayed setting, because delays will further cause a comparator drift, i.e., the last term on the right side of (3). It seems highly non-trivial to reduce the comparator drift without additional assumptions, and we leave this question as future work.
Second, we have utilized the doubling trick to avoid tuning the learning rate with the unknown cumulative delay. One potential limitation of this technique is that it needs to repeatedly restart itself, while forgetting all the preceding information. For minimizing the static regret with arbitrary delays, Joulani et al. (2016) have addressed this limitation by continuously adjusting the learning rate according to the norm of received gradients. Thus, it is also appealing to extend this idea for minimizing the dynamic regret with arbitrary delays.
Third, our proposed algorithms require the time-stamps of the delayed feedback. It is interesting to investigate how to minimize the dynamic regret with anonymous and arbitrary delays. A potentially useful property is that under the In-Order assumption, the arrival order of the delayed gradients already ensures the ascending order of their time-stamps. Since our DOGD in Algorithm 1 only utilizes the time-stamps to sort the elements in $\mathcal{F}_t$, it can actually be implemented by simply performing the gradient descent step in (1) whenever a gradient arrives, even without the time-stamps. However, in our Mild-OGD, the time-stamps are further utilized to compute the delayed surrogate losses of the experts, i.e., $\ell_k(\mathbf{x}_k^{\eta})$ in (8), which cannot be discarded.
Appendix B Proof of Lemma 4.1
We first define
Moreover, we define
According to Algorithm 2, for any , it is easy to verify that
Combining with the above definitions, we have
where and .
Similarly, for any , we define
In this way, for any , we also have , where
Moreover, we define and
(22) |
Then, we will bound the distance between and based on the following lemma.
Lemma B.1.
(Lemma 5 in Duchi et al. (2011)) Let . If is -strongly convex with respect to a norm , it holds that
for any and , where is the dual norm of .
Since is -strongly convex with respect to , by applying Lemma B.1, for any , we have
Let . Note that actually records the time-stamp of gradients that are queried, but still not arrive at the end of round . Then, for , it is not hard to verify that
(23) |
where the last inequality is due to the definition of and the fact that Assumptions 3.1 and 3.2 ensure
(24) |
for any and .
The above inequality shows that is close to . In the following, we first focus on the analysis of , and then combine with the distance between and . To this end, we notice that
(25) |
Next, for any , we have
(26) |
Combining and , we have
(27) |
To proceed, we introduce Hoeffding’s inequality (Hoeffding, 1963).
Lemma B.2.
Let be a random variable with . Then, for any , it holds that
Appendix C Proof of Lemma 4.2
Let . We first divide the total rounds into blocks, where the length of the first blocks is and that of the last block is . In this way, we can define the set of rounds in the block as
For any and , we construct the delay as
which satisfies . These delays ensure that the information of any function in each block is delayed to the end of the block, which is critical for us to construct loss functions that maximize the impact of delays on the static regret.
Note that to establish the lower bound of the static regret in the non-delayed setting, one can utilize a randomized strategy to select loss functions for each round (Abernethy et al., 2008). Here, to maximize the impact of delays, we only select one loss function for all rounds in the same block , i.e., for any . Specifically, we set
where the -th coordinate of is with probability for any and will be denoted as . It is not hard to verify that satisfies Assumption 3.1.
From the above definitions, we have
where the third equality can be derived from the fact that any decision in the block is made before receiving the information of the loss function selected for that block, and thus is independent of it.
Since a linear function is minimized at the vertices of the cube, we further have
(30) |
where the first inequality is due to the Khintchine inequality and the second inequality is due to the Cauchy-Schwarz inequality.
The expected lower bound in (30) implies that for any OCO algorithm and any positive integer , there exists a particular choice of such that
Appendix D DOGD with the Doubling Trick
As discussed after Corollary 3.6, our DOGD needs a learning rate depending on the following value
However, it may not be available beforehand. Fortunately, the doubling trick (Cesa-Bianchi & Lugosi, 2006) provides a way to adaptively estimate this value. Specifically, it divides the total rounds into several epochs, and runs a new instance of DOGD per epoch. Let $s_v$ and $e_v$ respectively denote the start round and the end round of the $v$-th epoch. In this way, to tune the learning rate for the $v$-th epoch, we only need to know the following value
where .
According to the doubling trick, we can estimate this value to be at the start round of the -th epoch. Then, for any , we first judge whether the estimate is still valid, i.e.,
where the left side can be calculated at the beginning of round . If the answer is positive, the round is still assigned to the -th epoch, and the instance of DOGD keeps running. Otherwise, the round is set as the start round of the -th epoch, and a new instance of DOGD is activated. Notice that in the start round of the -th epoch, the new estimate must be valid, since and
Moreover, it is natural to set . Then, the detailed procedures of DOGD with the doubling trick are summarized in Algorithm 4.
Remark: First, in Algorithm 4, the learning rate is set by replacing in the learning rate required by Corollary 3.6 with . Second, in each epoch , we do not need to utilize gradients queried before this epoch. For this reason, in Algorithm 4, we only receive , instead of .
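To make the epoch logic concrete, the sketch below restarts DOGD whenever a running estimate of the unknown delay-dependent quantity is exceeded. The wrapper interface (`make_dogd`, `delay_quantity_so_far`), the threshold rule, and the implied learning-rate formula are placeholders under our assumptions, not a reproduction of Algorithm 4's exact steps or constants.

```python
def run_with_doubling_trick(T, make_dogd, delay_quantity_so_far):
    """Doubling-trick wrapper (a sketch only): restart a fresh DOGD instance
    whenever the running estimate of the unknown delay-dependent quantity is
    exceeded, doubling the estimate at every restart.

    make_dogd(estimate):       factory returning a new DOGD instance whose
                               learning rate is tuned to the given estimate
    delay_quantity_so_far(t):  the part of the unknown quantity (e.g., the
                               cumulative number of still-unreceived gradients)
                               that can be computed at the beginning of round t
    """
    estimate, epoch_start = 1, 1
    learner = make_dogd(estimate)
    for t in range(1, T + 1):
        # If the quantity accumulated within the current epoch exceeds the
        # estimate, start a new epoch with a doubled estimate and a fresh learner.
        if delay_quantity_so_far(t) - delay_quantity_so_far(epoch_start) > estimate:
            estimate *= 2
            epoch_start = t
            learner = make_dogd(estimate)
        x_t = learner.play()  # decision played at round t
        # ... query the gradient of f_t at x_t, and later feed the delayed
        # gradients queried within this epoch back via learner.update(...)
    return learner
```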
We have the following theorem, which can recover the dynamic regret bound in Corollary 3.6 up to a constant factor.
Theorem D.1.
Proof.
For any and , we first notice that the value of counts the number of gradients that have been queried over interval , but still not arrive at the end of round . Moreover, the gradient will only be counted as an unreceived gradient in rounds. Therefore, for any , it is easy to verify that
For brevity, let denote the final of Algorithm 4, and let . It is easy to verify that
(31) |
Then, let . We notice that for , Algorithm 4 actually starts or restarts Algorithm 1 with the learning rate of at round , which ends at round . Therefore, combining Theorem 3.4 with Lemma 3.5, under Assumptions 3.1 and 3.2, we have
(32) |
where
(33) |
Moreover, we notice that Algorithm 4 also ensures that
(34) |
By substituting the above inequality into (32), we have
(35) |
Then, because of (31), we have
(36) |
Moreover, it is not hard to verify that
which implies that
(37) |
Finally, we complete this proof by substituting (37) and into (36). ∎
Appendix E Mild-OGD with the Doubling Trick
Similar to DOGD, Mild-OGD requires the value of for setting
(38) |
where $\alpha$ is the learning rate for updating the weights, and $\eta_i$ is the learning rate for the $i$-th expert. To address this limitation, we can utilize the doubling trick as described in the previous section. The only change is to replace DOGD with Mild-OGD. The detailed procedures of Mild-OGD with the doubling trick are outlined in Algorithms 5 and 6.
Remark: We would like to emphasize that since multiple instances of the expert-algorithm run over the surrogate losses defined by the meta-algorithm, these instances and the meta-algorithm will start a new epoch synchronously. Moreover, as shown in step 6 of Algorithm 5, in the start of each epoch, we need to reinitialize the weight of each expert . As shown in step , in each epoch , we update the weight by using the learning rate , which replaces in (38) with . Additionally, to facilitate presentation, in step 2 of Algorithm 5, each in only contains the constant part that does not depend on the value of . Meanwhile, according to steps 1 and 10 of Algorithm 6, the -th expert will receive from the meta-algorithm, and combine it with the estimation of to compute the learning rate.
Furthermore, we have the following theorem, which can recover the dynamic regret bound in Theorem 3.7 up to a constant factor.
Theorem E.1.
Proof.
Following the proof of Theorem D.1, we use to denote the final of Algorithms 5 and 6 and define . Moreover, let . It is easy to verify that (31) also holds.
Then, we consider the dynamic regret of Algorithm 5 over the interval for each . Let
From Assumption 3.2, we have
which implies that
Therefore, for any possible value of , there must exist a constant such that
(39) |
where
Moreover, we notice that each expert over the interval actually runs Algorithm 1 with the learning rate to handle the surrogate losses , where each gradient is delayed to the end of round for .
Therefore, by combining Theorem 3.4 with Lemma 3.5, under Assumptions 3.1 and 3.2, we have
(40) |
where is defined in (33), the second inequality is due to the fact that Algorithm 6 also ensures (34), and the third inequality is due to (39) and the definition of .
Moreover, it is also easy to verify that Algorithm 5 actually starts or restarts Algorithm 2 with the learning rate of at round , which ends at round . Then, by using Lemma 4.1 with , under Assumptions 3.1 and 3.2, we have
(41) |
where the second inequality is due to , the definition of , and the fact that Algorithm 5 also ensures (34).
By combining (40) and (41), it is not hard to verify that
(42) |
Then, we have
where the first inequality is due to (42), and the last inequality is due to (31).
Finally, by substituting (37) and into the above inequality, we complete this proof. ∎