Exploiting higher-order derivatives in convex optimization methods
Keywords— High-order methods, tensor methods, convex optimization, inexact methods, stochastic optimization, distributed optimization
1 Introduction
It has been well known since the works of I. Newton [64] and L. Kantorovich [45] that the second-order derivative of the objective function can be used in numerical algorithms for solving optimization problems and nonlinear equations, and that such algorithms have better convergence guarantees. Higher-order derivatives can also be efficiently used for solving nonlinear equations, as was shown by P. Chebyshev [17, 24]. For optimization problems, the basic idea, known at least since the 1970's [39] and developed in later works [69, 68, 9], is to approximate, at each iteration of an algorithm, the objective by its Taylor polynomial at the current iterate, optionally add a regularization, and minimize this Taylor approximation to obtain the next iterate. In this way, the first Taylor polynomial leads to first-order methods that are very well understood, see, e.g., [54, 58], with the optimal methods existing since the 1980's [54, 56]. Second-order methods use the second Taylor polynomial. The most famous representative is Newton's method, which minimizes at each iteration the second-order quadratic approximation of the objective function. If the second derivative is Lipschitz continuous, the objective is strongly convex, and the starting point is sufficiently close to the solution, then this algorithm has very fast quadratic convergence and requires $O(\log\log(1/\varepsilon))$ iterations to reach an $\varepsilon$-solution in terms of the objective value [45]. This bound is optimal [54] even for univariate optimization problems and even if derivatives of any order may be used by the algorithm. Different modifications of the basic algorithm, such as the Damped Newton's method or the Levenberg–Marquardt algorithm, achieve global convergence, but have a slower, i.e., linear, convergence [67, 65]. Second-order methods also played a central role in the development by Yu. Nesterov and A. Nemirovskii of interior-point methods that have a global linear convergence rate and allow proving polynomial solvability of a large class of convex optimization problems [62]. This theory was based on the analysis of the Damped Newton's method for the class of self-concordant functions, which, in particular, includes functions without Lipschitz derivatives.
An important idea that eventually led to the current developments of tensor methods was the cubic regularization of Newton's method, which dates back to the work of A. Griewank [37]. The global performance of the Cubic regularized Newton's method was analysed in 2006 by Yu. Nesterov and B. Polyak in [63] for a wide range of settings under the main assumption that the second derivative is Lipschitz continuous. The main idea is based on the fact that, for functions with Lipschitz second derivative, the objective's model consisting of the second Taylor polynomial regularized by the cube of the norm with a sufficiently large regularization parameter is an upper bound for the objective function. This model is minimized in each iteration of the algorithm, which, in particular, allowed the authors to obtain the global convergence rate $O(k^{-2})$, where $k$ is the iteration counter, for minimizing convex functions with Lipschitz second derivative. Moreover, the authors showed that the complexity of minimizing the cubically regularized quadratic model of the objective is of the same order as the complexity of the standard Newton's method step. The Cubic regularized Newton's method was further accelerated in [55] to reach the convergence rate $O(k^{-3})$ for convex problems with Lipschitz second derivative. In 2012, R. Monteiro and B. Svaiter proposed in [53] a very promising accelerated proximal envelope (see also the work [16]) that allowed them to develop an even faster second-order method with the convergence rate $O(k^{-7/2})$ for minimizing convex objectives with Lipschitz second derivative.
Another important step in the development of tensor methods was made by M. Baes in 2009 [8], where the Newton–Nesterov–Polyak algorithm was generalized to the setting of convex minimization with the objective having Lipschitz $p$-th derivative for $p \ge 2$. The idea was to construct a model that upper bounds the objective function by taking the $p$-th Taylor polynomial of the objective and regularizing it by the $(p+1)$-th power of the Euclidean norm with a sufficiently large regularization parameter. The author showed the global convergence rate $O(k^{-p})$ for methods that minimize such a model in each iteration and proposed an accelerated version with the rate $O(k^{-(p+1)})$, all under the assumption of Lipschitz $p$-th derivative. As in the world of first-order methods, where the interest in optimal methods was one of the central driving forces in 2000–2020, a natural question arose: what are the lower bounds for second- and higher-order methods and which algorithms are optimal? The results in [7, 5, 29] gave the answer that the lower bound is $\Omega(k^{-(3p+1)/2})$, revealing a gap, for $p \ge 2$, between the rates of existing methods and the lower bounds. One of the drawbacks of tensor methods at this stage was that each iteration required minimizing a higher-order polynomial that may not be convex, leading to a high cost of each iteration and impracticality of such methods.
In 2018, a breakthrough was made by Yu. Nesterov [59] in understanding the significance of higher-order methods in modern convex optimization when the $p$-th derivative satisfies the Lipschitz condition. The breakthrough involved increasing the regularization parameter in M. Baes's approach for the regularized $p$-th Taylor polynomial of the objective. It was demonstrated that this modified Taylor model is convex. Additionally, it was shown that the convex Taylor model can be efficiently minimized for $p = 3$, with complexity similar to that of the standard Newton's method step. Furthermore, it was proven that minimizing the convex Taylor model does not require the calculation of the entire tensor of third derivatives. Instead, directional third derivatives can be calculated, for example, through automatic differentiation. Lastly, a lower bound of $\Omega(k^{-(3p+1)/2})$ was established for convex problems with Lipschitz $p$-th derivative. For each $p$, worst-case functions within the class of convex functions with Lipschitz $p$-th derivatives were constructed, and a research program was outlined to develop optimal tensor methods [58]. It followed from the obtained lower bounds for convex optimization problems that the Monteiro–Svaiter second-order method is optimal up to a logarithmic factor, the latter appearing because of a complicated line-search procedure used in each iteration. Based on the Monteiro–Svaiter method and the convex Taylor model of the objective, in [30, 26, 27] and independently in [11, 42], the authors proposed near-optimal tensor methods for convex problems with Lipschitz $p$-th derivatives, which have the rate $O(k^{-(3p+1)/2})$ up to logarithmic factors. In Section 3, the Monteiro–Svaiter approach with Nesterov's implementable tensor steps is described. As was mentioned in [58, 21], the auxiliary line-search procedures that lead to logarithmic factors in the convergence rates of the near-optimal tensor methods may also significantly slow down the convergence in practice. In 2022, two independent works [48] and [14] proposed optimal tensor methods with convergence rates without additional logarithmic factors. At the same time, such methods were also proposed for monotone variational inequalities [1, 51] under higher-order regularity of the operator.
The developments in the theory of tensor methods also had an important impact on the theory of second-order methods. In 2020, Yu. Nesterov proposed an implementation of a third-order method using an inexact third derivative constructed via finite differences of gradients [61]. In other words, it turned out to be possible to implement the third-order method using only first and second derivatives, which led to a <<superfast>> second-order method with the convergence rate $O(k^{-4})$ (up to logarithmic factors), violating the lower bound $\Omega(k^{-7/2})$ for second-order methods thanks to the additional assumption that the third derivative is Lipschitz continuous. The key observation was that, in the class of functions with Lipschitz second derivative, the worst-case function [59] for second-order methods does not have a Lipschitz third derivative. These results were further improved in [43, 60], where second-order algorithms with the rate $O(k^{-5})$ (up to logarithmic factors), corresponding to optimal third-order methods, were proposed for minimizing convex functions with Lipschitz third derivative. These developments are in sharp contrast to first-order methods, for which the worst-case function is quadratic [57] and has Lipschitz derivatives of any order, thus preventing improvements under additional assumptions. This line of research was continued in [6], where such reductions were shown to be possible for higher-order methods. The ideas used in superfast methods are described in Section 4.
Tensor methods remain a very active area of research with many extensions of the existing methods. In particular, there are adaptive variants for the setting when the Lipschitz constant is not known [41, 36], universal generalizations for the setting when the $p$-th derivative is Hölder continuous [33, 70, 19], versions with inexact minimization of the Taylor model [21, 35], and tensor methods for finding approximate stationary points of convex functions [34, 22]. The ideas described above, especially the Monteiro–Svaiter accelerated proximal point method, turned out to be productive also in areas not directly related to tensor methods, see non-trivial examples in [10, 12, 15]. Modern second-order and third-order methods also demonstrate their efficiency in Data Science and Machine Learning applications [18, 2, 23, 13]. More details on such applications are given in Section 5.
2 Notation and problem statement
The following optimization problem is considered
$\min_{x \in \mathbb{R}^n} \left\{ F(x) := f(x) + g(x) \right\}, \qquad (1)$
where $f$ and $g$ are convex functions. Denote by $x_*$ a solution of problem (1). If the solution is not unique, let $x_*$ be the solution that is closest to the starting point $x^0$ in the 2-norm.
Denote by $\|\cdot\|_2$ the Euclidean 2-norm in $\mathbb{R}^n$: $\|x\|_2 = \sqrt{\langle x, x \rangle}$.
Assume that $f$ has Lipschitz derivatives of order $p$ ($p \ge 1$):
$\|\nabla^p f(x) - \nabla^p f(y)\|_2 \le L_p \|x - y\|_2. \qquad (2)$
Here and below (see, e.g., (5)) it is assumed that $x$ and $y$ belong to a ball in the 2-norm centred at $x_*$ with a sufficiently large radius [60]. Condition (2) implies that the $p$-th Taylor polynomial of $f$ at a point $x$,
$\Omega_p(f, x; y) = f(x) + \sum_{k=1}^{p} \frac{1}{k!} \nabla^k f(x)[y - x]^k, \qquad (3)$
approximates $f$ with the accuracy
$\left| f(y) - \Omega_p(f, x; y) \right| \le \frac{L_p}{(p+1)!} \|y - x\|_2^{p+1}. \qquad (4)$
$F$ satisfies the $r$-growth condition ($r \ge 2$) (see [66]) with constant $\mu_r > 0$ iff
$F(x) - F(x_*) \ge \frac{\mu_r}{r} \|x - x_*\|_2^r. \qquad (5)$
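For example (a standard fact rather than a result specific to [66]), a $\mu$-strongly convex $F$ satisfies (5) with $r = 2$ and $\mu_2 = \mu$: since $0 \in \partial F(x_*)$ for the unconstrained problem (1), strong convexity gives

$F(x) \ge F(x_*) + \langle 0, x - x_* \rangle + \frac{\mu}{2}\|x - x_*\|_2^2 = F(x_*) + \frac{\mu}{2}\|x - x_*\|_2^2.$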
3 Optimal tensor methods
Algorithm 1: MSN(·, ·, ·, ·, ·, ·)
$\min_{x \in \mathbb{R}^n} \left\{ \Omega_p(f, \tilde{x}; x) + \frac{M}{(p+1)!} \|x - \tilde{x}\|_2^{p+1} + g(x) \right\}, \qquad (6)$
where $\tilde{x}$ is a point computed by the algorithm at the current iteration.
Theorem 1.
[28] Let $y^N$ be the output point of Algorithm 1 MSN(·, ·, ·, ·, ·, ·) after $N$ iterations, under an appropriate choice of the parameters. Then
$F(y^N) - F(x_*) = O\left( \frac{L_p R^{p+1}}{N^{(3p+1)/2}} \right), \qquad (7)$
where $R = \|x^0 - x_*\|_2$ is the distance from the starting point to the solution and the constant in $O(\cdot)$ depends only on $p$.
Moreover, to find an $\varepsilon$-solution of (1), it is required to solve the auxiliary problem (6), with proper accuracy, $\tilde{O}\left( \left( \frac{L_p R^{p+1}}{\varepsilon} \right)^{\frac{2}{3p+1}} \right)$ times, where $\tilde{O}(\cdot)$ hides a logarithmic factor coming from the auxiliary line search.
Note that Theorem 1 is true also when $M \ge L_p$ (independently of the convexity of (6)); this can be derived from (4). The condition $M \ge p L_p$ was used only because it guarantees the convexity of the auxiliary problem (6), see [59]. Under these conditions, for small $p$ there exist efficient ways to solve the auxiliary problem (6), see [59]. For $p = 1$ the solution of (6) is given by an explicit formula (a proximal-gradient-type step). For $p = 2, 3$ the complexity of (6) is almost the same (up to a logarithmic factor) as the complexity of a Newton's method iteration [59], see also Section 4. It is important that there is no need to solve (6) accurately. It is sufficient to find $\tilde{y}$ such that the following inequality is satisfied:
(8)
Such a practical modification slows down the theoretical convergence rate only by a constant factor in the right-hand side of (7) [43, 35, 60].
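For intuition about the auxiliary problem for $p = 2$ and $g \equiv 0$, the following minimal numpy sketch (illustrative code, not taken from the cited papers) minimizes the cubically regularized quadratic model with the scaling $M/6$ for the cubic term: the first-order optimality condition reduces to a one-dimensional equation in $r = \|h\|_2$, which is solved by bisection. The toy objective and the value of $M$ are arbitrary.

```python
import numpy as np

def cubic_newton_step(grad, hess, M, tol=1e-10, max_iter=200):
    """Minimize m(h) = <grad, h> + 0.5*<hess h, h> + (M/6)*||h||^3
    for a positive semidefinite hess (convex f), i.e. the p = 2 case of (6)
    with g = 0.  The optimality condition (hess + (M/2)*||h||*I) h = -grad
    is solved by bisection over r = ||h||."""
    n = grad.shape[0]
    eye = np.eye(n)
    h_of = lambda r: np.linalg.solve(hess + 0.5 * M * r * eye, -grad)

    lo, hi = 0.0, 1.0
    while np.linalg.norm(h_of(hi)) > hi:     # bracket the root of ||h(r)|| - r
        hi *= 2.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.linalg.norm(h_of(mid)) > mid else (lo, mid)
        if hi - lo < tol:
            break
    return h_of(hi)

# Toy usage: f(x) = 0.25*||x||^4 has a Lipschitz Hessian on any bounded ball.
x = np.array([1.0, -2.0, 0.5])
g = np.dot(x, x) * x                                  # gradient of f at x
H = np.dot(x, x) * np.eye(3) + 2.0 * np.outer(x, x)   # Hessian of f at x
print(x + cubic_newton_step(g, H, M=6.0))
```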
In May 2022, D. Kovalev et al. [48] proposed an explicit policy for choosing the pair of parameters in Algorithm 1 and modified the stopping criterion (8) according to [40]. This allows the auxiliary problem (6) to be solved, on average, only about two times per iteration, rather than the logarithmically larger number of times required by the line search before. The final complexity bound on the number of $p$-th order oracle calls in [48], $O\left(\left(\frac{L_p R^{p+1}}{\varepsilon}\right)^{\frac{2}{3p+1}}\right)$, matched the lower oracle complexity bound from [59] (see also [25]) obtained for the worst-case function constructed there.
In the concurrent and independent paper [14], the authors also proposed a way to remove the additional logarithmic factors. Note that the approach of [14] does not require any a priori knowledge of the smoothness parameters (and works under a Hölder continuity assumption instead of the Lipschitz one).
If, additionally, $F$ satisfies the $r$-growth condition (5), then an optimal method can be developed based on a restart procedure [63, 27], see Algorithm 2.
(9)
Theorem 2.
Everything that was noted after Theorem 1 also applies in this case.
In particular, when $r = 2$ and $\mu_2 = \mu$ (i.e., $F$ is $\mu$-strongly convex), the MSN algorithm in Theorem 2 can be replaced by Kovalev's variant of MSN [48] to obtain the
$O\left( \left( \frac{L_p R^{p-1}}{\mu} \right)^{\frac{2}{3p+1}} + \log\log\frac{1}{\varepsilon} \right) \qquad (11)$
$p$-th order oracle complexity bound. This upper bound corresponds to the lower bound for the $\mu$-strongly convex case from [46] and improves (10) by a logarithmic factor. The dependence on $\varepsilon$ is $O\left(\log\log\frac{1}{\varepsilon}\right)$, as it should be locally for tensor methods [26, 20] (see also [54, 63, 5, 19]). Note that (11) describes two regimes: the first term (complexity independent of $\varepsilon$) describes the complexity of reaching the region of quadratic convergence, and the second term (complexity $O(\log\log\frac{1}{\varepsilon})$) describes the Newton-type local convergence.
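To make the restart idea concrete, the following schematic Python snippet computes per-stage iteration counts of a restart scheme, assuming a per-stage guarantee of the form (7) with an illustrative constant c_p; the exact constants and stopping rules of Algorithm 2 in [27, 28] differ.

```python
import math

def restart_schedule(L_p, mu_r, R0, p, r, eps, c_p=2.0):
    # Assumes an accelerated p-th order method with the guarantee (cf. (7))
    #   F(x_N) - F_* <= c_p * L_p * R^{p+1} / N^{(3p+1)/2},  R = ||x^0 - x_*||_2.
    # Under the r-growth condition (5), requesting accuracy (mu_r/r)*(R/2)^r
    # from a stage halves the distance to the solution, so R is divided by 2
    # after every restart.  Returns the per-stage iteration counts needed to
    # reach F - F_* <= eps.
    schedule, R, gap = [], R0, float("inf")
    while gap > eps:
        target = max(eps, (mu_r / r) * (R / 2.0) ** r)  # accuracy requested from this stage
        N_k = math.ceil((c_p * L_p * R ** (p + 1) / target) ** (2.0 / (3 * p + 1)))
        schedule.append(N_k)
        gap, R = target, R / 2.0
    return schedule

stages = restart_schedule(L_p=1.0, mu_r=0.1, R0=10.0, p=3, r=2, eps=1e-6)
print(stages, sum(stages))
```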
Note that, due to the presence of the composite term $g$, the MSN algorithm described above and its variations can be used for splitting oracle complexities [44, 47]. Namely, assume that $f$ and $g$ have Lipschitz $p$-th order derivatives with different Lipschitz constants. Based on the MSN algorithm, one can propose a general framework to accelerate tensor methods by splitting computational complexities. As a result, one can get near-optimal oracle complexity for each function in the sum separately for any order of the derivatives, including first-order methods. To be more precise, if the near-optimal complexity to minimize $f$ alone is $N_f$ oracle calls and to minimize $g$ alone is $N_g$ oracle calls, then the MSN-based sliding algorithm [44, 47] requires no more than $\tilde{O}(N_f)$ oracle calls for $f$ and $\tilde{O}(N_g)$ oracle calls for $g$ to minimize $F = f + g$.
Note also that for $p = 1$ and $f \equiv 0$ the MSN algorithm turns into the Monteiro–Svaiter accelerated proximal envelope. In a series of recent papers [40, 16, 47, 49, 48] it was shown that this type of proximal (Catalyst-type [50]) envelope can be made free of logarithmic factors due to well-developed stopping criteria for the inner (auxiliary) problem. This opens up various prospects for improving existing Catalyst-based algorithms, see, e.g., [72, 71].
4 Superfast acceleration and the structure of the auxiliary problem
In this section, the computationally efficient solution of the tensor subproblem (6) for $p = 3$ is discussed. With the described procedure, it is possible to implement third-order methods without computing and storing the third-order derivative. At the beginning of the section, it is shown that this subproblem is convex. Then the Bregman-distance gradient method is used as a subsolver for the tensor step (6).
For this section, it is assumed that $p = 3$ and $g \equiv 0$. Then the third-order Taylor polynomial of $f$ at a point $x$ is
$\Omega_3(f, x; y) = f(x) + \langle \nabla f(x), y - x \rangle + \frac{1}{2} \nabla^2 f(x)[y - x]^2 + \frac{1}{6} \nabla^3 f(x)[y - x]^3. \qquad (12)$
The regularized third-order model is
$\Omega_{3,M}(x; y) = \Omega_3(f, x; y) + \frac{M}{24} \|y - x\|_2^4. \qquad (13)$
The basic step for every tensor method is formulated as
$T_M(x) \in \operatorname{Argmin}_{y \in \mathbb{R}^n} \Omega_{3,M}(x; y). \qquad (14)$
This step is a major part of almost every third-order method. Next, it is described how to efficiently solve this subproblem without computing the third-order derivative, using only first- and second-order information.
First, it is clarified that the function (13) is convex for $M \ge 3 L_3$. Note that for a small regularization parameter the model (13) is possibly non-convex because of the third-order derivative term. This means that minimizing the non-convex subproblem may be harder than minimizing the original convex problem. In 2018, Yu. Nesterov made a breakthrough in [59]: it was shown that if $M \ge 3 L_3$ then the model (13) is convex, and hence higher-order methods are implementable.
To give some intuition, a sketch of the convexity proof is presented. The following inequality can be derived from the Lipschitz condition (2) (with $p = 3$) and the convexity of the function $f$:
$\nabla^2_y \Omega_3(f, x; y) \succeq \nabla^2 f(y) - \frac{L_3}{2}\|y - x\|_2^2 I \succeq -\frac{L_3}{2}\|y - x\|_2^2 I.$
Now, to finish the proof, one needs to choose $M$ such that
$\nabla^2_y \Omega_{3,M}(x; y) = \nabla^2_y \Omega_3(f, x; y) + \frac{M}{24} \nabla^2_y \|y - x\|_2^4 \succeq 0.$
This leads to the crucial observation that was missed before:
$\nabla^2_y \|y - x\|_2^4 = 4\|y - x\|_2^2 I + 8 (y - x)(y - x)^{\top} \succeq 4\|y - x\|_2^2 I.$
Since $(y - x)(y - x)^{\top}$ is a rank-one (singular) matrix, only this term is dropped in the last inequality, while the remaining term grows as $\|y - x\|_2^2$ and compensates the negative lower bound above. Finally, if $M \ge 3 L_3$, then
$\nabla^2_y \Omega_{3,M}(x; y) \succeq \left( \frac{M}{6} - \frac{L_3}{2} \right) \|y - x\|_2^2 I \succeq 0,$
so the function (13) is convex. For more details one can check Theorem 1 in [59].
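The threshold $M \ge 3L_3$ can also be observed numerically. The following toy numpy check (illustrative, not code from [59]) uses $f(x) = \frac{1}{12}\sum_i x_i^4$, whose third derivative is Lipschitz with $L_3 = 2$, and evaluates the minimal eigenvalue of the Hessian (in $h$) of the model (13) at random points:

```python
import numpy as np

rng = np.random.default_rng(0)
L3 = 2.0

def model_hessian(x, h, M):
    """Hessian in h of Omega_3(f, x; x+h) + (M/24)*||h||^4 for f(x) = sum(x_i^4)/12."""
    n = len(x)
    hess_f = np.diag(x ** 2)                   # nabla^2 f(x)
    d3_h   = np.diag(2.0 * x * h)              # nabla^3 f(x)[h]
    reg    = (M / 24.0) * (4.0 * np.dot(h, h) * np.eye(n) + 8.0 * np.outer(h, h))
    return hess_f + d3_h + reg

worst = {M: min(np.linalg.eigvalsh(model_hessian(rng.standard_normal(5),
                                                 5 * rng.standard_normal(5), M)).min()
                for _ in range(1000))
         for M in (0.0, 3 * L3)}
print(worst)  # clearly negative for M = 0, nonnegative (up to round-off) for M = 3*L_3
```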
Thus, one can move to an effective subsolver for the problem (14): the Bregman-distance gradient method for relatively smooth functions from [52]. The main idea is to show that the minimized function is $L_\rho$-smooth and $\mu_\rho$-strongly convex with respect to some convex function $\rho$, i.e.,
$\mu_\rho \nabla^2 \rho(y) \preceq \nabla^2_y \Omega_{3,M}(x; y) \preceq L_\rho \nabla^2 \rho(y).$
Then the Bregman-distance gradient method makes the steps
$z_{k+1} = \operatorname{argmin}_{y} \left\{ \langle \nabla_y \Omega_{3,M}(x; z_k), y - z_k \rangle + L_\rho \beta_\rho(z_k, y) \right\},$
where
$\beta_\rho(z, y) = \rho(y) - \rho(z) - \langle \nabla \rho(z), y - z \rangle$
is the Bregman distance generated by $\rho$. The Bregman-distance gradient method converges linearly with the condition number $L_\rho / \mu_\rho$ and the convergence rate
$\Omega_{3,M}(x; z_k) - \min_{y} \Omega_{3,M}(x; y) \le \left( 1 - \frac{\mu_\rho}{L_\rho} \right)^{k} \left( \Omega_{3,M}(x; z_0) - \min_{y} \Omega_{3,M}(x; y) \right).$
One can show that, for an appropriate choice of $M$, the model (13) can be optimized as a relatively smooth function by the gradient method with the Bregman distance generated by the sum of the quadratic form defined by $\nabla^2 f(x)$ and a multiple of the quartic term $\|y - x\|_2^4$; for this choice, $L_\rho$ and $\mu_\rho$ are absolute constants, so the relative condition number $L_\rho/\mu_\rho = O(1)$. This means that the subsolver is very fast and converges within a fixed number of iterations. It is worth noting that at each step the full Hessian $\nabla^2 f(x)$ is used, but the full third-order derivative is unnecessary, because only the derivative-vector-vector product $\nabla^3 f(x)[h, h]$ is needed to compute the gradient of the model. This product can be computed efficiently and precisely by automatic differentiation or approximated by a finite difference of gradients. To summarize, a second-order method with the convergence rate $O(k^{-4})$ (up to logarithmic factors), i.e., the rate of accelerated third-order methods, is obtained while only first- and second-order derivatives are computed. This approach was first proposed in [59] and then improved in [61]. Section 5 of [59] is good for a general understanding of this approach; for details and precise proofs, one can read [61].
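The following numpy sketch assembles these ingredients for the subproblem (14). It is an illustration of the structure only and not the exact procedure of [59, 61]: the reference function, the relative-smoothness constant kappa, the finite-difference step tau, and the toy objective are illustrative choices, and the directional derivative $\nabla^3 f(x)[h, h]$ is approximated by a central finite difference of gradients.

```python
import numpy as np

def d3_fd(grad_f, x, h, tau=1e-4):
    """Finite-difference approximation of the vector D^3 f(x)[h, h]."""
    return (grad_f(x + tau * h) - 2.0 * grad_f(x) + grad_f(x - tau * h)) / tau ** 2

def model_grad(grad_f, g, H, x, h, M):
    """Gradient in h of the model (13): Omega_3(f, x; x + h) + (M/24)*||h||^4."""
    return g + H @ h + 0.5 * d3_fd(grad_f, x, h) + (M / 6.0) * np.dot(h, h) * h

def quartic_prox(b, A, gamma, iters=60):
    """argmin_h <b, h> + 0.5*<A h, h> + (gamma/4)*||h||^4 via bisection on r = ||h||."""
    n = b.shape[0]
    solve = lambda r: np.linalg.solve(A + gamma * r ** 2 * np.eye(n), -b)
    lo, hi = 0.0, 1.0
    while np.linalg.norm(solve(hi)) > hi:
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.linalg.norm(solve(mid)) > mid else (lo, mid)
    return solve(hi)

def tensor_step(grad_f, hess_f, x, M, kappa=4.0, inner_iters=20):
    """Approximate tensor step (14) using only gradients and one Hessian of f."""
    g, H = grad_f(x), hess_f(x)
    h = np.zeros_like(x)
    for _ in range(inner_iters):
        # Bregman step with reference rho(h) = 0.5*<H h, h> + (M/4)*||h||^4:
        # minimize <grad_m(h), y - h> + kappa * (rho(y) - rho(h) - <grad_rho(h), y - h>).
        grad_m = model_grad(grad_f, g, H, x, h, M)
        grad_rho = H @ h + M * np.dot(h, h) * h
        h = quartic_prox(grad_m - kappa * grad_rho, kappa * H, kappa * M)
    return x + h

# Toy usage: f(x) = sum_i x_i^4 / 12 has a 2-Lipschitz third derivative (L_3 = 2).
grad_f = lambda x: x ** 3 / 3.0
hess_f = lambda x: np.diag(x ** 2)
x = np.array([2.0, -1.0, 3.0])
print(tensor_step(grad_f, hess_f, x, M=6.0))
```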
5 Tensor methods and stochastic distributed setup
As with first-order methods, modifications of tensor methods have been proposed for finite-sum problems, stochastic optimization, and distributed optimization.
5.1 Stochastic Optimization
In this subsection, problem (1) is considered under inexact information on the derivatives of the objective up to order $p$. In particular, this setting is motivated by stochastic optimization, in which case the objective has the following form
$f(x) = \mathbb{E}_{\xi \sim \mathcal{D}}\left[ f(x, \xi) \right], \qquad (15)$
where the random variable $\xi$ is sampled from a distribution $\mathcal{D}$, and the optimization procedure has access only to stochastic realizations $f(x, \xi)$ of the function via their derivatives up to the order $p$.
The full Taylor expansion (3) requires computing all the derivatives up to the order $p$, which can be expensive. Following the works [32, 3, 4], approximations $G_k(x) \approx \nabla^k f(x)$, $k = 1, \ldots, p$, are used to construct an inexact $p$-th order Taylor expansion of the objective:
$\tilde{\Omega}_p(f, x; y) = f(x) + \sum_{k=1}^{p} \frac{1}{k!} G_k(x)[y - x]^k. \qquad (16)$
Let us consider the case when the approximations $G_k(x)$ are stochastic, namely, when they satisfy the following assumption.
Assumption 1 (Unbiased stochastic gradients with bounded variance and bounded-variance stochastic higher-order derivatives).
For any $x$, the stochastic gradient $G_1(x)$ and the stochastic higher-order derivatives $G_k(x)$, $k = 2, \ldots, p$, satisfy
$\mathbb{E}\left[ G_1(x) \right] = \nabla f(x), \quad \mathbb{E}\left\| G_1(x) - \nabla f(x) \right\|_2^2 \le \sigma_1^2, \quad \mathbb{E}\left\| G_k(x) - \nabla^k f(x) \right\|_2^2 \le \sigma_k^2, \; k = 2, \ldots, p. \qquad (17)$
Next, stochastic tensor methods under Assumption 1 are described.
Recall that exact tensor methods are based on the minimization of the tensor model (see eq. (13) for $p = 3$). For inexact tensor methods the model is constructed in a similar way [4, 3]: the inexact Taylor expansion (16) is regularized by the $(p+1)$-th power of the norm $\|y - x\|_2$, where the regularization parameter depends on $L_p$ and on the inexactness levels from Assumption 1, up to absolute constants.
This model satisfies two main conditions ([3], Theorem 1):
• the model is a global upper bound for the objective function;
• the model is convex.
At each step of the tensor algorithm, one needs to find a minimizer of this model. However, finding the exact minimizer is a separate challenge. Instead of computing the exact minimum, one can find a point with a small norm of the model's gradient.
Definition 3.
Denote by $\tilde{y}$ a $\delta$-inexact solution of the subproblem, i.e., a point $\tilde{y}$ such that the norm of the gradient of the model at $\tilde{y}$ does not exceed $\delta$.
Everything is now prepared to introduce the Accelerated Stochastic Tensor Method ([4], Algorithm 2) listed as Algorithm 3.
(18)
Theorem 4.
Theorem 4 yields several intriguing consequences.
First of all, Assumption 1 requires unbiasedness of the stochastic gradients but not of the higher-order derivatives. The general case of inexact (and possibly biased) gradients is considered in [3]. In this scenario, a slightly different version of Algorithm 3 converges with a rate that additionally depends on $\delta_k$, the level of inexactness in the $k$-th derivative ($k = 1, \ldots, p$).
Moreover, Algorithm 3 also allows for restarts. The total number of iterations of the Restarted Accelerated Stochastic Tensor Method ([4], Algorithm 3) to reach the desired accuracy $\varepsilon$ in expectation is
(19)
One can use the mini-batch Restarted Accelerated Stochastic Tensor Method to solve problem (15). For simplicity, let $p = 2$ and assume that the subproblem can be solved exactly, i.e., $\delta = 0$. In this approach, the mini-batched stochastic gradients and Hessians are computed as averages of $r_1$ sampled gradients $\nabla f(x, \xi_i)$ and $r_2$ sampled Hessians $\nabla^2 f(x, \xi_i)$, respectively, where $r_1$ and $r_2$ represent the batch sizes. From the convergence estimate (19), one can determine the overall number of stochastic gradient computations, which is similar to that of the accelerated SGD method [31]. Interestingly, the number of stochastic Hessian computations scales only linearly with the inverse of the desired accuracy $\varepsilon$.
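For illustration, a minimal numpy sketch of mini-batch estimators satisfying Assumption 1 for a toy least-squares instance of (15) is given below; the batch sizes r1, r2 and the data layout are illustrative and not taken from [4].

```python
import numpy as np

rng = np.random.default_rng(0)

def minibatch_oracle(x, data, r1, r2):
    """Mini-batch estimators G_1(x) ~ grad f(x) and G_2(x) ~ hess f(x) for the
    toy stochastic objective f(x) = E_xi[ 0.5*(<a_xi, x> - b_xi)^2 ], where each
    sample xi = (a_xi, b_xi) is a row of the data set."""
    A, b = data
    idx1 = rng.choice(len(b), size=r1, replace=False)
    idx2 = rng.choice(len(b), size=r2, replace=False)
    residual = A[idx1] @ x - b[idx1]
    G1 = A[idx1].T @ residual / r1          # unbiased stochastic gradient
    G2 = A[idx2].T @ A[idx2] / r2           # stochastic Hessian estimate
    return G1, G2

# toy data: 1000 samples in dimension 5
A = rng.standard_normal((1000, 5)); b = rng.standard_normal(1000)
G1, G2 = minibatch_oracle(np.zeros(5), (A, b), r1=64, r2=16)
```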
5.2 Distributed optimization
In this subsection, the distributed empirical risk minimization problem is considered. In this setup, multiple agents/devices/workers collaborate to solve a machine learning problem by communicating through a central node (server). The goal is to minimize the following finite-sum objective:
$\min_{x \in \mathbb{R}^n} F(x) := \frac{1}{m} \sum_{i=1}^{m} f_i(x), \qquad (20)$
where $f_i$ is the loss function (parametrized by $x$) associated with the data stored on the $i$-th machine, $i = 1, \ldots, m$, and the $\mu$-strongly convex function $F$ is the global empirical loss function. Each node has access only to its local data and can communicate with the central server.
Problems of the form (20) arise in distributed machine learning and other applications. Beyond first-order methods, second-order methods have been developed for such Federated Learning problems. One of the keys to efficient second-order methods in this setting is exploiting the statistical similarity of the data [73, 23, 18, 13, 2]. Using statistical arguments, it is possible to show that if $f_1(x) = \frac{1}{n} \sum_{j=1}^{n} \ell(x; z_j)$, where $\ell(\cdot\,; z_j)$ is the individual loss function of each example $z_j$ stored at the first device chosen to be the central device, then, with high probability, $\|\nabla^2 f_1(x) - \nabla^2 F(x)\|_2 \le \delta$, where $\delta$ is proportional to $1/\sqrt{n}$. Then, defining
$\phi(x) = f_1(x) + \frac{\delta}{2} \|x\|_2^2, \qquad (21)$
the global objective $F$ becomes strongly convex and smooth relative to $\phi$, i.e., (cf. Section 4)
$\mu_\phi \nabla^2 \phi(x) \preceq \nabla^2 F(x) \preceq L_\phi \nabla^2 \phi(x),$
where $L_\phi = 1$, $\mu_\phi = \mu/(\mu + 2\delta)$, and the relative condition number is $L_\phi/\mu_\phi = 1 + 2\delta/\mu$. Thus, as in Section 4, one can apply the Bregman-distance gradient method and its <<accelerated>> variant [38].
When implemented, these methods require minimizing at the master node (the first node) subproblems of the form
$\min_{x} \left\{ \langle \nabla F(x^k), x \rangle + L_\phi \beta_\phi(x^k, x) \right\}.$
The first term is available due to communications (the local gradients are aggregated at the master node), and the last term involves $\phi$, which is a sum-type function with $n$ terms according to (21). If $n$ is large enough and the dimension is moderate, then the Hessian calculation time can dominate the Hessian inversion time. This allows tensor methods, in particular the superfast second-order methods mentioned in Section 4, to be used efficiently for this subproblem.
This idea was exploited and analysed in [23], resulting in an efficient distributed second-order solver with good practical performance and, in a certain regime, a total arithmetic operations complexity better than that of existing variance-reduced first-order algorithms for problem (20).
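As a small illustration of the statistically preconditioned Bregman step discussed above, the following numpy sketch performs the step for quadratic local losses, in which case the master subproblem has a closed form. The function name, the constants delta and L_phi, and the toy data are illustrative; the actual algorithms of [38, 23] use acceleration and inexact subproblem solvers.

```python
import numpy as np

def spag_like_step(x, grad_F, H1, delta, L_phi=1.0):
    """One Bregman step x+ = argmin_y { <grad_F, y> + L_phi * B_phi(x, y) } with the
    reference phi(y) = f_1(y) + (delta/2)*||y||^2, where f_1 is quadratic with
    Hessian H1 (stored at the master node), so the step has a closed form."""
    H_phi = H1 + delta * np.eye(H1.shape[0])      # Hessian of phi
    return x - np.linalg.solve(L_phi * H_phi, grad_F)

# toy setup: m workers with similar local quadratics F_i(x) = 0.5 x^T A_i x - b_i^T x
rng = np.random.default_rng(1)
m, d = 4, 5
A = [np.eye(d) + 0.1 * rng.standard_normal((d, d)) for _ in range(m)]
A = [0.5 * (Ai + Ai.T) + d * np.eye(d) for Ai in A]   # symmetrize, make PD and similar
b = [rng.standard_normal(d) for _ in range(m)]

x = np.zeros(d)
for _ in range(20):
    grad_F = sum(A[i] @ x - b[i] for i in range(m)) / m   # gathered by communication
    x = spag_like_step(x, grad_F, H1=A[0], delta=1.0)
print(np.linalg.norm(sum(A[i] @ x - b[i] for i in range(m)) / m))  # ~0: gradient norm
```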
The work of A. Agafonov was supported by the Ministry of Science and Higher Education of the Russian Federation (Goszadaniye), No. 075-00337-20-03, project No. 0714-2020-0005. The work of A. Gasnikov was supported by the strategic academic leadership program <<Priority 2030>> (Agreement 075-02-2021-1316 30.09.2021).
6 Conclusion
In each iteration, higher-order (also called tensor) methods minimize a regularized Taylor expansion of the objective function, which leads to faster convergence rates if the corresponding higher-order derivative is Lipschitz continuous. Recently, a series of lower iteration complexity bounds for such methods was proved, and a gap between the upper and lower complexity bounds was revealed. Moreover, it was shown that such methods can be implementable since the appropriately regularized Taylor expansion of a convex function is also convex and, thus, can be minimized in polynomial time. Only very recently an algorithm with the optimal convergence rate $O(k^{-(3p+1)/2})$ was proposed for minimizing convex functions with Lipschitz $p$-th derivative. For convex functions with Lipschitz third derivative, these developments allowed proposing a second-order method with convergence rate $O(k^{-5})$ (up to logarithmic factors), which is faster than the rate $O(k^{-7/2})$ of existing second-order methods.
7 MSC codes
90C25, 65K10, 90C15
8 Cross-References
• Automatic Differentiation: Calculation of Newton Steps; Dixon, L.
• Computational Optimal Transport; Dvurechensky P., Dvinskikh D., Tupitsa N., Gasnikov A.
• Stochastic gradient descent; Lan G.
• Unified analysis of SGD-type methods; Gorbunov E.
• Decentralized Convex Optimization over Time-Varying Graphs; Rogozin A., Gasnikov A., Beznosikov A., Kovalev D.
• Stochastic Quasi-Newton Scheme; Jalilzadeh, A.
References
- [1] Deeksha Adil, Brian Bullins, Arun Jambulapati, and Sushant Sachdeva. Line search-free methods for higher-order smooth monotone variational inequalities. arXiv preprint arXiv:2205.06167, 2022.
- [2] Artem Agafonov, Pavel Dvurechensky, Gesualdo Scutari, Alexander Gasnikov, Dmitry Kamzolov, Aleksandr Lukashevich, and Amir Daneshmand. An accelerated second-order method for distributed stochastic optimization. In 2021 60th IEEE Conference on Decision and Control (CDC), pages 2407–2413. IEEE, 2021.
- [3] Artem Agafonov, Dmitry Kamzolov, Pavel Dvurechensky, and Alexander Gasnikov. Inexact tensor methods and their application to stochastic convex optimization. arXiv preprint arXiv:2012.15636, 2020.
- [4] Artem Agafonov, Dmitry Kamzolov, Alexander Gasnikov, Kimon Antonakopoulos, Volkan Cevher, and Martin Takáč. Advancing the lower bounds: An accelerated, stochastic, second-order method with optimal adaptation to inexactness, 2023.
- [5] Naman Agarwal and Elad Hazan. Lower bounds for higher-order convex optimization. In Conference On Learning Theory, pages 774–792. PMLR, 2018.
- [6] Masoud Ahookhosh and Yurii Nesterov. High-order methods beyond the classical complexity bounds, ii: inexact high-order proximal-point methods with segment search. arXiv preprint arXiv:2109.12303, 2021.
- [7] Yossi Arjevani, Ohad Shamir, and Ron Shiff. Oracle complexity of second-order methods for smooth convex optimization. Mathematical Programming, 178(1):327–360, 2019.
- [8] Michel Baes. Estimate sequence methods: extensions and approximations. Institute for Operations Research, ETH, Zürich, Switzerland, 2009.
- [9] Ali Bouaricha. Tensor methods for large, sparse unconstrained optimization. SIAM Journal on Optimization, 7(3):732–756, 1997.
- [10] Sébastien Bubeck, Qijia Jiang, Yin-Tat Lee, Yuanzhi Li, and Aaron Sidford. Complexity of highly parallel non-smooth convex optimization. Advances in Neural Information Processing Systems, 32, 2019.
- [11] Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, and Aaron Sidford. Near-optimal method for highly smooth convex optimization. In Conference on Learning Theory, pages 492–507. PMLR, 2019.
- [12] Brian Bullins. Highly smooth minimization of non-smooth problems. In Conference on Learning Theory, pages 988–1030. PMLR, 2020.
- [13] Brian Bullins, Kshitij Patel, Ohad Shamir, Nathan Srebro, and Blake E Woodworth. A stochastic newton algorithm for distributed convex optimization. Advances in Neural Information Processing Systems, 34:26818–26830, 2021.
- [14] Yair Carmon, Danielle Hausler, Arun Jambulapati, Yujia Jin, and Aaron Sidford. Optimal and adaptive monteiro-svaiter acceleration. arXiv preprint arXiv:2205.15371, 2022.
- [15] Yair Carmon, Arun Jambulapati, Yujia Jin, and Aaron Sidford. Thinking inside the ball: Near-optimal minimization of the maximal loss. In Conference on Learning Theory, pages 866–882. PMLR, 2021.
- [16] Yair Carmon, Arun Jambulapati, Yujia Jin, and Aaron Sidford. Recapp: Crafting a more efficient catalyst for convex optimization. In International Conference on Machine Learning, pages 2658–2685. PMLR, 2022.
- [17] Pafnuty Chebyshev. Collected works. Vol 5. Strelbytskyy Multimedia Publishing, 2018.
- [18] Amir Daneshmand, Gesualdo Scutari, Pavel Dvurechensky, and Alexander Gasnikov. Newton method over networks is fast up to the statistical precision. In International Conference on Machine Learning, pages 2398–2409. PMLR, 2021.
- [19] Nikita Doikov, Konstantin Mishchenko, and Yurii Nesterov. Super-universal regularized newton method. arXiv preprint arXiv:2208.05888, 2022.
- [20] Nikita Doikov and Yurii Nesterov. Local convergence of tensor methods. Mathematical Programming, 193(1):315–336, 2022.
- [21] Nikita Doikov and Yurii E Nesterov. Inexact tensor methods with dynamic accuracies. In ICML, pages 2577–2586, 2020.
- [22] Pavel Dvurechensky, Alexander Gasnikov, Petr Ostroukhov, César A. Uribe, and Anastasiya Ivanova. Near-optimal tensor methods for minimizing the gradient norm of convex function. arXiv:1912.03381, 2019. WIAS Preprint No. 2694.
- [23] Pavel Dvurechensky, Dmitry Kamzolov, Aleksandr Lukashevich, Soomin Lee, Erik Ordentlich, César A Uribe, and Alexander Gasnikov. Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization. arXiv preprint arXiv:2102.08246, 2021.
- [24] Yurii Gavrilovich Evtushenko and Alexey Anatol’evich Tret’yakov. pth-order approximation of the solution set of nonlinear equations. Computational Mathematics and Mathematical Physics, 53(12):1763–1780, 2013.
- [25] Ankit Garg, Robin Kothari, Praneeth Netrapalli, and Suhail Sherif. Near-optimal lower bounds for convex optimization for all orders of smoothness. Advances in Neural Information Processing Systems, 34:29874–29884, 2021.
- [26] Alexander Gasnikov. Modern numerical methods. universal gradient descent. MIPT, arXiv:1711.00394, 2018.
- [27] Alexander Gasnikov, Pavel Dvurechensky, Eduard Gorbunov, Evgeniya Vorontsova, Daniil Selikhanovych, and César A Uribe. Optimal tensor methods in smooth convex and uniformly convexoptimization. In Conference on Learning Theory, pages 1374–1391. PMLR, 2019.
- [28] Alexander Vladimirovich Gasnikov, Darina Mikhailovna Dvinskikh, Pavel Evgenievich Dvurechensky, Dmitry Igorevich Kamzolov, Vladislav Vyacheslavovich Matyukhin, Dmitry Arkadievich Pasechnyuk, Nazarii Konstantinovich Tupitsa, and Aleksey Vladimirovich Chernov. Accelerated meta-algorithm for convex optimization problems. Computational Mathematics and Mathematical Physics, 61(1):17–28, 2021.
- [29] Alexander Vladimirovich Gasnikov and Dmitry Alexandrovich Kovalev. A hypothesis about the rate of global convergence for optimal methods (newton’s type) in smooth convex optimization. Computer research and modeling, 10(3):305–314, 2018.
- [30] A.V. Gasnikov, E.A. Gorbunov, D.A. Kovalev, A.A.M. Mohammed, and E.O. Chernousova. Substantiation of the hypothesis about optimal estimates of the rate of convergence of numerical methods of high-order convex optimization. Computer Research and Modeling, 10(6):737–753, 2018.
- [31] Saeed Ghadimi and Guanghui Lan. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, ii: shrinking procedures and optimal algorithms. SIAM Journal on Optimization, 23(4):2061–2089, 2013.
- [32] Saeed Ghadimi, Han Liu, and Tong Zhang. Second-order methods with cubic regularization under inexact information. arXiv preprint arXiv:1710.05782, 2017.
- [33] G. N. Grapiglia and Yu. Nesterov. Tensor methods for minimizing convex functions with hölder continuous higher-order derivatives. SIAM Journal on Optimization, 30(4):2750–2779, 2020.
- [34] G. N. Grapiglia and Yurii Nesterov. Tensor methods for finding approximate stationary points of convex functions. Optimization Methods and Software, 0(0):1–34, 2020.
- [35] Geovani Nunes Grapiglia and Yurii Nesterov. On inexact solution of auxiliary problems in tensor methods for convex optimization. Optimization Methods and Software, 36(1):145–170, 2021.
- [36] Geovani Nunes Grapiglia and Yurii Nesterov. Adaptive third-order methods for composite convex optimization. arXiv:2202.12730, 2022.
- [37] Andreas Griewank. The modification of newton’s method for unconstrained optimization by bounding cubic terms. Technical report, Technical report NA/12, 1981.
- [38] Hadrien Hendrikx, Lin Xiao, Sebastien Bubeck, Francis Bach, and Laurent Massoulie. Statistically preconditioned accelerated gradient method for distributed optimization. In International conference on machine learning, pages 4203–4227. PMLR, 2020.
- [39] Karl-Heinz Hoffmann and Hans-Joachim Kornstaedt. Higher-order necessary conditions in abstract mathematical programming. Journal of Optimization Theory and Applications, 26(4):533–568, 1978.
- [40] Anastasiya Ivanova, Dmitry Pasechnyuk, Dmitry Grishchenko, Egor Shulgin, Alexander Gasnikov, and Vladislav Matyukhin. Adaptive catalyst for smooth convex optimization. In International Conference on Optimization and Applications, pages 20–37. Springer, 2021.
- [41] Bo Jiang, Tianyi Lin, and Shuzhong Zhang. A unified adaptive tensor approximation scheme to accelerate composite convex optimization. SIAM Journal on Optimization, 30(4):2897–2926, 2020.
- [42] Bo Jiang, Haoyue Wang, and Shuzhong Zhang. An optimal high-order tensor method for convex optimization. In Conference on Learning Theory, pages 1799–1801. PMLR, 2019.
- [43] Dmitry Kamzolov and Alexander Gasnikov. Near-optimal hyperfast second-order method for convex optimization and its sliding. arXiv preprint arXiv:2002.09050, 2020.
- [44] Dmitry Kamzolov, Alexander Gasnikov, and Pavel Dvurechensky. Optimal combination of tensor optimization methods. In International Conference on Optimization and Applications, pages 166–183. Springer, 2020.
- [45] Leonid Vital’evich Kantorovich. On newton’s method. Trudy Matematicheskogo Instituta imeni VA Steklova, 28:104–144, 1949.
- [46] Guy Kornowski and Ohad Shamir. High-order oracle complexity of smooth and strongly convex optimization. arXiv preprint arXiv:2010.06642, 2020.
- [47] Dmitry Kovalev, Aleksandr Beznosikov, Ekaterina Borodich, Alexander Gasnikov, and Gesualdo Scutari. Optimal gradient sliding and its application to distributed optimization under similarity. arXiv preprint arXiv:2205.15136, 2022.
- [48] Dmitry Kovalev and Alexander Gasnikov. The first optimal acceleration of high-order methods in smooth convex optimization. arXiv preprint arXiv:2205.09647, 2022.
- [49] Dmitry Kovalev and Alexander Gasnikov. The first optimal acceleration of high-order methods in smooth convex optimization. arXiv preprint arXiv:2205.09647, 2022.
- [50] Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. Catalyst acceleration for first-order convex optimization: from theory to practice. Journal of Machine Learning Research, 18(1):7854–7907, 2018.
- [51] Tianyi Lin, Michael Jordan, et al. Perseus: A simple high-order regularization method for variational inequalities. arXiv preprint arXiv:2205.03202, 2022.
- [52] Haihao Lu, Robert M Freund, and Yurii Nesterov. Relatively smooth convex optimization by first-order methods, and applications. SIAM Journal on Optimization, 28(1):333–354, 2018.
- [53] Renato D.C. Monteiro and Benar Fux Svaiter. An accelerated hybrid proximal extragradient method for convex optimization and its implications to second-order methods. SIAM Journal on Optimization, 23(2):1092–1125, 2013.
- [54] Arkadij Semenovič Nemirovskij and David Borisovich Yudin. Problem complexity and method efficiency in optimization. 1983.
- [55] Yu Nesterov. Accelerating the cubic regularization of newton’s method on convex problems. Mathematical Programming, 112(1):159–181, 2008.
- [56] Yurii Nesterov. A method of solving a convex programming problem with convergence rate . Soviet Mathematics Doklady, 27(2):372–376, 1983.
- [57] Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2003.
- [58] Yurii Nesterov. Lectures on convex optimization, volume 137. Springer, 2018.
- [59] Yurii Nesterov. Implementable tensor methods in unconstrained convex optimization. Mathematical Programming, 186(1):157–183, 2021.
- [60] Yurii Nesterov. Inexact high-order proximal-point methods with auxiliary search procedure. SIAM Journal on Optimization, 31(4):2807–2828, 2021.
- [61] Yurii Nesterov. Superfast second-order methods for unconstrained convex optimization. Journal of Optimization Theory and Applications, 191(1):1–30, 2021.
- [62] Yurii Nesterov and Arkadii Nemirovskii. Interior-point polynomial algorithms in convex programming. SIAM, 1994.
- [63] Yurii Nesterov and Boris Polyak. Cubic regularization of newton method and its global performance. Mathematical Programming, 108(1):177–205, 2006.
- [64] Isaac Newton. Methodus fluxionum et serierum infinitarum, 1671. The method of fluxions and infinite series, 1967.
- [65] Jorge Nocedal and Stephen J. Wright. Numerical optimization. Springer, 1999.
- [66] Boris Teodorovich Poljak. Sharp minimum. In Generalized Lagrangians and applications. Oxford: Pergamon Press, 1982.
- [67] B. Polyak. Introduction to optimization. Optimization Software, 1987.
- [68] Robert B Schnabel and Ta-Tung Chow. Tensor methods for unconstrained optimization using second derivatives. SIAM Journal on Optimization, 1(3):293–315, 1991.
- [69] Robert B. Schnabel and Paul D. Frank. Tensor methods for nonlinear equations. SIAM Journal on Numerical Analysis, 21(5):815–843, 1984.
- [70] Chaobing Song, Yong Jiang, and Yi Ma. Unified acceleration of high-order algorithms under general hölder continuity. SIAM Journal on Optimization, 31(3):1797–1826, 2021.
- [71] Ye Tian, Gesualdo Scutari, Tianyu Cao, and Alexander Gasnikov. Acceleration in distributed optimization under similarity. In International Conference on Artificial Intelligence and Statistics, pages 5721–5756. PMLR, 2022.
- [72] Vladislav Tominin, Yaroslav Tominin, Ekaterina Borodich, Dmitry Kovalev, Alexander Gasnikov, and Pavel Dvurechensky. On accelerated methods for saddle-point problems with composite structure. arXiv preprint arXiv:2103.09344, 2021.
- [73] Yuchen Zhang and Lin Xiao. Communication-Efficient Distributed Optimization of Self-concordant Empirical Loss, pages 289–341. Springer International Publishing, Cham, 2018.