
Superlinear Optimization Algorithms

Hongxia Wang, Yeming Xu, Ziyuan Guo, Huanshui Zhang

H. Zhang, H. Wang, Y. Xu, and Z. Guo are with the School of Electrical and Automation Engineering, Shandong University of Science and Technology, Qingdao 266590, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). This work was supported by the Original Exploratory Program Project of National Natural Science Foundation of China (62250056), the Joint Funds of the National Natural Science Foundation of China (U23A20325), the Major Basic Research of Natural Science Foundation of Shandong Province (ZR2021ZD14), and the High-level Talent Team Project of Qingdao West Coast New Area (RCTD-JC-2019-05). (Corresponding author: Huanshui Zhang.)
Abstract

This paper proposes several novel optimization algorithms for minimizing a nonlinear objective function. The algorithms are inspired by the optimal state trajectory of an optimal control problem closely related to the minimized objective function. They are superlinearly convergent when appropriate parameters are selected as required. Unlike Newton’s method, all of them can also be applied in the case of a singular Hessian matrix. More importantly, by reduction, some of them avoid calculating the inverse of the Hessian matrix or of any matrix of the same dimension, and some of them need only the diagonal elements of the Hessian matrix. In these cases, these algorithms still outperform the gradient descent method. The merits of the proposed optimization algorithms are illustrated by numerical experiments.

I Introduction

The history of optimization is long and spans multiple disciplines, including mathematics, operations research, computer science, and engineering [1, 2, 3]. The roots of optimization can be traced back to the 18th century, when mathematicians such as Leonhard Euler and Joseph-Louis Lagrange began formulating and solving problems of finding extrema (maxima or minima) of functions. Several important achievements followed: the calculus of variations became a formal branch of mathematics [4]; the simplex method for solving linear programming problems led to the development of operations research [5]; global optimization, which seeks the global optimum rather than the local optima of a nonlinear function, gained attention [6]; evolutionary and metaheuristic optimization algorithms gained popularity [7]; and optimization techniques found extensive applications in operations research, engineering, finance, and various other fields. Integer programming, mixed-integer programming, and other discrete optimization techniques were developed to address real-world problems, and optimization has become increasingly important for training models in machine learning and in various data science applications [8]. Gradient descent and its variants, such as stochastic gradient descent, have become essential optimization techniques in the context of neural networks [9].

Gradient descent [10] and Newton’s method [11] are the predominant optimization methods. The choice of step size is crucial for their convergence and stability: if the step size is too large, the methods may not converge, and if it is too small, convergence may be very slow [12, 13, 14]. To control the size of each iteration and improve the convergence properties of the algorithms, line search or backtracking techniques are used to determine an appropriate step size at each iteration [15], which entails additional computational cost. In practical implementations, the step size can also be adjusted dynamically based on the behavior of the function. A body of variants and enhancements, such as stochastic gradient descent [16], momentum [17], and adaptive learning rate methods [18, 19, 20], have thus been proposed.

Besides, Newton’s method may diverge if the Hessian matrix is singular, or it may encounter numerical instability when the Hessian matrix is ill-conditioned. To mitigate these issues, quasi-Newton methods [21], inexact Newton methods [22], and modified Newton methods [23] have been developed.

This paper attempts to solve optimization problems (OPs) by proposing novel algorithms inspired by the optimal state trajectory of a closely related optimal control problem (OCP). Because our original algorithm proceeds almost along the optimal state trajectory of the OCP, it possesses some remarkable features. It converges more rapidly than gradient descent because it converges superlinearly; meanwhile, it is superior to Newton’s method because it can be applied in the case of a singular Hessian matrix. To reduce the computational cost, we also offer several variants of the original algorithm. It is worth pointing out that these variants inherit the above merits: all of them are superlinearly convergent.

The remainder of the paper is arranged as follows. In Section II, we propose the novel algorithm, which proceeds almost along an optimal state trajectory. Section III is devoted to the convergence analysis of the proposed algorithm and its several variants. The effectiveness of the algorithms is illustrated in Section IV. Conclusions are drawn in Section V.

Notation: Throughout the paper, the superscript $T$ stands for matrix transposition; $R^{n}$ denotes the $n$-dimensional real vector space; $I_{n}$ is the $n\times n$ identity matrix. For any square matrix $M$, $M>0$ means that it is positive definite and $\rho(M)$ represents the spectral radius of $M$. $|\cdot|$ and $\|\cdot\|$ denote appropriate vector and matrix norms, respectively.

II Optimization Algorithms Inspired by Optimal Control

Assume that $f(x):R^{n}\mapsto R^{1}$ is a twice continuously differentiable nonlinear function. The optimization problem (OP)

$\min_{x}f(x)$ (1)

is generally solved by numerical iteration

$x_{k+1}=\Phi(x_{k})$, with a guess of $x_{0}$, (2)

with $\Phi(\cdot)$ being a designed function.

We will design $\Phi(\cdot)$ in (2) with the aid of the following optimal control problem (OCP):

$\min_{u}\sum_{k=0}^{N}[f(x_{k})+\frac{1}{2}u_{k}^{T}Ru_{k}]+f(x_{N+1})$, (3)

subject to $x_{k+1}=x_{k}+u_{k}$, (4)

where $x_{k}\in R^{n}$ and $u_{k}\in R^{n}$ are the state and control of system (4), respectively, the integer $N>0$ is the terminal time, the matrix $R>0$ is the control weight matrix, and (3) is the cost functional of the OCP, closely related to OP (1).

By solving OCP (3)-(4), one can obtain the optimal control law $u_{k}$ and the optimal state trajectory $x_{k}$ as below.

Lemma 1.

The optimal control strategy and the optimal state trajectory of OCP (3)-(4) admit

$u_{k}=-R^{-1}\sum_{i=k+1}^{N+1}\nabla f(x_{i})$, (5)

$x_{k+1}=x_{k}-R^{-1}\sum_{i=k+1}^{N+1}\nabla f(x_{i})$, (6)

where $\nabla f(x)$ stands for the gradient of $f(x)$.

Proof.

The conclusion is direct by applying the maximum principle and solving the Hamiltonian system

$x_{k+1}=x_{k}-R^{-1}p_{k}$, (7)

$p_{k-1}=p_{k}+\nabla f(x_{k}),\quad p_{N}=\nabla f(x_{N+1})$. (8)

Indeed, the costate $p_{k}=\sum_{i=k+1}^{N+1}\nabla f(x_{i})$ satisfies (8), and substituting it into (7) yields (5)-(6). Please refer to [24] for the details. ∎

Along the optimal state trajectory (6), we will explore several algorithms for solving OP (1). This is feasible only if OCP (3)-(4) is solvable. Since global optimality implies local optimality, the optimal $u_{i}$ is in charge of regulating $x_{i+1}$ to minimize $\sum_{k=i+1}^{N}[f(x_{k})+\frac{1}{2}u_{k}^{T}Ru_{k}]+f(x_{N+1})$. Given that $R>0$, the optimal $u_{i}$ minimizes $\sum_{k=i+1}^{N+1}f(x_{k})$ by regulating the state $x_{i+1}$. One can thereby assert that the optimal $u_{N}$ minimizes $f(x_{N+1})$.

It seems impossible to exactly achieve the optimal state trajectory of (3)-(4) because of the nonlinearity of $f(x)$. Consequently, we attempt to approximate the optimal state trajectory for solving (1) by

$x_{k+1}=x_{k}-g_{k}$, (9)

$g_{k}=(R+H(x_{k}))^{-1}\big(\nabla f(x_{k})+\sum_{i=k+1}^{N}(\nabla f(x_{i})-H(x_{i})g_{i})\big),\quad g_{N}=(R+H(x_{N}))^{-1}\nabla f(x_{N})$, (10)

where $H(x)$ denotes the Hessian matrix of $f(x)$.

Lemma 2.

For any $0\leq k\leq N$, the $x_{k}$ generated by algorithm (9)-(10) is almost along the optimal state trajectory (6).

Proof.

It can be proved by using the first-order Taylor expansion of $\nabla f(x)$ and the optimal trajectory (6) in Lemma 1; please refer to the proof of [25, Lem. 2]. ∎

Given that $x_{k}$ relies on $x_{i}$ for $i\geq k+1$ in (9)-(10), the recursion cannot be iterated forward. We therefore propose the following algorithm

$x_{k+1}=x_{k}-[I-((R+H(x_{k}))^{-1}R)^{N-k+1}]H(x_{k})^{-1}\nabla f(x_{k})$. (11)
Lemma 3.

Algorithm (11) is obtained from (9)-(10). Unlike the existing algorithms, algorithm (11) recovers Newton’s method when $R=0$.

Proof.

Replace all $x_{i}$ for $i\geq k+1$ on the right-hand side of (9) with $x_{k}$ to get the iterative relation

$x_{k+1}=x_{k}-z_{k}(x_{k})$, (12)

$z_{k}(x_{k})=(R+H(x_{k}))^{-1}(\nabla f(x_{k})+Rz_{k+1}),\quad z_{N}(x_{k})=(R+H(x_{k}))^{-1}\nabla f(x_{k})$. (13)

According to (13), $z_{k}(x_{k})$ can be directly calculated as

$z_{k}(x_{k})=\sum_{i=0}^{N-k}[(R+H(x_{k}))^{-1}R]^{i}(R+H(x_{k}))^{-1}\nabla f(x_{k})=[I-((R+H(x_{k}))^{-1}R)^{N-k+1}]H(x_{k})^{-1}\nabla f(x_{k})$, (14)

where the second equality sums the matrix geometric series, using the identity $I-(R+H(x_{k}))^{-1}R=(R+H(x_{k}))^{-1}H(x_{k})$.

By combining (12) and (14), (11) is obtained immediately. Observing the structure of (11), it is indeed different from the existing algorithms. Letting $R=0$, it is immediate to see that (11) reduces to Newton’s method. ∎
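As a sanity check (our own illustration, not part of the original proof), the following Python snippet verifies numerically that the backward recursion (13) and the closed form (14) coincide at a fixed point $x_{k}$, with randomly generated stand-ins for $H(x_{k})>0$, $R>0$, and the gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 8

A = rng.standard_normal((n, n))
H = A @ A.T + n * np.eye(n)            # stand-in for H(x_k) > 0
B = rng.standard_normal((n, n))
R = B @ B.T + n * np.eye(n)            # control weight R > 0
g = rng.standard_normal(n)             # stand-in for grad f(x_k)

S = np.linalg.inv(R + H)               # (R + H)^{-1}

# Backward recursion (13): z_N = S g, z_k = S (g + R z_{k+1}).
z = S @ g
zs = {N: z.copy()}
for k in range(N - 1, -1, -1):
    z = S @ (g + R @ z)
    zs[k] = z.copy()

# Closed form (14): z_k = [I - (S R)^{N-k+1}] H^{-1} g.
Hinv_g = np.linalg.solve(H, g)
for k in range(N + 1):
    closed = (np.eye(n) - np.linalg.matrix_power(S @ R, N - k + 1)) @ Hinv_g
    assert np.allclose(zs[k], closed)
print("recursion (13) matches closed form (14) for all 0 <= k <= N")
```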

III Superlinear algorithms

Lemma 3 encourages us to explore more algorithms.

III-A A superlinear algorithm

Note that $k$ in (11) is required to be less than $N+1$. We thus modify (11) as
Algorithm I:

$x_{k+1}=x_{k}-[I-((R+H(x_{k}))^{-1}R)^{k+1}]H(x_{k})^{-1}\nabla f(x_{k})$. (15)

Provided that $f(x)$ is strictly convex, there hold $\rho(((R+H(x_{k}))^{-1}R)^{k+1})<1$ and $\rho(I-((R+H(x_{k}))^{-1}R)^{k+1})<1$. The matrix $I-((R+H(x_{k}))^{-1}R)^{k+1}$ slightly alters the step size and direction of Newton’s method, so we conjecture that (15) retains some advantages of Newton’s method, which will be proved later.
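For concreteness, the following Python sketch shows one possible implementation of Algorithm I (15); the callables `grad` and `hess` and the stopping rule are our own assumptions, not specified in the paper, and strict convexity is assumed so that $H(x_{k})$ is invertible:

```python
import numpy as np

def algorithm_1(x0, grad, hess, R, iters=50, tol=1e-10):
    """Sketch of iteration (15); grad/hess are user-supplied callables
    returning the gradient vector and Hessian matrix, R > 0 is the
    adjustable weight matrix."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for k in range(iters):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        SR = np.linalg.solve(R + H, R)                      # (R + H)^{-1} R
        T = np.eye(n) - np.linalg.matrix_power(SR, k + 1)   # I - ((R+H)^{-1}R)^{k+1}
        x = x - T @ np.linalg.solve(H, g)                   # iteration (15)
    return x

# Usage on a strictly convex quadratic with minimizer -Q^{-1} b = [-1, 0.2].
Q = np.diag([1.0, 10.0])
b = np.array([1.0, -2.0])
x_min = algorithm_1(np.zeros(2), grad=lambda x: Q @ x + b,
                    hess=lambda x: Q, R=0.5 * np.eye(2))
print(x_min)
```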

Theorem 1.

Assume $f(x)$ is strictly convex. Then $\{x_{k}\}$ generated by (15) admits

$|x_{k+1}-x_{*}|\leq\rho((R+H(x_{k}))^{-1}R)^{k+1}|x_{k}-x_{*}|$, (16)

i.e., Algorithm (15) is superlinearly convergent.

Proof.

The proof is divided into two parts: we first prove that algorithm (15) is convergent, and then show that it converges superlinearly.

First, define the Lyapunov function candidate $V_{k}(x_{k})=f(x_{k})-f(x_{*})$, where $x_{*}$ is the minimum point of $f(x)$. Then $V_{k}(x_{k})\geq 0$ and $V_{k}(x_{*})=0$. Along (15), we can derive

$V_{k+1}(x_{k+1})-V_{k}(x_{k})$
$= f(x_{k+1})-f(x_{k})$
$= [\nabla f(x_{k})]^{T}(x_{k+1}-x_{k})+o(|x_{k+1}-x_{k}|^{2})$
$= -[\nabla f(x_{k})]^{T}[I-((R+H(x_{k}))^{-1}R)^{k+1}]H(x_{k})^{-1}\nabla f(x_{k})+o(|x_{k+1}-x_{k}|^{2})$. (17)

Recall $\rho(((R+H(x_{k}))^{-1}R)^{k+1})<1$ and $H(x_{k})^{-1}>0$. According to [26, Thm. H.1.a], each eigenvalue of $((R+H(x_{k}))^{-1}R)^{k+1}H(x_{k})^{-1}$ is positive and $0<\rho(((R+H(x_{k}))^{-1}R)^{k+1}H(x_{k})^{-1})<\rho(H(x_{k})^{-1})$. Moreover, it can be derived from

$((R+H(x_{k}))^{-1}R)^{k+1}H(x_{k})^{-1}$
$= ((I+R^{-1}H(x_{k}))^{-1})^{k+1}H(x_{k})^{-1}$
$= (I+R^{-1}H(x_{k}))^{-1}(I+R^{-1}H(x_{k}))^{-1}\cdots(I+R^{-1}H(x_{k}))^{-1}H(x_{k})^{-1}$
$= [H(x_{k})^{-1}H(x_{k})(I+R^{-1}H(x_{k}))^{-1}][H(x_{k})^{-1}H(x_{k})(I+R^{-1}H(x_{k}))^{-1}]\cdots[H(x_{k})^{-1}H(x_{k})(I+R^{-1}H(x_{k}))^{-1}]H(x_{k})^{-1}$
$= [H(x_{k})^{-1}(I+H(x_{k})R^{-1})^{-1}H(x_{k})][H(x_{k})^{-1}(I+H(x_{k})R^{-1})^{-1}H(x_{k})]\cdots[H(x_{k})^{-1}(I+H(x_{k})R^{-1})^{-1}H(x_{k})]H(x_{k})^{-1}$
$= H(x_{k})^{-1}(I+H(x_{k})R^{-1})^{-1}(I+H(x_{k})R^{-1})^{-1}\cdots(I+H(x_{k})R^{-1})^{-1}$
$= H(x_{k})^{-1}R(R+H(x_{k}))^{-1}R(R+H(x_{k}))^{-1}\cdots R(R+H(x_{k}))^{-1}$
$= H(x_{k})^{-1}[R(R+H(x_{k}))^{-1}]^{k+1}$ (18)

that $[((R+H(x_{k}))^{-1}R)^{k+1}H(x_{k})^{-1}]^{T}=((R+H(x_{k}))^{-1}R)^{k+1}H(x_{k})^{-1}$. Consequently, $((R+H(x_{k}))^{-1}R)^{k+1}H(x_{k})^{-1}>0$.

It now follows from (17) that $V_{k+1}(x_{k+1})-V_{k}(x_{k})<0$. As a consequence, (15) is stable and convergent.

Let $\bar{z}_{k}(x)=[I-((R+H(x))^{-1}R)^{k+1}]H(x)^{-1}\nabla f(x)$. Now direct calculation gives

$\bar{z}_{k}(x_{*})=[I-((R+H(x_{*}))^{-1}R)^{k+1}]H(x_{*})^{-1}\nabla f(x_{*})=0$, (19)

$\bar{z}_{k}^{\prime}(x_{*})=I-((R+H(x_{*}))^{-1}R)^{k+1}$. (20)

From (15),

$x_{k+1}-x_{*}$
$= x_{k}-x_{*}-\bar{z}_{k}(x_{k})$
$\approx x_{k}-x_{*}-[\bar{z}_{k}(x_{*})+\bar{z}_{k}^{\prime}(x_{*})(x_{k}-x_{*})]$
$= x_{k}-x_{*}-\bar{z}_{k}^{\prime}(x_{*})(x_{k}-x_{*})$
$= (I-\bar{z}_{k}^{\prime}(x_{*}))(x_{k}-x_{*})$
$= ((R+H(x_{*}))^{-1}R)^{k+1}(x_{k}-x_{*})$, (21)

where the third line uses the Taylor expansion of $\bar{z}_{k}(x)$ at $x_{*}$. Since $\rho((R+H(x_{*}))^{-1}R)<1$ when $H(x_{*})>0$, it is immediate to conclude that (16) holds. The proof is completed. ∎
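The bound (16) can be observed numerically. The snippet below (our own illustration) runs iteration (15) on a strictly convex quadratic $f(x)=\frac{1}{2}x^{T}Qx$, for which $x_{*}=0$ and the expansion in (21) is exact, and prints the per-step contraction together with the bound $\rho((R+H(x_{*}))^{-1}R)^{k+1}$:

```python
import numpy as np

Q = np.diag([1.0, 4.0, 9.0])                   # H(x) = Q > 0 for all x
R = np.eye(3)
SR = np.linalg.solve(R + Q, R)                 # (R + H)^{-1} R
rho = np.max(np.abs(np.linalg.eigvals(SR)))    # spectral radius

x = np.ones(3)
for k in range(6):
    T = np.eye(3) - np.linalg.matrix_power(SR, k + 1)
    x_next = x - T @ np.linalg.solve(Q, Q @ x)           # iteration (15)
    ratio = np.linalg.norm(x_next) / np.linalg.norm(x)   # |x_{k+1}|/|x_k|
    print(f"k={k}: contraction {ratio:.2e} <= bound {rho**(k+1):.2e}")
    x = x_next
```

The printed contraction factors shrink geometrically with $k$, which is the superlinear behavior asserted by Theorem 1.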

III-B Superlinear algorithm without involving the inverse of the Hessian matrix

In (15), the dominant computational cost lies in inverting $R+H(x_{k})$. The good news is that $R$ in (15) is an adjustable matrix, and so is $R+H(x_{k})$. To save computation, we replace $(R+H(x_{k}))^{-1}$ with an adjustable matrix $M$. Note that $(R+H(x_{k}))^{-1}R=I-(R+H(x_{k}))^{-1}H(x_{k})$, so $(R+H(x_{k}))^{-1}R$ can be substituted by $I-MH(x_{k})$. Algorithm (15) can thus be modified as
Algorithm II:

$x_{k+1}=x_{k}-[I-(I-MH(x_{k}))^{k+1}]H(x_{k})^{-1}\nabla f(x_{k})$. (22)

Algorithm (22) inherits the superlinear convergence of algorithm (15), which will be shown in the sequel.

Theorem 2.

Assume $\nabla f(x_{*})=0$ and $H(x_{*})>0$. If $M>0$ and $\rho(I-MH(x_{*}))<1$, then $\{x_{k}\}$ generated by (22) admits

$|x_{k+1}-x_{*}|\leq\rho(I-MH(x_{*}))^{k+1}|x_{k}-x_{*}|$, (23)

i.e., Algorithm (22) is superlinearly convergent.

Proof.

Denote

$\hat{g}_{k}(x)=[I-(I-MH(x))^{k+1}]H(x)^{-1}\nabla f(x)$. (24)

Direct derivation shows

$\hat{g}_{k}(x_{*})=[I-(I-MH(x_{*}))^{k+1}]H(x_{*})^{-1}\nabla f(x_{*})=0$, (25)

$\hat{g}^{\prime}_{k}(x_{*})=I-(I-MH(x_{*}))^{k+1}$. (26)

From (22),

$x_{k+1}-x_{*}$
$= x_{k}-x_{*}-\hat{g}_{k}(x_{k})$
$\approx x_{k}-x_{*}-[\hat{g}_{k}(x_{*})+\hat{g}^{\prime}_{k}(x_{*})(x_{k}-x_{*})]$
$= x_{k}-x_{*}-\hat{g}^{\prime}_{k}(x_{*})(x_{k}-x_{*})$
$= (I_{n}-\hat{g}^{\prime}_{k}(x_{*}))(x_{k}-x_{*})$
$= (I-MH(x_{*}))^{k+1}(x_{k}-x_{*})$. (27)

In view of $\rho(I-MH(x_{*}))<1$ when $H(x_{*})>0$, the proof is completed. ∎

Remark 1.

It should be pointed out that, unlike Newton’s method, algorithm (22) still works even if $H(x)=0$ occasionally holds, because algorithm (22) has the equivalent realization

$x_{k+1}=x_{k}-\hat{g}_{k}(x_{k})$, (28)

$\hat{g}_{k}(x_{k})=M\nabla f(x_{k})+(I-MH(x_{k}))\hat{g}_{k-1}(x_{k}),\quad\hat{g}_{0}(x_{k})=M\nabla f(x_{k})$. (29)
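A minimal Python sketch of the realization (28)-(29) is given below; it uses only matrix-vector products with $H(x_{k})$ and never inverts a matrix, which is why it remains well defined even when $H(x_{k})$ is singular. The interface and stopping rule are our own assumptions, and $M$ should be chosen so that $\rho(I-MH(x))<1$ (cf. Remark 2):

```python
import numpy as np

def algorithm_2(x0, grad, hess, M, iters=30, tol=1e-10):
    """Sketch of the inverse-free realization (28)-(29) of Algorithm II."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        d = M @ g                     # \hat g_0(x_k) = M grad f(x_k)
        for _ in range(k):            # \hat g_j = M grad f + (I - M H) \hat g_{j-1}
            d = M @ g + d - M @ (H @ d)
        x = x - d
    return x
```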
Remark 2.

When $H(x)>0$ and $M>0$, $\rho(I-MH(x))<1$ always holds provided that the spectral radius of $M$ is selected small enough.

This remark suggests that it is possible to select a diagonal matrix $M$ in algorithm (28)-(29) to ensure that the algorithm is superlinearly convergent.

III-C Superlinear algorithms without involving the Hessian matrix

In addition, to alleviate the requirement for second-order information, the Hessian matrix $H(x_{k})$ in algorithm (28)-(29) can be approximated by a first-order difference. As a result, algorithm (28)-(29) is replaced by
Algorithm III:

$x_{k+1}=x_{k}-\bar{g}_{k}(x_{k})$, (30)

$\bar{g}_{k}(x_{k})=D\nabla f(x_{k})+(I-DD_{1}(x_{k}))\bar{g}_{k-1}(x_{k}),\quad\bar{g}_{0}(x_{k})=D\nabla f(x_{k})$, (31)

where $D>0$ is an appropriate diagonal matrix and $D_{1}(x_{k})$ is the first-order backward difference of $\nabla f(x)$ at the point $x_{k}$. Denote by $x_{k,i}$ the $i$-th element of $x_{k}$ and by $\nabla f_{i}(x)$ the $i$-th element of $\nabla f(x)$. Then the $(i,j)$-th element of $D_{1}(x_{k})$ is given by $[D_{1}(x_{k})]_{i,j}=(\nabla f_{i}(x_{k})-\nabla f_{i}(x_{k-1}))/(x_{k,j}-x_{k-1,j})$.
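A Python sketch of Algorithm III might look as follows; only gradients are evaluated. The seeding of the first backward difference (`seed_step`) and the stopping rule are our own choices, not specified in the paper, and in practice one must guard against stalled coordinates, which would make a denominator of $D_{1}$ vanish:

```python
import numpy as np

def algorithm_3(x0, grad, D, iters=30, tol=1e-10, seed_step=1e-4):
    """Sketch of (30)-(31): the Hessian is replaced by the backward
    difference D1 of the gradient; D is a positive diagonal matrix."""
    x = np.asarray(x0, dtype=float)
    x_prev = x + seed_step            # seed so D1 is defined at k = 0
    g_prev = grad(x_prev)
    for k in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # [D1]_{i,j} = (grad_i(x_k) - grad_i(x_{k-1})) / (x_{k,j} - x_{k-1,j})
        D1 = (g - g_prev)[:, None] / (x - x_prev)[None, :]
        d = D @ g
        for _ in range(k):            # recursion (31)
            d = D @ g + d - D @ (D1 @ d)
        x_prev, g_prev = x, g
        x = x - d
    return x
```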

To reduce the computational cost further, we replace the Hessian matrix $H(x_{k})$ with a diagonal matrix $\Lambda(x_{k})$ that has the same diagonal elements as the Hessian matrix $H(x_{k})$, that is, $\Lambda(x_{k})={\rm diag}\{\frac{\partial^{2}f(x)}{\partial x_{1}^{2}},\cdots,\frac{\partial^{2}f(x)}{\partial x_{n}^{2}}\}|_{x=x_{k}}$. As a result, algorithm (28)-(29) is modified as
Algorithm IV:

$x_{k+1}=x_{k}-\tilde{g}_{k}(x_{k})$, (32)

$\tilde{g}_{k}(x_{k})=D\nabla f(x_{k})+(I-D\Lambda(x_{k}))\tilde{g}_{k-1}(x_{k}),\quad\tilde{g}_{0}(x_{k})=D\nabla f(x_{k})$, (33)

where $D>0$ is an appropriate diagonal matrix.

Remark 3.

Provided that we select $D$ such that $I-D\Lambda(x_{k})>0$ and $\rho(I-D\Lambda(x_{k}))<1$, Algorithm IV is superlinearly convergent.
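A Python sketch of Algorithm IV under the condition of Remark 3 is given below; since $D$ and $\Lambda(x_{k})$ are diagonal, each inner recursion step is elementwise and costs only $O(n)$ operations. The interface, in particular the `hess_diag` callable returning the vector of second partial derivatives, is our own assumption:

```python
import numpy as np

def algorithm_4(x0, grad, hess_diag, d, iters=30, tol=1e-10):
    """Sketch of (32)-(33). `d` holds the positive diagonal entries of D,
    to be chosen so that rho(I - D Lambda(x_k)) < 1 (cf. Remark 3)."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        lam = hess_diag(x)            # diagonal of H(x_k)
        v = d * g                     # \tilde g_0(x_k) = D grad f(x_k)
        for _ in range(k):            # \tilde g_j = D grad f + (I - D Lambda) \tilde g_{j-1}
            v = d * g + (1.0 - d * lam) * v
        x = x - v
    return x
```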

From the above discussion, the computational complexity of Algorithms I, II, III, and IV decreases successively.

IV Conclusion

In this paper, several optimization algorithms have been developed. They are inspired by the optimal trajectory of the related OCP. The algorithms converge more rapidly than gradient descent. Meanwhile, unlike Newton’s method, they still work even if the Hessian matrix at an iterate is singular or ill-conditioned. The convergence rate of the algorithms is superlinear if the adjustable matrices are selected as required.

References

  • [1] S. S. Rao, Engineering optimization: theory and practice. John Wiley & Sons, 2019.
  • [2] A. D. Belegundu and T. R. Chandrupatla, Optimization concepts and applications in engineering. Cambridge University Press, 2019.
  • [3] N. V. Banichuk, Introduction to optimization of structures. Springer Science & Business Media, 2013.
  • [4] I. M. Gelfand, R. A. Silverman, et al., Calculus of variations. Courier Corporation, 2000.
  • [5] P. Wolfe, “The simplex method for quadratic programming,” Econometrica: Journal of the Econometric Society, pp. 382–398, 1959.
  • [6] A. Törn and A. Žilinskas, Global optimization, vol. 350. Springer, 1989.
  • [7] M. Abdel-Basset, L. Abdel-Fatah, and A. K. Sangaiah, “Metaheuristic algorithms: A comprehensive review,” Computational intelligence for multimedia big data on the cloud with engineering applications, pp. 185–231, 2018.
  • [8] E. K. Chong, W.-S. Lu, and S. H. Żak, An Introduction to Optimization: With Applications to Machine Learning. John Wiley & Sons, 2023.
  • [9] S. Du, J. Lee, H. Li, L. Wang, and X. Zhai, “Gradient descent finds global minima of deep neural networks,” in International conference on machine learning, pp. 1675–1685, PMLR, 2019.
  • [10] J. D. Faires and R. L. Burden, Numerical methods. Thomson, 2003.
  • [11] S. P. Boyd and L. Vandenberghe, Convex optimization. Cambridge university press, 2004.
  • [12] A. G. Baydin, R. Cornish, D. M. Rubio, M. Schmidt, and F. Wood, “Online learning rate adaptation with hypergradient descent,” arXiv:1703.04782, 2017.
  • [13] Z. Fei, Z. Wu, Y. Xiao, J. Ma, and W. He, “A new short-arc fitting method with high precision using adam optimization algorithm,” Optik, vol. 212, p. 164788, 2020.
  • [14] S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv:1609.04747, 2016.
  • [15] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to algorithms. MIT press, 2022.
  • [16] L. Bottou, “Large-scale machine learning with stochastic gradient descent,” in Proceedings of COMPSTAT’2010: 19th International Conference on Computational Statistics, Paris, France, August 22-27, 2010, Keynote, Invited and Contributed Papers, pp. 177–186, Springer, 2010.
  • [17] N. Qian, “On the momentum term in gradient descent learning algorithms,” Neural networks, vol. 12, no. 1, pp. 145–151, 1999.
  • [18] J. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods for online learning and stochastic optimization,” Journal of Machine Learning Research, vol. 12, no. 7, 2011.
  • [19] B. O’Donoghue and E. Candes, “Adaptive restart for accelerated gradient schemes,” Foundations of Computational Mathematics, vol. 15, pp. 715–732, 2015.
  • [20] M. D. Zeiler, “Adadelta: an adaptive learning rate method,” arXiv:1212.5701, 2012.
  • [21] C. G. Broyden, “Quasi-Newton methods and their application to function minimisation,” Mathematics of Computation, vol. 21, no. 99, pp. 368–381, 1967.
  • [22] R. S. Dembo, S. C. Eisenstat, and T. Steihaug, “Inexact Newton methods,” SIAM Journal on Numerical Analysis, vol. 19, no. 2, pp. 400–408, 1982.
  • [23] R. Fletcher and T. L. Freeman, “A modified Newton method for minimization,” Journal of Optimization Theory and Applications, vol. 23, pp. 357–372, 1977.
  • [24] Y. Xu, Z. Guo, H. Wang, and H. Zhang, “Optimization method based on optimal control,” arXiv preprint arXiv:2309.05280, 2023.
  • [25] H. Zhang, H. Wang, Y. Xu, and Z. Guo, “Optimization methods rooted in optimal control,” Science China Information Sciences, to appear.
  • [26] A. W. Marshall, I. Olkin, and B. C. Arnold, Inequalities: Theory of Majorization and Its Applications, 1979.