
Searching the solution landscape by generalized high-index saddle dynamics

Jianyuan Yin, School of Mathematical Sciences, Peking University, Beijing 100871, China ([email protected]).    Bing Yu, School of Mathematical Sciences, Peking University, Beijing 100871, China ([email protected]).    Lei Zhang, Beijing International Center for Mathematical Research, Center for Quantitative Biology, Peking University, Beijing 100871, China ([email protected]).
Abstract

We introduce a generalized numerical algorithm to construct the solution landscape, which is a pathway map consisting of all stationary points and their connections. Based on the high-index optimization-based shrinking dimer (HiOSD) method for gradient systems, a generalized high-index saddle dynamics (GHiSD) is proposed to compute any-index saddles of dynamical systems. Linear stability of the index-k saddle point can be proved for the GHiSD system. A combination of the downward search algorithm and the upward search algorithm is applied to systematically construct the solution landscape, which not only provides a powerful and efficient way to compute multiple solutions without tuning initial guesses, but also reveals the relationships between different solutions. Numerical examples, including a three-dimensional example and the phase field model, demonstrate the novel concept of the solution landscape by showing the connected pathway maps.

keywords:
saddle point, energy landscape, solution landscape, pathway map, dynamical system, phase field
{AMS}

37M05, 49K35, 37N30, 34K28, 65P99

1 Introduction

The energy landscape, which is a mapping of all possible configurations of the system to their energy, exhibits a number of local minima separated by barriers. The energy landscape has been widely used to elucidate the structure and thermodynamics of energy functions in a broad range of applications, such as protein folding [30, 32, 37, 42], catalysis [1, 29], Lennard–Jones clusters [3, 43], phase transitions [8, 11, 21], and artificial neural networks [9, 10, 13, 19]. The index-1 saddle point, often characterized as the transition state, is located at the point of the minimal energy barrier between two minima. The minimum energy path connects two minima and the transition state via a continuous curve on the energy landscape [14, 39]. Besides minima and transition states, the stationary points of the energy function also include high-index saddle points. To characterize the nature of a nondegenerate saddle point, the Morse index of a saddle point is defined as the maximal dimension of a subspace on which its Hessian is negative definite [35]. An intriguing mathematical-physics problem is to efficiently search for all stationary points of a multivariable energy function, including both minima and saddle points [28, 33]. A large number of numerical methods have been proposed to compute minima and index-1 saddle points on complicated energy landscapes in the past two decades [12, 15, 24, 47, 50, 51].

Different from the energy landscape, we introduce a novel and more informative concept of the solution landscape, which is a pathway map consisting of all stationary points and their connections. The solution landscape can present how minima are connected with index-1 saddle points, and how lower-index saddle points are connected to higher-index ones, and finally to a parent state, an index-k saddle point (k-saddle), as shown in Fig. 1(A). We illustrate the solution landscape with a two-dimensional example,

(1) E(x,y)=(x^{2}-1)^{2}+(y^{2}-1)^{2}.

According to Morse indices, the solutions to \nabla E(x,y)=\bm{0} are classified as: four minima (\pm 1,\pm 1), four 1-saddles (\pm 1,0) and (0,\pm 1), and one maximum (0,0). From the maximum (2-saddle), the solution landscape can be constructed down to the four minima by following unstable directions. Specifically, the four 1-saddles can be found from the 2-saddle along its two unstable eigenvectors of the Hessian, and each 1-saddle connects two minima by following the gradient flows from both sides of its unstable eigenvector, as shown in Fig. 1(B).

Figure 1: (A) A diagram of the solution landscape. (B) The solution landscape of the function Eq. 1.
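As a quick sanity check on this classification (not part of the original analysis), the following Python sketch enumerates the nine stationary points of Eq. 1 and counts the negative Hessian eigenvalues at each, recovering the Morse indices stated above.

import numpy as np
from itertools import product

def grad(x, y):
    # gradient of E(x, y) = (x^2 - 1)^2 + (y^2 - 1)^2
    return np.array([4 * x * (x**2 - 1), 4 * y * (y**2 - 1)])

def hess(x, y):
    # Hessian of E(x, y) is diagonal
    return np.diag([12 * x**2 - 4, 12 * y**2 - 4])

# the stationary points: each coordinate is a root of 4t(t^2 - 1) = 0
for x, y in product([-1.0, 0.0, 1.0], repeat=2):
    assert np.allclose(grad(x, y), 0.0)
    index = int(np.sum(np.linalg.eigvalsh(hess(x, y)) < 0))  # Morse index
    print(f"({x:g}, {y:g}): index {index}")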

The above definition of the solution landscape is intuitive, but it is challenging to define and construct the solution landscape numerically. Multiple stationary points can be found by numerically solving the nonlinear equations \nabla E(\bm{x})=\bm{0} with extensive algorithms, such as homotopy methods [4, 23, 33] and deflation techniques [2, 17]. However, as more solutions are discovered, it becomes increasingly difficult to tune initial guesses for the remaining solutions and to determine whether all solutions have been found. Moreover, the relationships between the solutions are not portrayed by these methods. Recently, Yin et al. proposed a numerical procedure to construct a pathway map on the energy landscape based on the high-index optimization-based shrinking dimer (HiOSD) method [45, 46]. A downward search constructs the pathway map from a k-saddle to the connected lower-index saddles, and the HiOSD method also embeds an upward search with a selected direction to find a higher-index saddle, so the entire search can navigate up and down on the energy landscape.

Generally, a dynamical system, \dot{\bm{x}}=\bm{F}(\bm{x}), is an evolution rule that defines a trajectory as a function of time on a set of states (the phase space) [34]. It has been widely applied to describe dynamics in physics, chemistry and biology, such as kinetic equations [5, 40], Navier–Stokes equations [31, 41] and biochemical reactions [7, 36, 38]. Similar to gradient systems, an index-k saddle point is a stationary point where the Jacobian has exactly k eigenvalues with a positive real part [44]. Although the energy landscape no longer exists in general dynamical systems, the solution landscape is generic and valid in both gradient and non-gradient systems. However, the original procedure cannot deal with non-gradient systems and needs to be generalized.

In this paper, we introduce a generalized numerical method to construct the solution landscape for a dynamical system. A generalized high-index saddle dynamics (GHiSD) method is proposed to find high-index saddles of non-gradient systems, and linear stability of the index-k saddle point is proved for the GHiSD system. Based on the GHiSD method, we develop a systematic approach that combines the downward search and the upward search to efficiently construct the solution landscape, starting from a high-index saddle and ending with multiple sinks. We apply a three-dimensional example and the phase field model as numerical examples to demonstrate the novel concept of the solution landscape.

2 GHiSD Methods

2.1 Dynamical systems

Given an autonomous dynamical system [44]

(2) \dot{\bm{x}}=\bm{F}(\bm{x}),\quad\bm{x}\in\mathbb{R}^{n},

where \bm{F}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n} is a \mathcal{C}^{r} (r\geqslant 2) function, a point \bm{x}^{*}\in\mathbb{R}^{n} is called a stationary point (or equilibrium solution) of Eq. 2 if \bm{F}(\bm{x}^{*})=\bm{0}. Let J(\bm{x})=\nabla\bm{F}(\bm{x}) denote the Jacobian of \bm{F}(\bm{x}), and \langle\cdot,\cdot\rangle denote an inner product on \mathbb{R}^{n}, i.e. \langle\bm{x},\bm{y}\rangle=\bm{y}^{\top}\bm{x}.

For a stationary point \bm{x}^{*}, taking \bm{x}=\bm{x}^{*}+\bm{y} in Eq. 2, we have

(3) \dot{\bm{y}}=J(\bm{x}^{*})\bm{y}+\mathcal{O}(\|\bm{y}\|^{2}),

where \|\cdot\| denotes the norm induced by the inner product. The associated linear system

(4) \dot{\bm{y}}=J(\bm{x}^{*})\bm{y}

is used to determine the stability of \bm{x}^{*}. Let \{\bm{w}_{1},\cdots,\bm{w}_{k_{\mathrm{u}}}\}, \{\bm{w}_{k_{\mathrm{u}}+1},\cdots,\bm{w}_{k_{\mathrm{u}}+k_{\mathrm{s}}}\} and \{\bm{w}_{k_{\mathrm{u}}+k_{\mathrm{s}}+1},\cdots,\bm{w}_{k_{\mathrm{u}}+k_{\mathrm{s}}+k_{\mathrm{c}}}\}\subset\mathbb{C}^{n} denote the (generalized) right eigenvectors of J(\bm{x}) corresponding to the eigenvalues of J(\bm{x}) with positive, negative and zero real parts, respectively, where k_{\mathrm{u}}+k_{\mathrm{s}}+k_{\mathrm{c}}=n. With these eigenvectors, we define the unstable, stable and center subspaces of the Jacobian J(\bm{x}), respectively, as \mathcal{W}^{\mathrm{u}}(\bm{x}), \mathcal{W}^{\mathrm{s}}(\bm{x}) and \mathcal{W}^{\mathrm{c}}(\bm{x}):

(5) \mathcal{W}^{\mathrm{u}}(\bm{x})=\mathrm{span}_{\mathbb{C}}\{\bm{w}_{1},\cdots,\bm{w}_{k_{\mathrm{u}}}\}\cap\mathbb{R}^{n},
\mathcal{W}^{\mathrm{s}}(\bm{x})=\mathrm{span}_{\mathbb{C}}\{\bm{w}_{k_{\mathrm{u}}+1},\cdots,\bm{w}_{k_{\mathrm{u}}+k_{\mathrm{s}}}\}\cap\mathbb{R}^{n},
\mathcal{W}^{\mathrm{c}}(\bm{x})=\mathrm{span}_{\mathbb{C}}\{\bm{w}_{k_{\mathrm{u}}+k_{\mathrm{s}}+1},\cdots,\bm{w}_{k_{\mathrm{u}}+k_{\mathrm{s}}+k_{\mathrm{c}}}\}\cap\mathbb{R}^{n}.

From the primary decomposition theorem [27], \mathbb{R}^{n} can be decomposed as a direct sum:

(6) \mathbb{R}^{n}=\mathcal{W}^{\mathrm{u}}(\bm{x})\oplus\mathcal{W}^{\mathrm{s}}(\bm{x})\oplus\mathcal{W}^{\mathrm{c}}(\bm{x}).

By treating J(\bm{x}^{*}) as a linear operator on \mathbb{R}^{n}, \mathcal{W}^{\mathrm{u}}(\bm{x}^{*}), \mathcal{W}^{\mathrm{s}}(\bm{x}^{*}) and \mathcal{W}^{\mathrm{c}}(\bm{x}^{*}) are the invariant subspaces of the linear system Eq. 4.

The nonlinear system Eq. 3 possesses a k_{\mathrm{u}}-dimensional local invariant unstable manifold \mathcal{M}^{\mathrm{u}}_{\mathrm{loc}}(\bm{x}^{*}), a k_{\mathrm{s}}-dimensional local invariant stable manifold \mathcal{M}^{\mathrm{s}}_{\mathrm{loc}}(\bm{x}^{*}), and a k_{\mathrm{c}}-dimensional local invariant center manifold \mathcal{M}^{\mathrm{c}}_{\mathrm{loc}}(\bm{x}^{*}) near the origin. These manifolds intersect at the origin and are tangent there to the respective invariant subspaces \mathcal{W}^{\mathrm{u}}(\bm{x}^{*}), \mathcal{W}^{\mathrm{s}}(\bm{x}^{*}), \mathcal{W}^{\mathrm{c}}(\bm{x}^{*}) [44]. "Local" here means that these manifolds with boundaries are defined only in a neighborhood of \bm{x}^{*}. The invariant subspaces of the linear system Eq. 4 can therefore be used to study the stationary point of the nonlinear system Eq. 3.

A stationary point \bm{x}^{*} of Eq. 2 is called hyperbolic if no eigenvalues of J(\bm{x}^{*}) have zero real part, which means \mathcal{W}^{\mathrm{c}}(\bm{x}^{*})=\{\bm{0}\} and the decomposition Eq. 6 can be rewritten as

(7) \mathbb{R}^{n}=\mathcal{W}^{\mathrm{u}}(\bm{x}^{*})\oplus\mathcal{W}^{\mathrm{s}}(\bm{x}^{*}).

A hyperbolic stationary point is called a saddle if \mathcal{W}^{\mathrm{u}}(\bm{x}^{*}) and \mathcal{W}^{\mathrm{s}}(\bm{x}^{*}) are both nontrivial. The hyperbolic stationary point \bm{x}^{*} is called a sink (source) if all the eigenvalues of J(\bm{x}^{*}) have negative (positive) real parts. The index of a stationary point \bm{x}^{*} is defined as the dimension of the unstable subspace \mathcal{W}^{\mathrm{u}}(\bm{x}^{*}) [44].
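In practice, the index of a hyperbolic stationary point can be estimated by counting the Jacobian eigenvalues with positive real part. The short Python sketch below is our illustration rather than part of the algorithm: it approximates the Jacobian of a user-supplied field F by central differences (the step h is an arbitrary choice) and returns that count.

import numpy as np

def numerical_jacobian(F, x, h=1e-6):
    # central-difference approximation of the Jacobian J(x) = dF/dx
    n = x.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (F(x + e) - F(x - e)) / (2 * h)
    return J

def saddle_index(F, x_star):
    # number of Jacobian eigenvalues with positive real part at a stationary point
    eigvals = np.linalg.eigvals(numerical_jacobian(F, x_star))
    return int(np.sum(eigvals.real > 0))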

2.2 Review of HiOSD method for gradient systems

The HiOSD method is designed for finding index-k saddles of an energy function E(\bm{x}) [46]. For gradient systems \bm{F}(\bm{x})=-\nabla E(\bm{x}), the Jacobian J(\bm{x})=-\nabla^{2}E(\bm{x})=-G(\bm{x}) is self-adjoint with all eigenvalues real, where G(\bm{x}) denotes the Hessian of E(\bm{x}). The hyperbolic k-saddle \bm{x}^{*} is a local maximum on the linear manifold \bm{x}^{*}+\mathcal{W}^{\mathrm{u}}(\bm{x}^{*}) and a local minimum on \bm{x}^{*}+\mathcal{W}^{\mathrm{s}}(\bm{x}^{*}).

Setting aside the dimer approximation for the moment, the high-index saddle dynamics (HiSD) for a k-saddle (k-HiSD) has the following form:

(8a) \dot{\bm{x}}=\left(I-2\sum_{j=1}^{k}\bm{v}_{j}\bm{v}_{j}^{\top}\right)\bm{F}(\bm{x}),
(8b) \dot{\bm{v}}_{i}=-\left(I-\bm{v}_{i}\bm{v}_{i}^{\top}-2\sum_{j=1}^{i-1}\bm{v}_{j}\bm{v}_{j}^{\top}\right)G(\bm{x})\bm{v}_{i},\quad i=1,\cdots,k,

which involves a position variable \bm{x} and k direction variables \bm{v}_{i}. The dynamics Eq. 8 is coupled with an initial condition:

(9) \bm{x}(0)=\bm{x}^{(0)}\in\mathbb{R}^{n},\quad\bm{v}_{i}(0)=\bm{v}_{i}^{(0)}\in\mathbb{R}^{n},\qquad\mathrm{s.t.}\quad\left\langle\bm{v}_{j}^{(0)},\bm{v}_{i}^{(0)}\right\rangle=\delta_{ij},\quad i,j=1,\cdots,k.

The dynamics Eq. 8a represents a transformed gradient flow:

(10) \dot{\bm{x}}=-\mathcal{P}_{\mathcal{W}^{\mathrm{u}}(\bm{x})}\bm{F}(\bm{x})+\left(\bm{F}(\bm{x})-\mathcal{P}_{\mathcal{W}^{\mathrm{u}}(\bm{x})}\bm{F}(\bm{x})\right)=\left(I-2\mathcal{P}_{\mathcal{W}^{\mathrm{u}}(\bm{x})}\right)\bm{F}(\bm{x}),

where \mathcal{P}_{\mathcal{V}} denotes the orthogonal projection operator on a finite-dimensional subspace \mathcal{V}. Here, -\mathcal{P}_{\mathcal{W}^{\mathrm{u}}(\bm{x})}\bm{F}(\bm{x}) is taken as an ascent direction on the subspace \mathcal{W}^{\mathrm{u}}(\bm{x}) and \bm{F}(\bm{x})-\mathcal{P}_{\mathcal{W}^{\mathrm{u}}(\bm{x})}\bm{F}(\bm{x}) is a descent direction on the subspace \mathcal{W}^{\mathrm{s}}(\bm{x}). The dynamics Eq. 8b finds an orthonormal basis of \mathcal{W}^{\mathrm{u}}(\bm{x}). Thanks to the self-adjoint Jacobian J(\bm{x}), we can simply take \bm{v}_{i} as a unit eigenvector corresponding to the i-th smallest eigenvalue of G(\bm{x}). Therefore, the i-th eigenvector \bm{v}_{i} can be obtained by a constrained optimization problem with the knowledge of \bm{v}_{1},\cdots,\bm{v}_{i-1}:

(11) \min_{\bm{v}_{i}\in\mathbb{R}^{n}}\ \langle G(\bm{x})\bm{v}_{i},\bm{v}_{i}\rangle,\qquad\mathrm{s.t.}\quad\langle\bm{v}_{j},\bm{v}_{i}\rangle=\delta_{ij},\quad j=1,2,\cdots,i.

Then the k Rayleigh quotients Eq. 11 are minimized simultaneously using the gradient flow of \bm{v}_{i}:

(12) \dot{\bm{v}}_{i}=-G(\bm{x})\bm{v}_{i}+\langle G(\bm{x})\bm{v}_{i},\bm{v}_{i}\rangle\bm{v}_{i}+2\sum_{j=1}^{i-1}\left\langle G(\bm{x})\bm{v}_{i},\bm{v}_{j}\right\rangle\bm{v}_{j},\quad i=1,\cdots,k,

as the dynamics Eq. 8b.

2.3 GHiSD for non-gradient systems

The GHiSD for a k-saddle (k-GHiSD) of the dynamical system Eq. 2 has the following form:

(13a) \dot{\bm{x}}=\left(I-2\sum_{j=1}^{k}\bm{v}_{j}\bm{v}_{j}^{\top}\right)\bm{F}(\bm{x}),
(13b) \dot{\bm{v}}_{i}=\left(I-\bm{v}_{i}\bm{v}_{i}^{\top}\right)J(\bm{x})\bm{v}_{i}-\sum_{j=1}^{i-1}\bm{v}_{j}\bm{v}_{j}^{\top}\left(J(\bm{x})+J^{\top}(\bm{x})\right)\bm{v}_{i},\quad i=1,\cdots,k,

with an initial condition Eq. 9.

The dynamics Eq. 13a is the same as the dynamics Eq. 8a of HiSD. For non-gradient systems, the Jacobian J(\bm{x}) is not self-adjoint, and the eigenvectors may not be orthogonal. The loss of orthogonality between \mathcal{W}^{\mathrm{u}}(\bm{x}) and \mathcal{W}^{\mathrm{s}}(\bm{x}) indicates that it may be hard to decompose \bm{F}(\bm{x}) according to the direct sum Eq. 7. For this reason, the gentlest ascent dynamics (GAD) for searching 1-saddles in non-gradient systems requires two direction variables to approximate the left and right eigenvectors corresponding to the largest eigenvalue of J(\bm{x}) [16]. Nevertheless, Gu and Zhou later proposed a simplified GAD for non-gradient systems that uses only one direction variable and proved that its stable critical point corresponds to a 1-saddle of the original system [20]. Thus, we adopt a similar idea: the dynamics Eq. 10 can be used to find high-index saddles of non-gradient systems, and the HiSD Eq. 8a is preserved in GHiSD Eq. 13a, as long as \bm{v}_{1},\cdots,\bm{v}_{k} approximate an orthonormal basis of the subspace \mathcal{W}^{\mathrm{u}}(\bm{x}).

The dynamics Eq. 13b aims to find an orthonormal basis \bm{v}_{1},\cdots,\bm{v}_{k} of the subspace \mathcal{W}^{\mathrm{u}}(\bm{x}). The dynamical flow

(14) \dot{\bm{v}}_{i}=J(\bm{x})\bm{v}_{i}+\sum_{j=1}^{i}\xi_{ij}\bm{v}_{j},\quad i=1,\cdots,k,

should preserve the orthonormal condition

(15) \langle\bm{v}_{j},\bm{v}_{i}\rangle=\delta_{ij},\quad i,j=1,\cdots,k,

which indicates

(16) \xi_{ii}=-\langle J(\bm{x})\bm{v}_{i},\bm{v}_{i}\rangle;\quad\xi_{ij}=-\langle J(\bm{x})\bm{v}_{i},\bm{v}_{j}\rangle-\langle\bm{v}_{i},J(\bm{x})\bm{v}_{j}\rangle,\quad j=1,\cdots,i-1.

Thus, the dynamics Eq. 13b is derived from Eq. 14 and Eq. 16, and it accords with Eq. 8b under the condition J(\bm{x})=J^{\top}(\bm{x}).

Next, we show the following theorem on the linear stability of the k-GHiSD Eq. 13 under some assumptions.

Theorem 2.1.

Assume that \bm{F}(\bm{x}) is \mathcal{C}^{2}, \bm{x}^{*}\in\mathbb{R}^{n}, and \{\bm{v}^{*}_{i}\}^{k}_{i=1}\subset\mathbb{R}^{n} satisfies the orthonormal condition Eq. 15. Assume further that the eigenvalues of the Jacobian J^{*}=\nabla\bm{F}(\bm{x}^{*}) are real and nonzero with \lambda_{1}>\cdots>\lambda_{k}>\lambda_{k+1}\geqslant\cdots\geqslant\lambda_{n}, and that the right eigenvector corresponding to \lambda_{j} is \bm{w}_{j}\in\mathbb{R}^{n}. Then (\bm{x}^{*},\bm{v}_{1}^{*},\bm{v}_{2}^{*},\cdots,\bm{v}_{k}^{*}) is a linearly stable stationary point of the dynamics Eq. 13 if and only if \bm{x}^{*} is a k-saddle of the dynamical system Eq. 2 and \{\bm{v}^{*}_{j}\}^{i}_{j=1} is an orthonormal basis of \mathrm{span}\{\bm{w}_{1},\cdots,\bm{w}_{i}\} for i=1,2,\cdots,k.

Proof 2.2.

The Jacobian of the dynamics Eq. 13 is

(17) \mathbb{J}(\bm{x},\bm{v}_{1},\cdots,\bm{v}_{k})=\dfrac{\partial(\dot{\bm{x}},\dot{\bm{v}}_{1},\cdots,\dot{\bm{v}}_{k})}{\partial(\bm{x},\bm{v}_{1},\cdots,\bm{v}_{k})}=\begin{bmatrix}\mathbb{J}_{xx}&\mathbb{J}_{x1}&\mathbb{J}_{x2}&\mathbb{J}_{x3}&\cdots&\mathbb{J}_{xk}\\ *&\mathbb{J}_{11}&O&O&\cdots&O\\ *&*&\mathbb{J}_{22}&O&\cdots&O\\ \vdots&\vdots&\vdots&\vdots&&\vdots\\ *&*&*&*&\cdots&\mathbb{J}_{kk}\end{bmatrix},

whose blocks have the following expressions:

(18) \mathbb{J}_{xx}=\dfrac{\partial\dot{\bm{x}}}{\partial\bm{x}}=\left(I-\sum_{j=1}^{k}2\bm{v}_{j}\bm{v}_{j}^{\top}\right)J(\bm{x}),
\mathbb{J}_{xi}=\dfrac{\partial\dot{\bm{x}}}{\partial\bm{v}_{i}}=-2\left(\bm{v}_{i}\bm{F}(\bm{x})^{\top}+\langle\bm{F}(\bm{x}),\bm{v}_{i}\rangle I\right),
\mathbb{J}_{ii}=\dfrac{\partial\dot{\bm{v}}_{i}}{\partial\bm{v}_{i}}=J(\bm{x})-\left\langle J(\bm{x})\bm{v}_{i},\bm{v}_{i}\right\rangle I-\sum_{j=1}^{i}\bm{v}_{j}\bm{v}_{j}^{\top}\left(J(\bm{x})+J^{\top}(\bm{x})\right).

In the following, \mathbb{J}(\bm{x}^{*},\bm{v}_{1}^{*},\bm{v}_{2}^{*},\cdots,\bm{v}_{k}^{*}) is denoted as \mathbb{J}^{*}, and the blocks of \mathbb{J}^{*} are denoted as \mathbb{J}^{*} with the corresponding subscripts.

“\Leftarrow”: Because \{\bm{v}^{*}_{j}\}^{i}_{j=1} is an orthonormal basis of \mathrm{span}\{\bm{w}_{1},\cdots,\bm{w}_{i}\} for i=1,\cdots,k, the matrix A_{k}=\left[\bm{v}_{1}^{*},\cdots,\bm{v}_{k}^{*}\right]^{\top}J^{*}\left[\bm{v}_{1}^{*},\cdots,\bm{v}_{k}^{*}\right] is upper triangular. We first extend \{\bm{v}^{*}_{j}\}^{k}_{j=1} to an orthonormal basis \{\bm{v}^{*}_{1},\cdots,\bm{v}^{*}_{k},\tilde{\bm{v}}_{k+1},\cdots,\tilde{\bm{v}}_{n}\} of \mathbb{R}^{n}. With \tilde{V}=[\bm{v}^{*}_{1},\cdots,\bm{v}^{*}_{k},\tilde{\bm{v}}_{k+1},\cdots,\tilde{\bm{v}}_{n}], the matrix \tilde{V}^{\top}J^{*}\tilde{V} can be blocked as \begin{bmatrix}A_{k}&*\\ O&\tilde{B}_{k}\end{bmatrix}, and the eigenvalues of \tilde{B}_{k}\in\mathbb{R}^{(n-k)\times(n-k)} are \{\lambda_{k+1},\cdots,\lambda_{n}\}\subset\mathbb{R}. From the real Schur decomposition theorem [18], there exists an orthogonal matrix Q so that Q^{\top}\tilde{B}_{k}Q is upper triangular with real diagonal entries \{\lambda_{k+1},\cdots,\lambda_{n}\}. With [\bm{v}^{*}_{k+1},\cdots,\bm{v}^{*}_{n}]=[\tilde{\bm{v}}_{k+1},\cdots,\tilde{\bm{v}}_{n}]Q, the matrix A=(a_{ij})={V^{*}}^{\top}J^{*}V^{*} is an n-by-n real upper triangular matrix with diagonal entries a_{ii}=\lambda_{i}, where \{\bm{v}^{*}_{i}\}^{n}_{i=1} is an orthonormal basis of \mathbb{R}^{n} and V^{*}=[\bm{v}_{1}^{*},\cdots,\bm{v}_{n}^{*}]. For i=1,2,\cdots,k, the matrix A can be blocked as A=\begin{bmatrix}A_{i}&C_{i}\\ O&B_{i}\end{bmatrix}, where A_{i}\in\mathbb{R}^{i\times i} and B_{i}\in\mathbb{R}^{(n-i)\times(n-i)} are both upper triangular with diagonal entries \lambda_{1},\cdots,\lambda_{i} and \lambda_{i+1},\cdots,\lambda_{n}, respectively.

Since \bm{x}^{*} is a k-saddle of the dynamical system Eq. 2, we have \bm{F}(\bm{x}^{*})=\bm{0} and \lambda_{k}>0>\lambda_{k+1}. Evaluating Eq. 13 at (\bm{x}^{*},\bm{v}_{1}^{*},\cdots,\bm{v}_{k}^{*}), we have \dot{\bm{x}}=\bm{0} and

(19) \dot{\bm{v}}_{i}=\left(I-{\bm{v}_{i}^{*}}{\bm{v}_{i}^{*}}^{\top}\right)\sum_{j=1}^{i}a_{ji}\bm{v}_{j}^{*}-\sum_{j=1}^{i-1}(a_{ij}+a_{ji}){\bm{v}_{j}^{*}}=\bm{0},\quad i=1,\cdots,k,

so (\bm{x}^{*},\bm{v}_{1}^{*},\cdots,\bm{v}_{k}^{*}) is a stationary point of the dynamics Eq. 13. Moreover, \bm{F}(\bm{x}^{*})=\bm{0} implies that \mathbb{J}_{xi}^{*} is null and \mathbb{J}^{*} is block lower triangular, so the eigenvalues of \mathbb{J}^{*} are determined by \mathbb{J}^{*}_{xx} and \mathbb{J}^{*}_{ii}. From

(20) {V^{*}}^{\top}\mathbb{J}^{*}_{xx}V^{*}={V^{*}}^{\top}\Big{(}I-2\sum_{j=1}^{k}\bm{v}^{*}_{j}{\bm{v}^{*}_{j}}^{\top}\Big{)}V^{*}A=\begin{bmatrix}-I_{k}&O\\ O&I_{n-k}\end{bmatrix}A=\begin{bmatrix}-A_{k}&-C_{k}\\ O&B_{k}\end{bmatrix},

the eigenvalues of \mathbb{J}^{*}_{xx} are -\lambda_{1},\cdots,-\lambda_{k},\lambda_{k+1},\cdots,\lambda_{n}, all negative. Similarly, from

(21) {V^{*}}^{\top}\mathbb{J}^{*}_{ii}V^{*}={V^{*}}^{\top}\left(J^{*}-\lambda_{i}I-\sum_{j=1}^{i}\bm{v}^{*}_{j}{\bm{v}^{*}_{j}}^{\top}\left(J^{*}+{J^{*}}^{\top}\right)\right)V^{*}
=A-\lambda_{i}I-{V^{*}}^{\top}\left(\sum_{j=1}^{i}\bm{v}^{*}_{j}{\bm{v}^{*}_{j}}^{\top}\right)V^{*}\left(A+A^{\top}\right)
=A-\lambda_{i}I-\begin{bmatrix}I_{i}&O\\ O&O\end{bmatrix}\begin{bmatrix}A_{i}+A_{i}^{\top}&C_{i}\\ C_{i}^{\top}&B_{i}+B_{i}^{\top}\end{bmatrix}=\begin{bmatrix}-A_{i}^{\top}&O\\ O&B_{i}\end{bmatrix}-\lambda_{i}I,

the eigenvalues of \mathbb{J}^{*}_{ii} are -\lambda_{1}-\lambda_{i},\cdots,-\lambda_{i-1}-\lambda_{i},-2\lambda_{i},\lambda_{i+1}-\lambda_{i},\cdots,\lambda_{n}-\lambda_{i}, all negative as well. Since all the eigenvalues of \mathbb{J}^{*} are negative, (\bm{x}^{*},\bm{v}_{1}^{*},\cdots,\bm{v}_{k}^{*}) is a linearly stable stationary point of the dynamics Eq. 13.

“\Rightarrow”: Since (\bm{x}^{*},\bm{v}_{1}^{*},\cdots,\bm{v}_{k}^{*}) is a stationary point of the dynamics Eq. 13, we have

(22) \bm{0}=\dot{\bm{x}}=\left(I-2\sum_{i=1}^{k}\bm{v}^{*}_{i}{\bm{v}^{*}_{i}}^{\top}\right)\bm{F}(\bm{x}^{*}).

Then we have \bm{F}(\bm{x}^{*})=\bm{0}, indicating that \bm{x}^{*} is a stationary point of the dynamical system Eq. 2 and \mathbb{J}^{*} is block lower triangular. With a_{ij}=\left\langle J^{*}\bm{v}_{j}^{*},\bm{v}_{i}^{*}\right\rangle, we obtain

(23) \bm{0}=\dot{\bm{v}}_{i}=J^{*}\bm{v}_{i}^{*}-a_{ii}\bm{v}^{*}_{i}-\sum_{j=1}^{i-1}(a_{ij}+a_{ji})\bm{v}^{*}_{j},\quad i=1,2,\cdots,k.

Therefore, for j<i,

(24) a_{ji}=\left\langle J^{*}\bm{v}^{*}_{i},\bm{v}^{*}_{j}\right\rangle=\left\langle\sum_{l=1}^{i-1}(a_{il}+a_{li})\bm{v}^{*}_{l}+a_{ii}\bm{v}^{*}_{i},\bm{v}^{*}_{j}\right\rangle=a_{ij}+a_{ji}

indicates that the matrix A_{k}=(a_{ij})\in\mathbb{R}^{k\times k} is upper triangular. Since all eigenvalues of J^{*} are real, we can similarly extend \{\bm{v}^{*}_{j}\}^{k}_{j=1} to an orthonormal basis \{\bm{v}^{*}_{i}\}^{n}_{i=1} so that A=(a_{ij})={V^{*}}^{\top}J^{*}V^{*} is an n-by-n upper triangular matrix, where V^{*}=[\bm{v}_{1}^{*},\cdots,\bm{v}_{n}^{*}]. The eigenvalues of J^{*} are \{a_{11},\cdots,a_{nn}\}, the diagonal of A.

With a derivation similar to Eq. 20, the eigenvalues of \mathbb{J}^{*}_{xx} are -a_{11},\cdots,-a_{kk},a_{k+1,k+1},\cdots,a_{nn}, which are all negative by the linear stability of (\bm{x}^{*},\bm{v}_{1}^{*},\cdots,\bm{v}_{k}^{*}). Therefore, \bm{x}^{*} is a k-saddle of the dynamical system Eq. 2. Similarly to Eq. 21, for i=1,\cdots,k, since the eigenvalues of \mathbb{J}^{*}_{ii},

(25) \{-a_{11}-a_{ii},\cdots,-a_{i-1,i-1}-a_{ii},-2a_{ii},a_{i+1,i+1}-a_{ii},\cdots,a_{nn}-a_{ii}\},

are all negative, we have a_{ii}>a_{i+1,i+1}. Therefore, the diagonal elements of A satisfy a_{11}>\cdots>a_{kk}>0, and consequently a_{ii}=\lambda_{i} for i=1,2,\cdots,k. Furthermore, from Eq. 23, \{\bm{v}^{*}_{j}\}^{i}_{j=1} is an orthonormal basis of \mathrm{span}\{\bm{w}_{1},\cdots,\bm{w}_{i}\} for i=1,2,\cdots,k, which completes the proof.

Remark: In the above theorem, all eigenvalues of J(\bm{x}^{*}) are assumed to be real to simplify the derivations. This assumption can be relaxed so that only the eigenvalues \lambda_{k+1},\cdots,\lambda_{n} are required to have nonzero real parts. Specifically, assuming that \lambda_{1}>\cdots>\lambda_{k}>\mathrm{Re}\,\lambda_{k+1}\geqslant\cdots\geqslant\mathrm{Re}\,\lambda_{n}, all nonzero, the proof can be carried out similarly.

Assuming that the eigenvalues \lambda_{1},\cdots,\lambda_{k} are real, the only difference between GHiSD Eq. 13 and HiSD Eq. 8 results from J(\bm{x})\neq J^{\top}(\bm{x}). However, if J^{*} has a pair of complex eigenvalues with positive real parts, (\bm{x}^{*},\bm{v}_{1},\cdots,\bm{v}_{k}) cannot be a stable stationary point of GHiSD Eq. 13 for any (\bm{v}_{1},\cdots,\bm{v}_{k}). In spite of the loss of stable convergence of each individual \bm{v}_{i}, the subspace \mathrm{span}\{\bm{v}_{1},\cdots,\bm{v}_{k}\} can still converge stably to \mathcal{W}^{\mathrm{u}}(\bm{x}), which also achieves our goal. Therefore, GHiSD Eq. 13 can also be applied to search for hyperbolic high-index saddle points with complex eigenvalues.

2.4 Numerical algorithms of GHiSD

Now we consider the numerical implementation of GHiSD. The dynamics Eq. 13b of GHiSD iterates \bm{v}_{i} along J(\bm{x})\bm{v}_{i} while maintaining the orthonormal condition Eq. 15. Applying an explicit Euler scheme with a sufficiently small step size \beta>0, we can establish \mathcal{W}^{\mathrm{u}}(\bm{x}) via

(26) \left\{\begin{aligned} &\tilde{\bm{v}}^{(m+1)}_{i}=\bm{v}^{(m)}_{i}+\beta J(\bm{x})\bm{v}^{(m)}_{i},\qquad i=1,\cdots,k,\\ &\left[\bm{v}_{1}^{(m+1)},\cdots,\bm{v}_{k}^{(m+1)}\right]=\mathrm{orth}\left(\left[\tilde{\bm{v}}_{1}^{(m+1)},\cdots,\tilde{\bm{v}}_{k}^{(m+1)}\right]\right),\end{aligned}\right.

which can be regarded as a power method for I+\beta J(\bm{x}) that computes the eigenvectors corresponding to the eigenvalues of J(\bm{x}) with positive real parts at an exponential rate. Here \mathrm{orth}(\cdot) is a normalized orthogonalization function, which can be realized by a Gram–Schmidt procedure.
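For instance, \mathrm{orth}(\cdot) and one Euler step of Eq. 26 can be sketched in Python as follows, using a reduced QR factorization in place of Gram–Schmidt (equivalent up to column signs); the Jacobian–vector product routine jvp is a placeholder assumed to return J(\bm{x})\bm{v}.

import numpy as np

def orth(V):
    # orthonormalize the columns of V (reduced QR; Gram-Schmidt up to column signs)
    Q, _ = np.linalg.qr(V)
    return Q

def update_directions(jvp, x, V, beta):
    # one explicit Euler step of Eq. (26); the columns of V are v_1, ..., v_k
    V_tilde = V + beta * np.column_stack([jvp(x, V[:, i]) for i in range(V.shape[1])])
    return orth(V_tilde)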

Since the Jacobian is usually expensive to compute and store in practice, we adopt the dimer method to avoid evaluating the Jacobian explicitly [25]. A dimer centered at \bm{x} with direction \bm{v}_{i} and length 2l is applied to approximate J(\bm{x})\bm{v}_{i} in Eq. 26:

(27) \left\{\begin{aligned} &\tilde{\bm{v}}^{(m+1)}_{i}=\bm{v}^{(m)}_{i}+\beta\dfrac{\bm{F}(\bm{x}+l\bm{v}^{(m)}_{i})-\bm{F}(\bm{x}-l\bm{v}^{(m)}_{i})}{2l},\qquad i=1,\cdots,k,\\ &\left[\bm{v}_{1}^{(m+1)},\cdots,\bm{v}_{k}^{(m+1)}\right]=\mathrm{orth}\left(\left[\tilde{\bm{v}}_{1}^{(m+1)},\cdots,\tilde{\bm{v}}_{k}^{(m+1)}\right]\right),\end{aligned}\right.

which is essentially a central difference for directional derivatives, with l>0 a small constant. Finally, by applying an explicit Euler scheme to the dynamics Eq. 13a of GHiSD, we obtain a numerical algorithm for searching a k-saddle of non-gradient systems:

(28) \left\{\begin{aligned} &\bm{x}^{(m+1)}=\bm{x}^{(m)}+\alpha\Big{(}\bm{F}(\bm{x}^{(m)})-2\sum_{j=1}^{k}\left\langle\bm{F}(\bm{x}^{(m)}),\bm{v}_{j}^{(m)}\right\rangle\bm{v}_{j}^{(m)}\Big{)},\\ &\tilde{\bm{v}}_{i}^{(m+1)}=\bm{v}_{i}^{(m)}+\beta\dfrac{\bm{F}(\bm{x}^{(m+1)}+l\bm{v}^{(m)}_{i})-\bm{F}(\bm{x}^{(m+1)}-l\bm{v}^{(m)}_{i})}{2l},\quad i=1,\cdots,k,\\ &\left[\bm{v}_{1}^{(m+1)},\cdots,\bm{v}_{k}^{(m+1)}\right]=\mathrm{orth}\left(\left[\tilde{\bm{v}}_{1}^{(m+1)},\cdots,\tilde{\bm{v}}_{k}^{(m+1)}\right]\right),\end{aligned}\right.

where \alpha,\beta>0 are step sizes. In practice, the discretization scheme Eq. 28 is implemented with sufficiently small step sizes to ensure numerical stability and convergence.
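As an illustration of the scheme Eq. 28, the Python sketch below iterates the position and the k dimer directions until the force becomes small. It is a minimal sketch rather than the authors' implementation: the field F, the step sizes alpha and beta, the dimer length l, the tolerance and the iteration cap are all placeholders to be tuned for a specific problem.

import numpy as np

def ghisd(F, x0, V0, k, alpha=1e-3, beta=1e-3, l=1e-6, tol=1e-8, max_iter=100000):
    # explicit Euler discretization (Eq. 28) of k-GHiSD with the dimer approximation;
    # x0 is the initial position and V0 an n-by-k matrix of initial directions
    x = x0.copy()
    V, _ = np.linalg.qr(V0)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        # reflected force of Eq. (13a): F - 2 * sum_j <F, v_j> v_j
        x = x + alpha * (f - 2.0 * V @ (V.T @ f))
        # dimer approximation of J(x) v_i (Eq. 27), then re-orthonormalization
        JV = np.column_stack([(F(x + l * V[:, i]) - F(x - l * V[:, i])) / (2 * l)
                              for i in range(k)])
        V, _ = np.linalg.qr(V + beta * JV)
    return x, V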

2.5 Construction of solution landscape

In this subsection, we introduce a systematic numerical procedure to construct the solution landscape of a dynamical system. This procedure consists of a downward search algorithm and an upward search algorithm, and generalizes the construction of the pathway map in [45].

The downward search is the core procedure for finding stationary points starting from a high-index saddle. Given a k-saddle \hat{\bm{x}}, let \hat{\bm{v}}_{1},\cdots,\hat{\bm{v}}_{k} denote the orthonormal basis of \mathcal{W}^{\mathrm{u}}(\hat{\bm{x}}) in Theorem 2.1. We choose a direction \hat{\bm{v}}_{i} from the unstable directions \{\hat{\bm{v}}_{1},\cdots,\hat{\bm{v}}_{k}\}, and slightly perturb the parent state \hat{\bm{x}} along this direction. An m-GHiSD (m<k) is then started from the point \hat{\bm{x}}\pm\epsilon\hat{\bm{v}}_{i}, and its m initial directions are taken from the unstable directions, excluding \hat{\bm{v}}_{i}. A typical choice for m-GHiSD in a downward search is (\hat{\bm{x}}\pm\epsilon\hat{\bm{v}}_{m+1},\hat{\bm{v}}_{1},\cdots,\hat{\bm{v}}_{m}). This procedure is repeated for newly found saddles until sinks are reached. The downward search algorithm is presented in detail as Algorithm 1.

Algorithm 1 Downward search
Input:  \bm{F}\in\mathcal{C}^{2}(\mathbb{R}^{n},\mathbb{R}^{n}), a \hat{k}-saddle \hat{\bm{x}} of \bm{F}, \epsilon>0.
1:  Calculate an orthonormal basis \{\hat{\bm{v}}_{1},\cdots,\hat{\bm{v}}_{\hat{k}}\} of \mathcal{W}^{\mathrm{u}}(\hat{\bm{x}}) using Eq. 27;
2:  Set the queue \mathcal{A}=\{(\hat{\bm{x}},\hat{k}-1,\{\hat{\bm{v}}_{1},\cdots,\hat{\bm{v}}_{\hat{k}}\})\}, the solution set \mathcal{S}=\{\hat{\bm{x}}\}, and the relation set \mathcal{R}=\varnothing;
3:  while \mathcal{A} is not empty do
4:     Pop (\bm{x},m,\{\bm{v}_{1},\cdots,\bm{v}_{k}\}) from \mathcal{A};
5:     Push (\bm{x},m-1,\{\bm{v}_{1},\cdots,\bm{v}_{k}\}) into \mathcal{A} if m\geqslant 1;
6:     for i=1,\cdots,k do
7:        Determine the initial directions: \{\bm{v}_{j}:j=1,\cdots,m+1,\ j\neq\min(i,m+1)\};
8:        if m-GHiSD from \bm{x}\pm\epsilon\bm{v}_{i} converges to (\tilde{\bm{x}},\tilde{\bm{v}}_{1},\cdots,\tilde{\bm{v}}_{m}) then
9:           \mathcal{R}\leftarrow\mathcal{R}\cup\{(\bm{x},\tilde{\bm{x}})\};
10:           if \tilde{\bm{x}}\notin\mathcal{S} then
11:              \mathcal{S}\leftarrow\mathcal{S}\cup\{\tilde{\bm{x}}\};
12:              Push (\tilde{\bm{x}},m-1,\{\tilde{\bm{v}}_{1},\cdots,\tilde{\bm{v}}_{m}\}) into \mathcal{A} if m\geqslant 1;
Output:  The solution set \mathcal{S} and the relation set \mathcal{R}.
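A compact Python sketch of this downward search, reusing the ghisd routine sketched after Eq. 28 (so ghisd and its parameters remain assumptions), might look as follows. The convergence test of line 8 is omitted, the m=0 case is handled by a plain forward flow toward a sink, and duplicates are detected by a simple coordinate tolerance, which simplifies line 10.

from collections import deque
import numpy as np

def flow(F, x0, alpha=1e-3, tol=1e-8, max_iter=100000):
    # plain forward Euler flow of x_dot = F(x) toward a sink (the m = 0 case)
    x = x0.copy()
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        x = x + alpha * f
    return x

def downward_search(F, ghisd, x_hat, V_hat, eps=1e-2, tol=1e-4):
    # x_hat: a k_hat-saddle; V_hat: n-by-k_hat orthonormal basis of W^u(x_hat)
    A = deque([(x_hat, V_hat.shape[1] - 1, V_hat)])   # search queue (line 2)
    S = [x_hat]                                        # solution set
    R = []                                             # relation set of (parent, child) pairs
    while A:
        x, m, V = A.popleft()
        if m >= 1:
            A.append((x, m - 1, V))
        for i in range(V.shape[1]):
            # initial directions of line 7, in 0-based indexing
            cols = [j for j in range(m + 1) if j != min(i, m)]
            for sign in (+1.0, -1.0):
                x0 = x + sign * eps * V[:, i]
                if m >= 1:
                    x_new, V_new = ghisd(F, x0, V[:, cols], m)
                else:
                    x_new, V_new = flow(F, x0), None
                R.append((x, x_new))
                if all(np.linalg.norm(x_new - s) > tol for s in S):
                    S.append(x_new)
                    if m >= 1:
                        A.append((x_new, m - 1, V_new))
    return S, R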

The downward search algorithm drives a systematic search of stationary points starting from a given parent state in a controlled procedure. On the other hand, if the parent state is unknown or multiple parent states exist, an upward search algorithm is adopted to search for high-index saddles from a sink (or a saddle). For an upward search, more directions \hat{\bm{v}}_{1},\cdots,\hat{\bm{v}}_{K} at \hat{\bm{x}} need to be computed using Eq. 27, where K>k is the highest index of the saddles to search for. Similarly, we choose a direction \hat{\bm{v}}_{i} from the stable directions \{\hat{\bm{v}}_{k+1},\cdots,\hat{\bm{v}}_{K}\}, and start an m-GHiSD from \hat{\bm{x}}\pm\epsilon\hat{\bm{v}}_{i}, where the m initial directions should include \hat{\bm{v}}_{i}. A typical choice for m-GHiSD (m>k) in an upward search is (\hat{\bm{x}}\pm\epsilon\hat{\bm{v}}_{m},\hat{\bm{v}}_{1},\cdots,\hat{\bm{v}}_{m}). Algorithm 2 presents the upward search algorithm.

Algorithm 2 Upward search
Input:  \bm{F}\in\mathcal{C}^{2}(\mathbb{R}^{n},\mathbb{R}^{n}), a \hat{k}-saddle \hat{\bm{x}} of \bm{F}, \epsilon>0, the highest searching index K.
1:  Calculate K orthonormal directions \{\hat{\bm{v}}_{1},\cdots,\hat{\bm{v}}_{K}\} at \hat{\bm{x}} using Eq. 27;
2:  Set the stack \mathcal{A}=\{(\hat{\bm{x}},\hat{k}+1,\{\hat{\bm{v}}_{1},\cdots,\hat{\bm{v}}_{K}\})\} and the solution set \mathcal{S}=\{\hat{\bm{x}}\};
3:  while \mathcal{A} is not empty do
4:     Pop (\bm{x},m,\{\bm{v}_{1},\cdots,\bm{v}_{K}\}) from \mathcal{A};
5:     Push (\bm{x},m+1,\{\bm{v}_{1},\cdots,\bm{v}_{K}\}) into \mathcal{A} if m<K;
6:     if m-GHiSD from \bm{x}\pm\epsilon\bm{v}_{m} converges to (\tilde{\bm{x}},\tilde{\bm{v}}_{1},\cdots,\tilde{\bm{v}}_{m}) then
7:        if \tilde{\bm{x}}\notin\mathcal{S} then
8:           Calculate K orthonormal directions \{\tilde{\bm{v}}_{1},\cdots,\tilde{\bm{v}}_{K}\} at \tilde{\bm{x}} using Eq. 27;
9:           \mathcal{S}\leftarrow\mathcal{S}\cup\{\tilde{\bm{x}}\};
10:           Push (\tilde{\bm{x}},m+1,\{\tilde{\bm{v}}_{1},\cdots,\tilde{\bm{v}}_{K}\}) into \mathcal{A} if m<K;
Output:  The solution set \mathcal{S}.

By using a combination of downward searches and upward searches, the entire search can navigate up and down systematically to find the complete solution landscape, as long as the saddle points are connected somewhere.

3 Numerical examples

3.1 Three-dimensional example

We first study a simple three-dimensional (3D) dynamical system to illustrate the solution landscape:

(29) \dot{\bm{x}}=-\begin{bmatrix}0.6&0.1&0\\ -0.1&0.6&-0.05\\ 0&-0.1&0.6\end{bmatrix}\bm{x}+5\begin{bmatrix}(1+(x_{1}-5)^{2})^{-1}\\ (1+(x_{2}-5)^{2})^{-1}\\ (1+(x_{3}-5)^{2})^{-1}\end{bmatrix},\quad\bm{x}=\begin{bmatrix}x_{1}\\ x_{2}\\ x_{3}\end{bmatrix}\in\mathbb{R}^{3}.

All stationary points of Eq. 29 are listed in Table 1, and the complete solution landscape is shown in Fig. 2. To construct the solution landscape, we first find the source a_{1} using 3-GHiSD, which for k=n=3 reduces to the reversed dynamics of Eq. 29. Then we apply the downward search algorithm to construct the complete solution landscape, as shown in Fig. 2(B). Fig. 2(A) shows a 3D diagram of the solution landscape; such a direct visualization is not available for high-dimensional systems.

Tag    Coordinates                        Tag    Coordinates
a_1    (4.1198, 3.4539, 3.7131)           c_7    (-0.3491, 3.7831, 5.7853)
b_1    (4.0355, 1.6896, 3.8422)           c_8    (5.5292, 5.8847, 3.4660)
b_2    (-0.3626, 3.8561, 3.6793)          c_9    (5.5930, 3.4322, 1.0816)
b_3    (5.5995, 3.1849, 3.7347)           c_10   (4.2222, 5.8207, 1.6531)
b_4    (4.2233, 5.8467, 3.4710)           c_11   (4.2247, 5.8813, 5.8445)
b_5    (4.1265, 3.6022, 1.1193)           d_1    (0.2790, 0.4730, 0.4653)
b_6    (4.1123, 3.2900, 5.7716)           d_2    (5.6382, 1.7000, 0.7135)
c_1    (0.2136, 0.8094, 3.8979)           d_3    (0.1779, 0.9943, 5.7094)
c_2    (5.6253, 2.1940, 3.8080)           d_4    (-0.6987, 5.6858, 1.6174)
c_3    (4.0148, 1.2843, 0.6284)           d_5    (-0.7089, 5.7422, 5.8405)
c_4    (4.0496, 1.9728, 5.7357)           d_6    (5.5299, 5.8584, 1.6631)
c_5    (-0.7032, 5.7106, 3.4882)          d_7    (5.5283, 5.9203, 5.8456)
c_6    (-0.3769, 3.9328, 1.1935)
Table 1: The stationary points of the 3D example. Both the tag and the color of each point specify its index: a for a source, b for a 2-saddle, c for a 1-saddle, and d for a sink.
Figure 2: (A) A 3D diagram of the solution landscape. (B) The solution landscape of the 3D example. Each node represents a stationary point, and each arrow represents a dynamical pathway of GHiSD.

The solution landscape not only shows the relationships between different sinks, but also reveals rich information on the pathways of the dynamical system. For example, d_1 and d_2 are two neighbouring sinks connected by the 1-saddle c_3. However, d_1 and d_7 are not neighbours, and the transition pathway from d_1 to d_7 has to pass through the metastable state d_3 via the pathway sequence d_1\rightarrow c_1\rightarrow d_3\rightarrow c_4\rightarrow d_7. In the solution landscape, it is easy to see that d_1 and d_7 are also connected by the 2-saddle b_1, so we can choose the pathway sequence d_1\rightarrow c_1\rightarrow b_1\rightarrow c_4\rightarrow d_7 to avoid being trapped in an intermediate sink.
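For reference, the vector field of Eq. 29 and its analytical Jacobian can be coded as below (our illustration); combined with the saddle_index helper sketched in Section 2.1, this allows the indices reported in Table 1 to be checked, up to the rounding of the tabulated coordinates.

import numpy as np

# matrix and vector field of the 3D example (Eq. 29)
A = np.array([[0.6, 0.1, 0.0],
              [-0.1, 0.6, -0.05],
              [0.0, -0.1, 0.6]])

def F(x):
    return -A @ x + 5.0 / (1.0 + (x - 5.0) ** 2)

def J(x):
    # analytical Jacobian: -A plus the diagonal derivative of 5*(1 + (x_i - 5)^2)^(-1)
    return -A + np.diag(-10.0 * (x - 5.0) / (1.0 + (x - 5.0) ** 2) ** 2)

# e.g. the tabulated sink d1 = (0.2790, 0.4730, 0.4653): its Jacobian eigenvalues
# should all have negative real part (index 0), up to rounding of the coordinates
d1 = np.array([0.2790, 0.4730, 0.4653])
print(np.linalg.eigvals(J(d1)).real)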

3.2 Phase field model

We consider the phase field model as the second numerical example. Phase field models have been widely employed to investigate nucleation and microstructure evolution in phase transformations [6, 22, 48, 49, 51]. Here we consider a phase field model for the order parameter \phi in a fixed square domain \Omega=[0,1]^{2} with periodic boundary conditions. The Ginzburg–Landau free energy is

(30) E(\phi)=\int_{\Omega}\left(\dfrac{\kappa}{2}|\nabla\phi|^{2}+\frac{1}{4}(1-\phi^{2})^{2}\right)\mathrm{d}\bm{x},

where \kappa>0 is the gradient-energy coefficient for isotropic interfacial energy, a critical parameter that determines the solution landscape. A finite difference scheme on a 64\times 64 mesh is applied to spatially discretize \phi in the following numerical simulations. We have verified that the solution landscapes, including the solutions and their indices, remain the same under mesh refinement.

3.2.1 Gradient system

We first consider the Allen–Cahn equation of Eq. 30,

(31) \dot{\phi}=-\dfrac{\delta E}{\delta\phi}=\kappa\Delta\phi+\phi-\phi^{3},

which is a gradient system. For any \kappa>0, the three homogeneous phases, namely \phi_{1}\equiv 1, \phi_{0}\equiv 0, and \phi_{-1}\equiv-1, remain stationary points of Eq. 31. By analyzing the spectrum of the Hessian \nabla^{2}E(\phi)=3\phi^{2}-1-\kappa\Delta, both \phi_{1} and \phi_{-1} are minima of Eq. 30, while the index of \phi_{0} increases as \kappa decreases to zero, as shown in Table 2.

1/\kappa            | (0,4\pi^{2}) | (4\pi^{2},8\pi^{2}) | (8\pi^{2},16\pi^{2}) | (16\pi^{2},20\pi^{2}) | (20\pi^{2},+\infty)
index of \phi_{0}   | 1            | 5                   | 9                    | 13                    | \geqslant 21
Table 2: Index of \phi_{0} at different \kappa for the Ginzburg–Landau free energy.
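The entries of Table 2 can be reproduced by counting the negative eigenvalues of the Hessian at \phi_{0}, which on the periodic unit square are -1+4\pi^{2}\kappa(m^{2}+n^{2}) over integer wave numbers (m,n). A short Python sketch of this count (our illustration, with sample values of \kappa chosen from the five intervals):

import numpy as np

def index_of_phi0(kappa, max_mode=64):
    # Hessian at phi_0 = 0 has eigenvalues -1 + 4*pi^2*kappa*(m^2 + n^2)
    # over integer wave numbers (m, n) of the periodic unit square
    modes = np.arange(-max_mode, max_mode + 1)
    M, N = np.meshgrid(modes, modes)
    eigs = -1.0 + 4.0 * np.pi**2 * kappa * (M**2 + N**2)
    return int(np.sum(eigs < 0))

for kappa in (0.04, 0.02, 0.01, 0.006, 0.004):
    print(kappa, index_of_phi0(kappa))  # expected: 1, 5, 9, 13, 21 (cf. Table 2)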

Starting from the stationary point \phi_{0}, we can obtain multiple stationary points by Algorithm 1. The solution landscapes at several values of \kappa are shown in Fig. 3. Each small image denotes a stationary point of the dynamical system Eq. 31, and each solid arrow represents a GHiSD (HiSD in this case) pathway. In each subfigure, an upper state has a higher energy than a lower state. Furthermore, if \hat{\phi} is a stationary point of Eq. 31, so is -\hat{\phi}. Because of the periodic boundary conditions, a translation of a nonhomogeneous stationary point is also a stationary point, so only one phase of each such family is shown in the solution landscape for simplicity. For the simplest case of \kappa>(4\pi^{2})^{-1}\approx 0.0253, \phi_{0} is a 1-saddle connecting \phi_{1} and \phi_{-1}, as shown in Fig. 3(A).

Figure 3: Solution landscapes for the phase field model at different \kappa. (A) \kappa>0.0253. (B) \kappa=0.02. (C) \kappa=0.01. (D) \kappa=0.006.

At \kappa=0.02, \phi_{0} becomes a 5-saddle. The 1-saddles between \phi_{1} and \phi_{-1} are two kinds of lamellar phases, horizontal and vertical (related by a \pi/2 rotation), and the two 1-saddles are further connected to a 2-saddle. The periodic boundary conditions imply that the Hessian at a nonhomogeneous state has one or two zero eigenvalues. Specifically, the Hessian at the 1-saddle has one zero eigenvalue, while the Hessian at the 2-saddle has two. In Fig. 3(B), only one representative stationary point of each state is shown in the solution landscape. The emergence of zero eigenvalues also explains why no 3-saddles or 4-saddles are found in this system.

As \kappa decreases, more stationary points emerge in this system and the solution landscape becomes more complicated. When \kappa decreases to 0.01, \phi_{0} becomes a 9-saddle, and the solution landscape is shown in Fig. 3(C). To visualize the solution landscape better, a dashed line represents stationary points related by a translation. At \kappa=0.006, \phi_{0} becomes a 13-saddle. Besides the two kinds of lamellar phases along the axes, there are another two 1-saddles between \phi_{1} and \phi_{-1}, which are lamellar phases along x=y and x=-y, respectively. These 1-saddles are bifurcated from the 5-saddles at \kappa=0.02 via several pitchfork bifurcations, and may be missed by traditional methods because of their relatively high energy. With the help of the solution landscape, these stationary points can be easily obtained and clearly illustrated.

3.2.2 Non-gradient system

By adding a shear flow to the dynamics Eq. 31, we obtain a non-gradient dynamical system for the phase field model. We consider a shear flow that pushes along the x-axis, as shown in Fig. 4. In the presence of the shear flow, the corresponding non-gradient dynamics is

(32) \dot{\phi}=\kappa\Delta\phi+\phi-\phi^{3}+\gamma\sin(2\pi y)\,\partial_{x}\phi,

where \gamma is the shear rate [20, 26]. The dynamics Eq. 32 also preserves the symmetry \phi\to-\phi. Because of the shear flow, a translation along the x-axis of a stationary point remains a stationary point, so the Hessians at stationary points that are nonhomogeneous along the x-axis have one zero eigenvalue, and only one phase of each such family is shown in the solution landscape in most cases as well. With the parameter \kappa fixed at 0.01 to compare with Fig. 3(C), we increase the shear rate \gamma gradually from zero to obtain different solution landscapes.
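For completeness, one possible spatial discretization of the right-hand side of Eq. 32 on the 64\times 64 periodic grid is sketched below in Python with periodic central differences (our illustration; a pseudospectral treatment would work equally well). Feeding this discrete field into the GHiSD iteration sketched in Section 2.4 is one way to reproduce computations of this kind.

import numpy as np

N, kappa, gamma = 64, 0.01, 0.02
h = 1.0 / N
y = (np.arange(N) + 0.5) * h                      # cell-centered grid on [0, 1]
shear = gamma * np.sin(2 * np.pi * y)[:, None]    # sin(2*pi*y), constant in x

def rhs(phi_flat):
    # right-hand side of Eq. (32) with periodic central differences;
    # phi is stored row-major as phi[j, i] ~ phi(y_j, x_i)
    phi = phi_flat.reshape(N, N)
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi) / h**2
    dphi_dx = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * h)
    return (kappa * lap + phi - phi**3 + shear * dphi_dx).ravel()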

Figure 4: An illustration of the shear flow.

For any \gamma>0, the three homogeneous states and the horizontal lamellar phase remain stationary points of Eq. 32. Starting from the stationary point \phi_{0}, we can obtain multiple stationary points by Algorithm 1, and the solution landscapes at different \gamma are presented in Fig. 5. At \gamma=0.02, more stationary points emerge in this system, as shown in Fig. 5(A). The appearance of the 3-saddles and 7-saddles can be explained via pitchfork bifurcations from the 2-saddle and the 6-saddle at \gamma=0 in Fig. 3(C). Besides, the two 5-saddles (related by a \pi/2 rotation) and a 1-saddle (the vertical lamellar phase) are twisted. When \gamma increases to 0.04, the 6-saddles at \gamma=0.02 vanish first because a pitchfork bifurcation takes place between the 6-saddle and the 5-saddles, as illustrated in Fig. 6(A).

Figure 5: Solution landscapes for the phase field model at \kappa=0.01 with different shear rates. (A) \gamma=0.02. (B) \gamma=0.04. (C) \gamma=0.06. (D) \gamma=0.16.
Figure 6: Bifurcations for the phase field model at \kappa=0.01 with increasing shear rates. (A) 0.02\leq\gamma\leq 0.04. (B) 0.04\leq\gamma\leq 0.06. (C) 0.06\leq\gamma\leq 0.16.

When \gamma becomes larger, the solution landscape is simplified by the vanishing of multiple stationary points. The twisted 1-saddle no longer exists at \gamma=0.06, which has also been observed with the simplified GAD [20]. This can be explained by a pitchfork bifurcation between the twisted 1-saddle and the 2-saddles, as illustrated in Fig. 6(B). It is worth pointing out that the two lamellar phases go through different and separate changes as \gamma increases. Furthermore, three other pitchfork bifurcations and two saddle-node bifurcations, which occur for \gamma\in(0.04,0.06), are illustrated in Fig. 6(B) as well. As \gamma increases to 0.16, the shear flow becomes dominant and the solution landscape is shown in Fig. 5(D). The 7-saddles at \gamma=0.06 vanish via a pitchfork bifurcation which turns \phi_{0} into a 7-saddle, as illustrated in Fig. 6(C). Thus, all the remaining stationary points at \gamma=0.16 are homogeneous along the x-axis.

4 Conclusions and discussions

A long-standing and fundamental problem in computational mathematics and physics is how to find the family tree of all possible stationary points. In this paper, we introduce the GHiSD method to search the solution landscape for dynamical systems. The concept of the solution landscape describes the pathway map that starts with a parent state and then relates the entire family completely down to the minima (sinks). It not only guides our understanding of the relationships between the minima and the transition states on the energy landscape, but also provides a full picture of the entire family of stationary points in dynamical systems.

The GHiSD method is a generalization of the HiOSD method for gradient systems. It is formulated under the framework of dynamical systems and applicable to search high-index saddle points in both gradient systems and non-gradient systems. With the GHiSD method, we can efficiently construct the solution landscape using the downward search and upward search algorithms to find multiple possible pathways. We use a 3D system and the phase field model with a shear flow as numerical examples to illustrate the solution landscapes in dynamical systems.

Advantages of the proposed method are tremendous. First, it overcomes the difficulty of tuning initial guesses to search the stationary points of dynamical systems required in many other existing methods. Letting the system gently roll off the high-index saddle along its unstable directions, we have an efficient and controlled procedure to find connected lower-index saddles and sinks with the help of GHiSD. Second, our method offers a general mechanism for finding all possible sinks (hence the global minima for energy function) without limitations on system types, as long as the system has a finite number of stationary points. Third, the solution landscape not only identifies the stationary points, but also shows the relationships between all stationary points through the pathway map. More importantly, this hierarchy structure of stationary points reveals rich and hidden physical properties and processes, leading to a deep understanding of physical systems.

Nevertheless, there are several open questions concerning the solution landscape. The parent state is the highest-index saddle and plays a critical role in the solution landscape, but how do we find the parent state in general? Apparently the parent state is not unique in a finite-dimensional system with multiple local maxima, but it may be unique in an infinite-dimensional system. An observation is that the unique parent state often has higher symmetry. For example, the parent state is the homogeneous state \phi_{0} in the phase field model and the well order-reconstruction solution in the Landau–de Gennes free energy in a square box [45]. Moreover, can limit cycles, which are very important in the study of dynamical systems, be identified under the framework of GHiSD? How do we construct the solution landscape on a constrained manifold? Can the evolution of the solution landscape be explained and predicted by bifurcation theory? Both theoretical and numerical investigations of these questions will be of great use.

The concept of the solution landscape can be a critical key for many mathematical problems arising from physics, chemistry and biology. All applications of the energy landscape may be addressed by the solution landscape. For example, the energy landscape of protein folding has been widely studied for decades [30, 32, 37, 42]. One unsolved puzzle is the folding mechanism: in existing models, cosmological timescales would be required for a protein to find its folded configuration, yet naturally occurring proteins fold in milliseconds to seconds. This paradox of protein folding may be resolved by the solution landscape, which is able to capture dynamical pathways on the complicated energy landscape of protein folding.

References

  • [1] Benkovic, S. J., Hammes, G. G., and Hammes-Schiffer, S. Free-energy landscape of enzyme catalysis. Biochemistry 47, 11 (2008), 3317–3321.
  • [2] Brown, K. M., and Gearhart, W. B. Deflation techniques for the calculation of further solutions of a nonlinear system. Numer. Math. 16, 4 (1971), 334–342.
  • [3] Cai, Y., and Cheng, L. Single-root networks for describing the potential energy surface of Lennard-Jones clusters. J. Chem. Phys. 149, 8 (2018), 084102.
  • [4] Chen, C., and Xie, Z. Search extension method for multiple solutions of a nonlinear problem. Comput. Math. with Appl. 47, 2-3 (2004), 327–343.
  • [5] Chen, H., Kandasamy, S., Orszag, S., Shock, R., Succi, S., and Yakhot, V. Extended Boltzmann kinetic equation for turbulent flows. Science 301, 5633 (2003), 633–636.
  • [6] Chen, L.-Q. Phase-field models for microstructure evolution. Annu. Rev. Mater. Res. 32, 1 (2002), 113–140.
  • [7] Chen, W. W., Niepel, M., and Sorger, P. K. Classic and contemporary approaches to modeling biochemical reactions. Genes Dev. 24, 17 (2010), 1861–1875.
  • [8] Cheng, X., Lin, L., E, W., Zhang, P., and Shi, A.-C. Nucleation of ordered phases in block copolymers. Phys. Rev. Lett. 104 (Apr 2010), 148301.
  • [9] Das, R., and Wales, D. J. Energy landscapes for a machine-learning prediction of patient discharge. Phys. Rev. E 93, 6 (2016), 063310.
  • [10] Draxler, F., Veschgini, K., Salmhofer, M., and Hamprecht, F. A. Essentially no barriers in neural network energy landscape. In Proceedings of the 35th International Conference on Machine Learning (Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018), J. Dy and A. Krause, Eds., vol. 80 of Proceedings of Machine Learning Research, PMLR, pp. 1309–1318.
  • [11] Du, Q., Li, R., and Zhang, L. Variational phase field formulations of polarization and phase transition in ferroelectric thin films. SIAM Journal on Applied Mathematics 80, 3 (2020), 1590–1606.
  • [12] Du, Q., and Zhang, L. A constrained string method and its numerical analysis. Commun. Math. Sci. 7, 4 (2009), 1039–1051.
  • [13] E, W., Ma, C., and Wu, L. A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics. Science China Mathematics 63, 7 (2020), 1235–1258.
  • [14] E, W., Ren, W., and Vanden-Eijnden, E. String method for the study of rare events. Phys. Rev. B 66, 5 (2002), 052301.
  • [15] E, W., and Vanden-Eijnden, E. Transition-path theory and path-finding algorithms for the study of rare events. Annu. Rev. Phys. Chem. 61 (2010), 391–420.
  • [16] E, W., and Zhou, X. The gentlest ascent dynamics. Nonlinearity 24, 6 (2011), 1831–1842.
  • [17] Farrell, P. E., Birkisson, A., and Funke, S. W. Deflation techniques for finding distinct solutions of nonlinear partial differential equations. SIAM J. Sci. Comput. 37, 4 (2015), A2026–A2045.
  • [18] Golub, G. H., and Van Loan, C. F. Matrix Computations, fourth ed., vol. 3. JHU Press, Baltimore, Maryland, 2012.
  • [19] Goodfellow, I., Bengio, Y., and Courville, A. Deep Learning. The MIT Press, Cambridge, MA, 2016.
  • [20] Gu, S., and Zhou, X. Simplified gentlest ascent dynamics for saddle points in non-gradient systems. Chaos 28, 12 (2018), 123106.
  • [21] Han, Y., Hu, Y., Zhang, P., and Zhang, L. Transition pathways between defect patterns in confined nematic liquid crystals. J. Comput. Phys. 396 (2019), 1–11.
  • [22] Han, Y., Xu, Z., Shi, A.-C., and Zhang, L. Pathways connecting two opposed bilayers with a fusion pore: a molecularly-informed phase field approach. Soft Matter 16 (2020), 366–374.
  • [23] Hao, W., Hauenstein, J. D., Hu, B., and Sommese, A. J. A bootstrapping approach for computing multiple solutions of differential equations. J. Comput. Appl. Math. 258 (2014), 181–190.
  • [24] Henkelman, G., Jóhannesson, G., and Jónsson, H. Methods for finding saddle points and minimum energy paths. In Theoretical Methods in Condensed Phase Chemistry, S. D. Schwartz, Ed. Springer, Dordrecht, 2002, pp. 269–302.
  • [25] Henkelman, G., and Jónsson, H. A dimer method for finding saddle points on high dimensional potential surfaces using only first derivatives. J. Chem. Phys. 111, 15 (1999), 7010–7022.
  • [26] Heymann, M., and Vanden-Eijnden, E. Pathways of maximum likelihood for rare events in nonequilibrium systems: Application to nucleation in the presence of shear. Phys. Rev. Lett. 100, 14 (2008), 140601.
  • [27] Hirsch, M. W., and Smale, S. Differential Equations, Dynamical Systems, and Linear Algebra. Academic Press, New York, 1974.
  • [28] Hughes, C., Mehta, D., and Wales, D. J. An inversion-relaxation approach for sampling stationary points of spin model hamiltonians. Journal of Chemical Physics 140, 19 (2014), 194104.
  • [29] Kerns, S. J., Agafonov, R. V., Cho, Y.-J., Pontiggia, F., Otten, R., Pachov, D. V., Kutter, S., Phung, L. A., Murphy, P. N., Thai, V., et al. The energy landscape of adenylate kinase during catalysis. Nat. Struct. Mol. Biol. 22, 2 (2015), 124–131.
  • [30] Leeson, D. T., Gai, F., Rodriguez, H. M., Gregoret, L. M., and Dyer, R. B. Protein folding and unfolding on a complex energy landscape. Proc. Natl. Acad. Sci. U.S.A. 97, 6 (2000), 2527–2532.
  • [31] Lemarié-Rieusset, P. G. Recent Developments in the Navier-Stokes Problem. CRC Press, Boca Raton, 2002.
  • [32] Mallamace, F., Corsaro, C., Mallamace, D., Vasi, S., Vasi, C., Baglioni, P., Buldyrev, S. V., Chen, S.-H., and Stanley, H. E. Energy landscape in protein folding and unfolding. Proc. Natl. Acad. Sci. U.S.A. 113, 12 (2016), 3159–3163.
  • [33] Mehta, D. Finding all the stationary points of a potential-energy landscape via numerical polynomial-homotopy-continuation method. Phys. Rev. E 84 (Aug 2011), 025702.
  • [34] Meiss, J. D. Differential Dynamical Systems. SIAM, Philadelphia, 2007.
  • [35] Milnor, J. W. Morse Theory, vol. 51. Princeton University Press, Princeton. NJ, 1963.
  • [36] Nie, Q., Qiao, L., Qiu, Y., Zhang, L., and Zhao, W. Noise control and utility: From regulatory network to spatial patterning. Science China Mathematics 63, 3 (2020), 425–440.
  • [37] Onuchic, J. N., Luthey-Schulten, Z., and Wolynes, P. G. Theory of protein folding: The energy landscape perspective. Annu. Rev. Phys. Chem. 48, 1 (1997), 545–600.
  • [38] Qiao, L., Zhao, W., Tang, C., Nie, Q., and Zhang, L. Network topologies that can achieve dual function of adaptation and noise attenuation. Cell Systems 9, 3 (2019), 271–285.e7.
  • [39] Ren, W., and Vanden-Eijnden, E. A climbing string method for saddle point search. Journal of Chemical Physics 138, 13 (2013), 134105.
  • [40] Shakhov, E. M. Generalization of the Krook kinetic relaxation equation. Fluid Dyn. 3, 5 (1968), 95–96.
  • [41] Temam, R. Navier-Stokes Equations: Theory and Numerical Analysis. American Mathematical Society, Providence, RI, 2001.
  • [42] Wales, D. J. Energy Landscapes. Cambridge University Press, Cambridge, England, 2003.
  • [43] Wales, D. J., and Doye, J. P. K. Global optimization by basin-hopping and the lowest energy structures of Lennard-Jones clusters containing up to 110 atoms. J. Phys. Chem. A 101, 28 (1997), 5111–5116.
  • [44] Wiggins, S. Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer-Verlag, New York, 2003.
  • [45] Yin, J., Wang, Y., Chen, J. Z. Y., Zhang, P., and Zhang, L. Construction of a pathway map on a complicated energy landscape. Phys. Rev. Lett. 124, 9 (2020), 090601.
  • [46] Yin, J., Zhang, L., and Zhang, P. High-index optimization-based shrinking dimer method for finding high-index saddle points. SIAM J. Sci. Comput. 41, 6 (2019), A3576–A3595.
  • [47] Yu, B., and Zhang, L. Global optimization-based dimer method for finding saddle points. Discrete & Continuous Dynamical Systems-B doi: 10.3934/dcdsb.2020139 (2020).
  • [48] Zhang, L., Chen, L.-Q., and Du, Q. Morphology of critical nuclei in solid-state phase transformations. Phys. Rev. Lett. 98 (Jun 2007), 265703.
  • [49] Zhang, L., Chen, L.-Q., and Du, Q. Simultaneous prediction of morphologies of a critical nucleus and an equilibrium precipitate in solids. Commun. Comput. Phys. 7, 4 (4 2010), 674–682.
  • [50] Zhang, L., Du, Q., and Zheng, Z. Optimization-based shrinking dimer method for finding transition states. SIAM J. Sci. Comput. 38, 1 (2016), A528–A544.
  • [51] Zhang, L., Ren, W., Samanta, A., and Du, Q. Recent developments in computational modelling of nucleation in phase transformations. npj Comput. Mater. 2 (2016), 16003.