LyZNet: A Lightweight Python Tool for Learning and Verifying Neural Lyapunov Functions and Regions of Attraction
Abstract.
In this paper, we describe a lightweight Python framework that provides integrated learning and verification of neural Lyapunov functions for stability analysis. The proposed tool, named LyZNet, learns neural Lyapunov functions using physics-informed neural networks (PINNs) to solve Zubov’s equation and verifies them using satisfiability modulo theories (SMT) solvers. What distinguishes this tool from others in the literature is its ability to provide verified regions of attraction close to the domain of attraction. This is achieved by encoding Zubov’s partial differential equation (PDE) into the PINN approach. By embracing the non-convex nature of the underlying optimization problems, we demonstrate that in cases where convex optimization, such as semidefinite programming, fails to capture the domain of attraction, our neural network framework proves more successful. The tool also offers automatic decomposition of coupled nonlinear systems into a network of low-dimensional subsystems for compositional verification. We illustrate the tool’s usage and effectiveness with several numerical examples, including both non-trivial low-dimensional nonlinear systems and high-dimensional systems.
1. Introduction
Stability analysis of nonlinear dynamical systems has been a focal point of research in control and dynamical systems. In many applications, characterizing the domain of attraction for an asymptotically stable equilibrium point is crucial. For example, in power systems, understanding the domain of attraction is essential for assessing whether the system can recover to a stable equilibrium after experiencing a fault.
Since Lyapunov’s landmark paper more than a hundred years ago (Lyapunov, 1992), Lyapunov functions have become a cornerstone of nonlinear stability analysis, providing an instrumental tool for performing such analyses and estimating the domain of attraction. Consequently, extensive research has been conducted on Lyapunov functions. One of the key technical challenges is the construction of Lyapunov functions. To address this challenge, both analytical (Haddad and Chellaboina, 2008; Sepulchre et al., 2012) and computational methods (Giesl, 2007; Giesl and Hafstein, 2015) have been investigated.
Among computational methods for Lyapunov functions, sums-of-squares (SOS) techniques have garnered widespread attention (Papachristodoulou and Prajna, 2002, 2005; Packard et al., 2010; Tan and Packard, 2008; Topcu et al., 2008; Jones and Peet, 2021). These methods facilitate not only local stability analysis but also provide estimates of regions of attraction (Topcu et al., 2008; Tan and Packard, 2008; Packard et al., 2010). Leveraging semidefinite programming (SDP), one can extend the region of attraction by employing a specific “shape function” within the estimated region. However, selecting such shape functions in a principled manner, beyond standard norm (Packard et al., 2010) or quadratic functions (Khodadadi et al., 2014), remains elusive.
On the other hand, Zubov’s theorem (Zubov, 1964) precisely characterizes the domain of attraction through a partial differential equation (PDE). This contrasts with commonly seen Lyapunov conditions, which manifest as partial differential inequalities. The use of an equation, rather than inequalities, enables precise characterization of the domain of attraction. The concept of maximal Lyapunov function (Vannelli and Vidyasagar, 1985) is closely related to Zubov’s method. The authors of (Vannelli and Vidyasagar, 1985) have also provided a computational procedure for constructing maximal Lyapunov functions using rational functions.
1.1. Related work
The proposed tool solves Zubov’s equation using physics-informed neural networks (PINNs) (Lagaris et al., 1998; Raissi et al., 2019) and verifies the neural solutions using satisfiability modulo theories (SMT) solvers. The tool currently relies on dReal (Gao et al., 2013) for verification through interval analysis.
Computation of Lyapunov functions using Zubov’s equation has been previously explored, for example, with radial basis functions (Giesl, 2007) and SOS techniques (Jones and Peet, 2021) (though the authors did not explicitly mention Zubov’s equation). More recently, the authors of (Kang et al., 2023) employed a data-driven approach to solve Zubov’s equation using neural networks; however, the computed neural solutions were not verified, and Zubov’s equation was not directly encoded in the loss function. The authors of (Liu et al., 2023b) solved Zubov’s equation using neural networks and verified them with dReal, but did not contrast their results with SOS techniques.
Many authors have recently investigated the use of neural networks for computing Lyapunov functions (see, e.g., (Chang et al., 2019; Grüne, 2021b; Gaby et al., 2022; Abate et al., 2020; Kang et al., 2023), and (Dawson et al., 2023) for a recent survey). In fact, such efforts date back to as early as the 1990s (Long and Bayoumi, 1993; Prokhorov, 1994). Unlike SDP-based synthesis of SOS Lyapunov functions, neural network Lyapunov functions obtained by training are not guaranteed to be Lyapunov functions. Subsequent verification is required, e.g., using satisfiability modulo theories (SMT) solvers (Chang et al., 2019; Ahmed et al., 2020). The use of SMT solvers for searching and refining Lyapunov functions has been explored previously (Kapinski et al., 2014). Counterexample-guided search of Lyapunov functions using SMT solvers is investigated in (Ahmed et al., 2020) and the associated tool (Abate et al., 2021), which supports both Z3 (De Moura and Bjørner, 2008) and dReal (Gao et al., 2013) as verifiers. Neural Lyapunov functions with SMT verification are explored in (Zhou et al., 2022) for systems with unknown dynamics. SMT verification is often time-consuming, especially when seeking a maximal Lyapunov function (Liu et al., 2023b) or dealing with high-dimensional systems. Recent work has also focused on learning neural Lyapunov functions and verifying them through optimization-based techniques, e.g., (Chen et al., 2021a, b; Dai et al., 2021, 2020). Such techniques usually employ (leaky) ReLU networks and use mixed integer linear/quadratic programming (MILP/MIQP) for verification.
In contrast with existing approaches, the proposed tool offers the ability to compute regions of attraction (ROA) close to the domain of attraction and provides support for compositional verification to handle high-dimensional systems. We demonstrate its usage and effectiveness on both low-dimensional systems with non-trivial dynamics and high-dimensional systems that showcase compositional verification. The tool achieves state-of-the-art results both in the size of verified ROA estimates for nonlinear systems and in the verification of high-dimensional systems.
2. Problem formulation
Consider a nonlinear system described by
(1) $\dot{x} = f(x),$
where $f: \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable. Suppose that the origin is an asymptotically stable equilibrium point for the system. We denote the unique solution to (1) from the initial condition $x_0 \in \mathbb{R}^n$ by $\phi(t, x_0)$ for $t \in [0, T_{\max})$, where $[0, T_{\max})$ is the maximal interval of existence for the solution.
The domain of attraction of the origin for (1) is defined as
(2) $\mathcal{D} := \{x_0 \in \mathbb{R}^n : \phi(t, x_0) \text{ is defined for all } t \ge 0 \text{ and } \phi(t, x_0) \to 0 \text{ as } t \to \infty\}.$
We know that $\mathcal{D}$ is an open and connected set.
We call any forward invariant subset of $\mathcal{D}$ a region of attraction (ROA). We are interested in computing regions of attraction, as they not only provide a set of initial conditions with guaranteed convergence to the equilibrium point, but also ensure constraints and safety through forward invariance.
Lyapunov functions provide an indispensable tool for stability analysis and ROA estimates. A Lyapunov function for asymptotic stability of the origin is a continuously differentiable function $V: \mathcal{U} \to \mathbb{R}$, defined on an open set $\mathcal{U}$ containing the origin, satisfying $V(0) = 0$, $V(x) > 0$ for all $x \in \mathcal{U} \setminus \{0\}$, such that the derivative of $V$ along solutions of (1) (sometimes called its Lie derivative) satisfies
$\dot{V}(x) = \nabla V(x) \cdot f(x) < 0 \quad \text{for all } x \in \mathcal{U} \setminus \{0\}.$
Given any such function $V$ and any positive constant $c$, the sublevel set $\{x \in \mathcal{U} : V(x) \le c\}$ is a region of attraction, provided that this set is compact and contained in $\mathcal{U}$.
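For a concrete one-dimensional illustration of these notions (an example added here for intuition, not one of the tool's benchmarks), consider
$\dot{x} = -x + x^3, \qquad V(x) = \tfrac{1}{2}x^2, \qquad \dot{V}(x) = -x^2(1 - x^2) < 0 \ \text{ for } 0 < |x| < 1.$
Every sublevel set $\{V \le c\}$ with $0 < c < 1/2$ is a compact subset of $(-1, 1)$ and hence a region of attraction, while the domain of attraction itself is $\mathcal{D} = (-1, 1)$.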
The proposed tool is designed to accomplish the following task: given an asymptotically stable equilibrium point, it computes neural Lyapunov functions that can certify regions of attraction with provable guarantees.
What sets our tool and approach apart from existing ones in the literature is its ability to verify regions of attraction close to the domain of attraction by incorporating Zubov’s equation into the training of neural Lyapunov functions.
3. Tool overview and usage
3.1. Tool overview
The tool consists of a set of Python modules built around the machine learning framework PyTorch (Paszke et al., 2019) and the numerical satisfiability modulo theories (SMT) solver dReal (Gao et al., 2013). It also relies on the NumPy library (Harris et al., 2020) for numerical computation, SciPy library (Virtanen et al., 2020) for scientific computing tasks, and the SymPy library (Meurer et al., 2017) for symbolic calculation.
We describe the main modules as follows:
• dynamical_systems.py defines the DynamicalSystems class and associated methods, such as linearization, computation of a quadratic Lyapunov function by solving Lyapunov's equation (a standalone sketch of this computation is given after this list), and interfaces for numerical and symbolic evaluations related to the vector field.
• local_verifier.py provides functions for verifying local stability using quadratic Lyapunov functions, with interval analysis from the SMT solver dReal. The key component is using the mean value theorem to bound the remainder of the linear approximation. This enables exact verification of local asymptotic stability, despite dReal's numerical conservativeness, and sets it apart from existing methods that often ignore verification in a neighbourhood of the origin.
• quadratic_verifier.py performs reachability analysis using a quadratic Lyapunov function. The target is the local ROA obtained above. The set guaranteed to reach this target is formulated as a sublevel set of the quadratic Lyapunov function.
• generate_data.py generates data for training neural Lyapunov functions. It involves solving system (1), augmented with an additional state to compute values of the solution to Zubov's equation, which we accomplish using SciPy and "embarrassing parallelization" with the joblib library (Joblib Development Team, 2023).
• neural_learner.py trains neural Lyapunov functions using PyTorch by minimizing the physics-informed loss described in Section 4.2; its loss_mode argument selects how Zubov's equation is encoded in the loss.
• neural_verifier.py verifies that the learned neural Lyapunov function indeed satisfies the Lyapunov conditions required to reach a target invariant set within the domain of attraction. The target can be set to the region of attraction already verified by the local analysis or quadratic Lyapunov analysis above. Our current implementation uses dReal as the verifier.
• network_verifier.py includes functions for compositional verification of local stability using quadratic and neural Lyapunov functions. The underlying theory involves vector Lyapunov functions, differential inequalities, and comparison techniques for nonlinear systems.
• sos_verifier.py, plot.py, and utils.py provide support for comparisons with other Lyapunov functions, such as sums-of-squares (SOS) Lyapunov functions, visualization of verified level sets, and other routines used in learning and verification.
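As a standalone illustration of the quadratic Lyapunov computation in dynamical_systems.py, the following sketch uses SymPy and SciPy directly (rather than the tool's own API, so the details here are only indicative) to linearize a vector field and solve the Lyapunov equation:

```python
import numpy as np
import sympy
from scipy.linalg import solve_continuous_lyapunov

# Reversed Van der Pol vector field with mu = 1.
x1, x2 = sympy.symbols("x1 x2")
f = sympy.Matrix([-x2, x1 - (1 - x1**2) * x2])

# Linearization at the origin: A = Df(0).
A = np.array(f.jacobian([x1, x2]).subs({x1: 0, x2: 0}), dtype=float)

# Solve A^T P + P A = -Q with Q = I; if A is Hurwitz, V(x) = x^T P x
# is a quadratic Lyapunov function for the linearization.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)
print(P)  # [[ 1.5 -0.5], [-0.5  1.0]]
```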
3.2. Tool usage
Using the tool is straightforward. As shown in the code snippet below, we can define a dynamical system with SymPy variables and symbolic expressions for the vector field. We can also specify the domain for learning and verification, as well as name the system for file management and result storage.
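The snippet below is a minimal sketch of such a definition; the constructor name lyznet.DynamicalSystem, its signature, and the specific domain values are illustrative assumptions rather than the tool's exact API.

```python
import sympy
import lyznet

mu = 1.0  # stiffness parameter of the Van der Pol equation

# Symbolic state variables and the reversed Van der Pol vector field.
x1, x2 = sympy.symbols("x1 x2")
f = [-x2, x1 - mu * (1 - x1**2) * x2]

# Rectangular domain for learning and verification, and a name used
# for file management and result storage.
domain = [[-2.5, 2.5], [-3.5, 3.5]]
system = lyznet.DynamicalSystem(f, domain, name=f"van_der_pol_mu_{mu}")
```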
The code above defines the reversed Van der Pol equation with parameter $\mu$ on a rectangular domain.
The following lines of code can be used to call relevant functions for verifying local stability using quadratic Lyapunov function, as well as learning and verifying neural Lyapunov functions.
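A sketch of these calls is shown below; the function names follow the modules described in Section 3.1, but the exact signatures and return values are assumptions for illustration.

```python
# Verify local stability via linearization: returns a level c1_P such
# that {x : x^T P x <= c1_P} is a verified local ROA.
c1_P = lyznet.local_stability_verifier(system)

# Enlarge the verified ROA by reachability with the quadratic V.
c2_P = lyznet.quadratic_reach_verifier(system, c1_P)

# Learn a neural Lyapunov function by solving Zubov's equation.
net, model_path = lyznet.neural_learner(system, loss_mode="Zubov")

# Verify the learned network, producing levels c1_V <= c2_V.
c1_V, c2_V = lyznet.neural_verifier(system, net, c2_P)
```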
Most functions and parameters are self-explanatory. We remark that the argument loss_mode in neural_learner currently can be set to three different modes: "Zubov", "Data", and "Lyapunov". The latter two take a purely data-driven approach for solving Zubov’s equation or encode a Lyapunov inequality instead of Zubov’s equation, respectively.
The code above computes and verifies four sublevel sets:
(3) $\mathcal{P}_1 = \{x : x^\top P x \le c_1^P\}, \qquad \mathcal{P}_2 = \{x : x^\top P x \le c_2^P\},$
$\quad\;\;\, \mathcal{N}_1 = \{x : W(x) \le c_1^V\}, \qquad \mathcal{N}_2 = \{x : W(x) \le c_2^V\},$
satisfying $\mathcal{P}_1 \subseteq \mathcal{P}_2$, $\mathcal{N}_1 \subseteq \mathcal{P}_2$, and $\mathcal{N}_1 \subseteq \mathcal{N}_2$, where $V(x) = x^\top P x$ is a quadratic Lyapunov function and $W$ is a learned neural network Lyapunov function. The set $\mathcal{P}_1$ is a local region of attraction verified by linearization. The set $\mathcal{P}_2$ represents the set of initial conditions from which solutions of (1) are verified to reach $\mathcal{P}_1$ using a quadratic Lyapunov function. The sets $\mathcal{N}_1$ and $\mathcal{N}_2$ are verified using a neural Lyapunov function, where $\mathcal{N}_1$ represents a target invariant set contained in $\mathcal{P}_2$ (this could be set to $\mathcal{P}_1$ if one wishes to skip the call of quadratic_reach_verifier for any reason, or to any known region of attraction verified by other means), and $\mathcal{N}_2$ is an invariant set from which solutions of (1) are guaranteed to reach $\mathcal{N}_1$. All sets are also verified to be contained in the interior of the computational domain $\mathcal{X}$ (specified when the system is defined) to ensure set invariance. Put together, $\mathcal{N}_2$ is a verified ROA for (1).
4. Methodology
4.1. Local stability analysis using quadratic Lyapunov functions
Local stability analysis using quadratic Lyapunov functions is fairly standard. Suppose that the origin is an exponentially stable equilibrium point for (1); i.e., $f(0) = 0$ and the Jacobian matrix $A = Df(0)$ is a Hurwitz matrix, i.e., all eigenvalues of $A$ have negative real parts. Rewrite (1) as
(4) $\dot{x} = Ax + g(x),$
where $g(x) = f(x) - Ax$ satisfies $g(0) = 0$ and $Dg(0) = 0$. Given any symmetric positive definite real matrix $Q$, there exists a unique symmetric positive definite solution $P$ to the Lyapunov equation
(5) $A^\top P + P A = -Q.$
Let $V(x) = x^\top P x$. Then
$\dot{V}(x) = -x^\top Q x + 2 x^\top P g(x) \le -\big(\lambda_{\min}(Q) - 2\|P\|\gamma\big)\|x\|^2 < 0,$
where we set $\|g(x)\| \le \gamma \|x\|$ for some sufficiently small $\gamma > 0$ and $\lambda_{\min}(Q)$ is the minimum eigenvalue of $Q$. The goal of local_stability_verifier is to determine the largest number $c_1^P > 0$ such that
(6) $\dot{V}(x) < 0 \quad \text{for all } x \in \mathcal{P}_1 \setminus \{0\},$
where $\mathcal{P}_1$ is as defined in (3).
While (6) can be easily formulated as a satisfiability condition in dReal (Gao et al., 2013), due to the conservative use of interval analysis in dReal to account for numerical errors, verification of inequalities such as $\dot{V}(x) < 0$ will return a counterexample close to the origin. To overcome this issue, we look at a higher-order approximation of $g$. By the mean value theorem, we have
$\|g(x)\| = \|g(x) - g(0)\| \le \max_{s \in [0,1]} \|Dg(sx)\| \, \|x\|,$
where $Dg$ is the Jacobian of $g$ given by $Dg(x) = Df(x) - A$, which implies
$\|g(x)\| \le \max_{s \in [0,1]} \|Df(sx) - A\| \, \|x\|.$
As a result, to verify (6), we just need to verify
(7) $\|Df(sx) - A\| < \frac{\lambda_{\min}(Q)}{2\|P\|} \quad \text{for all } x \in \mathcal{P}_1 \text{ and } s \in [0, 1].$
If $\mathcal{P}_1$ is star-shaped with respect to the origin (a set $S$ is star-shaped with respect to a point $p \in S$ if for all $x \in S$ the line segment joining $p$ and $x$ is a subset of $S$), then (7) is equivalent to
(8) $\|Df(x) - A\| < \frac{\lambda_{\min}(Q)}{2\|P\|} \quad \text{for all } x \in \mathcal{P}_1.$
Since $Dg(0) = 0$ and $Dg$ is continuous, one can always choose $c_1^P$ sufficiently small such that (8) can be verified. Furthermore, in rare situations, if (8) can be verified with $\mathcal{P}_1$ replaced by $\mathbb{R}^n$, then global exponential stability of the origin is proved for (1).
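To make (8) concrete, the following sketch checks it with dReal's Python API for the reversed Van der Pol equation, over-approximating the matrix 2-norm by the Frobenius norm (see Remark 1 below); the level c1_P is a placeholder, since the tool searches for the largest verifiable level by bisection.

```python
import numpy as np
from dreal import And, CheckSatisfiability, Variable

# P solves A^T P + P A = -I for A = Df(0) of the reversed Van der Pol
# equation (mu = 1), giving V(x) = x^T P x = 1.5 x1^2 - x1 x2 + x2^2.
P = np.array([[1.5, -0.5], [-0.5, 1.0]])
gamma = float(1.0 / (2.0 * np.linalg.norm(P, 2)))  # lambda_min(Q) / (2 ||P||)

x1, x2 = Variable("x1"), Variable("x2")
V = 1.5 * x1 * x1 - x1 * x2 + x2 * x2

# Here Df(x) - A = [[0, 0], [2*x1*x2, x1^2]], so its squared Frobenius
# norm (an upper bound on the squared 2-norm) is 4 x1^2 x2^2 + x1^4.
frob_sq = 4 * x1 * x1 * x2 * x2 + x1 * x1 * x1 * x1

c1_P = 0.1  # candidate level (placeholder)
violation = And(V <= c1_P, frob_sq >= gamma * gamma)

# (8) holds on {V <= c1_P} iff the violation condition is unsatisfiable.
print("verified" if CheckSatisfiability(violation, 1e-4) is None else "refuted")
```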
Remark 1.
From (8), one can further use easily computable norms, such as the Frobenius norm, to over-approximate the matrix 2-norm $\|Df(x) - A\|$. Furthermore, by the implication $x^\top P x \le c_1^P \Rightarrow |x_i| \le r_i$ for all $i$, which encloses the ellipsoid $\mathcal{P}_1$ in a hyperrectangle, together with a compositionally verifiable upper bound of the matrix 2-norm, such as the Frobenius norm, one can verify (8) in a compositional fashion. This idea is implemented in compositional_local_stability_verifier, which can work significantly more efficiently for high-dimensional systems. Of course, this decomposition of an ellipsoidal set into a hyperrectangular set is inherently conservative. Less conservative results for compositional verification can be achieved with the help of vector Lyapunov functions (Liu et al., 2024b), as implemented in network_verifier.
4.2. Learning neural Lyapunov functions via Zubov’s equation for ROA verification
4.2.1. Zubov’s theorem and maximal Lyapunov function
Zubov’s PDE (Zubov, 1964) takes the form
(9) $\nabla W(x) \cdot f(x) = -h(x)\big(1 - W(x)^2\big),$
where $W$ is a positive definite function to be solved for and $h$ is also a positive definite function that can be chosen. For instance, $h$ can be chosen as $h(x) = \alpha \|x\|^2$, where $\alpha > 0$ is a parameter. Zubov’s theorem states that if there exists a function $W$ satisfying the above PDE on a given domain $\mathcal{W}$ containing the origin, with the following two additional conditions: (1) $0 < W(x) < 1$ on $\mathcal{W}$ except $W(0) = 0$; (2) $W(x) \to 1$ as $x \to \partial\mathcal{W}$, then $\mathcal{W} = \mathcal{D}$, the domain of attraction for the origin. Suppose that the function $W$ is also trivially extended to the domain of interest $\mathcal{X}$ containing $\mathcal{D}$ with $W(x) = 1$ for $x \in \mathcal{X} \setminus \mathcal{D}$. We have $\mathcal{D} = \{x \in \mathcal{X} : W(x) < 1\}$. In other words, the domain of attraction is characterized by the sublevel-1 set of the solution to Zubov’s PDE.
Zubov’s theorem is closely related to the notion of maximal Lyapunov function discussed in (Vannelli and Vidyasagar, 1985). A maximal Lyapunov function $V_m$ on a domain $\mathcal{D}$ containing the origin satisfies the property
(10) $\nabla V_m(x) \cdot f(x) = -\Psi(x)$
for all $x \in \mathcal{D}$, where $\Psi$ is a positive definite function. Additionally, $V_m(x) \to \infty$ as $x \to \partial\mathcal{D}$.
A solution to Zubov’s equation can be related to a maximal Lyapunov function by any strictly increasing function $\beta: [0, \infty) \to [0, 1)$ that satisfies $\beta(0) = 0$ and $\beta(s) \to 1$ as $s \to \infty$. Indeed, given a maximal Lyapunov function $V_m$, we can simply choose
(11) $W(x) = \beta(V_m(x)).$
There are obvious choices of such a function $\beta$. For example, $\beta(s) = \tanh(\alpha s)$ or $\beta(s) = 1 - e^{-\alpha s}$, where $\alpha > 0$ is a parameter. In our implementation, we choose $\beta(s) = \tanh(\alpha s)$. It can be easily verified that $W$ defined this way satisfies (9) with
(12) $h(x) = \alpha \Psi(x).$
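To see this, note that by the chain rule,
$\nabla W(x) \cdot f(x) = \alpha\big(1 - \tanh^2(\alpha V_m(x))\big)\,\nabla V_m(x) \cdot f(x) = -\alpha\,\Psi(x)\big(1 - W(x)^2\big),$
which is exactly (9) with $h$ as in (12).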
Under suitable assumptions (Vannelli and Vidyasagar, 1985), a solution to (10) can be constructed as
(13) $V_m(x) = \int_0^\infty \Psi(\phi(t, x))\,dt.$
For example, if the origin is an exponentially stable equilibrium point and $f$ is locally Lipschitz, then this construction is valid and gives a maximal Lyapunov function on $\mathcal{D}$. One easy choice of $\Psi$ is $\Psi(x) = \|x\|^2$.
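The constructions (13) and (11) suggest how training data can be generated by numerical integration, as done in generate_data.py. Below is a minimal sketch assuming $\Psi(x) = \|x\|^2$, $\beta(s) = \tanh(\alpha s)$, and the reversed Van der Pol dynamics; the value of $\alpha$, the integration horizon, and the convergence threshold are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 0.1  # parameter in beta(s) = tanh(alpha * s)

def f_aug(t, y):
    # Reversed Van der Pol dynamics augmented with v' = ||x||^2, so that
    # v(T) approximates V_m(x0) = int_0^infty ||phi(t, x0)||^2 dt.
    x1, x2, _ = y
    return [-x2, x1 - (1 - x1**2) * x2, x1**2 + x2**2]

def zubov_value(x0, T=20.0):
    sol = solve_ivp(f_aug, [0.0, T], [x0[0], x0[1], 0.0], rtol=1e-6)
    # If the trajectory has not settled near the origin by time T, label
    # the point with W = 1 (approximately outside the domain of attraction).
    if np.linalg.norm(sol.y[:2, -1]) > 1e-2:
        return 1.0
    return np.tanh(alpha * sol.y[2, -1])

print(zubov_value([0.5, 0.5]))
```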
4.2.2. Physics-informed neural solution to Zubov’s PDE
Put together, (9), (12), and (13) allow us to learn a neural Lyapunov function via physics-informed neural networks (Lagaris et al., 1998; Raissi et al., 2019) for solving PDEs.
In a nutshell, a physics-informed neural solution to the PDE (9) is a neural network function that minimizes the residual of (9). Additional conditions on this neural network function can be enforced (or rather, encouraged) through additional loss terms.
More specifically, let $W_N$ be a neural network approximation to a solution of (9). The training loss consists of
(14) $\mathcal{L} = \mathcal{L}_{\mathrm{res}} + \mathcal{L}_{\mathrm{b}} + \mathcal{L}_{\mathrm{data}},$
where $\mathcal{L}_{\mathrm{res}}$ is the residual error of the PDE given by
(15) $\mathcal{L}_{\mathrm{res}} = \frac{1}{N} \sum_{i=1}^{N} \Big( \nabla W_N(x_i) \cdot f(x_i) + h(x_i)\big(1 - W_N(x_i)^2\big) \Big)^2,$
evaluated over a set of collocation points $\{x_i\}_{i=1}^{N}$ chosen over a domain $\mathcal{X}$. For instance, we can choose $h$ as in (12). The loss $\mathcal{L}_{\mathrm{b}}$ captures boundary conditions. There are different ways boundary conditions can be added. The simplest one is that $W_N(0) = 0$ and $W_N(x) = 1$ for $x \in \partial\mathcal{X}$. Although optional, we can also encourage the following inequality (Grüne, 2021a; Liu et al., 2023b) near the origin:
(16) $c_1 \|x\|^2 \le W_N(x) \le c_2 \|x\|^2,$
where the quadratic bounds correspond to the choice of $\Psi(x) = \|x\|^2$ above. This inequality is always satisfiable for some positive $c_1$ and $c_2$ when the origin is exponentially stable. Finally, the term $\mathcal{L}_{\mathrm{data}}$ captures data loss, which we define as
(17) $\mathcal{L}_{\mathrm{data}} = \frac{1}{M} \sum_{j=1}^{M} \big( W_N(x_j) - W(x_j) \big)^2,$
where $\{(x_j, W(x_j))\}_{j=1}^{M}$ is a set of data points, with values $W(x_j)$ obtained by forward integration of (1) using (13) and (11) (Kang et al., 2023).
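A minimal PyTorch sketch of the residual term (15) under these choices is given below; the network architecture, the value of $\alpha$, and the sampling of collocation points are illustrative assumptions, not the tool's exact implementation.

```python
import torch
import torch.nn as nn

alpha = 0.1  # same parameter as in h(x) = alpha * ||x||^2

# A small fully connected network W_N with tanh activations.
net = nn.Sequential(nn.Linear(2, 30), nn.Tanh(),
                    nn.Linear(30, 30), nn.Tanh(),
                    nn.Linear(30, 1))

def f(x):
    # Reversed Van der Pol vector field, batched: x has shape (N, 2).
    x1, x2 = x[:, 0], x[:, 1]
    return torch.stack([-x2, x1 - (1 - x1**2) * x2], dim=1)

def zubov_residual_loss(x):
    x = x.requires_grad_(True)
    W = net(x)
    gradW = torch.autograd.grad(W.sum(), x, create_graph=True)[0]
    lie = (gradW * f(x)).sum(dim=1, keepdim=True)    # gradient of W dotted with f
    h = alpha * (x**2).sum(dim=1, keepdim=True)      # h(x) = alpha * ||x||^2
    residual = lie + h * (1 - W**2)                  # residual of Zubov's PDE (9)
    return (residual**2).mean()

x_colloc = 5.0 * (2.0 * torch.rand(1024, 2) - 1.0)  # collocation points in [-5, 5]^2
loss = zubov_residual_loss(x_colloc)
loss.backward()  # gradients for one training step
```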
4.2.3. Verification of regions of attraction
We briefly outline how local stability analysis via linearization and reachability analysis via a Lyapunov function can be combined to provide a ROA estimate that is close to the domain of attraction.
Let $V$ and $W$ be two generic continuously differentiable functions. Suppose that $\mathcal{R}$ is a local region of attraction (e.g., a verified sublevel set of $V$). We can verify the following inequalities using SMT solvers:
(18) $\nabla W(x) \cdot f(x) < 0 \quad \text{for all } x \in \mathcal{X} \text{ with } c_1 \le W(x) \le c_2,$
(19) $W(x) \le c_1 \implies x \in \mathcal{R},$
where $c_1 < c_2$ are positive constants. The first inequality ensures that solutions of (1) starting in $\Omega_2 := \{x \in \mathcal{X} : W(x) \le c_2\}$ always reach the target set $\Omega_1 := \{x \in \mathcal{X} : W(x) \le c_1\}$, as long as the solution does not leave $\mathcal{X}$. The second inequality ensures that the target set $\Omega_1$ is contained in a local region of attraction $\mathcal{R}$. By definition, it follows that $\Omega_2$ provides an under-estimate of the true domain of attraction $\mathcal{D}$. An easy condition to ensure that solutions cannot leave $\mathcal{X}$ before reaching the target is that $\Omega_2$ does not intersect the boundary of $\mathcal{X}$, which can also be easily verified by an SMT solver.
Now, specific to the tool, we can use the above technique to first compute a region of attraction via linearization as in Section 4.1 and obtain $\mathcal{P}_1$. We can then use the reachability conditions above to obtain an enlarged region of attraction, first using a quadratic Lyapunov function to verify $\mathcal{P}_2$, and then using a neural Lyapunov function to verify $\mathcal{N}_1$ and $\mathcal{N}_2$, as shown in (3). If the neural Lyapunov function is obtained by solving Zubov’s PDE (9), then the set $\mathcal{N}_2$ provides a region of attraction close to the true domain of attraction.
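As a minimal illustration with dReal's Python API, the following sketch checks a condition of the form (18) for the reversed Van der Pol equation, with the quadratic function $V(x) = x^\top P x$ from Section 4.1 playing the role of $W$; the levels c1 and c2 are placeholders chosen for illustration.

```python
from dreal import And, CheckSatisfiability, Variable

x1, x2 = Variable("x1"), Variable("x2")
V = 1.5 * x1 * x1 - x1 * x2 + x2 * x2            # x^T P x as in Section 4.1
f1, f2 = -x2, x1 - (1 - x1 * x1) * x2            # reversed Van der Pol field
lie_V = (3 * x1 - x2) * f1 + (2 * x2 - x1) * f2  # gradient of V dotted with f

c1, c2 = 0.1, 0.5  # placeholder levels with c1 < c2

# Negation of (18): a point in the ring {c1 <= V <= c2} where V is
# non-decreasing along solutions; (19) can be checked analogously.
ring_violation = And(c1 <= V, V <= c2, lie_V >= 0)
print("reach condition verified"
      if CheckSatisfiability(ring_violation, 1e-4) is None
      else "counterexample found")
```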
Remark 2.
While there are numerous studies on computing neural Lyapunov functions (Abate et al., 2020, 2021; Ahmed et al., 2020; Grüne, 2021a; Chang et al., 2019; Zhou et al., 2022; Gaby et al., 2022), the tools currently available cannot produce verified region-of-attraction estimates close to those obtained with state-of-the-art SOS approaches. In particular, we tested the recent tool Fossil 2.0 (Abate et al., 2021; Edwards et al., 2023) on Example 5.1 in the following section and found that the region of attraction verified by Fossil is smaller than that verified by a quadratic Lyapunov function (dashed red curve in Figure 1) using LyZNet. In the next section, we further demonstrate LyZNet's capability of outperforming state-of-the-art SOS approaches by providing less conservative verified region-of-attraction estimates. This is because LyZNet aims to solve Zubov’s equation, which precisely characterizes the exact domain of attraction.
5. Examples
In this section, we demonstrate the usage of the tool with several numerical examples. All experiments are conducted on a 2020 MacBook Pro with a 2 GHz Quad-Core Intel Core i5 and without any GPU. The purpose of this section is not to conduct extensive experiments, but to demonstrate the usage and effectiveness of the proposed tool. The equations describing the examples can be found in the Appendix. Readers are referred to (Liu et al., 2023a, 2024b) for more detailed discussions of the numerical results and, especially, of the compositional verification techniques involved (Liu et al., 2024b). The repository of the tool includes code for running these examples (https://git.uwaterloo.ca/hybrid-systems-lab/lyznet).
Example 5.1 (Van der Pol equation).
Consider the reversed Van der Pol equation with different values of $\mu$. For $\mu = 1$, using code similar to the snippet shown in Section 3.2, we train two neural networks of different sizes (depth and width). Figure 1 depicts the largest verifiable sublevel set, along with the learned neural Lyapunov function. It can be seen that the verified region of attraction is quite comparable to, and slightly better than, that provided by a sums-of-squares (SOS) Lyapunov function of polynomial degree 6, obtained using a standard “interior expanding” algorithm (Packard et al., 2010).
[Figure 1. Verified sublevel sets of the learned neural Lyapunov function for the reversed Van der Pol equation ($\mu = 1$), compared with an SOS Lyapunov function.]
As the stiffness of the equation increases, the advantage of the neural Lyapunov approach over SOS Lyapunov functions becomes more pronounced. With $\mu = 3$ and a correspondingly larger domain, the comparison of the neural Lyapunov function with the SOS Lyapunov function is shown in Figure 2.
[Figure 2. Comparison of the neural and SOS Lyapunov functions for the reversed Van der Pol equation ($\mu = 3$).]
Table 1. Verified neural Lyapunov sublevel sets for the reversed Van der Pol equation, compared with SOS Lyapunov functions.
| $\mu$ | Layers | Width | c2_V | Volume (%) | SOS volume (%) |
| 1.0 | 2 | 30 | 0.871 | 94.78 | 94.10 |
| 3.0 | 2 | 30 | 0.642 | 85.21 | 70.93 |
Example 5.2 (Two-machine power system).
Consider a two-machine power system (Vannelli and Vidyasagar, 1985), which has an asymptotically stable equilibrium point at the origin and an additional unstable equilibrium point. Figure 3 shows that a neural network with two hidden layers and 30 neurons in each layer provides a region-of-attraction estimate significantly better than that from a sixth-degree polynomial SOS Lyapunov function, computed with a Taylor expansion of the system model. The example shows that neural Lyapunov functions can perform better than SOS Lyapunov functions when the nonlinearity is non-polynomial. We also compared with the rational Lyapunov function presented in (Vannelli and Vidyasagar, 1985), but its ROA estimate is worse than that from the SOS Lyapunov function, and we were not able to formally verify the sublevel set of the rational Lyapunov function reported in (Vannelli and Vidyasagar, 1985) with dReal (Gao et al., 2013). Increasing the polynomial degree in the SOS approach does not seem to improve the result either.
[Figure 3. Verified region-of-attraction estimates for the two-machine power system: neural Lyapunov function versus SOS Lyapunov function.]
Table 2. Verified neural Lyapunov sublevel set for the two-machine power system, compared with an SOS Lyapunov function.
| Layers | Width | c2_V | Volume (%) | SOS volume (%) |
| 2 | 30 | 0.740 | 82.34 | 18.53 |
Example 5.3 (Inverted pendulum with linear control).
This example has been referenced numerous times in the literature as a benchmark for comparing stabilization techniques with provable ROA estimates. Interestingly, our local_stability_verifier verifies global stability of the origin within 1 millisecond using dReal (Gao et al., 2013).
Remark 3.
Computational time for training and verification largely depends on computer hardware. On the somewhat outdated laptop used for experiments, training the neural network as shown in Tables 1 and 2 takes about 500 seconds. This is with 20 epochs and 300,000 collocation points using a minibatch size of 32. Verification, including all bisection subroutines, can range from 200 to 2,000 seconds. We report the computational times in the Appendix.
Example 5.4 (10-dimensional system).
This example was used by the authors of (Grüne, 2021a) and (Gaby et al., 2022) to illustrate the training of neural network Lyapunov functions for stability analysis; in those works, the trained Lyapunov functions were not verified. Here, we use the example to illustrate the compositional verification functionalities of LyZNet.
For the compositional approach, we examine two decompositions. The first splits the system into five linear subsystems, each with two state variables, and treats the nonlinear terms as interconnections. The second splits it into ten scalar linear subsystems, each with one state variable, and considers the remaining terms as interconnections. LyZNet automates these decompositions.
We tried verification over different domains with different approaches. The results are summarized in Table 3. It can be seen that for the smallest domain considered, all verifiers work well and find the largest possible sublevel sets contained in the domain. It is worth noting that only the decomposition into one-dimensional subsystems gives a verified ROA equal to the entire domain (barring the conservativeness of the numerical SMT verifier dReal (Gao et al., 2013)), because the other two approaches either produce the largest Euclidean ball or the largest Cartesian product of Euclidean balls contained in the domain.
As the domain gets larger, verification becomes more challenging for the monolithic approach. For the largest domain considered, the monolithic approach takes more than 12 hours without returning a result, while all compositional approaches succeed. Notably, the decomposition into one-dimensional subsystems provides the largest possible hyperrectangular invariant set within the domain.
We also trained neural network Lyapunov functions and observed that, if we restrict the structural complexity of the neural network, we are able to verify regions of attraction using neural Lyapunov functions. However, the resulting regions of attraction are not as good as the ones obtained using quadratic Lyapunov functions, as the latter are already optimal for this example.
Table 3. Verification of the 10-dimensional system over increasingly large domains using the monolithic and compositional approaches.
| Domain | Approach | Verifier | Levels | Time (sec) |
| (smallest) | monolithic | local | 0.49999 | 0.19 |
| | compositional (–) | local | 0.49999 | 0.12 |
| | compositional (–) | local | 0.49999 | 0.29 |
| (larger) | monolithic | local | 7.99999 | 4.30 |
| | compositional (–) | local | 7.99999 | 0.12 |
| | compositional (–) | local | 3.1210 | 4.45 |
| | compositional (–) | quadratic | 7.99999 | 1.54 |
| (largest) | monolithic | local | – | timeout (>43200) |
| | compositional (–) | local | 12.4938 | 2.07 |
| | compositional (–) | local | 3.1188 | 4.57 |
| | compositional (–) | quadratic | 12.4969 | 23.40 |
6. Conclusions and future work
We presented a lightweight Python framework for learning and verifying neural Lyapunov functions. We demonstrated that by solving Zubov’s PDE using neural networks, the verified region of attraction can indeed approach the boundary of the domain of attraction, outperforming sums-of-squares Lyapunov functions obtained using semidefinite programming. To cope with learning and verifying Lyapunov functions for high-dimensional systems, we have built support for compositional verification and demonstrated its effectiveness over a monolithic approach. While not presented in this paper, the compositional approach has been demonstrated to be effective for learning and verifying neural Lyapunov functions (Liu et al., 2024b).
There are numerous ways the tool can be expanded. Ongoing research focuses on compositional training and verification of neural Lyapunov functions (Liu et al., 2024b). Future work could include supporting verification engines other than dReal, such as Z3 (De Moura and Bjørner, 2008), and leveraging the growing literature on neural network verification tools. Extending support to other types of dynamical systems, such as delay differential equations and stochastic dynamics, is also of interest. Exploring other stability and boundedness notions, such as global stability and ultimate boundedness, could be valuable. For compositional verification, designing neural networks that automatically discover compositional structures in high-dimensional systems is a promising direction, as opposed to the current manual assignment. Integrating convex optimization and semidefinite programming for suitable problems is another avenue. Data-driven computation of verifiable Lyapunov functions using Zubov’s equation is another interesting direction (Meng et al., 2023, 2024a). Finally, the tool has the potential to handle control inputs, making the simultaneous training of controllers and Lyapunov/value functions a natural next step. Initial results have been reported in (Meng et al., 2024b), with more to appear in (Liu et al., 2024a).
Acknowledgements.
This research was supported in part by the Natural Sciences and Engineering Research Council of Canada and the Canada Research Chairs program. The development of the tool was also enabled in part by support provided by the Digital Research Alliance of Canada (alliance.ca).
References
- Abate et al. (2021) Alessandro Abate, Daniele Ahmed, Alec Edwards, Mirco Giacobbe, and Andrea Peruffo. 2021. FOSSIL: a software tool for the formal synthesis of Lyapunov functions and barrier certificates using neural networks. In Proceedings of the International Conference on Hybrid Systems: Computation and Control. 1–11.
- Abate et al. (2020) Alessandro Abate, Daniele Ahmed, Mirco Giacobbe, and Andrea Peruffo. 2020. Formal synthesis of Lyapunov neural networks. IEEE Control Systems Letters 5, 3 (2020), 773–778.
- Ahmed et al. (2020) Daniele Ahmed, Andrea Peruffo, and Alessandro Abate. 2020. Automated and sound synthesis of Lyapunov functions with SMT solvers. In Proc. of TACAS. Springer, 97–114.
- Chang et al. (2019) Ya-Chien Chang, Nima Roohi, and Sicun Gao. 2019. Neural Lyapunov control. Advances in Neural Information Processing Systems 32 (2019).
- Chen et al. (2021a) Shaoru Chen, Mahyar Fazlyab, Manfred Morari, George J Pappas, and Victor M Preciado. 2021a. Learning Lyapunov functions for hybrid systems. In Proceedings of the International Conference on Hybrid Systems: Computation and Control. 1–11.
- Chen et al. (2021b) Shaoru Chen, Mahyar Fazlyab, Manfred Morari, George J Pappas, and Victor M Preciado. 2021b. Learning region of attraction for nonlinear systems. In Proceedings of the IEEE Conference on Decision and Control. IEEE, 6477–6484.
- Dai et al. (2020) Hongkai Dai, Benoit Landry, Marco Pavone, and Russ Tedrake. 2020. Counter-example guided synthesis of neural network Lyapunov functions for piecewise linear systems. In Proc. of CDC. 1274–1281.
- Dai et al. (2021) Hongkai Dai, Benoit Landry, Lujie Yang, Marco Pavone, and Russ Tedrake. 2021. Lyapunov-stable neural-network control. In Proc. of RSS.
- Dawson et al. (2023) Charles Dawson, Sicun Gao, and Chuchu Fan. 2023. Safe control with learned certificates: A survey of neural Lyapunov, barrier, and contraction methods for robotics and control. IEEE Transactions on Robotics (2023).
- De Moura and Bjørner (2008) Leonardo De Moura and Nikolaj Bjørner. 2008. Z3: An efficient SMT solver. In Proc. of TACAS. Springer, 337–340.
- Edwards et al. (2023) Alec Edwards, Andrea Peruffo, and Alessandro Abate. 2023. Fossil 2.0: Formal Certificate Synthesis for the Verification and Control of Dynamical Models. arXiv preprint arXiv:2311.09793 (2023).
- Gaby et al. (2022) Nathan Gaby, Fumin Zhang, and Xiaojing Ye. 2022. Lyapunov-Net: A deep neural network architecture for Lyapunov function approximation. In Proceedings of the IEEE Conference on Decision and Control. IEEE, 2091–2096.
- Gao et al. (2013) Sicun Gao, Soonho Kong, and Edmund M Clarke. 2013. dReal: an SMT solver for nonlinear theories over the reals. In Proceedings of International Conference on Automated Deduction. 208–214.
- Giesl (2007) Peter Giesl. 2007. Construction of Global Lyapunov Functions Using Radial Basis Functions. Vol. 1904. Springer.
- Giesl and Hafstein (2015) Peter Giesl and Sigurdur Hafstein. 2015. Review on computational methods for Lyapunov functions. Discrete & Continuous Dynamical Systems-B 20, 8 (2015), 2291.
- Grüne (2021a) Lars Grüne. 2021a. Computing Lyapunov functions using deep neural networks. Journal of Computational Dynamics 8, 2 (2021).
- Grüne (2021b) Lars Grüne. 2021b. Overcoming the curse of dimensionality for approximating Lyapunov functions with deep neural networks under a small-gain condition. IFAC-PapersOnLine 54, 9 (2021), 317–322.
- Haddad and Chellaboina (2008) Wassim M Haddad and VijaySekhar Chellaboina. 2008. Nonlinear Dynamical Systems and Control: A Lyapunov-based Approach. Princeton University Press.
- Harris et al. (2020) Charles R Harris, K Jarrod Millman, Stéfan J Van Der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J Smith, et al. 2020. Array programming with NumPy. Nature 585, 7825 (2020), 357–362.
- Joblib Development Team (2023) Joblib Development Team. 2023. Joblib: running Python functions as pipeline jobs. https://joblib.readthedocs.io/
- Jones and Peet (2021) Morgan Jones and Matthew M Peet. 2021. Converse Lyapunov functions and converging inner approximations to maximal regions of attraction of nonlinear systems. In Proc. of CDC. IEEE, 5312–5319.
- Kang et al. (2023) Wei Kang, Kai Sun, and Liang Xu. 2023. Data-Driven Computational Methods for the Domain of Attraction and Zubov’s Equation. IEEE Trans. Automat. Control (2023).
- Kapinski et al. (2014) James Kapinski, Jyotirmoy V Deshmukh, Sriram Sankaranarayanan, and Nikos Arechiga. 2014. Simulation-guided Lyapunov analysis for hybrid dynamical systems. In Proc. of HSCC. 133–142.
- Khodadadi et al. (2014) Larissa Khodadadi, Behzad Samadi, and Hamid Khaloozadeh. 2014. Estimation of region of attraction for polynomial nonlinear systems: A numerical method. ISA Transactions 53, 1 (2014), 25–32.
- Lagaris et al. (1998) Isaac E Lagaris, Aristidis Likas, and Dimitrios I Fotiadis. 1998. Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks 9, 5 (1998), 987–1000.
- Liu et al. (2023a) Jun Liu, Yiming Meng, Maxwell Fitzsimmons, and Ruikun Zhou. 2023a. Physics-Informed Neural Network Lyapunov Functions: PDE Characterization, Learning, and Verification. arXiv preprint arXiv:2312.09131 (2023).
- Liu et al. (2023b) Jun Liu, Yiming Meng, Maxwell Fitzsimmons, and Ruikun Zhou. 2023b. Towards Learning and Verifying Maximal Neural Lyapunov Functions. In Proceedings of IEEE Conference on Decision and Control.
- Liu et al. (2024b) Jun Liu, Yiming Meng, Maxwell Fitzsimmons, and Ruikun Zhou. 2024b. Compositionally Verifiable Vector Neural Lyapunov Functions for Stability Analysis of Interconnected Nonlinear Systems. In Proc. of ACC.
- Liu et al. (2024a) Jun Liu, Yiming Meng, and Ruikun Zhou. 2024a. LyZNet with Control: Physics-Informed Neural Network Control of Nonlinear Systems with Formal Guarantees. In Proceedings of the 8th IFAC Conference on Analysis and Design of Hybrid Systems.
- Long and Bayoumi (1993) Y. Long and M.M. Bayoumi. 1993. Feedback stabilization: control Lyapunov functions modelled by neural networks. In Proc. of CDC. 2812–2814 vol.3.
- Lyapunov (1992) Aleksandr Mikhailovich Lyapunov. 1992. The general problem of the stability of motion. Internat. J. Control 55, 3 (1992), 531–534.
- Meng et al. (2023) Yiming Meng, Ruikun Zhou, and Jun Liu. 2023. Learning Regions of Attraction in Unknown Dynamical Systems via Zubov-Koopman Lifting: Regularities and Convergence. arXiv preprint arXiv:2311.15119 (2023).
- Meng et al. (2024a) Yiming Meng, Ruikun Zhou, and Jun Liu. 2024a. Zubov-Koopman Learning of Maximal Lyapunov Functions. In Proceedings of American Control Conference.
- Meng et al. (2024b) Yiming Meng, Ruikun Zhou, Amartya Mukherjee, Maxwell Fitzsimmons, Christopher Song, and Jun Liu. 2024b. Physics-Informed Neural Network Policy Iteration: Algorithms, Convergence, and Verification. arXiv preprint (2024).
- Meurer et al. (2017) Aaron Meurer, Christopher P Smith, Mateusz Paprocki, Ondřej Čertík, Sergey B Kirpichev, Matthew Rocklin, AMiT Kumar, Sergiu Ivanov, Jason K Moore, Sartaj Singh, et al. 2017. SymPy: symbolic computing in Python. PeerJ Computer Science 3 (2017), e103.
- Packard et al. (2010) Andy Packard, Ufuk Topcu, Peter J Seiler Jr, and Gary Balas. 2010. Help on SOS. IEEE Control Systems Magazine 30, 4 (2010), 18–23.
- Papachristodoulou and Prajna (2002) Antonis Papachristodoulou and Stephen Prajna. 2002. On the construction of Lyapunov functions using the sum of squares decomposition. In Proceedings of IEEE Conference on Decision and Control, Vol. 3. IEEE, 3482–3487.
- Papachristodoulou and Prajna (2005) Antonis Papachristodoulou and Stephen Prajna. 2005. A tutorial on sum of squares techniques for systems analysis. In Proc. of ACC. IEEE, 2686–2700.
- Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32 (2019).
- Prokhorov (1994) Danil V Prokhorov. 1994. A Lyapunov machine for stability analysis of nonlinear systems. In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN’94), Vol. 2. IEEE, 1028–1031.
- Raissi et al. (2019) Maziar Raissi, Paris Perdikaris, and George E Karniadakis. 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics 378 (2019), 686–707.
- Sepulchre et al. (2012) Rodolphe Sepulchre, Mrdjan Jankovic, and Petar V Kokotovic. 2012. Constructive Nonlinear Control. Springer Science & Business Media.
- Tan and Packard (2008) Weehong Tan and Andrew Packard. 2008. Stability region analysis using polynomial and composite polynomial Lyapunov functions and sum-of-squares programming. IEEE Trans. Automat. Control 53, 2 (2008), 565–571.
- Topcu et al. (2008) Ufuk Topcu, Andrew Packard, and Peter Seiler. 2008. Local stability analysis using simulations and sum-of-squares programming. Automatica 44, 10 (2008), 2669–2675.
- Vannelli and Vidyasagar (1985) Anthony Vannelli and Mathukumalli Vidyasagar. 1985. Maximal Lyapunov functions and domains of attraction for autonomous nonlinear systems. Automatica 21, 1 (1985), 69–80.
- Virtanen et al. (2020) Pauli Virtanen, Ralf Gommers, Travis E Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, et al. 2020. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17, 3 (2020), 261–272.
- Zhou et al. (2022) Ruikun Zhou, Thanin Quartz, Hans De Sterck, and Jun Liu. 2022. Neural Lyapunov Control of Unknown Nonlinear Systems with Stability Guarantees. In Advances in Neural Information Processing Systems.
- Zubov (1964) V. I. Zubov. 1964. Methods of A. M. Lyapunov and Their Application. Noordhoff.
7. Appendix
This appendix presents the dynamic equations for the numerical examples discussed in Section 5. Additionally, we report the computational times required for the training and verification processes in Examples 5.1 and 5.2.
7.1. Van der Pol equation
The reversed Van der Pol equation is given by
(20) $\dot{x}_1 = -x_2, \qquad \dot{x}_2 = x_1 - \mu(1 - x_1^2)\,x_2,$
where $\mu > 0$ is a parameter that affects the stiffness of the equation.
7.2. Two-machine power system
The two-machine power system from (Vannelli and Vidyasagar, 1985) is given by
(21) $\dot{x}_1 = x_2, \qquad \dot{x}_2 = -0.5\,x_2 - \big(\sin(x_1 + \delta) - \sin\delta\big),$
where $\delta = \pi/3$.
7.3. Inverted pendulum with linear control
Consider an inverted pendulum
(22) $\dot{x}_1 = x_2, \qquad \dot{x}_2 = \frac{m g \ell \sin x_1 - b x_2 + u}{m \ell^2},$
where $x_1$ is the angular position, $x_2$ is the angular velocity, and the control input is the linear feedback $u = -k_1 x_1 - k_2 x_2$ with fixed gains $k_1$ and $k_2$.
7.4. Synthetic 10-dimensional system
Consider the 10-dimensional nonlinear system from (Grüne, 2021a); the full set of equations can be found in that reference and in the tool’s repository.
7.5. Computational times
Table 4. Computational times for training and verification.
| Model | Training (sec) | Verification (sec) |
| Van der Pol ($\mu = 1$) | 494 | 973 |
| Van der Pol ($\mu = 3$) | 502 | 442 |
| two-machine power | 473 | 3375 |
We report the computational times required for the training and verification processes in Examples 5.1 and 5.2 on a 2020 MacBook Pro with a 2 GHz Quad-Core Intel Core i5, without any GPU. For training, we randomly chose 300,000 collocation points in the domain and trained for 20 epochs. The sizes of the neural networks were as reported in Tables 1 and 2. Verification in dReal can be easily parallelized by adjusting the config parameter number_of_jobs, which may correspond to the number of cores/threads on the computer. For the laptop used in these experiments, there are four cores, allowing for eight threads via hyperthreading. As discussed in Remark 3, computational time is largely dependent on computer hardware. The computational times provided in Table 4 serve as a reference point. We expect that training time can be significantly improved by using GPUs, and verification time can be further reduced by utilizing additional cores.