Regularized Interior Point Methods for Constrained Optimization and Control
Abstract
Regularization and interior point approaches offer valuable perspectives for addressing constrained nonlinear optimization problems arising in control applications. This paper discusses the interactions between these techniques and proposes an algorithm that synergistically combines them. Building a sequence of closely related subproblems and approximately solving each of them, this approach inherently exploits warm-starting, early termination, and the possibility to adopt subsolvers tailored to specific problem structures. Moreover, by relaxing the equality constraints with a proximal penalty, the regularized subproblems are feasible and satisfy a strong constraint qualification by construction, allowing the safe use of efficient solvers. We show how regularization benefits the underlying linear algebra, and a detailed convergence analysis indicates that limit points tend to minimize constraint violation and to satisfy suitable optimality conditions. Finally, we report numerical results on robustness, indicating that the combined approach compares favorably against both interior point and augmented Lagrangian codes.
Keywords. Nonlinear programming, optimization-based control, interior point methods, augmented Lagrangian
1 Introduction
Mathematical optimization plays an important role in model-based and data-driven control systems, forming the basis for advanced techniques such as optimal control, nonlinear model predictive control (MPC) and parameter estimation. Significant research effort on computationally efficient real-time optimization algorithms has contributed to the success of MPC over the years, and yet the demand for fast and reliable methods for a broad spectrum of applications is growing; see [29, 27] and references therein. To tackle these challenges, it is desirable to have an algorithm that benefits from warm-starting information, can cope with infeasibility, is robust to problem scaling, and exploits the structure of the problem. To reduce computations and increase robustness, a common approach is to relax the requirements on the solutions, in terms of optimality, constraint violation, or both [14, 27]. In this work, we propose to address such features by combining proximal regularization and interior point techniques, for developing a stabilized, efficient and robust numerical method. We advocate for this strategy by bringing together and combining a variety of ideas from the nonlinear programming literature.
Let us consider the constrained nonconvex problem
(P) $\min_{x \in \mathbb{R}^n} \; f(x)$ subject to $c(x) = 0$, $x \geq 0$,
where the functions $f \colon \mathbb{R}^n \to \mathbb{R}$ and $c \colon \mathbb{R}^n \to \mathbb{R}^m$ are (at least) continuously differentiable. Problems with general inequality constraints on $x$ and $c(x)$ can be reformulated as (P) by introducing auxiliary variables. Nonlinear programming (NLP) problems such as (P) have been extensively studied and there exist several approaches for their numerical solution. Interior point (IP) [31, 32], penalty and augmented Lagrangian (AL) [8, 2, 6] and sequential programming [16] schemes are predominant ideas and have been re-combined in many ways [9, 5, 3].
Starting from linear programming, IP methods had a significant impact on the field of mathematical optimization [18]. By solving a sequence of barrier subproblems, they can efficiently handle inequality constraints and scale well with the problem size. The state-of-the-art solver Ipopt, described by [32], is an emblem of this remarkable success. However, relying on Newton-type schemes for approximately solving the subproblems, IP algorithms may suffer from degeneracy and lack of constraint qualifications if suitable counter-mechanisms are not implemented. In contrast, proximal techniques naturally cope with these scenarios thanks to their inherent regularizing action. Widely investigated in the convex setting [26], their favorable properties have been exploited to design (primal-dual) stabilized methods building on the proximal point algorithm [17, 20, 11]. The analysis of their close connection with the AL framework [25] led to the development of techniques applicable to more general problems [22, 10, 24].
The combination of IP and proximal strategies has been successfully leveraged in the context of convex quadratic programming [1, 7] and for linear [13] and nonlinear [28] least-squares problems. With this work we address general NLPs and devise a method for their numerical solution, which can be seen as an extension of a regularized Lagrange–Newton method to handle bound constraints via a barrier function [10], or as a proximally stabilized IP algorithm, generalizing the ideas put forward by [7]. (Although beyond the scope of this paper, topics such as infeasibility detection [3], ill-posed subproblems [2], and feasibility restoration [32, §3.3] are of great relevance in this context. Appropriate mechanisms should be incorporated in practical implementations, as they can significantly improve performance.)
Outline
The paper is organized as follows. In Section 2 we provide and comment on some relevant optimality notions. The methodology is discussed in Section 3 detailing the proposed algorithm, whose convergence properties are investigated in Section 4. We report numerical results on benchmark problems in Section 5 and conclude the paper in Section 6.
Notation
With $\mathbb{N}$, $\mathbb{R}$, and $\overline{\mathbb{R}} := \mathbb{R} \cup \{\infty\}$ we denote the natural, real and extended real numbers, respectively. We denote the set of real vectors of dimension $n$ as $\mathbb{R}^n$; a real matrix with $m$ rows and $n$ columns as $A \in \mathbb{R}^{m \times n}$ and its transpose as $A^\top$. For a vector $v \in \mathbb{R}^n$, its $i$-th element is $v_i$ and its squared Euclidean norm is $\|v\|^2 = v^\top v$. A vector or matrix with all zero elements is represented by $0$. The gradient of a function $f \colon \mathbb{R}^n \to \mathbb{R}$ at a point $x$ is denoted by $\nabla f(x)$; the Jacobian of a vector function $c \colon \mathbb{R}^n \to \mathbb{R}^m$ by $\nabla c(x) \in \mathbb{R}^{m \times n}$.
2 Optimality and Stationarity
Definition 2.1 (Feasibility).
Relative to (P), we say a point $x \in \mathbb{R}^n$ is feasible if $c(x) = 0$ and $x \geq 0$; it is strictly feasible if additionally $x > 0$.
Definition 2.2 (Approximate KKT stationarity).
Relative to (P), a point $x \in \mathbb{R}^n$ is $\epsilon$-KKT stationary for some tolerance $\epsilon \geq 0$ if there exist multipliers $y \in \mathbb{R}^m$ and $z \in \mathbb{R}^n$ such that
(1a) $\|\nabla f(x) + \nabla c(x)^\top y + z\|_\infty \leq \epsilon$,
(1b) $\|c(x)\|_\infty \leq \epsilon$, $\; x \geq -\epsilon e$, $\; z \leq \epsilon e$,
(1c) $|x_i z_i| \leq \epsilon$ for all $i = 1, \dots, n$,
where $e := (1, \dots, 1)^\top$.
When $\epsilon = 0$, the point is said to be KKT stationary.
Notice that (1c) provides a condition modeling the approximate satisfaction of the (elementwise) complementarity condition $x_i z_i = 0$ within some tolerance $\epsilon \geq 0$. The IP algorithm discussed in Section 3 satisfies a stronger version of these conditions, since the iterates it generates meet the constraints $x > 0$ and $z < 0$ by construction. Furthermore, we point out that the condition $|x_i z_i| \leq \epsilon$ is analogous to the perturbed complementarity $x_i z_i = -\mu$, more typical for interior point methods, but does not depend on a specific barrier function, e.g., the logarithmic barrier in [32, §2.1].
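For concreteness, the conditions of Definition 2.2 can be checked numerically as follows (a minimal sketch under the formulation reconstructed above; the callables `f_grad`, `c`, and `c_jac` are illustrative placeholders, not part of the paper):

```python
import numpy as np

def is_eps_kkt(x, y, z, eps, f_grad, c, c_jac):
    """Check eps-KKT stationarity of (x, y, z) for
    min f(x)  s.t.  c(x) = 0, x >= 0,  within tolerance eps."""
    stat = np.linalg.norm(f_grad(x) + c_jac(x).T @ y + z, np.inf) <= eps   # (1a)
    feas = (np.linalg.norm(c(x), np.inf) <= eps
            and np.all(x >= -eps) and np.all(z <= eps))                    # (1b)
    compl = np.all(np.abs(x * z) <= eps)                                   # (1c)
    return bool(stat and feas and compl)

# toy instance: min 0.5*||x||^2  s.t.  x_1 - 1 = 0,  x >= 0
f_grad = lambda x: x
c = lambda x: np.array([x[0] - 1.0])
c_jac = lambda x: np.array([[1.0, 0.0]])

x, y, z = np.array([1.0, 0.0]), np.array([-1.0]), np.zeros(2)
assert is_eps_kkt(x, y, z, 1e-8, f_grad, c, c_jac)
```

At the point $x = (1, 0)$ the stationarity residual $\nabla f(x) + \nabla c(x)^\top y + z$ vanishes with $y = -1$, so the check succeeds.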
We shall consider the limiting behavior of approximate KKT stationary points as the tolerance vanishes. In fact, having a sequence $\{(x^k, \epsilon_k)\}$ with $x^k$ $\epsilon_k$-KKT stationary for (P) and $\epsilon_k \to 0$ does not guarantee KKT stationarity of the limit points of $\{x^k\}$. This issue raises the need for defining KKT stationarity in an asymptotic sense [6, Def. 3.1].
Definition 2.3 (Asymptotic KKT stationarity).
Relative to (P), a feasible point $x^\ast$ is AKKT stationary if there exist sequences $\{x^k\} \subset \mathbb{R}^n$, $\{y^k\} \subset \mathbb{R}^m$ and $\{z^k\} \subset \mathbb{R}^n$, with $z^k \leq 0$, such that $x^k \to x^\ast$ and
(2a) $\nabla f(x^k) + \nabla c(x^k)^\top y^k + z^k \to 0$,
(2b) $x^k \odot z^k \to 0$,
where $\odot$ denotes the elementwise product.
3 Approach and Algorithm
The methodology presented in this section builds upon the AL framework, interpreted as a proximal point scheme in the nonconvex regime, and IP methods. The basic idea is to construct a sequence of proximally regularized subproblems and to approximately solve each of them as a single barrier subproblem, effectively merging the AL and IP outer loops. Reduced computational cost can be achieved with effective warm-starting of the IP iterations and with the tight entanglement of barrier and proximal penalty strategies, by monitoring and updating the parameter values alongside the inner tolerance.
A classical approach is to consider a sequence of bound-constrained Lagrangian (BCL) subproblems [8, 6]
(3) $\min_{x \geq 0} \; f(x) + \frac{1}{2\mu} \|c(x) + \mu \hat y\|^2$,
where $\mu > 0$ and $\hat y \in \mathbb{R}^m$ are some given penalty parameter and dual estimate, respectively. The nonlinearly-constrained Lagrangian (NCL) scheme [22] considers equality-constrained subproblems by introducing an auxiliary variable $r \in \mathbb{R}^m$ and the constraint $c(x) + r = 0$. Analogously, a proximal point perspective yields the equivalent reformulation
(4) $\min_{x, y} \; f(x) + \frac{\mu}{2} \|y\|^2$ subject to $c(x) + \mu (\hat y - y) = 0$, $\; x \geq 0$,
recovering the dual regularization term obtained, e.g., by [24, 10, 11]. By construction, these regularized subproblems are always feasible and satisfy a strong constraint qualification, namely the LICQ, at all points.
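As a quick sanity check of the equivalence between (3) and (4) (a numerical sketch on a toy instance, assuming the reformulation as stated here), eliminating the auxiliary variable through the equality constraint, $y = \hat y + c(x)/\mu$, recovers the AL objective of (3) at any point $x$:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 0.5 * x @ x                    # toy objective
c = lambda x: np.array([x[0] + x[1] - 1.0])  # toy equality constraint
mu, y_hat = 0.7, np.array([0.3])             # penalty parameter and dual estimate

for _ in range(5):
    x = rng.standard_normal(2)
    al = f(x) + np.sum((c(x) + mu * y_hat) ** 2) / (2 * mu)  # objective of (3)
    y = y_hat + c(x) / mu              # y determined by the constraint of (4)
    reg = f(x) + 0.5 * mu * np.sum(y ** 2)                   # objective of (4)
    assert np.isclose(al, reg)
```

Since the constraint of (4) determines $y$ uniquely for each $x$, the two formulations have identical objective landscapes in $x$; what changes is how the subsolver and its linear algebra see the problem.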
The regularized subproblems (3)–(4) can be numerically solved via IP algorithms. Let us consider a barrier parameter $\theta > 0$ and barrier functions $\beta_i \colon \mathbb{R} \to \overline{\mathbb{R}}$, $i = 1, \dots, n$, each with domain $(0, \infty)$, and such that $\beta_i(t) \to \infty$ as $t \to 0^+$ and $\beta_i^\prime(t) < 0$ for all $t > 0$. Exemplarily, the logarithmic function $\beta_i(t) = -\ln t$ is one such barrier function. Other choices can be considered as well, e.g., to handle bilateral constraints [4]. We collect these barrier functions to define $b \colon \mathbb{R}^n \to \overline{\mathbb{R}}$, $b(x) := \sum_{i=1}^n \beta_i(x_i)$, whose domain is $(0, \infty)^n$. Thus, analogously to [3], a barrier counterpart for the BCL subproblem (3) reads
(5) $\min_{x} \; f(x) + \frac{1}{2\mu} \|c(x) + \mu \hat y\|^2 + \theta \, b(x)$,
whereas for the constrained subproblem (4) this leads to
(6) $\min_{x, y} \; f(x) + \frac{\mu}{2} \|y\|^2 + \theta \, b(x)$ subject to $c(x) + \mu (\hat y - y) = 0$,
which is a regularized version of [32, Eq. 3] and reminiscent of [5, Eq. 2]. It should be stressed that, in stark contrast with classical AL and IP schemes, we intend to find an (approximate) solution to the regularized subproblem (4) by (approximately) solving only one barrier subproblem (6). Inspired by [9, 7], our rationale is to drive the barrier parameter $\theta$ and the inner tolerance $\epsilon$ concurrently toward zero, effectively knitting together proximal and barrier strategies.
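For the logarithmic barrier in particular, the identity $t \, \beta^\prime(t) = -1$ makes the link between the barrier parameter and the complementarity residual explicit: with $z = \theta \nabla b(x)$, one gets $x_i z_i = -\theta$ elementwise. A small numerical check (illustrative only):

```python
import numpy as np

beta = lambda t: -np.log(t)       # logarithmic barrier on (0, inf)
beta_prime = lambda t: -1.0 / t   # its derivative, negative on the domain

t = np.geomspace(1e-6, 1e3, 10)
assert np.all(beta_prime(t) < 0)            # beta' < 0 on the domain
assert np.allclose(t * beta_prime(t), -1.0)  # t * beta'(t) = -1 identically

# hence z = theta * grad b(x) elementwise gives x_i z_i = -theta:
theta, x = 1e-3, np.array([0.5, 2.0, 10.0])
z = theta * beta_prime(x)
assert np.allclose(x * z, -theta)
```

This is why, with the log barrier, driving $\theta$ below the tolerance directly controls the complementarity residual in (1c), independently of the iterate $x$.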
It should be noted that a primal (Tikhonov-like) regularization term is not explicitly included in (3)–(6). In fact, the original objective $f$ could be replaced by a (proximal) model of the form $x \mapsto f(x) + \frac{\delta}{2} \|x - \hat x\|^2$, with some given primal regularization parameter $\delta > 0$ and reference point $\hat x \in \mathbb{R}^n$. However, as this term can be interpreted as an inertia correction, we prefer the subsolver to account for its contribution; cf. [32, §3.1]. In this way, the subsolver can surgically tune the primal correction term as needed, possibly improving the convergence speed, and overcoming the issue that suitable values for $\delta$ are unknown a priori.
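The inertia-correction mechanism alluded to here can be sketched as follows (a simplified stand-alone illustration of the standard strategy, cf. [32, §3.1], not the subsolver's actual implementation): a primal shift $\delta$ on the (1,1)-block is increased until the regularized KKT matrix has inertia $(n, m, 0)$, i.e., $n$ positive and $m$ negative eigenvalues.

```python
import numpy as np

def inertia(K):
    """Numbers of positive, negative, and zero eigenvalues of symmetric K."""
    lam = np.linalg.eigvalsh(K)
    tol = 1e-10 * max(1.0, np.abs(lam).max())
    return (int((lam > tol).sum()), int((lam < -tol).sum()),
            int((np.abs(lam) <= tol).sum()))

def primal_correction(W, J, mu, delta0=1e-8, factor=10.0, max_tries=60):
    """Smallest tried shift d such that [[W + d*I, J.T], [J, -mu*I]]
    has inertia (n, m, 0); a crude version of the usual trial loop."""
    n, m = W.shape[0], J.shape[0]
    d = 0.0
    for _ in range(max_tries):
        K = np.block([[W + d * np.eye(n), J.T], [J, -mu * np.eye(m)]])
        if inertia(K) == (n, m, 0):
            return d, K
        d = delta0 if d == 0.0 else d * factor
    raise RuntimeError("no suitable primal correction found")

# indefinite (1,1)-block: a positive shift is required
W = np.diag([1.0, -2.0])
J = np.array([[1.0, 1.0]])
d, K = primal_correction(W, J, mu=0.5)
assert d > 0 and inertia(K) == (2, 1, 0)
```

Leaving this trial loop to the subsolver, as advocated in the text, lets $\delta$ adapt to the local curvature instead of being fixed a priori.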
The overall procedure is detailed in Algorithm 3.1. At every outer iteration, indexed by $k \in \mathbb{N}$, Algorithm 3.1 requires computing an approximate stationary point, with the associated Lagrange multiplier, for the regularized barrier subproblem (6). As the dual estimate $\hat y^k$ is selected from some bounded set, the AL scheme is safeguarded and has stronger global convergence properties [6, Ch. 4]. The assignment of the multiplier estimates follows from comparing and matching the stationarity conditions for (P) and (6). After checking termination, we monitor progress in constraint violation and complementarity, based on (1), and update the parameters $\mu_k$ and $\theta_k$ accordingly, as well as the inner tolerance $\epsilon_k$. We consider relaxed conditions for satisfactory feasibility and complementarity, as it is preferable to have the sequences $\{\mu_k\}$, $\{\theta_k\}$, and $\{\epsilon_k\}$ bounded away from zero, in order to avoid unnecessary ill-conditioning and tight tolerances. Sufficient conditions to guarantee boundedness of the penalty parameter away from zero are given, e.g., by [2, §5]. Remarkably, as established by Lemma 4.2 in Section 4, there is no need for the barrier parameter to vanish in order to achieve $\epsilon$-complementarity in the sense of (1c) for $\epsilon > 0$.
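A schematic rendition of this outer loop may help fix ideas (a hypothetical sketch only: the subsolver callback, update factors, parameter floors, and box safeguard below are illustrative placeholders, not the actual Algorithm 3.1):

```python
import numpy as np

def regip_outer(solve_subproblem, c, n, m, tol=1e-6,
                mu=1.0, theta=1.0, eps=1.0, y_max=1e6, max_iter=100):
    """Schematic proximal-IP outer loop: one barrier subproblem (6) per
    iteration, then monitoring and parameter updates.
    `solve_subproblem(y_hat, mu, theta, eps)` is assumed to return an
    approximate stationary point (x, y, z) of the regularized subproblem."""
    y = np.zeros(m)
    for _ in range(max_iter):
        y_hat = np.clip(y, -y_max, y_max)     # safeguarded dual estimate
        x, y, z = solve_subproblem(y_hat, mu, theta, eps)
        viol = np.linalg.norm(c(x), np.inf)   # constraint violation
        compl = np.abs(x * z).max()           # complementarity residual
        if viol <= tol and eps <= tol:        # declare (approximate) success
            return x, y, z
        if viol > tol:
            mu = max(0.5 * mu, 1e-8)          # strengthen the penalty
        if compl > tol:
            theta = max(0.5 * theta, 1e-8)    # tighten the barrier
        eps = max(0.5 * eps, tol)             # tighten the inner tolerance
    return x, y, z
```

With a subsolver stub that returns the known solution of a toy problem, the loop terminates once the inner tolerance reaches the target, illustrating the entangled updates of $\mu$, $\theta$, and $\epsilon$.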
We shall mention that considering equivalent yet different subproblem formulations may affect the practical performance of the subsolver. It is enlightening to pinpoint the effect of the dual regularization in (6) and to appreciate its interactions with the linear algebra routines used to solve the linear systems arising in Newton-type methods. Although (6) has more (possibly many more) variables than (5), a simple reordering yields matrices with the same structure [10, 24]. Let us have a closer look. Defining the Lagrangian function $\ell(x, y) := f(x) + y^\top c(x)$, the stationarity condition for (5) reads $\nabla_x \ell(x, \tilde y(x)) + \theta \nabla b(x) = 0$, where $\tilde y(x) := \hat y + c(x)/\mu$, and the corresponding Newton system is
(7) $\left[ H + \theta \nabla^2 b(x) + \tfrac{1}{\mu} J^\top J \right] \delta x = - \nabla_x \ell(x, \tilde y(x)) - \theta \nabla b(x)$, with $J := \nabla c(x)$,
where $H$ denotes the Hessian matrix $\nabla_{xx}^2 \ell(x, \tilde y(x))$ or a symmetric approximation thereof. A linear transformation yields the equivalent linear system
(8) $\begin{bmatrix} H + \theta \nabla^2 b(x) & J^\top \\ J & -\mu I \end{bmatrix} \begin{bmatrix} \delta x \\ \delta y \end{bmatrix} = - \begin{bmatrix} \nabla_x \ell(x, \tilde y(x)) + \theta \nabla b(x) \\ 0 \end{bmatrix}$.
Analogous Newton systems for (6) read
(9) $\begin{bmatrix} H + \theta \nabla^2 b(x) & 0 & J^\top \\ 0 & \mu I & -\mu I \\ J & -\mu I & 0 \end{bmatrix} \begin{bmatrix} \delta x \\ \delta y \\ \delta w \end{bmatrix} = - \begin{bmatrix} \nabla f(x) + \theta \nabla b(x) + J^\top w \\ \mu (y - w) \\ c(x) + \mu (\hat y - y) \end{bmatrix}$, where $w$ denotes the multiplier associated with the equality constraint of (6),
and formally solving for $\delta y$ gives the condensed system
(10) $\begin{bmatrix} H + \theta \nabla^2 b(x) & J^\top \\ J & -\mu I \end{bmatrix} \begin{bmatrix} \delta x \\ \delta w \end{bmatrix} = - \begin{bmatrix} \nabla f(x) + \theta \nabla b(x) + J^\top w \\ c(x) + \mu (\hat y - w) \end{bmatrix}$.
The resemblances between these linear systems are apparent, as well as the differences. The AL relaxation in (5) introduces a dual regularization for both the linear algebra and the nonlinear solver, whose hidden constraint $c(x) + \mu (\hat y - y) = 0$ holds pointwise due to the identity $y = \tilde y(x)$. We remark that, entering the (2,2)-block as $-\mu I$, the dual regularization prevents issues due to linear dependence of the constraint gradients. Furthermore, the primal regularization is left to the inertia correction strategy of the subsolver, affecting the (1,1)-block as in [32, §3.1]. If the approximation $H + \theta \nabla^2 b(x)$ is positive definite, e.g., by adopting suitable quasi-Newton techniques, the matrix in (10) is symmetric quasi-definite and can be efficiently factorized with tailored linear algebra routines [30].
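The relation between the condensed quasi-definite system (10) and its normal-equations counterpart, cf. (7), is easy to verify numerically (a sketch with random data under the block structure shown above):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, mu = 5, 2, 0.1
Q = rng.standard_normal((n, n))
W = Q @ Q.T + n * np.eye(n)       # stands for H + theta * Hess(b), positive definite
J = rng.standard_normal((m, n))   # constraint Jacobian
r1, r2 = rng.standard_normal(n), rng.standard_normal(m)

# augmented, symmetric quasi-definite system, cf. (10)
K = np.block([[W, J.T], [J, -mu * np.eye(m)]])
sol = np.linalg.solve(K, np.concatenate([r1, r2]))
dx, dw = sol[:n], sol[n:]

# eliminating dw recovers the normal-equations form, cf. (7)
dx_cond = np.linalg.solve(W + J.T @ J / mu, r1 + J.T @ r2 / mu)
assert np.allclose(dx, dx_cond)
```

Both routes give the same primal step; the augmented form avoids forming $J^\top J / \mu$ explicitly, which is what makes quasi-definite factorizations [30] attractive when $\mu$ is small.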
4 Convergence Analysis
In this section we analyze the asymptotic properties of the iterates generated by Algorithm 3.1 under the following blanket assumptions:
- (a1) Functions $f$ and $c$ in (P) are continuously differentiable.
- (a2) Subproblems (6) are well-posed for all parameter values, namely for any $\mu > 0$, $\theta > 0$, and $\hat y \in \mathbb{R}^m$.
First, we characterize the iterates in terms of stationarity.
Lemma 4.1.
Consider a sequence $\{(x^k, y^k, z^k)\}$ generated by Algorithm 3.1. Then, for all $k \in \mathbb{N}$, it is $x^k > 0$, $z^k \leq 0$, and the following conditions hold:
(11a) $\|\nabla f(x^k) + \nabla c(x^k)^\top y^k + z^k\|_\infty \leq \epsilon_k$,
(11b) $\|c(x^k) + \mu_k (\hat y^k - y^k)\|_\infty \leq 2 \epsilon_k$.
Proof.
Positivity of $x^k$ follows from the barrier function $b$ having domain $(0, \infty)^n$, whereas nonpositivity of $z^k = \theta_k \nabla b(x^k)$ is a consequence of $\beta_i^\prime(t) < 0$ for all $t > 0$ and $i = 1, \dots, n$. Based on Definition 2.2 and the inner stopping criterion of Algorithm 3.1, the $\epsilon_k$-KKT stationarity of $(x^k, v^k)$ for (6), with multiplier $y^k$, yields (11a) along with
(12a) $\|\mu_k (v^k - y^k)\|_\infty \leq \epsilon_k$,
(12b) $\|c(x^k) + \mu_k (\hat y^k - v^k)\|_\infty \leq \epsilon_k$,
where $v^k$ denotes the auxiliary primal variable of (6). Combining (12a) and (12b) through the triangle inequality yields (11b). ∎
Patterning [12, Thm 4.2(ii)], we establish asymptotic complementarity.
Lemma 4.2.
Consider a sequence $\{(x^k, y^k, z^k)\}$ of iterates generated by Algorithm 3.1 with $\epsilon_k \to 0$. Then, it holds $x^k \odot z^k \to 0$ for $k \to \infty$.
Proof.
The algorithm can terminate in finite time only if the returned triplet $(x^k, y^k, z^k)$ satisfies $x^k \odot z^k = 0$. Excluding this ideal situation, we may assume that it runs indefinitely and that consequently $\theta_k \to 0$, owing to the update rules of Algorithm 3.1 and recalling that $x^k > 0$ and $z^k \leq 0$ for all $k \in \mathbb{N}$ by Lemma 4.1. Consider now an arbitrary index $i \in \{1, \dots, n\}$ and the two possible cases. If $x_i^k \to 0$, then the statement readily follows from the local boundedness of $t \mapsto t \, \beta_i^\prime(t)$ around the origin. If instead a subsequence $(x_i^k)_{k \in K}$ remains bounded away from zero, then $(\beta_i^\prime(x_i^k))_{k \in K}$ is bounded and therefore $x_i^k z_i^k = \theta_k x_i^k \beta_i^\prime(x_i^k) \to 0$ as $k \to_K \infty$, proving the statement since $\theta_k \to 0$. The claim then follows from the arbitrariness of the index and the subsequence. ∎
Like all penalty-type methods in the nonconvex setting, Algorithm 3.1 may generate limit points that are infeasible for (P). Patterning standard arguments, the following result gives sufficient conditions for the feasibility of limit points; cf. [6, Ex. 4.12].
Proposition 4.3.
Consider a sequence $\{x^k\}$ of iterates generated by Algorithm 3.1. Then, each limit point of $\{x^k\}$ is feasible for (P) if one of the following conditions holds:
- (i) the sequence $\{\mu_k\}$ is bounded away from zero, or
-
(ii)
there exists some such that for all
These conditions are generally difficult to check a priori. Nevertheless, in the situation where each iterate $x^k$ is actually a (possibly inexact) global minimizer of (6), limit points generated by Algorithm 3.1 have minimum constraint violation and tend to minimize the objective function subject to minimal infeasibility [6, Thm 5.1, Thm 5.3]. In particular, limit points are indeed feasible if (P) admits feasible points. However, these properties cannot be expected when solving the subproblems only up to stationarity. Nonetheless, even in the case where a limit point is not necessarily feasible, the next result shows that it is at least a stationary point for a feasibility problem associated to (P).
Proposition 4.4.
Consider a sequence $\{(x^k, y^k, z^k)\}$ generated by Algorithm 3.1 with $\epsilon_k \to 0$. Then each limit point $x^\ast$ of $\{x^k\}$ is KKT stationary for the problem
(13) $\min_{x \geq 0} \; \tfrac{1}{2} \|c(x)\|^2$.
Proof.
We may consider two cases, depending on the sequence $\{\mu_k\}$. If $\{\mu_k\}$ remains bounded away from zero, then the update rules of Algorithm 3.1 imply that $\|c(x^k)\| \to 0$ for $k \to \infty$. Continuity of $c$ and properties of norms yield $c(x^\ast) = 0$ for every limit point $x^\ast$. Furthermore, by construction, we have $x^k > 0$ for all $k \in \mathbb{N}$, hence $x^\ast \geq 0$. Altogether, this shows that $x^\ast$ is feasible for (P), namely a global minimizer for the feasibility problem (13) and, therefore, a KKT stationary point thereof. Assume now that $\mu_k \to 0$. Define $\tilde y^k$ and $\tilde z^k$ as
$\tilde y^k := \mu_k y^k$ and $\tilde z^k := \mu_k z^k$ for all $k \in \mathbb{N}$. In view of Lemma 4.1, we have that (11a) and (11b) hold for all $k \in \mathbb{N}$. Multiplying (11a) by $\mu_k$, substituting and rearranging, we obtain
$\|\mu_k \nabla f(x^k) + \nabla c(x^k)^\top \tilde y^k + \tilde z^k\|_\infty \leq \mu_k \epsilon_k$.
Now, let $x^\ast$ be a limit point of $\{x^k\}$ and $K \subseteq \mathbb{N}$ a subsequence such that $x^k \to_K x^\ast$. Then the sequence $(\tilde y^k)_{k \in K}$ is bounded, and so is $(\tilde z^k)_{k \in K}$ by construction. Recalling from Lemma 4.1 that $x^k > 0$ and $z^k \leq 0$, and observing that (11b), the boundedness of $\{\hat y^k\}$ and $\mu_k \to 0$ imply $\tilde y^k \to_K c(x^\ast)$, we shall now take the limit of the inequality above for $k \to_K \infty$, resulting in
$\nabla c(x^\ast)^\top c(x^\ast) + \tilde z^\ast = 0$
for some $\tilde z^\ast \leq 0$, limit point of $(\tilde z^k)_{k \in K}$. As a limit point, $x^\ast$ together with $\tilde z^\ast$ satisfy $x^\ast \odot \tilde z^\ast = 0$ by Lemma 4.2. Since we also have $x^\ast \geq 0$, it follows that $x^\ast$ is KKT stationary for (13) according to Definition 2.2. ∎
Finally, we qualify the output of Algorithm 3.1 in the case of feasible limit points. In particular, it is shown that any feasible limit point is AKKT stationary for (P) in the sense of Definition 2.3. Under some additional boundedness conditions, feasible limit points are KKT stationary, according to Definition 2.2.
Theorem 4.5.
Let $\{(x^k, y^k, z^k)\}$ be a sequence of iterates generated by Algorithm 3.1 with $\epsilon_k \to 0$. Let $x^\ast$ be a feasible limit point of $\{x^k\}$ and $K \subseteq \mathbb{N}$ a subsequence such that $x^k \to_K x^\ast$. Then, $x^\ast$ is AKKT stationary for (P); if, in addition, the subsequence $\{(y^k, z^k)\}_{k \in K}$ remains bounded, then $x^\ast$ is KKT stationary for (P).
Proof.
Provided that the iterates admit a feasible limit point, finite termination of Algorithm 3.1 with an $\epsilon$-KKT stationary point can be established as a direct consequence of Theorem 4.5.
5 Numerical Results
In this section we test an instance of the proposed regularized interior point approach, denoted RegIP, on the CUTEst benchmark problems [19]. RegIP is compared in terms of robustness against the IP solver Ipopt [32] and the AL solver Percival [15], which is based on a BCL method [8] coupled with a trust-region matrix-free solver [21] for the subproblems. We report neither runtimes nor iteration counts, since a fair comparison would require close inspection of heuristics and fallbacks [32, §3].
We implemented RegIP in Julia and set up the numerical experiments adopting the JSO software infrastructure by [23]. The IP solver Ipopt acts as subsolver to execute the inner step of Algorithm 3.1, warm-started at the current primal and dual estimates. We use its parameter tol to set the (inner) tolerance $\epsilon_k$, disabling other termination conditions, and let Ipopt control the barrier parameter as needed to approximately solve the regularized subproblem. (Solving a sequence of barrier subproblems may hinder the computational efficiency of RegIP compared to the approach behind Algorithm 3.1, but does not degrade its reliability. Ongoing research focuses on solving (6) and letting the IP subsolver update the barrier parameter after warm-starting at the current primal-dual estimate, in the spirit of [7, Alg. 3].) We let the safeguarding set $Y$ be a large bounded box and choose $\hat y^k$ by projecting the current dual estimate onto $Y$. We set the initial penalty parameter $\mu_0$, the initial inner tolerance $\epsilon_0$, and the remaining update parameters to fixed default values. RegIP declares success, and returns an $\epsilon$-KKT stationary point, as soon as $\|c(x^k)\|_\infty \leq \epsilon$ and $\epsilon_k \leq \epsilon$. (The condition $\epsilon_k \leq \epsilon$ implies both $\epsilon$-stationarity (1a) and $\epsilon$-complementarity (1c) required for $\epsilon$-KKT stationarity. This follows from the observation that, in RegIP, the subsolver approximately solves (4), not (6), as discussed above.) Instead, if the penalty parameter and the inner tolerance have reached their minimal values while the constraint violation remains above the tolerance, RegIP stops declaring (local) infeasibility. For Ipopt, we set the tolerance tol to $\epsilon$, remove the other desired thresholds, and disable termination based on acceptable iterates. For Percival, we set the absolute and relative tolerances atol, rtol, and ctol to match $\epsilon$. We select all CUTEst problems with at most 1000 variables and constraints, obtaining a test set with 609 problems. All solvers are provided with the primal-dual initial point available in CUTEst, the same time limit, the same maximum number of iterations, and a tolerance $\epsilon$. A solver is deemed to solve a problem if it returns with a successful status; it fails otherwise.
The source codes for the numerical experiments have been archived on Zenodo at
Table 1 summarizes the results, stratified by solver, termination tolerance $\epsilon$, and problem size (number of variables and constraints). For each combination, we indicate the number of times RegIP solves a problem that the other solver fails ("W") or also solves ("T+"), and the number of times RegIP fails on a problem that the other one also fails ("T−") or solves ("L"). The results show that RegIP succeeds on more problems than the other solvers, consistently for both low and high accuracy, indicating that the underlying regularized IP approach can form the basis for reliable and scalable solvers. (We agree with [5, §5] on the fact that "strong statements concerning the relative efficiency or robustness […] are not possible in nonlinear optimization.")
RegIP against Ipopt

| Size | | Tolerance | | | | Tolerance | | | |
| Min | Max | W | T+ | T− | L | W | T+ | T− | L |
| 0 | 10 | 19 | 417 | 25 | 3 | 15 | 417 | 29 | 3 |
| 11 | 100 | 6 | 60 | 7 | 1 | 7 | 58 | 8 | 1 |
| 101 | 1000 | 6 | 57 | 8 | 0 | 7 | 53 | 10 | 1 |

RegIP against Percival

| Size | | Tolerance | | | | Tolerance | | | |
| Min | Max | W | T+ | T− | L | W | T+ | T− | L |
| 0 | 10 | 17 | 419 | 20 | 8 | 19 | 413 | 24 | 8 |
| 11 | 100 | 12 | 54 | 8 | 0 | 18 | 47 | 8 | 1 |
| 101 | 1000 | 37 | 26 | 8 | 0 | 37 | 23 | 11 | 0 |
6 Conclusion
This paper has presented a regularized interior point approach to solving constrained nonlinear optimization problems. Operating as an outer regularization layer, a quadratic proximal penalty provides robustness whilst consuming minimal computational effort once embedded into existing interior point codes as a principled inertia correction strategy. Furthermore, regularizing the equality constraints allows the safe adoption of more efficient linear algebra routines, while waiving the need for an infeasibility detection mechanism within the subsolver. Preliminary numerical results indicate that a close integration of proximal regularization within interior point schemes is key to providing efficient and robust solvers. We encourage further research in this direction.
Acknowledgements
I gratefully acknowledge the support of Ryan Loxton and the Centre for Optimisation and Decision Science for giving me the opportunity to visit Curtin University. I would also like to thank Hoa T. Bui for her friendly hospitality and lively discussions during this time in Perth.
References
- [1] A. Altman and J. Gondzio, Regularized symmetric indefinite systems in interior point methods for linear and quadratic optimization, Optimization Methods and Software 11 (1999), 275–302, doi:10.1080/10556789908805754.
- [2] R. Andreani, E. G. Birgin, J. M. Martínez, and M. L. Schuverdt, On Augmented Lagrangian Methods with General Lower-Level Constraints, SIAM Journal on Optimization 18 (2008), 1286–1309, doi:10.1137/060654797.
- [3] P. Armand and N. N. Tran, Rapid infeasibility detection in a mixed logarithmic barrier-augmented Lagrangian method for nonlinear optimization, Optimization Methods and Software 34 (2019), 991–1013, doi:10.1080/10556788.2018.1528250.
- [4] E. Bertolazzi, F. Biral, and M. Da Lio, Real-Time Motion Planning for Multibody Systems, Multibody System Dynamics 17 (2007), 119–139, doi:10.1007/s11044-007-9037-7.
- [5] E. G. Birgin, L. F. Bueno, and J. M. Martínez, Sequential equality-constrained optimization for nonlinear programming, Computational Optimization and Applications 65 (2016), 699–721, doi:10.1007/s10589-016-9849-6.
- [6] E. G. Birgin and J. M. Martínez, Practical Augmented Lagrangian Methods for Constrained Optimization, Society for Industrial and Applied Mathematics, Philadelphia, PA, 2014.
- [7] S. Cipolla and J. Gondzio, Proximal stabilized Interior Point Methods for quadratic programming and low-frequency-updates preconditioning techniques, 2022, doi:10.48550/arxiv.2205.01775.
- [8] A. R. Conn, N. I. M. Gould, and P. L. Toint, A Globally Convergent Augmented Lagrangian Algorithm for Optimization with General Constraints and Simple Bounds, SIAM Journal on Numerical Analysis 28 (1991), 545–572, doi:10.1137/0728030.
- [9] F. E. Curtis, A penalty-interior-point algorithm for nonlinear constrained optimization, Mathematical Programming Computation 4 (2012), 181–209, doi:10.1007/s12532-012-0041-4.
- [10] A. De Marchi, Augmented Lagrangian methods as dynamical systems for constrained optimization, in 60th IEEE Conference on Decision and Control (CDC), IEEE, Austin, TX, 2021, 6533–6538, doi:10.1109/cdc45484.2021.9683199.
- [11] A. De Marchi, On a Primal-Dual Newton Proximal Method for Convex Quadratic Programs, Computational Optimization and Applications (2022), doi:10.1007/s10589-021-00342-y.
- [12] A. De Marchi and A. Themelis, An Interior Proximal Gradient Method for Nonconvex Optimization, 2022, doi:10.48550/arxiv.2208.00799.
- [13] M. Dehghani, A. Lambe, and D. Orban, A regularized interior-point method for constrained linear least squares, INFOR: Information Systems and Operational Research 58 (2020), 202–224, doi:10.1080/03155986.2018.1559428.
- [14] M. Diehl, H. J. Ferreau, and N. Haverbeke, Efficient numerical methods for nonlinear MPC and moving horizon estimation, in Nonlinear Model Predictive Control: Towards New Challenging Applications, Springer, 2009, 391–417, doi:10.1007/978-3-642-01094-1_32.
- [15] E. A. dos Santos and A. S. Siqueira, Percival.jl: an augmented Lagrangian method, 2020, doi:10.5281/zenodo.3969045, https://github.com/JuliaSmoothOptimizers/Percival.jl.
- [16] A. V. Fiacco and G. P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques, Wiley, New York, 1968.
- [17] M. P. Friedlander and D. Orban, A primal-dual regularized interior-point method for convex quadratic programs, Mathematical Programming Computation 4 (2012), 71–107, doi:10.1007/s12532-012-0035-2.
- [18] J. Gondzio, Interior point methods 25 years later, European Journal of Operational Research 218 (2012), 587–601, doi:10.1016/j.ejor.2011.09.017.
- [19] N. I. M. Gould, D. Orban, and P. L. Toint, CUTEst: a Constrained and Unconstrained Testing Environment with safe threads for mathematical optimization, Computational Optimization and Applications 60 (2015), 545–557, doi:10.1007/s10589-014-9687-3.
- [20] D. Liao-McPherson and I. Kolmanovsky, FBstab: A proximally stabilized semismooth algorithm for convex quadratic programming, Automatica 113 (2020), 108801, doi:10.1016/j.automatica.2019.108801.
- [21] C. J. Lin and J. J. Moré, Newton's Method for Large Bound-Constrained Optimization Problems, SIAM Journal on Optimization 9 (1999), 1100–1127, doi:10.1137/s1052623498345075.
- [22] D. Ma, K. L. Judd, D. Orban, and M. A. Saunders, Stabilized Optimization Via an NCL Algorithm, in Numerical Analysis and Optimization, M. Al-Baali, L. Grandinetti, and A. Purnama (eds.), Springer, 2018, 173–191, doi:10.1007/978-3-319-90026-1_8.
- [23] D. Orban and A. S. Siqueira, JuliaSmoothOptimizers: Infrastructure and Solvers for Continuous Optimization in Julia, 2019, doi:10.5281/zenodo.2655082, https://juliasmoothoptimizers.github.io.
- [24] A. Potschka and H. G. Bock, A sequential homotopy method for mathematical programming problems, Mathematical Programming 187 (2021), 459–486, doi:10.1007/s10107-020-01488-z.
- [25] R. T. Rockafellar, Augmented Lagrange Multiplier Functions and Duality in Nonconvex Programming, SIAM Journal on Control 12 (1974), 268–285, doi:10.1137/0312021.
- [26] R. T. Rockafellar, Monotone Operators and the Proximal Point Algorithm, SIAM Journal on Control and Optimization 14 (1976), 877–898, doi:10.1137/0314056.
- [27] N. Saraf and A. Bemporad, An efficient bounded-variable nonlinear least-squares algorithm for embedded MPC, Automatica 141 (2022), 110293, doi:10.1016/j.automatica.2022.110293.
- [28] A. S. Siqueira and D. Orban, A Regularized Interior-Point Method for Constrained Nonlinear Least Squares, 2018, https://abelsiqueira.github.io/assets/2018-07-23-xiibrazopt.pdf (visited September 21, 2022). XII Brazilian Workshop on Continuous Optimization.
- [29] P. Sopasakis, E. Fresk, and P. Patrinos, OpEn: Code Generation for Embedded Nonconvex Optimization, IFAC-PapersOnLine 53 (2020), 6548–6554, doi:10.1016/j.ifacol.2020.12.071. 21st IFAC World Congress.
- [30] R. J. Vanderbei, Symmetric Quasidefinite Matrices, SIAM Journal on Optimization 5 (1995), 100–113, doi:10.1137/0805005.
- [31] R. J. Vanderbei and D. F. Shanno, An Interior-Point Algorithm for Nonconvex Nonlinear Programming, Computational Optimization and Applications 13 (1999), 231–252, doi:10.1023/a:1008677427361.
- [32] A. Wächter and L. T. Biegler, On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming, Mathematical Programming 106 (2006), 25–57, doi:10.1007/s10107-004-0559-y.