Single-Loop Deterministic and Stochastic Interior-Point Algorithms for Nonlinearly Constrained Optimization
An interior-point algorithm framework is proposed, analyzed, and tested for solving nonlinearly constrained continuous optimization problems. The main setting of interest is when the objective and constraint functions may be nonlinear and/or nonconvex, and when constraint values and derivatives are tractable to compute, but objective function values and derivatives can only be estimated. The algorithm is intended primarily for a setting that is similar to that of stochastic-gradient methods for unconstrained optimization, namely, the setting in which stochastic-gradient estimates are available and employed in place of gradients of the objective, and in which no objective function values (nor estimates of them) are employed. This is achieved by the interior-point framework having a single-loop structure rather than the nested-loop structure that is typical of contemporary interior-point methods. For completeness, convergence guarantees for the framework are provided for both deterministic and stochastic settings. Numerical experiments show that the algorithm yields good performance on a large set of test problems.
1 Introduction
The interior-point methodology is one of the most popular and effective derivative-based algorithm methodologies for solving inequality-constrained continuous optimization problems. Indeed, while there has been a long history of development of other types of algorithms—e.g., of the augmented-Lagrangian and sequential-quadratic-optimization varieties [16]—interior-point methods are arguably the most effective class of approaches for solving many constrained continuous optimization problems. In the setting of nonconvex optimization, which is the setting of interest in this paper, the effectiveness of interior-point methods is evidenced by the fact that the most popular software packages for solving such problems are based on interior-point methods. These include the packages Ipopt [19], Knitro [5], and LOQO [18]. This popularity persists despite the fact that interior-point methods for solving nonconvex problems do not necessarily achieve the same optimal worst-case complexity properties that can be achieved in convex settings.
Despite its widespread success, little work has been done on extending the interior-point methodology to the stochastic regime. Here, we are focused on the setting in which constraint values and derivatives are tractable to compute, but objective values and derivatives are not. Furthermore, as in the setting of stochastic-gradient-based methods for solving unconstrained problems, our focus is on an interior-point method that employs stochastic-gradient estimates (in place of exact objective gradients) and no objective function evaluations. Given the success of interior-point methods in the deterministic regime, there is ample motivation to explore their potential theoretical and practical behavior in such a stochastic regime. This is the main motivation for the work presented in this paper.
We contend that there are at least a few significant obstacles to the design and analysis of a stochastic interior-point method, which may be at least part of the reason that there has been little prior work on this topic. Firstly, interior-point methods often have a nested-loop structure wherein an inner loop is designed to solve (approximately) a so-called barrier subproblem for a fixed value of a barrier parameter and the outer loop involves decreasing the barrier parameter. Determining when to terminate the inner loop typically involves evaluating a measure of stationarity. However, in the aforementioned stochastic setting, evaluating such a measure is not tractable since objective-gradient evaluations are not tractable. Secondly, the barrier functions introduced in interior-point methods have corresponding gradient functions that are neither Lipschitz continuous nor bounded. This presents major difficulties since, for many stochastic-gradient-based methods, Lipschitz continuity and boundedness of the gradients are necessary features for ensuring convergence guarantees. Thirdly, interior-point methods regularly involve globalization mechanisms such as line searches or trust-region step-acceptance mechanisms that require exact function evaluations, but these are not tractable to perform in the aforementioned stochastic setting that is of interest here.
1.1 Contributions and Limitations
To provide a significant advancement in the design and analysis of stochastic-gradient-based interior-point methods, in this paper we propose and analyze a single-loop interior-point algorithm framework that offers rigorous convergence guarantees in both deterministic and stochastic settings. Let us emphasize upfront that we do not expect our method to be competitive with state-of-the-art interior-point methods in the deterministic regime, and especially not with those that employ second-order derivatives of the objective. Nevertheless, we present an algorithm and analyze its convergence properties for that setting for completeness and since it helps to elucidate the essential features of an algorithm that are necessary for achieving convergence guarantees in the stochastic regime.
Our method is an extension of the approach proposed in [7], which was proposed and analyzed for the bound-constrained setting. We go beyond the bound-constrained setting and propose a framework that is applicable for problems with nonlinear inequality constraints, at least as long as a strictly feasible initial point for the algorithm can be provided. For the sake of generality, we present the framework in a manner such that linear equality constraints can also be present, although we emphasize that our convergence guarantees, in both the deterministic and stochastic regimes, do not allow for the presence of nonlinear equality constraints. In our concluding remarks, we discuss the challenges that would be faced when trying to extend our approach to settings with nonlinear equality constraints.
Most of the assumptions that are required for our convergence guarantees are standard in the literature on interior-point algorithms, specifically for that on so-called feasible interior-point methods that maintain feasibility (with respect to all of the constraints) at all algorithm iterates. However, we require one additional assumption that is not standard. This assumption essentially requires that, when an algorithm iterate is close to a constraint boundary, a direction of sufficient descent can be computed that moves sufficiently away from the boundary. It is, in essence, an assumption that combines two aspects: (a) a type of nondegeneracy assumption and (b) an assumption that the barrier parameter is initialized to be sufficiently large compared to a neighborhood parameter that is introduced in our algorithm. We demonstrate these aspects through some illustrative examples. Since, to the best of our knowledge, there are no other interior-point methods in the literature that possess convergence guarantees in a stochastic setting that is comparable to ours, we contend that our algorithm and analysis together constitute a notable contribution to the literature despite our method being a feasible interior-point method and our need to introduce a nonstandard assumption. Future research may reveal techniques that can offer convergence guarantees for a stochastic-gradient-based interior-point method in other settings.
We also provide the results of numerical experiments showing that our algorithm performs well when employed to solve challenging test problems.
1.2 Notation
We use $\mathbb{R}$ to denote the set of real numbers and, given $a \in \mathbb{R}$, we use $\mathbb{R}_{\ge a}$ (resp., $\mathbb{R}_{>a}$) to denote the set of real numbers that are greater than or equal to $a$ (resp., strictly greater than $a$). Superscripts are used to indicate the dimension of a vector or matrix whose elements are restricted to such a set; e.g., the set of $n$-dimensional real vectors is denoted as $\mathbb{R}^n$. The set of positive integers is denoted as $\mathbb{N}$ and, given $m \in \mathbb{N}$, the set of positive integers up to $m$ is denoted as $[m] := \{1, \dots, m\}$. The set of $n$-by-$n$-dimensional symmetric positive semidefinite (resp., definite) matrices is denoted as $\mathbb{S}^n_{\succeq 0}$ (resp., $\mathbb{S}^n_{\succ 0}$).
A real-valued sequence is always introduced as $\{x_k\} \subset \mathcal{X}$, where $\mathcal{X}$ is a real vector (sub)space and the subset notation indicates that $x_k \in \mathcal{X}$ for each index $k \in \mathbb{N}$ (although elements in the sequence may repeat). Here, subscripts are used to indicate the index number in a sequence, although in some situations a subscript is used to indicate an element number of a vector or vector-valued function. When indicating the $i$th element of a vector $x$, the notation $x_i$ is used. In all cases, the meaning of a subscript is clear from the context. The notation $\{x_k\} \to x$ indicates that the sequence converges to $x$. The notation $\{\theta_k\} \searrow 0$ indicates that $\{\theta_k\}$ is a sequence of positive real numbers that vanishes monotonically.
Given a vector $v \in \mathbb{R}^n$, we use $\operatorname{Diag}(v)$ to denote the diagonal matrix for which, for all $i \in [n]$, the $(i,i)$ element is $v_i$. Given two vectors $u$ and $v$, their Hadamard (i.e., component-wise) product is denoted as $u \circ v$. We use $\mathbb{1}$ to denote a vector whose elements are all equal to one; the length of such a vector is determined by the context in which it appears. Given a matrix $A$, the null space of $A$ is denoted as $\operatorname{Null}(A)$. Given symmetric matrices $M$ and $W$, we use $M \succeq W$ (resp., $M \succ W$) to indicate that $M - W$ is positive semidefinite (resp., positive definite). Given $p \in [1, \infty]$ and a vector $v$, the $\ell_p$-norm of $v$ is written as $\|v\|_p$. Finally, given a set $\mathcal{C}$, its interior is denoted as $\operatorname{int}(\mathcal{C})$ and its polar cone is denoted as $\mathcal{C}^\circ := \{y : y^T x \le 0\ \text{for all}\ x \in \mathcal{C}\}$.
1.3 Outline
In §2, we present our problem of interest and preliminary commentary. In §3, we present our proposed framework and prove basic facts about its behavior. Section 4 contains our assumptions and convergence analyses for the deterministic and stochastic settings. As previously mentioned, our analyses rely on an assumption that is not standard; we discuss this assumption in detail in §5, highlighting certain settings of interest in which it indeed holds. Details about a practical implementation of our algorithm and resulting numerical experiments are presented in §6. Concluding remarks are provided in §7.
2 Problem Formulation and Preliminaries
Our proposed algorithm framework (Algorithm 1) is designed to solve nonlinearly constrained optimization problems of the form
$$\min_{x \in \mathbb{R}^n}\ f(x)\ \ \text{subject to}\ \ Ax = b,\ \ c(x) \le 0, \tag{1}$$
where $A \in \mathbb{R}^{\ell \times n}$ (with $\ell \le n$), $b \in \mathbb{R}^{\ell}$, and the functions $f : \mathbb{R}^n \to \mathbb{R}$ and $c : \mathbb{R}^n \to \mathbb{R}^m$ satisfy the following assumption with respect to the feasible region of problem (1), namely, $\mathcal{F} := \{x \in \mathbb{R}^n : Ax = b,\ c(x) \le 0\}$.
Assumption 2.1.
There exists an open set $\mathcal{X} \supseteq \mathcal{F}$ over which the objective function $f$ is continuously differentiable and bounded below by $f_{\inf} \in \mathbb{R}$ and the constraint function $c$ is continuously differentiable. In addition, the matrix $A$ has full row rank. Finally, there exists $\bar{x} \in \mathcal{F}$ such that $c(\bar{x}) < 0$, i.e., where the inequality holds strictly, component-wise.
The elements of Assumption 2.1 pertaining to $f$ and $c$ are mostly standard in the literature on smooth nonlinearly constrained optimization. The only exception is the assumption that there exists $\bar{x} \in \mathcal{F}$ such that $c(\bar{x}) < 0$. This aspect of the assumption precludes the presence of nonlinear equality constraints (posed through two-sided inequalities). It is needed in our setting since our algorithm requires an initial point that is strictly feasible with respect to the nonaffine constraints.
Assumption 2.1 does not require that the feasible region is bounded, although our convergence analysis in §4 requires that the iterates remain within a set over which $c$ remains component-wise bounded below, and over which the functions $f$, $\nabla f$, $c_i$, and $\nabla c_i$ for all $i \in [m]$ are Lipschitz continuous; see Assumption 4.1. Our analysis could be extended to the setting in which the matrix $A$ does not have full row rank as long as the linear system $Ax = b$ is consistent. Our algorithm framework requires that the initial iterate is contained in $\{x : Ax = b\}$ and that each step lies in the null space of $A$, which inductively implies that each iterate lies in $\{x : Ax = b\}$. These requirements remain reasonable when $A$ has linearly dependent rows as long as a projection operator onto the null space of $A$ is available. Merely for ease of notation, we assume that $A$ has full row rank.
In fact, our algorithm framework ensures that all iterates remain in the set
$$\{x \in \mathbb{R}^n : Ax = b\ \text{and}\ c(x) < 0\}. \tag{2}$$
Observe that this set is guaranteed to be nonempty under Assumption 2.1. In each iteration, our framework involves the computation of a search direction that, for a given barrier parameter $\mu \in \mathbb{R}_{>0}$, aims to reduce the value of the objective augmented with the logarithmic barrier function defined with respect to the inequality-constraint functions of (1), namely, the function $\phi$ defined by
$$\phi(x, \mu) := f(x) - \mu \sum_{i \in [m]} \log(-c_i(x)).$$
Going forward, we refer to this as the barrier-augmented objective function. Observe that if $x$ is a local solution of the problem to minimize $\phi(\cdot, \mu)$ over the set (2), then there exists a Lagrange multiplier vector $y \in \mathbb{R}^{\ell}$ such that, along with $z_i = -\mu/c_i(x)$ for all $i \in [m]$, one finds that
$$\nabla f(x) + A^T y + \nabla c(x) z = 0, \quad Ax = b, \quad c(x) < 0, \quad z \circ c(x) = -\mu \mathbb{1},$$
where $\nabla c(x)$ denotes the matrix whose columns are $\nabla c_i(x)$ for $i \in [m]$.
Any tuple $(x, y, z)$ satisfying this system is referred to as a stationary point for the problem to minimize $\phi(\cdot, \mu)$ over the set (2). This system should be compared to the Karush-Kuhn-Tucker (KKT) conditions for problem (1); in particular, under a constraint qualification—such as the Mangasarian-Fromovitz constraint qualification (MFCQ) [13]—if $x$ is a local solution of problem (1), then there exist $y \in \mathbb{R}^{\ell}$ and $z \in \mathbb{R}^{m}_{\ge 0}$ such that the following system of KKT conditions is satisfied:
$$\nabla f(x) + A^T y + \nabla c(x) z = 0, \quad Ax = b, \quad c(x) \le 0, \quad z \ge 0, \quad z \circ c(x) = 0. \tag{3}$$
Any tuple $(x, y, z)$ satisfying this system is referred to as a stationary point (or KKT point) for problem (1). As is generally the case for an interior-point method, our algorithm framework aims to find such a KKT point by taking steps to reduce the barrier-augmented objective function for a diminishing sequence of barrier parameters—a procedure that can be motivated by the fact that the stationarity conditions for the minimization of $\phi(\cdot, \mu)$ over the set (2) tend toward those of (1) as $\mu \searrow 0$.
3 Algorithm
Our algorithm framework is stated as Algorithm 1. The framework presumes that it is given an initial neighborhood parameter $\theta_1 \in \mathbb{R}_{>0}$ and an initial iterate $x_1 \in \mathcal{N}(\theta_1)$, where for all $\theta \in \mathbb{R}_{>0}$ we define
$$\mathcal{N}(\theta) := \{x \in \mathbb{R}^n : Ax = b\ \text{and}\ c(x) \le -\theta \mathbb{1}\}.$$
Observe that the existence of such $\theta_1$ and $x_1$ is guaranteed under Assumption 2.1. Moreover, the steps of Algorithm 1 ensure that, for a prescribed, monotonically decreasing sequence $\{\theta_k\} \searrow 0$, one finds $x_k \in \mathcal{N}(\theta_k)$ for all $k \in \mathbb{N}$. Hence, to describe the main steps of the algorithm for each $k \in \mathbb{N}$, we may presume that for all $k \in \mathbb{N}$ the algorithm has an iterate $x_k \in \mathcal{N}(\theta_k)$.
Following [7], Algorithm 1 also employs a prescribed, monotonically decreasing sequence of barrier parameters $\{\mu_k\}$ such that $\{\mu_k/\theta_k\}$ is a constant sequence. Other inputs required by Algorithm 1 in the deterministic setting are $\eta \in \mathbb{R}_{>0}$, $\zeta \in \mathbb{R}_{>0}$, and $\bar\zeta \in \mathbb{R}_{>0}$ with $\zeta \le \bar\zeta$, and $\beta \in \mathbb{R}_{>0}$, each of which influences the search-direction computation. Given these inputs along with $x_k$ and $\mu_k$, the computation proceeds as follows. Firstly, a set $\mathcal{A}_k \subseteq [m]$ of indices of constraints that are nearly active with respect to $x_k$ is identified, namely, those indices whose constraint values at $x_k$ lie within a prescribed threshold of zero.
Observe that since $x_k \in \mathcal{N}(\theta_k)$ it follows that $c_i(x_k) \le -\theta_k$ for all $i \in [m]$, and, since this fact and the fact that $\{\mu_k/\theta_k\}$ is a constant sequence together ensure that $|\mu_k/c_i(x_k)| \le \mu_k/\theta_k$, it follows that the barrier-gradient terms remain bounded for all $k \in \mathbb{N}$. Secondly, a vector $g_k$ is computed, the value of which is distinct between the deterministic and stochastic settings. In particular, the framework employs
$$g_k := \begin{cases} \nabla f(x_k) - \mu_k \sum_{i \in [m]} \dfrac{\nabla c_i(x_k)}{c_i(x_k)} & \text{(deterministic setting)} \\[4pt] \bar g_k - \mu_k \sum_{i \in [m]} \dfrac{\nabla c_i(x_k)}{c_i(x_k)} & \text{(stochastic setting)}, \end{cases} \tag{4}$$
where in the stochastic setting the vector $\bar g_k$ is a stochastic estimate of $\nabla f(x_k)$. Third, with $P := I - A^T (A A^T)^{-1} A$ denoting the orthogonal projection matrix onto the null space of $A$, a search direction $d_k$ is computed such that
$$A d_k = 0, \tag{5a}$$
$$\|d_k\|_2 \ge \zeta \|P g_k\|_2, \tag{5b}$$
$$\|d_k\|_2 \le \bar\zeta \|P g_k\|_2, \tag{5c}$$
$$g_k^T d_k \le -\eta \|P g_k\|_2 \|d_k\|_2, \tag{5d}$$
$$\nabla c_i(x_k)^T d_k \le -\beta \|d_k\|_2\ \ \text{for all}\ i \in \mathcal{A}_k. \tag{5e}$$
In the deterministic setting, all that is required of the search direction is that it satisfies (5). For the stochastic setting, we introduce in our analysis a particular strategy for computing the search direction that implies that (5) holds along with other useful properties for our analysis for that setting.
Let us now discuss (5) in detail. Firstly, condition (5a) requires that the search direction satisfies $Ad_k = 0$, i.e., it lies in the null space of $A$, which, since $Ax_k = b$, ensures that the subsequent iterate will also lie in $\{x : Ax = b\}$. Secondly, conditions (5b) and (5c) require that the norm of the search direction is proportional to the norm of $Pg_k$, i.e., the norm of the projection of $g_k$ onto $\operatorname{Null}(A)$. Thus, conditions (5b) and (5c) require that the norm of the search direction is proportional to that of a projected gradient (estimate), which is a common type of requirement for a search direction in an algorithm for minimizing a smooth objective function over an affine set. Thirdly, condition (5d) requires that $d_k$ makes an angle with $g_k$ that is sufficiently obtuse. Again, this is a common type of requirement; e.g., in the deterministic setting, it ensures that any nonzero $d_k$ is a direction of sufficient descent with respect to the barrier-augmented objective function. Together, conditions (5a)–(5d) define a nonempty set, and computing a search direction to satisfy these conditions is straightforward; e.g., one can compute the projection of $-g_k$ onto $\operatorname{Null}(A)$ and scale the resulting direction, if needed, to satisfy (5b)–(5c). However, it remains finally to consider condition (5e). Unfortunately, this is a requirement that is not always satisfiable along with the other conditions, namely, (5a)–(5d). That said, we contend that all of (5) can be satisfied as long as (a) the constraints are not degenerate in some sense and (b) the initial barrier parameter is sufficiently large relative to $\theta_1$. Denoting the conical hull of nearly active constraint gradients as
$$\mathcal{C}_k := \operatorname{cone}\{\nabla c_i(x_k) : i \in \mathcal{A}_k\},$$
condition (5e) requires that $d_k$ lies sufficiently within the interior of the polar cone of $\mathcal{C}_k$, i.e., $\operatorname{int}(\mathcal{C}_k^\circ)$. Hence, to satisfy (5), there needs to exist a direction sufficiently within this interior that also satisfies (5a)–(5d). For our purposes now, let us simply introduce Assumption 3.1 below, which implicitly means that Algorithm 1 is being applied to a problem such that a search direction satisfying (5) always exists. We defer until §5 a more detailed discussion of Assumption 3.1, where in particular we provide (a) example problems for which the assumption can be shown to hold or cannot be shown to hold, (b) further explanation about why the assumption can be viewed as a type of nondegeneracy assumption and an assumption about $\mu_1/\theta_1$ being sufficiently large (which is reminiscent of a requirement in Corollary 3.8 and Lemma 3.15 in [7] for the bound-constrained setting), and (c) a procedure for computing $d_k$ to satisfy (5) when such a direction exists.
Assumption 3.1.

In any run of Algorithm 1, for all $k \in \mathbb{N}$ there exists a search direction $d_k$ satisfying (5).
Upon computation of the search direction, Algorithm 1 chooses a step size $\alpha_k \in \mathbb{R}_{>0}$. For simplicity, our analyses for the deterministic and stochastic settings consider conservative rules for selecting the step sizes that each depend on the barrier- and neighborhood-parameter sequences $\{\mu_k\}$ and $\{\theta_k\}$. We conjecture that it would be possible to generalize our algorithm and analyses to allow more flexibility in the step-size selection rules, as is done in [7]. However, for our purposes here, we present conservative strategies that are sufficient for proving convergence.
The algorithm’s last major step in the $k$th iteration is to compute a positive real number $\gamma_k$ such that the line segment $[x_k, x_k + \gamma_k \alpha_k d_k]$ is contained in the neighborhood $\mathcal{N}(\theta_{k+1})$. The subsequent iterate is then set as $x_{k+1} := x_k + \gamma_k \alpha_k d_k$. As mentioned at the beginning of this section, from the facts that the $k$th iterate has $Ax_k = b$, the $k$th search direction satisfies (5a), and the choice of $\gamma_k$ ensures that $c(x_{k+1}) \le -\theta_{k+1}\mathbb{1}$, it follows that $x_{k+1} \in \mathcal{N}(\theta_{k+1})$. For the sake of formality, we state the following lemma, which has thus been proved.
Lemma 3.1.
For all $k \in \mathbb{N}$ generated in any run of Algorithm 1, it follows that the iterate satisfies $x_k \in \mathcal{N}(\theta_k)$ and the search direction satisfies $A d_k = 0$.
4 Analysis
We prove theoretical convergence guarantees for Algorithm 1 in both deterministic and stochastic settings. For both settings, we make the following additional, standard type of assumption beyond Assumptions 2.1 and 3.1.
Assumption 4.1.
Let $\bar{\mathcal{X}} \subseteq \mathcal{X}$ be an open convex set containing the sequence of line segments between iterates, namely, $\{[x_k, x_{k+1}]\}$, generated in any run of Algorithm 1. There exist constants $\kappa_f$, $L_f$, $\kappa_c$, $\kappa_{\nabla c}$, and $L_c$ in $\mathbb{R}_{>0}$ such that:

- $f$ is Lipschitz with constant $\kappa_f$ over $\bar{\mathcal{X}}$, so (under Assumption 2.1) it follows that $\nabla f$ is bounded in (Euclidean) norm by $\kappa_f$ over $\bar{\mathcal{X}}$.
- $\nabla f$ is Lipschitz with constant $L_f$ with respect to $\|\cdot\|_2$ over $\bar{\mathcal{X}}$.
- For all $i \in [m]$, $c_i$ is bounded in absolute value by $\kappa_c$ over $\bar{\mathcal{X}}$.
- For all $i \in [m]$, $c_i$ is Lipschitz with constant $\kappa_{\nabla c}$ over $\bar{\mathcal{X}}$, so (under Assumption 2.1) it follows that $\nabla c_i$ is bounded in (Euclidean) norm by $\kappa_{\nabla c}$ over $\bar{\mathcal{X}}$.
- For all $i \in [m]$, $\nabla c_i$ is Lipschitz with constant $L_c$ with respect to $\|\cdot\|_2$ over $\bar{\mathcal{X}}$.
Under Assumption 4.1, with $\bar{\mathcal{X}}$ and $\kappa_c$ defined in the assumption, we introduce the shifted barrier-augmented objective function $\tilde\phi$ defined by
$$\tilde\phi(x, \mu) := f(x) - \mu \sum_{i \in [m]} \log\left(\frac{-c_i(x)}{\kappa_c}\right).$$
Important properties of the shifted barrier-augmented objective function are that (a) it is bounded below by $f_{\inf}$, whose existence follows under Assumption 2.1, and (b) it shares the same gradient function with respect to its first argument as that of the (unshifted) barrier-augmented objective function; i.e., for all $(x, \mu)$ with $c(x) < 0$ and $\mu \in \mathbb{R}_{>0}$, one finds that
$$\nabla_x \tilde\phi(x, \mu) = \nabla_x \phi(x, \mu) = \nabla f(x) - \mu \sum_{i \in [m]} \frac{\nabla c_i(x)}{c_i(x)}, \tag{6}$$
where $\nabla_x$ denotes the gradient operator with respect to a function's first argument. These follow since for any such $x \in \bar{\mathcal{X}}$ and $i \in [m]$ one has $-c_i(x) \in (0, \kappa_c]$ and thus $-\log(-c_i(x)/\kappa_c) \ge 0$, so summing over $i \in [m]$ implies
$$\tilde\phi(x, \mu) \ge f(x) \ge f_{\inf}. \tag{7}$$
In addition, the shifted barrier-augmented objective function is monotonically increasing in its second argument in the sense that for any $x$ with $c(x) < 0$ one finds that $\tilde\phi(x, \mu) \le \tilde\phi(x, \bar\mu)$ for any $(\mu, \bar\mu)$ with $0 < \mu \le \bar\mu$. These properties make the function useful for our analysis. Indeed, even though the algorithm is not aware of the constants in Assumption 4.1, we shall show that it follows from the former property above that it computes steps that lead to (expected) decrease in the shifted barrier-augmented objective function $\tilde\phi$. Also, with the latter property above, decreases in the barrier parameter also lead to decreases in $\tilde\phi$.
Our first result is useful for both the deterministic and stochastic settings. It shows that, for any $\theta \in \mathbb{R}_{>0}$, the gradient of the shifted barrier-augmented objective function with respect to its first argument is Lipschitz over line segments in $\mathcal{N}(\theta)$ with a Lipschitz constant that depends on $\theta$. This is a critical property, especially since this function is not globally Lipschitz over the set in (2). The proof follows standard techniques for such a result; we provide it for completeness.
Lemma 4.1.
For any $\theta \in \mathbb{R}_{>0}$, $\mu \in \mathbb{R}_{>0}$, $x \in \mathcal{N}(\theta)$, and $\bar x \in \mathcal{N}(\theta)$ such that the line segment $[x, \bar x]$ is contained in $\mathcal{N}(\theta)$, one finds with
$$L(\theta, \mu) := L_f + \mu m \left( \frac{L_c}{\theta} + \frac{\kappa_{\nabla c}^2}{\theta^2} \right)$$
that one has both
$$\|\nabla_x \tilde\phi(\bar x, \mu) - \nabla_x \tilde\phi(x, \mu)\|_2 \le L(\theta, \mu) \|\bar x - x\|_2$$
and
$$\tilde\phi(\bar x, \mu) \le \tilde\phi(x, \mu) + \nabla_x \tilde\phi(x, \mu)^T (\bar x - x) + \frac{L(\theta, \mu)}{2} \|\bar x - x\|_2^2.$$
Proof.
For arbitrary $(x, \bar x)$ satisfying the conditions of the lemma, it follows from Assumption 2.1, Assumption 4.1, (6), and the triangle inequality that
$$\|\nabla_x \tilde\phi(\bar x, \mu) - \nabla_x \tilde\phi(x, \mu)\|_2 \le \|\nabla f(\bar x) - \nabla f(x)\|_2 + \mu \sum_{i \in [m]} \left\| \frac{\nabla c_i(\bar x)}{c_i(\bar x)} - \frac{\nabla c_i(x)}{c_i(x)} \right\|_2 \le L(\theta, \mu) \|\bar x - x\|_2,$$
from which the first desired conclusion follows. Hence, for arbitrary $(x, \bar x)$ satisfying the conditions of the lemma, it follows from the Fundamental Theorem of Calculus and the Cauchy–Schwarz inequality that
$$\tilde\phi(\bar x, \mu) - \tilde\phi(x, \mu) - \nabla_x \tilde\phi(x, \mu)^T (\bar x - x) = \int_0^1 \big(\nabla_x \tilde\phi(x + t(\bar x - x), \mu) - \nabla_x \tilde\phi(x, \mu)\big)^T (\bar x - x)\, dt \le \int_0^1 t L(\theta, \mu) \|\bar x - x\|_2^2\, dt,$$
so the desired conclusion follows from the fact that $\int_0^1 t\, dt = \tfrac12$. ∎
4.1 Deterministic Setting
In this subsection, we prove convergence guarantees for Algorithm 1 in the deterministic setting, i.e., when $g_k = \nabla_x \phi(x_k, \mu_k)$ (see (4) and (6)) for all $k \in \mathbb{N}$. For ease of exposition, we refer to this instance of the method as Algorithm 1(D). In particular, we prove that under a certain rule for choosing the step-size parameters, the algorithm is well defined, generates a sequence of iterates over which a measure of stationarity vanishes, and under a constraint qualification can produce a sequence of Lagrange multipliers such that convergence to a KKT point is guaranteed. This algorithm for the deterministic setting is not expected to be competitive with state-of-the-art (second-order) interior-point methods for solving problems in many real-world settings. That said, our analysis for the deterministic setting provides a useful precursor for our subsequent analysis for the stochastic setting, for which state-of-the-art methods are not applicable (since they do not possess known convergence guarantees in the stochastic setting that we consider in our analysis in §4.2).
The class of step-size parameters that we consider for our analysis in this subsection comprises those satisfying the following parameter rule. We state the rule, then summarize the reasons that the rule has been designed in this manner.
Parameter Rule 1.
The fundamental aspects of Parameter Rule 1 can be understood as follows. First, the choice of $\alpha_k$ ensures that this step size is chosen sufficiently small such that it ensures sufficient decrease in the barrier-augmented objective function with each step. Intuitively, this can be seen in the fact that the step size is proportional to the reciprocal of a Lipschitz-type constant for the gradient of the barrier function (see Lemma 4.1), as is common in basic descent methods for gradient-based optimization algorithms. For additional flexibility, and since it is required in the stochastic setting, we introduce a factor through which the step size may diminish faster. (That said, our analysis shows that a nondiminishing factor is a valid choice in the deterministic setting.) Finally, the remaining choice in the rule is shown in our analysis to guarantee that $x_k + \alpha_k d_k \in \mathcal{N}(\theta_{k+1})$ for all $k \in \mathbb{N}$, which is necessary for our analysis to employ Lemma 4.1.
For our next lemma, we prove this critical property, namely, that $x_k + \alpha_k d_k \in \mathcal{N}(\theta_{k+1})$ for all $k \in \mathbb{N}$.
Lemma 4.2.
Suppose that Assumptions 2.1, 3.1, and 4.1 hold and that Algorithm 1(D) is run with Parameter Rule 1. Then, for all $k \in \mathbb{N}$, the choice of $\alpha_k$ in Parameter Rule 1 ensures that $x_k + \alpha_k d_k \in \mathcal{N}(\theta_{k+1})$, i.e., the condition in Step 6 of Algorithm 1(D) holds. Consequently, under Parameter Rule 1, Algorithm 1(D) is well defined in the sense that it will generate an infinite sequence of iterates.
Proof.
Consider arbitrary $k \in \mathbb{N}$ and $i \in [m]$. We prove that the choice of $\alpha_k$ in Parameter Rule 1 ensures that $c_i(x_k + \alpha d_k) \le -\theta_{k+1}$ for all $\alpha \in [0, \alpha_k]$, after which the desired conclusion follows directly from the fact that $i \in [m]$ was chosen arbitrarily. Under Assumptions 2.1 and 4.1, one has for all $\alpha \in \mathbb{R}_{>0}$ that
$$c_i(x_k + \alpha d_k) \le c_i(x_k) + \alpha \nabla c_i(x_k)^T d_k + \tfrac{L_c}{2} \alpha^2 \|d_k\|_2^2.$$
Therefore, $c_i(x_k + \alpha d_k) \le -\theta_{k+1}$ is ensured as long as $\alpha$ yields
$$c_i(x_k) + \alpha \nabla c_i(x_k)^T d_k + \tfrac{L_c}{2} \alpha^2 \|d_k\|_2^2 \le -\theta_{k+1}. \tag{8}$$
Recalling that Lemma 3.1 ensures $x_k \in \mathcal{N}(\theta_k)$, which means that $c_i(x_k) \le -\theta_k < -\theta_{k+1}$, it follows that (8) holds for all $\alpha \in \mathbb{R}_{>0}$ when $d_k = 0$, and otherwise (when $d_k \ne 0$) one finds that the left-hand side of (8) is a strongly convex quadratic function of $\alpha$. The second term in the minimum in the definition of $\alpha_k$ in Parameter Rule 1 is the unique positive root of this quadratic function (with the right-hand side of (8) moved to the left), below which (8) continues to hold, where we remark that this root is positive since, as previously mentioned, $c_i(x_k) + \theta_{k+1} < 0$. ∎
We now go beyond Lemma 4.2 and prove a lower bound for $\alpha_k$ that will be critical for our ultimate convergence guarantee in this subsection.
Lemma 4.3.
Proof.
Consider arbitrary $k \in \mathbb{N}$ and $i \in [m]$. Our aim is to prove a suitable lower bound on the positive root identified in the proof of Lemma 4.2, from which the desired conclusion follows under Parameter Rule 1. Toward this end, observe that if Parameter Rule 1 yields the first term in its minimum, then there is nothing left to prove. Therefore, we may proceed under the assumption that Parameter Rule 1 yields the second term, i.e., the aforementioned positive root. It follows under this condition that
(9)
On the other hand, by Assumptions 2.1, 3.1, and 4.1, the fact that the matrix norm is submultiplicative, the triangle inequality, (5c), the fact that $x_k \in \mathcal{N}(\theta_k)$, the fact that $\{\mu_k/\theta_k\}$ is a constant sequence, and the fact that for the orthogonal projection matrix $P$ one has $\|P\|_2 \le 1$, one finds that
(10)
Let us now proceed by considering two cases.

1. Suppose $i \notin \mathcal{A}_k$. (The $i$th constraint is not nearly active.) Observe that Assumption 4.1 and the Cauchy–Schwarz inequality together imply that $|\nabla c_i(x_k)^T d_k| \le \kappa_{\nabla c} \|d_k\|_2$. At the same time, observe that the numerator of the right-hand side of (9) can be viewed as the value of a monotonically decreasing function of the inner product $\nabla c_i(x_k)^T d_k$, where positivity follows since $i \notin \mathcal{A}_k$. Hence, from (9), (10), and the condition of this case, one has
(11)
2. Suppose $i \in \mathcal{A}_k$. (The $i$th constraint is nearly active.) In this case, a corresponding lower bound follows from (9), (10), and the fact that condition (5e) controls $\nabla c_i(x_k)^T d_k$ for nearly active constraints.
Combining the results from the two cases, one can conclude that the desired lower bound on $\alpha_k$ holds for all $i \in [m]$, which, as previously mentioned, completes the proof. ∎
For our next lemma, we prove the aforementioned sufficient decrease condition that is guaranteed by the choices in Parameter Rule 1. In particular, one finds in the lemma that the amount of decrease is at least proportional to a squared norm of the projected gradient of the barrier-augmented objective function, which makes sense since, under Lemma 3.1, $x_k \in \mathcal{N}(\theta_k)$ and $A d_k = 0$ for all $k \in \mathbb{N}$.
Lemma 4.4.
Proof.
We are now prepared to prove our guarantee for the deterministic setting. For the sake of notational clarity, the following theorem introduces parameters that dictate the rates of decrease of $\{\mu_k\}$ and $\{\theta_k\}$, respectively. However, the theorem restricts these parameters jointly, and indeed one can see in the details of the proof that both restrictions are required to prove the theorem.
Theorem 4.1.
Suppose that Assumptions 2.1, 3.1, and 4.1 hold and that Algorithm 1(D) is run with Parameter Rule 1. Suppose, in addition, that the algorithm employs prescribed sequences $\{\mu_k\} \searrow 0$ and $\{\theta_k\} \searrow 0$ whose rates of decrease satisfy the compatibility requirements of Parameter Rule 1. Then,
$$\liminf_{k \to \infty}\ \|P \nabla_x \phi(x_k, \mu_k)\|_2 = 0. \tag{14}$$
If, in addition, there exists an infinite-cardinality set $\mathcal{K} \subseteq \mathbb{N}$ such that

- $\{\|P \nabla_x \phi(x_k, \mu_k)\|_2\}_{k \in \mathcal{K}} \to 0$,
- $\{x_k\}_{k \in \mathcal{K}} \to x_*$ for some $x_* \in \mathcal{F}$, and
- at $x_*$ the linear independence constraint qualification (LICQ) holds with respect to problem (1) in the sense that the columns of $A^T$ combined with the vectors in $\{\nabla c_i(x_*)\}_{i \in \mathcal{A}(x_*)}$, where $\mathcal{A}(x_*) := \{i \in [m] : c_i(x_*) = 0\}$, form a linearly independent set,

then $x_*$ is a KKT point for problem (1) in the sense that there exists a pair of Lagrange multipliers $(y_*, z_*)$ such that $(x_*, y_*, z_*)$ satisfies (3).
Proof.
It follows from (7) that $\tilde\phi$ is bounded below by $f_{\inf}$ over the set in (2). Then, by summing the expression in Lemma 4.4 over $k \in \mathbb{N}$ and using (6), one finds that the accumulated decrease is bounded. To complete the proof of (14), let us now show that the coefficient sequence in this accumulated-decrease bound is unsummable. To begin, first observe that the lower bound on $\alpha_k$ stated in Lemma 4.3 holds, i.e., there exist constants such that, for all $k \in \mathbb{N}$, $\alpha_k$ is bounded below accordingly. Consequently, it holds for all $k \in \mathbb{N}$ that
(15)
Let us now consider the sequences appearing in (15) in turn. Our aim is to show that each of these sequences is unsummable, which will complete the proof of (14). With respect to the step-size sequence $\{\alpha_k\}$, one finds by Parameter Rule 1 and the conditions of the theorem that
(16)
Observe that the conditions of the theorem on the rate parameters, together with (16), show that $\{\alpha_k\}$ is unsummable, as desired. Now, with respect to $\{\mu_k\}$, one finds under the conditions of the theorem that
(17)
so the bound above along with (17) shows that $\{\mu_k\}$ is unsummable, as desired. Finally, the remaining sequence can be seen to be unsummable since its elements are bounded below by a positive constant for all $k \in \mathbb{N}$. Having reached the desired conclusion that each of these sequences is unsummable, it follows through (15) that the coefficient sequence is unsummable, which, as previously mentioned, completes the proof of (14).
Now suppose that there exists an infinite-cardinality set $\mathcal{K}$ as described in the theorem. By construction of the algorithm, it follows that $x_k \in \mathcal{N}(\theta_k)$ and $c_i(x_k) \le -\theta_k$ for all $(k, i) \in \mathcal{K} \times [m]$. Now, for all $k \in \mathcal{K}$, define the Lagrange multiplier estimates $(y_k, z_k)$, where
$$z_{k,i} := -\frac{\mu_k}{c_i(x_k)}\ \ \text{for all}\ i \in [m]. \tag{18}$$
Since $c_i(x_k) \le -\theta_k$ and $\{\mu_k/\theta_k\}$ is a constant sequence for all $k \in \mathcal{K}$, it follows that $\{z_k\}_{k \in \mathcal{K}}$ is bounded. In addition, the fact that $\{\|P \nabla_x \phi(x_k, \mu_k)\|_2\}_{k \in \mathcal{K}} \to 0$ shows that
(19)
Consider $i \notin \mathcal{A}(x_*)$. Since $\{x_k\}_{k \in \mathcal{K}} \to x_*$ and $c_i(x_*) < 0$, it follows that $\{z_{k,i}\}_{k \in \mathcal{K}} \to 0$. Combining this with (19) and using the fact that $\nabla c_i(x_k)$ is bounded for all $(k, i)$ under Assumption 4.1, only the columns of $A^T$ and the gradients of constraints active at $x_*$ remain in the limit. Under the LICQ, it follows from this limit that $\{(y_k, z_k)\}_{k \in \mathcal{K}}$ converges to some pair $(y_*, z_*)$ such that $(x_*, y_*, z_*)$ satisfies (3), as desired. ∎
4.2 Stochastic Setting
We now prove convergence guarantees for Algorithm 1 in a stochastic setting, i.e., when $g_k$ is computed with a stochastic estimate $\bar g_k$ of $\nabla f(x_k)$ (see (4)) for all $k \in \mathbb{N}$. Formally, let us consider the probability space $(\Omega, \mathcal{F}, \mathbb{P})$, where $\Omega$ captures all outcomes of a run of Algorithm 1. As mentioned shortly, we make assumptions that guarantee that each iteration is well defined, which implies that each such outcome corresponds to an infinite sequence of iterates. In this manner, each realization of a run of the algorithm can be associated with $\omega \in \Omega$, an infinite-dimensional tuple whose $k$th element determines the stochastic gradient-of-the-objective estimate in iteration $k \in \mathbb{N}$. The stochastic process defined by the algorithm can thus be expressed as $\{(X_k, \bar G_k, G_k, D_k)\}$ (along with the associated step sizes and neighborhood-enforcement parameters),
where, for all $k \in \mathbb{N}$, the random variables are the iterate $X_k$, stochastic objective-gradient estimator $\bar G_k$, stochastic gradient estimator
$$G_k := \bar G_k - \mu_k \sum_{i \in [m]} \frac{\nabla c_i(X_k)}{c_i(X_k)}, \tag{20}$$
direction $D_k$, step size, and neighborhood-enforcement parameter. Observe that the statement of Algorithm 1 instantiates a particular realization of a run, expressed as $\{(x_k, \bar g_k, g_k, d_k, \alpha_k, \gamma_k)\}$.
Given the initial conditions of the algorithm (including for simplicity that $X_1 = x_1$ for all $\omega \in \Omega$), let $\mathcal{F}_1$ denote the sub-$\sigma$-algebra of $\mathcal{F}$ corresponding to the initial conditions and, for all $k \in \mathbb{N}$, let $\mathcal{F}_{k+1}$ denote the sub-$\sigma$-algebra of $\mathcal{F}$ corresponding to the initial conditions and $\{\bar G_1, \dots, \bar G_k\}$. In this manner, the sequence $\{\mathcal{F}_k\}$ is a filtration. For our analysis, with respect to this filtration, we make the following assumption. For the sake of brevity, here and for the remainder of our analysis, we drop from our notation the dependence of the stochastic process on an element of $\Omega$, since this dependence remains obvious.
Assumption 4.2.
For all $k \in \mathbb{N}$, it holds that $\mathbb{E}[\bar G_k \mid \mathcal{F}_k] = \nabla f(X_k)$. In addition, there exists $\sigma \in \mathbb{R}_{>0}$ such that, for all $k \in \mathbb{N}$, one has $\|\bar G_k - \nabla f(X_k)\|_2 \le \sigma$.
Assumption 4.2 amounts to the stochastic-gradient estimators being unbiased with bounded error. This is consistent with the stochastic setting in [7]. Looser assumptions are possible for stochastic-gradient-based methods in the unconstrained setting, but for the context of stochastic-gradient-based interior-point methods we know of no analysis that can avoid a bounded-error assumption.
We also carry forward our prior assumptions, namely, Assumptions 2.1, 3.1, and 4.1. However, we strengthen Assumption 3.1 somewhat to impose additional structure on the manner in which the search direction is defined. In particular, for all $k \in \mathbb{N}$, we define the search direction through the linear system
$$\begin{bmatrix} H_k & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} d_k \\ y_k \end{bmatrix} = -\begin{bmatrix} g_k \\ 0 \end{bmatrix}, \tag{21}$$
where the sequence $\{H_k\}$ satisfies the following assumption.
Assumption 4.3.

There exist constants $\lambda$ and $\bar\lambda$ with $0 < \lambda \le \bar\lambda$, uniform over all possible runs of the algorithm, such that, for all $k \in \mathbb{N}$, the matrix $H_k$ satisfies $\lambda \|u\|_2^2 \le u^T H_k u \le \bar\lambda \|u\|_2^2$ for all $u \in \operatorname{Null}(A)$.
All combined, Assumptions 3.1 and 4.3 amount to the assumption that $d_k$ is computed by (21) with a choice of $H_k$ such that (5e) holds. Here, we are remarking on the fact that, under Assumption 4.3, the search direction satisfies the null-space and angle conditions imposed by the combination of (5a)–(5d). Indeed, one finds
$$d_k = -Z (Z^T H_k Z)^{-1} Z^T g_k, \tag{22}$$
where $Z$ is a matrix whose columns form an orthonormal basis for $\operatorname{Null}(A)$, through which one can in turn show that $A d_k = 0$ along with
$$\frac{\|Z^T g_k\|_2}{\bar\lambda} \le \|d_k\|_2 \le \frac{\|Z^T g_k\|_2}{\lambda} \quad \text{and} \quad g_k^T d_k \le -\frac{\|Z^T g_k\|_2^2}{\bar\lambda}. \tag{23}$$
Hence, we employ (22) and (23) in place of (5a) and (5b)–(5d) in this section. For our analysis, it is worthwhile to emphasize that the tuple of algorithm parameters is presumed to be determined uniquely for any given instance of problem (1). Therefore, the parameters in Assumption 3.1 (i.e., in (5)) and (as explicitly stated) in Assumption 4.3 are presumed to be uniform over all possible runs of the algorithm. Similarly, for Assumption 4.1 in this section, the set $\bar{\mathcal{X}}$ and stated constants are assumed to be uniform over all possible realizations of a run.
Our convergence guarantees for this stochastic setting rely on Parameter Rule 2, stated below, that we employ for choosing step-size- and neighborhood-enforcement-parameter sequences, namely, $\{\alpha_k\}$ and $\{\gamma_k\}$. Like in the deterministic setting, in this stochastic setting, the step sizes depend on a rate parameter, with one difference being that the nondiminishing choice allowed in the deterministic setting is not allowed for the stochastic setting. However, unlike in the deterministic setting, in this stochastic setting we prescribe a particular dependence of the neighborhood-enforcement values on the barrier-parameter sequence $\{\mu_k\}$ and neighborhood-parameter sequence $\{\theta_k\}$, which in turn are determined by prescribed rate parameters. Whereas in the deterministic setting our ultimate requirements for these rate parameters (specified between Parameter Rule 1 and Theorem 4.1) were relatively mild, the requirements for these parameters are stricter in this stochastic setting. For the sake of simplicity, rather than introduce separate rate parameters for $\{\mu_k\}$ and $\{\theta_k\}$, in Parameter Rule 2 we introduce a single parameter. This is reasonable since, even in the deterministic setting, our analysis requires $\{\mu_k/\theta_k\}$ to be a constant sequence. It is worthwhile to emphasize upfront that the restrictions of the rule are satisfiable.
Parameter Rule 2.
The strategy for selecting $\alpha_k$ in Parameter Rule 2 is consistent with that in Parameter Rule 1. Observe that since $\{\mu_k\}$ and $\{\theta_k\}$ are set in advance of any run of the algorithm, it follows under Parameter Rule 2 that the quantities derived from them are determined in advance of a run of the algorithm as well. Thus, our analysis can consider a prescribed sequence that is uniform over all runs of Algorithm 1 employed for any given instance of problem (1). The choice of $\gamma_k$ in Parameter Rule 2, on the other hand, is more stringent than that in Parameter Rule 1. Observe that Parameter Rule 1 amounts to choosing the step size as the minimum of two terms for all $k \in \mathbb{N}$. The choice in Parameter Rule 2 is similar, but with an upper limit that for any given $k \in \mathbb{N}$ might be less than 1. The particular strategy in Parameter Rule 2 involves first setting a lower bound for each step size that is defined only by values that are set in advance of a run of the algorithm. This fact is critical, since it ensures that the lower-bounding sequence is prescribed and uniform over all runs of Algorithm 1 employed for any given instance of problem (1). For each $k \in \mathbb{N}$, the realized values of $\alpha_k$ and $\gamma_k$ may differ between runs, but importantly the lower-bounding sequence does not.
Many of our prior results from the deterministic setting carry over to our present analysis for the stochastic setting. Firstly, Lemma 3.1 holds, as does Lemma 4.1 since it was proved irrespective of any particular algorithm. Secondly, following the arguments in Lemmas 4.2 and 4.3, it follows that Parameter Rule 2 ensures that $x_k + \alpha_k d_k \in \mathcal{N}(\theta_{k+1})$ for all $k \in \mathbb{N}$ in any given run of the algorithm. We formalize this in Lemma 4.5 below, the proof of which is omitted since it would follow the same lines of arguments as in the proofs of Lemmas 4.2 and 4.3. The only significant difference in the proof is that the bound $\|\nabla f(x_k)\|_2 \le \kappa_f$ needs to be replaced by $\|\bar g_k\|_2 \le \kappa_f + \sigma$ for all $k \in \mathbb{N}$, which follows under Assumption 4.2. This is the reason that the step-size bound in Parameter Rule 2 involves an additional term, as opposed to that in Lemma 4.3, which does not. This affects the denominators in the expressions for the step sizes for all $k \in \mathbb{N}$. Otherwise, the bound is consistent between Lemma 4.3 and Parameter Rule 2.
Lemma 4.5.
Suppose that Assumptions 2.1, 3.1, 4.1, 4.2, and 4.3 hold and that Algorithm 1 is run with Parameter Rule 2. Then, for all $k \in \mathbb{N}$, the choice of $\alpha_k$ in Parameter Rule 2 ensures that $x_k + \alpha_k d_k \in \mathcal{N}(\theta_{k+1})$, i.e., the condition in Step 6 of Algorithm 1 holds. Consequently, under Parameter Rule 2, Algorithm 1 is well defined in the sense that it will generate an infinite sequence of iterates. Moreover, Parameter Rule 2 ensures that $\gamma_k = 1$ and $x_{k+1} = x_k + \alpha_k d_k$ for all $k \in \mathbb{N}$.
Now let us turn to results for the stochastic setting that are distinct from those for the deterministic setting. Our first main goal is to prove an upper bound on the expected decrease in the shifted barrier-augmented function that occurs with each iteration. Toward this end, let us begin with the following preliminary bound.
Lemma 4.6.
Proof.
Lemma 4.6 shows that the change in the shifted barrier-augmented function with each iteration is bounded above by the sum of a negative term and two terms that may be considered noise terms. The goal of our next two lemmas is to bound these noise terms. We start with the second term on the right-hand side in Lemma 4.6.
Lemma 4.7.
Proof.
The next lemma provides an upper bound on a conditional expectation of the last term on the right-hand side of the inequality in Lemma 4.6.
Lemma 4.8.
Proof.
Consider arbitrary $k \in \mathbb{N}$. The inner product in the expected value of interest may be nonnegative or nonpositive. Hence, let us invoke the Law of Total Expectation in order to handle each case separately. Let $E$ be the event that the inner product is nonnegative and let $E^c$ be the complementary event that it is negative. By Parameter Rule 2, the fact that $\mathbb{E}[\bar G_k \mid \mathcal{F}_k] = \nabla f(X_k)$ under Assumption 4.2, and Lemma 4.5, it follows that
(24)
Furthermore, by Assumptions 2.1–4.3, the Cauchy–Schwarz inequality, the triangle inequality, and the boundedness of the estimation error under Assumption 4.2, one obtains a corresponding bound conditioned on $E^c$. Combining this inequality with (24) yields the desired conclusion. ∎
Combining the preliminary bound in Lemma 4.6 with the previous two lemmas, we are now prepared to prove an upper bound on the expected decrease in the shifted barrier-augmented function that occurs with each iteration.
Lemma 4.9.
Proof.
Using this sufficient decrease result and looking further into the selections in Parameter Rule 2, we obtain the following result.
Lemma 4.10.
Proof.
We prove the result by building on Lemma 4.9. By Parameter Rule 2, there exist constants such that, for all $k \in \mathbb{N}$, the step sizes admit a lower bound of the same type as in (15). Since this is the same type of lower bound as in (15), the same argument as in the proof of Theorem 4.1 shows that a corresponding uniform bound holds for all $k \in \mathbb{N}$. Hence, by Lemma 4.9, all that remains is to prove that there exists a constant such that for all $k \in \mathbb{N}$ one finds that
(25)
Starting with the first term in (25), one finds a bound whose constants are determined in advance of any run of the algorithm. As for the second term in (25), one similarly finds such a bound. Combining these bounds yields (25) for some suitable constant. ∎
Our final theorem, presented next, now follows similarly to Theorem 3.16 in [7]. We provide the full proof for the sake of completeness.
Theorem 4.2.
Suppose that Assumptions 2.1, 3.1, 4.1, 4.2, and 4.3 hold and that Algorithm 1 is run with Parameter Rule 2. Then,
$$\liminf_{k \to \infty}\ \|P \nabla_x \phi(X_k, \mu_k)\|_2 = 0 \ \ \text{almost surely}.$$
Consequently, if in a given run there exists an infinite-cardinality set $\mathcal{K} \subseteq \mathbb{N}$ such that

- $\{\|P \nabla_x \phi(x_k, \mu_k)\|_2\}_{k \in \mathcal{K}} \to 0$,
- $\{x_k\}_{k \in \mathcal{K}} \to x_*$ for some $x_* \in \mathcal{F}$, and
- at $x_*$ the linear independence constraint qualification (LICQ) holds with respect to problem (1) in the sense that the columns of $A^T$ combined with the vectors in $\{\nabla c_i(x_*)\}_{i \in \mathcal{A}(x_*)}$, where $\mathcal{A}(x_*) := \{i \in [m] : c_i(x_*) = 0\}$, form a linearly independent set,

then $x_*$ is a KKT point for problem (1) in the sense that there exists a pair of Lagrange multipliers $(y_*, z_*)$ such that $(x_*, y_*, z_*)$ satisfies (3).
Proof.
It follows from Lemma 4.10 and the Law of Total Expectation that there exists a constant such that, for all $k \in \mathbb{N}$, the expected per-iteration decrease in the shifted barrier-augmented objective function obeys the bound from Lemma 4.10. Consider arbitrary $K \in \mathbb{N}$. Summing over $k \in [K]$ yields a telescoped inequality. After rearrangement, this yields an upper bound on the expectation of a nonnegative weighted sum of the squared projected-gradient norms. Since $\tilde\phi$ is bounded below and the accumulated noise terms are bounded, the right-hand side of this inequality converges to a finite limit as $K \to \infty$. Furthermore, by this finiteness, nonnegativity of the summands, and Fatou's lemma, one almost surely has that the corresponding infinite weighted sum is finite. Now consider the limit inferior of the projected-gradient norms, the expectation of which has been shown above to be zero. Nonnegativity of these norms for all $k \in \mathbb{N}$ and the Law of Total Expectation imply that this limit inferior is zero almost surely. This is the first desired conclusion of the theorem. The second desired conclusion, on the other hand, follows by using the same argument as in the proof of Theorem 4.1. ∎
5 Discussion of Assumptions 3.1 and 4.3
Our aim in this section is to discuss Assumptions 3.1 and 4.3. In particular, first for the deterministic setting, we provide (a) example problems for which Assumption 3.1 can be shown to hold or cannot be shown to hold, (b) further explanation about why Assumption 3.1 can be viewed as a combination of a nondegeneracy assumption and an assumption about the ratio of the initial barrier parameter to the initial neighborhood parameter being sufficiently large, and (c) a procedure for computing a direction to satisfy (5) when such a direction exists. We follow this with a discussion about the combination of Assumptions 3.1 and 4.3 for the stochastic setting. We emphasize upfront that, generally speaking, a straightforward procedure for computing a direction to satisfy the conditions of our convergence analysis for either the deterministic or stochastic setting may not be available in practice. For example, for one thing, determining a sufficiently large value for this ratio may rely on problem-specific constants that are not known in advance of a run of the algorithm. That being said, in this section we provide guidance for how to compute search directions in practice such that, along with parameter tuning (as is common for stochastic-gradient-based methods in practice), one obtains good practical performance.
Throughout this section, for the sake of clarity, we drop the subscript notation that indicates the iteration index of the algorithm—rather, in this section, a subscript only refers to an index of a component of a vector. In particular, in this section, we write $x$ for the iterate (which in turn means, e.g., that $x_1$ refers to the first component of the vector $x$), and similarly write $\theta$, $\mu$, $g$, and $d$ for the corresponding iteration-dependent quantities.
Let us begin with some example problems for the deterministic setting. Firstly, recall that under Assumption 2.1 a strictly feasible point must exist. This means that one can rule out problems with, e.g., an affine constraint and an inequality constraint whose combination implies the lack of a strictly feasible point. Secondly, observe that the existence or not of a direction satisfying (5) depends on the feasible region reduced to the affine set $\{x : Ax = b\}$. This means that one can distinguish cases of whether (5) holds or not by presuming that one has already restricted attention to this reduced space and consider only the geometry of the feasible region with respect to inequality constraints where the ambient space of the variables is this reduced space.
Let us now present an example for which Assumption 3.1 can be shown to hold. In particular, the following example shows that there exists a value of the tuple of algorithm parameters under the ranges specified by Algorithm 1 such that at all strictly feasible points there exists a direction satisfying (5). The main aspect of the example is that $\mu$ must be sufficiently large relative to $\theta$. We remark that the case that is considered in the example is the more challenging case to consider. Indeed, if both constraints are not simultaneously nearly active, then, using similar arguments, the parameter requirements are less restrictive than in the following example. We expound on this claim further after the example.
Example 1.
To start, let the algorithm parameters be arbitrary positive constants under the ranges specified by Algorithm 1. (A further restriction on the relationship between $\mu$ and $\theta$ is developed during the discussion of this example, when it is needed.) Consider a two-dimensional problem with two inequality constraints whose gradients are constant over the feasible region, no equality constraints, and with $g$ defined as in (4), so
(26)
Defining the relevant quantities accordingly, the conditions (5) reduce to
(27a)
(27b)
(27c)
Since the gradient vectors appearing in (26) are constant over the feasible region and the latter two (constraint) gradient vectors are nonzero, it follows from (26) that, for fixed parameter values, the vector $g$ tends to a limiting vector as the point of interest tends to the constraint boundary. At the same time, for any pair of positive weighting constants, with the search direction defined by
(28)
one finds that the search direction tends to a corresponding limiting direction as the constraint boundary is approached.
Now suppose that the constants and the point are such that both constraints are nearly active. Since the constraint gradients are fixed, the contributions of the two barrier terms can be compared directly.

Now suppose that the two weights in (28) are chosen so as to normalize the contributions of the two terms in the definition of the search direction. These choices then imply a uniform bound on the inner products between the limiting vector and the constraint gradients. Thus, one may conclude that there exists a positive constant such that a condition of the form (27b) holds at all points at which both constraints are nearly active.

On the other hand, with the weights defined as in the previous paragraph, a similar argument applies to the limiting direction. Thus, one may conclude that there exists a positive constant such that a condition of the form (27c) holds at all points at which both constraints are nearly active.
The arguments in the preceding two paragraphs involve the two limiting directions, where it was shown that there exist positive constants, uniform with respect to the point, such that conditions of the form (27b)–(27c) hold for these limiting directions. All that remains is to observe that, since these are the limiting directions as the boundary is approached, such conditions and (27a) hold for some positive constants as long as $\mu$ is sufficiently large relative to $\theta$. Altogether, from our observations, it follows that with $\mu$ sufficiently large relative to $\theta$, at any point at which both constraints are nearly active, the direction in (28) with the aforementioned choices of the weights satisfies (5).
Finally, observe that for any point at which both constraints are not nearly active, the situation is simpler than above. In such a setting, one does not need condition (27c) to hold for both constraints; it only needs to hold for whichever constraint, if any, is nearly active. This offers much more flexibility in the choice of the direction. Overall, since the constraint gradients are constant over the feasible region, one can again derive the existence of uniform parameter choices, including that $\mu$ is sufficiently large relative to $\theta$, to ensure that a direction satisfying (5) exists at all such points. ∎
The analysis in Example 1 can be extended to any problem with polyhedral constraints (for arbitrary problem dimensions) as long as, at any strictly feasible point, the interior of the polar cone of the nearly active constraint gradients is nonempty. Intuitively, this can be understood as follows. With $\mu$ sufficiently large relative to $\theta$, the gradient of the barrier term corresponding to nearly active constraints pushes the vector $g$ (through more emphasis on the barrier term) to point into the interior of the polar cone of the nearly active constraint gradients. This means that, as long as a direction is chosen appropriately to compensate for the different magnitudes of the norms of the constraint gradients, the desired conditions in (5) hold. Admittedly, the parameter choices that ensure that such a direction always exists are not straightforward to determine in advance of a run of the algorithm. This means that, in practice, the algorithm parameters require tuning and/or adaptive choices to be made, as we discuss further shortly. To illustrate the situation that we have described in Example 1, see Figure 1, on the left.
Now let us briefly describe an example for which we are unable to show that Assumption 3.1 holds with uniform parameter choices; see Figure 1, on the right.
Example 2.
Consider a two-dimensional problem with two inequality constraints and with $g$ defined as in (4). Defining quantities as before, (5) reduces to (27). The situation is similar to that in Example 1, except that the constraint gradients are not constant over the feasible region. Attempting to follow a similar analysis as in Example 1, one finds that the arguments break down as the problematic boundary point is approached from within the interior of the feasible region. ∎
Intuitively, Example 2 has the property that the constraint gradients point in opposite directions in the limit as the problematic point is approached from within the interior of the feasible region. This means that, if the algorithm were to converge to that point (say, because it is the optimal solution), the barrier terms for the two constraints would push against each other in the limit. Consequently, we are not able to show that there exists a uniform ratio of $\mu$ to $\theta$ such that a direction satisfying (5) always exists over the feasible region.
One sees in Figure 1 our claim that Assumption 3.1 is a type of nondegeneracy assumption. When, at any strictly feasible point, the polar cone of the nearly active constraint gradients is sufficiently wide, the situation is favorable, like on the left in Figure 1. On the other hand, if the polar cone collapses as some boundary point is approached, like in the situation on the right in Figure 1, then the situation is not favorable since the barrier terms for different constraints push against each other.
We close this section by observing that a procedure for computing a direction to satisfy (5), if one exists, has been suggested in Example 1. In particular, at any iterate one can compute a direction by (a) determining the set of nearly active constraints and (b) computing weights for the nearly active constraints, as in (28), such that the contribution from each nearly active constraint gradient is normalized in some sense in order to satisfy (5). Concretely, one can pose the computation of such weights as a feasibility problem, as in the sketch below.
However, in practice, we suspect that such a procedure would not be worth the required computational effort. Instead, we suggest that, in practice, a reasonable choice is to initialize $\mu$ and $\theta$ to positive values, and in each iteration compute the direction by solving a linear system of the form (21), as we specify in Assumption 4.3 for the stochastic setting. For example, the matrix $H$ can simply be an identity matrix, or it might be chosen as an identity matrix plus (an approximation of) the Hessian of the barrier function, which may involve second-order derivatives of $c$, but does not require derivatives (of any order) of the objective function $f$. In any case, if the computed direction fails to satisfy (5) (specifically, (5e)) for some choices of the algorithm parameters, then the algorithm might “reset” the barrier parameter to a larger value. This may be allowed to occur iteratively, at least up to some upper limit for practical purposes. In the worst case, the algorithm may need to compute smaller parameter values than as stated in our theoretical results. However, at least this is a practical approach that worked well in our experiments.
6 Numerical Experiments
As previously stated, we do not claim that the deterministic version of our algorithm would be competitive with state-of-the-art derivative-based interior-point methods, especially not with such methods that allow infeasible iterates and/or employ second-order derivatives. On the other hand, for the stochastic setting, we know of no other stochastic interior-point method like ours that can solve generally constrained, smooth optimization problems, meaning that there is no such method in the literature against which our algorithm can be compared on a level playing field. All of this being said, it is illustrative to demonstrate the practical performance of our method through numerical experiments, both on a well-established and diverse set of test problems and on a couple of relatively straightforward test problems that represent the settings for which the algorithm has been designed. In this section, we present the results of such numerical experiments. For our experiments, each run of the algorithm was conducted on a compute node with 16 AMD Opteron 6128 processor cores and 32GB of memory.
First, we conducted an experiment with a large test set of problems in order to demonstrate the broad applicability of our algorithm. For such a demonstration, in these experiments we considered only the deterministic version of our algorithm. (Our second and third sets of experiments consider the stochastic version.) We generated our test set from the CUTEst collection [12], specifically using the PyCUTEst interface [11]. This collection includes 1315 unconstrained or constrained smooth optimization problems. To generate the subset of problems that we employed for our experiments, we proceeded as follows. Firstly, we removed all problems with nonlinear equality constraints, with no inequality constraints, or that were too large to be loaded into memory. This resulted in a set of 396 problems of the form (1). Secondly, since our algorithm requires a strictly feasible initial point, we ran a so-called Phase I algorithm (implemented in Python) starting from the initial point provided by PyCUTEst; see Algorithm 2 in Appendix A. Of the 396 problems on which the Phase I algorithm was run, 10 encountered function evaluation errors through the interface and 59 resulted in a failure to find a strictly feasible point; these were all removed. Our test set comprises the remainder of these problems—327 in total—for which a strictly feasible initial point could be found. These points were stored as the initial points for this first set of our experiments.
We implemented Algorithm 1 in Python. We refer to the software as SLIP. For each test problem, for which the initial point was determined by Algorithm 2, the input parameters for the algorithm were set as follows. The initial neighborhood parameter and the initial barrier parameter were set to fixed positive values, and the corresponding sequences of neighborhood and barrier parameters were then prescribed so that $\{\mu_k/\theta_k\}$ remained constant for all $k \in \mathbb{N}$. Estimates of the bound and Lipschitz constants in Assumption 4.1 were determined by randomly generating points from a normal distribution centered at the initial point, then computing constraint function and derivative values at these points to compute bound and Lipschitz-constant estimates. These values were employed to set the step sizes as stated in Parameter Rule 1, with rate exponents chosen so that the condition on them in Theorem 4.1 was satisfied. Finally, SLIP employed fixed values for the remaining input parameters. Rather than attempt to ensure that (5) holds for all $k \in \mathbb{N}$, SLIP follows the strategy described at the end of §5 to reset $\mu$ whenever a search direction is computed that does not satisfy (5). In particular, $\mu$ was multiplied by a factor of 2 in such cases, although we imposed an overall limit beyond which $\mu$ was not increased. One additional feature that we included in the implementation is that we allowed the algorithm to explore values of $\gamma_k$ greater than 1, since we believe this would be a useful feature in any practical implementation of the method. Specifically, if $x_k + \alpha_k d_k \notin \mathcal{N}(\theta_{k+1})$, then the algorithm would iteratively reduce $\gamma_k$ from 1 until $x_k + \gamma_k \alpha_k d_k \in \mathcal{N}(\theta_{k+1})$ was found. On the other hand, if $x_k + \alpha_k d_k \in \mathcal{N}(\theta_{k+1})$, then the algorithm would iteratively increase $\gamma_k$ from 1 as long as $x_k + \gamma_k \alpha_k d_k \in \mathcal{N}(\theta_{k+1})$ and the barrier term in the barrier-augmented objective function did not increase.
We ran SLIP with a fixed iteration budget in order to determine the progress that the algorithm could make within that budget. (To resemble the stochastic setting, our implementation of SLIP does not employ second-order derivatives of the objective function, so one should not expect the algorithm to converge at a fast rate like other interior-point methods that employ second-order derivatives.) Let $x_K$ be the final iterate generated by a run of the algorithm on a test problem. Since our algorithm is a feasible method, one can compare the initial and final objective values for each run in order to determine if the algorithm has made progress. We confirmed that, indeed, in all of our runs of SLIP over our entire test set, the final objective value was less than the initial one. However, without knowledge of a minimizer for each problem in our test set, it is not possible to compare the final objective value with the optimal value. Instead, we considered the relative stationarity measure (recall Theorem 4.1):
$$\frac{\|P \nabla_x \phi(x_K, \mu_K)\|_2}{\min\{\|P \nabla_x \phi(x_1, \mu_1)\|_2,\ \|P \nabla_x \phi(x_1, \mu_K)\|_2\}}. \tag{29}$$
Here, the $\min$ in the denominator respects the fact that different choices of the barrier parameter correspond to different multiplier estimates at the initial point (see (18)), which in turn give different stationarity measures at the initial point. Our use of the $\min$ in the denominator in this definition of relative stationarity means that we are determining the better of the stationarity measures at the initial point to compare against our stationarity measure at the final iterate.
Figure 2 shows a histogram of relative stationarity values over our test set of problems from the CUTEst collection. The results show that, generally speaking, the relative stationarity values were quite small, meaning that the final iterates in most runs appeared to be much closer to stationarity than the initial iterates, thus indicating that within the iteration budget the algorithm made substantial progress toward stationarity. We conjecture that the problems on which the relative stationarity value was higher may be ones that are very nonlinear and/or have constraints that are degenerate near the iterates generated by the algorithm.

As a second experiment, we tested stochastic runs of SLIP on a randomly generated instance of a second-order conic optimization problem, namely,
$$\min_{x \in \mathbb{R}^n} \; c^\top x \quad \text{subject to} \quad \|Ax + b\|_2 \le t^\top x + d. \tag{30}$$
Specifically, we randomly generated the constraint data $(A, b, t, d)$ such that a strictly feasible point existed, and randomly generated the objective vector $c$ such that the problem had a finite optimal value. We chose a problem of this type since it is nonlinear, but convex, so for our first demonstration of stochastic runs of our algorithm we could avoid challenging situations that arise due to nonconvexity, such as the algorithm potentially converging to different minimizers. (By contrast, our third experiment considers a nonconvex problem.) First, as a benchmark, we ran deterministic SLIP with the same parameter choices as in our experiments with the CUTEst collection, except for two differences: (a) we set the prescribed parameter sequences so that the conditions of Parameter Rule 2 held, and (b) our procedure for computing the step-size scaling factors did not observe barrier-augmented objective function values, since this would require objective function values that are not tractable to compute in a stochastic setting; instead, we allowed the scaling factor to increase above 1, but only up to a limit of 10, when doing so maintained all constraint values less than or equal to a prescribed threshold.
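The following sketch illustrates one plausible way to generate data for an instance of (30) with the stated properties; the construction, including the choice $c = t$ to guarantee a finite optimal value, is our assumption for illustration and not necessarily the procedure used in our experiments.

```python
import numpy as np

def random_socp(n=50, m=10, seed=0):
    """One plausible way to generate data for a second-order cone
    program of the flavor of (30):
        min  c^T x   s.t.   ||A x + b||_2 <= t^T x + d.
    Illustrative sketch, not necessarily the experiments' generator."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n))
    t = rng.standard_normal(n)
    b = rng.standard_normal(m)
    # Choose d so that a reference point x_hat is strictly feasible
    # with margin 1.
    x_hat = rng.standard_normal(n)
    d = np.linalg.norm(A @ x_hat + b) - t @ x_hat + 1.0
    # With c = t, feasibility implies c^T x = t^T x >= -d (since
    # t^T x + d >= ||A x + b|| >= 0), so the optimal value is finite.
    c = t.copy()
    return c, A, b, t, d, x_hat
```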
Second, we ran stochastic SLIP with 10 different seeds for the random number generator. These runs used the same input parameter values as the deterministic algorithm, and they also employed the same Lipschitz-constant estimates and the same procedure for resetting them (up to the same overall limit) as the deterministic algorithm. Thus, the main difference between the deterministic and stochastic runs of the algorithm was that the latter employed stochastic objective-gradient estimates formed by adding noise to the true gradient, where for each iteration of each run the noise vector was drawn from a standard normal distribution (see the sketch following Table 1). By design of the algorithm, the final iterates generated by the deterministic and all stochastic runs of the algorithm were feasible with respect to all of the constraints. Table 1 shows the final objective values for each of the 10 stochastic runs of SLIP. For reference, the initial objective value was 1.80561e+03 for all runs, deterministic SLIP achieved a final objective value of 6.31100e+02, and the relative stationarity measure achieved by deterministic SLIP (see (29)) was 1.65377e-02. Thus, overall, one can observe that the deterministic and stochastic runs of SLIP yielded final iterates that were relatively close to stationarity.
| run | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| final objective value | 6.31103e+02 | 6.31102e+02 | 6.31100e+02 | 6.31101e+02 | 6.31100e+02 |
| run | 6 | 7 | 8 | 9 | 10 |
| final objective value | 6.31101e+02 | 6.31102e+02 | 6.31101e+02 | 6.31101e+02 | 6.31103e+02 |
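The stochastic oracle used in these runs admits a one-line sketch; here `grad_f` is a hypothetical callable returning the exact objective gradient.

```python
import numpy as np

def noisy_gradient(grad_f, x, rng):
    """Stochastic gradient estimate as described above: the true
    gradient plus a perturbation drawn from a standard normal
    distribution.  An illustrative sketch of the oracle, not SLIP code."""
    return grad_f(x) + rng.standard_normal(x.shape)

# Example usage with a fixed seed, as in one of the 10 stochastic runs:
# g_k = noisy_gradient(grad_f, x_k, np.random.default_rng(seed))
```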
As a third experiment, we tested stochastic runs of SLIP to train a binary classifier subject to a norm constraint for the mushrooms data set from the LIBSVM collection [6]. The model that we trained was an artificial neural network with a single hidden layer with 512 nodes, tanh activation, and cross-entropy loss; this objective function is nonconvex. We trained the model subject to the constraint that the neural network weights had squared $\ell_2$-norm less than or equal to 100; this feasible region is convex. As in our second experiment, we chose a convex feasible region to ensure that it would be reasonable to expect multiple stochastic runs of the algorithm to converge to points of similar solution quality in terms of the final objective function values (our measure of comparison), despite the nonconvexity of the objective function. The experimental setup was the same as for our second experiment, except that, rather than artificial noise in the stochastic-gradient estimates, we employed mini-batch gradient estimates with a mini-batch size of 256, as sketched below. (The mushrooms data set has 8124 data points, each with 112 features.)
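A minimal sketch of the mini-batch estimator follows, assuming a hypothetical per-example gradient callable `grad_loss_i` for the cross-entropy loss of the network; only the batch size of 256 comes from the text.

```python
import numpy as np

def minibatch_gradient(grad_loss_i, X, y, w, rng, batch_size=256):
    """Mini-batch stochastic-gradient estimate: average the per-example
    loss gradients over a uniformly sampled batch (batch_size=256
    matches the text).  Illustrative sketch, not SLIP's training code."""
    idx = rng.choice(X.shape[0], size=batch_size, replace=False)
    grad = np.zeros_like(w)
    for i in idx:
        grad += grad_loss_i(w, X[i], y[i])
    return grad / batch_size
```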
Table 2 shows the final objective values for each of the 10 stochastic runs of SLIP. For reference, the initial objective value was 6.93137e-01 for all runs, deterministic SLIP achieved a final objective value of 7.77697e-03, and the relative stationarity measure achieved by deterministic SLIP (see (29)) was 7.06457610e-02. Thus, one can observe that the deterministic and stochastic runs of SLIP yielded final iterates of comparable quality.
| run | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| final objective value | 7.77711e-03 | 7.76221e-03 | 7.79686e-03 | 7.82982e-03 | 7.78385e-03 |
| run | 6 | 7 | 8 | 9 | 10 |
| final objective value | 7.79379e-03 | 7.80981e-03 | 7.80194e-03 | 7.75375e-03 | 7.77848e-03 |
7 Conclusion
We have proposed, analyzed, and tested a single-loop interior-point framework for solving constrained optimization problems. Of particular interest is a stochastic-gradient-based version of the framework, which can be employed for solving problems with affine equality constraints and (potentially nonconvex) nonlinear inequality constraints, at least when a strictly feasible initial point can be provided (or computed through a so-called Phase I algorithm, such as the method that we present in Appendix A). We have shown that the algorithm possesses convergence guarantees in both the deterministic and stochastic settings. We have also shown that a deterministic version of the algorithm performs reliably on a large test set of problems, and that a stochastic version of the algorithm yields good results both in the setting of artificial noise and in the context of a mini-batch stochastic-gradient-based algorithm for training an artificial neural network for binary classification subject to a norm constraint.
A few significant open questions remain in the study of stochastic-gradient-based interior-point methods for solving nonlinearly constrained optimization problems. For example, our theoretical convergence guarantees require that a certain ratio of the prescribed parameter sequences is sufficiently large and that each computed search direction satisfies the conditions in Assumptions 3.1 and/or 4.3, but, as we have explained throughout the paper, a computationally effective strategy for computing search directions that adhere to our theoretical guarantees is not straightforward to design. A related open question is whether it is possible to design a single-loop infeasible interior-point framework that possesses theoretical convergence guarantees that are at least on par with those that we offer for the algorithm proposed in this paper. The main challenge in the design of such an approach is that our theoretical guarantees, which employ a prescribed sequence of neighborhood parameters, rely heavily on the fact that the iterates generated by our algorithm are feasible in every iteration. There exist stochastic-gradient-based infeasible Newton-type methods for solving equality-constrained optimization problems; see, e.g., [3, 1, 2, 4, 9, 8, 10, 14, 15, 17]. Hence, one might expect it to be possible to employ such methods for solving the equality-constrained subproblems that arise in an infeasible interior-point method. However, even if one were to introduce slack variables to which the barrier functions are applied, it remains unclear how to merge such an approach with our strategy for ensuring feasibility at all iterates. Concretely, suppose that inequality constraints $c(x) \le 0$ are reformulated with slack variables as $c(x) + s = 0$ along with $s \ge 0$, where the latter inequalities are handled with a barrier function. Attempting to follow the strategy for bound-constrained optimization in [7], one is confronted with the challenge that the search direction in the slack variables, call it $d_s$, depends on the search direction in the original variables, say $d_x$, and $d_s$ may have substantially negative components. It is difficult to follow a strategy such as ours to ensure, e.g., that there exists a step size uniformly bounded below such that $s + \alpha d_s$ remains sufficiently positive, say, to stay within a prescribed neighborhood. The difficulties of designing such an approach led us to propose the feasible method that has been presented in this paper, although it is possible that such an approach, or one based on alternative strategies, could be designed with convergence guarantees.
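To make the obstacle concrete, the following display sketches the slack reformulation and the positivity requirement discussed above; the notation ($\mu$, $\tau$, $d_s$, step size $\alpha_k$) is introduced here for illustration only and is not a statement from our analysis.

```latex
% Slack reformulation with a barrier on the slacks (illustrative):
\[
  \min_{x,\,s}\ f(x) - \mu \sum_i \log(s_i)
  \quad \text{s.t.} \quad c(x) + s = 0, \;\; Ax = b.
\]
% Given directions (d_x, d_s), keeping the slacks sufficiently positive
% requires a step size \alpha_k, uniformly bounded below, with
\[
  s_k + \alpha_k d_s \;\ge\; (1 - \tau)\, s_k \;>\; 0
  \qquad \text{for some } \tau \in (0, 1),
\]
% which is difficult to guarantee when d_s depends on d_x.
```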
Appendix A Appendix: Algorithm for Finding a Strictly Feasible Point
Our algorithm for finding a strictly feasible initial point for our numerical experiments with problems from the CUTEst collection [12] is presented as Algorithm 2 in this appendix. For simplicity, for our discussion here we state user-defined parameters in terms of the specific values that we used in our implementation rather than introduce a generic parameter range.
The algorithm can be understood as an infeasible interior-point method in which line searches on a merit function are performed as a step-acceptance mechanism. Rather than attempt to solve an optimization problem to high accuracy, the aim of the algorithm is merely to find a strictly feasible point, i.e., a point in the feasible region (recall (2)) satisfying the affine equality constraints and strictly satisfying the (potentially nonlinear) inequality constraints. Hence, the barrier parameter is fixed at 1 and the algorithm terminates as soon as a strictly feasible point is found. A strictly feasible point is defined as follows. As opposed to problem (1), which is stated in terms of only one-sided inequality constraints, our definition here of a strictly feasible point distinguishes between one- and two-sided inequalities. With respect to any one-sided inequality, strict feasibility of a point is defined as strict satisfaction of that inequality; with respect to any two-sided inequality, strict feasibility is defined as strict satisfaction of both the lower and the upper bound. A sketch of such a check appears below.
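The following is a minimal sketch of a strict-feasibility check in the spirit of these definitions; the tolerance `margin` and the exact form of the two-sided test are our assumptions, not the paper's.

```python
import numpy as np

def is_strictly_feasible(c_one, c_two, lower, upper, margin=1e-8):
    """Check strict feasibility: one-sided inequality values c_one <= 0
    must hold strictly, and two-sided values lower <= c_two <= upper
    must hold strictly at both bounds.  The arrays hold constraint
    values at the candidate point; the margin is an illustrative
    numerical safeguard, not a parameter from the paper."""
    return (np.all(c_one < -margin)
            and np.all(c_two > lower + margin)
            and np.all(c_two < upper - margin))
```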
Supposing again that the inequality constraints are stated as in problem (1), written here as $c(x) \le 0$ along with affine equality constraints $Ax = b$, and introducing a slack vector $s > 0$ with $c(x) + s = 0$, the algorithm can be understood as an iterative method toward solving

$$\min_{x,s}\ f(x) - \sum_i \log(s_i) \quad \text{subject to} \quad c(x) + s = 0, \;\; Ax = b, \tag{31}$$

the first-order optimality conditions for which are, with Lagrange multipliers $(z, y)$ and $S := \operatorname{diag}(s)$,

$$\nabla f(x) + \nabla c(x)\, z + A^\top y = 0, \qquad Sz = \mathbf{1}, \qquad c(x) + s = 0, \qquad Ax = b.$$
Given an initial point, the algorithm commences by attempting to compute a point satisfying the affine equality constraints by solving a least-squares problem. (If this yields such a point, then the algorithm continues; otherwise, the run is declared a failure.) Then, up to an iteration limit, the algorithm follows a standard infeasible interior-point approach with line searches on a merit function; in particular, the merit function is the sum of the barrier-augmented objective from (31), weighted by a merit parameter that is updated adaptively, and a measure of the violation of the constraints in (31). (If the iteration limit is exceeded, then the run is declared a failure.) Firstly, a Hessian approximation is determined with a modification, if necessary, to ensure that it is sufficiently positive definite. To represent potential use in practice with our proposed Algorithm 1, the Hessian approximation does not employ second-order derivatives of the objective function, although it does employ second-order derivatives of the constraint functions. Secondly, a search direction is determined by solving a linear system. Thirdly, the merit parameter is updated using a standard technique from the literature; see, e.g., [5]: defining the predicted reduction in the merit function corresponding to the current iterate and search-direction components, the update ensures that the merit parameter is sufficiently small that the predicted reduction is sufficiently large relative to the constraint violation. Fourthly, the largest step size in $(0, 1]$ satisfying a fraction-to-the-boundary rule is determined. Finally, a line search is conducted to compute a step size that satisfies the fraction-to-the-boundary rule and yields sufficient decrease in the merit function.
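A compact sketch of the final two steps follows (fraction-to-the-boundary rule, then backtracking on the merit function); all names, the value $\tau = 0.995$, and the sufficient-decrease constant are illustrative assumptions rather than Algorithm 2's actual parameters.

```python
import numpy as np

def compute_step_size(phi, x, s, dx, ds, pred, tau=0.995, eta=1e-4,
                      max_backtracks=30):
    """Largest step size in (0, 1] satisfying a fraction-to-the-boundary
    rule on the strictly positive slacks s, followed by a backtracking
    line search requiring sufficient decrease in the merit function phi
    relative to the (positive) predicted reduction pred.  Sketch only."""
    # Fraction-to-the-boundary: require s + alpha * ds >= (1 - tau) * s,
    # i.e., alpha <= -tau * s_i / ds_i for every component with ds_i < 0.
    neg = ds < 0.0
    alpha = min(1.0, float(np.min(-tau * s[neg] / ds[neg]))) if np.any(neg) else 1.0
    # Backtrack until sufficient decrease (or give up after a limit).
    phi0 = phi(x, s)
    for _ in range(max_backtracks):
        if phi(x + alpha * dx, s + alpha * ds) <= phi0 - eta * alpha * pred:
            break
        alpha *= 0.5
    return alpha
```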
References
- [1] Albert S. Berahas, Raghu Bollapragada, and Baoyu Zhou. An adaptive sampling sequential quadratic programming method for equality constrained stochastic optimization. arXiv:2206.00712, 2022.
- [2] Albert S. Berahas, Frank E. Curtis, Michael J. O’Neill, and Daniel P. Robinson. A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians. Mathematics of Operations Research, https://doi.org/10.1287/moor.2021.0154, 2023.
- [3] Albert S. Berahas, Frank E. Curtis, Daniel P. Robinson, and Baoyu Zhou. Sequential quadratic optimization for nonlinear equality constrained stochastic optimization. SIAM Journal on Optimization, 31(2):1352–1379, 2021.
- [4] Albert S. Berahas, Jiahao Shi, Zihong Yi, and Baoyu Zhou. Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction. Computational Optimization and Applications, 86:79–116, 2022.
- [5] Richard H. Byrd, Mary E. Hribar, and Jorge Nocedal. An interior point algorithm for large-scale nonlinear programming. SIAM Journal on Optimization, 9(4):877–900, 1999.
- [6] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
- [7] Frank E. Curtis, Vyacheslav Kungurtsev, Daniel P. Robinson, and Qi Wang. A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems. arXiv:2304.14907, 2023.
- [8] Frank E. Curtis, Michael J. O’Neill, and Daniel P. Robinson. Worst-Case Complexity of an SQP Method for Nonlinear Equality Constrained Stochastic Optimization. Mathematical Programming, https://doi.org/10.1007/s10107-023-01981-1, 2023.
- [9] Frank E. Curtis, Daniel P. Robinson, and Baoyu Zhou. Inexact sequential quadratic optimization for minimizing a stochastic objective function subject to deterministic nonlinear equality constraints. arXiv:2107.03512 (to appear in INFORMS Journal on Optimization), 2021.
- [10] Yuchen Fang, Sen Na, Michael W. Mahoney, and Mladen Kolar. Fully stochastic trust-region sequential quadratic programming for equality-constrained optimization problems. SIAM Journal on Optimization, 34(2):2007–2037, 2024.
- [11] Jaroslav Fowkes, Lindon Roberts, and Árpád Bűrmen. PyCUTEst: an open source Python package of optimization test problems. Journal of Open Source Software, 7(78):4377, 2022.
- [12] Nicholas I. M. Gould, Dominique Orban, and Philippe L. Toint. CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Comp. Opt. Appl., 60(3):545–557, 2015.
- [13] Olvi L. Mangasarian and Stan Fromovitz. The Fritz John necessary optimality conditions in the presence of equality and inequality constraints. Journal of Mathematical Analysis and Applications, 17:37–47, 1967.
- [14] Sen Na, Mihai Anitescu, and Mladen Kolar. An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians. Mathematical Programming, 199:721–791, 2023.
- [15] Sen Na and Michael W. Mahoney. Asymptotic convergence rate and statistical inference for stochastic sequential quadratic programming. arXiv:2205.13687, 2022.
- [16] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer-Verlag New York, 2006.
- [17] Songqiang Qiu and Vyacheslav Kungurtsev. A sequential quadratic programming method for optimization with stochastic objective functions, deterministic inequality constraints and robust subproblems. arXiv:2302.07947, 2023.
- [18] Robert J. Vanderbei and David F. Shanno. An Interior-Point Algorithm for Nonconvex Nonlinear Programming. Computational Optimization and Applications, 13:231–252, 1999.
- [19] Andreas Wächter and Lorenz T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1):25–57, 2006.