Novel Optimization Techniques for Parameter Estimation
Abstract
In this paper, we introduce a new optimization algorithm that is well suited for solving parameter estimation problems. We call our new method cubic regularized Newton with affine scaling (CRNAS). In contrast to so-called first-order methods, which rely solely on the gradient of the objective function, our method utilizes the Hessian of the objective. As a result it is able to focus on points satisfying the second-order optimality conditions, as opposed to first-order methods that simply converge to critical points. This is an important feature in parameter estimation problems, where the objective function is often non-convex and as a result there can be many critical points, making it near impossible to identify the global minimum. An important feature of parameter estimation in mathematical models of biological systems is that the parameters are constrained by either physical constraints or prior knowledge. We use an affine scaling approach to handle a wide class of constraints. We establish that CRNAS identifies a point satisfying $\epsilon$-approximate second-order optimality conditions within $\mathcal{O}(\epsilon^{-3/2})$ iterations. Finally, we compare CRNAS with MATLAB's optimization solver fmincon on three different test problems. These test problems all feature mixtures of heterogeneous populations, a problem setting that CRNAS is particularly well-suited for. Our numerical simulations show CRNAS has favorable performance, performing comparably if not better than fmincon in accuracy and computational cost for most of our examples.
1 Introduction
Mathematical modeling plays a critical role in understanding and quantifying biological systems. In particular, it enables the quantification of dynamics, thus providing the ability to predict potential outcomes and guide future studies. An essential component of successful mathematical modeling is parameter estimation. Here the goal is to train and/or fit a given mathematical model to observed data. The most common method of parameter estimation follows the maximum likelihood framework where one formulates a statistical model with associated parameters and maximizes a likelihood function over the parameter space to fit the model to provided data. Thus, parameter estimation problems often reduce to optimization problems.
A common feature of mathematical models arising in the study of biological systems is a highly nonlinear dependence on the models' parameters. For example, the parameter sensitivities in a system of nonlinear differential equations also satisfy a nonlinear system of differential equations [CLPS03]. Even when one has an explicit formulation of their mathematical model there is often a nonlinear dependence on the model parameters, e.g., the logistic growth model, $P(t) = \frac{K P_0 e^{rt}}{K + P_0 (e^{rt} - 1)}$, has a nonlinear dependence on the parameters $r$ and $K$.
Parameters in mathematical models for biological systems usually have a physical interpretation. So, the values they can assume are often restricted based on our knowledge of the natural world. For example, non-negativity and upper-bound constraints are often desired if not necessary. This is in contrast to neural network models where the parameters are often unconstrained. The presence of constraints on the parameter space adds non-trivial difficulties to the derived parameter estimation optimization problem. For instance, for constrained problems one cannot simply apply gradient descent since the updates could take the parameters out of their feasible region.
Another phenomenon to consider when modeling biological systems is heterogeneous populations. For example, when studying how cells respond in vitro to chemotherapy there can be a heterogeneous response [DKHC+08]. To model this type of heterogeneity one often assumes there are a known number $S$ of distinct subpopulations present at unknown proportions $p_1, \dots, p_S$, each with their own distinct characteristics. The presence of these unknown proportions can introduce a new challenge to the parameter estimation problem, namely equality constraints. In particular, we know a priori any set of unknown proportions satisfies the conditions $\sum_{i=1}^{S} p_i = 1$ and $p_i \geq 0$ for $i = 1, \dots, S$.
Combining all of this together, we see parameter estimation problems for biological models utilizing a maximum likelihood framework reduce to nonlinear and non-convex constrained optimization problems [WGM+24, LC13, MMB03]. The loss of convexity is an immense challenge. Convexity in optimization enables us to infer global properties from local properties, i.e., local minima are also global minima. This ability to infer global properties from local ones disappears when convexity is lost. Global optimal solutions are preferred, of course, and this is no different in parameter estimation. Multiple global optimization frameworks have been suggested to address this issue [SWH21, GB15]. These frameworks propose strategies for locating a global optimum while relying on solutions provided by local optimization algorithms, which converge to locally optimal solutions. A frequently employed technique is a multi-start approach: solve the optimization problem starting from many randomly selected initial points and obtain multiple local solutions, as sketched below. Among these local solutions, one can argue the best local solution is a global solution if the problem has only a finite number of local solutions; however, confirming a finite number of local solutions becomes challenging when dealing with nonlinear and non-convex constrained continuous optimization models. In summary, current global optimization frameworks rely heavily on locating many local solutions to the constrained optimization problem, emphasizing the need for efficient and accurate local methods.
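A minimal sketch of the multi-start strategy just described, assuming a generic local solver; all names here are illustrative rather than drawn from any particular library:

```python
import numpy as np

def multi_start(local_solver, objective, sample_initial_point, n_starts=20, seed=0):
    """Run a local solver from several random initializations and keep the best.

    `local_solver(objective, x0)` is assumed to return a locally optimal point;
    `sample_initial_point(rng)` draws a random feasible starting guess.
    """
    rng = np.random.default_rng(seed)
    best_x, best_val = None, np.inf
    for _ in range(n_starts):
        x0 = sample_initial_point(rng)
        x = local_solver(objective, x0)
        val = objective(x)
        if val < best_val:          # keep the lowest objective value seen so far
            best_x, best_val = x, val
    return best_x, best_val
```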
Many of the likelihood functions utilized in the study of mathematical biology have accessible higher-order derivative information; however, numerous optimization algorithms do not take advantage of this. Procedures such as gradient descent and quasi-Newton methods only use the gradient information of the objective function, which categorizes them as first-order methods. For this reason, these approaches typically only yield solutions satisfying the first-order optimality conditions, which might not be locally optimal. In the parameter estimation problems we consider it is possible to compute the Hessian of the objective function and utilize this information to obtain better solutions. So, in this work, we explore the advantages of optimization algorithms utilizing second-order information and develop a new algorithm. Our proposed method is called cubic regularized Newton with affine scaling (CRNAS). This method combines the ideas of the cubic regularized Newton's method and affine scaling in order to provide a novel second-order scheme capable of solving constrained optimization problems. An important advantage of CRNAS over first-order methods is that it finds solutions satisfying the second-order optimality conditions. Due to the high levels of nonlinearity in mathematical biology models, there are possibly many solutions satisfying the first-order optimality conditions which are far from optimal; therefore, bypassing these solutions is crucial, and converging to second-order stationary points has been shown to produce globally optimal solutions for some non-convex models [BVB16, GLM16]. We provide a convergence analysis for CRNAS, obtaining the best possible iteration complexity bound for the class of models we study, and our numerical experiments show that a simple implementation of CRNAS is already competitive with MATLAB's state-of-the-art solvers.
The paper is structured as follows: Section 2 develops CRNAS for a general class of constrained non-convex problems. We discuss the two methodologies grafted together to construct CRNAS, the cubic regularized Newton's method and the affine scaling method, and present the convergence theory; Section 3 summarizes the state-of-the-art algorithms we compared CRNAS with in our numerical experiments and details the metrics used to compare their respective efficiencies; Sections 4 and 5 present numerical experimentation with CRNAS using two case studies; Section 4 investigates a cancer drug response estimation problem while Section 5 studies a general application in heterogeneous logistic growth estimation; the paper concludes in Section 6 with a discussion of the advantages and limitations of CRNAS and a proposed implementation scenario; additional appendices present the technical arguments of our analysis, exposition on CRNAS's implementation, and details about the set-up of our numerical experiments.
2 Methodology
2.1 Optimization Framework
In a parameter estimation problem, one has a function that maps inputs and a parameter set to a desired output. Then, for a given dataset one is interested in finding the best fitting (in some sense) parameter set. How one defines 'best fitting' is a choice that the modeler makes based on their understanding of the dataset and its generation. In particular, assume we have a function $f$ that takes inputs $x$ and parameter set $\theta$ and generates a prediction of a desired output, $y = f(x, \theta)$. If we have a dataset of paired inputs and outputs, $\{(x_i, y_i)\}_{i=1}^{N}$, and assume a simple statistical model generating the outputs, $y_i = f(x_i, \theta) + \varepsilon_i$, where $\varepsilon_i$ are independent and identically distributed random variables with probability density function $\phi$ that are used to represent potential observation noise, we can then write the negative log-likelihood of the parameter set given the observed data as
$$\mathcal{L}\big(\theta \mid \{(x_i, y_i)\}_{i=1}^{N}\big) = -\sum_{i=1}^{N} \log \phi\big(y_i - f(x_i, \theta)\big).$$
To complete the parameter estimation problem, we must minimize the function $\mathcal{L}$ over $\theta$; however, as mentioned before, the components of $\theta$ might have a physical interpretation and therefore be constrained. For example, they might be constrained to be non-negative, or bounded, or, as stated in the introduction, they might be proportions that sum to one. As a result, we have reduced the parameter estimation problem to a constrained optimization problem. Note that when writing the negative log-likelihood we will often drop the explicit dependence on the observed data.
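As a concrete, minimal sketch (ours, not from the paper), the negative log-likelihood above can be coded directly for the special case where $\phi$ is a mean-zero Gaussian density; the function names and the noise model are illustrative assumptions:

```python
import numpy as np

def negative_log_likelihood(theta, f, x_data, y_data, sigma=1.0):
    """Negative log-likelihood for y_i = f(x_i, theta) + eps_i with
    eps_i ~ N(0, sigma^2), a concrete instance of the general density phi."""
    residuals = y_data - np.array([f(x, theta) for x in x_data])
    n = len(y_data)
    # -sum_i log phi(y_i - f(x_i, theta)) for the Gaussian density phi
    return 0.5 * np.sum(residuals**2) / sigma**2 + n * np.log(sigma * np.sqrt(2 * np.pi))
```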
Our focus in this work is creating algorithms to numerically solve a broad class of constrained optimization problems that arise from parameter estimation problems of the form
$$\min_{\theta} \; \mathcal{L}(\theta) \quad \text{subject to} \quad A\theta = b, \;\; l \leq \theta \leq u, \tag{1}$$
where the objective function is possibly non-convex and the domain is the intersection of a linear and a box constraint. Observe, all the examples given for constraints on the parameters can be represented by the conditions in (1), and we can actually further generalize our optimization framework by noting the box constraint, $l \leq \theta \leq u$, can be rewritten as a conic constraint. Letting $z_1 = \theta - l$ and $z_2 = u - \theta$, we see the condition $z_1 + z_2 = u - l$ and $(z_1, z_2) \geq 0$ is equivalent to the box constraint. So, we can replace the box constraint with an additional linear equality constraint and restrictions to the non-negative orthant, i.e., $\mathbb{R}^{2n}_{+}$.
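To make the reduction concrete, the following display (in our notation) writes (1) in the form of the general model (2) below, with the stacked constraint absorbing both the original equality constraint and the box:
$$\min_{z = (z_1, z_2) \in \mathbb{R}^{2n}} \; \mathcal{L}(z_1 + l) \quad \text{s.t.} \quad \begin{bmatrix} A & 0 \\ I & I \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} b - Al \\ u - l \end{bmatrix}, \qquad z \in K := \mathbb{R}^{2n}_{+}.$$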
The set $\mathbb{R}^{2n}_{+}$ is an example of a pointed convex cone, that is, a set $K$ such that for all $x, y \in K$ and $\lambda \geq 0$: $x + y \in K$, $\lambda x \in K$, and $K \cap (-K) = \{0\}$. If additionally the span of $K$ is the entire ambient space we say the cone is solid. Thus, due to this equivalence, we shall instead develop our algorithm to solve the more general class of minimization problems
$$\min_{x} \; f(x) \quad \text{subject to} \quad Ax = b, \;\; x \in K, \tag{2}$$
where $K$ is a pointed convex solid cone and $f$ is smooth but possibly non-convex. For a first-order method, one typically expects to find a first-order solution, which in the unconstrained setting is defined as a point satisfying $\nabla f(x) = 0$. However, as in the examples in Sections 4 and 5, functions may have many saddle points which satisfy first-order conditions while failing to be local minima, and second-order methods are ideal to avoid converging to such saddle points. In the optimization literature, one of the major recent developments is the introduction of second-order methods which converge to second-order stationary points [CRS19, OW21, ROW20], but relatively few papers in the literature deal with developing second-order methods for non-convex constrained models. Independent of our work, in a very recent arXiv paper, Dvurechensky and Staudigl [DS24] proposed a similar approach to ours to solve non-convex constrained optimization models. The main difference between our work and theirs lies in the fact that we use the Dikin ellipsoid and a cubic regularization of the objective in defining our subproblems, while they use only a cubic regularization of the objective to form their subproblems.
In this paper, we use the definitions for first-order (FOSP) and second-order stationary points (SOSP) as introduced in [HL23]. The authors of [HL23] also introduced a so-called Newton-conjugate gradient (Newton-CG) based barrier method to find an $\epsilon$-SOSP for non-convex conically constrained models using Cholesky factorizations and other fundamental operations. In comparison, the authors of [DS24] apply a cubic regularized model to get a worst-case iteration bound. CRNAS also obtains a worst-case iteration bound of $\mathcal{O}(\epsilon^{-3/2})$ to an $\epsilon$-SOSP for (2); CRNAS is simple to implement and is well suited for the situation when the Hessian is easily obtainable. Additionally, our approach avoids the need to solve a possibly ill-conditioned Newton's equation; we only require the solution to a simple subproblem which we discuss in Appendix A.2. Next, we introduce the notion of self-concordant barrier functions for conic constraints, which play an important role in our analysis, and the corresponding notions of $\epsilon$-FOSP and $\epsilon$-SOSP arising from them.
2.2 Logarithmic Homogeneous Self-concordant Barrier Functions
Logarithmic homogeneous self-concordant barrier functions [NN94] play an important role in interior-point methods. In this paper, we assume the cone $K$ is convex, pointed, and solid and has a logarithmic homogeneous self-concordant barrier function $F$; that is, $F$ is convex, three-times continuously differentiable over $\operatorname{int} K$, $F(x_k) \to \infty$ for all sequences $\{x_k\} \subset \operatorname{int} K$ which converge to a point on the boundary of $K$, and for any $x \in \operatorname{int} K$, $t > 0$, and direction $h$, the following properties are satisfied:
$$\big|\nabla^3 F(x)[h, h, h]\big| \leq 2\big(\nabla^2 F(x)[h, h]\big)^{3/2} \qquad \text{and} \qquad F(tx) = F(x) - \nu \ln t,$$
where $\nu$ is a positive constant. As an example, $F(x) = -\sum_{i=1}^{n} \ln x_i$ is a logarithmic homogeneous self-concordant barrier function for the cone $\mathbb{R}^{n}_{+}$ with $\nu = n$.
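A quick numerical sanity check of these properties for the orthant barrier, together with the induced local norm introduced next (a sketch in our own notation):

```python
import numpy as np

# Log barrier for the nonnegative orthant: F(x) = -sum_i log(x_i), nu = n.
F = lambda x: -np.sum(np.log(x))
hess_F = lambda x: np.diag(1.0 / x**2)      # nabla^2 F(x)

x = np.array([0.5, 1.2, 3.0]); t = 2.7; nu = len(x)
# logarithmic homogeneity: F(t x) = F(x) - nu * log(t)
assert np.isclose(F(t * x), F(x) - nu * np.log(t))

# the induced local norm ||u||_x = sqrt(u' nabla^2 F(x) u) defined below
u = np.array([0.1, -0.2, 0.3])
local_norm = np.sqrt(u @ hess_F(x) @ u)     # equals ||u / x|| for this barrier
assert np.isclose(local_norm, np.linalg.norm(u / x))
```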
An important aspect of barrier functions comes from the fact that they define induced local norms. These norms are standard in the conic optimization literature and play a crucial role in our algorithm development and analysis. Following convention, we let
$$\|u\|_x := \sqrt{u^\top \nabla^2 F(x)\, u} \qquad \text{and} \qquad \|u\|_x^* := \sqrt{u^\top \big(\nabla^2 F(x)\big)^{-1} u}$$
for all $x \in \operatorname{int} K$ and $u \in \mathbb{R}^n$. Note, unless otherwise stated, $\|\cdot\|$ denotes the regular Euclidean norm. Following the pattern of [HL23], a feasible solution $x$ with $x \in \operatorname{int} K$ is said to be an $\epsilon$-approximate first-order stationary point ($\epsilon$-FOSP) for (2) if
$$\min_{\lambda} \big\|\nabla f(x) - A^\top \lambda\big\|_x^* \leq \epsilon,$$
which means that the angle between $\nabla f(x)$ and the orthogonal complement subspace of $\{h : Ah = 0\}$, namely the range of $A^\top$, is of order $\epsilon$. The latter statement can be restated in terms of the local norms as
$$|\langle \nabla f(x), h \rangle| \leq \epsilon\, \|h\|_x \tag{3}$$
for all $h$ satisfying $Ah = 0$. Moreover, a solution is called an $\epsilon$-approximate second-order stationary point ($\epsilon$-SOSP) if in addition to being an $\epsilon$-FOSP the point also satisfies
$$\big\langle \nabla^2 f(x)\, h, h \big\rangle \geq -\sqrt{\epsilon}\, \|h\|_x^2$$
for all $h$ satisfying $Ah = 0$; see Remark 1 (ii) of [HL23] for further details. Next, we introduce the two methods which inspired our algorithm design.
2.3 Cubic Regularized Newton’s Method
The cubic regularized Newton’s method was first proposed by Nesterov and Polyak to solve smooth unconstrained optimization problems [NP06]. Their approach adds a cubic regularization term to the second-order Taylor expansion of the objective function about the current iterate, $x_k$, and solves this subproblem to compute the next iterate, i.e.,
$$x_{k+1} = \operatorname*{argmin}_{x} \; \langle \nabla f(x_k), x - x_k \rangle + \frac{1}{2}\big\langle \nabla^2 f(x_k)(x - x_k), x - x_k \big\rangle + \frac{M}{6}\|x - x_k\|^3. \tag{4}$$
If the objective function has a Lipschitz continuous Hessian with constant $L_2$ and $M \geq L_2$, then the above subproblem amounts to minimizing a cubic upper bound of the objective function about $x_k$. Hence, under mild assumptions, cubic regularized Newton monotonically decreases the value of the objective function.
This method boasts multiple desirable qualities. The algorithm converges globally to second-order stationary points and computes a point satisfying the approximate first-order unconstrained optimality conditions, i.e., $\|\nabla f(x)\| \leq \epsilon$, within $\mathcal{O}(\epsilon^{-3/2})$ iterations, which bests gradient descent, and the method converges quadratically near strict local minima [NP06]. Furthermore, though (4) is generally a non-convex problem, the subproblem is actually equivalent to minimizing a convex function in one variable. Thus, cubic regularized Newton is an implementable, globally convergent, and locally fast converging method to second-order stationary points. For these reasons, we seek to incorporate aspects of this procedure into our design.
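For illustration, a sketch of a single cubic regularized Newton step, solving (4) through the standard reduction to a one-dimensional secular equation; this ignores the degenerate "hard case" and is not the paper's implementation:

```python
import numpy as np
from scipy.optimize import brentq

def cubic_newton_step(g, H, M):
    """One cubic regularized Newton step: minimize
    g.d + 0.5 d'Hd + (M/6)||d||^3 via a scalar fixed-point (secular) equation.
    Assumes the generic case (g not orthogonal to the bottom eigenspace)."""
    w, V = np.linalg.eigh(H)          # H = V diag(w) V'
    gt = V.T @ g
    def norm_d(r):                    # ||d(r)|| with d(r) = -(H + (M/2) r I)^{-1} g
        return np.sqrt(np.sum((gt / (w + 0.5 * M * r))**2))
    # the global minimizer satisfies r = ||d(r)|| with H + (M/2) r I positive definite
    r_lo = max(0.0, -2.0 * w.min() / M) + 1e-12
    r_hi = max(r_lo + 1.0, 2.0 * norm_d(r_lo + 1.0))
    while norm_d(r_hi) > r_hi:        # expand until the root is bracketed
        r_hi *= 2.0
    r = brentq(lambda t: norm_d(t) - t, r_lo, r_hi)
    return -V @ (gt / (w + 0.5 * M * r))
```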
2.4 Affine Scaling Method
The affine scaling method [LV90, Bar86, VMF86] is an algorithm for solving linear programming problems by rescaling the variables and constraining the next iterate to lie within a ball contained inside the cone to get a better iterate at a reduced computational cost. Consider the linear program
$$\min_{x} \; c^\top x \quad \text{subject to} \quad Ax = b, \;\; x \geq 0.$$
In each iteration, we scale the problem based on the current point $x_k > 0$. Let $y = X_k^{-1} x$, where $X_k$ is a diagonal matrix with diagonal elements $(x_k)_1, \dots, (x_k)_n$, and let $e$ be the vector of all ones in $\mathbb{R}^n$. Then, the non-negativity constraint is equivalent to $y \geq 0$, and the current iterate corresponds to $y = e$. We desire to have the next iterate stay inside the interior of the cone, so we replace $y \geq 0$ with a ball constraint, that is, we instead enforce $\|y - e\| \leq r$, where $r \in (0, 1)$. This then ensures the next iterate cannot lie on the boundary of the cone with the proximity to the boundary dictated by $1 - r$, i.e., $1 - r$ close to zero can yield new iterates near the boundary. With this new constraint and change-of-variable, we form the following subproblem
$$\min_{y} \; (X_k c)^\top y \quad \text{subject to} \quad A X_k y = b, \;\; \|y - e\| \leq r. \tag{5}$$
The main benefit of (5) is it has a closed form solution. So, the affine scaling method proceeds by forming and solving (5) and using the variable transformation $x_{k+1} = X_k y^{*}$ to obtain the next iterate. Utilizing the notation introduced in Section 2.2, we can then concisely write the affine scaling update as
$$x_{k+1} = \operatorname*{argmin}_{x} \; c^\top x \quad \text{subject to} \quad Ax = b, \;\; \|x - x_k\|_{x_k} \leq r, \tag{6}$$
where $F(x) = -\sum_{i=1}^{n} \ln x_i$ is the barrier function used to define the norm $\|\cdot\|_{x_k}$ in the constraint.
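A minimal sketch of one affine scaling iteration for the linear program above; the formulas follow the classical dual-estimate derivation, and the step rule shown is one common choice rather than the only one:

```python
import numpy as np

def affine_scaling_step(x, A, b, c, beta=0.5):
    """One affine scaling step for  min c'x  s.t.  Ax = b, x >= 0,
    from a strictly feasible interior point x > 0, with beta in (0, 1)."""
    X2 = np.diag(x**2)
    # dual estimate and reduced costs for the scaled problem
    w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)
    r = c - A.T @ w
    d = -X2 @ r                           # descent direction; satisfies A d = 0
    step = beta / np.linalg.norm(x * r)   # ||X r|| is the local (Dikin) norm of the move
    return x + step * d
```

Because the step has Dikin-norm at most beta < 1, the new iterate remains strictly positive, mirroring the ball constraint in (5).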
2.5 Cubic Regularized Newton with Affine Scaling (CRNAS)
Our new method seeks to combine the ideas of cubic regularized Newton and affine scaling to develop a procedure which solves (2). The crux of our approach is to utilize the affine scaling method to handle the constraints, leveraging the concept of homogeneous self-concordant barrier functions to deal with the general conic constraints, but replace the linear objective in (6) with the local cubic approximation of the objective function in (4). Thus, our method generates new iterates by solving the following subproblem
$$x_{k+1} = \operatorname*{argmin}_{x} \; \langle \nabla f(x_k), x - x_k \rangle + \frac{1}{2}\big\langle \nabla^2 f(x_k)(x - x_k), x - x_k \big\rangle + \frac{M}{6}\|x - x_k\|^3 \quad \text{s.t.} \quad Ax = b, \;\; \|x - x_k\|_{x_k} \leq r, \tag{7}$$
where $M$ is a positive number. Since our method brings together these two different procedures we coined it cubic regularized Newton with affine scaling (CRNAS). A precise description of CRNAS is provided below.
Cubic Regularized Newton with Affine Scaling (CRNAS)
Step 0: Provide an interior point $x_0$, i.e., $Ax_0 = b$ and $x_0 \in \operatorname{int} K$; choose the constants $M > 0$, $r \in (0, 1)$, and $\epsilon > 0$; set $k = 0$.
Step 1: Solve the subproblem (7) to obtain $x_{k+1}$ and proceed to Step 2.
Step 2: If $\|x_{k+1} - x_k\|_{x_k} \leq \epsilon$, let $\bar{x} = x_{k+1}$ and stop. Otherwise, go back to Step 1 with $k \leftarrow k + 1$.
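The outer loop of CRNAS can be sketched as follows, with the subproblem solver abstracted away (see Appendix A.2); the stopping rule shown is a simplification of Step 2, and all names are illustrative:

```python
import numpy as np

def crnas(x0, grad, hess, solve_subproblem, M, r=0.9, tol=1e-6, max_iter=500):
    """Outer loop of CRNAS (sketch). `solve_subproblem(x, g, H, M, r)` is assumed
    to return the step d minimizing the cubic model of f about x subject to
    A d = 0 and the Dikin-ball constraint ||d||_x <= r, as in (7)."""
    x = x0
    for k in range(max_iter):
        g, H = grad(x), hess(x)
        d = solve_subproblem(x, g, H, M, r)
        if np.linalg.norm(d) < tol:   # small step indicates approximate stationarity
            return x, k
        x = x + d                     # the Dikin ball keeps x inside int K
    return x, max_iter
```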
Though (7) appears to be more difficult to solve than (4), this subproblem forming the backbone of CRNAS is solvable. A theoretical analysis and practical approach to solving (7) is detailed in Appendix A.2. In the next section, we provide an overview of the convergence guarantees we have for CRNAS; the technical proofs of the results are left to Appendix A.1.
2.6 Overview of the Complexity Analysis of CRNAS
We first establish CRNAS is well-defined in the sense that all of the iterates produced by the algorithm remain inside the cone $K$. The following lemma guarantees this follows from the constraints in the subproblem.
Lemma 2.1.
(Theorem 2.1.1, [NN94]) Let $F$ be self-concordant on $\operatorname{int} K$ and let $\|y - x\|_x < 1$ for some $x \in \operatorname{int} K$; then $y \in \operatorname{int} K$.
For our convergence theory to hold, we assume the following scaled Lipschitz smoothness of the second-order derivative of the objective:
Assumption 1.
There exists a constant $M > 0$ such that for all $x, y \in \operatorname{int} K$,
$$\big\langle \big(\nabla^2 f(y) - \nabla^2 f(x)\big) h, h \big\rangle \leq M\, \|y - x\|_x\, \|h\|_x^2 \quad \text{for all } h.$$
We note this assumption is not original to us but has precedence in the literature, e.g. [HL23]. So, under these limited conditions, we present our main complexity theorem; the proof can be found in Appendix A.1.
Theorem 2.2.
Therefore, in view of the discussions on the optimality conditions in Subsection 2.2, Theorem 2.2 implies that in at most $\mathcal{O}(\epsilon^{-3/2})$ iterations CRNAS is guaranteed to find an $\epsilon$-SOSP. So, for the highly nonlinear and non-convex optimization problems coming from parameter estimation, CRNAS is able to avoid the many sub-optimal first-order stationary points, leading to a proposed solution of often superior quality compared to first-order methods.
It is worth remarking one can devise a first-order version of CRNAS which uses a quadratic approximation of the objective function in the subproblem rather than a cubic approximation. A similar convergence result can be derived for this procedure as well. As situations arise where second-order derivatives are prohibitively expensive to compute, this version of CRNAS would be prudent to implement; therefore, for the sake of completeness, we include a description of our first-order version of CRNAS with associated convergence theory in Appendix A.3.
3 Benchmarking Algorithms and Performance Metrics
In our case studies, we analyzed, evaluated, and compared the performance of CRNAS to state-of-the-art constrained nonlinear programming algorithms in MATLAB’s optimization solver fmincon [MAT2a]. Specifically, we compared CRNAS with the interior-point and sequential quadratic programming algorithms. Although fmincon implements other approaches, our experience indicated these two methods performed the best in our experiments, so we only consider these in our exposition.
Since CRNAS relies on precise gradient and Hessian information, we considered implementations of algorithms in fmincon both with and without specifying the gradient and Hessian of the objective function. We implemented both the interior-point and sequential quadratic programming algorithms in fmincon by supplying only the objective function and allowing MATLAB’s built-in techniques to estimate the gradient and Hessian; IP and SQP refer to these implementations. We also tested the interior-point solver with the exact gradient and Hessian; IP hess denotes this instantiation. Lastly, the sequential quadratic programming method with the exact gradient was employed in our testing; SQP grad denotes this procedure. Table 1 summarizes these different approaches.
Algorithm | Gradient/Hessian | Acronym |
---|---|---
Cubic regularized Newton with affine scaling | Yes | CRNAS |
Interior-point in fmincon | No | IP |
Interior-point in fmincon | Yes | IP hess |
Sequential quadratic programming in fmincon | No | SQP |
Sequential quadratic programming in fmincon | Yes | SQP grad |
Our studies sought to analyze the effectiveness and quality of the solutions provided by each algorithm. We evaluated three quantities in our assessment of each method:
1. Total Compute Time - the total wall time required for an algorithm to run from all of the different initial points and terminate. Termination occurs if a local solution is obtained up to some provided tolerance or some computational resource or numerical limit has been expended or reached, e.g., maximum iterations or minimum stepsize, respectively.
2. Best Computed Value - the lowest value of the objective function obtained by an algorithm across the different initializations; this value is the algorithm's best estimate of the global minimum after all instantiations.
3. Number of Iterations to Best Value - the number of iterations required to obtain the best computed value from the single initial point which generated it.
The first two metrics seek to provide a measure of which method shall be of greatest benefit to a practitioner. These metrics measure which algorithm is most effective at producing the best solution the quickest when a multi-start approach is applied to estimate a global minimum. The third metric on the other hand provides an estimate of total iterations needed for the method to compute its best solution from a single initialization. To ensure a fair comparison, in all of our tests each algorithm was provided the same initial guesses and had the same termination criteria. The termination criteria used in the numerical experiments are detailed in Appendix A.7. In the next two sections, we set up parameter estimation problems arising in mathematical biology and compare CRNAS against fmincon using these three measures.
4 Case Study I: Cancer Drug Response Estimation
In this section, we explore the use of CRNAS to solve a recently proposed parameter estimation problem [KLMT+23, WGM+24]. In these works, the authors employed a maximum likelihood estimation (MLE) approach to estimate tumor subpopulation dynamics and the corresponding drug response from high throughput drug screening data. The primary challenge in these studies arises from estimating the parameters of mixtures of dose-response Hill equations which are both nonlinear and non-convex. To alleviate some of the issues arising from nonlinearity, we perform a parameter transformation on the variables in the Hill equations. Due to the importance and prevalence of Hill equations, we review them before describing the cancer drug response estimation problem.
4.1 Hill Equations and our Parameter Transformation
The Hill equation is a fundamental model for describing dose-response relationships in pharmacological studies [GZKB+12]. For a given drug dose level $d$, the Hill equation specifies the fraction of viable cells as
$$v(d) = 1 - \frac{E\, d^{n}}{\mathrm{EC50}^{n} + d^{n}},$$
where the parameters $E$, $\mathrm{EC50}$, and $n$ represent the maximum drug effect, the half maximal effect dose (EC50), and the Hill coefficient, respectively. This model has been well utilized in pharmacology, biochemistry, and various other fields over the past hundred years [Hil10].
An important question that arises in the study of the Hill equation is the accurate identification of model parameters based on observed data [GC15]. A particularly challenging aspect of dealing with the parameters of the Hill equation is the nonlinear term in the denominator, $\mathrm{EC50}^{n}$. To mitigate the challenge of this term, we apply a variable transformation. In particular, we introduce a new parameter $h = \mathrm{EC50}^{n}$ to produce the modified Hill equation
$$v(d) = 1 - \frac{E\, d^{n}}{h + d^{n}}.$$
Given the transformation $h = \mathrm{EC50}^{n}$ is a one-to-one function of $\mathrm{EC50}$ when $\mathrm{EC50}$ is non-negative, we can determine the values of $\mathrm{EC50}$ and $n$ from the estimated values of $h$ and $n$. This parameter transformation is applied during the numerical experiments, while the original EC50 parameter is used throughout the manuscript for clarity.
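A short sketch of the transformation, assuming the Hill form written above (our reconstruction); the equivalence of the two parameterizations and the recovery $\mathrm{EC50} = h^{1/n}$ can be checked numerically:

```python
import numpy as np

def hill_viability(dose, E, ec50, n):
    """Fraction of viable cells under the three-parameter Hill model
    (form assumed here: 1 - E * d^n / (EC50^n + d^n))."""
    return 1.0 - E * dose**n / (ec50**n + dose**n)

def hill_viability_transformed(dose, E, h, n):
    """Same model after the substitution h := EC50^n, which removes the
    nested nonlinearity EC50^n from the denominator."""
    return 1.0 - E * dose**n / (h + dose**n)

# recovering the original parameter after estimation: EC50 = h**(1/n)
dose = np.logspace(-2, 2, 5)
E, ec50, n = 0.8, 1.5, 2.0
h = ec50**n
assert np.allclose(hill_viability(dose, E, ec50, n),
                   hill_viability_transformed(dose, E, h, n))
```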
4.2 Deterministic Drug-Affected Cell Proliferation Framework
4.2.1 Problem Description
In [KLMT+23], the authors proposed a novel statistical framework, PhenoPop, to analyze high-throughput drug screening data. PhenoPop takes into account the tumor drug response when multiple subpopulations, each with varying drug responses, exist within the tumor. A single Hill equation cannot capture this heterogeneous drug response; therefore, the PhenoPop model represents the dynamics of tumor growth as a composite of numerous subpopulations, with each subpopulation distinguished by a unique set of Hill parameters.
PhenoPop employs a classical exponential growth model to describe the growth of each subpopulation. Each subpopulation has a distinct growth rate, denoted as $\lambda_i$. The size of subpopulation $i$ at time $t$ is represented as
$$N_i(t) = N_i(0)\, e^{\lambda_i t},$$
where $N_i(0)$ is the initial population size of subpopulation $i$. The unique set of Hill parameters for subpopulation $i$ is denoted as $(E_i, \mathrm{EC50}_i, n_i)$. The relationship between the cell growth rate and the drug dose $d$ is expressed as
$$\lambda_i(d) = \lambda_i - \frac{E_i\, d^{n_i}}{\mathrm{EC50}_i^{n_i} + d^{n_i}},$$
where the constraint $E_i \geq 0$ is enforced to ensure that the drug decreases the growth rate. For $i = 1, \dots, S$, the cell count of subpopulation $i$ at time $t$ and dose level $d$ is modeled as
$$N_i(t, d) = p_i\, N_0\, e^{\lambda_i(d)\, t}, \tag{8}$$
where $p_i$ satisfies $\sum_{i=1}^{S} p_i = 1$ and $p_i \geq 0$ and represents the initial proportion of subpopulation $i$, and $N_0$ is the initial total cell count. While $N_0$ is assumed to be a known quantity, PhenoPop estimates the following set of parameters from the data
$$\theta = \{p_i, \lambda_i, E_i, \mathrm{EC50}_i, n_i\}_{i=1}^{S}.$$
To estimate these parameters, cell counts are collected at a set of time points $\{t_j\}_{j=1}^{T}$ and drug dose levels $\{d_k\}_{k=1}^{D}$; $R$ replications are performed for each possible time-dose combination. We denote the dataset as
$$\mathcal{D} = \big\{ N^{(r)}(t_j, d_k) : j = 1, \dots, T, \; k = 1, \dots, D, \; r = 1, \dots, R \big\}.$$
PhenoPop then estimates the parameter set through the MLE process. In [KLMT+23], the authors assumed Gaussian noise with mean zero, featuring two possible variances based on the dose level and time of observation. For simplicity, we assume here the noise follows a mean-zero Gaussian distribution with a constant variance $\sigma^2$. The corresponding negative log-likelihood function for the observations under the parameter set $\theta$ is given by
$$\mathcal{L}(\theta \mid \mathcal{D}) = \sum_{j=1}^{T}\sum_{k=1}^{D}\sum_{r=1}^{R} \left[ \frac{\big(N^{(r)}(t_j, d_k) - \bar{N}(t_j, d_k; \theta)\big)^2}{2\sigma^2} + \frac{1}{2}\log\big(2\pi \sigma^2\big) \right],$$
where $\bar{N}(t, d; \theta) = \sum_{i=1}^{S} N_i(t, d)$ is the model-predicted total cell count. To further simplify this formulation, we assume the observation noise is known. So, the MLE problem is reduced to the following constrained least squares problem
$$\hat{\theta} = \operatorname*{argmin}_{\theta \in \Theta}\; \sum_{j=1}^{T}\sum_{k=1}^{D}\sum_{r=1}^{R} \big(N^{(r)}(t_j, d_k) - \bar{N}(t_j, d_k; \theta)\big)^2, \tag{9}$$
where $\hat{\theta}$ is an estimated parameter set and $\Theta$ is the feasible space of the parameters. In practice, the feasible space for the parameters is selected based on several biological assumptions.
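A compact sketch of the resulting objective under the notation above; the subtraction of the Hill term from the base growth rate follows our illustrative reading of the model rather than a verbatim transcription of PhenoPop:

```python
import numpy as np

def drug_modified_rate(dose, lam, E, ec50, n):
    # assumption for illustration: the Hill effect is subtracted from the base rate
    return lam - E * dose**n / (ec50**n + dose**n)

def phenopop_total_count(t, dose, p, lam, E, ec50, n, N0):
    """Total cell count from S exponentially growing subpopulations, cf. (8);
    p, lam, E, ec50, n are length-S arrays of subpopulation parameters."""
    rates = drug_modified_rate(dose, lam, E, ec50, n)
    return N0 * np.sum(p * np.exp(rates * t))

def least_squares_objective(model_count, data):
    """Constrained least squares (9): sum of squared residuals over all
    (time, dose, count) observations; `model_count(t, d)` is the model."""
    return sum((N - model_count(t, d))**2 for (t, d, N) in data)
```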
4.2.2 Numerical Results
To evaluate the performance of each algorithm for solving (9), we conducted a series of in silico experiments.
In each experiment, we generated 100 different datasets, each corresponding to a randomly selected true parameter set.
For each dataset, we solved (9) starting from 20 different initial points.
Among those 20 results, we recorded the result with the best objective value as the solution obtained from each algorithm.
Details regarding the problem initialization and data simulation for these tests are located in Appendix A.4.
It is important to note that in PhenoPop the primary source of randomness is the observation noise, which is less significant in this context; therefore, for the simulated data, we assume there is no observation noise, aiming to better illustrate each algorithm's performance.
Thus, the optimal objective value should be close to zero;
we consider values exceeding one indicative of poor estimation.
Our discussion of the experiments is divided into three sections based on the number of assumed subpopulations, $S$, in the model.
Experiment with $S = 1$
We begin with a preliminary experiment focusing on tumor dynamics involving a single dose-response curve. In this case, the parameter set consists of the growth rate $\lambda_1$ and the Hill parameters $(E_1, \mathrm{EC50}_1, n_1)$, without specification of the initial proportion parameter. Figure 1 displays the three performance metrics discussed in Section 3 for each of the five algorithms listed in Table 1. The box-plots display the results from 100 unique and independent datasets generated for the PhenoPop model.
Evaluating each algorithm with our performance metrics, we see CRNAS demonstrated superior performance compared to the other algorithms in terms of the number of iterations required to achieve its best solution from one initial guess.
Comparing total compute times, it is evident CRNAS required less time compared to IP and IP hess, while SQP and SQP grad required a similar amount of time to run for 20 different initial guesses;
however, though SQP and SQP grad had comparable total compute times, they yielded less reliable solutions.
CRNAS consistently delivered accurate estimates across all but one of the 100 instances of the model, as evidenced by its optimal obtained objective values consistently falling significantly below one in all but one test.
Both SQP and SQP grad on the other hand produced multiple poor estimations.
Notably, without exact higher-order information, both SQP and IP failed to reduce the objective value to levels indicative of accurate estimation.
This demonstrates the handicap of relying on inexact higher-order information.
With exact higher-order information, both SQP grad and IP hess yielded optimal solutions lower than their inexact higher-order counterparts.
One might argue that since SQP grad has a comparable estimation quality to CRNAS, statistically speaking, as evidenced by Figure 1, there is limited advantage to using CRNAS; however, this would be fallacious given that in about ten percent of the tests SQP grad produced poor estimates.
IP hess produced accurate estimates, but required more time to obtain these solutions compared to CRNAS.
Overall, we see CRNAS performs on par with, if not better than, MATLAB's solvers based on our metrics.
Experiment with $S = 2$
We now consider the case where two subpopulations exist in the tumor, each with a distinct drug response. The initial proportion parameter for each subpopulation is therefore reintroduced along with the corresponding equality constraint, $p_1 + p_2 = 1$.
Figure 2 provides the results for these experiments.
With two subpopulations present, CRNAS maintained its advantage in terms of the number of iterations needed to obtain the optimal solution compared to the other algorithms. Adding an additional subpopulation increased the overall complexity of the model, and this added complexity greatly limited SQP and SQP grad's ability to obtain accurate solutions. In more than a quarter of the tests SQP's best computed objective values indicated poor estimation, while more than half of the optimal solutions obtained by SQP grad were worse than the median of the optimal objective values obtained by CRNAS. In contrast, CRNAS provided accurate estimations across all experiments for this more challenging problem; the performance of IP and IP hess was similar to the prior experiment.
In seeking to understand the factors which limited the performance of SQP, we found imposing an upper bound constraint on some of the parameters improved the performance of SQP. This technique requires researchers to heuristically determine an upper bound for all parameters. If the true parameter set is located within the narrowed zone, this significantly restricts the search region for the optimization algorithm and aids its location of better estimates. The risk of employing this practice depends on the accuracy of the practitioner's prior knowledge, since placing too small of an upper bound could make finding the true global optimum impossible, as it will be outside the artificially constructed feasible region. For the in silico experiments, we have full knowledge about the true parameter set, leading to an accurate selection of the optimization feasible region. Specifically, we set the feasible region for several of the parameters to a tightened interval containing their true values. The constraints on the other parameters remained the same.
Using the new upper bound constraints, another set of experiments was conducted;
the results are provided in
Figure 3.
With these constraints, SQP grad performed in a similar fashion to the experiments with one subpopulation;
however, it required more total compute time compared to the outcomes depicted in Figure 2.
Consequently, CRNAS significantly outperformed the other methods in both total compute time and number of iterations.
Additionally, CRNAS can provide robust estimates even without specifying upper bounds for all parameters, as shown in Figure 2.
So, a practitioner does not need to estimate potentially unknown upper bounds on parameters for their models in order for CRNAS to obtain strong parameter estimates.
Experiments with $S = 3$ and $S = 5$
Lastly, we examined two more cases with $S = 3$ and $S = 5$ subpopulations. The majority of the conditions for generating the data remain unchanged from the experiments with $S = 2$; however, to accommodate these larger values of $S$, we employed a new scheme to select the EC50 values for each subpopulation and the dose levels to ensure the identifiability of each parameter set; for details see Appendix A.5.
The results for $S = 3$ and $S = 5$ are shown in Figures 4 and 5 respectively. For these larger numbers of subpopulations, it is evident that IP hess, which in the prior experiments produced very accurate solutions, located almost no accurate estimates, with nearly all its computed optimal objective values being larger than one. Some of the poor estimates produced by IP hess and SQP grad might be due to the algorithms reaching the maximum iteration limit of 500 (see Appendix A.7 for details on termination criteria), but the algorithms produced poor estimates even when the iteration limit was not reached, as supported by the fact that in more than half of the experiments with three subpopulations SQP grad and IP hess terminated before the maximum number of iterations was reached. CRNAS, however, consistently provided accurate estimates for both three and five subpopulations. We propose that this advantage stems from the inherent capability of CRNAS to automatically avoid saddle points and converge to second-order stationary points, bypassing the suboptimal first-order stationary points to which the other methods are potentially converging.
4.3 Stochastic Drug-Affected Cell Proliferation Framework
4.3.1 Problem Description
To more accurately represent the changing variability in the data, [WGM+24] extended the PhenoPop framework using linear birth-death processes to model the heterogeneous tumor cell populations. Instead of using a deterministic exponential growth model to represent the cell proliferation dynamics, they utilized the stochastic linear birth-death process. As a result, the relationship between the variance and the number of cells in the linear birth-death process can easily account for changing noise levels in the observations. Specifically, the authors assumed a cell in subpopulation $i$ (a type-$i$ cell) divides into two cells at rate $b_i$ and dies at rate $d_i$ stochastically. This means that during a short time interval $\Delta t$, a type-$i$ cell divides with probability $b_i \Delta t + o(\Delta t)$ and dies with probability $d_i \Delta t + o(\Delta t)$. The drug effect on type-$i$ cells is then modeled as a cytotoxic effect:
$$d_i(c) = d_i + \frac{E_i\, c^{n_i}}{\mathrm{EC50}_i^{n_i} + c^{n_i}},$$
where $c$ denotes the drug dose.
Note that this framework can also easily account for a cytostatic effect. The authors denoted by $N_i(t, c)$ the stochastic process giving the number of cells in subpopulation $i$ at time $t$ under drug dose $c$, with mean
$$\mathbb{E}[N_i(t, c)] = n_i(0)\, e^{\lambda_i(c)\, t}$$
and variance
$$\mathrm{Var}[N_i(t, c)] = n_i(0)\, \frac{b_i + d_i(c)}{\lambda_i(c)}\, e^{\lambda_i(c)\, t}\big(e^{\lambda_i(c)\, t} - 1\big),$$
where $\lambda_i(c) = b_i - d_i(c)$ is the net growth rate and $n_i(0)$ is the initial size of subpopulation $i$. Since each subpopulation was assumed to grow independently, the total cell count is normally distributed with mean $\sum_{i=1}^{S} \mathbb{E}[N_i(t, c)]$ and variance $\sum_{i=1}^{S} \mathrm{Var}[N_i(t, c)]$ when the initial population is large. Consequently, the authors suggested the statistical framework
$$y^{(r)}(t, c) = \sum_{i=1}^{S} N_i^{(r)}(t, c) + \varepsilon^{(r)}(t, c), \qquad r = 1, \dots, R,$$
where $N_i^{(r)}$ are independent and identically distributed (i.i.d.) copies of $N_i$ for $i = 1, \dots, S$, and $\varepsilon^{(r)}$ are i.i.d. normally distributed observation noise with mean $0$ and variance $\sigma^2$. Thus, the model parameter set is now
$$\theta = \{p_i, b_i, d_i, E_i, \mathrm{EC50}_i, n_i\}_{i=1}^{S}.$$
This novel statistical framework aims to utilize the information in the variability of the data to predict the parameters $b_i$, $d_i$, and the Hill parameters related to each subpopulation. As a result, there is a more complicated negative log-likelihood for the MLE process:
$$\mathcal{L}(\theta \mid \mathcal{D}) = \sum_{j,k,r} \left[ \frac{\big(y^{(r)}(t_j, c_k) - \mu(t_j, c_k; \theta)\big)^2}{2\, v(t_j, c_k; \theta)} + \frac{1}{2}\log\big(2\pi\, v(t_j, c_k; \theta)\big) \right], \tag{10}$$
where $\mu(t, c; \theta)$ and $v(t, c; \theta) = \sum_{i=1}^{S} \mathrm{Var}[N_i(t, c)] + \sigma^2$ denote the model mean and total variance of an observation.
The challenge in minimizing the negative log-likelihood function in (10) is its variance term. The variance now depends on the cell growth dynamics of each subpopulation which involves the Hill equation.
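For concreteness, the standard birth-death moments and the resulting Gaussian negative log-likelihood can be sketched as follows (our notation; the variance formula is the classical linear birth-death result):

```python
import numpy as np

def bd_mean_var(n0, b, d, t):
    """Mean and variance of a linear birth-death process started from n0 cells,
    with birth rate b and death rate d; lam = b - d is the net growth rate."""
    lam = b - d
    mean = n0 * np.exp(lam * t)
    var = n0 * (b + d) / lam * np.exp(lam * t) * (np.exp(lam * t) - 1.0)
    return mean, var

def gaussian_nll(y, mean, var, sigma2_obs):
    """Negative log-likelihood of observations under the normal approximation,
    with the observation-noise variance sigma2_obs added to the model variance,
    mirroring the structure of (10)."""
    v = var + sigma2_obs
    return 0.5 * np.sum(np.log(2 * np.pi * v) + (y - mean)**2 / v)
```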
4.3.2 Numerical Results
We next investigate CRNAS’s performance on the MLE problem described by (10). Details for the MLE problem initialization and data simulation are provided in Appendix A.5. In the stochastic model, verifying whether the solution reaches the global optimum is challenging, particularly when the parameter space is infinite. To address this, we use the likelihood of the true parameter set as a targeted likelihood value. Specifically, we consider a solution to be good if its likelihood exceeds that of the true parameter set. In other words, a good solution should have a lower objective value (negative log-likelihood) than that of the true parameter set. Since the likelihood of the true parameter varies across different experiments, in order to create a unified metric across experiments we employed the relative likelihood for an estimator $\hat{\theta}$ defined as
$$R(\hat{\theta}) = \frac{\mathcal{L}(\hat{\theta} \mid \mathcal{D}_{\theta^*})}{\mathcal{L}(\theta^* \mid \mathcal{D}_{\theta^*})}, \tag{11}$$
where the likelihood function $\mathcal{L}$ is defined as in (10) and $\mathcal{D}_{\theta^*}$ is the dataset generated by the true parameter set $\theta^*$; typically this ratio is below one for quality estimators.
As discussed in Section 4.3.1, the advantage and novelty of the linear birth-death process framework is its variance structure can naturally explain the dynamic variance observed in real-world data. With mild assumptions on the cell growth dynamics, the linear birth-death process framework is able to utilize the information in the variance of the data to obtain more robust estimates of the model's parameters. The trade-off however is that employing this variance structure further complicates the likelihood function, as the variance term also contains a Hill function. For simplicity, we consider the setting with $S = 2$ subpopulations. In Section 4.2.2, we concluded that inexact gradient and Hessian computations may mislead IP and SQP to terminate at poor solutions. We thus focus on comparing IP hess, SQP grad, and CRNAS in these experiments; Figure 6 displays the results of the experiments.
From the results, we note CRNAS still has the best convergence rate in terms of number of iterations; however, the total compute time of CRNAS was comparable to IP hess and longer than SQP grad. This is primarily because, for this problem, the Hessian matrix is significantly more expensive to evaluate than the gradient. Though SQP grad solved relatively quickly, many of the solutions were of low quality. Specifically, 14 of the 100 solutions obtained by SQP grad had a relative likelihood greater than 1, while all of the estimations obtained from IP hess and CRNAS had a relative likelihood less than 1. To illustrate this, we present an example estimation in Table 2. In this example, it is evident SQP grad fails to estimate the parameters for the second subpopulation accurately, as its estimates of the final two parameters of the second subpopulation are completely incorrect. This example reiterates the necessity of including the exact computation of higher-order information for better-quality solutions. Additionally, it supports our claim that when the relative likelihood is below one, the estimation quality is high.
Method | Subpopulation 1 parameters | Subpopulation 2 parameters | Rel. likelihood
---|---|---|---
True $\theta^*$ | 0.36, 0.41, 0.31, 0.8, 0.08, 2.05 | 0.64, 0.72, 0.67, 0.95, 0.81, 4.39 | 1
IP hess | 0.35, 0.42, 0.33, 0.79, 0.09, 1.96 | 0.65, 0.71, 0.67, 0.95, 0.83, 4.42 | 0.9997
SQP grad | 0.38, 0.43, 0.34, 0.8, 0.09, 1.82 | 0.62, 0.73, 0.68, 0.95, 1.19, 36.51 | 1.0009
CRNAS | 0.35, 0.42, 0.33, 0.79, 0.09, 1.96 | 0.65, 0.71, 0.67, 0.95, 0.83, 4.42 | 0.9997
5 Case Study II: Heterogeneous Logistic Estimation
5.1 Problem Description
The logistic growth model is popular for studying population growth with a carrying capacity. Recently, there has been significant interest in an extended version of the logistic growth model, which aims to represent cell populations consisting of heterogeneous subpopulations [JMS18]. These subpopulations can exhibit varying growth dynamics, such as differing growth rates and carrying capacities. For simplicity here we study a linear combination of logistic growth functions with varying parameter sets for each subpopulation to model a mixture of heterogeneous populations growing according to a logistic function. Note, we do not assume the populations are interacting via their carrying capacity as that would necessitate the numerical solution of a nonlinear system of differential equations which is outside the focus of this manuscript.
In particular, we start with the parameters of $S$ subpopulations, $\theta = \{(p_i, a_i, b_i)\}_{i=1}^{S}$, and we then model the total cell count dynamics with respect to time as
$$N(t) = N_0 \sum_{i=1}^{S} \frac{p_i}{1 + e^{-a_i (t - b_i)}}, \tag{12}$$
where $N_0$ is the initial total cell count and $p_i$ is the initial proportion of each subpopulation, satisfying an equality constraint $\sum_{i=1}^{S} p_i = 1$. Compared to the widely used three-parameter logistic growth model, we normalize the maximum carrying capacity for each subpopulation to 1, ensuring the identifiability of each parameter. We retain another two parameters, $a_i$ and $b_i$, to capture the growth behavior of the type-$i$ population at the inflection point. The parameter $a_i$ encodes the maximum growth rate that the type-$i$ population can achieve around the inflection point, while the parameter $b_i$ relates to the shift in time of the inflection point. From a modeling perspective, there are no restrictions on $a_i$ and $b_i$. However, we assume $a_i > 0$ for all $i$ for a growing population and select a distinct $b_i$ for each subpopulation to ensure these parameters are identifiable. The detailed selection method is specified in Appendix A.6. For this experiment, we assume zero observation noise. Thus, for a dataset $\{y_j\}_{j=1}^{T}$ observed at a given set of time points $\{t_j\}_{j=1}^{T}$, we solve the least squares problem
$$\hat{\theta} = \operatorname*{argmin}_{\theta \in \Theta}\; \sum_{j=1}^{T} \big(y_j - N(t_j; \theta)\big)^2 \tag{13}$$
to estimate the true parameter set $\theta^*$.
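A minimal sketch of the mixture model and objective in the notation above (the packing of the parameter vector is our own convention):

```python
import numpy as np

def heterogeneous_logistic(t, p, a, b, N0):
    """Mixture of normalized logistic curves, cf. (12); each subpopulation's
    carrying capacity is normalized to 1, a_i sets the maximal growth rate at
    the inflection point, and b_i shifts the inflection time. `t` is an array."""
    return N0 * np.sum(p[:, None] / (1.0 + np.exp(-a[:, None] * (t[None, :] - b[:, None]))),
                       axis=0)

def lsq_objective(params, t_data, y_data, N0, S):
    """Least-squares objective (13) for S subpopulations; `params` packs
    (p_1..p_S, a_1..a_S, b_1..b_S) into one vector."""
    p, a, b = params[:S], params[S:2*S], params[2*S:]
    return np.sum((y_data - heterogeneous_logistic(t_data, p, a, b, N0))**2)
```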
5.2 Numerical Results
We compared the performance of all the algorithms in Table 1 for solving the least squares problem described in (13). Similar to the previous case study, we compared the methods based on the described metrics in Section 3. Experimental details are provided in Appendix A.6.
From Figure 7, we see again CRNAS boasted the smallest number of iterations required to obtain the optimal solution. Furthermore, the middle panel in Figure 7 shows CRNAS yielded the lowest best objective values among all five methods. Based on the advantages observed in both the number of iterations and the objective value, we see CRNAS outperformed four different implementations of state-of-the-art algorithms in solving this highly nonlinear and non-convex parameter estimation problem.
6 Conclusion
This paper introduced a novel optimization algorithm called cubic regularized Newton with affine scaling (CRNAS) for solving nonlinear and non-convex constrained optimization problems. CRNAS combines the best qualities of Nesterov's cubic regularized Newton's method and the classical affine scaling method for linear programming to produce a new approach with convergence guarantees to second-order stationary points. The central motivator for the design of our algorithm was the parameter estimation problems which arise in mathematical biology. These problems produce exceptionally challenging constrained optimization models, and our experiments demonstrated that CRNAS competes with state-of-the-art algorithms on these problems.
We conducted a comparative analysis of CRNAS with two state-of-the-art local optimization methods implemented in MATLAB’s commercial optimization solver fmincon. This analysis was performed across two case studies, and our findings consistently demonstrated CRNAS outperformed fmincon by delivering generally more accurate solutions in fewer iterations and with comparable, if not better, total compute times; however, it is important to note that the computational effort required to compute second-order information may diminish the advantage of CRNAS. Since the Hessian matrix must be computed at each iteration of CRNAS, we recommend using CRNAS when accurate second-order information can be obtained without expending an excessive amount of computational resources compared to computing first-order information.
Acknowledgements
The work of K. Leder, S. Zhang, C. Wu and N. Wang was partially supported by NSF Award CMMI 2228034; C. Garner is supported by the NSF-GRFP under Grant No. 2237827.
Appendix A Appendix
The supplemental materials included here are structured as follows: Appendix A.1 provides the proof of Theorem 2.2, which provides the convergence rate of CRNAS; Appendix A.2 describes how to solve the crucial subproblem in CRNAS; Appendix A.3 describes a first-order version of CRNAS called first-order affine scaling and provides a theoretical analysis of its performance; Appendices A.4, A.5, and A.6 provide additional details about our numerical experiments; lastly, Appendix A.7 states the termination criteria utilized by all of the algorithms in our experiments.
A.1 Complexity Analysis of CRNAS
To begin, we quote Lemma 3 in [HL23] which shall prove crucial in our analysis.
Lemma A.1.
Observe that the affine scaling algorithm minimizes this majorization. In the following two lemmas we show the decrease in function value can be lower bounded by the norm of the gradient projected onto the linear space $\{h : Ah = 0\}$.
Lemma A.2.
If , then the following inequality holds:
Proof.
Since $x_{k+1}$ is the optimal solution of the subproblem, we have
where the last step is according to Lemma A.1. ∎
Because of the linear constraint, we expect $\langle \nabla f(x_{k+1}), h \rangle$ to converge to zero after affine scaling, where $h$ is any given feasible direction.
Lemma A.3.
Assume the ball constraint in the subproblem is inactive. Then, for any $h$ such that $Ah = 0$, we have
Proof.
Because the ball constraint is inactive, any $h$ satisfying $Ah = 0$ must be a feasible direction. According to the optimality condition, we have
which implies that
According to Lemma A.1, we have . So,
Combining the above two, we get the inequality. ∎
We give the lemma below to show the second-order optimality of the solution obtained from the algorithm.
Lemma A.4.
Assume the ball constraint in the subproblem is inactive. Then, for any $h$ such that $Ah = 0$, we have
Proof.
Because the ball constraint is inactive, any $h$ satisfying $Ah = 0$ must be a feasible direction. According to the second-order optimality condition, we have
So, we observe that
According to Assumption 1, we have
Combining the above two inequalities, we have
∎
According to Lemma A.2, we have
which implies a decrease guarantee in each step, so that the algorithm will stop in finitely many steps. Telescoping the above inequalities, we have
Therefore, we can give an upper bound for the number of iterations. According to Lemmas A.3 and A.4, we have the following bounds.
For any $h$ such that $Ah = 0$, we have
and
Given that the right-hand sides of the above two inequalities involve the local norm at $x_{k+1}$, we cite the following lemma to change them into the local norm at $x_k$.
Lemma A.5.
(Theorem 2.1.1 in [NN94]) Let $F$ be self-concordant on $\operatorname{int} K$ and let $\rho = \|y - x\|_x < 1$; then for any $u$ it holds that $(1 - \rho)\|u\|_x \leq \|u\|_y \leq (1 - \rho)^{-1}\|u\|_x$.
Since , we have . The above two inequalities imply
and
We conclude the following theorem.
A.2 Solving the Subproblem
In each step, we solve the following subproblem.
Let $Z$ be a matrix whose columns form an orthonormal basis of the linear space $\{d : Ad = 0\}$. Then, the linear constraint can be replaced by the substitution $d = Zu$. The resulting scaling matrix is invertible because $Z$ is of full column rank and the barrier Hessian is of full rank. Then, after this change of variables, the subproblem can be written as
an unconstrained cubic model over the ball, where $g$ is a vector and $H$ is a matrix of corresponding sizes.
Note that in the subproblem of standard cubic regularized Newton we solve the same model without the ball constraint. If the unconstrained solution satisfies the ball constraint, then we have obtained the solution. Otherwise, we introduce the following two lemmas to solve the subproblem.
Lemma A.7.
Let and . If , then we must have .
Proof.
We cite part of the proof of Theorem 3.1 in [CGT11]. Suppose , we will show is a global minimum and no global solution will satisfy . Since is an interior point of the set , according to the first and second-order necessary optimality conditions
and
(A-2) |
for all vectors , where . If , the second-order optimality condition is . We will show for , still holds.
For , (A-2) shows that . Consider the vector , the line intersects the ball of radius at two points, , where . Without loss of generality, let . Since , we have
Therefore, always holds. Consider an eigendecomposition , where . Then, according to (A-2), we have
where and , .
On the other hand, according to the first-order optimality condition for the global solution , we have
which is equivalent to
where . Therefore, we have , . Thus, we can get the following equation
which implies . Therefore, we have . However, since is orthogonal,
which causes a contradiction. ∎
Lemma A.8.
Let and . If , then
Proof.
Suppose . Let . Then we have , which is
According to Lemma A.7, we have . Therefore,
which is a contradiction. ∎
Therefore, when solving the subproblem, we first solve the unconstrained cubic problem by the method introduced in Section 6.1 of [CGT11]. If the resulting step satisfies the ball constraint, we have found the solution. If not, we know the optimal solution is on the boundary of the ball; there the cubic term is constant, so it can be dropped. Only the following trust-region subproblem needs to be solved,
which is discussed in Sections 7.3 and 1.3 of [CGT00].
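The two-stage strategy can be sketched generically; here the specialized solvers of [CGT11] and [CGT00] are replaced by off-the-shelf numerical minimization purely for illustration:

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import minimize

def solve_crnas_subproblem(g, H, A, M, radius):
    """Sketch of the two-stage strategy above: eliminate A d = 0 with a
    null-space basis, try the ball-free cubic step, and fall back to a
    trust-region problem on the ball's boundary (generic solvers, for
    illustration only; not the specialized routines of [CGT11]/[CGT00])."""
    Z = null_space(A)                          # columns span {d : A d = 0}
    gz, Hz = Z.T @ g, Z.T @ H @ Z
    cubic = lambda u: gz @ u + 0.5 * u @ Hz @ u + (M / 6.0) * np.linalg.norm(u)**3
    res = minimize(cubic, np.zeros(Z.shape[1]), method="BFGS")
    if np.linalg.norm(res.x) <= radius:        # unconstrained step is feasible
        return Z @ res.x
    # otherwise solve the trust-region problem on the boundary ||u|| = radius,
    # where the cubic term is constant and can be dropped
    quad = lambda u: gz @ u + 0.5 * u @ Hz @ u
    cons = {"type": "eq", "fun": lambda u: np.linalg.norm(u) - radius}
    u0 = radius * np.ones(Z.shape[1]) / np.sqrt(Z.shape[1])
    res = minimize(quad, u0, method="SLSQP", constraints=[cons])
    return Z @ res.x
```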
A.3 A First-Order Version of CRNAS
Here we discuss a first-order version of CRNAS to solve (2). The construction of our first-order method is essentially the same as that for CRNAS with the exception that a quadratic approximation of the function is utilized rather than a cubic approximation. Hence, the subproblem solved at each iteration of our first-order affine scaling method (FOAS) is
$$x_{k+1} = \operatorname*{argmin}_{x} \; \langle \nabla f(x_k), x - x_k \rangle + \frac{L}{2}\|x - x_k\|^2 \quad \text{s.t.} \quad Ax = b, \;\; \|x - x_k\|_{x_k} \leq r,$$
where $L$ is a positive number. The exact description of the algorithm is provided below; it is identical to CRNAS modulo the altered subproblem.
where is a positive number. The exact description of the algorithm is provided below; you will recognize it as identical to CRNAS modulo the altered subproblem.
First-Order Affine Scaling (FOAS)
Step 0: Provide an interior point $x_0$, i.e., $Ax_0 = b$ and $x_0 \in \operatorname{int} K$; choose the constants $L > 0$, $r \in (0, 1)$, and $\epsilon > 0$; set $k = 0$.
Step 1: Solve the subproblem above to obtain $x_{k+1}$.
Step 2: If $\|x_{k+1} - x_k\|_{x_k} \leq \epsilon$, let $\bar{x} = x_{k+1}$ and stop. Otherwise, go back to Step 1 with $k \leftarrow k + 1$.
As CRNAS was well-defined, so is FOAS due to Lemma 2.1, which guarantees all of the generated iterates lie inside the interior of the cone. So, we proceed to prove the convergence rate of FOAS. To begin, we assume the Lipschitz smoothness of the gradient.
Assumption 2.
There exists a constant $L > 0$ such that for all $x, y \in \operatorname{int} K$,
$$\|\nabla f(y) - \nabla f(x)\|_x^* \leq L\, \|y - x\|_x.$$
Using this assumption it follows there exists a local quadratic upper bound for $f$ at all points inside the interior of the cone.
Lemma A.9.
Under Assumption 2, the following inequality holds for all $x, y \in \operatorname{int} K$:
$$f(y) \leq f(x) + \langle \nabla f(x), y - x \rangle + \frac{L}{2}\|y - x\|_x^2.$$
Proof.
According to the fundamental theorem of calculus and norm relations we have
where the last inequality follows from Assumption 2. ∎
Lemma A.9 demonstrates that, provided $L$ is large enough, FOAS at each iteration minimizes a local quadratic upper bound of our objective function. In the following two lemmas we show the decrease in function value can be lower bounded by the norm of the gradient projected onto the linear space $\{h : Ah = 0\}$.
Lemma A.10.
If , then for all the iterates generated by FOAS we have that
Proof.
Since $x_{k+1}$ is the optimal solution of the FOAS subproblem, we have
where the last step follows from Lemma A.9. ∎
Because of the linear constraint, we expect $\langle \nabla f(x_{k+1}), h \rangle$ to converge to zero after affine scaling, where $h$ is any given feasible direction.
Lemma A.11.
Assume the ball constraint in the subproblem is inactive. Then, for any $h$ such that $Ah = 0$, we have
Proof.
Because the ball constraint is inactive, any $h$ satisfying $Ah = 0$ must be a feasible direction. According to the optimality condition, we have
which implies
∎
Given the fact , there exists a , such that .
Therefore, as long as and , according to Lemma A.11, we have the following bounds.
For any $h$ such that $Ah = 0$, we have
We arrive at the following theorem.
A.4 Deterministic Drug-Affected Cell Proliferation Experiment Details
To set up our optimization problems, we collected in silico data according to a true parameter set $\theta^*$ and the deterministic framework at a collection of time points and drug dose levels. Without further specification, we employ the following time points and drug dose levels:
(A-3)
Since we assume no noise in these experiments, we only collect data for one replicate. We selected the true parameter set randomly from a biologically feasible range, which is specified for each experiment in the subsequent sections. With the true parameter set, we obtained the cell count data of each subtype and the total cell count data according to (8).
Based on the data, we solved (9) to obtain the point estimation within the optimization feasible range.
The optimization feasible range is larger than the biologically feasible region, indicating incomplete prior knowledge about the ‘true nature’ of the parameters.
Experiment with $S = 1$
The biologically feasible range and optimization feasible range for this experiment are described in Table 3. Note that the true EC50 parameter should be located within the drug dose levels to ensure parameter identifiability.
Experiment with $S = 2$
When $S = 2$, we denote the subpopulation with the smaller EC50 value as 'sensitive' and the subpopulation with the higher EC50 value as 'resistant'.
The sensitive subpopulation EC50 parameter is selected to be distinct from that of the resistant subpopulation for model identifiability.
Other than the EC50 values, we do not distinguish the parameter space between the sensitive and resistant subpopulations.
We also note the initial proportions $p_1$ and $p_2$ are selected to satisfy $p_1 + p_2 = 1$.
Table 4 outlines the biologically feasible range and the feasible region used in the optimization model for this experiment.
Experiments with $S = 3$ and $S = 5$
In this experiment, we dynamically selected the drug dose levels. Specifically, we selected the EC50 biologically feasible range and dose levels based on the number of subpopulations $S$. We kept the overall range of the dose levels the same and selected dose levels within this range according to a logarithmic scale. For example, with $S = 3$ the drug doses could be selected on a logarithmic grid over this range.
To preserve parameter identifiability, we designed the biologically feasible ranges for the EC50 parameters based on the generated dose levels. For instance, Table 5 describes the biologically feasible ranges for this experiment, where the EC50 parameters of distinct subpopulations are drawn from distinct ranges and the ranges of the other parameters are the same across the subpopulations.
A.5 Stochastic Drug-Affected Cell Proliferation Experiment Details
Similar to the deterministic framework experiment, we randomly generated true parameter sets from biologically feasible ranges and the corresponding simulated dataset. Different from the PhenoPop experiment, we assumed the stochastic linear birth-death process governed the tumor dynamic. Consequently, we employed the Gillespie algorithm [Gil76] to simulate the data according to a stochastic process. Due to the stochastic nature of the simulation, we collected 13 replicates under the same drug dose levels and time points as in (A-3). The biologically feasible ranges and the constraints for the optimization problems are presented in Table 6.
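For reference, a minimal Gillespie simulation of a single linear birth-death trajectory, of the kind used to generate the replicates (a sketch; the actual experimental setup additionally incorporates the dose-dependent death rate):

```python
import numpy as np

def gillespie_birth_death(n0, b, d, t_end, rng=None):
    """Exact stochastic simulation of a linear birth-death process:
    each cell divides at rate b and dies at rate d (Gillespie, 1976).
    Returns the cell count at time t_end."""
    rng = rng or np.random.default_rng()
    t, n = 0.0, n0
    while n > 0:
        total_rate = n * (b + d)
        t += rng.exponential(1.0 / total_rate)       # time to the next event
        if t > t_end:
            break
        n += 1 if rng.random() < b / (b + d) else -1  # birth or death
    return n
```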
A.6 Heterogeneous Logistic Model Experiment Details
Similar to the experiments in Section 4, we examined the algorithms’ performance for 100 independent experiments with distinct true parameter sets . The true parameter sets were randomly selected from the biologically feasible range shown in Table 7. With the true parameter set, we generated the experimental data at time points
according to the relation described in (12). The optimization problems were then formulated using the feasible regions described in Table 7 as in the prior experiments.
A.7 CRNAS Stopping Criteria
To ensure the fairness of comparison in the computational time of each optimization algorithm, we employ two stopping criteria for CRNAS, which are also implemented in fmincon. One is a measure of the first-order optimality condition of the objective function $f$, and the other is the stepsize of each iteration, $\|x_{k+1} - x_k\|$; the algorithm stops once either quantity falls below the given threshold.
It is worth noting that fmincon does not use the same measurement for the first-order optimality condition; however, both measures of the first-order optimality conditions are zero at a minimum. Therefore, we use the same threshold for both CRNAS and the fmincon algorithms. In addition to these two stopping criteria, the algorithm will also stop if the number of iterations exceeds 500.
References
- [Bar86] Earl R Barnes. A variation on Karmarkar’s algorithm for solving linear programming problems. Mathematical Programming, 36:174–182, 1986.
- [BVB16] Nicolas Boumal, Vlad Voroninski, and Afonso Bandeira. The non-convex Burer-Monteiro approach works on smooth semidefinite programs. Advances in Neural Information Processing Systems, 29, 2016.
- [CGT00] Andrew R Conn, Nicholas IM Gould, and Philippe L Toint. Trust region methods. SIAM, 2000.
- [CGT11] Coralia Cartis, Nicholas IM Gould, and Philippe L Toint. Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results. Mathematical Programming, 127(2):245–295, 2011.
- [CLPS03] Yang Cao, Shengtai Li, Linda Petzold, and Radu Serban. Adjoint sensitivity analysis for differential-algebraic equations: The adjoint DAE system and its numerical solution. SIAM journal on scientific computing, 24(3):1076–1089, 2003.
- [CRS19] Frank E Curtis, Daniel P Robinson, and Mohammadreza Samadi. An inexact regularized Newton framework with a worst-case iteration complexity of $\mathcal{O}(\epsilon^{-3/2})$ for nonconvex optimization. IMA Journal of Numerical Analysis, 39(3):1296–1327, 2019.
- [DKHC+08] Adrien Daigeler, Ludger Klein-Hitpass, Ansgar Michael Chromik, Oliver Müller, Jörg Hauser, Heinz-Herbert Homann, Hans-Ulrich Steinau, and Marcus Lehnhardt. Heterogeneous in vitro effects of doxorubicin on gene expression in primary human liposarcoma cultures. BMC cancer, 8:1–17, 2008.
- [DS24] Pavel Dvurechensky and Mathias Staudigl. Barrier Algorithms for Constrained Non-Convex Optimization. arXiv preprint arXiv:2404.18724, 2024.
- [GB15] Attila Gábor and Julio R Banga. Robust and efficient parameter estimation in dynamic models of biological systems. BMC systems biology, 9:1–25, 2015.
- [GC15] Sudhindra R Gadagkar and Gerald B Call. Computational tools for fitting the Hill equation to dose–response curves. Journal of Pharmacological and Toxicological methods, 71:68–76, 2015.
- [Gil76] Daniel T Gillespie. A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. Journal of computational physics, 22(4):403–434, 1976.
- [GLM16] Rong Ge, Jason D Lee, and Tengyu Ma. Matrix completion has no spurious local minimum. Advances in neural information processing systems, 29, 2016.
- [GZKB+12] Rudolf Gesztelyi, Judit Zsuga, Adam Kemeny-Beke, Balazs Varga, Bela Juhasz, and Arpad Tosaki. The Hill equation and the origin of quantitative pharmacology. Archive for history of exact sciences, 66:427–438, 2012.
- [Hil10] Archibald Vivian Hill. The possible effects of the aggregation of the molecules of hemoglobin on its dissociation curves. J. Physiol., 40:iv–vii, 1910.
- [HL23] Chuan He and Zhaosong Lu. A Newton-CG based barrier method for finding a second-order stationary point of nonconvex conic optimization with complexity guarantees. SIAM Journal on Optimization, 33(2):1191–1222, 2023.
- [JMS18] Wang Jin, Scott W McCue, and Matthew J Simpson. Extended logistic growth model for heterogeneous populations. Journal of Theoretical Biology, 445:51–61, 2018.
- [KLMT+23] Alvaro Köhn-Luque, Even Moa Myklebust, Dagim Shiferaw Tadele, Mariaserena Giliberto, Leonard Schmiester, Jasmine Noory, Elise Harivel, Polina Arsenteva, Shannon M Mumenthaler, Fredrik Schjesvold, et al. Phenotypic deconvolution in heterogeneous cancer cell populations using drug-screening data. Cell Reports Methods, 3(3), 2023.
- [LC13] Lennart Ljung and Tianshi Chen. Convexity issues in system identification. In 2013 10th IEEE international conference on control and automation (ICCA), pages 1–9. IEEE, 2013.
- [LV90] J Lagarias and R Vanderbei. I. I. Dikin's convergence result for the affine scaling algorithm. Contemp. Math., 114:109, 1990.
- [MAT2a] MATLAB Optimization Toolbox, R2022a. The MathWorks, Natick, MA, USA.
- [MMB03] Carmen G Moles, Pedro Mendes, and Julio R Banga. Parameter estimation in biochemical pathways: a comparison of global optimization methods. Genome research, 13(11):2467–2474, 2003.
- [NN94] Yurii Nesterov and Arkadii Nemirovskii. Interior-point polynomial algorithms in convex programming. SIAM, 1994.
- [NP06] Yurii Nesterov and Boris T Polyak. Cubic regularization of Newton method and its global performance. Mathematical programming, 108(1):177–205, 2006.
- [OW21] Michael O’Neill and Stephen J Wright. A log-barrier Newton-CG method for bound constrained optimization with complexity guarantees. IMA Journal of Numerical Analysis, 41(1):84–121, 2021.
- [ROW20] Clément W Royer, Michael O’Neill, and Stephen J Wright. A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization. Mathematical Programming, 180:451–488, 2020.
- [SWH21] Leonard Schmiester, Daniel Weindl, and Jan Hasenauer. Efficient gradient-based parameter estimation for dynamic models using qualitative data. Bioinformatics, 37(23):4493–4500, 2021.
- [VMF86] Robert J Vanderbei, Marc S Meketon, and Barry A Freedman. A modification of Karmarkar’s linear programming algorithm. Algorithmica, 1:395–407, 1986.
- [WGM+24] Chenyu Wu, Einar Bjarki Gunnarsson, Even Moa Myklebust, Alvaro Köhn-Luque, Dagim Shiferaw Tadele, Jorrit Martijn Enserink, Arnoldo Frigessi, Jasmine Foo, and Kevin Leder. Using birth-death processes to infer tumor subpopulation structure from live-cell imaging drug screening data. PLoS computational biology, 20(3):e1011888, 2024.