
Embedded Code Generation with CVXPY

Maximilian Schaller, Goran Banjac, Steven Diamond, Akshay Agrawal, Bartolomeo Stellato, and Stephen Boyd

This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme grant agreement OCAL, No. 787845, Stanford's SystemX, and the AI Chip Center for Emerging Smart Systems (ACCESS).

M. Schaller and G. Banjac are with the Department of Information Technology and Electrical Engineering, ETH Zürich, 8092 Zürich, Switzerland. {mschaller, gbanjac}@ethz.ch

S. Diamond is with Gridmatic, Campbell CA 95008, USA. [email protected]

A. Agrawal and S. Boyd are with the Department of Electrical Engineering, Stanford University, Stanford CA 94305, USA. {akshayka, boyd}@stanford.edu

B. Stellato is with the Department of Operations Research and Financial Engineering, Princeton University, Princeton NJ 08544, USA. [email protected]
Abstract

We introduce CVXPYgen, a tool for generating custom C code, suitable for embedded applications, that solves a parametrized class of convex optimization problems. CVXPYgen is based on CVXPY, a Python-embedded domain-specific language that supports a natural syntax (that follows the mathematical description) for specifying convex optimization problems. Along with the C implementation of a custom solver, CVXPYgen creates a Python wrapper for prototyping and desktop (non-embedded) applications. We give two examples, position control of a quadcopter and back-testing a portfolio optimization model. CVXPYgen outperforms a state-of-the-art code generation tool in terms of problem size it can handle, binary code size, and solve times. CVXPYgen and the generated solvers are open-source.

I Introduction

Convex optimization is used in many domains, including signal and image processing [1, 2], control [3, 4], and finance [5, 6], to mention just a few. A (parametrized) convex optimization problem can be written as

\[
\begin{array}{ll}
\text{minimize} & f_0(x,\theta)\\
\text{subject to} & f_i(x,\theta)\leq 0,\quad i=1,\ldots,p\\
& g_j(x,\theta)=0,\quad j=1,\ldots,r,
\end{array}
\tag{1}
\]

where $x\in\mathbf{R}^n$ is the optimization variable, $f_0$ is the objective function to be minimized, $f_1,\ldots,f_p$ are the inequality constraint functions, and $g_1,\ldots,g_r$ are the equality constraint functions. We require that $f_0,\ldots,f_p$ are convex functions, and $g_1,\ldots,g_r$ are affine functions [7]. The parameter $\theta\in\mathbf{R}^d$ specifies data that can change, but is constant and given when we solve an instance of the problem. We refer to the parametrized problem (1) as a problem family; when we specify a fixed value of $\theta$, we refer to it as a problem instance. We let $x^\star$ denote an optimal point for the problem (1), assuming it exists.

The problem family can be specified using a domain-specific language (DSL) for convex optimization. Such systems allow the user to specify the functions $f_i$ and $g_j$ in a simple format that closely follows the mathematical description of the problem. Examples include YALMIP [8] and CVX [9] (in Matlab), CVXPY [10] (in Python), Convex.jl [11] and JuMP [12] (in Julia), and CVXR [13] (in R). We focus on CVXPY, which also supports the declaration of parameters, enabling it to specify problem families, not just problem instances.

DSLs parse the problem description and translate (or canonicalize) it to an equivalent problem that is suitable for a solver that handles some generic class of problems, such as linear programs (LPs), quadratic programs (QPs), second-order cone programs (SOCPs), semidefinite programs (SDPs), and others such as exponential cone programs [7]. We focus on solvers that are suitable for embedded applications, i.e., are single-threaded, can be statically compiled, and do not make system calls: OSQP [14] handles QPs, SCS [15] and ECOS [16] handle cone programs that include SOCPs. After the canonicalized problem is solved, a solution of the original problem is retrieved from a solution of the canonicalized problem.

It is useful to think of the whole process as a function that maps $\theta$, the parameter that specifies the problem instance, into $x^\star$, an optimal value of the variable. With a DSL, this process consists of three steps. First, the original problem description is canonicalized to a problem in some standard (or canonical) form; then the canonicalized problem is solved using a solver; and finally, a solution of the original problem is retrieved from a solution of the canonicalized problem.

Figure 1: Comparison of convex optimization problem parsing and solving approaches. (a) Parser-solver calculating a solution $x^\star$ for the problem instance with parameter $\theta$. (b) Source code generation for a problem family, followed by compilation to a custom solver; the compiled solver computes a solution $x^\star$ to the problem instance with parameter $\theta$.

Most of these DSLs are organized as parser-solvers, which carry out the canonicalization each time the problem is solved (with different parameter values). This simple setting is illustrated in figure 1(a). We are interested in applications where we solve many instances of the problem, possibly in an embedded application with hard real-time constraints. For such applications, a code generator makes more sense. A code generator takes as input a description of a problem family, and generates specialized solver source code for that specific family. That source code is then compiled, and we have an efficient solver for the specific family.

This workflow is illustrated in figure 1(b). The compiled solver has a number of advantages over parser-solvers. First, the compiled solver can be deployed in embedded systems, fulfilling rules for safety-critical code [17]. Second, by caching canonicalization and exploiting the problem structure, the compiled solver is faster.

A well-known code generator is CVXGEN [18]. It handles problems that can be transformed to QPs and includes a custom interior-point solver. CVXGEN is used in many applications, including autonomous driving, dynamic energy management, and real-time trading in finance. SpaceX uses CVXGEN to generate flight code for high-speed onboard convex optimization for precision landing of space rockets [19].

CVXGEN was designed for use in real-time control systems, where the problems solved are not too big, either in terms of the number of variables or number of parameters. CVXGEN unrolls for-loops in its generated source code files to increase the solving speed, but this can also result in large compiled code size. Due to the flat and explicit code generated, CVXGEN only handles problems with up to around a few thousand parameters. (More accurately, CVXGEN is limited to 4000 nonzero entries in the linear system of equations solved in each iteration.)

I-A Contribution

In this paper, we introduce the code generation tool CVXPYgen, which produces custom C code to solve a parametrized family of convex optimization problems. The design decisions for CVXPYgen are somewhat different from those made for CVXGEN. First, CVXPYgen is built on top of the DSL CVXPY, whereas CVXGEN is entirely self-contained. This means that applications can be developed, prototyped, and simulated in Python using CVXPY. Second, CVXPYgen interfaces with multiple solvers, currently OSQP, SCS, and ECOS. This means that CVXPYgen supports problems more general than those that can be transformed to QPs. As far as we know, CVXPYgen is the first generic code generator for convex optimization that supports SOCPs. When using OSQP or SCS (both based on first-order methods), the generated solvers support warm-starting, which can further reduce solve times in some applications [4]. Third, CVXPYgen does not aggressively unroll loops in the generated code, which allows it to support high-dimensional parameters. In addition, matrix parameters can have any user-defined sparsity pattern. CVXPYgen uses partial update canonicalization, in which only the parameters that changed are processed when solving a new problem instance. Fourth, CVXPYgen and its generated solvers are fully open-source, whereas CVXGEN is proprietary.

CVXPYgen (and more generally, code generation) is useful for two families of practical applications. The first is solving convex optimization problems in real-time settings on embedded devices, as is done in control systems, real-time resource allocators, and other applications. The second is in solving a large number of instances of a problem family, possibly on general-purpose computers. One example is back-testing in finance, where a trading policy based on convex optimization is simulated on historical or simulated data over many periods. Typical back-tests involve solving thousands or more instances of a problem family. In these applications, there is no hard real-time constraint; the goal is simply to speed up solving by avoiding repeatedly canonicalizing the problem.

I-B Prior work

Several other code generators for optimization have been developed in addition to CVXGEN. FORCESPRO [20] and FORCES NLP [21] are proprietary code generators for multi-stage control problems. They handle problems that can be transformed to multi-stage quadratically constrained quadratic programs and nonlinear programs, respectively. The open-source code generators QCML [22] and CVXPY-CODEGEN [23], which interface with ECOS, were developed before CVXPY included support for parameters. These early prototypes are no longer actively supported or maintained.

I-C Outline

The remainder of this paper is structured as follows. In §II we describe, at a high level, how CVXPYgen works, and in §III, we illustrate how it is used with a simple example. In §IV and §V we compare CVXPYgen to CVXGEN (for embedded use) and CVXPY (for general purpose use), respectively. We conclude the paper in §VI.

II CVXPYgen

CVXPYgen is based on the open-source Python-embedded DSL CVXPY. CVXPY handles many types of conic programs and certain types of nonconvex problems, whereas we focus on LPs, QPs, and SOCPs for code generation. CVXPY provides modeling constructs that follow the mathematical description of convex optimization problems. It ensures that the modeled problems are convex, using disciplined convex programming (DCP). DCP constructs convex functions by combining a set of base functions in mathematical expressions according to a simple set of rules [24]. DCP ensures that the resulting problem is convex, and also readily canonicalized to a standard form.

In DCP, parameters are treated as constants, optionally with specified sign, and there are no restrictions on how these constants appear in the expressions defining the problem family. The recently developed concept of disciplined parametrized programming (DPP) puts additional restrictions on how parameters can enter a problem description. If a problem family description is DPP-compliant, then canonicalization and retrieval can be represented as affine mappings [25]. Thus DPP-compliant problems are reducible to ASA-form, which stands for Affine-Solve-Affine [25]. This is the key property we exploit in CVXPYgen. More about the DCP and DPP rules can be found in the aforementioned papers, or at https://www.cvxpy.org.

After CVXPY has reduced the DPP-compliant problem to ASA-form, CVXPYgen extracts a sparse matrix $C$ that canonicalizes the user-defined parameters $\theta$ to the parameters $\tilde{\theta}$ appearing in the standard form solver:

\[
\tilde{\theta} = C \begin{bmatrix} \theta \\ 1 \end{bmatrix}.
\]

CVXPYgen analyzes $C$ to determine which user-defined parameters (i.e., components of $\theta$) every standard-form parameter depends on. This information is used in the generated custom solver, where only the corresponding slices of the above mapping are computed when not all user-defined parameters are updated between solves. In addition, knowing the set of updated canonical parameters is very useful when using OSQP or SCS, as detailed below.
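To illustrate the idea (a toy numpy sketch, not CVXPYgen's actual implementation; the matrix values are made up), the sparsity pattern of $C$ tells us which rows of the canonicalization map must be recomputed when a subset of user-defined parameters changes:

```python
import numpy as np

# Toy canonicalization map theta_tilde = C @ [theta; 1], with a
# sparsity pattern chosen for illustration (hypothetical values).
C = np.array([
    [2.0, 0.0, 0.0, 0.0],   # row 0 depends only on theta[0]
    [0.0, 1.0, 0.0, 5.0],   # row 1 depends on theta[1] (plus an offset)
    [0.0, 0.0, 3.0, 0.0],   # row 2 depends only on theta[2]
])
theta = np.array([1.0, 2.0, 3.0])
theta_tilde = C @ np.append(theta, 1.0)

# Only theta[1] changed: recompute just the rows whose columns touch it.
updated = {1}
rows = [i for i in range(C.shape[0])
        if any(C[i, j] != 0.0 for j in updated)]
theta[1] = 10.0
theta_tilde[rows] = C[rows] @ np.append(theta, 1.0)

# The partially updated result matches a full recomputation.
assert np.allclose(theta_tilde, C @ np.append(theta, 1.0))
```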

In a similar way, the retrieval of a solution $x^\star$ of the original problem from a solution $\tilde{x}^\star$ of the canonicalized problem is an affine mapping,

\[
x^\star = R \begin{bmatrix} \tilde{x}^\star \\ 1 \end{bmatrix},
\]

where $R$ is a sparse matrix. Typically $R$ is a selector matrix, with only one nonzero entry in each row, equal to one, in which case this step can be handled via simple pointers in C.
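For intuition, a small numpy sketch (with made-up numbers) of why a selector-matrix retrieval reduces to plain indexing, which the generated C code realizes with pointers:

```python
import numpy as np

# Canonical solution with x_tilde_1 = x (first 2 entries) followed by
# auxiliary variables; the numbers are made up for illustration.
x_tilde = np.array([0.5, 1.5, -0.2, 0.1, 0.3])

# Selector retrieval map x_star = R @ [x_tilde; 1]: exactly one unit
# entry per row, so the matrix product equals indexing into x_tilde.
R = np.zeros((2, 6))
R[0, 0] = 1.0
R[1, 1] = 1.0

select = [0, 1]  # what the generated code effectively stores
assert np.allclose(R @ np.append(x_tilde, 1.0), x_tilde[select])
```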

CVXPYgen generates allocation-, library-, and division-free C code for the canonicalization and retrieval steps, which in essence are nothing more than sparse matrix-vector multiplication, with some logic that exploits pointers or partial updates. Sparse matrices are stored in compressed sparse column format [26] and dense matrices are stored as vectors via column-major flattening.
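The generated C code performs this multiplication with the usual CSC loops; the following plain-Python sketch mirrors that loop structure (the matrix here is a made-up example):

```python
def csc_matvec(n_cols, indptr, indices, data, x, y):
    """y += A @ x for A stored in compressed sparse column (CSC) format."""
    for j in range(n_cols):                  # loop over columns of A
        for p in range(indptr[j], indptr[j + 1]):
            y[indices[p]] += data[p] * x[j]  # row index and value of entry p

# A = [[1, 0],
#      [0, 2],
#      [3, 0]] stored column by column.
indptr = [0, 2, 3]
indices = [0, 2, 1]
data = [1.0, 3.0, 2.0]
y = [0.0, 0.0, 0.0]
csc_matvec(2, indptr, indices, data, [1.0, 1.0], y)
# y is now [1.0, 2.0, 3.0]
```

Note that the loop allocates no memory and uses no divisions, matching the properties of the generated C code.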

Any solver can be used to solve the canonicalized problem, which provides the final link:

\[
\tilde{x}^\star = \mathcal{S}(\tilde{\theta}),
\]

where $\mathcal{S}$ denotes the mapping from the canonicalized parameters to a solution of the canonicalized problem. (We assume here that the problem instance is feasible, and that when there are multiple solutions, we simply pick one.) If available, CVXPYgen uses the canonical solver's code generation method to produce C code for canonical solving. As of now, only OSQP provides this functionality [27]. Otherwise, the solver's C code is simply copied, possibly modified for use in embedded applications.

OSQP and SCS provide a set of C functions for updating their parameters. This way, when only canonical vector parameters are updated, the factorization of the linear system involved in the OSQP or SCS algorithm can be cached and re-used, which can lead to substantial speed-ups and division-free code. Just as only the parts of $\tilde{\theta}$ that depend on the updated parts of $\theta$ are re-canonicalized, only the OSQP or SCS update functions associated with these parameters are called before the canonicalized problem is solved.

The code and the full documentation for CVXPYgen with its generated solvers are available at

https://pypi.org/project/cvxpygen.

III Simple example

We consider the nonnegative least squares problem

\[
\begin{array}{ll}
\text{minimize} & \lVert Gx - h \rVert_2^2\\
\text{subject to} & x \geq 0,
\end{array}
\tag{2}
\]

where $x\in\mathbf{R}^n$ is the variable and $G\in\mathbf{R}^{m\times n}$, $h\in\mathbf{R}^m$ are parameters, so $\theta=(G,h)$. We will canonicalize this to the standard form accepted by OSQP,

\[
\begin{array}{ll}
\text{minimize} & \frac{1}{2}\tilde{x}^T P \tilde{x} + q^T \tilde{x}\\
\text{subject to} & l \leq A\tilde{x} \leq u,
\end{array}
\tag{3}
\]

where $\tilde{x}\in\mathbf{R}^{\tilde{n}}$ is the canonical variable and all other symbols are canonical parameters, i.e., $\tilde{\theta}=(P,q,A,l,u)$. (In this form, entries of $l$ can be $-\infty$, and entries of $u$ can be $+\infty$.)

The naïve canonicalization of (2) to (3) takes $\tilde{x}=x$ and

\[
P = 2G^TG, \quad q = -2G^Th, \quad A = I, \quad l = 0, \quad u = \infty.
\]

In this canonicalization, $\tilde{\theta}$ is not an affine function of $\theta$, since some entries of $\tilde{\theta}$ are products of entries of $\theta$.
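A quick numeric check of this statement: an affine map $f$ satisfies $f(G_1)+f(G_2)=f(G_1+G_2)+f(0)$, and the map $G \mapsto 2G^TG$ fails this test (random data for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
P = lambda G: 2 * G.T @ G          # naive canonicalization of the quadratic term

G1 = rng.standard_normal((4, 3))
G2 = rng.standard_normal((4, 3))

# An affine map f satisfies f(G1) + f(G2) == f(G1 + G2) + f(0).
lhs = P(G1) + P(G2)
rhs = P(G1 + G2) + P(np.zeros((4, 3)))
assert not np.allclose(lhs, rhs)   # the cross terms 2 G1^T G2 + 2 G2^T G1 remain
```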

The canonicalization that uses DPP first expresses problem (2) as

\[
\begin{array}{ll}
\text{minimize} & \lVert \tilde{x}_2 \rVert_2^2\\
\text{subject to} & \tilde{x}_2 = G\tilde{x}_1 - h, \quad \tilde{x}_1 \geq 0,
\end{array}
\]

with variable $\tilde{x}=(\tilde{x}_1,\tilde{x}_2)$, where $\tilde{x}_1=x$ and $\tilde{x}_2\in\mathbf{R}^m$. We can express this as (3) with parameters

\[
P = \begin{bmatrix} 0 & 0 \\ 0 & 2I \end{bmatrix}, \quad q = 0, \quad A = \begin{bmatrix} G & -I \\ I & 0 \end{bmatrix}, \quad l = (h, 0), \quad u = (h, \infty),
\]

where the second part of $u$ has $\infty$ in every entry, i.e., there is no upper bound on the second part of $A\tilde{x}$. In this canonicalization, $\tilde{\theta}$ is indeed an affine function of $\theta$. The retrieval map has the simple (linear) form $x^\star = [I~0]\,\tilde{x}^\star$.
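As a numeric sanity check of this canonicalization (a small numpy sketch with random data), the canonical objective and constraint rows reproduce the original problem for any $x$, taking $\tilde{x}=(x,\,Gx-h)$:

```python
import numpy as np

m, n = 4, 3
rng = np.random.default_rng(0)
G = rng.standard_normal((m, n))
h = rng.standard_normal(m)
x = rng.standard_normal(n)

# Canonical parameters of the DPP canonicalization.
P = np.block([[np.zeros((n, n)), np.zeros((n, m))],
              [np.zeros((m, n)), 2 * np.eye(m)]])
A = np.block([[G, -np.eye(m)],
              [np.eye(n), np.zeros((n, m))]])

x_tilde = np.concatenate([x, G @ x - h])
obj = 0.5 * x_tilde @ P @ x_tilde
assert np.isclose(obj, np.sum((G @ x - h) ** 2))   # matches ||Gx - h||_2^2

# First block row of A x_tilde equals h (equality rows, l = u = h);
# second block row is x itself (bounded below by 0 in the QP).
assert np.allclose((A @ x_tilde)[:m], h)
assert np.allclose((A @ x_tilde)[m:], x)
```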

 1  import cvxpy as cp
 2  from cvxpygen import cpg
 3
 4  # model problem
 5  x = cp.Variable(n, name='x')
 6  G = cp.Parameter((m, n), name='G')
 7  h = cp.Parameter(m, name='h')
 8  p = cp.Problem(cp.Minimize(cp.sum_squares(G @ x - h)),
 9                 [x >= 0])
10  # generate code
11  cpg.generate_code(p)
Figure 2: Code generation for example (2). We assume that the dimensions m and n have been previously defined.

We generate code for this problem as shown in figure 2. The problem is modeled with CVXPY in lines 5–9. The actual code generation is done in line 11. The variable and parameters are named in lines 5–7 via their name attributes. These names are used for C variable and function naming.

IV Comparison to CVXGEN

We compare CVXPYgen to CVXGEN for a model predictive control (MPC) problem family, described in Appendix A. In particular, we compare solve times of the C interface and executable sizes. The MPC problems are parametrized by their horizon length $H\in\{6,12,18,30,60\}$; the number of variables is around $10H$. Figure 3 shows the resulting solve times averaged over 100 simulation steps. CVXGEN is not able to generate code for $H>18$. Together with the automatically chosen OSQP solver, CVXPYgen outperforms CVXGEN for all problem sizes.

Refer to caption
Figure 3: Comparison of solve times (top) and binary sizes (bottom) with CVXGEN (magenta) and CVXPYgen (blue) used for code generation.

The bottom of figure 3 presents the executable sizes for CVXGEN and CVXPYgen, respectively. For all values of $H$, the executables generated with CVXPYgen are considerably smaller.

The execution times cited above were measured on a MacBook Pro with a 2.3 GHz Intel i5 processor. We have also used these generated solvers to control the position of a custom-built 14-by-14 cm quadcopter. The generated code was compiled in a robot operating system (ROS) node, and run on the drone's Intel Atom x5-Z8350 processor, at 30 Hz. We provide a video of the quadcopter following a circle trajectory at

V Comparison to CVXPY

Here we compare CVXPYgen to CVXPY, for a (financial) portfolio optimization problem family, described in Appendix B. This family of problems is parametrized by the number of assets in the portfolio, $N\in\{10,20,40,60,100\}$. The number of variables in these problems is around $2N$. Figure 4 gives the results for 500 solves, a two-year back-test using historical data. We see that solving is on average about 6 times faster with CVXPYgen for $N=10$, with the ratio dropping to 2.5 for $N=100$. The execution times are measured on the same MacBook described in the previous section.

Refer to caption
Figure 4: Comparison of solve times with CVXPY (magenta) and the CVXPY interface of CVXPYgen (blue).

An interesting metric is the break-even point, which is the number of instances that need to be solved before CVXPYgen is faster than CVXPY, when we include the code generation and compilation time. This number is around 5000, and not too dependent on $N$. A typical back-test might involve daily trading, with around 250 trading days in each year, over 4 years, with hundreds of different hyper-parameter values, which gives on the order of 100,000 solves, well above this break-even point.

VI Conclusion

We have described CVXPYgen, a tool for generating custom C code that solves instances of a family of convex optimization problems specified within CVXPY. This gives a seamless path from prototyping an application using Python and CVXPY to a final embedded implementation in C. CVXPYgen supports a wider variety of problems (such as SOCPs) than the state-of-the-art code generator CVXGEN, and numerical experiments show that it outperforms CVXGEN in terms of allowable problem size, compiled code size, and solve times. For applications running on general-purpose machines, we obtain a significant speedup over CVXPY when many problem instances are to be solved.

Appendix A MPC example

We use MPC to track the position and velocity of a quadcopter with mass $m$, experiencing gravitational acceleration $g$. We model the quadcopter as a point mass with position error $p_k\in\mathbf{R}^3$ and velocity error $v_k\in\mathbf{R}^3$, where $k$ denotes the time step or period. We concatenate the position and velocity errors into the state $z_k=(p_k,v_k)\in\mathbf{R}^6$, which we regulate to $0$. The input is the force vector $u_k\in\mathbf{R}^3$ without gravity compensation. The dynamics are

\[
z_{k+1} = Az_k + Bu_k,
\]

where $A\in\mathbf{R}^{6\times 6}$ and $B\in\mathbf{R}^{6\times 3}$.

We limit the tilt angle of the quadcopter. Since the quadcopter's attitude is tied to the pointing direction of $u$ plus gravity compensation, we impose a (polyhedral) tilt angle constraint as

\[
c_j^T (u_k)_{0:1} \leq \gamma \left( (u_k)_2 + mg \right), \quad j = 0,\ldots,N^{\text{hs}}-1,
\]

where $(\cdot)_{0:1}$ and $(\cdot)_2$ denote the horizontal and vertical parts of a vector in $\mathbf{R}^3$, respectively, and $\gamma>0$. We use $N^{\text{hs}}$ halfspaces parametrized by $c_j$. Compared to a spherical (natural) tilt angle constraint, this polyhedral formulation keeps the MPC problem QP-representable. The lower and upper thrust limits of the propellers are represented as $u_{\text{vmin}} \leq (u_k)_2 \leq u_{\text{vmax}}$ with $u_{\text{vmin}}<0$ and $u_{\text{vmax}}>0$.
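The paper does not specify the $c_j$; one plausible choice (an assumption for illustration, not taken from the paper) is unit normals evenly spaced on the circle, which approximates the natural cone constraint $\lVert (u_k)_{0:1}\rVert_2 \leq \gamma((u_k)_2+mg)$ by $N^{\text{hs}}$ halfspaces:

```python
import numpy as np

# Hypothetical choice of halfspace normals: unit vectors evenly
# spaced on the circle (an assumption, not specified in the paper).
N_hs = 8
angles = 2 * np.pi * np.arange(N_hs) / N_hs
c = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (N_hs, 2)

gamma, mass, g = 0.4, 1.0, 9.81   # made-up physical constants
u = np.array([0.3, -0.2, 1.0])    # hypothetical input (horizontal, vertical)

# Check all N_hs halfspace constraints at once.
tilt_ok = bool(np.all(c @ u[:2] <= gamma * (u[2] + mass * g)))
```

Since $c_j^T u_{0:1} \leq \lVert u_{0:1}\rVert_2$ for unit $c_j$, the polyhedral set contains the cone; increasing $N^{\text{hs}}$ tightens the approximation.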

Up to horizon $H$, we penalize state errors and control effort via the traditional quadratic cost

\[
z_H^T Q_T z_H + \sum_{k=0}^{H-1} \left( z_k^T Q z_k + u_k^T R u_k \right),
\]

with diagonal positive definite matrices $Q$ and $R$, and positive definite $Q_T$, which is the solution to the discrete-time algebraic Riccati equation (as a function of $A$, $B$, $Q$, and $R$). In addition, at every stage, we discourage rapid changes of the input (that the low-level attitude control system cannot follow) with the additional cost term

\[
\sum_{k=0}^{H-1} (u_{k+1} - u_k)^T T (u_{k+1} - u_k),
\]

where $T$ is diagonal positive definite. Combining all the above constraints and cost terms, we arrive at the MPC problem

\[
\begin{array}{ll}
\text{minimize} & z_H^T Q_T z_H + \displaystyle\sum_{k=0}^{H-1} \big( z_k^T Q z_k + u_k^T R u_k + (u_{k+1}-u_k)^T T (u_{k+1}-u_k) \big)\\[5pt]
\text{subject to} & z_0 = z_{\text{meas}}, \quad u_0 = u_{\text{prev}}\\
& z_{k+1} = Az_k + Bu_k, \quad k = 0,\ldots,H-1\\
& u_{\text{vmin}} \leq (u_k)_2 \leq u_{\text{vmax}}, \quad k = 1,\ldots,H-1\\
& c_j^T (u_k)_{0:1} \leq \gamma\left((u_k)_2 + mg\right),\\
& \quad j = 0,\ldots,N^{\text{hs}}-1, \quad k = 1,\ldots,H-1,
\end{array}
\]

where the states $z_k$ and inputs $u_k$ are optimization variables. The current state measurement is $z_{\text{meas}}$ and the solution for the input at the first stage from the previous solve is $u_{\text{prev}}$. In practice, these two would typically be the only parameters of the problem. However, for demonstration purposes, we declare all other symbols (except for the variables, all $c_j$, $N^{\text{hs}}$, and $H$) as parameters. This problem formulation is not DPP-compliant, e.g., because of the multiplication of the parameters $\gamma$, $m$, and $g$.

Before rewriting the problem in DPP-compliant form, we define the following convenience notation for matrix slicing, using zero-based indexing. $M_{r:t,c:d}$ is the slice of a matrix $M$ from its $r$th to its $t$th row (inclusive) and from its $c$th to its $d$th column (inclusive). We slice full columns by omitting the row indices, i.e., $M_c$ is the $c$th column of $M$. Finally, we write the DPP-compliant problem as

\[
\begin{array}{ll}
\text{minimize} & \lVert Q_T^{1/2}Z_H\rVert_2^2 + \lVert Q^{1/2}Z_{0:H-1}\rVert_F^2 + \lVert R^{1/2}U_{0:H-1}\rVert_F^2 + \lVert T^{1/2}(U_{1:H}-U_{0:H-1})\rVert_F^2\\[5pt]
\text{subject to} & Z_0 = z_{\text{meas}}, \quad U_0 = u_{\text{prev}}\\
& Z_{1:H} = AZ_{0:H-1} + BU_{0:H-1}\\
& u_{\text{vmin}} \leq U_{2,1:H-1} \leq u_{\text{vmax}}\\
& c_j^T U_{0:1,1:H-1} \leq \gamma U_{2,1:H-1} + d, \quad j = 0,\ldots,N^{\text{hs}}-1,
\end{array}
\]

where $Z\in\mathbf{R}^{6\times(H+1)}$ and $U\in\mathbf{R}^{3\times(H+1)}$ are the variables and contain the state and input vectors, respectively, for increasing stage count in their columns. The vector $d\in\mathbf{R}^{H-1}$ contains $\gamma mg$ in all its entries. The problem is parametrized by

\[
Q_T^{1/2},~Q^{1/2},~R^{1/2},~T^{1/2},~A,~B,~\gamma,~d,~u_{\text{vmin}},~u_{\text{vmax}},~z_{\text{meas}},~u_{\text{prev}}.
\]

In the expressions above, $M^{1/2}$ denotes any square root of the positive definite matrix $M$, e.g., the transposed Cholesky factor, $\lVert\cdot\rVert_F$ denotes the Frobenius norm, and the inequalities are elementwise.
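The Frobenius-norm terms above are just a compact way of writing the stage-wise quadratic costs; a quick numpy check of the identity $\lVert Q^{1/2}Z\rVert_F^2 = \sum_k z_k^T Q z_k$ (random data for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
H = 5
Q = np.diag(rng.uniform(0.5, 2.0, 6))     # diagonal positive definite
Z = rng.standard_normal((6, H))           # states z_0, ..., z_{H-1} as columns
Q_half = np.sqrt(Q)                       # square root of a diagonal matrix

frob = np.sum((Q_half @ Z) ** 2)          # || Q^{1/2} Z ||_F^2
summed = sum(Z[:, k] @ Q @ Z[:, k] for k in range(H))
assert np.isclose(frob, summed)
```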

Appendix B Portfolio optimization example

We search for a portfolio of holdings in $N$ assets and a cash balance. The corresponding weights are $w\in\mathbf{R}^{N+1}$, where the last entry represents the cash balance, and $\mathbf{1}^Tw=1$. We impose a leverage limit $\lVert w\rVert_1\leq L$, where $L\geq 1$ is a parameter.

The return $r\in\mathbf{R}^{N+1}$ has mean (or forecast) $\alpha\in\mathbf{R}^{N+1}$, so the expected portfolio return is $\alpha^Tw$. The risk, or variance of the portfolio return, is $w^T\Sigma w$, where $\Sigma$ is the positive definite covariance matrix of the asset returns.

We consider two additional objective terms. One is a trading or transaction cost $(\kappa^{\text{tc}})^T|w-w^{\text{prev}}|$, where the absolute value is elementwise, $w^{\text{prev}}$ is the vector of previous period weights, and $\kappa^{\text{tc}}\geq 0$ is a parameter. The other is a short-selling cost $(\kappa^{\text{sh}})^T(w)_-$, where $(w)_-=\max\{-w,0\}$ (elementwise), and $\kappa^{\text{sh}}\geq 0$ is a parameter.

The overall objective function is

\[
\alpha^Tw - \gamma^{\text{risk}} w^T\Sigma w - \gamma^{\text{tc}} (\kappa^{\text{tc}})^T |w - w^{\text{prev}}| - \gamma^{\text{sh}} (\kappa^{\text{sh}})^T (w)_-,
\]

where $\gamma^{\text{risk}}$, $\gamma^{\text{tc}}$, and $\gamma^{\text{sh}}$ are positive parameters that scale the risk, transaction cost, and shorting cost, respectively.

Our final optimization problem is

\[
\begin{array}{ll}
\text{maximize} & \alpha^Tw - \gamma^{\text{risk}} w^T\Sigma w - \gamma^{\text{tc}} (\kappa^{\text{tc}})^T |w - w^{\text{prev}}| - \gamma^{\text{sh}} (\kappa^{\text{sh}})^T (w)_-\\[5pt]
\text{subject to} & \mathbf{1}^Tw = 1, \quad \lVert w\rVert_1 \leq L,
\end{array}
\]

where $w\in\mathbf{R}^{N+1}$ is the variable, and all other symbols are parameters.

The covariance matrix Σ\Sigma takes the standard factor model form,

\[
\Sigma = FF^T + D,
\]

where $F\in\mathbf{R}^{(N+1)\times K}$ and $D$ is positive definite and diagonal. The number of factors in this risk model is $K$, which is usually much smaller than $N$.

This problem formulation is not DPP-compliant, but we can rewrite it in DPP-compliant form by eliminating quadratic forms and collecting products of parameters, as in

\[
\begin{array}{ll}
\text{maximize} & \big(\tfrac{\alpha}{\gamma^{\text{risk}}}\big)^Tw - \lVert F^Tw\rVert_2^2 - \lVert D^{1/2}w\rVert_2^2 - \big(\tfrac{\gamma^{\text{tc}}}{\gamma^{\text{risk}}}\kappa^{\text{tc}}\big)^T|\Delta w| - \big(\tfrac{\gamma^{\text{sh}}}{\gamma^{\text{risk}}}\kappa^{\text{sh}}\big)^T(w)_-\\[5pt]
\text{subject to} & \mathbf{1}^Tw = 1, \quad \lVert w\rVert_1 \leq L\\
& \Delta w = w - w^{\text{prev}},
\end{array}
\]

where the portfolio weight vector $w\in\mathbf{R}^{N+1}$ and the weight change vector $\Delta w\in\mathbf{R}^{N+1}$ are the variables. The problem is parametrized by

\[
\frac{\alpha}{\gamma^{\text{risk}}}, \quad F, \quad D^{1/2}, \quad \frac{\gamma^{\text{tc}}}{\gamma^{\text{risk}}}\kappa^{\text{tc}}, \quad \frac{\gamma^{\text{sh}}}{\gamma^{\text{risk}}}\kappa^{\text{sh}}, \quad L, \quad w^{\text{prev}}.
\]

We consider $N$ stock assets, chosen randomly from the S&P 500, with historical return data from 2017–2019. For each value of $N$, we set $K=\max(N/10,5)$.
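As a sanity check on the quadratic-form elimination above (random data for illustration), the factor-model risk splits exactly into the two squared norms appearing in the DPP-compliant objective:

```python
import numpy as np

N, K = 10, 3
rng = np.random.default_rng(0)
F = rng.standard_normal((N + 1, K))
d = rng.uniform(0.1, 1.0, N + 1)          # diagonal of D
Sigma = F @ F.T + np.diag(d)              # factor model covariance

w = rng.standard_normal(N + 1)
risk = w @ Sigma @ w                      # w^T Sigma w
split = np.sum((F.T @ w) ** 2) + np.sum(d * w ** 2)  # ||F^T w||^2 + ||D^{1/2} w||^2
assert np.isclose(risk, split)
```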

Acknowledgment

We would like to express our gratitude to John Lygeros for enabling this collaboration. Moreover, we would like to thank JunEn Low and Mac Schwager for providing the quadcopter testing environment at the Stanford Flight Room.

References

  • [1] J. Mattingley and S. Boyd, “Real-time convex optimization in signal processing,” IEEE Signal Processing Magazine, vol. 27, no. 3, pp. 50–61, 2010.
  • [2] M. Zibulevsky and M. Elad, “L1-L2 optimization in signal and image processing,” IEEE Signal Processing Magazine, vol. 27, no. 3, pp. 76–88, 2010.
  • [3] Y. Wang and S. Boyd, “Fast model predictive control using online optimization,” IEEE Transactions on Control Systems Technology, vol. 18, no. 2, pp. 267–278, 2009.
  • [4] C. Garcia, D. Prett, and M. Morari, “Model predictive control: Theory and practice – a survey,” Automatica, vol. 25, no. 3, pp. 335–348, 1989.
  • [5] S. Boyd, E. Busseti, S. Diamond, R. Kahn, K. Koh, P. Nystrup, and J. Speth, “Multi-period trading via convex optimization,” Foundations and Trends in Optimization, vol. 3, no. 1, pp. 1–76, 2017.
  • [6] H. Markowitz, “Portfolio selection,” Journal of Finance, vol. 7, no. 1, pp. 77–91, 1952.
  • [7] S. Boyd and L. Vandenberghe, Convex Optimization.   Cambridge University Press, 2004.
  • [8] J. Löfberg, “YALMIP: a toolbox for modeling and optimization in MATLAB,” in IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2004, pp. 284–289.
  • [9] M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 2.1,” 2014.
  • [10] S. Diamond and S. Boyd, “CVXPY: A Python-embedded modeling language for convex optimization,” Journal of Machine Learning Research, vol. 17, no. 83, pp. 1–5, 2016.
  • [11] M. Udell, K. Mohan, D. Zeng, J. Hong, S. Diamond, and S. Boyd, “Convex optimization in Julia,” SC14 Workshop on High Performance Technical Computing in Dynamic Languages, 2014.
  • [12] I. Dunning, J. Huchette, and M. Lubin, “JuMP: a modeling language for mathematical optimization,” SIAM Review, vol. 59, no. 2, pp. 295–320, 2017.
  • [13] A. Fu, B. Narasimhan, and S. Boyd, “CVXR: an R package for disciplined convex optimization,” Journal of Statistical Software, vol. 94, no. 14, pp. 1–34, 2020.
  • [14] B. Stellato, G. Banjac, P. Goulart, A. Bemporad, and S. Boyd, “OSQP: an operator splitting solver for quadratic programs,” Mathematical Programming Computation, vol. 12, no. 4, pp. 637–672, 2020.
  • [15] B. O’Donoghue, E. Chu, N. Parikh, and S. Boyd, “Conic optimization via operator splitting and homogeneous self-dual embedding,” Journal of Optimization Theory and Applications, vol. 169, no. 3, pp. 1042–1068, 2016.
  • [16] A. Domahidi, E. Chu, and S. Boyd, “ECOS: An SOCP solver for embedded systems,” in European Control Conference (ECC).   IEEE, 2013, pp. 3071–3076.
  • [17] G. Holzmann, “The power of 10: Rules for developing safety-critical code,” Computer, vol. 39, no. 6, pp. 95–99, 2006.
  • [18] J. Mattingley and S. Boyd, “CVXGEN: a code generator for embedded convex optimization,” Optimization and Engineering, vol. 13, no. 1, pp. 1–27, 2012.
  • [19] L. Blackmore, “Autonomous precision landing of space rockets,” in Frontiers of Engineering: Reports on Leading-Edge Engineering from the 2016 Symposium.   The Bridge Washington, DC, 2016, pp. 15–20.
  • [20] A. Domahidi and J. Jerez, “FORCES Professional,” Embotech AG, https://embotech.com/FORCES-Pro, 2014–2019.
  • [21] A. Zanelli, A. Domahidi, J. Jerez, and M. Morari, “FORCES NLP: an efficient implementation of interior-point methods for multistage nonlinear nonconvex programs,” International Journal of Control, vol. 93, no. 1, pp. 13–29, 2020.
  • [22] E. Chu and S. Boyd, “QCML: Quadratic cone modeling language,” https://github.com/cvxgrp/qcml, 2017.
  • [23] N. Moehle, J. Mattingley, and S. Boyd, “Embedded convex optimization with CVXPY,” https://github.com/moehle/cvxpy_codegen, 2017.
  • [24] M. Grant, “Disciplined convex programming (unpublished doctoral dissertation),” 2004.
  • [25] A. Agrawal, B. Amos, S. Barratt, S. Boyd, S. Diamond, and Z. Kolter, “Differentiable convex optimization layers,” in Advances in Neural Information Processing Systems (NeurIPS), 2019.
  • [26] A. Buluç, J. Fineman, M. Frigo, J. Gilbert, and C. Leiserson, “Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks,” in Symposium on Parallelism in Algorithms and Architectures (SPAA).   Association for Computing Machinery, 2009, pp. 233–244.
  • [27] G. Banjac, B. Stellato, N. Moehle, P. Goulart, A. Bemporad, and S. Boyd, “Embedded code generation using the OSQP solver,” in IEEE Conference on Decision and Control (CDC).   IEEE, 2017, pp. 1906–1911.