
Generating Large-Scale Trajectories Efficiently using
Double Descriptions of Polynomials

Zhepei Wang1,2,3, Hongkai Ye1,2,3, Chao Xu1,2, and Fei Gao1,2
This work was supported by National Natural Science Foundation of China under Grant 62003299.
1 State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China. {wangzhepei, hkye, cxu, fgaoaa}@zju.edu.cn
2 Huzhou Institute, Zhejiang University, Huzhou 313000, China.
3 National Engineering Research Center for Industrial Automation (Ningbo Institute), Ningbo 315000, China.
Abstract

For quadrotor trajectory planning, describing a polynomial trajectory through either its coefficients or its end-derivatives has its own convenience in energy minimization. We name them the double descriptions of polynomial trajectories. The transformation between them, which causes most of the inefficiency and instability, is formally analyzed in this paper. Leveraging its analytic structure, we design a linear-complexity scheme for both jerk/snap minimization and parameter gradient evaluation, which possesses efficiency, stability, flexibility, and scalability. With the help of our scheme, generating an energy-optimal (minimum-snap) trajectory costs only about 1 \mu s per piece at scales up to 1,000,000 pieces. Moreover, generating large-scale energy-time optimal trajectories is also accelerated by an order of magnitude against conventional methods.

I Introduction

Smooth polynomial trajectories generated by minimizing jerk/snap are widely used in the navigation of quadrotors [1, 2, 3]. In these applications, the double descriptions of a polynomial trajectory are frequently involved. One description, consisting of piece coefficients and piece times, is convenient for cost evaluation and trajectory configuration. The other description, consisting of piece end-derivatives and times, is convenient and stable for cost minimization [4].

Although these double descriptions offer an efficient and accurate way to obtain energy-optimal trajectories, overhead and instability are often inevitably caused by the numerical transformations between them [5]. Besides, the piece times are coupled into the transformations. Without knowing their structure, directly optimizing the times becomes hard or quite inefficient. In this situation, many perturbed energy-optimal trajectories are often generated to obtain gradient information [6], thus ruining the convenience offered by the descriptions.

To overcome these drawbacks, we study the transformation between the double descriptions. Its concrete structure and analytic expression are clearly provided, and it is indeed a diffeomorphism. Leveraging its analytic form, we first design a scheme for linear-complexity jerk/snap minimization. Unnecessary computation on the transformation is eliminated from this scheme, making it faster than many known schemes by at least an order of magnitude. Utilizing the smoothness, we also derive an analytic gradient for waypoints and times, which also enjoys minimal complexity. The exact gradient information makes it possible to directly optimize all parameters under complex constraints.

Figure 1: Large-scale energy-time optimal trajectories generated by the proposed scheme and the benchmarked scheme, which is the global trajectory optimization in Teach-Repeat-Replan [7]. The proposed one takes 0.43 seconds while the other takes 7.70 seconds. The energy-time cost is 2643.43 for the proposed one while it is 3269.76 for the other. The efficiency is improved by an order of magnitude.

In this paper, we propose a framework based on the above results. Provided with a collection of waypoints and piece times, the minimization of jerk/snap is taken as a black box with promising efficiency. Through the cheap exact gradient, our framework directly optimizes intermediate waypoints and piece times to meet the safety constraints and dynamic limits, respectively. Its flexibility and efficiency are validated by applications and benchmarks on classic problems. Summarizing our contributions in this work:

  • An analytic transformation between double descriptions is derived.

  • A linear-complexity minimum jerk/snap solution is designed with extreme efficiency.

  • An analytic gradient for waypoints and piece times is provided with linear complexity.

  • Applications and benchmarks on classic problems are provided to validate the superiority of our framework.

  • High-performance implementations of the solution and gradient computation are open-sourced for the reference of the community. Source code: https://github.com/ZJU-FAST-Lab/large_scale_traj_optimizer

II Related Work

Quadrotor trajectory planning using polynomials has been prosperous since Mellinger et al. [6]. They eliminate the differential constraints from quadrotor dynamics via differential flatness. Then, enough smoothness of the flat output trajectories guarantees constraint satisfaction by default. This is quite different from common robotic trajectory generation, where standard Nonlinear Programming (NLP) approaches must be employed to enforce complicated dynamic constraints. Consequently, they conduct snap minimization for smooth flying trajectories with the description of the first kind, where the coefficients are optimized through a Quadratic Program (QP) with prescribed times and waypoints. The gradient for time is roughly estimated by solving perturbation problems. Richter et al. [1] propose the description of the second kind, which eliminates the equality constraints in the QP, thus yielding a closed-form solution. For large problems, its efficiency deteriorates because of the sparse matrix inverse involved. Safety constraints and dynamic limits are heuristically enforced, which can cause low trajectory quality. The two methods above provide many insights into quadrotor trajectory planning, even though there are weaknesses.

Many schemes are proposed to improve the above two frameworks. To improve efficiency, Burke et al. [8] first propose a linear-complexity scheme to solve the primal and dual variables of the QP, which generates trajectories with 500,000 pieces in less than 3 minutes. However, its efficiency is still not satisfactory because of the redundant problem size and block inverses. Optimizing waypoints and times is still not considered. Burri et al. [2] and Oleynikova et al. [3] directly optimize all end-derivatives of the inner pieces through NLP, where the piece times are fixed. Almeida et al. [9] train a supervised neural network to learn time allocation offline, so that relatively good piece times can be allocated online. Our previous work [10] proposes an efficient optimization scheme for piece times, while the handling of safety constraints lacks flexibility.

Due to the seeming inflexibility of raw polynomial splines, other methods describe trajectories via control points of B-splines and Bézier curves [11, 12, 13]. The safety constraints and dynamic limits can be easily enforced via the convex hull property. Tordesillas et al. [14] utilize this property and optimize the interval allocation. A Mixed Integer Quadratic Program (MIQP) is formulated to assign each trajectory piece to a convex region. However, the property can be conservative since the temporal profile of the trajectory is limited by the geometrical profile. To handle this issue, Gao et al. [7] propose a convex formulation to improve the dynamic performance of an already-known geometrical curve.

III Preliminaries

Since the differential flatness of quadrotors has been validated in [6], polynomial trajectories are widely used in continuous-time motion planning for quadrotors. Consider an (N+1)-order M-piece spline whose i-th piece is an N-degree polynomial p_{i}(t)=c_{i}^{\mathrm{T}}\beta(t),~t\in[0,T_{i}], where c_{i}\in\mathbb{R}^{N+1} is the coefficient vector of this piece and \beta(t)=(1,t,t^{2},\cdots,t^{N})^{\mathrm{T}} the natural basis. The entire spline on the duration [0,\tau_{M}] is defined by p(t):=c_{i}^{\mathrm{T}}\beta(t-\tau_{i-1}) for t\in[\tau_{i-1},\tau_{i}], where \tau_{i}=\sum_{j=1}^{i}{T_{j}}.
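
For illustration, the following is a minimal evaluation sketch of such a piecewise description (C++ with Eigen assumed for linear algebra; it is only an illustrative reading of the definition above, not the released implementation): it locates the active piece by accumulating the piece times and evaluates c_{i}^{\mathrm{T}}\beta(t-\tau_{i-1}) by Horner's rule.

#include <Eigen/Dense>
#include <vector>

// Evaluate p(t) for an N-degree, M-piece spline.
// coeffs: (N+1) x M matrix whose i-th column holds c_i in the natural basis.
// T: piece durations T_1, ..., T_M.
double evalSpline(const Eigen::MatrixXd &coeffs,
                  const std::vector<double> &T, double t) {
  int i = 0;
  while (i + 1 < (int)T.size() && t > T[i]) {  // locate the piece containing t
    t -= T[i];
    ++i;
  }
  Eigen::VectorXd c = coeffs.col(i);           // c_i
  double value = 0.0;
  for (int k = (int)c.size() - 1; k >= 0; --k) // Horner's rule on beta(t - tau_{i-1})
    value = value * t + c(k);
  return value;
}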

For any given differentially flat quadrotor, high-order continuity is required to satisfy the differential constraints induced by the dynamics. Besides, smoothness is also ensured by penalizing the integral of the square of a high-order derivative. Assuming the order of the penalized derivative to be s, the cost function can be written as

J(c,T)=\int_{0}^{\tau_{M}}{p^{(s)}(t)^{2}}\mathrm{d}{t}, (1)

where c=(c_{1}^{\mathrm{T}},\dots,c_{M}^{\mathrm{T}})^{\mathrm{T}}\in\mathbb{R}^{M(N+1)} is the coefficient vector of the entire spline and T=(T_{1},\dots,T_{M})^{\mathrm{T}}\in\mathbb{R}^{M} is the piece time vector. The cost J implicitly decouples for each dimension [4], thus we only consider 1-dimensional splines. According to the Linear Quadratic Minimum-Time (LQMT) problem [15], we set the degree as N=2s-1 hereafter because it is the optimal degree for the minimizer of J.

A spline p(t) can be naturally described by the collection \{c,T\}, i.e., the first description. Consider the minimization of J(c,T) with fixed T and some derivatives specified at certain timestamps. It can be formulated as a QP [6] if c is taken as the vector of decision variables. The second description is denoted by the collection \{d,T\}, where d\in\mathbb{R}^{2Ms} is an end-derivative vector d=(d_{1}^{\mathrm{T}},\dots,d_{M}^{\mathrm{T}})^{\mathrm{T}} with

d_{i}=\left(p_{i}(0),\dots,p^{(s-1)}_{i}(0),p_{i}(T_{i}),\dots,p^{(s-1)}_{i}(T_{i})\right)^{\mathrm{T}}. (2)

Bry et al. [4] leverage the convenience of \{d,T\} to eliminate the equality constraints in the QP so that a closed-form solution for the optimal d is constructed. Obviously, the double descriptions of polynomial trajectories provide a lot of convenience for energy minimization.

IV Efficient Solution and Gradient Computation With Double Descriptions

In this section, we derive an explicit diffeomorphism between the double descriptions of polynomial trajectories. Based on this transformation, we construct the minimizer of J(c,T) for any T in O(M) linear complexity. Utilizing its properties, we obtain the analytical gradient w.r.t. the problem parameters, i.e., the specified derivatives and T, which offers much flexibility to further improve the trajectory quality.

IV-A Explicit Diffeomorphism Between Double Descriptions

First, we explore the relation between the parameterization spaces of both descriptions. We denote by \mathbb{P}_{\mathrm{c}}=\mathbb{R}^{2Ms}\times\mathbb{R}_{>0}^{M} the space for \{c,T\} and by \mathbb{P}_{\mathrm{d}}=\mathbb{R}^{2Ms}\times\mathbb{R}_{>0}^{M} the space for \{d,T\}. The relation between \mathbb{P}_{\mathrm{c}} and \mathbb{P}_{\mathrm{d}} is given below.

Proposition 1.

For a polynomial trajectory, denote by \{c,T\} its parameters in \mathbb{P}_{\mathrm{c}} and by \{d,T\} its parameters in \mathbb{P}_{\mathrm{d}}. The map from \{c,T\} to \{d,T\} is a C^{\infty}-diffeomorphism between \mathbb{P}_{\mathrm{c}} and \mathbb{P}_{\mathrm{d}}. An explicit diffeomorphism consists of an identity map on T and a smooth bijection between c and d:

d=\mathbf{A}_{\mathrm{F}}(T)c,~c=\mathbf{A}_{\mathrm{B}}(T)d, (3)

where \mathbf{A}_{\mathrm{F}}(T)=\oplus_{k=1}^{M}{\mathbf{A}_{f}(T_{k})}, \mathbf{A}_{\mathrm{B}}(T)=\oplus_{k=1}^{M}{\mathbf{A}_{b}(T_{k})}, and \oplus is the direct sum stacking all diagonal sub-matrices. The forward and backward sub-matrices are

\mathbf{A}_{f}(t)=\begin{pmatrix}\mathbf{E}&\mathbf{0}\\ \mathbf{F}(t)&\mathbf{G}(t)\end{pmatrix},~\mathbf{A}_{b}(t)=\begin{pmatrix}\mathbf{U}&\mathbf{0}\\ \mathbf{V}(t)&\mathbf{W}(t)\end{pmatrix}, (4)

which are partitioned into four blocks in \mathbb{R}^{s\times s}. The entry at the i-th row and j-th column of each block is defined by

\mathbf{E}_{ij}=\begin{cases}(i-1)!&\mathit{if}~i=j,\\ 0&\mathit{if}~i\neq j,\end{cases} (5)
\mathbf{F}_{ij}(t)=\begin{cases}(j-1)!/(j-i)!\cdot t^{j-i}&\mathit{if}~i\leq j,\\ 0&\mathit{if}~i>j,\end{cases} (6)
\mathbf{G}_{ij}(t)=\frac{(s+j-1)!}{(s+j-i)!}\cdot t^{s-i+j}, (7)
\mathbf{U}_{ij}=\begin{cases}1/(i-1)!&\mathit{if}~i=j,\\ 0&\mathit{if}~i\neq j,\end{cases} (8)
\mathbf{V}_{ij}(t)=\frac{\sum_{k=0}^{s-\max(i,j)}(-1)^{k}\binom{s}{i+k}\binom{2s-j-k-1}{s-1}}{(j-1)!~(-1)^{i}\cdot t^{s+i-j}}, (9)
\mathbf{W}_{ij}(t)=\frac{\sum_{k=0}^{s-\max(i,j)}\binom{s-k-1}{i-1}\binom{2s-j-k-1}{s-1}}{(j-1)!~(-1)^{i+j}\cdot t^{s+i-j}}. (10)
Proof.

The forward sub-matrix \mathbf{A}_{f}(t), given in (4), (5), (6), and (7), comes from the definition of d_{i} in (2). Actually, \mathbf{A}_{f}(t) is a general confluent Vandermonde matrix generated from two variables \lambda_{0}=0 and \lambda_{1}=t, both with multiplicity s. According to Spitzbart's Theorem [16], the inverse of \mathbf{A}_{f}(t) always exists for \lambda_{0}\neq\lambda_{1}, and its entries are exactly the coefficients of a set of polynomials constructed from \lambda_{0} and \lambda_{1}. The backward sub-matrix \mathbf{A}_{b}(t), given in (4), (8), (9), and (10), is derived following [16]. The process only involves lengthy but mechanical derivation and is thus omitted here for brevity. Obviously, the map from c_{i} to d_{i} and its inverse are smooth at any T\in\mathbb{R}^{M}_{>0}. The bijectivity and the smoothness imply a diffeomorphism. ∎
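
As a concrete sanity check (an illustrative special case added here, not one used later in the paper), the lowest-order setting s=2, i.e., acceleration minimization with cubic pieces, turns (5)-(10) into

\mathbf{A}_{f}(t)=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 1&t&t^{2}&t^{3}\\ 0&1&2t&3t^{2}\end{pmatrix},~\mathbf{A}_{b}(t)=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ -3/t^{2}&-2/t&3/t^{2}&-1/t\\ 2/t^{3}&1/t^{2}&-2/t^{3}&1/t^{2}\end{pmatrix},

which is exactly the familiar map between the coefficients of a cubic Hermite segment and its boundary positions and velocities; multiplying the two matrices recovers the identity, as Proposition 1 requires.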

Proposition 1 gives analytic transformations between the double descriptions, which have long been evaluated unwisely. In [1], \mathbf{A}_{b}(t) is numerically computed by inverting \mathbf{A}_{f}(t) for any given t. In [2], the structure of \mathbf{A}_{f}(t) is exploited so that \mathbf{A}_{b}(t) is evaluated efficiently and stably via the Schur complement, where only the inverse of a sub-matrix is needed. Our previous work [10] has the fewest online computations, where the coefficients in \mathbf{A}_{b}(t) are numerically computed offline by setting t=1. However, all of these schemes involve numerically unstable matrix inversion. Proposition 1 is free of these drawbacks thanks to its exact analytic expression. The diffeomorphism structure can be further utilized to greatly improve the efficiency and quality of trajectory generation.

IV-B Linear-Complexity Trajectory Generation

Consider the following trajectory generation problem,

\min_{c,T}~J(c,T)=\int_{0}^{\tau_{M}}{p^{(s)}(t)^{2}}\mathrm{d}t, (11)
\mathrm{s.t.}~p^{(j)}(0)=d_{0,j},~0\leq j<s, (12)
p^{(j)}(\tau_{M})=d_{M,j},~0\leq j<s, (13)
p(\tau_{i})=q_{i},~0<i<M. (14)

The initial and terminal derivatives are specified by d_{0}=(d_{0,0},\dots,d_{0,s-1})^{\mathrm{T}} and d_{M}=(d_{M,0},\dots,d_{M,s-1})^{\mathrm{T}}, respectively. Each q_{i} is a specified intermediate waypoint at \tau_{i}. The (s-1)-times continuous differentiability of p(t) on [0,\tau_{M}] is implicitly required by the definition of J(c,T), which is therefore not explicitly formulated as equality constraints here.

Although a closed-form solution of (11) is given by Bry et al. [4], its efficiency is limited by the numerical evaluation of \mathbf{A}_{b}(t) and the sparse permutation matrix inverse. To attain linear complexity, Burke et al. [8] leverage the problem structure to calculate both primal and dual variables through a block tridiagonal linear equation system. However, it still requires frequently inverting sub-blocks. The size of the system is also redundant because of the dual variables. Therefore, we give a linear-complexity scheme with minimal problem size, where matrix inversion is eliminated entirely.

Consider the \{d,T\} description of p(t). We only need to obtain all unspecified entries in d, which are indeed p^{(j)}(\tau_{i}) for 1\leq i<M and 1\leq j<s. Rewrite d as

d=\mathbf{P}(\bar{d}+\mathbf{B}\widetilde{d}) (15)

where \widetilde{d}\in\mathbb{R}^{(M-1)(s-1)} is a vector containing all unspecified entries. The constant vector \bar{d}\in\mathbb{R}^{(M+1)s} is defined as

\bar{d}=(d_{0}^{\mathrm{T}},q_{1},0_{s-1},\dots,q_{M-1},0_{s-1},d_{M}^{\mathrm{T}})^{\mathrm{T}}, (16)

where 0_{s-1}\in\mathbb{R}^{s-1} is a zero vector. The permutation matrix \mathbf{P}=(\mathbf{P}_{1}^{\mathrm{T}},\dots,\mathbf{P}_{M}^{\mathrm{T}})^{\mathrm{T}} is defined as

\mathbf{P}_{i}=\left(\mathbf{0}_{2s\times(i-1)s},~\mathbf{I}_{2s},~\mathbf{0}_{2s\times(M-i)s}\right). (17)

The matrix \mathbf{B}=(\mathbf{B}_{1},\dots,\mathbf{B}_{M-1}) is defined as

\mathbf{B}_{i}=\left(\mathbf{0}_{(s-1)\times is},~\mathbf{D},~\mathbf{0}_{(s-1)\times(M-i)s}\right)^{\mathrm{T}}, (18)

where \mathbf{D}=(0_{s-1},\mathbf{I}_{s-1}). It should be noted that computing (15) only involves orderly accessing entries. We do not actually need to compute the matrix products, considering that \mathbf{P} and \mathbf{B} are highly structured.
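
To make the claim above concrete, the following sketch (C++ with Eigen assumed; an illustration of the index bookkeeping, not the released implementation) assembles the stacked vector \bar{d}+\mathbf{B}\widetilde{d} of length (M+1)s directly: block i of \widetilde{d} is written immediately after the waypoint entry q_{i}, and each d_{i} is then read as the 2s consecutive entries starting at offset (i-1)s, which is exactly what \mathbf{P}_{i} selects.

#include <Eigen/Dense>

// Assemble dbar + B * dtilde without forming P or B (1-D spline).
// d0, dM: specified boundary derivatives (each of length s).
// q: intermediate waypoints q_1, ..., q_{M-1}.
// dtilde: the (M-1)(s-1) unspecified derivatives.
Eigen::VectorXd assembleStacked(const Eigen::VectorXd &d0, const Eigen::VectorXd &dM,
                                const Eigen::VectorXd &q, const Eigen::VectorXd &dtilde,
                                int s) {
  const int M = (int)q.size() + 1;
  Eigen::VectorXd x((M + 1) * s);
  x.head(s) = d0;                                   // derivatives at tau_0
  for (int i = 1; i < M; ++i) {
    x(i * s) = q(i - 1);                            // waypoint q_i
    x.segment(i * s + 1, s - 1) =
        dtilde.segment((i - 1) * (s - 1), s - 1);   // unknown derivatives at tau_i
  }
  x.tail(s) = dM;                                   // derivatives at tau_M
  return x;
}
// The end-derivative vector of the i-th piece (i = 1, ..., M) is then
// d_i = x.segment((i - 1) * s, 2 * s).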

The cost function J(c,T) indeed takes a quadratic form

J(c,T)=c^{\mathrm{T}}\mathbf{Q}_{\Sigma}(T)c (19)

in which \mathbf{Q}_{\Sigma}(T)=\oplus_{i=1}^{M}{\mathbf{Q}(T_{i})}. Note that the symmetric matrix \mathbf{Q}(t) has an analytical form whose entries are simple power functions of t. Its analytical form is provided by Bry et al. in the appendix of [4], to which we refer for details. Substituting (3) and (15) into (19) gives

J=(\bar{d}+\mathbf{B}\widetilde{d})^{\mathrm{T}}\mathbf{P}^{\mathrm{T}}\mathbf{H}_{\Sigma}(T)\mathbf{P}(\bar{d}+\mathbf{B}\widetilde{d}) (20)

in which \mathbf{H}_{\Sigma}(T)=\oplus_{i=1}^{M}{\mathbf{H}(T_{i})} is a symmetric matrix. All of its diagonal blocks are fully determined by the matrix function

\mathbf{H}(t)=\mathbf{A}_{b}^{\mathrm{T}}(t)\mathbf{Q}(t)\mathbf{A}_{b}(t). (21)

Obviously, the analytical form of \mathbf{H}(t) can be easily derived for a fixed s by combining our Proposition 1 and the result from Bry et al. [4]. Therefore, we omit the parameter T and denote \mathbf{H}_{\Sigma}=\oplus_{i=1}^{M}{\mathbf{H}_{i}} hereafter, because it is trivial to obtain all diagonal blocks for any given T in linear time and space.
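
As a hedged illustration of (21) rather than a formula used later, substituting the s=2 sub-matrix \mathbf{A}_{b}(t) and the corresponding \mathbf{Q}(t) yields

\mathbf{H}(t)=\begin{pmatrix}12/t^{3}&6/t^{2}&-12/t^{3}&6/t^{2}\\ 6/t^{2}&4/t&-6/t^{2}&2/t\\ -12/t^{3}&-6/t^{2}&12/t^{3}&-6/t^{2}\\ 6/t^{2}&2/t&-6/t^{2}&4/t\end{pmatrix},

so that d_{i}^{\mathrm{T}}\mathbf{H}(T_{i})d_{i} is the classic expression of \int_{0}^{T_{i}}\ddot{p}_{i}(t)^{2}\mathrm{d}t in terms of end positions and velocities. The jerk (s=3) and snap (s=4) cases used in this paper are obtained in the same mechanical way.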

Differentiating J w.r.t. \widetilde{d} gives

\mathrm{d}{J}/\mathrm{d}{\widetilde{d}}=2\mathbf{B}^{\mathrm{T}}\mathbf{P}^{\mathrm{T}}\mathbf{H}_{\Sigma}\mathbf{P}(\mathbf{B}\widetilde{d}+\bar{d}). (22)

The optimal \widetilde{d} satisfies \|\mathrm{d}{J}/\mathrm{d}{\widetilde{d}}\|=0, i.e.,

\mathbf{M}\widetilde{d}=\bar{b} (23)

where \mathbf{M}=\mathbf{B}^{\mathrm{T}}\mathbf{P}^{\mathrm{T}}\mathbf{H}_{\Sigma}\mathbf{P}\mathbf{B} and \bar{b}=-\mathbf{B}^{\mathrm{T}}\mathbf{P}^{\mathrm{T}}\mathbf{H}_{\Sigma}\mathbf{P}\bar{d}.

Actually, the computation of \mathbf{M} and \bar{b} is quite easy. We partition each \mathbf{H}_{i} into four square blocks as

\mathbf{H}_{i}=\begin{pmatrix}\boldsymbol{\Gamma}_{i}&\boldsymbol{\Lambda}_{i}\\ \boldsymbol{\Phi}_{i}&\boldsymbol{\Omega}_{i}\end{pmatrix}. (24)

Expanding \mathbf{M} gives

\mathbf{M}=\begin{pmatrix}\boldsymbol{\alpha}_{1}&\boldsymbol{\beta}_{2}&\mathbf{0}&\cdots&\mathbf{0}&\mathbf{0}\\ \boldsymbol{\gamma}_{2}&\boldsymbol{\alpha}_{2}&\boldsymbol{\beta}_{3}&\cdots&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\boldsymbol{\gamma}_{3}&\boldsymbol{\alpha}_{3}&\cdots&\mathbf{0}&\mathbf{0}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ \mathbf{0}&\mathbf{0}&\mathbf{0}&\cdots&\boldsymbol{\alpha}_{M-2}&\boldsymbol{\beta}_{M-1}\\ \mathbf{0}&\mathbf{0}&\mathbf{0}&\cdots&\boldsymbol{\gamma}_{M-1}&\boldsymbol{\alpha}_{M-1}\end{pmatrix} (25)

where

\boldsymbol{\alpha}_{i}=\mathbf{D}(\boldsymbol{\Omega}_{i}+\boldsymbol{\Gamma}_{i+1})\mathbf{D}^{\mathrm{T}}, (26)
\boldsymbol{\beta}_{i}=\mathbf{D}\boldsymbol{\Lambda}_{i}\mathbf{D}^{\mathrm{T}},~\boldsymbol{\gamma}_{i}=\mathbf{D}\boldsymbol{\Phi}_{i}\mathbf{D}^{\mathrm{T}}. (27)

Similarly, we partition \bar{d} as

\bar{d}=(\kappa_{0}^{\mathrm{T}},\kappa_{1}^{\mathrm{T}},\dots,\kappa_{M-1}^{\mathrm{T}},\kappa_{M}^{\mathrm{T}})^{\mathrm{T}} (28)

where each \kappa_{i}\in\mathbb{R}^{s} is a constant vector. Expanding \bar{b} gives

\bar{b}=(b_{1}^{\mathrm{T}},b_{2}^{\mathrm{T}},\dots,b_{M-2}^{\mathrm{T}},b_{M-1}^{\mathrm{T}})^{\mathrm{T}} (29)

in which the i-th part can be computed as

b_{i}=-\mathbf{D}\left(\boldsymbol{\Phi}_{i}\kappa_{i-1}+(\boldsymbol{\Omega}_{i}+\boldsymbol{\Gamma}_{i+1})\kappa_{i}+\boldsymbol{\Lambda}_{i}\kappa_{i+1}\right). (30)

Now we summarize the procedure to obtain the optimal trajectory. First, compute each \mathbf{H}_{i} for the given T using the analytical function \mathbf{H}(t) defined in (21). Second, compute \bar{d} according to its definition (16) from the specified derivatives. Third, compute the nonzero entries in \mathbf{M} and \bar{b} according to (26), (27) and (30). Fourth, solve the linear equation system (23) using banded PLU factorization [17]. Finally, recover the optimal coefficient vector c through (15) and (3).

The procedure above only costs O(M) linear time and space complexity. No numerical matrix inversion is needed in the whole procedure, because the evaluation of \mathbf{A}_{b} is handled by its analytical form. Moreover, the linear equation system to be solved only involves the necessary primal decision variables, thus achieving the minimal problem size.
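
To sketch why the solve in the fourth step stays linear in M, note that (23) has scalar half-bandwidth 2s-3, so elimination and back substitution touch O(s^{2}) entries per row. The following is an illustrative banded Gaussian elimination without pivoting, assuming a well-conditioned band and dense storage for clarity (C++ with Eigen assumed); the released implementation performs banded PLU with partial pivoting as in [17].

#include <Eigen/Dense>
#include <algorithm>

// Solve A x = b for a banded A with lower/upper half-bandwidth w (here w = 2s-3).
// Illustrative only: dense storage, no pivoting.
Eigen::VectorXd solveBanded(Eigen::MatrixXd A, Eigen::VectorXd b, int w) {
  const int n = (int)A.rows();
  for (int k = 0; k < n - 1; ++k) {
    const int last = std::min(n - 1, k + w);        // band limit for rows and columns
    for (int i = k + 1; i <= last; ++i) {
      const double l = A(i, k) / A(k, k);           // elimination multiplier
      A.row(i).segment(k, last - k + 1) -= l * A.row(k).segment(k, last - k + 1);
      b(i) -= l * b(k);
    }
  }
  Eigen::VectorXd x(n);
  for (int i = n - 1; i >= 0; --i) {                // back substitution within the band
    const int last = std::min(n - 1, i + w);
    double sum = b(i);
    for (int j = i + 1; j <= last; ++j) sum -= A(i, j) * x(j);
    x(i) = sum / A(i, i);
  }
  return x;
}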

IV-C Parameter Gradient Computation in Double Descriptions

Although the double descriptions make it possible to obtain the solution of (11) cheaply, the resultant trajectory quality is still determined by the parameters of the problem, i.e., the intermediate waypoints q=(q_{1},\dots,q_{M-1})^{\mathrm{T}} and the piece times T. Further optimization of q and T is needed to improve the trajectory quality while maintaining feasibility. Therefore, we utilize the diffeomorphism between the double descriptions of the trajectory to derive the analytical gradient w.r.t. q and T. The gradient helps to obtain the optimal q and T, which bridges a gap in many traditional trajectory planning methods such as [18] and [19].

For any pair of q and T, denote by \widetilde{d}(q,T) the corresponding optimal \widetilde{d}, which satisfies

\widetilde{d}(q,T)=\operatorname*{arg\,min}_{\widetilde{d}}{J(\mathbf{A}_{\mathrm{B}}(T)\mathbf{P}(\bar{d}+\mathbf{B}\widetilde{d}),T)}. (31)

Then, the optimal c, denoted by c(q,T), is computed as

c(q,T)=\mathbf{A}_{\mathrm{B}}(T)\mathbf{P}(\bar{d}+\mathbf{B}\widetilde{d}(q,T)). (32)

Define a new cost as \hat{J}(q,T):=J(c(q,T),T). The gradient we are concerned with is that of \hat{J}(q,T), i.e., \partial\hat{J}/\partial q and \partial\hat{J}/\partial T.

Without causing ambiguity, we temporarily omit the arguments of \hat{J}(q,T), J(c,T), \widetilde{d}(q,T), c(q,T), \mathbf{A}_{\mathrm{B}}(T) and \mathbf{A}_{\mathrm{F}}(T) for simplicity. As for the cost function in (31), \widetilde{d} is a stationary point, which means its gradient w.r.t. \widetilde{d} satisfies \|(\partial J/\partial c)^{\mathrm{T}}\mathbf{A}_{\mathrm{B}}\mathbf{P}\mathbf{B}\|=0. Now we know that the gradient computation in either \mathbb{P}_{\mathrm{c}} or \mathbb{P}_{\mathrm{d}} possesses its own convenience. In \mathbb{P}_{\mathrm{c}}, the gradients \partial J/\partial T_{i} and \partial J/\partial c_{i} both have easy-to-derive analytic expressions. In \mathbb{P}_{\mathrm{d}}, the gradient of the cost (31) w.r.t. \widetilde{d} is already zero. Thus, we first express \partial\hat{J}/\partial T and \partial\hat{J}/\partial q in terms of \partial J/\partial T_{i} and \partial J/\partial c_{i}. Then, the diffeomorphism in Proposition 1 is utilized to transform them into \mathbb{P}_{\mathrm{d}} such that the terms relevant to \widetilde{d} can be eliminated.

As for the gradient of \hat{J} w.r.t. T_{i}, we have

\frac{\partial\hat{J}}{\partial T_{i}}=\frac{\partial J}{\partial T_{i}}+\left(\frac{\partial J}{\partial c}\right)^{\mathrm{T}}\frac{\partial\mathbf{A}_{\mathrm{B}}}{\partial T_{i}}\mathbf{P}(\bar{d}+\mathbf{B}\widetilde{d})+\left(\frac{\partial J}{\partial c}\right)^{\mathrm{T}}\mathbf{A}_{\mathrm{B}}\mathbf{P}\mathbf{B}\frac{\partial\widetilde{d}}{\partial T_{i}}
=\frac{\partial J}{\partial T_{i}}+\left(\frac{\partial J}{\partial c}\right)^{\mathrm{T}}\frac{\partial\mathbf{A}_{\mathrm{B}}}{\partial T_{i}}\mathbf{A}_{\mathrm{F}}c
=\frac{\partial J}{\partial T_{i}}+\left(\frac{\partial J}{\partial c_{i}}\right)^{\mathrm{T}}\dot{\mathbf{A}}_{b}(T_{i})\mathbf{A}_{f}(T_{i})c_{i}. (33)

As for the gradient of \hat{J} w.r.t. q_{i}, we have

\frac{\partial\hat{J}}{\partial q_{i}}=\left(\frac{\partial J}{\partial c}\right)^{\mathrm{T}}\mathbf{A}_{\mathrm{B}}\mathbf{P}\left(\frac{\partial\bar{d}}{\partial q_{i}}+\mathbf{B}\frac{\partial\widetilde{d}}{\partial q_{i}}\right)
=\left(\frac{\partial J}{\partial c}\right)^{\mathrm{T}}\mathbf{A}_{\mathrm{B}}\mathbf{P}\frac{\partial\bar{d}}{\partial q_{i}}
=\sum_{k=0,1}\left(\frac{\partial J}{\partial c_{i+k}}\right)^{\mathrm{T}}\mathbf{A}_{b}(T_{i+k})e_{(1-k)s+1}, (34)

where e_{j} is the j-th column vector of \mathbf{I}_{2s}. As for \partial J/\partial T_{i} and \partial J/\partial c_{i}, their analytic expressions are clearly derived in the appendix of [4].

It is obvious that each entry of the gradients given in (33) and (34) is computed independently of the piece number M. Thus, computing the gradient w.r.t. q and T only costs O(M) linear time and space complexity. Besides, all involved formulas have smooth analytic forms for T\in\mathbb{R}^{M}_{>0}, which greatly increases the efficiency of gradient computation.
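
For illustration, the computations in (33) and (34) can be organized as a single pass over the pieces. In the sketch below (C++ with Eigen assumed), the helpers partialJ_T, partialJ_c, Ab, dAb_dt and Af are hypothetical placeholder names for the per-piece expressions from the appendix of [4] and from Proposition 1 together with its time derivative; they are declared but not implemented here.

#include <Eigen/Dense>
#include <vector>

Eigen::MatrixXd Ab(double t, int s);       // A_b(t), equations (8)-(10) [placeholder]
Eigen::MatrixXd dAb_dt(double t, int s);   // time derivative of A_b(t)  [placeholder]
Eigen::MatrixXd Af(double t, int s);       // A_f(t), equations (5)-(7)  [placeholder]
Eigen::VectorXd partialJ_c(const Eigen::VectorXd &ci, double Ti, int s); // from [4]
double          partialJ_T(const Eigen::VectorXd &ci, double Ti, int s); // from [4]

// Accumulate the gradients of J_hat w.r.t. T (size M) and q (size M-1), 1-D case.
void gradients(const std::vector<Eigen::VectorXd> &c, const Eigen::VectorXd &T, int s,
               Eigen::VectorXd &gradT, Eigen::VectorXd &gradQ) {
  const int M = (int)T.size();
  gradT.resize(M);
  gradQ = Eigen::VectorXd::Zero(M - 1);
  for (int i = 0; i < M; ++i) {
    const Eigen::VectorXd gc = partialJ_c(c[i], T(i), s);
    // Equation (33): per-piece time gradient.
    gradT(i) = partialJ_T(c[i], T(i), s) +
               gc.dot(dAb_dt(T(i), s) * (Af(T(i), s) * c[i]));
    // Equation (34): q_{i+1} is the end position of this piece (k = 0 term),
    // q_i is its start position (k = 1 term of the previous waypoint).
    const Eigen::MatrixXd Abi = Ab(T(i), s);
    if (i < M - 1) gradQ(i) += gc.dot(Abi.col(s));
    if (i > 0)     gradQ(i - 1) += gc.dot(Abi.col(0));
  }
}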

Aside from efficiency, our analytical gradient scheme enjoys advantages in numerical stability. Specifically, the major numerical issue comes from the structure of \hat{J}(q,T) w.r.t. T. As analyzed in [10], if any piece time goes to zero, the trajectory energy goes to infinity, making \hat{J} behave like a barrier function. Consequently, any scheme that evaluates the gradient indirectly suffers from poor accuracy, since the finite differences it relies on [6] require an unrealistically small step size near the barrier. In comparison, ours is free from this issue because the accurate gradient is available.

V Applications

V-A Fast Minimum Jerk/Snap Trajectory Generation

One natural application of the previous linear-complexity solution scheme is that we can compute minimum-jerk/snap trajectories with extreme efficiency. Actually, there are already many reliable trajectory generation schemes for this topic. What really matters is whether a scheme can be used as a black box without even worrying about its computational burden. We show that our scheme almost satisfies the requirements of such a black box.

To demonstrate the efficiency of our scheme, we benchmark it against several conventional schemes, including the QP scheme by Mellinger et al. [6], the closed-form scheme by Bry et al. [4], and the linear-complexity scheme by Burke et al. [8]. As for Mellinger's scheme, we use OSQP [20] to solve the QP formed by the linear conditions and the quadratic cost. As for Bry's scheme, both the dense and sparse linear system solvers are implemented to calculate the closed-form solution. As for Burke's scheme, we implement an optimized version of our own for the sake of fairness. More specifically, our version only costs several seconds to calculate a minimum-snap trajectory with 500,000 pieces, while the one in their paper is reported to cost more than 2 minutes [8]. All schemes are implemented in C++11 without any explicit hardware acceleration.

Figure 2: Benchmark of small-scale trajectory generation. Pink lines are from Mellinger's scheme [6]. Light blue lines are from Bry's scheme [4]. Green lines are from its sparse version. Red lines are from Burke's scheme [8]. Dark blue lines are from the proposed scheme. Solid lines are for jerk minimization. Dashed lines are for snap minimization.

We benchmark all five schemes on 3-dimensional minimum jerk (s=3) and minimum snap (s=4) problems. All comparisons are conducted on an Intel Core i7-8700 CPU under a Linux environment. For small-scale problems, the piece number ranges from 2 to 2^{7}. In each case, 100 sub-problems are randomly generated to be solved by these schemes. The results are provided in Fig. 2. For large-scale problems, we only benchmark the two linear-complexity schemes, where the piece number is sparsely sampled from 2^{10} to 2^{20}. The results are given in Fig. 3.

As shown in Fig. 2, Bry's closed-form solution and Burke's scheme are faster than Mellinger's scheme for small piece numbers. The dense version of Bry's scheme becomes the slowest as the piece number grows, since the cubic complexity of the dense solver dominates the time. Its sparse version still retains its efficiency as the piece number grows. Due to the linear complexity, Burke's scheme and our scheme consume significantly less time than all other schemes. Moreover, ours is nearly an order of magnitude faster than Burke's at any problem scale in Fig. 2 or Fig. 3. Notably, ours is able to generate trajectories with 10^{6} pieces in less than 1 second.

Figure 3: Benchmark of large-scale trajectory generation. Red lines are from Burke's scheme [8]. Dark blue lines are from the proposed scheme. Solid lines are for jerk minimization. Dashed lines are for snap minimization.

V-B Fast Global Trajectory Optimization

Now we give another application for the linear-complexity solution and gradient. We provide a simple example here to significantly improve the efficiency of large-scale global trajectory optimization.

A drawback of traditional minimum-snap based schemes is the lack of flexibility to adjust the piece times and waypoints. Such schemes can only obtain gradient information unwisely by solving several perturbed problems [6]. To avoid this drawback, Gao et al. [7] propose a more flexible framework. It alternately optimizes the geometrical and temporal profiles of a trajectory through two well-designed convex formulations. This framework generates high-quality large-scale trajectories within safe flight corridors, but it cannot do so in real time.

We assume that a polyhedron-shaped flight corridor has been generated as in [7]. The flight corridor \mathcal{F} is defined as

\mathcal{F}=\bigcup_{i=1}^{M}{\mathcal{P}_{i}} (35)

where each \mathcal{P}_{i} is a finite convex polyhedron

\mathcal{P}_{i}=\left\{x\in\mathbb{R}^{3}~\big|~\mathbf{A}_{i}x\preceq b_{i}\right\}. (36)

Besides, locally sequential connection is also assumed

\begin{cases}\mathcal{P}_{i}\cap\mathcal{P}_{j}=\varnothing&\mathit{if}~|i-j|=2,\\ \mathcal{P}_{i}\cap\mathcal{P}_{j}\neq\varnothing&\mathit{if}~|i-j|\leq 1.\end{cases} (37)

For such an \mathcal{F}, the start and goal positions are located in \mathcal{P}_{1} and \mathcal{P}_{M}, respectively. As shown in Fig. 4, we assign each 3-dimensional intermediate waypoint q_{i} to the intersection \mathcal{P}_{i}\cap\mathcal{P}_{i+1}, which roughly ensures the trajectory safety.

Figure 4: Each intermediate waypoint q_{i} is confined within the intersection of two polyhedra \mathcal{P}_{i}\cap\mathcal{P}_{i+1} using barrier functions. All piece times T_{i} and intermediate waypoints q_{i} are decision variables to be optimized.

To obtain the spatial-temporal optimal trajectory within \mathcal{F}, we minimize the following cost function

J_{\Sigma}(q,T)=\hat{J}(q,T)+J_{F}(q)+J_{D}(q,T). (38)

The cost term \hat{J}(q,T) is the 3-dimensional version here. The cost term J_{F}(q) is a logarithmic barrier term that ensures each q_{i} is confined within \mathcal{P}_{i}\cap\mathcal{P}_{i+1}, defined as

J_{F}(q)=-\kappa\sum_{i=1}^{M-1}{\sum_{j=i}^{i+1}\mathbf{1}^{\mathrm{T}}\ln{[b_{j}-\mathbf{A}_{j}q_{i}]}}, (39)

where \kappa is a constant barrier coefficient, \mathbf{1} an all-ones vector of appropriate length, and \ln[\cdot] the entry-wise natural logarithm. Actually, any C^{2} clamped barrier function, which can be easily constructed by following [21], is an alternative that further removes the influence of the barrier in the interior of \mathcal{P}_{i}\cap\mathcal{P}_{i+1}. The cost term J_{D}(q,T) is a penalty that adjusts the trajectory aggressiveness. It is defined as

J_{D}(q,T)=\rho_{t}\sum_{i=1}^{M}{T_{i}}+\rho_{v}\sum_{i=1}^{M-1}{g\left(\left\|\frac{q_{i+1}-q_{i-1}}{T_{i+1}+T_{i}}\right\|^{2}-v_{m}^{2}\right)}+\rho_{a}\sum_{i=1}^{M-1}{g\left(\left\|\frac{(q_{i+1}-q_{i})/T_{i+1}-(q_{i}-q_{i-1})/T_{i}}{(T_{i+1}+T_{i})/2}\right\|^{2}-a_{m}^{2}\right)} (40)

where g(x)=\max\{x,0\}^{3} is a C^{2} penalty function, v_{m} the velocity limit, and a_{m} the acceleration limit. The constant \rho_{t} prevents the entire duration from growing too large. The constants \rho_{v} and \rho_{a} prevent the trajectory from being too aggressive. The value and gradient of J_{\Sigma}(q,T) also cost only linear time to compute. Then, we utilize L-BFGS [22] with strong Wolfe conditions as an efficient quasi-Newton method to minimize the cost function.
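
As a minimal sketch of how the barrier (39) and its gradient enter the optimizer (C++ with Eigen assumed; only the contribution of one waypoint q_{i} and one polyhedron (\mathbf{A}_{j},b_{j}) is shown, as an illustration rather than the released implementation):

#include <Eigen/Dense>
#include <cmath>

// Value of -kappa * 1^T ln(b - A q) and its gradient w.r.t. q,
// accumulated into grad. Assumes q lies strictly inside the polyhedron.
double barrierTerm(const Eigen::MatrixXd &A, const Eigen::VectorXd &b,
                   const Eigen::Vector3d &q, double kappa, Eigen::Vector3d &grad) {
  const Eigen::VectorXd slack = b - A * q;                 // positive inside
  double cost = 0.0;
  for (int k = 0; k < (int)slack.size(); ++k) cost -= kappa * std::log(slack(k));
  grad += kappa * A.transpose() * slack.cwiseInverse();    // kappa * A^T (1 ./ slack)
  return cost;
}

The penalty terms in (40) are handled analogously: g(x)=\max\{x,0\}^{3} is differentiable with g'(x)=3\max\{x,0\}^{2}, so their gradients w.r.t. q and T follow from the chain rule.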

Figure 5: If the constraints are violated too much on the piece between q_{i-1} and q_{i}, the corresponding polyhedron \mathcal{P}_{i} is split into two intersecting polyhedra \mathcal{P}_{i,0} and \mathcal{P}_{i,1}, where the barrier/penalty coefficients are also increased. Two perturbed planes near the perpendicular bisector of q_{i-1} and q_{i} are chosen as the new facets. A new waypoint q_{i-1,i} is also added in \mathcal{P}_{i,0}\cap\mathcal{P}_{i,1}.

As for the constraints, it is possible that the interior part of a piece becomes unsafe or violates the limits too much. To handle such situations, we first utilize the feasibility checker proposed in [10] to locate such a piece. Then, the corresponding \mathcal{P}_{i} is split into two intersecting parts, \mathcal{P}_{i,0} and \mathcal{P}_{i,1}, as shown in Fig. 5. An intermediate waypoint q_{i-1,i} is added as a decision variable. Accordingly, we increase the corresponding penalty or barrier coefficient only for these two new pieces. In general, a larger penalty coefficient and a shorter piece length make the soft constraint J_{D}(q,T) tighter. A larger barrier coefficient in J_{F}(q) makes q_{i-1,i} more likely to lie in the interior of \mathcal{P}_{i,0}\cap\mathcal{P}_{i,1}, which helps to ensure safety. Empirically, these operations are seldom needed according to our simulations.

Figure 6: Benchmark of spatial-temporal trajectory optimization in safe flight corridor. The red line is from Teach-Repeat-Replan [7]. The green line is from FASTER [14]. The blue line is from the proposed one. FASTER only supports small-scale corridors since the computation time of its MIQP grows approximately in an exponential way.

We benchmark our simple scheme against the global trajectory optimizer in Teach-Repeat-Replan [7], which minimizes J(c,T)+\rho_{t}\sum_{i=1}^{M}{T_{i}} under the same constraints. The interval allocation optimization in FASTER [14] is also compared here since it supports polyhedron-shaped corridor constraints. Since it does not optimize time, we initialize it using the total time from Teach-Repeat-Replan. We set s=3, \rho_{t}=32.0, v_{m}=4.0, and a_{m}=5.0 for all schemes and \rho_{v}=\rho_{a}=128.0 for the proposed one. The relative cost tolerance is set to 10^{-4} for ours and to the default values for the other two open-source implementations. The timeout is set to 1.5 s. For each size, the computation time is averaged over 10 randomly generated corridors. The results of the three methods are shown in Fig. 6. An intuitive comparison for large-scale trajectory generation is also provided in Fig. 1, where a spatial-temporal optimal trajectory is generated by our scheme using significantly less time.

VI Conclusion

In this paper, we explore and exploit the relation between the double descriptions of quadrotor polynomial trajectories. The resultant linear-complexity solution and gradient computation provide much flexibility and efficiency in classic trajectory generation problems. Simple applications are provided to demonstrate their promising performance. Our future work focuses on incorporating our results into existing local planners that use polynomial trajectories. It remains to be validated whether our results can greatly improve the trajectory quality in these planners without sacrificing efficiency.

References

  • [1] C. Richter, A. Bry, and N. Roy, “Polynomial trajectory planning for aggressive quadrotor flight in dense indoor environments,” in Proc. of the Intl. Sym. of Robot. Research, Singapore, Dec 2013.
  • [2] M. Burri, H. Oleynikova, M. Achtelik, and R. Siegwart, “Real-time visual-inertial mapping, re-localization and planning onboard MAVs in unknown environments,” in Proc. of the IEEE/RSJ Intl. Conf. on Intell. Robots and Syst., Hamburg, Germany, Sep 2015.
  • [3] H. Oleynikova, C. Lanegger, Z. Taylor, M. Pantic, A. Millane, R. Siegwart, and J. Nieto, “An open-source system for vision-based micro-aerial vehicle mapping, planning, and flight in cluttered environments,” Journal of Field Robotics, vol. 37, no. 4, pp. 642–666, 2020.
  • [4] A. Bry, C. Richter, A. Bachrach, and N. Roy, “Aggressive flight of fixed-wing and quadrotor aircraft in dense indoor environments,” The International Journal of Robotics Research, vol. 34, p. 1002, 2015.
  • [5] M. M. De Almeida and M. Akella, “New numerically stable solutions for minimum-snap quadcopter aggressive maneuvers,” in Proc. of the American Control Conference, Seattle, USA, 2017.
  • [6] D. Mellinger and V. Kumar, “Minimum snap trajectory generation and control for quadrotors,” in Proc. of the IEEE Intl. Conf. on Robot. and Autom., Shanghai, China, May 2011.
  • [7] F. Gao, L. Wang, B. Zhou, X. Zhou, J. Pan, and S. Shen, “Teach-Repeat-Replan: A complete and robust system for aggressive flight in complex environments,” IEEE Transactions on Robotics, vol. 36, no. 5, pp. 1526–1545, 2020.
  • [8] D. Burke, A. Chapman, and I. Shames, “Generating minimum-snap quadrotor trajectories really fast,” in Proc. of the IEEE/RSJ Intl. Conf. on Intell. Robots and Syst., Las Vegas, USA, 2020.
  • [9] M. M. de Almeida, R. Moghe, and M. Akella, “Real-time minimum snap trajectory generation for quadcopters: Algorithm speed-up through machine learning,” in Proc. of the IEEE Intl. Conf. on Robot. and Autom., Montreal, Canada, May 2019.
  • [10] Z. Wang, X. Zhou, C. Xu, J. Chu, and F. Gao, “Alternating minimization based trajectory generation for quadrotor aggressive flight,” IEEE Robotics and Automation Letters, vol. 5, no. 3, pp. 4836–4843, 2020.
  • [11] J. A. Preiss, K. Hausman, G. S. Sukhatme, and S. Weiss, “Trajectory optimization for self-calibration and navigation.” in Robotics: Science and Systems, 2017.
  • [12] F. Gao, W. Wu, Y. Lin, and S. Shen, “Online safe trajectory generation for quadrotors using fast marching method and Bernstein basis polynomial,” in Proc. of the IEEE Intl. Conf. on Robot. and Autom., Brisbane, Australia, May 2018.
  • [13] B. Zhou, F. Gao, L. Wang, C. Liu, and S. Shen, “Robust and efficient quadrotor trajectory generation for fast autonomous flight,” IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3529–3536, 2019.
  • [14] J. Tordesillas, B. T. Lopez, and J. P. How, “FASTER: Fast and safe trajectory planner for flights in unknown environments,” in Proc. of the IEEE/RSJ Intl. Conf. on Intell. Robots and Syst., Macau, China, 2019.
  • [15] E. Verriest and F. Lewis, “On the linear quadratic minimum-time problem,” IEEE Transactions on Automatic Control, vol. 36, no. 7, pp. 859–863, 1991.
  • [16] R. Schappelle, “The inverse of the confluent Vandermonde matrix,” IEEE Transactions on Automatic Control, vol. 17, no. 5, pp. 724–725, 1972.
  • [17] G. H. Golub and C. F. Van Loan, Matrix Computations.   The Johns Hopkins University Press, 2013.
  • [18] H. Oleynikova, M. Burri, Z. Taylor, J. Nieto, R. Siegwart, and E. Galceran, “Continuous-time trajectory optimization for online uav replanning,” in Proc. of the IEEE/RSJ Intl. Conf. on Intell. Robots and Syst., Daejeon, Korea, 2016.
  • [19] F. Gao, W. Wu, W. Gao, and S. Shen, “Flying on point clouds: Online trajectory generation and autonomous navigation for quadrotors in cluttered environments,” Journal of Field Robotics, vol. 36, no. 4, pp. 710–733, 2019.
  • [20] B. Stellato, G. Banjac, P. Goulart, A. Bemporad, and S. Boyd, “OSQP: An operator splitting solver for quadratic programs,” Mathematical Programming Computation, pp. 1–36, 2020.
  • [21] M. Li, Z. Ferguson, T. Schneider, T. Langlois, D. Zorin, D. Panozzo, C. Jiang, and D. M. Kaufman, “Incremental potential contact: Intersection-and inversion-free, large-deformation dynamics,” ACM Transactions on Graphics, vol. 39, no. 4, 2020.
  • [22] D. C. Liu and J. Nocedal, “On the limited memory BFGS method for large scale optimization,” Mathematical programming, vol. 45, no. 1-3, pp. 503–528, 1989.