
Continuous-Time Ultra-Wideband-Inertial Fusion

Kailai Li1, Ziyu Cao2, and Uwe D. Hanebeck2

Published in IEEE Robotics and Automation Letters (RA-L). ©2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses. DOI: 10.1109/LRA.2023.3281932

This work was partially supported by the German Federal Ministry of Education and Research (BMBF) under grant POMAS (FKZ 01IS17042), the German Research Foundation (DFG) under grant HA 3789/25-1, and the Swedish Research Council under grant Scalable Kalman Filters.

1Kailai Li is with the Division of Automatic Control, Department of Electrical Engineering, Linköping University, Sweden (e-mail: [email protected])

2Ziyu Cao and Uwe D. Hanebeck are with the Intelligent Sensor-Actuator-Systems Laboratory, Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Germany (e-mail: [email protected]; [email protected])
Abstract

We introduce a novel framework of continuous-time ultra-wideband-inertial sensor fusion for online motion estimation. Quaternion-based cubic cumulative B-splines are exploited for parameterizing motion states continuously over time. Systematic derivations of analytic kinematic interpolations and spatial differentiations are further provided. Based thereon, a new sliding-window spline fitting scheme is established for asynchronous multi-sensor fusion and online calibration. We conduct a dedicated validation of the quaternion spline fitting method and evaluate the proposed system, SFUISE (spline fusion-based ultra-wideband-inertial state estimation), in real-world scenarios using a public data set and our own experiments. The proposed sensor fusion system is real-time capable and delivers superior performance over state-of-the-art discrete-time schemes. We release the source code and our own experimental data at https://github.com/KIT-ISAS/SFUISE.

Index Terms:
Sensor fusion, localization.

I Introduction and Related Work

Online estimation of dynamical motions is of fundamental importance in achieving reliable autonomy of mobile robots [1, 2, 3, 4]. Recent advancements in ultra-wideband (UWB) technology have offered promising alternative solutions to localization in GPS-denied environments. Compared with common sensing principles, e.g., cameras or LiDARs, UWB sensors are lightweight, low-cost, and more scalable in large-scale deployment, particularly in indoor scenarios [5]. Despite the high spatial resolution of ultra-wideband impulse radio, which transmits pulses of nanosecond duration, there are still technical challenges to achieving high-performance UWB-based tracking in practice. Non-line-of-sight (NLOS) and multipath conditions are well-known issues in UWB ranging, which depend on sensor placements and can be further exacerbated by complex and time-varying environments, such as those with moving obstacles [6, 7]. In addition, UWB ranging often exhibits non-Gaussian noise patterns, with surroundings-dependent interference and diffraction that are impossible to model parametrically [8, 9, 10]. Thus, basic UWB tracking solutions using multilateration are almost always insufficient for high-performance motion estimation.

Figure 1: Exemplary runs of SFUISE on UTIL: (A) const1-trial6-tdoa2, (B) const2-trial3-tdoa3, (C) const3-trial4-tdoa3, (D) const3-trial5-tdoa3. The proposed system delivers accurate trajectory estimates (green) compared to ground truth (red). Blue dots depict anchor positions.

The performance of UWB tracking can be improved through sensor fusion with other modalities. Inertial measurement units (IMUs) provide instantaneous and higher-order motion information that can bridge the gap between consecutive UWB readings. They are cost- and resource-efficient and can be easily integrated into UWB sensor networks. Conventional UWB-inertial fusion methods often rely on recursive filtering algorithms, particularly the extended Kalman filter (EKF), with inertial measurements facilitating state propagation and ultra-wideband ranging updating the predicted prior. One basic application was introduced in [11], where an EKF was used to localize micro aerial vehicles with a UWB-inertial setup. Further works extend this to six-DoF motion estimation through quaternion kinematics and incorporate error states to enhance overall system performance [8, 12, 13].

However, recursive filters rely on the Markov assumption, where evidence for predicting the current state is only traced back to the last state [4]. Sensor measurements of different modalities are usually fused in a decoupled manner, inducing substantial information loss in correlations across multi-sensor readings [2]. These issues can be substantially mitigated by fusing sensor measurements into one joint graph-based nonlinear optimization. It improves tracking accuracy, while still maintaining tractable computational complexity thanks to its sparse structure [14]. Such a paradigm shift has been predominantly reflected in visual or LiDAR based odometry [15, 1]. For UWB-inertial state estimation, current techniques have not fully embraced state-of-the-art methodologies, with most use cases limited to batchwise (offline) or planar motion estimation [16, 5].

Conventional sensor fusion schemes are built atop discrete timestamps of constant interval, necessitating temporal alignment of measurements w.r.t. the estimation [15]. However, different sensors fire asynchronously without any common time instant. In a single-sensor setup, measurements can also be non-uniformly sampled over time due to timestamp jitter, particularly in UWB sensing [8]. Interpolating asynchronous range measurements from different anchors results in degraded tracking performance, especially under outliers and complex noise patterns. Thus, continuous-time state estimation is appealing. In this regard, data-driven approaches based on Gaussian processes have been systematically investigated, where various motion priors are learned within the stochastic differential equation formulations [17, 18].

B-splines parameterize trajectories atop knots, or control points, through temporal polynomials, enabling interpolation at any given time instant with locality and smoothness. This concept can be practically generalized to Lie groups or nonlinear manifolds via reformulation into a cumulative form, based on which rigid body motions can be modeled continuously over time. In [19, 20], B-splines were applied to estimating continuous-time states via batchwise maximum a posteriori. Similar offline schemes have also been proposed for attitude estimation, multi-sensor calibration, and trajectory estimation using visual/event/LiDAR-inertial setups [21, 22, 23, 24, 25]. However, these batchwise schemes are computationally expensive due to the rudimentary strategies of computing time derivatives (via product rule) and Jacobians on B-splines (via numerical or automatic differentiations). A breakthrough in efficient continuous-time motion estimation using B-splines was made in [26], where recursive computation of time derivatives and Jacobians on SO(3) trajectories was introduced for spline fitting. This has inspired successive work in online motion estimation, with applications including RGB-D tracking, LiDAR/visual-inertial calibration and odometry [27, 28, 29, 30, 31].

While significant progress has been made in continuous-time sensor fusion, there is still a considerable gap towards more extensive engineering practice. To the best knowledge of the authors, no continuous-time solution is currently available for UWB-inertial fusion. Unit quaternions are widely accepted in robotics as a nonsingular rotation representation and have favorable attributes in memory efficiency, numerical stability, and computation [32, 2, 15, 33]. However, current spline-based state estimation systems rely heavily on the theory in [26] using rotation matrices, while unit quaternions are only utilized for basic arithmetic in implementation [30]. Thus, an open-source quaternion-based B-spline sensor fusion framework with systematically unified analytic kinematic interpolations and spatial differentiations is still missing.

Contribution

We introduce SFUISE, a novel Spline Fusion-based Ultra-wideband-Inertial State Estimation scheme (Sec. II). Quaternion-based cubic cumulative B-splines serve as the backbone of the state representation, with systematic and unified derivations of analytic kinematic interpolations and Jacobians (Sec. III). Based thereon, an efficient sliding-window spline fitting scheme is established to fuse UWB-inertial readings at raw timestamps, with an added option of online calibration (Sec. IV). A dedicated study is first conducted to validate the viability of the quaternion spline fitting scheme. Afterward, we evaluate SFUISE for UWB-inertial tracking in various real-world scenarios using a public data set and our own experiments, including comparisons with state-of-the-art discrete-time fusion schemes (Sec. V). The proposed system delivers real-time and superior performance over the discrete-time counterparts. Considering the generality of the proposed scheme to extensive scenarios, we open-source our implementation together with our own experimental data sets.

Figure 2: System pipeline of SFUISE.

II System Overview

In the considered scenario, we aim to estimate the following motion-related variables

$\underline{x}(t)=[\,\underline{q}(t)^{\top},\,\underline{p}(t)^{\top},\,\underline{b}(t)^{\top}\,]^{\top}\in\mathbb{S}^{3}\times\mathbb{R}^{3}\times\mathbb{R}^{6}\subset\mathbb{R}^{13}$   (1)

as a function over time. Quaternion $\underline{q}(t)\in\mathbb{S}^{3}$ and vector $\underline{p}(t)\in\mathbb{R}^{3}$ denote orientations and positions, respectively, and $\underline{b}(t)=[\,\underline{b}_{\texttt{acc}}^{\top},\underline{b}_{\texttt{gyro}}^{\top}\,]^{\top}\in\mathbb{R}^{6}$ incorporates the biases of the accelerometer and gyroscope. The proposed spline fusion-based ultra-wideband-inertial state estimation (SFUISE) system is depicted in Fig. 2. It comprises an estimation interface and a functional core of spline fusion. Asynchronous UWB and IMU measurements are first preprocessed for potential downsampling, to which a cubic cumulative B-spline is fitted over a time window of limited span. As sensor data are streamed in, the spline fusion window first grows to a pre-given width and afterward slides, both w.r.t. the current timestamp, for online state estimation. In the growing stage, additional online calibrations are performed to obtain the gravity vector $\underline{g}^{\texttt{W}}$ and the transformation $\mathbf{T}_{\texttt{W}}^{\texttt{U}}$ from the world (W) to the UWB (U) frame. Further, knot estimates given by spline fusion are sent back to the estimation interface, where a global spline is maintained and interpolated for visualization.

III State Estimation on Cumulative B-Splines

III-A Continuous-Time State Modeling

We deploy cubic (fourth-order) cumulative B-splines for continuous-time state representation. Being established upon a set of control points $\{(\underline{\mathscr{p}}_{i},t_{i})\}_{i=1}^{n}$ associated with timestamps $t_{i}$ of uniform interval, it allows for fusing sensor readings up to the second order of motion (e.g., from the accelerometer) [26]. The state at an arbitrary timestamp $t\in[\,t_{i},t_{i+1})$ can be interpolated w.r.t. a local set of knots $\{\underline{\mathscr{p}}_{i+j-2}\}_{j=0}^{3}$ according to

$\underline{p}(t)=\underline{\mathscr{p}}_{i-2}+\sum_{j=1}^{3}\lambda_{j}(u)\,\underline{\delta}_{j}\,,\quad\text{with}\quad u=\frac{t-t_{i}}{t_{i+1}-t_{i}}$   (2)

being the normalized time and $\underline{\delta}_{j}=\underline{\mathscr{p}}_{i+j-2}-\underline{\mathscr{p}}_{i+j-3}$ the distance between neighboring knots. Note that we use calligraphic fonts to denote spline knots, highlighting that they do not lie on the trajectory itself. The cumulative basis functions $\{\lambda_{j}(u)\}_{j=1}^{3}$ follow $[\,\lambda_{1}(u),\lambda_{2}(u),\lambda_{3}(u)\,]^{\top}=\Phi\,\underline{u}$, with

$\Phi=\frac{1}{6}\begin{bmatrix}5&3&-3&1\\ 1&3&3&-2\\ 0&0&0&1\end{bmatrix}\quad\text{and}\quad\underline{u}=[\,1,u,u^{2},u^{3}\,]^{\top}$

endowing cubic B-splines with $\mathcal{C}^{2}$-continuity. Expression (2) can be applied to model positions and IMU biases in (1). Based on Riemannian geometry, the concept of cumulative B-splines can be naturally extended to the manifold of unit quaternions [34]. It follows

$\underline{q}(t)=\underline{\mathscr{q}}_{i-2}\otimes\prod_{j=1}^{3}\mathrm{Exp}_{\mathds{1}}\big(\lambda_{j}(u)\,\underline{\delta}_{j}\big)\,,$   (3)

with $\otimes$ denoting the Hamilton product and $\lambda_{j}(u)$ the basis functions in (2). The distance between adjacent knots is quantified via the logarithm map $\underline{\delta}_{j}=\mathrm{Log}_{\mathds{1}}\big(\underline{\mathscr{q}}_{i+j-3}^{-1}\otimes\underline{\mathscr{q}}_{i+j-2}\big)$ at the identity $\mathds{1}=[1,0,0,0]^{\top}$ and contributes to the on-manifold interpolation via the exponential map $\mathrm{Exp}_{\mathds{1}}(\cdot)$ [35]. Unless otherwise specified, the term 'B-spline' in the following refers to the cumulative formulation with uniform knot intervals.
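To make the interpolation concrete, the following is a minimal C++/Eigen sketch (not the released SFUISE implementation; all function names are hypothetical) of the position interpolation (2) and the quaternion interpolation (3) for one spline segment, using the exponential and logarithm maps given in the Appendix.

```cpp
// Minimal sketch of Eqs. (2) and (3): cubic cumulative B-spline interpolation
// of position and orientation from the four local knots. Assumes uniform knot
// spacing and normalized time u in [0, 1).
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <array>
#include <cmath>

using Vec3 = Eigen::Vector3d;
using Quat = Eigen::Quaterniond;

// Exponential map at identity: v -> [cos|v|, sinc|v| * v] (cf. Appendix VI-A).
inline Quat expQ(const Vec3& v) {
  const double n = v.norm();
  const double s = (n < 1e-12) ? 1.0 : std::sin(n) / n;  // sinc
  return Quat(std::cos(n), s * v.x(), s * v.y(), s * v.z());
}

// Logarithm map at identity: unit quaternion -> R^3 (cf. Appendix VI-B).
inline Vec3 logQ(const Quat& q) {
  const double nv = q.vec().norm();
  if (nv < 1e-12) return Vec3::Zero();
  return std::atan2(nv, q.w()) / nv * q.vec();
}

// Cumulative basis [lambda_1, lambda_2, lambda_3]^T = Phi * [1, u, u^2, u^3]^T.
inline Eigen::Vector3d cumulativeBasis(double u) {
  Eigen::Matrix<double, 3, 4> Phi;
  Phi << 5, 3, -3,  1,
         1, 3,  3, -2,
         0, 0,  0,  1;
  Phi /= 6.0;
  return Phi * Eigen::Vector4d(1.0, u, u * u, u * u * u);
}

// Interpolate p(t) and q(t) from the local knots {p_{i-2}, ..., p_{i+1}}.
inline void interpolatePose(const std::array<Vec3, 4>& p,
                            const std::array<Quat, 4>& q,
                            double u, Vec3& p_out, Quat& q_out) {
  const Eigen::Vector3d lam = cumulativeBasis(u);
  p_out = p[0];
  q_out = q[0];
  for (int j = 1; j <= 3; ++j) {
    p_out += lam[j - 1] * (p[j] - p[j - 1]);                             // Eq. (2)
    q_out = q_out * expQ(lam[j - 1] * logQ(q[j - 1].inverse() * q[j]));  // Eq. (3)
  }
  q_out.normalize();
}
```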

To model motion-related states, knots are optimally estimated by fitting the spline to measurements in the least-squares sense. Constructing residuals in the objective requires kinematic interpolations on cubic B-splines w.r.t. the related sensing modalities. This can be computationally expensive due to the large data volume and the complexity of deriving motion derivatives. Meanwhile, solving the nonlinear least squares requires Jacobians of the on-manifold kinematic interpolations w.r.t. the knots. In the remainder of this section, we provide these theoretical building blocks for quaternion-based B-splines towards high-performance continuous-time sensor fusion.

III-B Kinematic Interpolations

We now present time derivatives of cubic B-splines for interpolating linear and angular velocities, and acceleration.

Linear velocity and acceleration

Given a position B-spline $\underline{p}(t)$ in (2), its first-order time derivative (denoted by a dot atop the variable) can be derived via $\dot{\underline{p}}(t)=\underline{p}^{\prime}(u)\,u^{\prime}(t)$, with $u^{\prime}(t)=1/(t_{i+1}-t_{i})\coloneqq 1/\Delta_{i}$. This leads to

$\dot{\underline{p}}(t)=\sum_{j=1}^{3}\lambda^{\prime}_{j}(u)\,\underline{\delta}_{j}\,u^{\prime}(t)=\sum_{j=1}^{3}\dot{\lambda}_{j}(u)\,\underline{\delta}_{j}\,,$

with $\dot{\lambda}_{j}$ being the derivatives of the basis functions in (2), given by $[\,\dot{\lambda}_{1}(u),\dot{\lambda}_{2}(u),\dot{\lambda}_{3}(u)\,]^{\top}=\Phi\,[\,0,1,2u,3u^{2}\,]^{\top}/\Delta_{i}$. Further, the acceleration over time follows

$\ddot{\underline{p}}(t)=\sum_{j=1}^{3}\ddot{\lambda}_{j}(u)\,\underline{\delta}_{j}\,,$   (4)

with $[\,\ddot{\lambda}_{1}(u),\ddot{\lambda}_{2}(u),\ddot{\lambda}_{3}(u)\,]^{\top}=\Phi\,[\,0,0,2,6u\,]^{\top}/\Delta_{i}^{2}$.
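As a sketch (hypothetical helper names, continuing the snippet in Sec. III-A), the derivative coefficients above translate directly into code; linear velocity and acceleration are then weighted sums of the same knot differences as in (2).

```cpp
// Sketch of the basis-function derivatives used for velocity and acceleration
// interpolation; Phi is the cumulative basis matrix from Sec. III-A and dt the
// knot spacing Delta_i.
inline Eigen::Vector3d basisDot(double u, double dt) {
  Eigen::Matrix<double, 3, 4> Phi;
  Phi << 5, 3, -3, 1,   1, 3, 3, -2,   0, 0, 0, 1;
  Phi /= 6.0;
  return Phi * Eigen::Vector4d(0.0, 1.0, 2.0 * u, 3.0 * u * u) / dt;
}

inline Eigen::Vector3d basisDdot(double u, double dt) {
  Eigen::Matrix<double, 3, 4> Phi;
  Phi << 5, 3, -3, 1,   1, 3, 3, -2,   0, 0, 0, 1;
  Phi /= 6.0;
  return Phi * Eigen::Vector4d(0.0, 0.0, 2.0, 6.0 * u) / (dt * dt);
}

// Linear velocity and acceleration, cf. Eq. (4):
//   pDot  = sum_j basisDot(u, dt)[j-1]  * (p[j] - p[j-1])
//   pDdot = sum_j basisDdot(u, dt)[j-1] * (p[j] - p[j-1])
```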

Angular velocity

By definition [12], directly computing the time derivative of the quaternion spline function in (3) leads to the angular velocity $\underline{\omega}(t)$ in the body frame via the relation

$\dot{\underline{q}}(t)=0.5\,\underline{q}(t)\otimes\underline{\omega}(t)\,.$   (5)

In practice, however, this requires quadratic complexity w.r.t. the spline order due to the chain of Hamilton products. For cumulative B-splines on the matrix Lie group SO(3), an efficient recursive method has been introduced in [26]. We hereby provide the full derivation for its quaternion counterpart.

For brevity, we discard the time variable in (3) and split the product chain into $\underline{q}=\underline{q}_{k}\otimes\prod_{j=k+1}^{3}\mathrm{Exp}_{\mathds{1}}(\lambda_{j}\,\underline{\delta}_{j})$, where $\underline{q}_{k}=\underline{\mathscr{q}}_{i-2}\otimes\prod_{j=1}^{k}\mathrm{Exp}_{\mathds{1}}(\lambda_{j}\,\underline{\delta}_{j})$ for $k\in\{1,2,3\}$. The quaternion spline interpolation can then be expressed in a recursive fashion according to

$\underline{q}_{k}=\underline{q}_{k-1}\otimes\underline{\mathscr{e}}_{k}\,,$   (6)

with $\underline{\mathscr{e}}_{k}\coloneqq\mathrm{Exp}_{\mathds{1}}(\lambda_{k}\,\underline{\delta}_{k})$ denoting the $k$-th spline increment obtained from the exponential map. Applying the kinematic relation in (5) to $\underline{q}_{k}$ yields

$\dot{\underline{q}}_{k}=0.5\,\underline{q}_{k}\otimes\underline{\omega}_{k}\,,$   (7)

with $\underline{\omega}_{k}$ being the angular velocity to be computed recursively. For that, we compute the time derivative of (6) and obtain

$\dot{\underline{q}}_{k}=\dot{\underline{q}}_{k-1}\otimes\underline{\mathscr{e}}_{k}+\underline{q}_{k-1}\otimes\dot{\underline{\mathscr{e}}}_{k}\,,$   (8)

where the time derivative of the increment $\underline{\mathscr{e}}_{k}$ is given by $\dot{\underline{\mathscr{e}}}_{k}=\underline{\mathscr{e}}_{k}\otimes(\dot{\lambda}_{k}\underline{\delta}_{k})$ according to (5). The expression in (8) then follows

$\dot{\underline{q}}_{k}=0.5\,\underline{q}_{k-1}\otimes\underline{\omega}_{k-1}\otimes\underline{\mathscr{e}}_{k}+\underline{q}_{k-1}\otimes\underline{\mathscr{e}}_{k}\otimes(\dot{\lambda}_{k}\underline{\delta}_{k})=0.5\,\underline{q}_{k}\otimes\big(\underline{\mathscr{e}}_{k}^{-1}\otimes\underline{\omega}_{k-1}\otimes\underline{\mathscr{e}}_{k}+2\dot{\lambda}_{k}\underline{\delta}_{k}\big)\,,$

based on (6) and (7). Subsequently, the following recursive expression for angular velocity interpolation can be established

$\underline{\omega}_{k}=\mathscr{R}(\underline{\mathscr{e}}_{k}^{-1})\,\underline{\omega}_{k-1}+2\dot{\lambda}_{k}\underline{\delta}_{k}\,,$   (9)

where $\mathscr{R}(\underline{\mathscr{e}}_{k}^{-1})\,\underline{\omega}_{k-1}=\underline{\mathscr{e}}_{k}^{-1}\otimes\underline{\omega}_{k-1}\otimes\underline{\mathscr{e}}_{k}$ rotates the angular velocity $\underline{\omega}_{k-1}$ by the inverse quaternion increment $\underline{\mathscr{e}}_{k}^{-1}$. To bootstrap the recursion, we derive $\underline{\omega}_{1}$ via the time derivative of $\underline{q}_{1}=\underline{\mathscr{q}}_{i-2}\otimes\underline{\mathscr{e}}_{1}$, namely, $\dot{\underline{q}}_{1}=\underline{\mathscr{q}}_{i-2}\otimes\dot{\underline{\mathscr{e}}}_{1}=\underline{q}_{1}\otimes(\dot{\lambda}_{1}\underline{\delta}_{1})$, leading to $\underline{\omega}_{1}=2\dot{\lambda}_{1}\underline{\delta}_{1}$.
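The recursion (9) is cheap to evaluate; the following sketch (a hypothetical function, reusing expQ, logQ, cumulativeBasis, and basisDot from the snippets above) bootstraps with a zero vector, which reproduces $\underline{\omega}_{1}=2\dot{\lambda}_{1}\underline{\delta}_{1}$ in the first iteration.

```cpp
// Sketch of the recursive body-frame angular velocity interpolation, Eq. (9).
inline Vec3 angularVelocity(const std::array<Quat, 4>& q, double u, double dt) {
  const Eigen::Vector3d lam = cumulativeBasis(u);
  const Eigen::Vector3d lamDot = basisDot(u, dt);
  Vec3 omega = Vec3::Zero();
  for (int k = 1; k <= 3; ++k) {
    const Vec3 delta = logQ(q[k - 1].inverse() * q[k]);   // Log of knot difference
    const Quat e = expQ(lam[k - 1] * delta);              // k-th spline increment
    omega = e.inverse() * omega + 2.0 * lamDot[k - 1] * delta;  // Eq. (9)
  }
  return omega;
}
```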

III-C Spatial Differentiations

When solving the nonlinear least squares for spline fitting, the gradient of the objective can be computed via the chain rule, where the major complexity lies in the kinematic terms. For that, we consider a spline segment ranging over $t\in[\,t_{i},t_{i+1})$ as introduced in Sec. III-A. Jacobians of interpolated kinematic states $\underline{x}$ w.r.t. the knots $\{\underline{\mathscr{x}}_{i+j-2}\}_{j=0}^{3}$ are in principle expressed as

$\frac{\mathrm{d}\underline{x}}{\mathrm{d}\underline{\mathscr{x}}_{i+j-2}}=\mathbb{I}_{j=0}\mathbf{J}_{i-2}+\mathbb{I}_{j\neq 0}\mathbf{J}_{j}+\mathbb{I}_{j\neq 3}\mathbf{J}_{j+1}\,,$   (10)

with $\mathbb{I}$ being the indicator function. Depending on the knot indices, the Jacobian components in (10) are calculated via

$\mathbf{J}_{i-2}=\frac{\partial\underline{x}}{\partial\underline{\mathscr{x}}_{i-2}}\,,\quad\mathbf{J}_{j}=\frac{\partial\underline{x}}{\partial\underline{\delta}_{j}}\frac{\partial\underline{\delta}_{j}}{\partial\underline{\mathscr{x}}_{i+j-2}}\,,\quad\mathbf{J}_{j+1}=\frac{\partial\underline{x}}{\partial\underline{\delta}_{j+1}}\frac{\partial\underline{\delta}_{j+1}}{\partial\underline{\mathscr{x}}_{i+j-2}}$

that are concretized for the kinematic types as follows.

Jacobians of position and orientation interpolations

Following (10), it is trivial to obtain the Jacobian of the position spline w.r.t. a knot as $\mathrm{d}\underline{p}/\mathrm{d}\underline{\mathscr{p}}_{i+j-2}=(\mathbb{I}_{j=0}+\mathbb{I}_{j\neq 0}\lambda_{j}-\mathbb{I}_{j\neq 3}\lambda_{j+1})\,\mathbf{I}_{3}$. For the orientation spline defined in (3), its first Jacobian component in (10) is derived as $\mathbf{J}_{i-2}=\mathscr{Q}^{\lrcorner}\big(\prod_{j=1}^{3}\underline{\mathscr{e}}_{j}\big)$, with the function $\mathscr{Q}^{\lrcorner}$ mapping a quaternion multiplied from the right-hand side into its matrix representation [35, Eq. 3.5]. To obtain the Jacobian components $\mathbf{J}_{j}$ and $\mathbf{J}_{j+1}$ in (10), we reformulate the quaternion interpolation (3) into

$\underline{q}=\Big(\underline{\mathscr{q}}_{i-2}\otimes\prod_{k=1}^{j-1}\underline{\mathscr{e}}_{k}\Big)\otimes\underline{\mathscr{e}}_{j}\otimes\prod_{k=j+1}^{3}\underline{\mathscr{e}}_{k}=\mathbf{Q}^{\llcorner}\,\mathbf{Q}^{\lrcorner}\,\underline{\mathscr{e}}_{j}$

to expose the $j$-th increment $\underline{\mathscr{e}}_{j}$. $\mathbf{Q}^{\llcorner}$ and $\mathbf{Q}^{\lrcorner}$ are the matrices representing the quaternions multiplied on the left- and right-hand sides of $\underline{\mathscr{e}}_{j}$ via $\mathscr{Q}^{\llcorner}$ and $\mathscr{Q}^{\lrcorner}$, respectively, i.e.,

$\mathbf{Q}^{\llcorner}=\mathscr{Q}^{\llcorner}\Big(\underline{\mathscr{q}}_{i-2}\otimes\prod_{k=1}^{j-1}\underline{\mathscr{e}}_{k}\Big)\,,\quad\mathbf{Q}^{\lrcorner}=\mathscr{Q}^{\lrcorner}\Big(\prod_{k=j+1}^{3}\underline{\mathscr{e}}_{k}\Big)\,.$

We then obtain the gradient of the quaternion spline w.r.t. $\underline{\delta}_{j}$ as

$\frac{\partial\underline{q}}{\partial\underline{\delta}_{j}}=\lambda_{j}\,\mathbf{Q}^{\llcorner}\,\mathbf{Q}^{\lrcorner}\,\frac{\partial\mathrm{Exp}_{\mathds{1}}(\underline{v})}{\partial\underline{v}}\bigg|_{\underline{v}=\lambda_{j}\underline{\delta}_{j}}\,,$   (11)

with the gradient of the exponential map provided in (19). The gradient w.r.t. $\underline{\delta}_{j+1}$ takes the same expression as in (11) and can be computed recursively given the computation result for $\underline{\delta}_{j}$. The gradient of $\underline{\delta}_{j}$ w.r.t. the knot follows

$\frac{\partial\underline{\delta}_{j}}{\partial\underline{\mathscr{q}}_{i+j-2}}=\frac{\partial\mathrm{Log}_{\mathds{1}}(\underline{\mathscr{d}}_{j})}{\partial\underline{\mathscr{q}}_{i+j-2}}=\frac{\partial\mathrm{Log}_{\mathds{1}}(\underline{q})}{\partial\underline{q}}\bigg|_{\underline{q}=\underline{\mathscr{d}}_{j}}\mathscr{Q}^{\llcorner}(\underline{\mathscr{q}}_{i+j-3}^{-1})\,,$   (12)

with $\underline{\mathscr{d}}_{j}\coloneqq\underline{\mathscr{q}}_{i+j-3}^{-1}\otimes\underline{\mathscr{q}}_{i+j-2}$. We provide the gradient of the logarithm map in (20). The gradient of $\underline{\delta}_{j+1}$ is derived as

$\frac{\partial\underline{\delta}_{j+1}}{\partial\underline{\mathscr{q}}_{i+j-2}}=\frac{\partial\mathrm{Log}_{\mathds{1}}(\underline{q})}{\partial\underline{q}}\bigg|_{\underline{q}=\underline{\mathscr{d}}_{j+1}}\mathscr{Q}^{\lrcorner}(\underline{\mathscr{q}}_{i+j-1})\,\mathbf{D}\,,$   (13)

where $\mathbf{D}=\mathrm{diag}(1,-1,-1,-1)$ denotes a diagonal matrix.

Jacobian of angular velocity interpolation

In reference to (10), the Jacobian takes the following general form

$\frac{\mathrm{d}\underline{\omega}}{\mathrm{d}\underline{\mathscr{q}}_{i+j-2}}=\mathbb{I}_{j\neq 0}\bm{\Omega}_{j}+\mathbb{I}_{j\neq 3}\bm{\Omega}_{j+1}\,,$   (14)

with the two partial derivatives expressed as follows

$\bm{\Omega}_{j}=\frac{\partial\underline{\omega}}{\partial\underline{\omega}_{j}}\frac{\partial\underline{\omega}_{j}}{\partial\underline{\delta}_{j}}\frac{\partial\underline{\delta}_{j}}{\partial\underline{\mathscr{q}}_{i+j-2}}\,,\quad\bm{\Omega}_{j+1}=\frac{\partial\underline{\omega}}{\partial\underline{\omega}_{j+1}}\frac{\partial\underline{\omega}_{j+1}}{\partial\underline{\delta}_{j+1}}\frac{\partial\underline{\delta}_{j+1}}{\partial\underline{\mathscr{q}}_{i+j-2}}\,.$

The two components above are computed in the same fashion, and we only demonstrate the derivation for $\bm{\Omega}_{j}$. The first term in $\bm{\Omega}_{j}$ can be obtained via the recursion in (9) according to the following derivation. For the cubic cumulative B-spline, we have $\underline{\omega}(t)=\underline{\omega}_{k}(t)$ with $k=3$. Thus, we obtain

$\frac{\partial\underline{\omega}}{\partial\underline{\omega}_{j}}=\frac{\partial\underline{\omega}}{\partial\underline{\omega}_{3}}\prod_{k=1}^{3-j}\frac{\partial\underline{\omega}_{4-k}}{\partial\underline{\omega}_{3-k}}=\prod_{k=1}^{3-j}\mathscr{R}(\underline{\mathscr{e}}_{4-k}^{-1})\,,$

and the second term is derived as

$\frac{\partial\underline{\omega}_{j}}{\partial\underline{\delta}_{j}}=\lambda_{j}\frac{\partial(\mathscr{R}(\underline{q})\,\underline{\omega}_{j-1})}{\partial\underline{q}}\bigg|_{\underline{q}=\underline{\mathscr{e}}_{j}^{-1}}\mathbf{D}\,\frac{\partial\mathrm{Exp}_{\mathds{1}}(\underline{v})}{\partial\underline{v}}\bigg|_{\underline{v}=\lambda_{j}\underline{\delta}_{j}}+2\dot{\lambda}_{j}\mathbf{I}_{3}\,.$

$\partial(\mathscr{R}(\underline{q})\,\underline{\omega}_{j-1})/\partial\underline{q}$ denotes the Jacobian of the quaternion rotation, which is given in [12, Eq. 19]. The last term $\partial\underline{\delta}_{j}/\partial\underline{\mathscr{q}}_{i+j-2}$ is available in (12).

Jacobian of acceleration interpolation

We apply the general formulation (10) to the acceleration interpolation (4). It is then straightforward to obtain the Jacobian as follows

$\frac{\mathrm{d}\ddot{\underline{p}}}{\mathrm{d}\underline{\mathscr{p}}_{i+j-2}}=(\mathbb{I}_{j\neq 0}\ddot{\lambda}_{j}-\mathbb{I}_{j\neq 3}\ddot{\lambda}_{j+1})\,\mathbf{I}_{3}\,.$
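For the vector-valued splines, the Jacobians above reduce to scalar coefficients times the identity. A small sketch (hypothetical helpers, using the basis values from the snippets in Sec. III-B) makes the index bookkeeping explicit:

```cpp
// Scalar Jacobian coefficients of the position and acceleration interpolations
// w.r.t. the local knots j = 0..3; the full Jacobian is the coefficient times
// the 3x3 identity. 'lam' holds [lambda_1..3], 'lamDdot' their second time
// derivatives.
inline double positionJacCoeff(int j, const Eigen::Vector3d& lam) {
  double c = (j == 0) ? 1.0 : lam[j - 1];   // I_{j=0} + I_{j!=0} * lambda_j
  if (j != 3) c -= lam[j];                  // - I_{j!=3} * lambda_{j+1}
  return c;
}

inline double accelJacCoeff(int j, const Eigen::Vector3d& lamDdot) {
  double c = (j == 0) ? 0.0 : lamDdot[j - 1];
  if (j != 3) c -= lamDdot[j];
  return c;
}
```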

IV Online UWB-Inertial Spline Fusion

Figure 3: Sliding-window spline fitting for online estimation.

IV-A Sliding-Window Spline Fitting

As shown in Fig. 3, we exploit cubic B-splines to parameterize the state (1) continuously over time, with knots concatenated as $\underline{\mathscr{x}}=[\,\underline{\mathscr{q}}^{\top},\underline{\mathscr{p}}^{\top},\underline{\mathscr{b}}^{\top}\,]^{\top}\in\mathbb{R}^{13}$. To keep the computational intensity tractable for online performance, we bound the spline fitting problem over a window of the $\tau_{\text{w}}$ most recent knots $\mathcal{X}_{\text{w}}=[\,\underline{\mathscr{x}}_{1},...,\underline{\mathscr{x}}_{\tau_{\text{w}}}\,]\in\mathbb{R}^{13\times\tau_{\text{w}}}$, namely,

$\mathcal{X}_{\text{w}}^{\star}=\operatorname*{arg\,min}_{\mathcal{X}_{\text{w}}}\mathscr{F}(\mathcal{X}_{\text{w}})\,,$   (15)

with the objective function formulated as

$\mathscr{F}(\mathcal{X}_{\text{w}})=\mathscr{v}_{\texttt{u}}\sum_{i=1}^{m}\big\|\mathscr{E}_{\texttt{u}}(\mathcal{X}_{\text{w}},\hat{z}_{\texttt{u},i})\big\|^{2}_{\mathbf{C}_{\texttt{u}}}+\mathscr{v}_{\texttt{i}}\sum_{k=1}^{n}\big\|\mathscr{E}_{\texttt{i}}(\mathcal{X}_{\text{w}},\hat{\underline{z}}_{\texttt{i},k})\big\|^{2}_{\mathbf{C}_{\texttt{i}}}\,.$   (16)

$\mathscr{E}_{\texttt{u}}$ and $\mathscr{E}_{\texttt{i}}$ denote residual terms built upon the UWB and IMU measurements, $\{\hat{z}_{\texttt{u},i}\}_{i=1}^{m}$ and $\{\hat{\underline{z}}_{\texttt{i},k}\}_{k=1}^{n}$, respectively, each observed at raw timestamps. $\mathbf{C}_{\circ}$ and $\mathscr{v}_{\circ}$ indicate the sensor noise covariances and tunable weights, respectively, for each error term (the subscript '$\circ$' stands for 'u' or 'i', denoting UWB or IMU, respectively). As the window slides, the so-called active knots within the current window are updated over time until they are dropped. Meanwhile, the three knots most recently removed from the window are turned into an idle state. They are no longer optimized but still participate in computing residuals in the current window via kinematic interpolations, which enables motion continuation across sliding windows over time [29]. Note that the timestamps of residuals and the underlying knots do not need to be aligned. This allows for flexible multi-sensor fusion and efficient motion representation compared with the discrete-time paradigm.
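A minimal sketch of this bookkeeping is given below (a hypothetical data structure, not the SFUISE interface, reusing the Quat and Vec3 aliases from Sec. III), illustrating how active knots retire into the idle set while the three most recent idle knots keep contributing to the interpolations of the current window.

```cpp
// Sketch of the sliding-window knot management described above.
#include <deque>

struct Knot {
  Quat q;                          // orientation knot
  Vec3 p;                          // position knot
  Eigen::Matrix<double, 6, 1> b;   // accelerometer/gyroscope bias knot
  double t;                        // knot timestamp
};

struct SlidingWindow {
  std::deque<Knot> idle;    // fixed, only read during residual interpolation
  std::deque<Knot> active;  // optimized in (15)
  std::size_t tau_w = 100;  // window size (100 knots at 10 Hz in the benchmark)

  void push(const Knot& knot) {
    active.push_back(knot);
    if (active.size() > tau_w) {              // window slides
      idle.push_back(active.front());         // oldest active knot becomes idle
      active.pop_front();
      if (idle.size() > 3) idle.pop_front();  // keep the three newest idle knots
    }
  }
};
```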

We now specify the residual terms in (16). Throughout the following derivations, we use B-splines to describe the six-DoF motion of the IMU body (I) w.r.t. the world frame (W). Anchor positions are given w.r.t. the UWB frame (U), with a transformation $\mathbf{T}_{\texttt{W}}^{\texttt{U}}\in\text{SE}(3)$ from the world frame.

UWB residual

We demonstrate the UWB residual in (16) for time-of-arrival (ToA) ranging. Given a range measurement $\hat{z}_{\texttt{u},i}$ at timestamp $t_{i}$, its residual term follows

$\mathscr{E}_{\texttt{u}}(\mathcal{X}_{\text{w}},\hat{z}_{\texttt{u},i})=\big\|\mathbf{T}_{\texttt{W}}^{\texttt{U}}\,\underline{p}_{\texttt{ta},i}^{\texttt{W}}-\underline{\alpha}_{\texttt{an},i}^{\texttt{U}}\big\|-\hat{z}_{\texttt{u},i}\,,$   (17)

with $\underline{p}_{\texttt{ta},i}^{\texttt{W}}=\underline{q}(t_{i})\otimes\underline{\nu}_{\texttt{ta}}^{\texttt{I}}\otimes\underline{q}^{-1}(t_{i})+\underline{p}(t_{i})$ being the UWB tag position in the world frame, obtained by transforming the tag coordinates $\underline{\nu}_{\texttt{ta}}^{\texttt{I}}$ w.r.t. the body pose interpolated at $t_{i}$. $\underline{\alpha}_{\texttt{an},i}^{\texttt{U}}$ denotes the coordinates of the corresponding anchor. The residual for time-difference-of-arrival (TDoA) ranging can be obtained similarly, which we do not specify due to the page limit.
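As a sketch (hypothetical function name; $\mathbf{T}_{\texttt{W}}^{\texttt{U}}$ passed as an Eigen isometry), the ToA residual (17) evaluates as follows, with the pose obtained from the spline interpolation at the raw UWB timestamp:

```cpp
// Sketch of the ToA residual in Eq. (17).
inline double toaResidual(const Quat& q_WI, const Vec3& p_WI,  // interpolated body pose in W
                          const Vec3& nu_ta_I,                 // tag lever arm in body frame
                          const Eigen::Isometry3d& T_UW,       // transformation world -> UWB
                          const Vec3& anchor_U,                // anchor coordinates in U
                          double range_meas) {                 // measured range
  const Vec3 p_ta_W = q_WI * nu_ta_I + p_WI;  // tag position in the world frame
  const Vec3 p_ta_U = T_UW * p_ta_W;          // tag position in the UWB frame
  return (p_ta_U - anchor_U).norm() - range_meas;
}
```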

IMU residual

Given the $k$-th IMU reading $\hat{\underline{z}}_{\texttt{i},k}$ of acceleration $\hat{\underline{a}}_{k}^{\texttt{I}}$ and angular velocity $\hat{\underline{\omega}}_{k}^{\texttt{I}}$, we construct the residual $\mathscr{E}_{\texttt{i}}(\mathcal{X}_{\text{w}},\hat{\underline{z}}_{\texttt{i},k})=[\,\mathscr{\underline{z}}_{\texttt{acc},k}^{\top},\,\mathscr{\underline{z}}_{\texttt{gyro},k}^{\top},\,\mathscr{\underline{z}}_{\texttt{bias},k}^{\top}\,]^{\top}$ for spline fitting to inertial data. The accelerometer residual $\mathscr{\underline{z}}_{\texttt{acc},k}$ is

$\mathscr{\underline{z}}_{\texttt{acc},k}=\underline{a}^{\texttt{I}}_{k}+\underline{b}_{\texttt{acc},k}-\hat{\underline{a}}_{k}^{\texttt{I}}\,,$   (18)

with $\underline{a}^{\texttt{I}}_{k}=\underline{q}^{-1}(t_{k})\otimes\big(\ddot{\underline{p}}(t_{k})+\underline{g}^{\texttt{W}}\big)\otimes\underline{q}(t_{k})$ transforming the interpolated acceleration $\ddot{\underline{p}}(t_{k})$ together with the gravity $\underline{g}^{\texttt{W}}$ into the body frame. $\underline{b}_{\texttt{acc},k}$ is the accelerometer bias interpolated on the spline $\underline{b}(t)$ at $t_{k}$. The gyroscope residual follows $\mathscr{\underline{z}}_{\texttt{gyro},k}=\underline{\omega}^{\texttt{I}}(t_{k})+\underline{b}_{\texttt{gyro},k}-\hat{\underline{\omega}}_{k}^{\texttt{I}}$, with $\underline{\omega}^{\texttt{I}}(t_{k})$ interpolated recursively as shown in (9) and $\underline{b}_{\texttt{gyro},k}$ denoting the gyroscope bias interpolated at $t_{k}$. Further, we compute the difference of consecutive bias interpolations at $t_{k}$ and $t_{k+1}$ as the IMU bias residual, namely, $\mathscr{\underline{z}}_{\texttt{bias},k}=\underline{b}(t_{k+1})-\underline{b}(t_{k})$.
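A compact sketch of these IMU residuals (hypothetical functions; the interpolated quantities come from the sketches in Sec. III-B) reads:

```cpp
// Sketch of the accelerometer and gyroscope residuals in Eq. (18) and below.
inline Vec3 accResidual(const Quat& q_WI, const Vec3& accel_W, const Vec3& g_W,
                        const Vec3& bias_acc, const Vec3& acc_meas) {
  const Vec3 a_I = q_WI.inverse() * (accel_W + g_W);  // rotate into the body frame
  return a_I + bias_acc - acc_meas;
}

inline Vec3 gyroResidual(const Vec3& omega_I,          // from the recursion in (9)
                         const Vec3& bias_gyro, const Vec3& gyro_meas) {
  return omega_I + bias_gyro - gyro_meas;
}
```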

IV-B Concurrent Calibration

In general, the transformation $\mathbf{T}_{\texttt{W}}^{\texttt{U}}$ (parameterized by a quaternion $\underline{q}_{\texttt{W}}^{\texttt{U}}\in\mathbb{S}^{3}$ and a translation vector $\underline{t}_{\texttt{W}}^{\texttt{U}}\in\mathbb{R}^{3}$) from the world to the UWB frame in (17) is unknown and typically depends on the anchor coordinates and the tag pose at system initialization. Also, the gravity orientation $\bar{\underline{g}}^{\texttt{W}}$ (obtained by normalizing the gravity $\underline{g}^{\texttt{W}}=\|\underline{g}^{\texttt{W}}\|\,\bar{\underline{g}}^{\texttt{W}}$) in (18) is in general not available at the start of navigation. A common practice to approximate it is to average the first several accelerometer readings under the assumption of a static start. This can easily be violated by an undesirable starting condition (e.g., on the fly). To address these issues, we concatenate $(\underline{q}_{\texttt{W}}^{\texttt{U}},\underline{t}_{\texttt{W}}^{\texttt{U}})$ and $\bar{\underline{g}}^{\texttt{W}}$ after the knots in the state vector of (15) during the window-growing phase and estimate them via spline fitting.

Figure 4: APEs obtained from the benchmark on all sequences of the UTIL data set, grouped by anchor constellation (const1-const4, columns) and UWB operation mode (tdoa2 and tdoa3, rows). The vertical axes denote RMSEs in meters. The horizontal axes denote trial# in const1 to const3 and trial#-traj# in const4, where 'manual#' indicates a manually flown sequence and '#' denotes the sequence index. Results from SFUISE are plotted with $\bullet$. Results from ESKF and GUIF are given by $\boldsymbol{\circ}$ and $\boldsymbol{\times}$, with $\circ$ and $\times$ indicating tracking failures, respectively.

IV-C Implementation

The proposed spline fusion-based UWB-inertial state estimation (SFUISE) scheme is developed in C++ using ROS [36]. The system comprises two individual nodes corresponding to the two functional blocks in the system pipeline in Fig. 2. Besides basic computational tools such as Eigen (https://eigen.tuxfamily.org), no further external dependency is required. We customize the Levenberg-Marquardt (LM) algorithm to solve the nonlinear least squares in (15) iteratively. The closed-form gradients w.r.t. quaternion states are further established in the tangent space at the estimate $\check{\underline{\mathscr{q}}}$ w.r.t. a local perturbation $\underline{\phi}\in\mathbb{R}^{3}$ by multiplying with $\frac{\partial(\check{\underline{\mathscr{q}}}\otimes\mathrm{Exp}_{\mathds{1}}(\underline{\phi}))}{\partial\underline{\phi}}\big|_{\underline{\phi}=\underline{0}}=\mathscr{Q}^{\llcorner}(\check{\underline{\mathscr{q}}})\frac{\partial\mathrm{Exp}_{\mathds{1}}(\underline{\phi})}{\partial\underline{\phi}}\big|_{\underline{\phi}=\underline{0}}$. Solving the linearized system yields an increment $\underline{\phi}^{*}$, which updates the estimate through $\check{\underline{\mathscr{q}}}\leftarrow\check{\underline{\mathscr{q}}}\otimes\mathrm{Exp}_{\mathds{1}}(\underline{\phi}^{*})$ [37, 2]. Furthermore, the gravity orientation $\bar{\underline{g}}^{\texttt{W}}\in\mathbb{S}^{2}$ is handled in a similar fashion as introduced in [15].
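The on-manifold update of a quaternion knot inside the custom LM iteration thus boils down to a retraction of the solved 3-vector increment; a minimal sketch (hypothetical function name, reusing expQ from Sec. III) is:

```cpp
// Sketch of the box-plus update of a quaternion knot after one LM iteration:
// q_check <- q_check (x) Exp_1(phi*), keeping the estimate on S^3.
inline void boxPlus(Quat& q_check, const Vec3& phi_star) {
  q_check = (q_check * expQ(phi_star)).normalized();
}
```

The analytic Jacobians of Sec. III-C enter the normal equations after being chained with the $4\times 3$ lift $\mathscr{Q}^{\llcorner}(\check{\underline{\mathscr{q}}})\,\partial\mathrm{Exp}_{\mathds{1}}(\underline{\phi})/\partial\underline{\phi}\big|_{\underline{\phi}=\underline{0}}$, as stated above.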

V Evaluation

In this section, we first provide a rigorous validation of the proposed quaternion spline fitting scheme. Afterward, we conduct in-depth benchmarks of SFUISE in diverse real-world scenarios. All evaluations are conducted using a laptop (Intel i7-12800H CPU, 32 GB RAM) running Ubuntu 20.04.

V-A Validation of Quaternion Spline Fitting

The major theoretical complexity of the proposed spline fusion scheme lies in the orientation part. Thus, we adopt the same synthesis as presented in [26, Sec. 6.1] for batchwise continuous-time SO(3) estimation using both orientation and angular velocity measurements. We compare our quaternion-based scheme with the one using rotation matrices in [26] for fitting cubic cumulative B-splines. Both schemes are equipped with the same objective and stopping criteria. For each scheme, we exploit the Ceres Solver (http://ceres-solver.org) using the LM algorithm with auto-differentiation for gradient computation, as well as the custom LM solvers using the corresponding analytic gradients. In addition, we integrate our quaternion-based analytic gradient into Ceres for validation. We scale up the number of knots and measurements in the original sequence by a factor of {1, 5, 10} and compute the average runtime over 500 runs with knot initializations around identities perturbed by small random noise. All methods converged with an average RMSE of $2.21\times 10^{-4}$ w.r.t. the ground truth in terms of the SO(3) metric in [38, Eq. 19]. As shown in Tab. I, the proposed quaternion spline fitting scheme achieves superior runtime efficiency, with gradients obtained from both auto-differentiation and analytic expressions. By utilizing closed-form gradients, our quaternion-based scheme runs slightly faster within Ceres than the custom LM solver in [26], which employs splines defined on rotation matrices.

Factor | Auto. Diff.*: [26] | Auto. Diff.*: Ours | Analytic: [26] | Analytic: Ours* | Analytic: Ours
×1  | 0.0365 | 0.0274 | 0.0201 | 0.0184 | 0.0137
×5  | 0.2161 | 0.1748 | 0.1198 | 0.1157 | 0.0833
×10 | 0.4983 | 0.4083 | 0.2711 | 0.2660 | 0.1850
TABLE I: Comparison of different orientation spline fitting schemes w.r.t. runtime in seconds. Methods marked with '*' exploit Ceres for optimization; otherwise, custom solvers implementing the same LM algorithm are used. The best runtime in each category (auto-differentiation and analytic gradients) is achieved by the proposed quaternion-based scheme ('Ours').

V-B Benchmarking Setup

In the upcoming sections, the proposed system is evaluated in real-world scenarios based on the public data set UTIL [7] and our own experiments, incorporating both ToA and TDoA ultra-wideband data. Two major discrete-time sensor fusion schemes from the state of the art are considered for comparison. Our own graph-based UWB-inertial fusion (GUIF) system is composed with reference to [39] and [1]. Here, discrete-time states are estimated via sliding-window optimization with residuals from IMU preintegration, UWB ranging, and the prior factor from marginalization. Online calibration of $\mathbf{T}_{\texttt{W}}^{\texttt{U}}$ is enabled during the window-growing stage. Further, we deploy the error-state Kalman filter (ESKF) provided by [7] with default calibration parameters for evaluation against the recursive estimation scheme. In order to achieve functional UWB ranging under signal interference, all three systems are equipped with a simple thresholding step to reject outliers in UWB measurements. Throughout the benchmark, SFUISE is configured with a sliding window of 100 knots at 10 Hz. All UWB readings are exploited for state estimation without downsampling. In order to keep the focus on the core sensor fusion scheme, we avoid fine-tuning of the system configuration parameters.

V-C Public Data Set

We exploit the UTIL flight data set for evaluating the proposed UWB-inertial fusion scheme with TDoA ultra-wideband ranging. In total, 79 sequences were collected onboard a quadcopter equipped with a UWB tag and an IMU (1000 Hz). The UWB sensor network is low-cost and operated in both centralized (tdoa2) and decentralized (tdoa3) modes under four different anchor constellations (const1-4). The TDoA measurements are collected at data rates ranging from 200 to 500 Hz and include numerous challenging scenarios created by static and dynamic obstacles of various materials. Some sequences also exhibit an absence of IMU measurements. Given knot estimates from SFUISE, we obtain the global pose trajectory via interpolation at the frame rate of the ground truth (200 Hz), w.r.t. which we compute the RMSE of the absolute position error (APE) to quantify the tracking accuracy.

As shown in Fig. 4, results given by the three systems are summarized w.r.t. UWB operation modes and anchor constellations. The proposed spline fusion scheme delivers superior performance over discrete-time approaches using recursive estimation or graph-based optimization. In particular, it exhibits a fairly good robustness against challenging conditions, e.g., in sequences with const4, where the tracking space is cluttered with static and dynamic obstacles of different materials including metal. In the face of time-varying NLOS and multi-path interference, discrete-time approaches are more susceptible to these effects, especially during online calibration, leading to large tracking errors or even complete failure. For demonstration, we plot results of a few representative runs of SFUISE in Fig. 1. Another qualitative comparison of spline fusion with discrete-time schemes is shown in Fig. 5 based on a representative sequence.

Figure 5: A qualitative comparison of the evaluated methods (SFUISE, GUIF, ESKF) on const4-trial7-tdoa2-manual1. Ground truth and estimates are depicted in red and green, respectively.

Runtime

As shown in Fig. 2, the two functional modules in the system pipeline run in parallel, and the spline fusion module dominates the computational cost compared with the lightweight estimation interface. Therefore, we record the runtime of the backend fusion module w.r.t. the knot frame rate of 10 Hz (thus 100 ms available for computation in real time). The proposed system delivers real-time performance with average runtimes of 42.3 ± 1.9 ms and 37.2 ± 1.8 ms per sliding step throughout the subsets tdoa2 and tdoa3, respectively. The small standard deviations indicate a well-bounded computational cost during sliding-window spline fusion. According to our investigation, solving the nonlinear least squares in (16) using our custom LM method usually converges within five iterations.

V-D Experiment

To further evaluate the proposed scheme on UWB-inertial fusion using ToA ranging, a miniature sensor suite has been instrumented as shown in Fig. 6-(A). It is composed of a UWB tag provided by Fraunhofer IOSB-AST and an IMU embedded on Sense HAT (B), both mounted to a Raspberry Pi (https://www.waveshare.com) for sensor coordination and data recording. An additional VIVE tracker (https://www.vive.com) is added to provide the ground truth. In total, three sequences, ISAS-Walk1, ISAS-Walk2 and ISAS-Walk3, were recorded during indoor walks, with inertial and ultra-wideband (five anchors) readings both at a frame rate of about 80 Hz. We list the APEs (RMSE) in meters given by the three systems in Fig. 6-(B). The proposed scheme SFUISE produces the best tracking accuracy consistently throughout the data set.

(A) sensor suite

Sequence | SFUISE | GUIF | ESKF
Walk1 | 0.117 | 0.143 | 0.225
Walk2 | 0.094 | 0.098 | 0.245
Walk3 | 0.094 | 0.229 | 0.270
(B) APE (RMSE) in meters.

Figure 6: Miniature sensor suite for recording ISAS-Walk and corresponding APEs, as shown in (A) and (B), respectively.

V-E Discussion

As verified in the extensive evaluation based on UTIL and ISAS-Walk, SFUISE shows superior performance over discrete-time fusion schemes, particularly in unfavorable scenarios of signal interference and complex noise patterns that are hard or infeasible to detect and model. Since any kinematic interpolation refers to four consecutive knots (for cubic B-splines), the induced pose trajectory inherently guarantees motion continuity up to the second order. In comparison with discrete-time fusion, this implicitly brings extra constraints to solving the nonlinear optimization problem in spline fitting, while still acknowledging the locality of observations. As a result, sensor fusion exhibits better stability and robustness under challenging conditions, even together with online calibration. To further highlight the strength of spline fusion in motion estimation, we select const1-trial1-tdoa2 of UTIL and discard the IMU readings for UWB-only navigation. We run SFUISE and GUIF online at estimation frame rates of 1 Hz and 10 Hz, respectively, with sliding windows both configured as 10 seconds to guarantee the same amount of TDoA measurements. No explicit kinematic constraint, such as motion smoothness, is involved in state estimation. A qualitative comparison is demonstrated in Fig. 7. Due to signal noise and interference, the graph-based approach produces physically infeasible motion estimates. The proposed spline-based scheme delivers a smooth trajectory with an efficient state representation using knots estimated at 1 Hz (1/10 of the memory consumption of the discrete-time states delivered at 10 Hz by GUIF).

VI Conclusion

In this work, we propose a new framework for continuous-time state estimation using ultra-wideband-inertial sensors. Quaternion-based cubic B-splines are exploited for six-DoF motion representation, based on which we systematically derive a unified set of theoretical tools for efficient spline fitting, including analytic kinematic interpolations and spatial differentiations on B-splines. This further facilitates the establishment of the novel sliding-window spline fusion scheme for online UWB-inertial state estimation. The resulting system, SFUISE, is evaluated in real-world scenarios based on a public data set and our own experiments. It is real-time capable and shows superior performance over major discrete-time estimation approaches w.r.t. tracking accuracy, robustness, and deployment flexibility. There still remains considerable potential in the proposed scheme. B-splines with nonuniform knot intervals can be further investigated to achieve a more efficient state representation and estimation. Based on our theoretical contribution and open-source implementation, extensive engineering practice in areas such as automatic control, path planning, odometry, and mapping can be equipped with the continuous-time paradigm.

(A) spline-based (B) graph-based
Figure 7: UWB-only tracking using SFUISE (A) and GUIF (B). Estimates (green) are depicted w.r.t. ground truth (red) at 200 Hz. The spline-based approach inherently guarantees physically feasible estimates.

Appendix

VI-A Jacobian of quaternion exponential map

Any point $\underline{v}\in\mathbb{R}^{3}$ in the tangent space at the identity $\mathds{1}$ on the manifold of unit quaternions can be retracted to $\mathbb{S}^{3}$ via the exponential map according to

$\mathrm{Exp}_{\mathds{1}}(\underline{v})=[\,\cos\|\underline{v}\|,\,\underline{v}^{\top}\operatorname{sinc}\|\underline{v}\|\,]^{\top}\in\mathbb{S}^{3}\,,$

with $\|\cdot\|$ denoting the $\mathscr{L}_{2}$ norm [35]. The Jacobian of the exponential map w.r.t. $\underline{v}$ then follows

\frac{\partial\,\mathrm{Exp}_{\mathds{1}}(\underline{v})}{\partial\underline{v}} = \Big[\Big(\frac{\partial(\cos\|\underline{v}\|)}{\partial\underline{v}}\Big)^{\top}, \Big(\frac{\partial(\underline{v}\operatorname{sinc}\|\underline{v}\|)}{\partial\underline{v}}\Big)^{\top}\Big]^{\top} \in \mathbb{R}^{4\times 3}\,, \qquad (19)

with the two terms expressed as

\frac{\partial(\cos\|\underline{v}\|)}{\partial\underline{v}} = -\underline{v}^{\top}\operatorname{sinc}\|\underline{v}\| \quad\text{and}\quad \frac{\partial(\underline{v}\operatorname{sinc}\|\underline{v}\|)}{\partial\underline{v}} = \frac{\cos\|\underline{v}\| - \operatorname{sinc}\|\underline{v}\|}{\|\underline{v}\|^{2}}\,\underline{v}\,\underline{v}^{\top} + \mathbf{I}_{3}\operatorname{sinc}\|\underline{v}\|\,.
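As a quick plausibility check of (19), the sketch below (an illustrative NumPy snippet, not part of the released implementation) evaluates the analytic Jacobian at a random tangent vector and compares it with central finite differences.

```python
import numpy as np

def quat_exp(v):
    # Exp_1(v) = [cos||v||, v^T sinc||v||]^T with sinc(x) = sin(x)/x.
    n = np.linalg.norm(v)
    return np.concatenate(([np.cos(n)], np.sin(n) / n * v))

def quat_exp_jacobian(v):
    # Analytic 4x3 Jacobian of Exp_1 w.r.t. v, cf. (19); assumes ||v|| > 0.
    n = np.linalg.norm(v)
    sinc = np.sin(n) / n
    J_top = -(sinc * v)[None, :]                       # d(cos||v||)/dv
    J_bot = ((np.cos(n) - sinc) / n**2 * np.outer(v, v)
             + sinc * np.eye(3))                       # d(v sinc||v||)/dv
    return np.vstack((J_top, J_bot))

# Central finite differences agree with the analytic Jacobian.
rng = np.random.default_rng(0)
v = rng.standard_normal(3)
eps = 1e-6
J_num = np.column_stack([(quat_exp(v + eps * e) - quat_exp(v - eps * e)) / (2 * eps)
                         for e in np.eye(3)])
assert np.allclose(J_num, quat_exp_jacobian(v), atol=1e-6)
```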

VI-B Jacobian of quaternion logarithm map

Given any unit quaternion $\underline{q}=[\,q_{0},\,\underline{q}_{\text{v}}^{\top}]^{\top}\in\mathbb{S}^{3}$, with $q_{0}$ and $\underline{q}_{\text{v}}$ denoting the scalar and vector components, the rotation angle and axis can be retrieved by definition as $\theta=2\arctan(\|\underline{q}_{\text{v}}\|/q_{0})$ and $\underline{u}=\underline{q}_{\text{v}}/\|\underline{q}_{\text{v}}\|$, respectively. The quaternion can be mapped to the tangent space at $\mathds{1}$ via the logarithm map

\mathrm{Log}_{\mathds{1}}(\underline{q}) = \theta\,\underline{u}/2 = \arctan(\|\underline{q}_{\text{v}}\|/q_{0})\,\underline{q}_{\text{v}}/\|\underline{q}_{\text{v}}\| \in \mathbb{R}^{3}\,.

The Jacobian of the logarithm map follows

\frac{\partial\,\mathrm{Log}_{\mathds{1}}(\underline{q})}{\partial\underline{q}} = \Big[\frac{\partial\,\mathrm{Log}_{\mathds{1}}(\underline{q})}{\partial q_{0}},\; \frac{\partial\,\mathrm{Log}_{\mathds{1}}(\underline{q})}{\partial\underline{q}_{\text{v}}}\Big] \in \mathbb{R}^{3\times 4}\,, \qquad (20)

where

\frac{\partial\,\mathrm{Log}_{\mathds{1}}(\underline{q})}{\partial q_{0}} = -\underline{q}_{\text{v}} \quad\text{and}\quad \frac{\partial\,\mathrm{Log}_{\mathds{1}}(\underline{q})}{\partial\underline{q}_{\text{v}}} = \frac{1}{\|\underline{q}_{\text{v}}\|^{2}}\Big(q_{0}\,\underline{q}_{\text{v}}\underline{q}_{\text{v}}^{\top} + \frac{\|\underline{q}_{\text{v}}\|^{2}\,\mathbf{I}_{3} - \underline{q}_{\text{v}}\underline{q}_{\text{v}}^{\top}}{\|\underline{q}_{\text{v}}\|}\arctan\frac{\|\underline{q}_{\text{v}}\|}{q_{0}}\Big)\,.
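Equation (20) can be checked in the same way; the snippet below (again an illustrative NumPy sketch, assuming a unit quaternion with positive scalar part) compares the analytic Jacobian with central finite differences.

```python
import numpy as np

def quat_log(q):
    # Log_1(q) = arctan(||q_v|| / q_0) * q_v / ||q_v||, with q = [q_0, q_v^T]^T.
    q0, qv = q[0], q[1:]
    n = np.linalg.norm(qv)
    return np.arctan(n / q0) * qv / n

def quat_log_jacobian(q):
    # Analytic 3x4 Jacobian of Log_1 w.r.t. q, cf. (20); assumes a unit quaternion.
    q0, qv = q[0], q[1:]
    n = np.linalg.norm(qv)
    J_q0 = -qv[:, None]
    J_qv = (q0 * np.outer(qv, qv)
            + (n**2 * np.eye(3) - np.outer(qv, qv)) / n * np.arctan(n / q0)) / n**2
    return np.hstack((J_q0, J_qv))

# Finite-difference comparison at a random unit quaternion with positive scalar part.
rng = np.random.default_rng(0)
q = rng.standard_normal(4)
q[0] = abs(q[0]) + 0.5
q /= np.linalg.norm(q)
eps = 1e-6
J_num = np.column_stack([(quat_log(q + eps * e) - quat_log(q - eps * e)) / (2 * eps)
                         for e in np.eye(4)])
assert np.allclose(J_num, quat_log_jacobian(q), atol=1e-6)
```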

Acknowledgment

We would like to thank Norbert Fränzel from Fraunhofer IOSB-AST and Christopher Funk for their help with instrumenting the sensor suite to record the ISAS-Walk data set.

References

  • [1] K. Li, M. Li, and U. D. Hanebeck, “Towards high-performance solid-state-lidar-inertial odometry and mapping,” IEEE Robotics and Automation Letters, vol. 6, no. 3, pp. 5167–5174, 2021.
  • [2] S. Leutenegger, S. Lynen, M. Bosse, R. Siegwart, and P. Furgale, “Keyframe-based visual-inertial odometry using nonlinear optimization,” The International Journal of Robotics Research, vol. 34, no. 3, pp. 314–334, 2015.
  • [3] H. Möls, K. Li, and U. D. Hanebeck, “Highly parallelizable plane extraction for organized point clouds using spherical convex hulls,” in IEEE International Conference on Robotics and Automation, May 2020, pp. 7920–7926.
  • [4] M.-G. Li, H. Zhu, S.-Z. You, and C.-Q. Tang, “UWB-based localization system aided with inertial sensor for underground coal mine applications,” IEEE Sensors Journal, vol. 20, no. 12, pp. 6652–6669, 2020.
  • [5] S. Zheng, Z. Li, Y. Liu, H. Zhang, and X. Zou, “An optimization-based UWB-IMU fusion framework for UGV,” IEEE Sensors Journal, vol. 22, no. 5, pp. 4369–4377, 2022.
  • [6] W. Zhao, A. Goudar, and A. P. Schoellig, “Finding the right place: Sensor placement for UWB time difference of arrival localization in cluttered indoor environments,” IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 6075–6082, 2022.
  • [7] W. Zhao, A. Goudar, X. Qiao, and A. P. Schoellig, “UTIL: An ultra-wideband time-difference-of-arrival indoor localization dataset,” arXiv preprint arXiv:2203.14471, 2022.
  • [8] J. D. Hol, F. Dijkstra, H. Luinge, and T. B. Schön, “Tightly coupled UWB/IMU pose estimation,” in IEEE International Conference on Ultra-wideband, Sep. 2009, pp. 688–692.
  • [9] M. Kok, J. D. Hol, and T. B. Schön, “Indoor positioning using ultrawideband and inertial measurements,” IEEE Transactions on Vehicular Technology, vol. 64, no. 4, pp. 1293–1303, 2015.
  • [10] W. Zhao, J. Panerati, and A. P. Schoellig, “Learning-based bias correction for time difference of arrival ultra-wideband localization of resource-constrained mobile robots,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 3639–3646, 2021.
  • [11] J. Li, Y. Bi, K. Li, K. Wang, F. Lin, and B. M. Chen, “Accurate 3D localization for MAV swarms by UWB and IMU fusion,” in IEEE International Conference on Control and Automation, June 2018, pp. 100–105.
  • [12] J. Sola, “Quaternion kinematics for the error-state Kalman filter,” arXiv preprint arXiv:1711.02508, 2017.
  • [13] A. Goudar and A. P. Schoellig, “Online spatio-temporal calibration of tightly-coupled ultrawideband-aided inertial localization,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, Sep. 2021, pp. 1161–1168.
  • [14] H. Strasdat, J. M. M. Montiel, and A. J. Davison, “Real-time monocular SLAM: Why filter?” in IEEE International Conference on Robotics and Automation, May 2010, pp. 2657–2664.
  • [15] T. Qin, P. Li, and S. Shen, “VINS-Mono: A robust and versatile monocular visual-inertial state estimator,” IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, 2018.
  • [16] Y. Song and L.-T. Hsu, “Tightly coupled integrated navigation system via factor graph for UAV indoor localization,” Aerospace Science and Technology, vol. 108, p. 106370, 2021.
  • [17] T. Y. Tang, D. J. Yoon, and T. D. Barfoot, “A white-noise-on-jerk motion prior for continuous-time trajectory estimation on SE(3),” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 594–601, 2019.
  • [18] T. D. Barfoot, State Estimation for Robotics.   Cambridge University Press, 2017.
  • [19] P. Furgale, T. D. Barfoot, and G. Sibley, “Continuous-time batch estimation using temporal basis functions,” in IEEE International Conference on Robotics and Automation, May 2012, pp. 2088–2095.
  • [20] S. Anderson and T. D. Barfoot, “Towards relative continuous-time SLAM,” in IEEE International Conference on Robotics and Automation, May 2013, pp. 1033–1040.
  • [21] H. Sommer, J. R. Forbes, R. Siegwart, and P. Furgale, “Continuous-time estimation of attitude using B-splines on Lie groups,” Journal of Guidance, Control, and Dynamics, vol. 39, no. 2, pp. 242–261, 2016.
  • [22] P. Furgale, J. Rehder, and R. Siegwart, “Unified temporal and spatial calibration for multi-sensor systems,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, Nov. 2013, pp. 2088–2095.
  • [23] A. Patron-Perez, S. Lovegrove, and G. Sibley, “A spline-based trajectory representation for sensor fusion and rolling shutter cameras,” International Journal of Computer Vision, vol. 113, no. 3, pp. 208–219, 2015.
  • [24] E. Mueggler, G. Gallego, H. Rebecq, and D. Scaramuzza, “Continuous-time visual-inertial odometry for event cameras,” IEEE Transactions on Robotics, vol. 34, no. 6, pp. 1425–1440, 2018.
  • [25] G. Cioffi, T. Cieslewski, and D. Scaramuzza, “Continuous-time vs. discrete-time vision-based SLAM: A comparative study,” IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 2399–2406, 2022.
  • [26] C. Sommer, V. Usenko, D. Schubert, N. Demmel, and D. Cremers, “Efficient derivative computation for cumulative B-splines on Lie groups,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2020, pp. 11 148–11 156.
  • [27] A. J. Yang, C. Cui, I. A. Bârsan, R. Urtasun, and S. Wang, “Asynchronous multi-view SLAM,” in IEEE International Conference on Robotics and Automation, May 2021, pp. 5669–5676.
  • [28] J. Tirado and J. Civera, “Jacobian computation for cumulative B-splines on SE(3) and application to continuous-time object tracking,” IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 7132–7139, 2022.
  • [29] J. Lv, K. Hu, J. Xu, Y. Liu, X. Ma, and X. Zuo, “CLINS: Continuous-time trajectory estimation for lidar-inertial system,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, Sep. 2021, pp. 6657–6663.
  • [30] D. Hug, P. Bänninger, I. Alzugaray, and M. Chli, “Continuous-time stereo-inertial odometry,” IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 6455–6462, 2022.
  • [31] M. Persson, G. Häger, H. Ovrén, and P.-E. Forssén, “Practical pose trajectory splines with explicit regularization,” in International Conference on 3D Vision, Dec. 2021, pp. 156–165.
  • [32] E. Fresk and G. Nikolakopoulos, “Full quaternion based attitude control for a quadrotor,” in European Control Conference, July 2013, pp. 3864–3869.
  • [33] S. Sun, A. Romero, P. Foehn, E. Kaufmann, and D. Scaramuzza, “A comparative study of nonlinear MPC and differential-flatness-based control for quadrotor agile flight,” IEEE Transactions on Robotics, vol. 38, no. 6, pp. 3357–3373, 2022.
  • [34] M.-J. Kim, M.-S. Kim, and S. Y. Shin, “A general construction scheme for unit quaternion curves with simple high order derivatives,” in Annual Conference on Computer Graphics and Interactive Techniques, 1995, pp. 369–376.
  • [35] K. Li, “On-manifold recursive Bayesian estimation for directional domains,” Ph.D. dissertation, Karlsruhe Institute of Technology, 2022.
  • [36] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger, R. Wheeler, and A. Ng, “ROS: An open-source robot operating system,” ICRA Workshop on Open Source Software, vol. 3, no. 3.2, p. 5, 2009.
  • [37] G. Grisetti, R. Kümmerle, C. Stachniss, and W. Burgard, “A tutorial on graph-based SLAM,” IEEE Intelligent Transportation Systems Magazine, vol. 2, no. 4, pp. 31–43, 2010.
  • [38] D. Q. Huynh, “Metrics for 3D rotations: Comparison and analysis,” Journal of Mathematical Imaging and Vision, vol. 35, pp. 155–164, 2009.
  • [39] G. Cioffi and D. Scaramuzza, “Tightly-coupled fusion of global positional measurements in optimization-based visual-inertial odometry,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 2020, pp. 5089–5095.