Dynamics on Lie groups with applications to attitude estimation
Abstract
The problem of filtering – propagation of states through stochastic differential equations (SDEs) and association of measurement data using Bayesian inference – in a state space which forms a Lie group is considered. Particular emphasis is given to concentrated Gaussians (CGs) as a parametric family of probability distributions to capture the uncertainty associated with an estimated state. The so-called group-affine property of the state evolution is shown to be necessary and sufficient for linearity of the dynamics on the associated Lie algebra, in turn implying CGs are invariant under such evolution. A putative SDE on the group is then reformulated as an SDE on the associated Lie algebra. The vector space structure of the Lie algebra together with the notion of a CG enables the leveraging of techniques from conventional Gaussian-based Kalman filtering in an approach called the tangent space filter (TSF). We provide example calculations for several Lie groups that arise in the problem of estimating position, velocity, and orientation of a rigid body from a noisy, potentially biased inertial measurement unit (IMU). For the specific problem of attitude estimation, numerical experiments demonstrate that TSF-based approaches are more accurate and robust than another widely used attitude filtering technique.
Nomenclature
$\mathbb{Z}$ = the set of integers.
$\mathbb{R}$ = the set of real numbers.
$\mathbb{R}^n$ = the set of $n$-dimensional vectors with real entries.
$\mathbb{R}^{m\times n}$ = the set of $m \times n$ matrices with real entries.
$I_n$ = the $n \times n$ identity matrix.
$\langle \cdot, \cdot \rangle$ = standard Euclidean inner product on $\mathbb{R}^n$.
$[v\times]$ = cross product matrix satisfying $[v\times]w = v \times w$ for $v, w \in \mathbb{R}^3$.
$[A, B]$ = $AB - BA$, the commutator of the matrices $A$ and $B$.
$S^n$ = the $n$-dimensional sphere.
$GL(n)$ = the general linear group in $n$ dimensions.
$SO(n)$ = the special orthogonal group in $n$ dimensions.
$\mathfrak{so}(n)$ = the Lie algebra of $SO(n)$.
$SE(n)$ = the special Euclidean group in $n$ dimensions.
$\mathfrak{se}(n)$ = the Lie algebra of $SE(n)$.
$G$ = matrix Lie group (i.e., a closed subgroup of $GL(n)$ for some $n$).
$\mathfrak{g}$ = the Lie algebra associated with $G$.
$(\cdot)^{\vee}$ = "vectorization" map associating the Lie algebra $\mathfrak{g}$ with $\mathbb{R}^{\dim \mathfrak{g}}$.
$(\cdot)^{\wedge}$ = "matrization" map associating $\mathbb{R}^{\dim \mathfrak{g}}$ with $\mathfrak{g}$, i.e., the inverse of $(\cdot)^{\vee}$.
$\mathrm{Ad}_g$ = the adjoint action on $\mathfrak{g}$ given by $\mathrm{Ad}_g(X) = gXg^{-1}$, with $g \in G$ and $X \in \mathfrak{g}$.
$J$ = the Jacobian of the exponential map (see (27)).
$\mathrm{CG}(\bar{g}, P)$ = CG distribution on $G$ with group mean $\bar{g}$ and covariance matrix $P$.
$N(\mu, P)$ = Gaussian distribution with mean $\mu$ and covariance $P$.
I Introduction
The impact of the Kalman filter (KF) in statistical estimation and control theory is difficult to overstate. The KF enables the calculation of the joint probability distribution of a vector of state variables from a collection of noisy measurements supplemented with linear models of the state vector dynamics and measurements kalman1960new ; kalman1961new . The key assumptions required for the KF to be optimal – that dynamics and measurements are linear in the state space variables, that the noise appearing in dynamics and measurements is additive and white, and that the state vector distribution is initially Gaussian – are physically reasonable or can be made to hold approximately with good accuracy in many cases. This has led to the KF being used ubiquitously in guidance, navigation, and control grewal2007global , in robotics, in space and defense bar2004estimation ; ristic2004beyond , and myriad other applications. In places where the linearity assumption breaks down, the extended Kalman filter (EKF), in which the nonlinear models are linearized about the current state estimate, or the unscented Kalman filter (UKF), in which a collection of nonlinear model outcomes is utilized to estimate state vector properties julier1997new , are widely employed, often to good accuracy. Some exact results are known for distributions other than Gaussians, though these usually require specific dynamical and measurement models to ensure that the resulting filter process leaves the form of the non-Gaussian distribution invariant and only updates its parameters; the work of Beneš benevs1981exact and Daum daum2005nonlinear is paradigmatic of this approach. At the extreme of nonlinear filtering are particle filters gordon1993novel , which represent the filtered distribution as a weighted collection of delta functions ristic2004beyond and so place no constraints on dynamical or measurement models or the underlying state vector distribution. However, particle filters are limited by a sampling error which can be slow to converge, especially for large state space dimension daum2003curse , and this can pose challenges for stability and efficient implementation.
A more subtle setting for nonlinear filtering is estimation of a state which is subject to a nonlinear constraint. Here, several complications can occur: linearization of the constraint may be inaccurate across the support of the state vector distribution and so key properties are not maintained. The constraint also reduces the effective dimensionality of the state vector, which introduces singularities into its associated covariance matrix that are known to cause issues for filtering. In such cases, a modification of the distribution is required to avoid non-physical predictions, but such modification will likely be at odds with the Gaussian assumptions required for the standard KF.
The present work investigates a special case of this situation in which the constraint is formulated as a requirement that the state vector lives on a Lie group, which is a mathematical space that is both a smooth manifold and a group. In this setting, the filtering problem may be translated from the Lie group to its associated tangent spaces (i.e., its Lie algebra), which enables the constraint to be maintained globally through the exponential map connecting the tangent spaces with the Lie group, allows for accurate linearization due to the "flat nature" of the tangent spaces, and enables an unconstrained, singularity-free representation of the state and covariance.
Before presenting our general theoretical results, we introduce the motivating application of attitude estimation from noisy, potentially biased gyro measurements in Sec. II. In addition to providing motivation for our research, this also allows for concrete example calculations alongside the theory. Our theoretical exposition starts with the most general setting in Section III, which is that of an arbitrary matrix Lie group, together with its Lie algebra, endowed with an evolution equation generated by a vector field on the group. The question of the necessary and sufficient conditions on the vector field under which the corresponding dynamics on the Lie algebra are linear is then addressed. Leveraging the results of barrau2019linear , the vector field must satisfy the so-called group-affine property, and we go beyond prior work to give a complete characterization of group-affine vector fields (see Theorem 1).
Section IV formally introduces the class of CGs on a matrix Lie group. The main result here is Corollary 1, which makes precise the claim that the class of CGs are invariant under the flow generated by a group-affine vector field. This Corollary provides the theoretical foundation for one of the main contentions in this work, namely that CGs are the most natural choice of probability distributions to filter with on a Lie group. Particular emphasis is then given to choosing a group law on the underlying state space which makes the state space dynamics in question group-affine. Section V considers an application-motivated stochastic generalization of group-affine dynamics and derives the corresponding SDE on . This SDE on will be referred to as the tangent space SDE (TSSDE) on and is, in general, an SDE with state-dependent diffusion. This section ends with a discussion regarding the invariance of CGs in the presence of noise.
The theoretical results generated in Sections III, IV, and V are used as motivation for our Lie group filtering methodology, detailed in Section VI. After introducing a discrete-time observation process, we design an (approximate) nonlinear filtering algorithm, called the TSF, that is based on the unscented transform (UT). Key novelties of this algorithm are the use of the continuous-time unscented transform (CTUT) sarkka2007unscented (generalized to state-dependent diffusion) to propagate moments through the TSSDE and a whitening procedure based on the Baker–Campbell–Hausdorff (BCH) formula. We also suggest how the TSF may be made more computationally efficient by considering a short-time expansion of the Fokker-Planck equation (FPE) associated with the TSSDE and deriving moment evolution equations based on this expansion to replace the CTUT for moment propagation.
In Section VII, three application-relevant examples are worked out in detail. These three examples correspond to the well-known group , the special Euclidean group , and the group , whose underlying state space is and is a simple generalization of . The example is motivated by the problem of attitude filtering under the paradigm of dynamic model replacement (DMR) without gyro bias, while the example is motivated by the inertial navigation system (INS) state estimation problem without IMU biases. The group provides a bridge between these two examples. The natural dynamics for the problem fall into the class of constant Lie group dynamics, which are the simplest to analyze. On the other hand, the natural dynamics motivated by the INS problem are not constant, but fall into the more general class of group-affine dynamics. Emphasis is placed on how the group law on the underlying state space is the choice which results in group-affine dynamics (as compared to, e.g., the direct product (DP) group law). Additional insights regarding the fundamental solution of the FPE associated with the problem are provided at the end of Section VII.1.
A fourth example, covered in Section VIII, is the Lie group whose underlying manifold is . This group is motivated by the problem of attitude estimation with the inclusion of the gyro bias that is to-be-estimated. This example will highlight the interplay between the equations of motion (i.e., dynamics), which are motivated by physics, and the freedom to choose a group structure which in turn makes the associated vector field group-affine. In this fourth example, we argue that there is no fixed group structure which makes the attitude dynamics with gyro bias group-affine. Instead, a time-parameterized family of group laws must be considered to yield a vector field which is almost group-affine. To zeroth order in time, this family of group laws is approximated by a fixed semidirect product group law. (This observation is consistent with the proposal to use such a group law for treating gyro bias in, e.g., barrau2022geometry .) This section ends with a numerical experiment that compares performance of the TSF using two different group laws, namely the DP group law and a semidirect product group law, to the Unscented Quaternion Estimator (USQUE) Crassidis_UKF . It is shown that the TSF using the semidirect product group law exhibits near-perfect filter consistency, far outperforming the USQUE and the TSF using the DP group law (Figure 1).
II Attitude descriptions and estimation
In attitude filtering, one is concerned with estimating an element of the non-abelian Lie group of three-dimensional spatial rotations, denoted SO(3). Specifically, a matrix in SO(3) describes a rotation from one reference frame (e.g., the body frame of a rigid body) to another reference frame (e.g., an inertial or Earth-fixed frame). Since SO(3) is a three-dimensional manifold, one may use, at least locally, a set of three real parameters (e.g., Euler angles) to describe a rotation matrix. However, since SO(3) is not homeomorphic to $\mathbb{R}^3$, a single set of such parameters will necessarily have a singularity at some point (e.g., Euler angles have a singularity that causes the problem of gimbal lock).
The standard approach to avoid the issue of coordinate singularities is to work with a higher dimensional parameterization of rotations, such as the unit quaternions. A quaternion may be thought of as a four-dimensional vector $q$ which is partitioned as $q = (\rho^{\mathsf T}, q_4)^{\mathsf T}$, where $\rho \in \mathbb{R}^3$ is the "vector" part and $q_4 \in \mathbb{R}$ is the "scalar" part. A unit quaternion has $\|q\| = 1$, where $\|\cdot\|$ indicates the standard Euclidean length in four dimensions. The set of all unit quaternions is the $3$-sphere, denoted as $S^3$. Given a unit quaternion $q$, there is a map $A(q)$ that associates the unit quaternion with a rotation between two specified reference frames. Explicitly,
$$A(q) = \left(q_4^2 - \|\rho\|^2\right) I_3 - 2 q_4 [\rho\times] + 2\,\rho\rho^{\mathsf T} \tag{1}$$
where $[\rho\times]$ is the cross product matrix, which is the matrix satisfying $[a\times]b = a \times b$ for all $a, b \in \mathbb{R}^3$. Note that $A(q) \in SO(3)$, which is a result of the constraint $\|q\| = 1$.
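For concreteness, the map (1) can be exercised numerically. The following is a minimal sketch (not taken from the paper), assuming the scalar-last partitioning $q = (\rho, q_4)$ and the Shuster-convention attitude matrix quoted in (1); the helper names are ours.

```python
import numpy as np

def cross_matrix(v):
    """Cross product matrix [v x] satisfying [v x] w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def attitude_matrix(q):
    """Attitude matrix A(q) of a scalar-last unit quaternion q = (rho, q4).

    Assumes the Shuster-convention formula
    A(q) = (q4^2 - |rho|^2) I - 2 q4 [rho x] + 2 rho rho^T.
    """
    rho, q4 = q[:3], q[3]
    return ((q4 ** 2 - rho @ rho) * np.eye(3)
            - 2.0 * q4 * cross_matrix(rho)
            + 2.0 * np.outer(rho, rho))

# Sanity check: A(q) is a rotation matrix whenever ||q|| = 1.
q = np.random.randn(4)
q /= np.linalg.norm(q)
A = attitude_matrix(q)
assert np.allclose(A @ A.T, np.eye(3)) and np.isclose(np.linalg.det(A), 1.0)
```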
One may endow $S^3$ with a multiplication operation $\otimes$ such that the multiplication of two unit quaternions describes the composition of their individual rotations, i.e. $A(q' \otimes q) = A(q')A(q)$. (Note that we are using M. Shuster’s convention for quaternion multiplication shuster1993survey . This choice is made so that the map in (1) is a (Lie) group homomorphism from $S^3$ to $SO(3)$. The reader should be aware that another convention is in use in the literature and which convention is being used should always be verified for consistency.) It may be shown that $q' \otimes q$ defines another unit quaternion given by
$$q' \otimes q = \begin{pmatrix} q_4' \rho + q_4 \rho' - \rho' \times \rho \\ q_4' q_4 - \rho' \cdot \rho \end{pmatrix} \tag{4}$$
where $\rho' \cdot \rho$ indicates the standard Euclidean inner product of the vectors $\rho'$ and $\rho$. This multiplication operation can be represented in matrix form as
$$q' \otimes q = \left( q_4' I_4 + \Omega(\rho') \right) q \tag{5}$$
where $\Omega(\rho')$ is the anti-symmetric matrix
$$\Omega(v) = \begin{pmatrix} -[v\times] & v \\ -v^{\mathsf T} & 0 \end{pmatrix} \tag{8}$$
in which $v$ is interpreted as a column vector and $v^{\mathsf T}$ as a row vector. Under the operation $\otimes$ the inverse quaternion is given by $q^{-1} = (-\rho^{\mathsf T}, q_4)^{\mathsf T}$ and the identity quaternion is given by $(0, 0, 0, 1)^{\mathsf T}$. Hence, $S^3$ endowed with quaternionic multiplication forms a Lie group. As a Lie group, $S^3$ is recognized as the double cover of $SO(3)$. The term double cover refers to the fact that the map (1) satisfies $A(q) = A(-q)$. In other words, there are two distinct unit quaternions $\pm q$ that yield the same rotation matrix $A(q)$. Finally, it is worth noting that the Lie group $S^3$ is typically identified with the special unitary group $SU(2)$.
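A small sketch of the quaternion product, again under the assumed Shuster, scalar-last convention (reusing attitude_matrix and cross_matrix from the previous sketch); the numerical check is simply the homomorphism property discussed above.

```python
import numpy as np

def qmult(qp, q):
    """Shuster-convention product of two scalar-last quaternions.

    The cross-product sign is chosen (an assumption) so that
    attitude_matrix(qmult(qp, q)) == attitude_matrix(qp) @ attitude_matrix(q).
    """
    rp, sp = qp[:3], qp[3]   # vector / scalar parts of the first factor
    r, s = q[:3], q[3]
    return np.concatenate([sp * r + s * rp - np.cross(rp, r),
                           [sp * s - rp @ r]])

# Numerical check of the homomorphism property of the map (1).
q1, q2 = np.random.randn(4), np.random.randn(4)
q1, q2 = q1 / np.linalg.norm(q1), q2 / np.linalg.norm(q2)
assert np.allclose(attitude_matrix(qmult(q1, q2)),
                   attitude_matrix(q1) @ attitude_matrix(q2))
```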
If a rigid body is subject to an angular velocity relative to a chosen reference frame and describes its orientation relative to that frame, then , where the "dot" denotes the time derivative. This equation of motion for rotations may be recast in terms of quaternions using the inverse of the map (1), which results in
$$\dot{q}(t) = \tfrac{1}{2}\,\Omega\!\big(\omega(t)\big)\, q(t), \qquad q(t_0) = q_0 \tag{11}$$
where $q_0$ describes the initial orientation of the rigid body. In the case of deterministic and constant $\omega$, the solution is given by the matrix exponential:
$$q(t) = \exp\!\left(\tfrac{1}{2}\,\Omega(\omega)\,(t - t_0)\right) q_0 \tag{12}$$
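The closed-form solution (12) is easy to exercise numerically; the sketch below assumes the block form of $\Omega(\omega)$ given in (8) and reuses cross_matrix from the earlier sketch.

```python
import numpy as np
from scipy.linalg import expm

def omega_matrix(w):
    """4x4 matrix Omega(w), assuming the block form [[-[w x], w], [-w^T, 0]]."""
    O = np.zeros((4, 4))
    O[:3, :3] = -cross_matrix(w)
    O[:3, 3] = w
    O[3, :3] = -w
    return O

def propagate_constant_rate(q0, w, dt):
    """Closed-form propagation q(t0 + dt) = exp(0.5 * Omega(w) * dt) q(t0)."""
    return expm(0.5 * dt * omega_matrix(w)) @ q0

q0 = np.array([0.0, 0.0, 0.0, 1.0])      # identity quaternion, scalar last
w = np.array([0.01, -0.02, 0.03])        # rad/s, held constant over the step
q1 = propagate_constant_rate(q0, w, dt=1.0)
print(np.linalg.norm(q1))                # remains 1 since Omega(w) is antisymmetric
```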
Typically, the angular rate that is passed to (11) is a noisy measurement obtained from a gyroscope. A common parameterization of angular rate measurements is given by the Farrenkopf model farrenkopf1978analytic ; Crassidis_Extension . Stated as a system of coupled SDEs, the Farrenkopf model reads
$$\tilde{\omega}(t) = \omega(t) + \beta(t) + \dot{W}_v(t), \qquad \dot{\beta}(t) = \dot{W}_u(t) \tag{15}$$
in which $\tilde{\omega}$ is the measured rate, $\omega$ is the true rate, $\beta$ is the rate bias, and $W_v$ and $W_u$ are vectors of mutually uncorrelated Wiener processes whose derivatives are delta-correlated white noises with spectral density matrices $Q_v$ and $Q_u$, respectively. This model excludes gyro misalignment and scale factor errors, but they can be straightforwardly included.
It must be stressed that the vector $\tilde{\omega}$ is a measurement, and so, strictly speaking, the true rate should be estimated when utilized in a filter Crassidis_Extension . Instead, the conventional approach, known as dynamic model replacement (DMR) Crassidis_UKF , is followed. DMR uses the measurements directly in the model and treats the measured angular rate as constant over the propagation time step. The SDE describing the evolution of the quaternion under the Farrenkopf model is then determined by solving eqn. (15) for the true rate $\omega$ and plugging the result into eqn. (11). The result is the quaternion SDE:
$$\dot{q}(t) = \tfrac{1}{2}\,\Omega\!\left(\tilde{\omega}(t) - \beta(t) - \dot{W}_v(t)\right) q(t) \tag{19}$$
As an SDE, (19) must carry the Stratonovich interpretation, otherwise solutions may not maintain the unit-norm constraint $\|q(t)\| = 1$. Eqn. (19) represents the SDE relevant for propagation in an attitude filter based on DMR using the Farrenkopf model (15).
A key question not yet addressed is: what is an appropriate family of probability distributions to describe the uncertainty associated with $q(t)$? In this regard, let $x$ be a zero-mean Gaussian random vector in $\mathbb{R}^3$ with covariance matrix $P$ and let $\bar{q}$ be some fixed reference quaternion. Consider the distribution over $S^3$ induced by $q = \exp\!\left(\tfrac{1}{2}\Omega(x)\right)\bar{q}$ (c.f. (12)). The distribution associated with the random quaternion $q$ is an example of a CG on the Lie group $S^3$ with parameters $\bar{q}$ and $P$. With this family of distributions identified, the next question of immediate practical importance is: how do the parameters of the distribution evolve under (19)? Here, we find our first connection with the main results of our work. Namely, if the initial quaternion is of this form for some $x$ and $\bar{q}$, then the solution (12) is again of this form for some updated Gaussian vector and reference quaternion. In other words, if the initial quaternion is CG distributed, then the solution in (12) is also CG distributed (c.f. Corollary 1). The remainder of this section is concerned with explicitly demonstrating this claim. First, it is necessary to collect a few more facts about $S^3$ and its Lie algebra.
Eqn. (5) gives us a convenient faithful representation of the group $S^3$ as a subgroup of $SO(4)$, which is the Lie group of four-dimensional rotations, a higher dimensional generalization of $SO(3)$. (In what follows, we will also write $S^3$ to denote the subgroup of $SO(4)$ representing unit quaternions.) The convenience comes, in part, from the determination of the Lie algebra of $S^3$, namely the set $\{\tfrac{1}{2}\Omega(\omega) : \omega \in \mathbb{R}^3\}$, which sits inside $\mathfrak{so}(4)$, the set of all antisymmetric $4 \times 4$ matrices (i.e., the Lie algebra of $SO(4)$). That this is the Lie algebra of $S^3$ follows from the identity
$$\exp\!\left(\tfrac{1}{2}\Omega(\theta)\right) = \cos\!\left(\tfrac{\|\theta\|}{2}\right) I_4 + \frac{\sin(\|\theta\|/2)}{\|\theta\|}\,\Omega(\theta) \tag{20}$$
which follows from the Taylor series expansion of the matrix exponential and the identity $\Omega(\theta)^2 = -\|\theta\|^2 I_4$.
Formula (20) is an instance of the exponential map from the Lie algebra of a Lie group , which is a concept that will be dealt with regularly in the sequel. The exponential map is defined for abstract Lie groups and their associated Lie algebras, but for the case of matrix Lie groups (i.e., closed subgroups of ), the abstract exponential map coincides with the standard matrix exponential. The Lie bracket on is given by
(21) |
In other words, the map is a Lie algebra isomorphism from the Lie algebra to the matrix Lie algebra . The physical interpretation of is that, when applied to a unit quaternion, it yields a rotation through the angle about the axis .
Let . Suppose where with and . The goal is to show that is still of the form for some and . The first step is to observe that
where . What remains is to demonstrate that for some .
Recall a standard result in Lie theory, namely the braiding identity (hall2013lie, , Proposition 3.35):
$$\mathrm{Ad}_{e^{X}}(Y) = e^{X}\, Y\, e^{-X} = e^{\mathrm{ad}_{X}}(Y) \tag{22}$$
which holds for all $X, Y$ in the Lie algebra. In (22), $e^{\mathrm{ad}_X}$ is given by its power series and $\mathrm{Ad}$ denotes the adjoint action on the Lie algebra. Let . The braiding identity (22), the commutator identity (21), and linearity of the map in question yield
Applying the previous identity to the expression , one finds that the desired is given by . In other words, where . What has just been demonstrated is the invariance of the class of CGs on under the flow generated by the dynamics (11). The deeper theory that is developed in Section III informs us that this invariance is really due to the fact that the vector field is group-affine on .
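The algebra above can also be checked numerically: moving the constant-rate flow factor past the perturbation factor is exactly compensated by the adjoint action, which is the invariance just demonstrated. The sketch below (with our own variable names, reusing omega_matrix from the earlier sketch) verifies the matrix identity that underlies it.

```python
import numpy as np
from scipy.linalg import expm

w = np.random.randn(3)            # angular rate
x = 0.1 * np.random.randn(3)      # Lie-algebra perturbation of the initial CG
t = 0.37

G = expm(0.5 * t * omega_matrix(w))      # flow factor from (12)
X = 0.5 * omega_matrix(x)                # Lie-algebra element of the perturbation
Y = G @ X @ np.linalg.inv(G)             # adjoint action Ad_G(X); stays in the algebra

# G exp(X) q_bar = exp(Ad_G(X)) G q_bar: the propagated state is again a CG
# about the propagated reference.
assert np.allclose(G @ expm(X), expm(Y) @ G)
```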
In general, additional uncertainty contributes to the attitude quaternion during propagation due to bias and noise corrupting the measured angular rates, as in (15). Some natural questions are then: are CGs invariant under propagation in the presence of bias and noise? If not, how should a CG approximation be maintained during propagation? These and related questions are addressed in generality throughout the remainder of this manuscript. In particular, in Section VI, the main result of Section V is used together with the CTUT to (approximately) propagate the parameters of a CG forward in time in the presence of noise. Once these tools are in hand, the problem of attitude filtering (with the inclusion of gyro bias) is revisited in Section VIII.
III Characterization of group-affine vector fields
The main theoretical exposition begins with a determination of necessary and sufficient conditions for dynamics on a Lie group to result in linear dynamics on the associated Lie algebra. Let be a matrix Lie group of dimension , and be its Lie algebra. Throughout this manuscript, is assumed to have a surjective exponential map. Consider an initial value problem (IVP) on of the form
(25) |
where is a smooth map and the dot indicates time derivative. Here, is taken to be time-independent to avoid technical complications but this isn’t necessary. Suppose that the solution to (25) satisfies , where has , and satisfies an ordinary differential equation (ODE) analogous to the first equation in (25):
with known a priori. Our goal is to find the corresponding ODE for , and then determine the conditions on that are required to make this ODE for affine (in the sense of affine maps on ).
In order to derive an ODE for , we recall the result in Lie theory known as the derivative of the exponential map (hall2013lie, , Theorem 5.4). Let be a -path in the Lie algebra. The derivative of the exponential map is given by
$$\frac{d}{dt}\, e^{x(t)} = e^{x(t)}\, J\!\big(x(t)\big)\, \dot{x}(t) \tag{26}$$
where is referred to as the Jacobian of the exponential map and is given by
$$J(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(k+1)!}\,\mathrm{ad}_x^{\,k} = \int_0^1 e^{-s\,\mathrm{ad}_x}\, ds \tag{27}$$
(Note that some authors define the Jacobian with the on the right hand side (RHS) of (26) included.) Taking the time derivative of , while using (26) and (25), results in , or, equivalently,
(30) |
The goal now is to determine a condition on that makes the RHS of (30) a linear (affine) map of . The relevant condition here is referred to as the group-affine property barrau2019linear . A vector field is said to be group-affine if, for all ,
$$f(ab) = f(a)\, b + a\, f(b) - a\, f(e)\, b \tag{31}$$
where $e$ is the identity element. An important class of examples of group-affine vector fields are those of the form for . The ODE in (25) with is referred to as the (left) Poisson equation on the group in some of the literature (see, e.g., muller2021review ). However, in this manuscript, this special case is referred to as "constant" dynamics on the group, using an analogy with . Constant vector fields will be analyzed in detail in Section V. As shown in Theorem 1, not all group-affine vector fields are constant.
It is not entirely trivial to see how (31) forces (30) to simplify to a linear equation. Indeed, if is assumed to be group-affine and the identity (31) is applied to simplify the RHS of (30), then
(34) |
Strikingly, while the dependence on was eliminated from the RHS of (30), eqn. (34) does not appear to reduce to a linear equation in . Additional facts regarding group-affine vector fields beyond condition (31) are needed to argue that (34) is indeed a linear system.
Let denote the flow associated with the equation , where satisfies (31). The reason why (31) is called group-affine is owed to (barrau2019linear, , Proposition 8), which asserts that, for each , there exists an element of the automorphism group of and such that
(35) |
The terminology of "group-affine" now becomes clear by analogy with linear systems on $\mathbb{R}^n$ since automorphisms of $\mathbb{R}^n$ are precisely linear maps. Eqn. (35) is key to proving one implication in the following Theorem. Before presenting this Theorem, recall that a derivation is a linear map $\Delta: \mathfrak{g} \to \mathfrak{g}$ that satisfies $\Delta([X, Y]) = [\Delta(X), Y] + [X, \Delta(Y)]$.
Theorem 1
A vector field is group-affine (31) if and only if there exists a derivation and such that, for all ,
(36) |
where is given by (27). As a consequence, for a group-affine vector field, the ODE (34) simplifies to
(39) |
The appearance of in (39) instead of is a consequence of our choice to represent as instead of .
Proof. It is first argued that, if is group-affine, then there exists a derivation such that has the form (36). This argument will leverage three key facts: 1) the flow associated with is of the form (35), 2) an automorphism of a Lie group induces an automorphism of the Lie algebra (40), and 3) the Lie algebra of (viewed as a Lie group) is equal to the set of derivations on .
The starting point to show that the RHS of (34) is a linear function is (hall2013lie, , Theorem 5.6), which asserts the existence of a unique such that
(40) |
where is the automorphism appearing in (35). ( means that is an invertible linear map on that preserves brackets.) Using the automorphism property of ,
By definition, satisfies and . Therefore, by time-differentiating the identity and using the definition of the flow, one finds that must satisfy the ODE
(41) |
which is precisely (34). Since , there exists a derivation such that . This observation together with (41) implies that satisfies
(42) |
For the reverse direction, suppose has the form (36) for some derivation . It is readily verified that (36) satisfies (31) when . Therefore, without loss of generality, take . Let , and let be the element of such that
(43) |
In other words, . As is well-known and suggested by our notation, is the BCH series, which consists of a series of nested commutators of and . With this context, the left hand side (LHS) of (31) reads
(44) |
while the RHS of (31) reads
(45) |
Hence, the objective is to argue that the following equality holds:
Notice that . Since is a derivation, . Using the fact that is given by a series of nested commutators, , and so
The identity follows.
IV The class of CG distributions
The definition of a CG on a (matrix) Lie group has been alluded to at several points in the preceding exposition. It is now appropriate to present a formal definition. First, consider a "matrization" map that associates an element of $\mathbb{R}^d$ to its matrix representation in the Lie algebra with respect to some basis (c.f. eqn. (8)). Let $x$ be a $d$-dimensional Gaussian distributed vector with zero mean and covariance matrix $P$. Fix a group element $\bar{g}$, which is to be interpreted as the maximum a posteriori (MAP) estimate (this may also be interpreted as the Lie group mean wolfe2011bayesian ). From $\bar{g}$ and $x$, induce a random element via the formula
$$g = \exp\!\big(x^{\wedge}\big)\,\bar{g}, \qquad x \sim N(0, P) \tag{46}$$
Write to indicate is a random element of with a distribution following (46). The distribution on induced via eqn. (46) is known as a (left) CG in the literature wolfe2011bayesian ; barrau2014intrinsic ; brossard2017unscented . A right CG has the and in (46) interchanged. A CG has the physically intuitive interpretation that there is a reference group element representing the "best" estimate of the state and the uncertainty in the estimate is given by random “rotations" generated by the random vector . When the Lie group in question is , then a CG is simply a Gaussian and the theory reduces to the standard Gaussian-based filtering framework on .
There is a decent volume of literature focused on proposing distributions on Lie groups for the purposes of filtering, and no attempt is made to provide a comprehensive review here (see, e.g., AttitudeSurvey07 ; barrau2014intrinsic ; WangLee2021 for relatively recent reviews and approaches for and ). The class of CGs has been considered by other authors for the purposes of estimation on general Lie groups and, in particular, attitude estimation barfoot2014associating ; barrau2014intrinsic ; brossard2017unscented . However, one major contribution of this manuscript, in addition to Theorem 1, is the following corollary of Theorem 1.
Corollary 1
Corollary 1 demonstrates that the class of CGs (46) is invariant under the flow generated by group-affine dynamics. The somewhat surprising aspect of this is that group-affine dynamics may be nonconstant (on the group), yet invariance is maintained (see, e.g., Section VII.3). For conventional Kalman filtering, it is well-known that linear state space dynamics take Gaussian initial conditions to Gaussians with updated parameters. What has just been proven is that for CGs the analogous condition on the vector field is that it is group-affine. This observation provides a case for the claim that CGs are the natural class of probability distributions on a Lie group to filter with respect to, assuming the dynamics under consideration are group-affine. (It is being assumed that the posterior distribution after an observation on the state is well approximated by a CG when making this claim. This may not always be the case.) In fact, given an underlying state space (i.e., a manifold of some dimension ) and a dynamical system on this state space, one is encouraged to design a group law which makes the associated vector field group-affine (c.f. barrau2022geometry ). If this is not possible, and one still insists on filtering with respect to CGs, then one should seek a group law which makes the dynamics "as close to group-affine" as possible. Numerical evidence for this claim is provided in the special case of attitude filtering in the presence of gyro bias (see Section VIII).
V The TSSDE
In this section, a stochastic generalization of a group-affine dynamical system as required for filtering is discussed. For simplicity, the focus is on the case where the deterministic part of the dynamics is constant (i.e., assuming the derivation in (36) is zero). The more general case carries over straightforwardly with minimal modifications to the calculations below.
Suppose there is a smooth function that satisfies
(49) |
for some (possibly time-dependent) . The solution to (49) may be written in a Lie-algebraic way as with satisfying and giving rise to the well-known Magnus expansion. In particular, if for all , then .
Let be a Brownian motion with delta-correlated spectral density matrix . Consider the following Langevin equation on :
(52) |
As an SDE, (52) must carry the Stratonovich interpretation, otherwise solutions may not remain in . Suppose satisfying (52) is of the form , where is given by and the equation for is to-be-determined. In what follows, drop the implied -dependence. Following a derivation similar to that of eqn. (30), satisfies
(53) |
Applying the braiding identity (22) to (53) and simplifying yields
(54) |
To simplify eqn. (54) further, introduce the operator given by
(55) |
and note . Noting further that , eqn. (54) simplifies to
(56) |
Eqn. (56) is essentially our desired SDE defined on the Lie algebra. However, it is an equation for matrices and it is desirable to convert this to an equation for the coordinates . In order to accomplish this, introduce the map defined as the inverse of . Using this map, define
(57) |
for . For each , is a linear map and, hence, may be represented as a matrix. Examples of this matrix representation of the adjoint are in Section VII. Moreover, the identity holds for all . This identity is a consequence of the fact that the maps and are inverses of each other. From this identity, introduce
(58) |
and note that . Using these notations, one may "vectorize" (56) to find the equation
(59) |
Again, as an SDE, eqn. (59) must carry the Stratonovich interpretation. If the derivation in eqn. 36 is non-zero, then (59) becomes
(60) |
where is the linear map defined by for .
Eqn. (60) represents the TSSDE relevant for propagation of the group elements in terms of the exponential coordinates which parameterize the Lie algebra through the vectorization operation. Clearly, if , then (60) becomes a linear equation, which is consistent with Theorem 1. Due to the nonlinear, state-dependent diffusion tensor in (60), it no longer appears as though the class of CGs is invariant under propagation in the presence of noise. Indeed, at the end of Section VII.1, the FPE associated with (60) for the case of (i.e., the attitude filtering problem in the absence of gyro bias) is analyzed and the fundamental solution of this FPE is explicitly computed. The FPE fundamental solution defines the class of probability distributions which remain invariant during propagation in the presence of noise, and the computed fundamental solution for is notably not a CG. It is shown, however, that this fundamental solution is well-approximated by a CG when (in appropriate units). In many applications the noise is small and the CG approximation appears to hold with high accuracy during propagation.
VI Filtering on Lie groups: the TSF
With the TSSDE in hand, one has at one's disposal the myriad of nonlinear filtering techniques to determine how the mean and covariance (approximately) propagate through (60). The crudest approach would be to linearize in an EKF fashion and then numerically integrate the resulting ODEs defining the (approximate) mean and covariance evolution. One can do much better, however. In Section VI.1 a generalization of the CTUT described in sarkka2007unscented to the case of state-dependent diffusion is provided. It is used to propagate the mean and covariance through (60). This approach provides a relatively accurate way to update the mean and covariance in between measurement updates within the TSF framework.
Propagation of distributional parameters is only one of two fundamental stages in recursive filtering, with the other being the measurement stage, i.e. refining these distributional parameters by an incoming measurement according to Bayes’ theorem. In this regard, a measurement is assumed to be related to the group element through the equation , where is a function of the group element and a random vector that is typically assumed to be . The function associating group elements to measurement values is typically of a form that demands an approximate means of performing the Bayesian fusion. In this work, we advocate for utilization of the UT UnscentedKF04 . Its application to the measurement stage of the TSF is described in Section VI.2. A similar approach to a UT-based measurement update on Lie groups using a CG as a reference distribution appeared in brossard2017unscented ; sjoberg2021lie .
When using the CTUT to update the first two moments of the distribution over according to the TSSDE (60), the mean of will no longer remain zero, even if it is initially zero. This is a result of the nonlinear, state-dependent diffusion tensor present in (60). (However, it has been observed that the mean generally remains small over time scales for which , where is the largest eigenvalue of ). The mean of will also generally become non-zero after the measurement stage. Because the definition of a CG (46) requires the mean of to be zero (in order to interpret as the MAP estimate), a procedure to approximate the distribution , , by a CG must be developed. This is done in Section VI.3 and the resulting procedure is called whitening. Whitening is fundamental to the TSF and is required to produce statistically consistent MAP estimates.
Since the bulk of the TSF algorithm presented here relies on the UT, it is helpful to briefly review its basics. The UT represents the distribution of a random vector as a collection of sigma points , , given by
(64) |
in which denotes the column of the Cholesky decomposition of the covariance matrix of , and is a tuning parameter that controls the spread of the sigma points around their mean. In addition, one defines the collection of weights
(67) |
The unscented transform then approximates the expectation of the distribution of for an arbitrary map as . There are variants of the UT, such as the simplex UT, that use fewer sigma points to represent the distribution of , thereby reducing the computational resources needed to evaluate summations. In this manuscript, only the "vanilla" UT as just described is considered.
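A minimal sketch of the "vanilla" UT just described, assuming the common single-parameter $\lambda$ weighting (the exact tuning used by a given filter may differ):

```python
import numpy as np

def sigma_points(mean, cov, lam=2.0):
    """2n+1 sigma points and weights for the vanilla UT.

    Assumes points at the mean and at mean +/- the columns of a Cholesky
    factor of (n + lam) * cov, with weights W0 = lam/(n+lam) and
    Wi = 1/(2(n+lam)).
    """
    n = mean.size
    S = np.linalg.cholesky((n + lam) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    wts = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wts[0] = lam / (n + lam)
    return np.array(pts), wts

def unscented_expectation(f, mean, cov, lam=2.0):
    """Approximate E[f(x)] for x ~ N(mean, cov) using the UT."""
    pts, wts = sigma_points(mean, cov, lam)
    return sum(w * f(p) for w, p in zip(wts, pts))

# Example: E[sin(x)] for x ~ N(0.3, 0.04); the exact value is sin(0.3) exp(-0.02).
approx = unscented_expectation(lambda x: np.sin(x[0]), np.array([0.3]), np.array([[0.04]]))
print(approx, np.sin(0.3) * np.exp(-0.02))
```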
VI.1 Propagation
This section is devoted to describing the propagation step within the TSF using the CTUT sarkka2007unscented . However, the CTUT within sarkka2007unscented did not consider state-dependent diffusion, as the term in (60) now demands. Hence, it is necessary to derive a generalization to (sarkka2007unscented, , Algorithm 4.4). This is done for a general (Stratonovich) SDE
(68) |
where is a vector-valued stochastic process, is a vector field governing the deterministic portion of the dynamics of , is the diffusion tensor, and is a Brownian motion with delta-correlated spectral density . Eqn. (60) is reproduced by setting , , , and . Interpreting (68) as a Stratonovich SDE, the FPE for the time-dependent transition probability density associated with (68) reads
(69) |
It is possible to rewrite this equation as
(70) |
where is a vector field whose th component is
(71) |
By multiplying (70) by and and integrating, one may derive the moment evolution equations for the mean and covariance corresponding to (70):
(74) |
where .
As discussed in (sarkka2007unscented, , Appendix), the CTUT propagation equations in (sarkka2007unscented, , Algorithm 4.4) are obtained by applying the UT to the expectations on the RHS of equations (74). However, as mentioned earlier, that derivation did not consider state-dependent in (74). Hence, the covariance evolution in (sarkka2007unscented, , Algorithm 4.4) must be augmented with the term , where the weights are defined in (67). With this, all the needed ingredients are present to write down the system of coupled ODEs governing the evolution of the sigma points during the propagation step of the TSF. The derivation follows that in (sarkka2007unscented, , Appendix) closely and is omitted. The end result is essentially (sarkka2007unscented, , Equation 35) with the matrix therein modified to include the term .
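The structure of the resulting propagation step can be sketched as follows. This sketch (not the paper's exact algorithm) applies the UT, via sigma_points from the earlier sketch, to generic drift and diffusion callables standing in for the expectations appearing in (74), including the extra state-dependent diffusion term, and integrates the moment ODEs with a simple explicit Euler scheme.

```python
import numpy as np

def ctut_moment_rates(mean, cov, drift, diff, Q, lam=2.0):
    """Right-hand sides of the approximate moment ODEs

        dm/dt = E[a(x)]
        dP/dt = E[(x-m) a(x)^T] + E[a(x) (x-m)^T] + E[B(x) Q B(x)^T]

    with the expectations evaluated by the unscented transform.  Here `drift`
    plays the role of the (Ito-corrected) drift and `diff` the state-dependent
    diffusion tensor; both stand in for the paper's exact expressions.
    """
    pts, wts = sigma_points(mean, cov, lam)
    a_vals = np.array([drift(p) for p in pts])
    m_dot = wts @ a_vals
    dx = pts - mean
    P_dot = (dx.T * wts) @ a_vals + (a_vals.T * wts) @ dx
    P_dot += sum(w * diff(p) @ Q @ diff(p).T for w, p in zip(wts, pts))
    return m_dot, P_dot

def propagate_moments(mean, cov, drift, diff, Q, t_span, n_steps=100):
    """Simple explicit-Euler integration of the moment ODEs over t_span."""
    dt = t_span / n_steps
    for _ in range(n_steps):
        m_dot, P_dot = ctut_moment_rates(mean, cov, drift, diff, Q)
        mean, cov = mean + dt * m_dot, cov + dt * P_dot
        cov = 0.5 * (cov + cov.T)     # keep the covariance symmetric
    return mean, cov
```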
Depending on the dimension of and the technique used to numerically integrate the sigma point ODEs, it may be too computationally expensive to use the CTUT for real-time applications. In this regard, another approach for moment propagation through the TSSDE is briefly indicated. This approach is based on a "short-time expansion" of the FPE and may be seen as a generalization of the EKF to state-dependent diffusion. Assuming the initial probability density associated with in (60) is sufficiently localized around the origin of the Lie algebra, then a physically reasonable approximation is to assume it remains localized around the origin over propagation time scales for which (in appropriate units), where is the largest eigenvalue of , the spectral density associated with in (60). Thus, only the behavior of the diffusion tensor near the origin contributes to the evolution of the density under the FPE. More rigorously, for each , consider a rescaled density , where is assumed to satisfy (70). Rewriting the FPE (70) for , one may expand the coefficients of this transformed PDE in powers of . Removing terms that fall off faster than , one arrives at an "effective" FPE that captures the evolution of the density over "short" time scales. The result of this procedure is stated for the example in Section VII.1. Similar ideas are described in ye2023uncertainty ; ye2024uncertainty .
VI.2 Measurement update
The previous section was concerned with the prediction stage of the TSF. The present section is concerned with the measurement stage of filtering, in which our state estimates and their uncertainties are refined by an incoming measurement according to Bayes’ theorem. The measurement is assumed to be related to the group element through the equation , where and is assumed to be a zero-mean Gaussian distributed random vector. An example measurement model in the case of attitude filtering is a triaxial magnetometer which measures an ambient magnetic field in the coordinate frame to which the attitude is referenced. Rotating to the body frame coordinates using eqn. (1), , where is given by (1). Note that this requires a model of the ambient magnetic field .
Rarely will be equivalent to a linear function on the Lie algebra of . This means the class of CGs over the group are not, in general, invariant under the Bayesian update. Hence, nonlinear filtering techniques are required to approximate the posterior distribution over the group. To apply the UT to the measurement stage of filtering using a CG (46) as the reference distribution, start by generating sigma points according to (64), where the random vector in (46) is used in place of in (64). These sigma points are then transformed to "measurement space" via . Since both the measurement space and the Lie algebra where the sigma points are defined have the structure of a finite-dimensional vector space, the Bayesian fusion amounts to a standard Kalman filter update. These update equations are standard and omitted for brevity (see, e.g., (Crassidis_UKF, , Eqns. 2-4)). This procedure will produce an (approximate) posterior distribution over of the form , where and in general. In order to maintain consistency, this posterior must be transformed into the form of a CG. This process is called whitening and is described in the following Section.
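Schematically, the UT-based measurement update amounts to a standard sigma-point Kalman update performed in the Lie algebra. In the sketch below, group_from_xi and h_of_group are caller-supplied placeholders (mapping a Lie-algebra vector to a group element about the current MAP, and a group element to measurement space, respectively); sigma_points is reused from the earlier sketch.

```python
import numpy as np

def ut_measurement_update(mean, cov, h_of_group, group_from_xi, y, R, lam=2.0):
    """UT-based Bayesian update of a CG, schematically following Section VI.2."""
    pts, wts = sigma_points(mean, cov, lam)               # sigma points in the Lie algebra
    zs = np.array([h_of_group(group_from_xi(p)) for p in pts])
    z_hat = wts @ zs                                       # predicted measurement
    dz = zs - z_hat
    dx = pts - mean
    Pzz = (dz.T * wts) @ dz + R                            # innovation covariance
    Pxz = (dx.T * wts) @ dz                                # cross covariance
    K = Pxz @ np.linalg.inv(Pzz)                           # Kalman gain
    mean_post = mean + K @ (y - z_hat)                     # generally non-zero -> whitening
    cov_post = cov - K @ Pzz @ K.T
    return mean_post, cov_post
```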
VI.3 Whitening
In this section, a procedure is developed to approximate the distribution , with , by a CG (46). In contrast to eqn. (46), is not assumed to be zero. The approximation is accomplished by rotating by a predetermined amount, while applying the reverse rotation to the distribution , where the rotation is chosen in such a way that the new distribution over has zero mean to some specified tolerance. The resulting algorithm will be referred to as whitening.
The problem is to find a fixed vector such that the mean of
(75) |
is zero. If this can be accomplished, then write
where . Since has zero mean, may be interpreted as the MAP of the distribution over in accordance with the definition of a CG (46). Due to the nonlinear nature of the BCH expansion, it seems improbable that one may find a closed-form for an in terms of the moments of . Instead, a simpler route is taken that can be iterated upon until a desired convergence criterion is met. In this vein, an is sought such that the mean of is strictly smaller (in magnitude) than the mean of .
To proceed, first recall a few facts about the BCH expansion. Let , where is the Lie algebra of some (matrix) Lie group . Consider the series representation of BCH (hall2013lie, , Section 5.6), whose first few terms are given by
$$\log\!\left(e^{X} e^{Y}\right) = X + Y + \tfrac{1}{2}[X, Y] + \tfrac{1}{12}\big[X, [X, Y]\big] - \tfrac{1}{12}\big[Y, [X, Y]\big] + \cdots \tag{76}$$
If all the commutators in (76) that only contain a single are collected, then it is possible to write an alternative form of the BCH series given by
(77) |
where was introduced in (27).
Using formula (77) to expand the RHS of (75) and vectorizing, one finds
(78) |
where is defined similar to (58). If one takes the expectation of both sides of (78), and imposes the constraint that , then
(79) |
Ignoring higher-order terms in , solve (79) for to find
(80) |
In order to use (80), one must be able to compute when . Without making simplifying assumptions about the covariance of , it appears difficult to find a closed form. There are at least two routes to approximating this expectation value. One route is to use the UT, while another may use a first-order Taylor series approach, i.e. . Using the fact that for any , one finds that under this first-order approximation, . However, if in (80) is computed using, for example, the UT, then, in general, .
To compute the covariance of , expand (75) as a series in , not . For this, note the BCH symmetry , true for any . Then, there is an expansion similar to (78), but now in powers of :
(81) |
The previous expression tells us that the covariance of is . Choose . Then .
If is computed based on an approximation such as (80), then the mean of in (75) will not be exactly zero. However, by analyzing higher-order terms in (81) using, say, , one may show that the new mean has norm strictly smaller than that of . This implies that if the mean and covariance of are (approximately) computed using, for example, the choice , then this whitening process may be repeated using this new mean and covariance until a desired convergence criterion is met. In practice, it has been observed that a small number of iterations, e.g. , will suffice to produce a distribution with a mean on the order of machine double precision.
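A schematic implementation of the whitening iteration for SO(3) is sketched below, assuming the left-CG convention $g = \exp(x^{\wedge})\bar{g}$ and the first-order choice of shift (the current mean); the shifted moments are re-evaluated with the UT rather than with the closed-form BCH expansions above, and cross_matrix and sigma_points are reused from earlier sketches.

```python
import numpy as np
from scipy.linalg import expm, logm

def vee_so3(M):
    """Inverse of cross_matrix: extract the vector from a 3x3 antisymmetric matrix."""
    return np.array([M[2, 1], M[0, 2], M[1, 0]])

def whiten_so3(g_ref, mean, cov, lam=2.0, tol=1e-12, max_iter=5):
    """Iterative whitening sketch for a left CG on SO(3): absorb a non-zero
    Lie-algebra mean into the reference rotation until the residual mean is
    (numerically) zero."""
    for _ in range(max_iter):
        c = mean
        if np.linalg.norm(c) < tol:
            break
        g_ref = expm(cross_matrix(c)) @ g_ref          # new MAP: exp(c^) g_ref
        pts, wts = sigma_points(mean, cov, lam)
        # Residual perturbation: exp(x^) g_old = exp(x'^) exp(c^) g_old,
        # so x' = log(exp(x^) exp(-c^)).
        shifted = np.array([vee_so3(np.real(
            logm(expm(cross_matrix(p)) @ expm(-cross_matrix(c))))) for p in pts])
        mean = wts @ shifted
        d = shifted - mean
        cov = (d.T * wts) @ d
    return g_ref, mean, cov
```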
VI.4 Summary of the TSF
Sections VI.1, VI.2, and VI.3 may be summarized to describe a complete TSF based on the UT. One recursive step of the filter proceeds as follows. Let be the prior CG. In particular, where . Suppose there is an incoming measurement . The first step is to update the distribution of with the measurement using the UT as described in Section VI.2. This produces a new distribution where, in general, . To remedy this, several iterations of the whitening procedure described in Section VI.3 are performed. This transforms the posterior distribution into a (roughly) equivalent CG distribution, . Next, one must propagate the MAP and the first two moments of until the next measurement is received. The MAP estimate is propagated forward by solving
over the necessary time step, where is the vector field governing the dynamics on the Lie group. The moments of are propagated forward in time under the appropriate TSSDE (60) using the CTUT as described in Section VI.1. Before the next incoming measurement is processed, another iteration of the whitening procedure is typically required due to the nonlinear, state-dependent diffusion tensor in (60). Once this is complete, the algorithm just outlined may be repeated until all measurements are exhausted.
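Combining the pieces, one recursive step of the TSF can be sketched as the following skeleton (for an SO(3)-valued state, reusing the measurement-update, whitening, and moment-propagation sketches above; all model callables are supplied by the caller, and the structure is illustrative rather than the paper's exact algorithm).

```python
def tsf_step(g_map, mean, cov, y, R, h_of_group, group_from_xi,
             drift, diff, Q, propagate_map, dt):
    """One recursive TSF step, schematically combining Sections VI.1-VI.3."""
    # 1. Measurement update in the Lie algebra (Section VI.2).
    mean, cov = ut_measurement_update(mean, cov, h_of_group, group_from_xi, y, R)
    # 2. Whitening: restore the zero-mean CG form (Section VI.3).
    g_map, mean, cov = whiten_so3(g_map, mean, cov)
    # 3. Propagate the MAP on the group and the moments through the TSSDE
    #    until the next measurement (Section VI.1).
    g_map = propagate_map(g_map, dt)
    mean, cov = propagate_moments(mean, cov, drift, diff, Q, dt)
    # 4. Whiten again, since the state-dependent diffusion shifts the mean.
    g_map, mean, cov = whiten_so3(g_map, mean, cov)
    return g_map, mean, cov
```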
VI.5 Performance metrics
In the context of Gaussian-based nonlinear filtering on , a standard filter performance metric in simulations is the (normalized) value. Specifically, if with , then follows a chi-squared distribution and has mean . If, after averaging over many filter realizations, the is substantially different from its theoretical expectation, then there is strong evidence that the distribution followed by the estimated error is not a Gaussian with covariance matrix given by the associated state error covariance. Hence, the value gives a measure of statistical consistency of the filter. Poor statistical consistency could be the result of flawed assumptions, poor parameter tuning, numerical issues, or genuine non-Gaussian behavior (just to name a few). A key aspect of the metric is that it takes into account the full state error covariance matrix , capturing important cross-correlations between states.
The goal here is to identify a similar metric when filtering with respect to CGs on Lie groups. Suppose one is attempting to estimate and, at any given time in the simulation, an estimate of , denoted , is computed along with an associated covariance matrix (here, ). In analogy with filtering on Euclidean space, the error in our estimate is stipulated to be . If the TSF’s CG approximations of the true underlying probability distribution over are accurate, then and, in particular, , where . Hence, and so the quantity
(82) |
averaged over many random realizations should be (approximately) . The quantity will be used as the primary metric to evaluate simulated filter performance in Section VIII.1.
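For an SO(3)-valued state, the metric (82) can be computed as in the sketch below, assuming the left-CG error convention $\xi = \log(g_{\mathrm{true}}\, \hat{g}^{-1})$ and a normalization by the state dimension so that a consistent filter averages to about one; both conventions should be checked against (82). The helper vee_so3 is reused from the whitening sketch.

```python
import numpy as np
from scipy.linalg import logm

def consistency_metric(g_true, g_map, cov):
    """Normalized chi-squared consistency statistic for a left CG on SO(3)."""
    xi = vee_so3(np.real(logm(g_true @ np.linalg.inv(g_map))))   # error in the Lie algebra
    return float(xi @ np.linalg.solve(cov, xi)) / cov.shape[0]
```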
VII Examples
In the following sections, eqn. (60) is examined for some specific examples of Lie groups that are relevant for applications. Most of the work lies in determining the explicit form of the diffusion tensor (58) for the given example.
VII.1 SU(2)
Consider the case of unit quaternions, which define the Lie group $S^3$ under quaternionic multiplication. This Lie group is isomorphic to $SU(2)$, hence the name of this example. As discussed in Sec. II, quaternions define a convenient parameterization of elements of $SO(3)$, which are the primary mathematical objects of interest in attitude filtering. In this case, the relevant group equation of motion is eqn. (11), where is a noisy measurement provided by a gyroscope. This section does not consider the case where is corrupted by a bias, as this is reserved for Section VIII. Instead, simply assume the measured angular rate satisfies , where . Under the paradigm of DMR, the Langevin equation governing propagation in between attitude measurement updates then reads
(83) |
where is given by (8). Eqn. (83) is precisely of the form (52).
The first order of business is to compute , defined in (57), for the Lie algebra . From the identity (21), it can be seen that for . Therefore, . To find a closed-form for , it is necessary to recall Rodrigues' rotation formula:
$$\exp\!\big([\theta\times]\big) = I_3 + \frac{\sin\|\theta\|}{\|\theta\|}\,[\theta\times] + \frac{1 - \cos\|\theta\|}{\|\theta\|^2}\,[\theta\times]^2 \tag{84}$$
Applying (84) to compute , taking the matrix inverse, and simplifying, yields
(85) |
where is the radial function
(86) |
As a final note, observe that in the -case, the “constant" piece of the Langevin equation , and so . Plugging the previous results into (59) yields
(87) |
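As a numerical stand-in for the closed forms (84)-(86), the Jacobian of the exponential map on $\mathfrak{so}(3)$ and its inverse, which plays the role of the state-dependent diffusion factor in (87), can be evaluated from the series in (27). The sketch below assumes that series convention (other conventions differ by $x \to -x$) and reuses cross_matrix from the earlier sketch.

```python
import numpy as np

def exp_jacobian_so3(x, terms=20):
    """Jacobian of the exponential map on so(3), assuming the series
    J(x) = sum_k (-1)^k ad_x^k / (k+1)!  (cf. (27)), where ad_x = [x cross]."""
    A = cross_matrix(x)
    J, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ (-A) / (k + 1)
        J = J + term
    return J

def tssde_diffusion_so3(x):
    """State-dependent factor multiplying the gyro noise in the SO(3) TSSDE,
    taken here to be J(x)^{-1} (a sketch of the role played by (85)-(87));
    valid for |x| away from the singularity of the exponential coordinates."""
    return np.linalg.inv(exp_jacobian_so3(x))
```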
The remainder of this example is devoted to a discussion of the FPE associated with (87), its fundamental solution, and the relevance to optimal attitude filtering. Much of what is covered here is in a similar spirit to Markley_SO3 . In the interest of notational simplicity, for the remainder of this subsection, will be denoted by . For brevity, the interested reader is referred to do1992riemannian ; rosenberg1997laplacian ; berger2003panoramic ; faraut2008analysis for additional information regarding the following terminology.
Using the identity , the FPE for the transition probability density associated with (87) reads
(88) |
The second-order differential operator appearing in (88) is the Laplace-Beltrami operator associated with the Riemannian metric
(89) |
on . For , simplifies to
(90) |
where is the radial function
(91) |
Furthermore, the FPE (88) simplifies to
(92) |
where is the standard Laplacian on , and is the standard Laplace-Beltrami operator on the unit -sphere. Formula (90) is the expression for the standard round metric on written in exponential coordinates. In general, the metric (89) is obtained by left-translating the inner product induced by on the Lie algebra of . Therefore, the FPE (88) is closely related to the heat equation on equipped with an arbitrary left-invariant Riemannian metric on . This is natural considering that the SDE (87) was derived from a diffusive process on , i.e. eqn. (83).
Schulman Schulman68 was the first to write down a closed-form expression for the heat kernel for equipped with the round metric (90). Leveraging Schulman’s formula, it is possible to give the fundamental solution to (92) when :
(93) |
One may verify that and is normalized on , and hence defines a probability density on . If is some initial probability density it follows from the left-invariance of the heat kernel faraut2008analysis that
(94) |
is normalized, satisfies (92) (with ), and , where
(95) |
Formula (94) essentially reflects the fact that convolution against the heat kernel on a Lie group yields the propagated solution to the heat equation on the group. Convolution is in terms of the group operation, which in this case arises from the underlying group structure on . From the Chapman-Kolmogorov equation, (93) essentially gives the class of probability distributions which are invariant under propagation through (87) in the case of isotropic . This observation should be compared with the canonical example of the heat kernel on endowed with the standard Euclidean metric. This is the Gaussian function , for , which satisfies the heat equation on . The Chapman-Kolmogorov equation for the previous heat kernel encodes the well-known fact that the convolution of two Gaussians is again a Gaussian, a fact which is the essence of the propagation step in the classical linear Kalman filter. Notice that (93) is well-approximated by the Gaussian for and , which provides theoretical support for earlier claims that CGs are the appropriate class of distributions to filter with respect to on Lie groups, even in the presence of noise.
Using left-invariance, a relatively simple modification of (93) yields the fundamental solution, , for (92) when : , where was defined in (95). A further generalization of (93) to the case of non-isotropic (i.e., the fundamental solution to (88)) would require, at a minimum, a closed-form for the geodesic distance on equipped with an arbitrary left-invariant Riemannian metric, and such a formula appears to be unknown. A class of likelihood functions which leave (93) invariant under a Bayesian update is also unknown, and we suspect there is no such "natural" observation process (as compared to the classical linear Kalman filter where linear measurements with Gaussian distributed measurement noise leave the class of Gaussian priors invariant under a Bayesian update). If these problems could be solved, however, then one would have a theory of attitude filtering on under the paradigm of DMR that is analogous to Kalman’s classical theory on Euclidean space.
As described at the end of Section VI.1, there is a short-time expansion that approximates (92) to yield a simpler FPE for which the moment evolution equations may be solved exactly. This approximate FPE is obtained by truncating the Taylor expansion of (86) and (91) to second-order, and then substituting these truncated Taylor expansions for and in (92). The resulting FPE reads
(96) |
where is defined via . The moment evolution equations for the density which satisfies (96) read
(99) |
where and . The evolution equations (99) are both linear, a desirable feature if computational speed up is required.
VII.2 SE(3)
Consider the special Euclidean group in three dimensions, SE(3), which is the semidirect product of SO(3) and $\mathbb{R}^3$, typically written as $SO(3) \ltimes \mathbb{R}^3$. An element of SE(3) will be represented as a matrix of the form
(102) |
where and . The elements of SE(3) may be ascribed a physical context as the position and attitude of a rigid body. If the rigid body is subject to an angular velocity and describes its position in the reference frame of the rigid body, then the attitude and position of the body at any instant in time satisfy
(105) |
The ODEs (105) may be related to the INS problem as follows. Let be the rotation from an inertial frame to a body frame. Then,
(106) |
where denotes the angular rate of the inertial frame with respect to the body frame, expressed in the inertial frame, and likewise for . Let , where , i.e. the position expressed in the body frame. Then,
(107) |
where is the velocity expressed in the body frame. From the previous two equations, and in (105).
The Lie algebra of consists of matrices of the form
(110) |
where . The exponential map is
(113) |
where
(114) |
Therefore, the curve
(117) |
is the solution of (105) written in a Lie algebraic way. The vector field in is of the form (36) (with ) and is therefore group-affine. Hence, in light of the group structure of , the "natural" state vector to estimate is , assuming one desires to use CGs over to capture the uncertainty associated with .
Similar to the -case, the Lie algebra coordinates are treated as the measured quantities. The measured angular rate and velocity satisfy and , respectively, with and . Under DMR, the Langevin equation governing propagation in between measurement updates then reads
(118) |
Eqn. (118) is of the form (52). Again, the first task is to compute for . A computation shows, for ,
(121) |
Using the Van Loan method van1978computing to compute the matrix exponential yields
(124) |
where
(125) |
and
(126) |
Note that is essentially given by (85) after a simple rescaling of variables. Plugging (121)-(124) into (59) yields the TSSDE for corresponding to (118):
(135) |
in which are the coordinates parameterizing the Lie algebra .
VII.3 $SE_2(3)$
Ultimately, SE(3) is not the Lie group appropriate to describe state estimation from IMU measurements, since it is not the body frame velocity that the IMU measures, but rather the body frame acceleration (minus gravity). This complicates things in an interesting way. First, an appropriate Lie group needs to be identified. Inspired by the results of Section VII.2, consider the state vector given by . This state vector form motivates the group given by matrices of the form
(139) |
where and (thought of as column vectors). This makes . To the best of our knowledge, the group first appeared in barrau2016invariant , and is referred to as the group of double direct spatial isometries.
Notice that , where is the gravitational acceleration in an inertial frame and will be treated as constant (i.e., independent of ). (In what follows, drop the superscripts and for notational simplicity.) The equations of motion on read
(143) |
or, equivalently,
(144) |
where
(148) |
is a member of the Lie algebra of .
Eqn. (144) doesn’t fall into the class of constant dynamical systems because of the terms involving the gravitational acceleration and the velocity variables . This results in a distinct departure from the and examples considered so-far. In an example INS application under DMR (see Sec. II), and in (143) are replaced by and , where and are the measured angular rate and body frame acceleration, respectively, and and . and are treated as constants over the time interval of integration. A model for (in the inertial frame) is typically assumed. The vector field is verified to be group-affine, assuming independent of . The fidelity of this assumption depends on the scenario being considered. According to Theorem 1, there must exist a linear equation of the form (60) on the Lie algebra corresponding to the ODE . This will be shown to be the case. It is noteworthy that the DP group law on , corresponding to the state vector , fails to make the dynamics
group-affine.
Let the Lie algebra coordinates be partitioned as . The exponential map on reads
(152) |
where was introduced in (114). Supply (144) with the initial condition . The MAP is then given by
(156) |
where . Starting from eqn. (34), a derivation using (152)-(156) shows that eqn. (60) in the case of becomes
(169) |
where
(173) |
with given by (126). It is interesting that, even in the noiseless case, the evolution of the MAP is not "constant", but the evolution on the Lie algebra is linear. It is also noteworthy that the gravitational acceleration does not appear in (169). This provides a non-trivial example in which filtering on the group with a CG yields a propagation stage with complexity equal to that of standard Kalman filtering, but which exactly incorporates nonlinearity missed by conventional Kalman approaches.
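For completeness, the following is a sketch of the exponential map (152), assuming the standard form in which both translational slots are mapped through the left Jacobian of (114); it reuses hat, so3_exp, and so3_left_jacobian from the sketch in Section VII.2 and se23_element from above.

```python
def se23_exp(theta, nu, rho):
    """Exponential of the Lie algebra element with rotation part hat(theta) and
    translational slots nu and rho (a sketch consistent with the assumed form of (152))."""
    R = so3_exp(theta)            # Rodrigues' formula
    J = so3_left_jacobian(theta)  # matrix of (114), under the stated assumption
    return se23_element(R, J @ nu, J @ rho)
```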
VIII Gyro Bias Estimation
This section addresses the problem of jointly estimating both the attitude of a rigid body and the bias contributing to the measured angular rate from a gyroscope, as in the Farrenkopf model (15). The underlying state space here will be the dimensional manifold , where the state to be estimated is , with being the rotation matrix representing the attitude of the rigid body and representing the gyro bias. The Langevin equation describing the evolution of the state reads
(176) |
where are defined in (15).
There is more than one group law one may assign to pairs which turns into a Lie group. The simplest is the DP group law, in which two pairs are combined according to . Another is the group law (i.e., a semidirect product), where ; a minimal sketch of both compositions is given below. Unfortunately, neither of these group laws makes the vector field group-affine. However, a time-parameterized family of group operations on is identified that ensures the vector field is "almost" group-affine (and is group-affine when restricted to ). This family corresponds to the group law to zeroth order in time, thus providing theoretical motivation for this choice of group law over, e.g., the DP group law. Furthermore, the numerical superiority of the group law over the DP group law for this state estimation problem is demonstrated in Section VIII.1.
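The sketch below records the two candidate compositions on pairs (R, b). The direct product (DP) law is componentwise; the semidirect composition shown is one common choice and is an assumption standing in for the formula in the text.

```python
import numpy as np

def compose_dp(g1, g2):
    """Direct product (DP) group law on SO(3) x R^3: componentwise composition."""
    (R1, b1), (R2, b2) = g1, g2
    return (R1 @ R2, b1 + b2)

def compose_semidirect(g1, g2):
    """A common semidirect-product composition in which the bias of the second
    factor is rotated by the first; this specific form is an assumption."""
    (R1, b1), (R2, b2) = g1, g2
    return (R1 @ R2, b1 + R1 @ b2)
```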
Identify with via the map . Consider a family of Lie groups parameterized by time where for each the underlying manifold of is simply . The group operation is a function of , and is defined via
(183) |
It may be verified that yields a genuine group law for each . (The only nontrivial aspect of this is checking the associativity of . This fact essentially boils down to the identity for , which, in turn, follows from associativity of matrix multiplication.) From the BCH series (76), observe that
(184) |
From the identity , the group law is seen to be equivalent to the group law at . Finally, we note that there is an explicit representation of the group law on that doesn’t pass through and utilizes the closed-form for the BCH series on . Indeed, let be the vector such that . Then, satisfies and
For simplicity, set . Let us denote by the flow associated with (176) in the absence of noise. In particular,
(187) |
From (barrau2019linear, Proposition 8), group-affinity (under the group law (183)) would be equivalent to the existence of and such that the flow (187) satisfies . Since an automorphism maps the identity to itself, it must be the case that . In other words, the flow must be an automorphism of , if the associated vector field is group-affine. Let us now analyze how close in (187) is to being an automorphism.
The defining property of an automorphism asserts that
(188) |
for all . For the LHS of (188):
(191) |
while the RHS of (188) reads
(194) |
Analyzing (191)-(194), the bias terms differ in the first argument of BCH by a conjugation by . For the term in (194), notice that
Recalling the BCH symmetry , one concludes that
Hence, the pieces of (191) and (194) agree, ensuring that is an automorphism of when restricted to . A modification of the group law (183) which makes the bias terms in (191)-(194) agree while simultaneously making the terms agree has not been identified.
The question that now must be addressed is the derivation of SDEs on the Lie algebra corresponding to the Langevin equations (176), for both the and DP group laws on . Since neither of these group laws make the deterministic part of (176) group-affine, the TSSDE (60) derived in Section V does not directly apply. Instead, one must return to eqn. (30), which was derived without making any assumptions on . The computation of the terms in (30) will be carried out for the group law on , while the DP group law will follow in a similar fashion.
The vector field corresponding to the deterministic piece of (176) reads
(195) |
where is given by (110) and is given by
Suppose satisfies (176) without noise and where with and
is given. Further suppose for may be represented as , where equals . In other words, follows the same evolution equation as , except the initial condition is known and given by . From (30), then satisfies the ODE
(196) |
Let the Lie algebra coordinates be partitioned as . Using (195), the exponential map (113), and the braiding identity (22), the RHS of (196) may be simplified to read
where
(197) |
with given by (114). Vectorizing the previous equation, while noting that and , where is given by (124), results in
(202) |
The term represents the deviation that fails to make (202) a linear equation on the Lie algebra. This term may be interpreted as the difference between the gyro bias MAP and its random fluctuation .
Eqn. (202) is the desired ODE on the Lie algebra. Simplifying this equation further results in valuable insights. Using the formula (121) for on and the explicit formula for in (124), eqn. (202) may be simplified to yield the following set of equations for the evolution of :
(205) |
where is defined in (126) and is given by (125). Observe that the equation for in (205) is linear, while the equation for is nonlinear. This is also consistent with the observation that the vector field in (176) is group-affine for (to zeroth order in ) for the attitude piece, but not the bias. Moreover, the second equation for in (205) notably depends on the MAP through . This is consistent with the observation that, in the course of computing (34), the MAP was only eliminated from (30) because of the group-affine assumption on . Since the group structure doesn’t make the vector field in (176) group-affine, this is expected.
The derivation of eqn. (205) did not include the noise in (176). The noisy generalization of (205) will now be derived. The Langevin eqn. (176) may be written as
The new additional term on the RHS of (196) is
Upon vectorizing the previous equation and appropriately augmenting the RHS of (202) with the result, one obtains the desired Langevin equation on the Lie algebra:
(210) |
Notice how the diffusion tensor depends on the MAP through , unlike (60). A compact formula for (126) that is convenient for the numerical propagation of moments through (210) is presented in Appendix Formula for .
If instead the DP group law on is used, then the Langevin equation on the Lie algebra would read
(213) |
Comparing (205) with the deterministic part of (213), the evolution of the Lie algebra coordinates associated with rotations (i.e., the ) is now nonlinear, while the evolution of the coordinates associated with the gyro bias is linear. This is a result of the DP group law making the vector field in (176) group-affine when restricted to the bias states. This highlights a key trade-off between working with the and DP group law on for filtering. In the following, a performance comparison of an attitude filter is made using these two different choices of group laws. A dramatic improvement when working with the group law versus the DP group law, as well as compared to the USQUE Crassidis_UKF , is observed.
VIII.1 Numerical experiment
As a proof-of-principle test of attitude filtering using CGs on , consider an example of a satellite undergoing purely Keplerian motion with inclination [deg] and radius [km] Crassidis_UKF . The spacecraft keeps its body -axis (roll) along the velocity vector, its body -axis (yaw) along the nadir, and the body -axis (pitch) along the negative orbit normal. With this geometry, the orbit period is exactly [min] = [s] and the body frame spacecraft angular velocity is [rad/sec]. The spacecraft is equipped with a strapdown, three-axis gyroscope with its axes aligned with the spacecraft’s body frame. Gyro measurements are simulated in accordance with the Farrenkopf model (15) using the procedure outlined in Crassidis_SPKF .
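A minimal sketch of how such gyro measurements can be synthesized is given below. It uses a simple Euler discretization of the Farrenkopf model (15) rather than the exact procedure of Crassidis_SPKF, and the parameter names (sigma_v for the angle random walk, sigma_u for the rate random walk) are placeholders.

```python
import numpy as np

def simulate_gyro(omega_true, dt, sigma_v, sigma_u, b0, rng):
    """Simulate biased, noisy rate measurements: the bias b performs a random walk
    driven by sigma_u, and each measurement adds white noise scaled by sigma_v.
    omega_true has shape (N, 3); returns (measurements, bias history)."""
    N = omega_true.shape[0]
    b = np.empty((N, 3))
    b[0] = b0
    omega_meas = np.empty((N, 3))
    for k in range(N):
        if k > 0:
            b[k] = b[k - 1] + sigma_u * np.sqrt(dt) * rng.standard_normal(3)
        omega_meas[k] = omega_true[k] + b[k] + (sigma_v / np.sqrt(dt)) * rng.standard_normal(3)
    return omega_meas, b
```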
Measurements of the spacecraft’s attitude are given by an unbiased triaxial magnetometer with zero-mean Gaussian distributed measurement noise with covariance , i.e. , where is Earth’s magnetic field in an inertial reference frame (to be defined), is the rotation from the inertial frame to the spacecraft’s body frame, and . The Earth’s magnetic field along the circular orbit is modeled as a tilted dipole according to , in which is a unit vector from the center of the Earth to the spacecraft and the Earth dipole is , where [deg] and [deg/s].
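The corresponding magnetometer synthesis is a one-liner once the inertial field is available; the tilted-dipole model itself is not reproduced in the sketch below, so the inertial field vector is taken as an input.

```python
import numpy as np

def magnetometer_measurement(R_bi, B_inertial, sigma, rng):
    """Body-frame magnetometer measurement: rotate the inertial field into the
    body frame and add zero-mean Gaussian noise with covariance sigma^2 * I."""
    return R_bi @ B_inertial + sigma * rng.standard_normal(3)
```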
The conditions of the spacecraft at the first measurement are that it is crossing the ascending node, so the angle between the body axis and the equatorial plane is the orbit inclination of 35 [deg]. To clarify, let us take the inertial () reference frame such that the ascending node lies on the negative axis. The unit vector pointing to the orbit is then defined as , and the velocity vector points along . The true initial attitude quaternion is .
Three attitude filtering approaches are analyzed for the scenario outlined above. Two of the filters are TSFs, one using the group law and the other the DP group law on the state space . These filters are dubbed TSF- and TSF-DP, respectively. Both filters follow the recursive algorithm summarized in Section VI.4, with the exception that the SDE used for propagation of the tangent space variables is (210) for the case, and (213) for the DP case. For both the TSF- and TSF-DP, a fourth order Runge-Kutta method is used to integrate the sigma point ODEs (see Section VI.1). The third filter is the USQUE described in Crassidis_UKF .
The primary metric used to compare filter performance is the statistical consistency as measured by the chi-squared value. For the CG-based filters, this is the quantity defined in (82). For the USQUE, the estimated state error covariance for the attitude state is over (generalized) Rodrigues parameters. Hence, to compute a chi-squared statistic for the USQUE, first compute the error quaternion and then map this error quaternion to its associated Rodrigues parameters. This gives a component error vector whose associated covariance is computed by the USQUE. Together with the error in the estimated gyro bias, a chi-squared statistic analogous to (82) may be computed.
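A sketch of this consistency computation for the USQUE follows. The quaternion convention (scalar-last), the multiplication order used for the error quaternion, and the generalized Rodrigues parameter constants (a = 1, f = 2(a + 1)) are assumptions chosen to match common USQUE conventions and should be adjusted to match the filter's internals.

```python
import numpy as np

def quat_mult(q, p):
    """Quaternion product with scalar-last convention (assumption)."""
    qv, q4 = q[:3], q[3]
    pv, p4 = p[:3], p[3]
    return np.concatenate([q4 * pv + p4 * qv + np.cross(qv, pv), [q4 * p4 - qv @ pv]])

def usque_chi2(q_true, q_est, b_true, b_est, P, a=1.0, f=4.0):
    """Map the error quaternion to generalized Rodrigues parameters, stack with the
    bias error, and whiten with the estimated 6x6 covariance P."""
    q_est_conj = np.concatenate([-q_est[:3], [q_est[3]]])
    dq = quat_mult(q_true, q_est_conj)
    dp = f * dq[:3] / (a + dq[3])        # generalized Rodrigues parameters
    e = np.concatenate([dp, b_true - b_est])
    return e @ np.linalg.solve(P, e)
```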
Table 1 gives a complete list of the simulation parameters used. The initial state covariance in Table 1 is for the TSF- filter. For the TSF-DP and USQUE, this initial state covariance is transformed into an equivalent covariance using the UT. The initial attitude was sampled from a CG with MAP given by the true initial attitude and tangent space covariance given by the initial attitude covariance in Table 1. The initial CG covariance matrix (attitude and bias) is chosen so that the initial statistical consistency value for all filters is approximately . All additional parameters for the USQUE were specifically chosen to reflect the analysis in Crassidis_UKF .
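A sketch of the initial-attitude sampling step: draw a tangent-space vector from the initial attitude covariance and push it onto the group through the exponential map. The right-perturbation convention used below is an assumption and should match the CG convention adopted in the text.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def sample_initial_attitude(R_map, P_att, rng):
    """Sample xi ~ N(0, P_att) on the tangent space and perturb the MAP attitude."""
    xi = rng.multivariate_normal(np.zeros(3), P_att)
    return R_map @ Rotation.from_rotvec(xi).as_matrix()
```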
Figure 1 compares the root-mean-square (RMS) of the metrics for statistical consistency between the three filters for Monte Carlo (MC) runs with randomly generated magnetometer measurements, gyro measurements, and initial attitude estimates. Smaller values indicate that the filter contains errors more faithfully, with an optimum value having a time average around . The TSF- clearly maintains close proximity to the expected value of , while the TSF-DP and USQUE show dramatic overconfidence in the first hour of the simulation and do not appear to converge to the theoretically expected value even by the end of the time span. The TSF- always maintains a value close to , which points to a far more robust representation of attitude uncertainty for large initial errors. Rapid convergence and robustness are desirable for essentially all applications.
Figure 2 compares the RMS of the roll, pitch, and yaw errors and their associated values for each filter. Only the first hour of the simulation is shown in order to highlight the differences in the errors. One may observe a clear difference in the errors between the three filters, while the values appear to be roughly the same. The USQUE shows overconfidence in each of the roll, pitch, and yaw directions, which is consistent with Figure 1. The TSF-DP has slightly worse error in each direction compared to the TSF-, which has the lowest overall error and is consistent with the bounds (cf. Figure 1). Even though the TSF-DP roll, pitch, and yaw errors appear to be contained within their bounds, the consistency metric for the TSF-DP in Figure 1 indicates that there are important cross correlations that are not being faithfully captured by this filter.
Figure 3 compares the total estimated gyro bias error and ( times) the square root of the largest eigenvalue of its associated covariance matrix for each filter. The story here is similar to that of Figures 1 and 2. The USQUE total estimated gyro bias error is not contained within the bound in the first hour of the simulation. The TSF- exhibits containment throughout, which is again consistent with Figure 1. Overall, the TSF- exhibits near-optimal performance in estimating both attitude and gyro bias, as well as the cross-correlations between the two. As a final note, all three filters exhibit similar computational speed. The primary factor contributing to runtime in the TSF-based filters appears to be the propagation step based on the CTUT. Demanding a high degree of accuracy when integrating the sigma point ODEs will slow down the filter runtime. If this is a bottleneck for real-time applications, one may resort to cruder approaches when propagating the mean and covariance through the appropriate SDEs.
Table 1: Simulation parameters.

Parameter | Value
---|---
Simulation length | [Hr]
Gyro sampling rate | 10 [Hz]
Gyro ARW | [rad/s^{1/2}]
Gyro RRW | [rad/s^{3/2}]
True Initial Gyro Bias | [deg/hr]
Magnetometer sampling rate | 1 [Hz]
Magnetometer measurement noise | 50 [nT]
Initial Attitude Covariance |
Initial Bias Covariance |
Initial Attitude Estimate | Randomly Sampled
Initial Gyro Bias Estimate | [deg/hr]
Whitening Tolerance |
UT | 0
[Figure 1: RMS statistical consistency metric over the Monte Carlo runs for the TSF-, TSF-DP, and USQUE filters.]
[Figure 2: RMS roll, pitch, and yaw errors and associated covariance bounds for each filter during the first hour of the simulation.]
[Figure 3: Total estimated gyro bias error and associated covariance bound for each filter.]
References
- (1) Kalman, R. E., “A New Approach to Linear Filtering and Prediction Problems,” Journal of Basic Engineering, Vol. 82, No. 1, 1960, pp. 35–45, 10.1115/1.3662552.
- (2) Kalman, R. E. and Bucy, R. S., “New Results in Linear Filtering and Prediction Theory,” Journal of Basic Engineering, Vol. 83, No. 1, 1961, pp. 95–108, 10.1115/1.3658902.
- (3) Grewal, M. S., Weill, L. R., and Andrews, A. P., Global positioning systems, inertial navigation, and integration, John Wiley & Sons, 2007, 10.1002/0470099720.
- (4) Bar-Shalom, Y., Li, X. R., and Kirubarajan, T., Estimation with applications to tracking and navigation: theory algorithms and software, John Wiley & Sons, 2004, 10.1002/0471221279.
- (5) Ristic, B., Arulampalam, S., and Gordon, N., Beyond the Kalman filter: Particle filters for tracking applications, Vol. 685, Artech house Boston, 2004.
- (6) Julier, S. J. and Uhlmann, J. K., “New extension of the Kalman filter to nonlinear systems,” in Kadar, I., ed., “Signal Processing, Sensor Fusion, and Target Recognition VI,” International Society for Optics and Photonics, SPIE, Vol. 3068, 1997, pp. 182 – 193, 10.1117/12.280797.
- (7) Beneš, V. E., “Exact finite-dimensional filters for certain diffusions with nonlinear drift,” Stochastics, Vol. 5, No. 1-2, 1981, pp. 65–92, 10.1080/17442508108833174.
- (8) Daum, F., “Nonlinear filters: beyond the Kalman filter,” IEEE Aerospace and Electronic Systems Magazine, Vol. 20, No. 8, 2005, pp. 57–69, 10.1109/MAES.2005.1499276.
- (9) Gordon, N., Salmond, D., and Smith, A., “Novel approach to nonlinear/non-Gaussian Bayesian state estimation,” IEE Proceedings F (Radar and Signal Processing), Vol. 140, 1993, pp. 107–113, 10.1049/ip-f-2.1993.0015.
- (10) Daum, F. and Huang, J., “Curse of dimensionality and particle filters,” in “2003 IEEE Aerospace Conference Proceedings (Cat. No.03TH8652),” Vol. 4, 2003, 10.1109/AERO.2003.1235126.
- (11) Barrau, A. and Bonnabel, S., “Linear observed systems on groups,” Systems & Control Letters, Vol. 129, 2019, pp. 36–42, https://doi.org/10.1016/j.sysconle.2019.05.005.
- (12) Sarkka, S., “On Unscented Kalman Filtering for State Estimation of Continuous-Time Nonlinear Systems,” IEEE Transactions on Automatic Control, Vol. 52, No. 9, 2007, pp. 1631–1641, 10.1109/TAC.2007.904453.
- (13) Barrau, A. and Bonnabel, S., “The Geometry of Navigation Problems,” IEEE Transactions on Automatic Control, Vol. 68, No. 2, 2023, pp. 689–704, 10.1109/TAC.2022.3144328.
- (14) Crassidis, J. L. and Markley, F. L., “Unscented Filtering for Spacecraft Attitude Estimation,” Journal of Guidance, Control, and Dynamics, Vol. 26, No. 4, 2003, pp. 536–542, 10.2514/2.5102.
- (15) Shuster, M. D., “Survey of attitude representations,” Journal of the Astronautical Sciences, Vol. 41, No. 4, 1993, pp. 439–517.
- (16) Farrenkopf, R., “Analytic Steady-State Accuracy Solutions for Two Common Spacecraft Attitude Estimators,” Journal of Guidance and Control, Vol. 1, No. 4, 1978, pp. 282–284, 10.2514/3.55779.
- (17) Dianetti, A. D. and Crassidis, J. L., Extension of Farrenkopf Steady-State Solutions with Estimated Angular Rate, 10.2514/6.2018-2095.
- (18) Hall, B. C., Lie groups, Lie algebras, and representations, Springer, 2013, 10.1007/978-3-319-13467-3.
- (19) Müller, A., “Review of the exponential and Cayley map on as relevant for Lie group integration of the generalized Poisson equation and flexible multibody systems,” Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, Vol. 477, No. 2253, 2021, p. 20210303, 10.1098/rspa.2021.0303.
- (20) Wolfe, K. C., Mashner, M., and Chirikjian, G. S., “Bayesian fusion on Lie groups,” Journal of Algebraic Statistics, Vol. 2, No. 1, 10.18409/JAS.V2I1.11.
- (21) Barrau, A. and Bonnabel, S., “Intrinsic Filtering on Lie Groups With Applications to Attitude Estimation,” IEEE Transactions on Automatic Control, Vol. 60, No. 2, 2015, pp. 436–449, 10.1109/TAC.2014.2342911.
- (22) Brossard, M., Bonnabel, S., and Condomines, J.-P., “Unscented Kalman filtering on Lie groups,” in “2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),” , 2017, pp. 2485–2491, 10.1109/IROS.2017.8206066.
- (23) Crassidis, J. L., Markley, F. L., and Cheng, Y., “Survey of Nonlinear Attitude Estimation Methods,” Journal of Guidance, Control, and Dynamics, Vol. 30, No. 1, 2007, pp. 12–28, 10.2514/1.22452.
- (24) Wang, W. and Lee, T., “Bingham-Gaussian Distribution on for Unscented Attitude Estimation,” in “2021 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI),” , 2021, pp. 1–7, 10.1109/MFI52462.2021.9591164.
- (25) Barfoot, T. D. and Furgale, P. T., “Associating Uncertainty With Three-Dimensional Poses for Use in Estimation Problems,” IEEE Transactions on Robotics, Vol. 30, No. 3, 2014, pp. 679–693, 10.1109/TRO.2014.2298059.
- (26) Julier, S. and Uhlmann, J., “Unscented filtering and nonlinear estimation,” Proceedings of the IEEE, Vol. 92, No. 3, 2004, pp. 401–422, 10.1109/JPROC.2003.823141.
- (27) Sjøberg, A. M. and Egeland, O., “Lie Algebraic Unscented Kalman Filter for Pose Estimation,” IEEE Transactions on Automatic Control, Vol. 67, No. 8, 2022, pp. 4300–4307, 10.1109/TAC.2021.3121247.
- (28) Ye, J., Jayaraman, A. S., and Chirikjian, G. S., “Uncertainty Propagation on Unimodular Matrix Lie Groups,” IEEE Transactions on Automatic Control, pp. 1–16, 10.1109/TAC.2024.3486652.
- (29) Ye, J. and Chirikjian, G. S., “Uncertainty Propagation and Bayesian Fusion on Unimodular Lie Groups from a Parametric Perspective,” arXiv preprint arXiv:2401.03425.
- (30) Markley, F. L., “Attitude filtering on ,” The Journal of the Astronautical Sciences, Vol. 54, No. 3, 2006, pp. 391–413, 10.1007/BF03256497.
- (31) Do Carmo, M. P. and Flaherty Francis, J., Riemannian geometry, Vol. 6, Springer, 1992.
- (32) Rosenberg, S., The Laplacian on a Riemannian Manifold: An Introduction to Analysis on Manifolds, London Mathematical Society Student Texts, Cambridge University Press, 1997, 10.1017/CBO9780511623783.
- (33) Berger, M., A panoramic view of Riemannian geometry, Springer, 2003, 10.1007/978-3-642-18245-7.
- (34) Faraut, J., Analysis on Lie Groups: An Introduction, Cambridge Studies in Advanced Mathematics, Cambridge University Press, 2008, 10.1017/CBO9780511755170.
- (35) Schulman, L., “A Path Integral for Spin,” Phys. Rev., Vol. 176, 1968, pp. 1558–1569, 10.1103/PhysRev.176.1558.
- (36) Van Loan, C., “Computing integrals involving the matrix exponential,” IEEE Transactions on Automatic Control, Vol. 23, No. 3, 1978, pp. 395–404, 10.1109/TAC.1978.1101743.
- (37) Barrau, A. and Bonnabel, S., “The Invariant Extended Kalman Filter as a Stable Observer,” IEEE Transactions on Automatic Control, Vol. 62, No. 4, 2017, pp. 1797–1812, 10.1109/TAC.2016.2594085.
- (38) Crassidis, J., “Sigma-point Kalman filtering for integrated GPS and inertial navigation,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 42, No. 2, 2006, pp. 750–756, 10.1109/TAES.2006.1642588.
Formula for
This appendix provides a formula for in (126) that is suitable for numerical implementation when propagating moments through the SDE (210). This formula is based on the following integral identity for the matrix exponential.
Lemma 1
Let be a matrix which commutes with its Moore-Penrose pseudo-inverse, (e.g., a normal matrix). Then,
(214) |
and
(215) |
Proof. In general satisfies . From the assumption that , this implies . In other words, when , then is both the left and right inverse of when restricted to the orthogonal complement of . Therefore,
Hence,
The final step is to note . Replacing with results in (214). A similar argument proves (215).
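As a numerical sanity check of Lemma 1, suppose (214) takes the form int_0^t exp(sA) ds = A^+ (exp(tA) - I) + t (I - A A^+), an assumption consistent with the argument above. The sketch below compares this closed form against a quadrature for a skew-symmetric, hence normal and singular, matrix.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
w = rng.standard_normal(3)
A = np.array([[0.0, -w[2], w[1]],
              [w[2], 0.0, -w[0]],
              [-w[1], w[0], 0.0]])   # skew-symmetric: normal, rank 2
t, n = 0.7, 500
h = t / n
quad = sum(expm((k + 0.5) * h * A) for k in range(n)) * h   # midpoint rule for int_0^t exp(sA) ds
Ap = np.linalg.pinv(A)
closed_form = Ap @ (expm(t * A) - np.eye(3)) + t * (np.eye(3) - A @ Ap)
print(np.max(np.abs(quad - closed_form)))   # small (quadrature error only)
```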
Formula (214) does not directly apply with replaced by (121) because this matrix doesn’t commute with its Moore-Penrose pseudo-inverse. However, for , the Moore-Penrose pseudo-inverse of is . Together with (214), this implies
(216) |
Therefore,
(217) |
From (215),
(218) |
Plugging (218) into the first term on the RHS of (217) and simplifying, one finds
For the next term on the RHS of (217), first observe that
(219) |
where the outermost brackets in (219) represent the matrix commutator. Hence,
In total,
(220) |
Formula (220) is convenient for numerical implementation because it avoids explicit trigonometric functions (such as those appearing in Rodrigues’ rotation formula (84)), and it reuses quantities that will already be necessary to compute (e.g., ). Moreover, it is easier to analytically compute derivatives of (220), which are needed to compute the drift correction in (71).