
An insightful approach to bearings-only tracking in log-polar coordinates

Athena Helena Xiourouppa 0000-0003-0033-744X Dmitry Mikhin  Melissa Humphries 0009-0002-6228-2958 0000-0002-0473-7611 and John Maclean 0000-0002-5533-0838
Abstract

The choice of coordinate system in a bearings-only (BO) tracking problem influences the methods used to observe and predict the state of a moving target. Modified Polar Coordinates (MPC) and Log-Polar Coordinates (LPC) have some advantages over Cartesian coordinates. In this paper, we derive closed-form expressions for the target state prior distribution after ownship manoeuvre: the mean, covariance, and higher-order moments in LPC. We explore the use of these closed-form expressions in simulation by modifying an existing BO tracker that uses the UKF. Rather than propagating sigma points, we directly substitute current values of the mean and covariance into the time update equations at the ownship turn. This modified UKF, the CFE-UKF, performs similarly to the pure UKF, verifying the closed-form expressions. The closed-form third and fourth central moments indicate non-Gaussianity of the target state when the ownship turns. By monitoring these metrics and appropriately initialising relative range error, we can achieve a desired output mean estimated range error (MRE). The availability of these higher-order moments facilitates other extensions of the tracker not possible with a standard UKF.

Index Terms:
Statistical analysis, bearings-only tracking, target tracking, Modified Polar Coordinates, Log-Polar Coordinates, Unscented Kalman Filter

I Introduction

Consider a sensing platform (ownship) tracking a vessel of interest (target) using a passive acoustic sensor. (For simplicity, we ignore the potential differences between the ownship coordinates and orientation and those of the sensor.) Assume energy propagates along simple, horizontal straight lines, so that there is only one energy path (ray) from target to sensor. The sensor obtains the target bearing by measuring the direction of arrival of target noise. We aim to recover the target's full state, i.e., position and speed, from a sequence of such measurements.

Modified Polar Coordinates (MPC) have been established as a stable approach for bearings-only (BO) target tracking. State update equations in MPC are non-linear; to implement estimation for non-linear systems, existing literature suggests the use of the Extended Kalman Filter (EKF) [1, 2] or Unscented Kalman Filter (UKF) [3].

Log-Polar Coordinates (LPC) are another viable candidate [4, 5], especially for theoretical analysis; using LPC, one can obtain the posterior Cramér-Rao bound (PCRB) for the target state distribution [4]. This paper further explores the theoretical properties of the state distribution in LPC.

We do so in the context of a simple explicit ownship manoeuvre, described in Section II; broader scenarios can be approximated as combinations of simple manoeuvres.

The main contribution of this paper is proving that the target state distribution after an ownship manoeuvre can be described in LPC with closed-form expressions, derived in Section III. Closed-form expressions allow us to better understand the relationships between target state variables. We demonstrate the advantage of this for tracking simulation in Section IV by implementing a Kalman Filter that incorporates the closed-form expressions, CFE-UKF, and comparing its capabilities to a pure UKF. Conclusions are presented in Section V.

II Passive Bearings-Only Tracking

II-A Straight Line Movement

The target's state in MPC [2] is described as a vector of

  • absolute bearing $\beta$, the angle from the receiver to the target measured from True North,

  • bearing rate $\dot{\beta}$,

  • scaled range rate $\dot{\rho} = \dot{r}/r$, where $r$ is range, and

  • inverse range $s = 1/r$.

Here and below, the overdot denotes the derivative with respect to time.

We define the target state at time $t$ as $\bm{x} = [\beta, \dot{\beta}, \dot{\rho}, s]$. If we assume that the ownship and target do not manoeuvre, the vessel trajectory in MPC is:

$$\begin{aligned}
\beta^{+} &= \beta + \arctan\frac{t_{\beta}}{1+t_{\rho}}, &
s^{+} &= \frac{s}{\sqrt{(1+t_{\rho})^{2}+t_{\beta}^{2}}}, \\
\dot{\beta}^{+} &= \frac{\dot{\beta}}{(1+t_{\rho})^{2}+t_{\beta}^{2}}, &
\dot{\rho}^{+} &= \frac{\dot{\rho}+\Delta t\,(\dot{\rho}^{2}+\dot{\beta}^{2})}{(1+t_{\rho})^{2}+t_{\beta}^{2}}.
\end{aligned} \tag{1}$$

Here $t^{+}$ is some later time, $\Delta t = t^{+}-t$, $t_{\beta} = \dot{\beta}\,\Delta t$, and $t_{\rho} = \dot{\rho}\,\Delta t$. The update equations in Eq. 1 are non-linear. To use them for target state updates in a sequential estimator, we must apply the EKF, UKF, or other advanced methods of non-linear estimation (e.g. [6]). By taking $\Delta t$ as the update interval of the EKF or UKF, Eq. 1 defines the state transition function for the filter.
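As a sanity check, the straight-leg update of Eq. 1 can be verified numerically against direct propagation in relative Cartesian coordinates. The sketch below uses an illustrative relative state (not taken from the paper) and assumes east-north relative coordinates with bearing measured from True North:

```python
import numpy as np

def mpc_straight_update(beta, beta_dot, rho_dot, s, dt):
    """Propagate the MPC state [beta, beta_dot, rho_dot, s] over a
    straight leg of duration dt, per Eq. (1)."""
    t_b, t_r = beta_dot * dt, rho_dot * dt
    d = (1 + t_r) ** 2 + t_b ** 2
    return (beta + np.arctan2(t_b, 1 + t_r),
            beta_dot / d,
            (rho_dot + dt * (rho_dot ** 2 + beta_dot ** 2)) / d,
            s / np.sqrt(d))

def mpc_from_cartesian(x, y, vx, vy):
    """MPC state from relative Cartesian position (east, north) and velocity."""
    r2 = x * x + y * y
    return (np.arctan2(x, y),            # bearing from True North
            (vx * y - vy * x) / r2,      # bearing rate
            (x * vx + y * vy) / r2,      # scaled range rate r'/r
            1.0 / np.sqrt(r2))           # inverse range

# Illustrative relative state: 1000 m east, 2000 m north, velocity (5, -3) m/s
x0 = mpc_from_cartesian(1000.0, 2000.0, 5.0, -3.0)
x1 = mpc_straight_update(*x0, dt=10.0)

# Ground truth: propagate in Cartesian, then convert back to MPC
x1_ref = mpc_from_cartesian(1000.0 + 5.0 * 10, 2000.0 - 3.0 * 10, 5.0, -3.0)
assert np.allclose(x1, x1_ref)
```

The two routes agree to machine precision, confirming that Eq. 1 is an exact solution on a straight leg rather than a small-$\Delta t$ approximation.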

The bearing transition equation for $\beta^{+}$ in Eq. 1 does not contain the initial value of inverse range $s$. Hence, we cannot determine the range from BO measurements when both the ownship and target move in straight lines.

II-B Equations of Motion for Manoeuvring Platforms

Assume that at the initial time $t$ the ownship position and speed in a fixed Cartesian coordinate system are $x_{\text{O}}$, $y_{\text{O}}$, $v_{x\text{O}}$, and $v_{y\text{O}}$, and the corresponding position and speed at a later time $t^{+}$ are $x^{+}_{\text{O}}$, $y^{+}_{\text{O}}$, $v^{+}_{x\text{O}}$, and $v^{+}_{y\text{O}}$. Define the ownship displacement vector and speed change vector as

$$\begin{aligned}
\bm{w}_{\text{O}} &= \left[x^{+}_{\text{O}}-x_{\text{O}}-v_{x\text{O}}\Delta t,\; y^{+}_{\text{O}}-y_{\text{O}}-v_{y\text{O}}\Delta t\right], \\
\dot{\bm{w}}_{\text{O}} &= \left[v^{+}_{x\text{O}}-v_{x\text{O}},\; v^{+}_{y\text{O}}-v_{y\text{O}}\right].
\end{aligned} \tag{2}$$

Similarly, define $\bm{w}_{\text{T}}$ and $\dot{\bm{w}}_{\text{T}}$ for the target displacement. The overall acceleration-related displacement and speed change are

$$\begin{aligned}
\bm{a} &= \bm{w}_{\text{T}}-\bm{w}_{\text{O}}, \\
\dot{\bm{a}} &= \dot{\bm{w}}_{\text{T}}-\dot{\bm{w}}_{\text{O}}.
\end{aligned} \tag{3}$$

Using these definitions, target movement in MPC in the presence of target and ownship manoeuvres follows the equations

$$\begin{aligned}
\beta^{+} &= \beta + \arctan\frac{D_{\beta}}{D_{\rho}}, &
s^{+} &= \frac{s}{\left(D^{2}_{\beta}+D^{2}_{\rho}\right)^{1/2}}, \\
\dot{\beta}^{+} &= \frac{\dot{D}_{\beta}D_{\rho}-\dot{D}_{\rho}D_{\beta}}{D^{2}_{\beta}+D^{2}_{\rho}}, &
\dot{\rho}^{+} &= \frac{\dot{D}_{\rho}D_{\rho}+\dot{D}_{\beta}D_{\beta}}{D^{2}_{\beta}+D^{2}_{\rho}},
\end{aligned} \tag{4}$$

where

$$\begin{aligned}
D_{\beta} &= t_{\beta}+s\,(\bm{a}\cdot\bm{n}), &
\dot{D}_{\beta} &= \dot{\beta}+s\,(\dot{\bm{a}}\cdot\bm{n}), \\
D_{\rho} &= 1+t_{\rho}+s\,(\bm{a}\cdot\bm{q}), &
\dot{D}_{\rho} &= \dot{\rho}+s\,(\dot{\bm{a}}\cdot\bm{q}),
\end{aligned} \tag{5}$$

and the unit vectors $\bm{q}$ and $\bm{n}$ point, respectively, from the ownship to the target and orthogonally to that direction at time $t$:

$$\begin{aligned}
\bm{q} &= \left[\sin\beta,\; \cos\beta\right], \\
\bm{n} &= \left[\cos\beta,\; -\sin\beta\right].
\end{aligned} \tag{6}$$

The bearing update in Eq. 4 depends on the initial target range via $s$. If we know the ownship manoeuvres, we can estimate $s$, assuming that the target moves in a straight line. This conclusion is often generalised as "the ownship must outmanoeuvre the target to determine its range" [2]. For a non-manoeuvring target, $\bm{w}_{\text{T}} = \dot{\bm{w}}_{\text{T}} = 0$, so the subscript "O" can be omitted in Eqs. 2 and 3 without causing ambiguity. From the Kalman Filter viewpoint, the acceleration-related terms $(\bm{a}\cdot\bm{n})$, $(\dot{\bm{a}}\cdot\bm{n})$, $(\bm{a}\cdot\bm{q})$, and $(\dot{\bm{a}}\cdot\bm{q})$ now represent control inputs to the system.

II-C Modified Polar Coordinates Versus Log-Polar Coordinates

BO trackers commonly use bearing, bearing rate, and scaled range rate as state variables because these values are directly observable on a straight leg of the ownship [2]. The choice of the fourth state component is not fixed. One conventional choice is MPC, in which we use inverse range $s$ as the fourth coordinate, as in Sections II-A and II-B. An alternative is to select the logarithm of range, $\rho = \ln r$, as the fourth state component; this gives LPC with the state vector $\bm{x} = [\beta, \dot{\beta}, \dot{\rho}, \rho]$.

The choice of LPC has several theoretical advantages over MPC. Firstly, it allows the derivation of closed-form expressions for the PCRB in a BO tracking problem [4]. Secondly, the LPC state vector is more intuitive: it contains two values (bearing and log-range) and their time derivatives, while MPC mixes $s$ and $\dot{\rho}$. Finally, a Gaussian distribution of the fourth state component $\rho$ never yields zero or negative range values, which is always theoretically possible in MPC at the tails of the distribution. A corollary is that sigma points of the UKF algorithm [7] never end up at infeasible negative ranges.

The transition from MPC to LPC is simple: we describe the target motion in LPC by the system of differential equations

$$\frac{d}{dt}\begin{bmatrix}\beta\\ \dot{\beta}\\ \dot{\rho}\\ \rho\end{bmatrix} = \begin{bmatrix}\dot{\beta}\\ -2\dot{\beta}\dot{\rho}\\ \dot{\beta}^{2}-\dot{\rho}^{2}\\ \dot{\rho}\end{bmatrix}, \tag{7}$$

obtained from Eqs. (11) and (18) of [2] by substituting $s = e^{-\rho}$. The same substitution into Eqs. 1, 4 and 5 provides the solutions of Eq. 7 for straight-moving and manoeuvring vessels, respectively. For use in the UKF, one should also adapt the range-related elements of the process noise matrix $Q$.
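The consistency of Eq. 7 with the closed-form straight-leg solution can be checked numerically. The sketch below integrates Eq. 7 with a hand-rolled RK4 scheme and compares the result with Eq. 1 rewritten in LPC via $s = e^{-\rho}$ (so $\rho^{+} = \rho + \tfrac{1}{2}\ln[(1+t_{\rho})^{2}+t_{\beta}^{2}]$); the initial state is illustrative:

```python
import numpy as np

def lpc_rhs(state):
    """Right-hand side of Eq. (7): d/dt of [beta, beta_dot, rho_dot, rho]."""
    _, bd, rd, _ = state
    return np.array([bd, -2.0 * bd * rd, bd * bd - rd * rd, rd])

def rk4(state, t_end, n_steps=1000):
    """Classical fourth-order Runge-Kutta integration of Eq. (7)."""
    h = t_end / n_steps
    for _ in range(n_steps):
        k1 = lpc_rhs(state)
        k2 = lpc_rhs(state + 0.5 * h * k1)
        k3 = lpc_rhs(state + 0.5 * h * k2)
        k4 = lpc_rhs(state + h * k3)
        state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

def lpc_closed_form(state, dt):
    """Closed-form straight-leg solution: Eq. (1) with s = exp(-rho)."""
    beta, bd, rd, rho = state
    t_b, t_r = bd * dt, rd * dt
    d = (1 + t_r) ** 2 + t_b ** 2
    return np.array([beta + np.arctan2(t_b, 1 + t_r),
                     bd / d,
                     (rd + dt * (rd * rd + bd * bd)) / d,
                     rho + 0.5 * np.log(d)])

# Illustrative LPC state: bearing 0.4637 rad, range ~2236 m
x0 = np.array([0.4637, 2.6e-3, -2.0e-4, np.log(2236.1)])
assert np.allclose(rk4(x0, 10.0), lpc_closed_form(x0, 10.0), atol=1e-9)
```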

In accordance with the assumptions of Kalman Filter theory, we model the target state vector in LPC, $\bm{x} = [\beta, \dot{\beta}, \dot{\rho}, \rho]$, as a Gaussian random variable:

$$\bm{x} \sim \mathcal{N}(\bm{x};\bm{\mu},\Sigma). \tag{8}$$

With LPC in mind, we return to the equations of motion and make a key simplification.

II-D Instant Manoeuvres

Assume that the ownship makes an elementary manoeuvre: an instant turn. The state space variables after the manoeuvre, $[\beta^{+}, \dot{\beta}^{+}, \dot{\rho}^{+}, \rho^{+}]$, are related to the corresponding values before the turn, $[\beta, \dot{\beta}, \dot{\rho}, \rho]$, by Eqs. 4 and 5 with $\Delta t = 0$. The target coordinates and speed do not change, and neither do the ownship coordinates, as the turn is instant. Therefore, the only non-zero acceleration-related term is $\dot{\bm{a}} = -\dot{\bm{w}} = -\left[\Delta v_{x}, \Delta v_{y}\right]$, and Eq. 4 simplifies to

$$\begin{aligned}
\beta^{+} &= \beta, &
\rho^{+} &= \rho, \\
\dot{\beta}^{+} &= \dot{\beta}-\frac{(\dot{\bm{w}}\cdot\bm{n})}{r}, &
\dot{\rho}^{+} &= \dot{\rho}-\frac{(\dot{\bm{w}}\cdot\bm{q})}{r}.
\end{aligned} \tag{9}$$

The properties of instant manoeuvres can be used to model non-instant, gradual manoeuvres by discretisation. Any ownship trajectory can be approximated as a series of instant manoeuvres interwoven with straight leg motion. The total cumulative duration of the straight legs should equal the duration of the original complete manoeuvre. Using sufficiently short legs, we can achieve the desired accuracy of approximation.
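A minimal numerical sketch of the instant-turn update of Eq. 9 follows, verified against direct recomputation of the relative-velocity state in Cartesian coordinates. The relative state and the ownship velocity change are illustrative values, not from the paper:

```python
import numpy as np

def instant_turn_lpc(state, dvx, dvy):
    """Instant-manoeuvre update, Eq. (9), for LPC state [beta, beta_dot, rho_dot, rho].
    (dvx, dvy) is the ownship velocity change (east, north)."""
    beta, bd, rd, rho = state
    r = np.exp(rho)
    n_dot_w = dvx * np.cos(beta) - dvy * np.sin(beta)   # w_dot . n
    q_dot_w = dvx * np.sin(beta) + dvy * np.cos(beta)   # w_dot . q
    return np.array([beta, bd - n_dot_w / r, rd - q_dot_w / r, rho])

# Illustrative relative state: target at (1000, 2000) m, relative velocity (5, -3) m/s
x, y, vx, vy = 1000.0, 2000.0, 5.0, -3.0
r2 = x * x + y * y
state = np.array([np.arctan2(x, y), (vx * y - vy * x) / r2,
                  (x * vx + y * vy) / r2, 0.5 * np.log(r2)])

# Ownship speeds up by (2, 1) m/s; relative velocity drops by the same amount
post = instant_turn_lpc(state, 2.0, 1.0)
vx2, vy2 = vx - 2.0, vy - 1.0
ref = np.array([np.arctan2(x, y), (vx2 * y - vy2 * x) / r2,
                (x * vx2 + y * vy2) / r2, 0.5 * np.log(r2)])
assert np.allclose(post, ref)
```

As Eq. 9 predicts, only the two rate components change; bearing and log-range pass through the turn untouched.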

With all the preliminaries in place, we now begin to explore the state space distribution from a statistical perspective.

III Closed-Form Expressions for the Post-Manoeuvre Distribution

III-A The Post-Manoeuvre Distribution

Gaussian approximations of the distribution of state space variables are ubiquitous in filtering, for example, when using UKF [7] or ensemble methods [8]. Assume the pre-manoeuvre distribution of the state space variables is Gaussian. What about the post-manoeuvre distribution?

Proposition 1 (Post-manoeuvre distribution).

Let $f_{x}$ denote the pre-manoeuvre probability density function of the target in coordinates $\bm{x}$ (either MPC or LPC). Then the post-manoeuvre distribution is given by

$$f_{x^{+}}(\bm{x}^{+}) = f_{x}(\bm{h}^{-1}(\bm{x}^{+})),$$

where

$$\bm{h}^{-1}(\bm{x}^{+}) = \begin{bmatrix}\beta^{+}\\ \dot{\beta}^{+}+f_{1}(\beta^{+},\rho^{+})\\ \dot{\rho}^{+}+f_{2}(\beta^{+},\rho^{+})\\ \rho^{+}\end{bmatrix}, \tag{10}$$

and

$$\begin{aligned}
f_{1}(\beta,\rho) &= e^{-\rho}(\Delta v_{x}\cos\beta-\Delta v_{y}\sin\beta), \\
f_{2}(\beta,\rho) &= e^{-\rho}(\Delta v_{x}\sin\beta+\Delta v_{y}\cos\beta).
\end{aligned} \tag{11}$$

If working in MPC, the fourth coordinate of $\bm{x}^{+}$ is $s^{+}$, and the factors $e^{-\rho}$ in $f_{1}$ and $f_{2}$ are rewritten as $s$.

Proof.

Define the coordinate transformation of Eq. 9 from the pre-manoeuvre distribution in $\bm{x}$ to the post-manoeuvre distribution in $\bm{x}^{+}$ as the mapping $\bm{h}(\bm{x})$:

$$\bm{h}(\bm{x}) = \begin{bmatrix}\beta\\ \dot{\beta}-f_{1}(\beta,\rho)\\ \dot{\rho}-f_{2}(\beta,\rho)\\ \rho\end{bmatrix}. \tag{12}$$

By inspection, $\bm{h}^{-1}$ defined in the proposition is the inverse of $\bm{h}$. The multivariate change-of-variables theorem then expresses the post-manoeuvre distribution as

$$f_{x^{+}}(\bm{x}^{+}) = f_{x}(\bm{h}^{-1}(\bm{x}^{+}))\,\lvert\det\mathcal{J}(\bm{x}^{+})\rvert, \tag{13}$$

where $\mathcal{J}$ is the Jacobian matrix

$$\mathcal{J}(\bm{x}^{+}) = \begin{bmatrix}1&0&0&0\\ -f_{2}(\beta,\rho)&1&0&-f_{1}(\beta,\rho)\\ f_{1}(\beta,\rho)&0&1&-f_{2}(\beta,\rho)\\ 0&0&0&1\end{bmatrix}.$$

It is easy to check that $\mathcal{J}$ has determinant 1, so Eq. 13 reduces to the stated form. ∎

As the transformation of Eq. 12 does not alter the bearing $\beta^{+}$ or the range coordinate ($\rho^{+}$ or $s^{+}$), the marginal post-manoeuvre distribution of these two variables is unchanged from the pre-manoeuvre distribution. Information about the target range comes from the entanglement of $\rho^{+}$ or $s^{+}$ with $\dot{\beta}^{+}$ and $\dot{\rho}^{+}$. We now lay out some special cases in which the post-manoeuvre distribution can be found exactly.

Proposition 2.

Assume that the pre-manoeuvre distribution (in either LPC or MPC) is well approximated by the Gaussian distribution $\bm{x} \sim \mathcal{N}(\bm{\mu},\Sigma)$, so Proposition 1 provides

$$f_{x^{+}}(\bm{x}^{+}) \propto \exp\left\{-\frac{1}{2}\left[\bm{y}^{\text{T}}\Sigma^{-1}\bm{y}\right]\right\}, \tag{14}$$

where $\bm{y} = \bm{h}^{-1}(\bm{x}^{+})-\bm{\mu}$.

  1. If the coordinate system is MPC ($x_{4} = s$), then

     (i) the post-manoeuvre distribution is non-Gaussian, but

     (ii) the post-manoeuvre distribution is Gaussian if conditioned on the bearing $\beta$. In this case, the post-manoeuvre distribution in MPC is

     $$\begin{pmatrix}\dot{\beta}\\ \dot{\rho}\\ s\end{pmatrix} \sim \mathcal{N}\left(\begin{pmatrix}\mu_{\dot{\beta}}-c_{1}\mu_{s}\\ \mu_{\dot{\rho}}-c_{2}\mu_{s}\\ \mu_{s}\end{pmatrix},\; \mathbf{P}\Sigma_{-\beta}\mathbf{P}^{\text{T}}\right).$$

     In this distribution, $c_{1} = \Delta v_{x}\cos\beta-\Delta v_{y}\sin\beta$ and $c_{2} = \Delta v_{x}\sin\beta+\Delta v_{y}\cos\beta$ denote the constant factors in Eq. 11, $\Sigma_{-\beta}$ is the covariance matrix $\Sigma$ with the first row and column removed, and

     $$\mathbf{P} = \begin{pmatrix}1&0&-c_{1}\\ 0&1&-c_{2}\\ 0&0&1\end{pmatrix}.$$

     It is important to recall that $s$ is strictly positive, and any significant probability mass at $s<0$ should formally be treated by the substitution $s\rightarrow-s$, $\beta\rightarrow\beta\pm\pi$.

  2. If the coordinate system is LPC ($x_{4} = \rho$), then

     (i) the post-manoeuvre distribution is non-Gaussian, but

     (ii) the post-manoeuvre distribution is Gaussian if conditioned on $\beta$ and $\rho$, and in this case

     $$\begin{pmatrix}\dot{\beta}\\ \dot{\rho}\end{pmatrix} \sim \mathcal{N}\left(\begin{pmatrix}\mu_{\dot{\beta}}+f_{1}(\beta,\rho)\\ \mu_{\dot{\rho}}+f_{2}(\beta,\rho)\end{pmatrix},\; \Sigma_{-\beta\rho}\right),$$

     where $\Sigma_{-\beta\rho}$ is the $2\times 2$ covariance matrix formed by omitting the first and last rows and columns of $\Sigma$.

The proof of Proposition 2 is in Appendix A-1. To understand the proposition clearly, consider an illustrative example: a nearby, poorly estimated target. In Cartesian coordinates, the pre-manoeuvre distribution estimates that the target is at $(100,150)$ m from the ownship, but with a standard deviation of $400$ m. Assume that $\beta$ is fairly accurately known from data. Then Fig. 1 shows how unusual the MPC and LPC distributions appear in practice. The LPC distribution, which is the topic of this paper, has many samples with large deviations from the sample mean; that is, the LPC distribution has large kurtosis.

Figure 1: Visualisation by samples of the post-manoeuvre distribution in MPC and LPC. The colour of each sample indicates the estimated range of the target in $s$ or $\rho$. As described below Proposition 2, the unusual situation in play is one in which the actual distance to the target is less than the uncertainty in the target position. The MPC distribution consists of two roughly Gaussian blobs, which are separated by $\pi$ in $\beta$ and represent the probability mass on each side of the ownship. The LPC distribution has clear skew and kurtosis; that is, there are rare, large deviations from the sample mean. Notice in both plots that there is a clear correlation between $\dot{\beta}$ and the range, shown by the colour banding; as explained in Proposition 2, this correlation has been created by the manoeuvre.

Altering the initial position of the target to $(1000,1500)$ m, and preserving all other parameters, we obtain the roughly identical Gaussian distributions for MPC and LPC shown in Fig. 2. Full details for Figs. 1 and 2 are given in Appendix A-2.

Figure 2: Visualisation by samples of the post-manoeuvre distribution in MPC and LPC. The colour of each sample indicates the estimated range. As described below Proposition 2, this target is located far from the ownship. The MPC and LPC distributions are approximately identical.

Our goal in investigating the post-manoeuvre distribution is to assess the viability of Kalman Filter approaches to target tracking. Such approaches work by estimating the target mean and covariance. Our concern is that those estimates can be inadequate or misleading if the underlying distribution has, for example, significant kurtosis. Proposition 2 establishes some special cases in which we do understand the distribution, but Figs. 1 and 2 demonstrate that, in the general case, the LPC post-manoeuvre distribution may or may not resemble a Gaussian distribution. To use a Kalman Filter in LPC with confidence in the general case, we need to know whether a Gaussian approximation is defensible. For this reason, the main body of the paper derives the moments of the post-manoeuvre distribution. The skew and kurtosis of the post-manoeuvre distribution allow monitoring of whether a Kalman Filter estimate can be trusted.
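The near/far contrast of Figs. 1 and 2 can be reproduced in miniature by sampling the post-manoeuvre bearing rate of Eq. 9 and measuring its excess kurtosis. The geometry and turn below are illustrative stand-ins for the scenarios of Appendix A-2, and a zero pre-turn bearing rate is assumed for brevity:

```python
import numpy as np

def post_turn_kurtosis(mean_xy, sigma, dv=(2.0, 1.0), n=200_000, seed=0):
    """Sample the post-manoeuvre bearing rate of Eq. (9) for a target whose
    relative Cartesian position is Gaussian, and return its excess kurtosis."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mean_xy[0], sigma, n)
    y = rng.normal(mean_xy[1], sigma, n)
    beta, r = np.arctan2(x, y), np.hypot(x, y)
    # Eq. (9) with zero pre-turn bearing rate: only the -(w_dot . n)/r term remains
    bd_plus = -(dv[0] * np.cos(beta) - dv[1] * np.sin(beta)) / r
    z = (bd_plus - bd_plus.mean()) / bd_plus.std()
    return np.mean(z ** 4) - 3.0

near = post_turn_kurtosis((100.0, 150.0), 400.0)    # range comparable to uncertainty
far = post_turn_kurtosis((1000.0, 1500.0), 400.0)   # range well beyond uncertainty

# The nearby target produces the heavy-tailed, high-kurtosis LPC behaviour of Fig. 1
assert near > 10 and near > far
```

The heavy tails in the nearby case come from samples that land close to the ownship, where the $1/r$ factor in Eq. 9 blows up; exactly the mechanism behind the rare, large deviations visible in Fig. 1.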

III-B Mean and Covariance of the Post-Manoeuvre Distribution

Define the pre-manoeuvre mean and covariance of the target state vector as $\bm{\mu}_{x}$ and $\Sigma_{x}$; we seek the corresponding post-manoeuvre mean and covariance, $\bm{\mu}^{+}_{x}$ and $\Sigma^{+}_{x}$.

From Eq. 12, observe that any post-manoeuvre state space variable $x_{k}^{+}$, where $k\in\{0,1,2,3\}$, is defined in terms of its pre-manoeuvre value, bearing, and range as

$$x_{k}^{+}(x_{k}) = x_{k}+a_{k}\frac{\sin\beta}{r}+b_{k}\frac{\cos\beta}{r}, \tag{15}$$

where $a_{0}=a_{3}=b_{0}=b_{3}=0$, $a_{1}=\Delta v_{y}$, $b_{1}=-\Delta v_{x}$, $a_{2}=-\Delta v_{x}$, and $b_{2}=-\Delta v_{y}$.

The following results present the exact mean and covariance for the post-manoeuvre distribution.

Theorem 1 (Mean of the post-manoeuvre distribution).

The expected value of a post-manoeuvre variable $x_{k}^{+}$ defined in Eq. 15 is

$$\mathbb{E}\left[x_{k}^{+}\right] = \mathbb{E}\left[x_{k}\right]+e^{-\mu_{\rho}-\frac{1}{2}(\sigma_{\beta\beta}-\sigma_{\rho\rho})}\left(a_{k}\sin(\mu_{\beta}-\sigma_{\beta\rho})+b_{k}\cos(\mu_{\beta}-\sigma_{\beta\rho})\right). \tag{16}$$
Proof.

The integrals for the mean values take the form

$$\mathbb{E}\left[x_{k}^{+}\right] = \int x_{k}^{+}(\bm{x})\,\mathcal{N}(\bm{x};\bm{\mu},\Sigma)\,d\bm{x}. \tag{17}$$

For $k\in\{0,3\}$ (bearing and log-range), $x_{k}^{+}=x_{k}$, hence $\mathbb{E}\left[x^{+}_{k}\right]=\mu_{k}$.

Now consider the mean for the bearing rate, $k=1$. Substituting Eq. 9 for $\dot{\beta}^{+}$ into Eq. 17, we get

$$\begin{aligned}
\mathbb{E}\left[\dot{\beta}^{+}\right] &= \int\dot{\beta}^{+}\,\mathcal{N}\left(\bm{x};\bm{\mu},\Sigma\right)d\bm{x} \\
&= \int\left(\dot{\beta}-\frac{\dot{\bm{w}}\cdot\bm{n}}{r}\right)\mathcal{N}\left(\bm{x};\bm{\mu},\Sigma\right)d\bm{x} \\
&= \mathbb{E}\left[\dot{\beta}\right]-\dot{\bm{w}}\cdot\mathbb{E}\left[\frac{\bm{n}}{r}\right].
\end{aligned} \tag{18}$$

If we substitute the inverse range $r^{-1}$ with $e^{-\rho}$ and the sine and cosine components of $\bm{n}$ with the real and imaginary parts of $e^{i\beta}$, the last term becomes a linear combination of the real and imaginary parts of $\mathbb{E}\left[e^{-\rho+i\beta}\right]$.

Note how this last expected value resembles the moment-generating function of the multivariate normal distribution, but with a complex-valued argument. As the case of a complex argument appears unnamed in the literature, we use the phrase 'Generalised Gaussian MGF'.

Proposition 3 (The ‘Generalised Gaussian MGF’ (GGMGF)).

Let $\bm{h}\in\mathbb{C}^{n}$ be fixed and $\bm{x}\in\mathbb{R}^{n}$ be a Gaussian random variable. Then

$$\mathbb{E}\left[e^{\bm{h}^{\text{T}}\bm{x}}\right] = e^{\bm{\mu}^{\text{T}}\bm{h}+\frac{1}{2}\bm{h}^{\text{T}}\Sigma\bm{h}}. \tag{19}$$

This generalises the Gaussian moment-generating function (MGF) to complex arguments. The proof is described in §1.2 of [9] and §2 of [10].

Corollary 1 (GGMGF for specific $\bm{h}$).

For any $\bm{h}=[in,0,0,-m]$, where $m,n\in\mathbb{N}\cup\{0\}$, the GGMGF has the form

$$\mathbb{E}\left[r^{-m}e^{in\beta}\right] = e^{-m\mu_{\rho}-\frac{1}{2}\left(n^{2}\sigma_{\beta\beta}-m^{2}\sigma_{\rho\rho}\right)+i(n\mu_{\beta}-mn\sigma_{\beta\rho})}. \tag{20}$$

The real and imaginary parts of Eq. 20 give $\mathbb{E}\left[r^{-m}\cos(n\beta)\right]$ and $\mathbb{E}\left[r^{-m}\sin(n\beta)\right]$, respectively.
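Corollary 1 is straightforward to check by Monte Carlo. The sketch below uses an illustrative joint Gaussian for $(\beta,\rho)$ and compares the sample mean of $r^{-1}e^{i\beta}$ with Eq. 20 for $m=n=1$:

```python
import numpy as np

# Illustrative joint Gaussian for (beta, rho): mean bearing 0.5 rad, log-range 7
mu_b, mu_r = 0.5, 7.0
s_bb, s_rr, s_br = 0.01, 0.04, 0.005
cov = [[s_bb, s_br], [s_br, s_rr]]

rng = np.random.default_rng(1)
beta, rho = rng.multivariate_normal([mu_b, mu_r], cov, size=500_000).T
mc = np.mean(np.exp(-rho + 1j * beta))          # sample E[r^-1 e^{i beta}], r = e^rho

# Eq. (20) with m = n = 1
closed = np.exp(-mu_r - 0.5 * (s_bb - s_rr) + 1j * (mu_b - s_br))
assert abs(mc - closed) < 0.01 * abs(closed)
```

The real and imaginary parts of `mc` estimate $\mathbb{E}[r^{-1}\cos\beta]$ and $\mathbb{E}[r^{-1}\sin\beta]$, matching the closed form to well within Monte Carlo noise.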

Now, continuing the proof, to obtain the expected value of $e^{-\rho+i\beta}$ we use Corollary 1 with $\bm{h}=[i,0,0,-1]$ to find

$$\mathbb{E}\left[e^{-\rho+i\beta}\right] = \exp\left(-\mu_{\rho}-\frac{\sigma_{\beta\beta}-\sigma_{\rho\rho}}{2}+i(\mu_{\beta}-\sigma_{\beta\rho})\right). \tag{21}$$

The respective real and imaginary parts are $\mathbb{E}\left[r^{-1}\cos\beta\right]$ and $\mathbb{E}\left[r^{-1}\sin\beta\right]$. Substitution of these expressions into Eq. 18 produces the mean of the post-manoeuvre bearing rate:

$$\mathbb{E}\left[\dot{\beta}^{+}\right] = \mathbb{E}\left[\dot{\beta}\right]-\left(\Delta v_{x}\cos(\mu_{\beta}-\sigma_{\beta\rho})-\Delta v_{y}\sin(\mu_{\beta}-\sigma_{\beta\rho})\right)e^{-\mu_{\rho}-\frac{1}{2}(\sigma_{\beta\beta}-\sigma_{\rho\rho})}. \tag{22}$$

We can further simplify Eq. 22 by describing the ownship change in velocity through the turning angle $\alpha$. Then $\Delta v_{x}=\Delta v\cos\alpha$ and $\Delta v_{y}=\Delta v\sin\alpha$, so that

$$\mathbb{E}\left[\dot{\beta}^{+}\right] = \mathbb{E}\left[\dot{\beta}\right]-\Delta v\,e^{-\mu_{\rho}-\frac{\sigma_{\beta\beta}-\sigma_{\rho\rho}}{2}}\cos(\alpha+\mu_{\beta}-\sigma_{\beta\rho}). \tag{23}$$

Finally, consider the scaled range rate, $k=2$. Using Eq. 9 for $\dot{\rho}^{+}$ in Eq. 17, and again employing Proposition 3, we obtain

$$\begin{aligned}
\mathbb{E}\left[\dot{\rho}^{+}\right] &= \int\dot{\rho}^{+}\,\mathcal{N}\left(\bm{x};\bm{\mu},\Sigma\right)d\bm{x} \\
&= \int\left(\dot{\rho}-s(\dot{\bm{w}}\cdot\bm{q})\right)\mathcal{N}\left(\bm{x};\bm{\mu},\Sigma\right)d\bm{x} \\
&= \mathbb{E}\left[\dot{\rho}\right]-\dot{\bm{w}}\cdot\mathbb{E}\left[e^{-\rho}\bm{q}\right] \\
&= \mathbb{E}\left[\dot{\rho}\right]-\left(\Delta v_{x}\sin(\mu_{\beta}-\sigma_{\beta\rho})+\Delta v_{y}\cos(\mu_{\beta}-\sigma_{\beta\rho})\right)e^{-\mu_{\rho}-\frac{1}{2}(\sigma_{\beta\beta}-\sigma_{\rho\rho})} \\
&= \mathbb{E}\left[\dot{\rho}\right]-\Delta v\,e^{-\mu_{\rho}-\frac{1}{2}(\sigma_{\beta\beta}-\sigma_{\rho\rho})}\sin(\alpha+\mu_{\beta}-\sigma_{\beta\rho}).
\end{aligned} \tag{24}$$

Equations 22 and 24 match Theorem 1 with the coefficients $a_{k}$ and $b_{k}$ from Eq. 15, thus completing the proof. ∎
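Theorem 1 can likewise be verified by Monte Carlo: sample the pre-manoeuvre LPC state, push each sample through the exact turn map of Eq. 9, and compare the sample means with Eqs. 23 and 24. The state statistics and turn parameters below are illustrative:

```python
import numpy as np

# Illustrative pre-manoeuvre LPC statistics: [beta, beta_dot, rho_dot, rho]
mu = np.array([0.5, 2.0e-3, -1.0e-3, 7.0])
cov = np.diag([0.01, 1.0e-6, 1.0e-6, 0.04])
cov[0, 3] = cov[3, 0] = 0.005                   # bearing / log-range coupling
dv, alpha = 5.0, 0.3                            # ownship turn: speed change, angle
dvx, dvy = dv * np.cos(alpha), dv * np.sin(alpha)

rng = np.random.default_rng(2)
b, bd, rd, rho = rng.multivariate_normal(mu, cov, size=500_000).T
inv_r = np.exp(-rho)
# Exact instant-turn map of Eq. (9) applied sample-by-sample
bd_post = bd - inv_r * (dvx * np.cos(b) - dvy * np.sin(b))
rd_post = rd - inv_r * (dvx * np.sin(b) + dvy * np.cos(b))

# Closed-form means from Eqs. (23) and (24)
scale = np.exp(-mu[3] - 0.5 * (cov[0, 0] - cov[3, 3]))
phase = alpha + mu[0] - cov[0, 3]
assert abs(bd_post.mean() - (mu[1] - dv * scale * np.cos(phase))) < 1e-4
assert abs(rd_post.mean() - (mu[2] - dv * scale * np.sin(phase))) < 1e-4
```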

We now derive the second raw moments of the post-manoeuvre distribution. By combining these second raw moments with the first raw moments of Theorem 1, we will also obtain the post-manoeuvre covariance matrix $\Sigma^{+}$.

Theorem 2 (Second raw moment of the post-manoeuvre distribution).

For any two post-manoeuvre state space variables $x_{j}^{+}$ and $x_{k}^{+}$, their second raw moment is

$$\begin{aligned}
\mathbb{E}\left[x_{j}^{+}x_{k}^{+}\right] ={}& \mathbb{E}\left[x_{j}x_{k}\right]+a_{k}\mathbb{E}\left[x_{j}\frac{\sin\beta}{r}\right]+b_{k}\mathbb{E}\left[x_{j}\frac{\cos\beta}{r}\right] \\
&+a_{j}\mathbb{E}\left[x_{k}\frac{\sin\beta}{r}\right]+b_{j}\mathbb{E}\left[x_{k}\frac{\cos\beta}{r}\right] \\
&+\frac{a_{j}a_{k}+b_{j}b_{k}}{2}\mathbb{E}\left[\frac{1}{r^{2}}\right]+\frac{b_{j}b_{k}-a_{j}a_{k}}{2}\mathbb{E}\left[\frac{\cos(2\beta)}{r^{2}}\right] \\
&+\frac{a_{j}b_{k}+a_{k}b_{j}}{2}\mathbb{E}\left[\frac{\sin(2\beta)}{r^{2}}\right].
\end{aligned} \tag{25}$$

All the expected values in this expression can be obtained in closed form using the GGMGF (Eq. 19).

Corollary 2 ($k=j$ case of the second raw moment).

If $k=j$ for the two state space variables in Theorem 2, then

$$\begin{aligned}
\mathbb{E}\left[\left(x_{j}^{+}\right)^{2}\right] ={}& \mathbb{E}\left[x_{j}^{2}\right]+2a_{j}\mathbb{E}\left[x_{j}\frac{\sin\beta}{r}\right]+2b_{j}\mathbb{E}\left[x_{j}\frac{\cos\beta}{r}\right] \\
&+\frac{a_{j}^{2}+b_{j}^{2}}{2}\mathbb{E}\left[\frac{1}{r^{2}}\right]+\frac{b_{j}^{2}-a_{j}^{2}}{2}\mathbb{E}\left[\frac{\cos(2\beta)}{r^{2}}\right] \\
&+a_{j}b_{j}\mathbb{E}\left[\frac{\sin(2\beta)}{r^{2}}\right].
\end{aligned} \tag{26}$$

Before proving Theorem 2, we need one more auxiliary result.

Lemma 1 (First derivative of the GGMGF for specific $\bm{h}$).

For any $\bm{h}=[in,0,0,-m]$, where $m,n\in\mathbb{N}\cup\{0\}$, the first derivative of the GGMGF with respect to $x_{j}$ has the form

$$\mathbb{E}\left[x_{j}r^{-m}e^{in\beta}\right] = (\mu_{j}-m\sigma_{\rho j}+in\sigma_{\beta j})\,\mathbb{E}\left[r^{-m}e^{in\beta}\right]. \tag{27}$$
Proof.

Consider the partial derivative of the GGMGF with respect to $h_{j}$. Swapping the order of expectation and differentiation, we have

$$\begin{aligned}
\mathbb{E}\left[x_{j}e^{\bm{h}^{\text{T}}\bm{x}}\right] &= \frac{\partial}{\partial h_{j}}\left(\mathbb{E}\left[e^{\bm{h}^{\text{T}}\bm{x}}\right]\right) \\
&= \frac{\partial}{\partial h_{j}}e^{\bm{\mu}^{\text{T}}\bm{h}+\frac{1}{2}\bm{h}^{\text{T}}\Sigma\bm{h}} \\
&= \left(\mu_{j}+\Sigma_{j}\bm{h}\right)e^{\bm{\mu}^{\text{T}}\bm{h}+\frac{1}{2}\bm{h}^{\text{T}}\Sigma\bm{h}},
\end{aligned}$$

where $\Sigma_{j}$ denotes the $j^{\text{th}}$ row of the matrix $\Sigma$. Substituting $\bm{h}=[in,0,0,-m]$ into this expression yields Eq. 27. ∎

Proof of Theorem 2.

In integral form, the second raw moments of the post-manoeuvre distribution are

$$\mathbb{E}\left[x_{j}^{+}x_{k}^{+}\right] = \int x_{j}^{+}x_{k}^{+}\,\mathcal{N}\left(\bm{x};\bm{\mu},\Sigma\right)d\bm{x}. \tag{28}$$

Substituting Eq. 15 for $x_{j}^{+}$ and $x_{k}^{+}$, expanding the product, interchanging the order of integration and expectation, and grouping the terms with the same expected values, we obtain the decomposition in Eq. 25. All the expected values in Theorem 2 are given in closed form by the real and imaginary parts of Eqs. 20 and 27 for the respective $m$ and $n$ indices. ∎

Corollary 2 immediately follows from Theorem 2 for $j=k$. Therefore, we have derived closed-form expressions for all the post-manoeuvre first and second (raw and central) moments.
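Corollary 2, together with Corollary 1 and Lemma 1, gives a fully closed-form second raw moment. The sketch below evaluates it for the post-manoeuvre bearing rate and confirms the result by Monte Carlo; all numerical values are illustrative:

```python
import numpy as np

def ggmgf(mu, cov, m, n):
    """E[r^{-m} e^{i n beta}] from Eq. (20); state order [beta, beta_dot, rho_dot, rho]."""
    return np.exp(-m * mu[3] - 0.5 * (n * n * cov[0, 0] - m * m * cov[3, 3])
                  + 1j * (n * mu[0] - m * n * cov[0, 3]))

def e_xj_ggmgf(mu, cov, j, m, n):
    """E[x_j r^{-m} e^{i n beta}] from Lemma 1 (Eq. 27)."""
    return (mu[j] - m * cov[3, j] + 1j * n * cov[0, j]) * ggmgf(mu, cov, m, n)

def second_moment_bd(mu, cov, dvx, dvy):
    """Closed-form E[(beta_dot+)^2] from Corollary 2 with a1 = dvy, b1 = -dvx."""
    a, b = dvy, -dvx
    g11 = e_xj_ggmgf(mu, cov, 1, 1, 1)          # E[beta_dot r^-1 e^{i beta}]
    g20 = ggmgf(mu, cov, 2, 0)                  # E[r^-2]
    g22 = ggmgf(mu, cov, 2, 2)                  # E[r^-2 e^{2 i beta}]
    return (mu[1] ** 2 + cov[1, 1]
            + 2 * a * g11.imag + 2 * b * g11.real
            + 0.5 * (a * a + b * b) * g20.real
            + 0.5 * (b * b - a * a) * g22.real
            + a * b * g22.imag)

# Illustrative pre-manoeuvre statistics with bearing-rate correlations
mu = np.array([0.5, 2.0e-3, -1.0e-3, 7.0])
cov = np.diag([0.01, 1.0e-6, 1.0e-6, 0.04])
cov[0, 3] = cov[3, 0] = 0.005
cov[0, 1] = cov[1, 0] = 1.0e-5
cov[1, 3] = cov[3, 1] = 5.0e-5

rng = np.random.default_rng(3)
b, bd, rd, rho = rng.multivariate_normal(mu, cov, size=500_000).T
bd_post = bd - np.exp(-rho) * (2.0 * np.cos(b) - 1.0 * np.sin(b))  # dvx=2, dvy=1

closed = second_moment_bd(mu, cov, 2.0, 1.0)
assert abs(np.mean(bd_post ** 2) - closed) < 0.05 * closed
```

Note that all expected values are obtained from the single complex-valued GGMGF evaluation; taking real and imaginary parts supplies the cosine and sine terms of Eq. 26 at once.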

III-C Higher-Order Moments of the Post-Manoeuvre Distribution

In Theorems 1 and 2, we see that the first two moments of the post-manoeuvre distribution are linear combinations of pre-manoeuvre moments. We generalise this to the arbitrary $N^{\text{th}}$ raw moment of the post-manoeuvre distribution.

Proposition 4 ($N^{\text{th}}$ raw moment of the post-manoeuvre distribution).

For post-manoeuvre state space variables $x_{k}^{+}$, each raised to a power $q_{k}$, $k=0,\dots,3$, such that $\sum_{k=0}^{3}q_{k}=N$, the $N^{\text{th}}$ raw moment is

$$\mathbb{E}\left[\prod_{k=0}^{3}(x_{k}^{+})^{q_{k}}\right] = \mathbb{E}\left[\prod_{k=0}^{3}\left(x_{k}+a_{k}\frac{\sin\beta}{r}+b_{k}\frac{\cos\beta}{r}\right)^{q_{k}}\right], \tag{29}$$

which, when expanded, yields a linear combination of

$$\begin{aligned}
E_{p_{0},p_{1},p_{2},p_{3},m,n}^{(s)} &= \mathbb{E}\left[\prod_{k=0}^{3}x_{k}^{p_{k}}\,r^{-m}\sin(n\beta)\right], \text{ and} \\
E_{p_{0},p_{1},p_{2},p_{3},m,n}^{(c)} &= \mathbb{E}\left[\prod_{k=0}^{3}x_{k}^{p_{k}}\,r^{-m}\cos(n\beta)\right],
\end{aligned} \tag{30}$$

where $\sum_{k=0}^{3}p_{k}+m=N$, for $0\leq n\leq m$.

Proof.

Consider the product

k=03(xk+aksinβr+bkcosβr)qk\prod_{k=0}^{3}\left(x_{k}+a_{k}\frac{\sin\beta}{r}+b_{k}\frac{\cos\beta}{r}\right)^{q_{k}}

from the right-hand side of Eq. 29. Expansion via the binomial theorem yields a linear combination of terms

k=03xkpkr(c+d)sincβcosdβ,\prod_{k=0}^{3}x_{k}^{p_{k}}r^{-(c+d)}\sin^{c}{\beta}\cos^{d}{\beta}, (31)

where k=03pk+c+d=N\sum_{k=0}^{3}p_{k}+c+d=N.

Using De Moivre’s formula, Euler’s formula, and the binomial theorem, we convert the powers sincβ\sin^{c}{\beta} and cosdβ\cos^{d}{\beta} to sums of sin(cβ)\sin(c^{*}\beta) and cos(dβ)\cos(d^{*}\beta), where c=0,,cc^{*}=0,\dots,c and d=0,,dd^{*}=0,\dots,d. Expanding the product of these sums and applying the product-to-sum trigonometric identities, we reduce each element 31 to a linear combination of terms

k=03xkpkrmsin(nβ),k=03xkpkrmcos(nβ),\prod_{k=0}^{3}x_{k}^{p_{k}}r^{-m}\sin\!{(n\beta)},\quad\prod_{k=0}^{3}x_{k}^{p_{k}}r^{-m}\cos\!{(n\beta)}, (32)

where m=c+dm=c+d, k=03pk+m=N\sum_{k=0}^{3}p_{k}+m=N, and 0nm0\leq n\leq m.

Therefore, using the linearity of expectation, the right-hand side of Eq. 29 is a linear combination of Ep0,p1,p2,p3,m,n(s)E_{p_{0},p_{1},p_{2},p_{3},m,n}^{(s)} and Ep0,p1,p2,p3,m,n(c)E_{p_{0},p_{1},p_{2},p_{3},m,n}^{(c)} defined in Proposition 4, which are the expected values of expressions 32. ∎
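The power-reduction step in the proof above is easy to sanity-check numerically. The sketch below verifies one instance of the conversion, sin²β cosβ = (cosβ − cos3β)/4, on a grid of angles; it is an illustration only, not part of the derivation.

```python
import numpy as np

# One instance of the power-reduction step in the proof:
# sin^2(b) * cos(b) = cos(b)/4 - cos(3b)/4,
# i.e. a product of powers of sin(b) and cos(b) rewritten as a linear
# combination of cos(n*b) with n <= c + d = 3.
beta = np.linspace(-np.pi, np.pi, 1001)
lhs = np.sin(beta) ** 2 * np.cos(beta)
rhs = (np.cos(beta) - np.cos(3 * beta)) / 4
assert np.allclose(lhs, rhs)
```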

The expected values in Proposition 4 can be found in closed form by recursive application of the following Lemma.

Lemma 2 (NthN^{\text{th}} partial derivative of the GMGF).

The NthN^{\text{th}} partial derivative of the GMGF, subject to k=03pk+m=N\sum_{k=0}^{3}p_{k}+m=N, pj0p_{j}\geq 0, and m0m\geq 0, is

𝔼[e𝒉T𝒙k=03xkpk]=\displaystyle\mathbb{E}\left[e^{\bm{h}^{\text{T}}\bm{x}}\prod_{k=0}^{3}x_{k}^{p_{k}}\right]= NΦ𝒙(𝒉)hjpjhkpk\displaystyle\frac{\partial^{N}\Phi_{\bm{x}}(\bm{h})}{\partial h_{j}^{p_{j}}\partial h_{k}^{p_{k}}\dots}
=\displaystyle= jkpjσjkN2Φ𝒙(𝒉)hjpj1hkpk1\displaystyle\sum_{j\neq k}p_{j}\sigma_{jk}\frac{\partial^{N-2}\Phi_{\bm{x}}(\bm{h})}{\partial h_{j}^{p_{j}-1}\partial h_{k}^{p_{k}-1}\dots} (33a)
+(pk1)σkkN2Φ𝒙(𝒉)hjpjhkpk2\displaystyle+(p_{k}-1)\sigma_{kk}\frac{\partial^{N-2}\Phi_{\bm{x}}(\bm{h})}{\partial h_{j}^{p_{j}}\partial h_{k}^{p_{k}-2}\dots} (33b)
+(μk+Σk𝒉)N1Φ𝒙(𝒉)hjpjhkpk1,\displaystyle+\left(\mu_{k}+\Sigma_{k}\bm{h}\right)\frac{\partial^{N-1}\Phi_{\bm{x}}(\bm{h})}{\partial h_{j}^{p_{j}}\partial h_{k}^{p_{k}-1}\dots}, (33c)

where kk is selected such that pk1p_{k}\geq 1, which is always possible when at least one pkp_{k} is positive. The index kk is not specific; any such index can be selected due to the symmetry of derivatives.

For the sake of brevity, the notation in Eq. 33 technically allows derivatives of negative order 1-1 if pj=0p_{j}=0 for some elements in Eq. 33a, or if pk=1p_{k}=1 in Eq. 33b. However, such terms are eliminated by the corresponding multipliers in front: pj=0p_{j}=0 in Eq. 33a and pk1=0p_{k}-1=0 in Eq. 33b.

Proof.

For N=1N=1, we re-write Eq. 27 as

hkΦ𝒙(𝒉)=(μk+Σk𝒉)Φ𝒙(𝒉).\frac{\partial}{\partial h_{k}}\Phi_{\bm{x}}(\bm{h})=\left(\mu_{k}+\Sigma_{k}\bm{h}\right)\Phi_{\bm{x}}(\bm{h}). (34)

For N=2N=2, we take the second partial derivative to obtain

2hkhjΦ𝒙(𝒉)\displaystyle\frac{\partial^{2}}{\partial h_{k}\partial h_{j}}\Phi_{\bm{x}}(\bm{h}) =hk(μj+Σj𝒉)Φ𝒙(𝒉)\displaystyle=\frac{\partial}{\partial h_{k}}\left(\mu_{j}+\Sigma_{j}\bm{h}\right)\Phi_{\bm{x}}(\bm{h})
=(σjk+(μj+Σj𝒉)(μk+Σk𝒉))Φ𝒙(𝒉)\displaystyle=\bigl{(}\sigma_{jk}+\left(\mu_{j}+\Sigma_{j}\bm{h}\right)\left(\mu_{k}+\Sigma_{k}\bm{h}\right)\bigr{)}\Phi_{\bm{x}}(\bm{h})
=σjkΦ𝒙(𝒉)+(μk+Σk𝒉)hjΦ𝒙(𝒉).\displaystyle=\leavevmode\resizebox{258.36667pt}{}{$\sigma_{jk}\Phi_{\bm{x}}(\bm{h})+\left(\mu_{k}+\Sigma_{k}\bm{h}\right)\frac{\partial}{\partial h_{j}}\Phi_{\bm{x}}(\bm{h})$}. (35)

These expressions start the recursion of Eq. 33: for jkj\neq k, the first term of Eq. 35 yields Eq. 33a, and for j=kj=k, it yields Eq. 33b. Eq. 34 and the last term of Eq. 35 produce Eq. 33c for the respective N=1,2N=1,2.

The proof for a general order NN is obtained through direct differentiation of Eq. 33 by hmh_{m} for some arbitrary mm: the partial derivatives in Eqs. 33a to 33c increase the order of their hmh_{m}-th derivatives by one. Then the derivative of the μk+Σk𝒉\mu_{k}+\Sigma_{k}\bm{h} term in Eq. 33c yields one more summand with a σkm\sigma_{km} multiplier. If kmk\neq m, it is combined with the corresponding j=mj=m term of Eq. 33a to increase its pmp_{m} multiplier in front to pm=pm+1p^{\prime}_{m}=p_{m}+1. Alternatively, for k=mk=m the new summand is combined with Eq. 33b to increase the pk1=pm1p_{k}-1=p_{m}-1 multiplier to pm=pm1p_{m}=p^{\prime}_{m}-1. In both cases, we obtain Eq. 33 with substitutions NN=N+1N\Rightarrow N^{\prime}=N+1 and pmpm=pm+1p_{m}\Rightarrow p^{\prime}_{m}=p_{m}+1, which is the next order of recursion. ∎

Substituting 𝒉=[in,0,0,m]\bm{h}=[in,0,0,-m] into Eq. 33 recursively yields the expected values from Proposition 4.
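In practice, the recursion of Eq. 33 can also be evaluated numerically rather than symbolically. The sketch below is our own minimal implementation (the function name and the grouping of terms are ours): it uses the equivalent single-step form obtained by applying the Leibniz rule to the first derivative of the GMGF, and accepts complex 𝒉\bm{h} as required by the substitution above.

```python
import numpy as np

def gmgf_moment(p, h, mu, Sigma):
    """E[exp(h.x) * prod_k x_k^p_k] for x ~ N(mu, Sigma), evaluated by the
    recursion of Eq. 33; h may be complex (no conjugation is applied)."""
    p = list(p)
    if all(pk == 0 for pk in p):
        # Base case: the Gaussian MGF, exp(mu.h + h'Sigma h / 2).
        return np.exp(mu @ h + 0.5 * h @ Sigma @ h)
    k = next(i for i, pk in enumerate(p) if pk > 0)
    p_minus = p.copy()
    p_minus[k] -= 1
    # Leading term: (mu_k + (Sigma h)_k) times the lower-order moment.
    out = (mu[k] + Sigma[k] @ h) * gmgf_moment(p_minus, h, mu, Sigma)
    # Cross terms from differentiating the (mu_k + (Sigma h)_k) factor.
    for j, pj in enumerate(p_minus):
        if pj > 0:
            p2 = p_minus.copy()
            p2[j] -= 1
            out += pj * Sigma[k, j] * gmgf_moment(p2, h, mu, Sigma)
    return out

# Sanity checks against standard Gaussian moments.
mu0, S0, h0 = np.zeros(4), np.eye(4), np.zeros(4)
assert np.isclose(gmgf_moment([4, 0, 0, 0], h0, mu0, S0), 3.0)   # E[x^4] = 3
assert np.isclose(gmgf_moment([2, 2, 0, 0], h0, mu0, S0), 1.0)   # independence
```

At 𝒉=[in,0,0,m]\bm{h}=[in,0,0,-m], the real and imaginary parts of the returned value give the cosine and sine expectations of Proposition 4.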

By Proposition 4, any NthN^{\text{th}} order raw moment is

𝔼[k=03(xk+)qk]\displaystyle\mathbb{E}\left[\prod_{k=0}^{3}(x_{k}^{+})^{q_{k}}\right] ={q,m}n=0m(Sp0,,p3,m,n(p0,,p3)Ep0,,p3,m,n(s)\displaystyle=\sum_{\{q,m\}}\sum_{n=0}^{m}\left(S_{p_{0},\dots,p_{3},m,n}^{(p_{0},\dots,p_{3})}E_{p_{0},\dots,p_{3},m,n}^{(s)}\right.
+Cp0,,p3,m,n(p0,,p3)Ep0,,p3,m,n(c)).\displaystyle\left.+C_{p_{0},\dots,p_{3},m,n}^{(p_{0},\dots,p_{3})}E_{p_{0},\dots,p_{3},m,n}^{(c)}\right). (36)

The first sum considers all possible combinations of pkp_{k}, k=0,,3k=0,\dots,3 and mm subject to the conditions k=03pk+m=N\sum_{k=0}^{3}p_{k}+m=N, pk0p_{k}\geq 0, m0m\geq 0. The coefficients Sp0,,p3,m,n(p0,,p3)S_{p_{0},\dots,p_{3},m,n}^{(p_{0},\dots,p_{3})} and Cp0,,p3,m,n(p0,,p3)C_{p_{0},\dots,p_{3},m,n}^{(p_{0},\dots,p_{3})} are products of aka_{k} and bkb_{k} and can be found from Eqs. 29 and 15.

We demonstrated that any higher-order moment of the post-manoeuvre distribution can be found in closed form by recursive application of Eqs. 33 and 36. However, as the order of the moment increases, so does the complexity of its closed-form expression. Explicit calculation of the coefficients Sp0,,p3,m,n(p0,,p3)S_{p_{0},\dots,p_{3},m,n}^{(p_{0},\dots,p_{3})} and Cp0,,p3,m,n(p0,,p3)C_{p_{0},\dots,p_{3},m,n}^{(p_{0},\dots,p_{3})} quickly becomes prohibitive. Combined with the non-Gaussianity proven in Section III-A, which leaves us no existing statistical results to draw on, this motivates turning to computer algebra to derive specific cases of Eq. 36. We have developed a package in SageMath [11] to generate these moments and export them as LaTeX or Python code. A demonstration of this package is shown in Appendix B.
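The base-case expectations that seed these recursions can also be verified independently by Monte-Carlo simulation. In the sketch below all numerical parameter values are invented for illustration; the closed form being checked is E[r^{-m}e^{inβ}] = Φ𝒙(𝒉) at 𝒉=[in,0,0,−m], with r = e^ρ and the state ordered as (β, β̇, ρ̇, ρ), consistent with the substitution above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented Gaussian prior for the state x = (beta, beta_dot, rho_dot, rho).
mu = np.array([0.6, 0.0, 0.0, 1.2])
A = 0.1 * rng.normal(size=(4, 4))
Sigma = A @ A.T + 0.05 * np.eye(4)           # a valid covariance matrix

n, m = 2, 3
h = np.array([1j * n, 0.0, 0.0, -m])
# GMGF of a Gaussian at complex argument h = [i*n, 0, 0, -m].
phi = np.exp(mu @ h + 0.5 * h @ Sigma @ h)

x = rng.multivariate_normal(mu, Sigma, size=1_000_000)
beta, rho = x[:, 0], x[:, 3]
r_inv_m = np.exp(-m * rho)                   # r^-m with r = exp(rho)
assert abs(np.mean(r_inv_m * np.cos(n * beta)) - phi.real) < 2e-3
assert abs(np.mean(r_inv_m * np.sin(n * beta)) - phi.imag) < 2e-3
```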

The first two moments are explicitly given by Theorems 1 and 2. In the next section, we apply these two moments in a modified UKF estimator, while using the higher-order moments to monitor its performance.

IV Applications of the Closed-Form Expressions in Tracking Simulation

IV-A UKF using Closed-Form Expressions for Mean and Covariance Prediction

The Kalman Filter provides a minimum mean-square error estimate for the mean and covariance of the target location [12]. For our non-linear dynamical system, the UKF is appropriate. It uses the UT to propagate deterministic sigma points that accurately capture the target state mean and covariance [7].

In order to test our closed-form expressions in tracking simulation, we will implement a pure UKF and a modified UKF that incorporates our closed-form expressions. We assume the ownship alternates between straight legs and instant turns. The modified UKF employs the first two moments of the post-manoeuvre distribution to calculate the target state and covariance after each ownship turn.

We define the ownship manoeuvre at time step ii as a tuple of three values:

  • Δ𝒓i\Delta\bm{r}_{i}, the accelerated displacement during the step illustrated in Fig. 3.

  • Δ𝒗i\Delta\bm{v}_{i}, the velocity change during the step; the velocity changes instantly at the point of the turn, at an unknown time within the step.

  • Δti\Delta t_{i}, the time duration of the step.

As the turn may happen at a time instance within the step, we seek to split the manoeuvre into elementary movements, or sub-manoeuvres: a straight leg, an instant turn, and another straight leg after the turn. The duration of the two straight legs adds up to the total duration of the time step Δti\Delta t_{i}. Computation of these sub-manoeuvres is described in Theorem 3.

Figure 3: A diagram of the predicted ownship position, 𝒓1\bm{r}_{1}^{\prime}, versus the true position, 𝒓3\bm{r}_{3}, where the black node indicates the point of the instant turn. Note the jump from index 11 to 33 as the second sub-manoeuvre, the instant turn, has zero duration, zero accelerated displacement (hence, zero length on the plot), but non-zero change in velocity: Δ𝒗=𝒗3𝒗1\Delta\bm{v}=\bm{v}_{3}-\bm{v}_{1}. The accelerated displacement is Δ𝒓=𝒓3𝒓1\Delta\bm{r}=\bm{r}_{3}-\bm{r}_{1}^{\prime}.
Theorem 3 (Manoeuvre splitting).

The manoeuvre containing a turn M(Δ𝒓,Δ𝒗,Δt)M(\Delta\bm{r},\Delta\bm{v},\Delta t) is equivalently described as a list of three sub-manoeuvres:

  1. M1(0,0,Δt1)M_{1}(0,0,\Delta t_{1}),

  2. M2(0,Δ𝒗,0)M_{2}(0,\Delta\bm{v},0), and

  3. M3(0,0,Δt3)M_{3}(0,0,\Delta t_{3})

where

Δt3=Δ𝒓Δ𝒗|Δ𝒗|2\Delta t_{3}=\frac{\Delta\bm{r}\cdot\Delta\bm{v}}{|\Delta\bm{v}|^{2}} (37)

and Δt1=ΔtΔt3\Delta t_{1}=\Delta t-\Delta t_{3}.

Proof.

We write the final position in terms of the initial position and two velocities.

𝒓3=𝒓1+Δt1𝒗1+Δt3𝒗3.\bm{r}_{3}=\bm{r}_{1}+\Delta t_{1}\bm{v}_{1}+\Delta t_{3}\bm{v}_{3}.

Substituting Δt3=ΔtΔt1\Delta t_{3}=\Delta t-\Delta t_{1} and regrouping, we get

𝒓3(𝒓1+Δt𝒗1)=Δt3(𝒗3𝒗1).\bm{r}_{3}-(\bm{r}_{1}+\Delta t\bm{v}_{1})=\Delta t_{3}(\bm{v}_{3}-\bm{v}_{1}).

As illustrated in Fig. 3, the left-hand side is the accelerated displacement Δ𝒓\Delta\bm{r}; the term in brackets on the right-hand side is the velocity change Δ𝒗\Delta\bm{v}. Therefore,

Δ𝒓=Δt3Δ𝒗.\Delta\bm{r}=\Delta t_{3}\Delta\bm{v}.

By assumption of a non-zero turn, |Δ𝒗|0|\Delta\bm{v}|\neq 0. Thus, taking the scalar product of both sides with Δ𝒗\Delta\bm{v} and dividing by |Δ𝒗|2|\Delta\bm{v}|^{2}, we obtain Eq. 37. ∎

In cases where the turn is at the start or at the end of the total manoeuvre, without loss of generality, we split the total manoeuvre into two: a turn and a straight leg.
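Theorem 3 translates directly into code. Below is a minimal sketch (the function name and the test scenario are ours): the ownship turns 3 s into a 10 s step, and the split recovers the two leg durations from the accelerated displacement alone.

```python
import numpy as np

def split_manoeuvre(dr, dv, dt):
    """Split a manoeuvre M(dr, dv, dt) containing one instant turn into
    (dt1, dt3): straight-leg durations before and after the turn (Eq. 37).
    Assumes a non-zero turn, |dv| != 0."""
    dt3 = float(np.dot(dr, dv) / np.dot(dv, dv))
    return dt - dt3, dt3

# Synthetic check: the turn happens 3 s into a 10 s step.
r1 = np.array([0.0, 0.0])
v1, v3 = np.array([10.0, 10.0]), np.array([-10.0, 10.0])
dt = 10.0
r3 = r1 + 3.0 * v1 + 7.0 * v3        # true final position
dr = r3 - (r1 + dt * v1)             # accelerated displacement vs straight-line prediction
dt1, dt3 = split_manoeuvre(dr, v3 - v1, dt)
assert np.isclose(dt1, 3.0) and np.isclose(dt3, 7.0)
```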

Having established the approach for isolating instant turns, we present the modified filter, CFE-UKF, as Algorithm 1.

Algorithm 1 The modified UKF algorithm, CFE-UKF, that uses the closed-form expressions 1 and 2 to update the mean and covariance over instant turns.
1:function CFE-UKF
2:     Initialise at time t0t_{0}, iteration 0
3:     NTN_{T} is the total count of time steps
4:     for i[1,NT]i\in[1,N_{T}] do
5:         Advance platforms forward by time step Δti\Delta t_{i}
6:         Compute the manoeuvre Mi(Δ𝒓i,Δ𝒗i,Δti)M_{i}(\Delta\bm{r}_{i},\Delta\bm{v}_{i},\Delta t_{i})
7:         if |Δ𝒗i|<ϵ|\Delta\bm{v}_{i}|<\epsilon then \triangleright no turn
8:              Time update over Δti\Delta t_{i} using UT
9:         else
10:              Split MiM_{i} into sub-manoeuvres [Mj][M_{j}]
11:              for all Mj[Mj]M_{j}\in[M_{j}] do
12:                  if Δ𝒗j𝟎\Delta\bm{v}_{j}\neq\bm{0} then \triangleright ownship turned
13:                       State update by Theorems 1 and 2
14:                  else\triangleright straight leg
15:                       Time update over Δtj\Delta t_{j} using UT
16:                  end if
17:              end for
18:         end if
19:         Measurement update of the UKF as usual
20:     end for
21:end function
Figure 4: Motion of the ownship and target in the simulated scenario. In Cartesian coordinates, the ownship velocity along straight legs is [±10,10][\pm 10,10] m/s and the initial target velocity is [0,12.5][0,12.5] m/s. Each straight leg is 2 hours long. The short cyan segment shows the stretch of time used for mean range error computations in Section IV-B.

To compare the CFE-UKF with a pure UKF, we use a scenario presented in Fig. 4. The ownship moves deterministically along straight lines, and makes exact turns, while the target movement is subject to the process noise provided by the continuous-time white noise model with nearly constant velocity [13]. The time increment Δt\Delta t is 10 seconds.

For the purposes of verification, we first run the CFE-UKF and UKF with artificially precise parameters. The process noise intensity is set to 10810^{-8} and the sensor standard deviation of bearing measurements to 0.001°. We set the initial track range to 150150 km (the true range from ownship to target) and initialise the range standard deviation to 1010% of the true range. In this idealised case, we find both trackers run very similarly: the averaged absolute difference between their means is only [1.3109,1.81011,3.91010,1.8105][1.3\cdot 10^{-9},1.8\cdot 10^{-11},3.9\cdot 10^{-10},1.8\cdot 10^{-5}].

The CFE-UKF is slightly less efficient than a pure UKF; for each turning point, we perform up to two UTs, depending on whether there is a preceding and succeeding straight leg, and then additionally compute the closed-form expressions to update the mean and covariance over the turn. However, due to our small state space dimension nx=4n_{x}=4, and the small proportion of ownship turns relative to the total scenario, this increase in complexity has minimal effect on total run-time.

IV-B Monitoring Gaussianity Using the Third and Fourth Central Moments of the Post-Manoeuvre Distribution

In Section III-A we showed that the post-manoeuvre distribution is not Gaussian. Now we can quantify this statement. When the ownship turns, we know the higher-order moments of the post-manoeuvre state distribution, and can use them to monitor how close this distribution is to Gaussian. The turning points are critical for passive BO tracking because only there does the range become observable; it is therefore vital for the estimator assumptions, e.g. Gaussianity, to be satisfied.

We consider two metrics for use as a monitoring tool. The first is the Frobenius norm of the third central moment tensor

M3=𝔼[(𝒙+𝝁+)3].M_{3}=\big{\lVert}\,\mathbb{E}\left[(\bm{x}^{+}-\bm{\mu}^{+})^{3}\right]\!\big{\rVert}. (38)

For a Gaussian distribution M30M_{3}\equiv 0, hence a non-zero value is a measure of non-Gaussianity.

The second metric is the norm of the deviation of the post-manoeuvre fourth central moment, a 4th4^{\text{th}}-order tensor, from the expected fourth central moment of a Gaussian distribution:

M4=𝔼[(𝒙+𝝁+)4]𝔼[(𝒚𝝁+)4],M_{4}=\big{\lVert}\,\mathbb{E}\!\left[(\bm{x}^{+}-\bm{\mu}^{+})^{4}\right]-\mathbb{E}\!\left[(\bm{y}-\bm{\mu}^{+})^{4}\right]\!\big{\rVert}, (39)

where y𝒩(𝝁+,Σ+)y\sim\mathcal{N}(\bm{\mu}^{+},\Sigma^{+}). Again, M40M_{4}\equiv 0 for a Gaussian distribution; a non-zero value quantifies non-Gaussianity.
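Both metrics can be computed directly from central-moment tensors, with the Gaussian reference tensor in Eq. 39 built from the covariance via Isserlis' theorem. The sketch below (our own helper names) checks the two metrics on a scalar unit-rate exponential distribution, whose exact central moments (variance 1, third moment 2, fourth moment 9) give M3 = 2 and M4 = |9 − 3| = 6.

```python
import numpy as np
from itertools import product

def gaussian_fourth_central(S):
    """4th central moment tensor of N(mu, S) via Isserlis' theorem."""
    n = S.shape[0]
    T = np.empty((n, n, n, n))
    for i, j, k, l in product(range(n), repeat=4):
        T[i, j, k, l] = S[i, j] * S[k, l] + S[i, k] * S[j, l] + S[i, l] * S[j, k]
    return T

def m3_metric(C3):
    """Frobenius norm of the 3rd central moment tensor (Eq. 38)."""
    return float(np.sqrt(np.sum(C3 ** 2)))

def m4_metric(C4, S):
    """Norm of the deviation from the Gaussian 4th central moment (Eq. 39)."""
    return float(np.sqrt(np.sum((C4 - gaussian_fourth_central(S)) ** 2)))

# Scalar sanity check with a unit-rate exponential distribution:
# central moments 2 and 9, variance 1, so M3 = 2 and M4 = |9 - 3*1| = 6.
S = np.array([[1.0]])
assert np.isclose(m3_metric(np.array([[[2.0]]])), 2.0)
assert np.isclose(m4_metric(np.array([[[[9.0]]]]), S), 6.0)
```

By construction, feeding the Gaussian tensor back into m4_metric returns zero, so both metrics vanish exactly in the Gaussian case.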

Figure 5 shows the time series of the proposed metrics for the scenario of Fig. 4. To make the simulations somewhat less idealised, from now on the process noise intensity is increased to 10610^{-6} and the bearing measurement error to 0.1°. Clearly, the distribution is not Gaussian when the ownship turns. On the positive side, we observe that with every subsequent turn the metrics decrease, implying that the distribution becomes more Gaussian with time as the estimator converges. This suggests that the observed non-Gaussianity is related to the estimator initialisation, which we examine next.

Figure 5: Time series of the two non-Gaussianity metrics. The three peaks correspond to the three ownship turns (bars have been offset for visibility).

IV-C Effect of Range Error Initialisation on the Post-Manoeuvre Distribution

To start the estimator, we need to provide the initial state 𝒙0\bm{x}_{0} and covariance Σ0\Sigma_{0}. In the examples presented in this paper, we set 𝒙0\bm{x}_{0} using the first measured bearing for β0\beta_{0}, zeros for the derivatives β˙0\dot{\beta}_{0} and ρ˙0\dot{\rho}_{0}, and the true value for the initial range r0r_{0}. The covariance Σ0\Sigma_{0} is set to a diagonal matrix of

  • the sensor measurement variance for bearing,

  • (vmax/r0)2{(v_{\text{max}}/r_{0})}^{2} for the bearing and scaled range rates, where vmaxv_{\text{max}} is some “large” speed value, and

  • σrr=log(1+(δr)2)\sigma_{rr}=\log{(1+{(\delta r)}^{2})} for log-range, where we introduce δr\delta r as the initial relative range error.

We express the log-range error in terms of δr\delta r, because in the limit of small errors, the range variance of (r0δr)2{(r_{0}\delta r)}^{2} indeed corresponds to the log-range variance of σrr\sigma_{rr}. For larger errors the relationship breaks down, but δr\delta r still provides a convenient description of the error in human terms, as opposed to the less intuitive variance of the range logarithm.
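This correspondence can be made concrete: for a log-normal range, Var(r)/E[r]² = e^{σrr} − 1, so choosing σrr = log(1 + δr²) makes the relative standard deviation of the range, taken about its own mean rather than the nominal r0, exactly δr. A quick check with illustrative numbers:

```python
import numpy as np

dr = 0.10                              # initial relative range error
sigma_rr = np.log(1.0 + dr ** 2)       # log-range variance
# For log-normal r, Var(r)/E[r]^2 = exp(sigma_rr) - 1, which equals dr^2.
assert np.isclose(np.sqrt(np.exp(sigma_rr) - 1.0), dr)

# Monte-Carlo illustration of the same fact, with a 150 km nominal range.
rng = np.random.default_rng(1)
r = np.exp(rng.normal(np.log(150.0), np.sqrt(sigma_rr), size=1_000_000))
assert abs(r.std() / r.mean() - dr) < 1e-3
```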

To examine the impact of initialisation on the tracking accuracy and on the properties of the post-manoeuvre distribution, we consider the estimator output after the first manoeuvre of our scenario, where the non-Gaussianity is most pronounced. As a measure of accuracy, we use the mean estimated range error (MRE) calculated over ten minutes of the scenario at the end of the second leg (highlighted in Fig. 4 with cyan).

Figure 6: Effect of the initial range variance on MRE (a, top), third central moment (b, middle), and fourth central moment difference (c, bottom). The dashed and dotted lines respectively show the thresholds to achieve the MRE of 5%5\% and 10%10\% of the initial range.

The dependence of the MRE on tracker initialisation is presented in Fig. 6(a). The corresponding two metrics of target state Gaussianity after the first ownship turn are shown in Figs. 6(b) and 6(c). As expected, estimation error grows with initialisation ambiguity. Correspondingly, the post-manoeuvre state distribution becomes less Gaussian, thus violating the UKF assumptions and reducing estimator accuracy.

A corollary of the presented result is that monitoring non-Gaussianity enables a measure of control over the estimator performance. This is the motivation for the CFE-UKF. While it may not perform markedly differently from the pure UKF, the new estimator provides the user with the capability to monitor whether the underlying assumptions of the Gaussian estimator are being met.

Whenever the non-Gaussianity metrics exceed certain thresholds222 The specific thresholds for M3M_{3} and M4M_{4} metrics can be established by Monte-Carlo modelling, but are beyond the scope of this paper., the user would know that the target range estimate after the manoeuvre cannot be trusted, even if the estimated range variance is low. In such a case, the ownship may, for example, attempt the next manoeuvre to improve the solution. Figure 5 shows how Gaussianity of the post-manoeuvre distribution improves on each turn, thus validating the estimator convergence. Alternatively, the same Gaussianity metrics can guide the choice of the initial relative range error δr\delta r.

The focus of this section has been the use of our results in a Kalman Filter framework. One may consider more powerful filtering algorithms that do not assume Gaussianity of the target state; these are discussed in the next section.

V Conclusions and Future Work

The paper extends the theoretical understanding of object tracking in LPC. In order to make the theory tractable, we assume an instantaneous manoeuvre and a target moving in a straight line. Under these simplifications, we obtain some strong results. The main contribution of the paper is Proposition 4: we derive all moments of the post-manoeuvre distribution in closed form. Additionally, Proposition 2 derives, for both LPC and MPC, special cases in which the post-manoeuvre distribution is Gaussian, with known parameters. Our results, while formally obtained for instantaneous manoeuvres, may be applied in the general, non-instantaneous case through discretisation.

As the contributions of the paper are primarily theoretical, let us now discuss possible applications of the theory. The most common class of tracking algorithms, consisting of extensions of the Kalman Filter for non-linear system dynamics, work best when the state distribution of the target is approximately unimodal and without large outliers. Our contribution allows a user to track the third and fourth moments of the LPC distribution, and thereby assess when these conditions are met.

We demonstrate the utility of the obtained results in filtering algorithms. For state and covariance prediction, we define a new algorithm, the CFE-UKF, that uses the closed-form expressions for mean and covariance in place of the UT-based time update over an instant ownship turn. By comparing with the well-understood UKF in a simple tracking scenario, we confirm our results for the first two moments of the post-manoeuvre distribution are correct. However, in addition to the mean and covariance, our estimator also computes the third and fourth post-manoeuvre moments to allow monitoring of the quality of the Gaussian approximation, an ability that the pure UKF lacks. This insight gives the CFE-UKF an edge over the UT-based estimator.

With this understanding, we conclude that there is a need to evaluate algorithms that do not blindly assume Gaussianity of the underlying target state distribution. Similar to the approaches presented in [14], we could use our higher-order moment metrics as splitting criteria for a Gaussian sum filter; a threshold established by Monte-Carlo modelling could determine when, at an ownship turn, the estimator would benefit from splitting the prior.

Alternatively, the conditional Gaussianity properties proved in Proposition 2 empower the use of a Rao-Blackwell-type particle filter; a particle filter would track the bearing rate and scaled range rate, while bearing and log-range are conditionally Gaussian and can be tracked with a Kalman Filter.

Acknowledgements

We thank Acacia Systems for their support and funding.

Appendix A Supplemental details for Section III-A

1. Proof of Proposition 2

Proof.

Claims (1)(i) and (2)(i), that the post-manoeuvre distributions of MPC and LPC are non-Gaussian, follow directly from the terms in cosβ\cos\beta and sinβ\sin\beta, which are entangled with β˙\dot{\beta} and ρ˙\dot{\rho} in the exponent of the post-manoeuvre distribution. Let Pi,jP_{i,j} refer to the i,jthi,j^{\text{th}} entry of the precision matrix Σ1\Sigma^{-1}; then the exponent of Eq. 14 contains a term

(ρ˙μρ+sΔvxsinβ+sΔvycosβ)2P2,2.(\dot{\rho}-\mu_{\rho}+s\Delta v_{x}\sin\beta+s\Delta v_{y}\cos\beta)^{2}P_{2,2}\;.

Expanding this quadratic reveals terms such as ρ˙sP2,2sinβ\dot{\rho}sP_{2,2}\sin\beta that render the distribution non-Gaussian. One might wonder whether these terms can cancel with other terms in the distribution; however, such cancellation is impossible without imposing special conditions on the entries of the precision matrix.

Claim (1)(ii), that the MPC distribution is known exactly if we condition on the bearing β\beta, is proven as follows. As in the Proposition, let c1c_{1} and c2c_{2} refer to the constant factors in f1f_{1} and f2f_{2}. Accordingly, Eq. 10 in MPC becomes

𝒉1(𝒙+)=[β+β˙++c1s+ρ˙++c2s+s+],\bm{h}^{-1}(\bm{x}^{+})=\begin{bmatrix}\beta^{+}\\ \dot{\beta}^{+}+c_{1}s^{+}\\ \dot{\rho}^{+}+c_{2}s^{+}\\ s^{+}\end{bmatrix}\;,

and on substitution into Eq. 14 and conditioning on β\beta, the exponent of the post-manoeuvre distribution is

12[([β˙++c1s+ρ˙++c2s+s+]𝝁)TΣβ1([β˙++c1s+ρ˙++c2s+s+]𝝁)].-\frac{1}{2}\left[\left(\begin{bmatrix}\dot{\beta}^{+}+c_{1}s^{+}\\ \dot{\rho}^{+}+c_{2}s^{+}\\ s^{+}\end{bmatrix}-\bm{\mu}\right)^{{}^{\text{T}}}\Sigma^{-1}_{-\beta}\left(\begin{bmatrix}\dot{\beta}^{+}+c_{1}s^{+}\\ \dot{\rho}^{+}+c_{2}s^{+}\\ s^{+}\end{bmatrix}-\bm{\mu}\right)\right].

We now write the vectors that pre- and post-multiply Σ1\Sigma^{-1} as linear transformations of Gaussian innovations. Let d3=s+μsd_{3}=s^{+}-\mu_{s}, then

β˙++c1s+μβ˙=β˙(μβ˙c1μs)+c1d3,\dot{\beta}^{+}+c_{1}s^{+}-\mu_{\dot{\beta}}=\dot{\beta}-(\mu_{\dot{\beta}}-c_{1}\mu_{s})+c_{1}d_{3},
ρ˙++c2s+μρ˙=ρ˙(μρ˙c2μs)+c2d3.\dot{\rho}^{+}+c_{2}s^{+}-\mu_{\dot{\rho}}=\dot{\rho}-(\mu_{\dot{\rho}}-c_{2}\mu_{s})+c_{2}d_{3}.

Accordingly, let d1=β˙(μβ˙c1μs)d_{1}=\dot{\beta}-(\mu_{\dot{\beta}}-c_{1}\mu_{s}) and d2=ρ˙(μρ˙c2μs)d_{2}=\dot{\rho}-(\mu_{\dot{\rho}}-c_{2}\mu_{s}), then the exponent simplifies to

12[(d1+c1d3d2+c2d3d3)TΣ1(d1+c1d3d2+c2d3d3)].-\frac{1}{2}\left[\begin{pmatrix}d_{1}+c_{1}d_{3}\\ d_{2}+c_{2}d_{3}\\ d_{3}\end{pmatrix}^{\text{T}}\Sigma^{-1}\begin{pmatrix}d_{1}+c_{1}d_{3}\\ d_{2}+c_{2}d_{3}\\ d_{3}\end{pmatrix}\right].

Clearly, the vectors are a linear combination of d1,d2,d3d_{1},\,d_{2},\,d_{3}. Recalling the definition of 𝐏\mathbf{P} from the Proposition, we get

(d1+c1d3d2+c2d3d3)=(10c101c2001)(d1d2d3)=𝐏1(d1d2d3)\begin{pmatrix}d_{1}+c_{1}d_{3}\\ d_{2}+c_{2}d_{3}\\ d_{3}\end{pmatrix}=\begin{pmatrix}1&0&c_{1}\\ 0&1&c_{2}\\ 0&0&1\end{pmatrix}\begin{pmatrix}d_{1}\\ d_{2}\\ d_{3}\end{pmatrix}=\mathbf{P}^{-1}\begin{pmatrix}d_{1}\\ d_{2}\\ d_{3}\end{pmatrix}

and simplify the exponent to

12[(d1d2d3)T(𝐏1)TΣ1𝐏1(d1d2d3)].-\frac{1}{2}\left[\begin{pmatrix}d_{1}\\ d_{2}\\ d_{3}\end{pmatrix}^{\text{T}}\left(\mathbf{P}^{-1}\right)^{\text{T}}\Sigma^{-1}\mathbf{P}^{-1}\begin{pmatrix}d_{1}\\ d_{2}\\ d_{3}\end{pmatrix}\right]\;.

Recognising that (𝐏1)TΣ1𝐏1=(𝐏Σ𝐏T)1\left(\mathbf{P}^{-1}\right)^{\text{T}}\Sigma^{-1}\mathbf{P}^{-1}=\left(\mathbf{P}\Sigma\mathbf{P}^{\text{T}}\right)^{-1}, we conclude the stated result.

Claim (2)(ii), on the Gaussian distribution for LPC if conditioned on β\beta and ρ\rho, is trivial: simply recognise that the terms f1(β,ρ)f_{1}(\beta,\rho) and f2(β,ρ)f_{2}(\beta,\rho) are fixed and can be thought of as modifications to the means of β˙\dot{\beta} and ρ˙\dot{\rho}. ∎

2. Details of Figs. 1 and 2

Each Figure considers a Gaussian prior in MPC / LPC coordinates using a diagonal Σ\Sigma. The main challenge is to select the mean and variance for ss and ρ\rho so that the priors are roughly comparable between the two coordinate systems. In order to do so, we select an initial location for the target in Cartesian coordinates — (0.1,0.15)(0.1,0.15) km in Fig. 1 and (1,1.5)(1,1.5) km in Fig. 2 — and an initial standard deviation for target location uncertainty of 0.40.4 km in Cartesian space. We sample 10,000 times from this Gaussian prior in Cartesian coordinates, transform each sample into ss and ρ\rho, and use the variance of the transformed samples as the variance of the prior in LPC / MPC. The resulting standard deviations, and all other parameters for the figures, are shown in Table I. The figures are then obtained by sampling 10,000 times from each prior and transforming those samples: label the ithi^{\text{th}} sample 𝒙i\bm{x}_{i}, then the figures plot 𝒙i+=𝒉(𝒙i)\bm{x}_{i}^{+}=\bm{h}(\bm{x}_{i}) for all ii.

               μβ     μβ˙    μρ˙    μs/ρ    σβ²    σβ˙²   σρ˙²   σs/ρ²
Fig. 1  MPC    0.58   1.1    0.1    5.5     0.05   0.2    0.2    4.0
        LPC    0.58   1.1    0.1    -1.7    0.05   0.2    0.2    0.63
Fig. 2  MPC    0.58   1.1    0.1    0.55    0.05   0.2    0.2    0.14
        LPC    0.58   1.1    0.1    0.59    0.05   0.2    0.2    0.23
TABLE I: Means and variances for Gaussian priors, obtained as described in Appendix A.
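The variance-matching procedure of Appendix A can be sketched as follows, assuming the range-derived coordinates are s = 1/r (MPC) and ρ = log r (LPC); the target position and spread are those of Fig. 2. The sample means land near the μs/ρ entries of Table I (0.55 and 0.59), while the variance of 1/r is heavy-tailed and seed-sensitive, which is why only the means are checked here.

```python
import numpy as np

rng = np.random.default_rng(3)
# Cartesian prior of Fig. 2: target at (1, 1.5) km, 0.4 km std per axis.
xy = rng.normal(loc=(1.0, 1.5), scale=0.4, size=(10_000, 2))
r = np.hypot(xy[:, 0], xy[:, 1])          # range samples, km
s, rho = 1.0 / r, np.log(r)               # MPC and LPC range coordinates
# The transformed-sample statistics become the prior parameters in MPC / LPC.
mu_s, var_s = s.mean(), s.var()
mu_rho, var_rho = rho.mean(), rho.var()
assert abs(mu_s - 0.55) < 0.1 and abs(mu_rho - 0.59) < 0.1
```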

Appendix B Demonstration of Higher-Order Moment Generation via Computer Algebra

Our SageMath package, seaweed, exports any moment of the post-manoeuvre target state distribution as Python or LaTeX code. For example, we present the third raw moment of the bearing rate in LaTeX in Eq. 40. We have manually written the expected value on the left-hand side, but we directly obtain the right-hand side by calling the function moment_0300 (or latex_0300) in the pre-generated post_manoeuvre module.

There is a minor stylistic difference between the generated moments and the main body of this paper; the two components of ownship speed change, Δvx\Delta v_{x} and Δvy\Delta v_{y}, are respectively written as Δv0\Delta v_{0} and Δv1\Delta v_{1}. We have also manually added line breaks and alignment markers to fit the equation to the page.

𝔼[x13]=\displaystyle\mathbb{E}\!\left[x^{3}_{1}\right]= 34\displaystyle-\frac{3}{4}\, Δv03cos(μ03σ30)e(3μ312σ00+92σ33)34Δv0Δv12cos(μ03σ30)e(3μ312σ00+92σ33)\displaystyle{\Delta v_{0}}^{3}\cos\left(\mu_{0}-3\,\sigma_{30}\right)e^{\left(-3\,\mu_{3}-\frac{1}{2}\,\sigma_{00}+\frac{9}{2}\,\sigma_{33}\right)}-\frac{3}{4}\,{\Delta v_{0}}{\Delta v_{1}}^{2}\cos\left(\mu_{0}-3\,\sigma_{30}\right)e^{\left(-3\,\mu_{3}-\frac{1}{2}\,\sigma_{00}+\frac{9}{2}\,\sigma_{33}\right)}
14\displaystyle-\frac{1}{4}\, Δv03cos(3μ09σ30)e(3μ392σ00+92σ33)+34Δv0Δv12cos(3μ09σ30)e(3μ392σ00+92σ33)\displaystyle{\Delta v_{0}}^{3}\cos\left(3\,\mu_{0}-9\,\sigma_{30}\right)e^{\left(-3\,\mu_{3}-\frac{9}{2}\,\sigma_{00}+\frac{9}{2}\,\sigma_{33}\right)}+\frac{3}{4}\,{\Delta v_{0}}{\Delta v_{1}}^{2}\cos\left(3\,\mu_{0}-9\,\sigma_{30}\right)e^{\left(-3\,\mu_{3}-\frac{9}{2}\,\sigma_{00}+\frac{9}{2}\,\sigma_{33}\right)}
+34\displaystyle+\frac{3}{4}\, Δv02Δv1e(3μ392σ00+92σ33)sin(3μ09σ30)14Δv13e(3μ392σ00+92σ33)sin(3μ09σ30)\displaystyle{\Delta v_{0}}^{2}{\Delta v_{1}}e^{\left(-3\,\mu_{3}-\frac{9}{2}\,\sigma_{00}+\frac{9}{2}\,\sigma_{33}\right)}\sin\left(3\,\mu_{0}-9\,\sigma_{30}\right)-\frac{1}{4}\,{\Delta v_{1}}^{3}e^{\left(-3\,\mu_{3}-\frac{9}{2}\,\sigma_{00}+\frac{9}{2}\,\sigma_{33}\right)}\sin\left(3\,\mu_{0}-9\,\sigma_{30}\right)
+\frac{3}{4}\,{\Delta v_{0}}^{2}{\Delta v_{1}}\,e^{-3\,\mu_{3}-\frac{1}{2}\,\sigma_{00}+\frac{9}{2}\,\sigma_{33}}\sin\left(\mu_{0}-3\,\sigma_{30}\right)+\frac{3}{4}\,{\Delta v_{1}}^{3}e^{-3\,\mu_{3}-\frac{1}{2}\,\sigma_{00}+\frac{9}{2}\,\sigma_{33}}\sin\left(\mu_{0}-3\,\sigma_{30}\right)
+\frac{3}{2}\,{\Delta v_{0}}^{2}\left(\mu_{1}-2\,\sigma_{31}\right)e^{-2\,\mu_{3}+2\,\sigma_{33}}+\frac{3}{2}\,{\Delta v_{1}}^{2}\left(\mu_{1}-2\,\sigma_{31}\right)e^{-2\,\mu_{3}+2\,\sigma_{33}}
+\frac{3}{2}\,\left(\left(\mu_{1}-2\,\sigma_{31}\right)\cos\left(2\,\mu_{0}-4\,\sigma_{30}\right)e^{-2\,\mu_{3}-2\,\sigma_{00}+2\,\sigma_{33}}-2\,\sigma_{10}\,e^{-2\,\mu_{3}-2\,\sigma_{00}+2\,\sigma_{33}}\sin\left(2\,\mu_{0}-4\,\sigma_{30}\right)\right){\Delta v_{0}}^{2}
-3\,\left(2\,\sigma_{10}\cos\left(2\,\mu_{0}-4\,\sigma_{30}\right)e^{-2\,\mu_{3}-2\,\sigma_{00}+2\,\sigma_{33}}+\left(\mu_{1}-2\,\sigma_{31}\right)e^{-2\,\mu_{3}-2\,\sigma_{00}+2\,\sigma_{33}}\sin\left(2\,\mu_{0}-4\,\sigma_{30}\right)\right){\Delta v_{0}}{\Delta v_{1}}
-\frac{3}{2}\,\left(\left(\mu_{1}-2\,\sigma_{31}\right)\cos\left(2\,\mu_{0}-4\,\sigma_{30}\right)e^{-2\,\mu_{3}-2\,\sigma_{00}+2\,\sigma_{33}}-2\,\sigma_{10}\,e^{-2\,\mu_{3}-2\,\sigma_{00}+2\,\sigma_{33}}\sin\left(2\,\mu_{0}-4\,\sigma_{30}\right)\right){\Delta v_{1}}^{2}
-3\,\left(\sigma_{11}\cos\left(\mu_{0}-\sigma_{30}\right)e^{-\mu_{3}-\frac{1}{2}\,\sigma_{00}+\frac{1}{2}\,\sigma_{33}}+\left(\left(\mu_{1}-\sigma_{31}\right)\cos\left(\mu_{0}-\sigma_{30}\right)e^{-\mu_{3}-\frac{1}{2}\,\sigma_{00}+\frac{1}{2}\,\sigma_{33}}-\sigma_{10}\,e^{-\mu_{3}-\frac{1}{2}\,\sigma_{00}+\frac{1}{2}\,\sigma_{33}}\sin\left(\mu_{0}-\sigma_{30}\right)\right)\left(\mu_{1}+\sigma_{10}+\sigma_{31}\right)\right){\Delta v_{0}}
+3\,\left(\sigma_{11}\,e^{-\mu_{3}-\frac{1}{2}\,\sigma_{00}+\frac{1}{2}\,\sigma_{33}}\sin\left(\mu_{0}-\sigma_{30}\right)+\left(\sigma_{10}\cos\left(\mu_{0}-\sigma_{30}\right)e^{-\mu_{3}-\frac{1}{2}\,\sigma_{00}+\frac{1}{2}\,\sigma_{33}}+\left(\mu_{1}-\sigma_{31}\right)e^{-\mu_{3}-\frac{1}{2}\,\sigma_{00}+\frac{1}{2}\,\sigma_{33}}\sin\left(\mu_{0}-\sigma_{30}\right)\right)\left(\mu_{1}-\sigma_{10}+\sigma_{31}\right)\right){\Delta v_{1}}+\left(\mu_{1}^{2}+\sigma_{11}\right)\mu_{1}+2\,\mu_{1}\sigma_{11}\text{.} (40)

References

  • [1] V. Aidala and S. Hammel. Utilization of modified polar coordinates for bearings-only tracking. IEEE Transactions on Automatic Control, 28(3):283–294, March 1983.
  • [2] Hans D. Hoelzer, G. W. Johnson, and A. O. Cohen. Modified Polar Coordinates - The Key to Well Behaved Bearings Only Ranging. Technical Report IBM IR&D Report 78-M19-0001A, IBM Federal Systems Division, 1978.
  • [3] Dan Wang, Hongyan Hua, and Haiwang Cao. Algorithm of modified polar coordinates UKF for bearings-only target tracking. In 2010 2nd International Conference on Future Computer and Communication, volume 3, pages V3–557–V3–560, 2010.
  • [4] T. Brehard and J. R. Le Cadre. Closed-form posterior Cramér-Rao bounds for bearings-only tracking. IEEE Transactions on Aerospace and Electronic Systems, 42(4):1198–1223, October 2006.
  • [5] Mahendra Mallick, Sanjeev Arulampalam, Lyudmila Mihaylova, and Yanjun Yan. Angle-only filtering in 3D using Modified Spherical and Log Spherical Coordinates. In Proceedings of the 14th International Conference on Information Fusion (FUSION), pages 1–8, Chicago, IL, July 2011. IEEE.
  • [6] Simon Haykin, editor. Kalman Filtering and Neural Networks. John Wiley & Sons, Inc., October 2001.
  • [7] E. A. Wan and R. van der Merwe. The unscented Kalman filter for nonlinear estimation. In Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373). IEEE, 2000.
  • [8] P. L. Houtekamer and Herschel L. Mitchell. Data assimilation using an ensemble Kalman filter technique. Monthly Weather Review, 126(3):796–811, March 1998.
  • [9] Volker Schmidt. Stochastics III lecture notes. Ulm University, 2012.
  • [10] Wlodzimierz Bryc. The Normal Distribution: Characterizations with Applications. Springer-Verlag, 1995.
  • [11] The Sage Developers, William Stein, David Joyner, David Kohel, John Cremona, and Burçin Eröcal. SageMath, the Sage Mathematics Software System (Version 9.8), 2023. https://www.sagemath.org.
  • [12] Roger Labbe. Kalman and Bayesian Filters in Python. https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python, 2014.
  • [13] Yaakov Bar-Shalom, X. Rong Li, and Thiagalingam Kirubarajan. Estimation with Applications to Tracking and Navigation. John Wiley & Sons, 2001.
  • [14] Renato Zanetti and Kirsten Tuggle. A novel Gaussian Mixture approximation for nonlinear estimation. In 2018 21st International Conference on Information Fusion (FUSION). IEEE, July 2018.

Biography Section

Athena Helena Xiourouppa After completing a Bachelor of Mathematical Sciences (Advanced) at the University of Adelaide, Athena is pursuing postgraduate research in Statistics and Applied Mathematics via a Master of Philosophy. Her studies are supported by Acacia Systems. She is proficient in statistical modelling, simulation, and dynamical systems. Outside of her work and study, Athena holds a position on the Women in Science, Technology, Engineering, and Mathematics Society (WISTEMS) committee at the University of Adelaide, which allows her to meaningfully apply her experience in academia and industry to inspire women and other minorities in STEM.
Dmitry Mikhin Dmitry started his research career in the early 1990s working on sound propagation modelling in the ocean using ray acoustics and, later, parabolic equation methods. Since then, he has worked in areas as varied as acoustical modelling, infrared propagation in the atmosphere, flight planning and route search, asset optimisation and planning in mining, agricultural robotics, and, lately, estimation, tracking, and data fusion. Wearing his second hat, Dmitry is a professional software developer who combines his research and coding skills to bridge the vast badlands separating academic science from industry applications.
Melissa Humphries Dr. Melissa Humphries is a statistician: she creates and applies analytical tools that make sense of our world. In areas such as forensic science, health, and psychology, her goal is always to increase efficiency, accuracy, and transparency. By building bridges between machines and experts, Melissa's work aims to support experts in making decisions in an explainable way. Working at the interface between statistics and AI, her research draws meaning out of complex systems by leveraging the explainable components of statistical and physical models and elevating them with novel techniques. She strongly advocates the integration of the end-user from the outset of all research and is committed to finding impactful ways to implement science. Melissa is also a passionate advocate of equity across all areas of her work and life.
John Maclean John uses Bayesian methods to attack inference problems in which one has access to forecasts from a noisy, incorrect, chaotic model alongside noisy, incomplete data. The key question is how to use forecasts and data together to infer model states or parameters. The methods he uses work very well in low-dimensional problems, and his research attacks the curse of dimensionality that arises when employing high-dimensional data.